CN113220461A - Operation method and device of a distributed runtime medium - Google Patents

Operation method and device of a distributed runtime medium

Info

Publication number
CN113220461A
CN113220461A
Authority
CN
China
Prior art keywords
application
processing result
calling
distributed
sidecar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110588125.0A
Other languages
Chinese (zh)
Inventor
周玄
顾欣
夏龙飞
张远征
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110588125.0A priority Critical patent/CN113220461A/en
Publication of CN113220461A publication Critical patent/CN113220461A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 - Techniques for rebalancing the load in a distributed system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention belongs to the technical field of big data and provides a method and a device for operating a distributed runtime medium. The method comprises: generating an access link according to a call request; and calling a first application sidecar according to the access link and forwarding a processing result of the first application to the first application, so that the first application forwards the processing result to a second application sidecar and the second application. The invention exposes and encapsulates distributed capabilities through the HTTP/gRPC protocols and integrates a lightweight SDK for each language in the sidecar, giving the sidecar cross-language capability; through modular capability extraction plus friendly dual-protocol communication, developers can easily build resilient, multi-language distributed applications that run in the cloud and at the edge.

Description

Operation method and device of a distributed runtime medium
Technical Field
The invention belongs to the technical field of big data, and particularly relates to an operating method and device of a distributed runtime medium.
Background
The service mesh provides technical services as an infrastructure layer for inter-service communication; it is responsible for handling the complex service topology of modern cloud-native applications and for delivering requests reliably. At its heart is a unified, global way to control and measure all request traffic between applications or services. In practice, the service mesh is usually implemented as an array of lightweight network proxies deployed alongside the application code, without the application needing to be aware of the proxies' presence. The existing service mesh does not bring many new functions; it re-implements, in the Kubernetes-based cloud-native ecosystem, capabilities that other tools already solve, such as load balancing, circuit breaking, retries, and detection. The sidecar proxy (Sidecar) pattern proposed by the service mesh solves the problem of network communication in the microservice architecture well.
However, in addition to networking, lifecycle, state, and binding are also among the problems that a distributed application must solve. The networking problem can be addressed by a service mesh such as Istio. The other three currently lack good solutions, so developers still have to attend to those three aspects themselves, which increases their burden and the cost of development.
Disclosure of Invention
The invention belongs to the technical field of big data. Aiming at the problems in the prior art, it exposes and encapsulates distributed capabilities through the HTTP/gRPC protocols and integrates a lightweight SDK for each language in the sidecar, giving the sidecar cross-language capability; through modular capability extraction plus friendly dual-protocol communication, developers can easily build resilient, multi-language distributed applications that run in the cloud and at the edge. Meanwhile, the sidecar runs as an independent process decoupled from the service; as this pattern develops, its microservice governance functions become increasingly rich, and the microservice problems under a middle-platform architecture can gradually be solved in the near future.
In order to solve the technical problems, the invention provides the following technical scheme:
in a first aspect, the present invention provides a method for operating a distributed runtime medium, including:
generating an access link according to the calling request;
and calling a first application sidecar according to the access link, and forwarding a processing result of the first application to the first application, so that the first application forwards the processing result to a second application sidecar and the second application.
In an embodiment, the generating an access link according to a call request includes:
analyzing the call request to determine a sidecar port, a version number, inter-service call information, a unique identification number of an application, and a calling method in the distributed system;
and generating an access link according to the sidecar port, the version number, the inter-service call information, the unique identification number of the application, and the calling method in the distributed system.
In an embodiment, the calling a first application sidecar according to the access link and forwarding a processing result of the first application to the first application includes:
resolving and locating the first application sidecar through a name resolution component and the first application ID;
and receiving the processing result and forwarding the processing result to the first application.
In an embodiment, the causing the first application to forward the processing result to the second application sidecar and the second application includes:
receiving the processing result sent by the first application and forwarding the processing result to the second application sidecar, so that the second application sidecar forwards the processing result to the second application.
In one embodiment, the first application and the second application are written in different programming languages.
In a second aspect, the present invention provides a running apparatus of a distributed runtime medium, including:
the access link generation module is used for generating an access link according to the calling request;
and the first sidecar calling module is used for calling the sidecar of the first application according to the access link and forwarding the processing result of the first application to the first application, so that the first application forwards the processing result to the sidecar of the second application and to the second application.
In one embodiment, the access link generation module includes:
the call request parsing unit is used for analyzing the call request to determine a sidecar port, a version number, inter-service call information, a unique identification number of an application, and a calling method in the distributed system;
and the access link generating unit is used for generating an access link according to the sidecar port, the version number, the inter-service call information, the unique identification number of the application, and the calling method in the distributed system.
In one embodiment, the first sidecar calling module includes:
the first sidecar searching unit is used for resolving and locating the first application sidecar through the name resolution component and the first application ID;
a first processing result forwarding unit, configured to receive the processing result and forward the processing result to the first application;
a processing result receiving unit, configured to receive the processing result sent by the first application and forward the processing result to the second application sidecar, so that the second application sidecar forwards the processing result to the second application;
the first application and the second application are written in different programming languages.
In a third aspect, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the operating method of the distributed runtime medium.
In a fourth aspect, the invention provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the operating method of the distributed runtime medium.
As can be seen from the above description, in the operating method and apparatus for a distributed runtime medium provided in the embodiments of the present invention, an access link is first generated according to a call request; a first application sidecar is then called according to the access link, and the processing result of the first application is forwarded to the first application, so that the first application forwards the processing result to a second application sidecar and the second application. The invention exposes and encapsulates distributed capabilities through the HTTP/gRPC protocols and integrates a lightweight SDK for each language in the sidecar, giving the sidecar cross-language capability; through modular capability extraction plus friendly dual-protocol communication, developers can easily build resilient, multi-language distributed applications that run in the cloud and at the edge. Meanwhile, the sidecar runs as an independent process decoupled from the service; as this pattern develops, its microservice governance functions become increasingly rich, and the microservice problems under a middle-platform architecture can gradually be solved in the near future.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart illustrating a method for operating a distributed runtime medium according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating step 100 according to an embodiment of the present invention;
FIG. 3 is a first flowchart illustrating a step 200 according to an embodiment of the present invention;
FIG. 4 is a second flowchart illustrating a step 200 according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating a method for operating a distributed runtime medium in an exemplary embodiment of the present invention;
FIG. 6 is a block diagram of an apparatus for operating a distributed runtime medium according to an embodiment of the present invention;
fig. 7 is a block diagram showing the structure of the access link generation module 10 in the embodiment of the present invention;
fig. 8 is a first block diagram illustrating the structure of the first sidecar call module 20 according to an embodiment of the present invention;
fig. 9 is a block diagram of the first sidecar call module 20 according to the embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of this application and the above-described drawings, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
An embodiment of the present invention provides a specific implementation of a running method of a distributed runtime medium, and referring to fig. 1, the method specifically includes the following steps:
step 100: and generating an access link according to the call request.
It can be understood that the sidecar mode (Sidecar pattern) follows the same principle as the sidecar of a motorcycle: different components of an application are deployed in separate processes or containers to provide isolation and encapsulation, and each component of the application is maintained and updated individually. This pattern also allows applications to be composed of heterogeneous components and technologies, for example a Java service together with a Consul registry, and allows other functions to be attached to the application, such as monitoring, logging, configuration center, routing, and circuit breaking, without bloating the application itself through AOP techniques or other code added to the application. Specifically, the sidecar mode has the following advantages: low coupling, since enhancement functions can be added without changing the application container; single responsibility, since each container has its own responsibility and the application container is not affected even if the sidecar container fails; reusability; and independent updates that do not affect one another.
In implementation, step 100 exposes the packaged distributed capabilities for invocation in a language-independent manner, for example through an HTTP/gRPC API: the distributed capabilities are extracted as building blocks, packaged into the distributed runtime, and exposed through a canonical API that provides the different distributed capabilities, such as service invocation, state management, publish/subscribe, and monitoring. In contrast to the service mesh, the sidecar in this scheme is exposed to the application.
Next, access is performed according to POST/GET/PUT/DELETE http://localhost:<DRPort>/<version>/invoke/<appId>/method/<method-name>, where DRPort is the port exposed by the sidecar of the distributed runtime, version is the corresponding version number, invoke indicates a call between services, appId is the unique identification number of the application on the application side, and method-name is the specific method to be called. Assuming, for example, that the python app needs to access a method of the nodeapp, it issues a POST request to http://localhost:3500/v1.0/invoke/nodeapp/method/neworder.
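As a concrete illustration only (not the patent's reference implementation), the following Python sketch issues such a call from the python app through its local sidecar using the requests library; the port 3500, version v1.0, appId nodeapp, and method neworder come from the example above, while the payload and function name are assumptions.

    import requests

    # Assumed values: the sidecar port (DRPort) and API version are taken from the
    # application-side configuration described in this embodiment.
    DR_PORT = 3500
    VERSION = "v1.0"

    def invoke(app_id: str, method: str, payload: dict) -> dict:
        """Call <method-name> of application <appId> through the local sidecar."""
        url = f"http://localhost:{DR_PORT}/{VERSION}/invoke/{app_id}/method/{method}"
        resp = requests.post(url, json=payload, timeout=5)
        resp.raise_for_status()
        return resp.json()

    # Example: the python app asks the distributed runtime to call nodeapp's neworder method.
    if __name__ == "__main__":
        print(invoke("nodeapp", "neworder", {"orderId": 42}))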
Step 200: calling a first application sidecar according to the access link, and forwarding a processing result of the first application to the first application, so that the first application forwards the processing result to a second application sidecar and the second application.
Specifically, the sidecar of the first application is paired with the actual service and holds the interfaces contained in that service; the call request is therefore forwarded onward to the service, that is, to the first application, which completes the corresponding business processing. After the first application finishes processing, it returns the result to the sidecar on its own side.
The first application sidecar then sends the processing result to the second application sidecar, and the second application sidecar sends the processing result to the second application, which completes one service call.
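For intuition only, the sidecar paired with an application can be pictured as a small HTTP server that forwards incoming invoke requests to its local service and hands the service's processing result back to the caller. The sketch below is an assumption-laden stand-in (Flask and requests are illustrative choices; APP_PORT, the route shape, and the header handling are not taken from the patent):

    import requests
    from flask import Flask, request

    app = Flask(__name__)

    # Assumption: the paired application (the "first application") listens on this local port.
    APP_PORT = 8000

    @app.route("/v1.0/invoke/<app_id>/method/<method>", methods=["POST", "GET", "PUT", "DELETE"])
    def forward(app_id, method):
        # app_id names the application this sidecar is paired with; forward the
        # call request to that paired service.
        upstream = f"http://localhost:{APP_PORT}/{method}"
        resp = requests.request(request.method, upstream, data=request.get_data(),
                                headers={"Content-Type": request.content_type or "application/json"})
        # Return the service's processing result to whoever called this sidecar.
        return resp.content, resp.status_code, {"Content-Type": resp.headers.get("Content-Type", "application/json")}

    if __name__ == "__main__":
        app.run(port=3500)  # DRPort exposed by the sidecar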
As can be seen from the above description, in the operating method of the distributed runtime medium according to the embodiment of the present invention, an access link is first generated according to a call request; a first application sidecar is then called according to the access link, and the processing result of the first application is forwarded to the first application, so that the first application forwards the processing result to a second application sidecar and the second application. The invention exposes and encapsulates distributed capabilities through the HTTP/gRPC protocols and integrates a lightweight SDK for each language in the sidecar, giving the sidecar cross-language capability; through modular capability extraction plus friendly dual-protocol communication, developers can easily build resilient, multi-language distributed applications that run in the cloud and at the edge. Meanwhile, the sidecar runs as an independent process decoupled from the service; as this pattern develops, its microservice governance functions become increasingly rich, and the microservice problems under a middle-platform architecture can gradually be solved in the near future.
In one embodiment, referring to fig. 2, step 100 comprises:
step 101: analyzing the calling request to determine a side car port, a version number, calling information among services, a unique identification number of an application and a calling method in the distributed system;
step 102: and generating an access link according to the port of the side car, the version number, the calling information among the services, the unique identification number of the application and the calling method in the distributed system.
For step 101 and step 102, an interface specification for inter-service method calls is provided: access must follow POST/GET/PUT/DELETE http://localhost:<DRPort>/<version>/invoke/<appId>/method/<method-name>, where DRPort is the port exposed by the sidecar of the distributed runtime, version is the corresponding version number, invoke indicates a call between services, appId is the unique identification number of the application on the application side, and method-name is the specific method to be called. Assuming that the python app needs to access a method of the nodeapp, a POST request is made to, for example, http://localhost:3500/v1.0/invoke/nodeapp/method/neworder.
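Purely as an illustration of steps 101 and 102 (the field names and data types below are assumptions rather than the patent's data model), a parsed call request could be represented and turned into the access link like this:

    from dataclasses import dataclass

    @dataclass
    class CallRequest:
        # Components determined by parsing the call request (step 101).
        dr_port: int     # sidecar port exposed by the distributed runtime
        version: str     # corresponding version number, e.g. "v1.0"
        call_type: str   # inter-service call information, e.g. "invoke"
        app_id: str      # unique identification number of the target application
        method: str      # specific method to be called

    def build_access_link(req: CallRequest) -> str:
        # Step 102: assemble the access link from the parsed components.
        return (f"http://localhost:{req.dr_port}/{req.version}/"
                f"{req.call_type}/{req.app_id}/method/{req.method}")

    # build_access_link(CallRequest(3500, "v1.0", "invoke", "nodeapp", "neworder"))
    # -> "http://localhost:3500/v1.0/invoke/nodeapp/method/neworder"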
In one embodiment, referring to fig. 3, step 200 comprises:
step 201: utilizing a naming analysis component to analyze and search the first application side car with the first application ID;
where name resolution relies on a lightweight implementation of SDKs in various languages. These forms, such as DRPort, URL forms are all directly defined in a configuration manner on the application side. The significance of using interface specifications is to implement control of network communication between services to perform functions such as service discovery, flow control, retry fusing, security access, etc., and the related network control functions are also integrated into the agent of the sidecar.
Step 202: receiving the processing result and forwarding the processing result to the first application.
In one embodiment, referring to fig. 4, step 200 further comprises:
step 203: receiving the processing result sent by the first application and forwarding the processing result to the second application-side sidecar, so that the second application-side sidecar forwards the processing result to the second application;
in one embodiment, the first application and the second application are written in different compiling languages.
The distributed capabilities are exposed for invocation through the HTTP/gRPC API in a language-independent manner, and a lightweight SDK for each language is integrated in the sidecar, giving it a cross-language character; through modular capability extraction plus friendly dual-protocol communication, developers can easily build resilient, multi-language distributed applications that run in the cloud and at the edge.
To further illustrate the present solution, the invention also provides a specific application example of the operating method of the distributed runtime medium, taking nodeapp (the first application) and python app (the second application) as examples; see fig. 5. The example specifically includes the following contents.
Term introduction:
Distributed runtime: provides the execution environment required for the distributed application to run.
Building blocks: runtime modules that provide basic distributed capabilities.
S1: establishing a calling link according to the interface specification.
Cross-service method calls are implemented by encapsulating the service invocation capability. For example, the nodeapp exposes an API, HTTP://10.0.0.2:8000/neworder, which under the prior-art mode can be accessed directly with an HTTP POST. In a service mesh the original POST URL is also used; the service mesh's sidecar proxy intercepts the traffic and calls the service in a way that is transparent to the application. In the present application, however, the distributed runtime provides an interface specification for inter-service method calls, and access must be performed according to POST/GET/PUT/DELETE http://localhost:<DRPort>/<version>/invoke/<appId>/method/<method-name>, where DRPort is the port exposed by the sidecar of the distributed runtime, version is the corresponding version number, invoke indicates a call between services, appId is the unique identification number of the application on the application side, and method-name is the specific method to be called. Then, assuming that the python app needs to access a method of the nodeapp, a POST request is made to, for example, http://localhost:3500/v1.0/invoke/nodeapp/method/neworder.
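To make the contrast concrete, the two access styles described above might look as follows; the addresses, port, and URL reuse the example values, and the payload is an assumption:

    import requests

    order = {"orderId": 42}

    # Prior-art style: call the nodeapp API directly at its own address.
    direct = requests.post("http://10.0.0.2:8000/neworder", json=order)

    # Distributed-runtime style: go through the local sidecar following the
    # POST /<version>/invoke/<appId>/method/<method-name> interface specification.
    via_runtime = requests.post(
        "http://localhost:3500/v1.0/invoke/nodeapp/method/neworder", json=order)

    print(direct.status_code, via_runtime.status_code)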
S2: the sidecar on the python app side is called.
The URL first reaches the sidecar on the python app side, which, using the name resolution component and the unique appId, finds the sidecar on the nodeapp side and forwards the request to it.
S3: forwarding the processing result to the python app.
First, the sidecar on the nodeapp side returns the processing result to the sidecar on the python app side. Then, the sidecar on the python app side returns the data to the python app. A service call is completed.
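On the callee side, the service only needs to expose its ordinary business endpoint; its sidecar handles the forwarding described above. A minimal stand-in for the nodeapp's neworder handler is sketched below; Python and Flask are used purely for consistency with the other sketches (the actual nodeapp would typically be a Node.js service), and the response fields are assumptions:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/neworder", methods=["POST"])
    def neworder():
        order = request.get_json(silent=True) or {}
        # ... business processing of the new order would happen here ...
        # The return value is the "processing result" that the nodeapp-side sidecar
        # returns to the python-app-side sidecar, which then returns it to the python app.
        return jsonify({"status": "created", "orderId": order.get("orderId")})

    if __name__ == "__main__":
        app.run(port=8000)  # the application port behind the nodeapp-side sidecar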
In addition, under this interface specification, the distributed runtime can also implement resource binding and event triggering based on an event-driven architecture. By establishing bindings between triggers and resources, events can be received from and sent to any external source (e.g., a database, a queue, a file system) without resorting to a message queue, which enables flexible business scenarios. The distributed runtime has two binding modes. Input binding: when an event occurs on an external resource, the application can receive the event through a specific API, POST http://localhost:<appPort>/<name>, and handle the corresponding logic. Output binding: output binding allows external resources to be invoked. For example, in an order-processing scenario, after an order is successfully created, the order information can be output to a specific Kafka queue through the binding API of the distributed runtime: POST/PUT http://localhost:<DRPort>/v1.0/bindings/<name>.
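A sketch of the output-binding call in the order scenario follows; the binding name order-created, the payload, and the request-body shape are assumptions, since the patent only specifies the URL form:

    import requests

    DR_PORT = 3500

    def publish_order_created(order: dict) -> None:
        # Output binding: push the order information to an external resource
        # (e.g. a Kafka queue) through the distributed runtime's binding API.
        url = f"http://localhost:{DR_PORT}/v1.0/bindings/order-created"
        requests.post(url, json=order, timeout=5).raise_for_status()

    publish_order_created({"orderId": 42, "amount": 99.0})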
On the other hand, state sharing between services and concurrency consistency are topics that cannot be avoided in a distributed system. For state sharing, the distributed runtime stores and reads state through a friendly HTTP API and supports concurrency and consistency behavior through option settings.
Store: POST http://localhost:<DRPort>/v1.0/state/<storename>
Read: GET http://localhost:<DRPort>/v1.0/state/<storename>/<key>
Delete: DELETE http://localhost:<DRPort>/v1.0/state/<storename>/<key>
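The three state operations listed above could be exercised as follows; the store name, key, and request-body shape are assumptions for illustration, since the patent only specifies the URLs:

    import requests

    BASE = "http://localhost:3500/v1.0/state"

    # Store: POST .../state/<storename>  (body layout assumed for illustration)
    requests.post(f"{BASE}/orders", json=[{"key": "order-42", "value": {"qty": 3}}]).raise_for_status()

    # Read: GET .../state/<storename>/<key>
    current = requests.get(f"{BASE}/orders/order-42").json()
    print(current)

    # Delete: DELETE .../state/<storename>/<key>
    requests.delete(f"{BASE}/orders/order-42").raise_for_status()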
It can be understood that the advantage brought by modularization and the interface specification is that developers only need to care about the specified interface during development and can debug with simple, readily available storage components, such as a database storage component; other storage components can be introduced (substituted) at run time without changing the business code.
As can be seen from the above description, in the operating method of the distributed runtime medium according to the embodiment of the present invention, an access link is first generated according to a call request; a first application sidecar is then called according to the access link, and the processing result of the first application is forwarded to the first application; finally, the processing result is forwarded to a second application sidecar and the second application. The invention remedies the shortcoming that existing service mesh technology covers only the network communication layer: it encapsulates and sinks distributed capabilities into interface specifications matched at runtime so as to simplify the technical complexity of developing distributed applications, and it provides a way to optimize the capabilities of the service mesh.
Based on the same inventive concept, the embodiment of the present application further provides an operating apparatus of a distributed runtime medium, which can be used to implement the method described in the foregoing embodiment, such as the following embodiments. Because the principle of solving the problem of the operation device of the distributed runtime medium is similar to that of the operation method of the distributed runtime medium, the implementation of the operation device of the distributed runtime medium can refer to the implementation of the operation method of the distributed runtime medium, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. While the system described in the embodiments below is preferably implemented in software, implementations in hardware, or a combination of software and hardware are also possible and contemplated.
An embodiment of the present invention provides a specific implementation manner of an operation apparatus of a distributed runtime medium, which is capable of implementing an operation method of the distributed runtime medium, and referring to fig. 6, the operation apparatus of the distributed runtime medium specifically includes the following contents:
an access link generating module 10, configured to generate an access link according to the call request;
the first sidecar calling module 20 is configured to call a first application sidecar according to the access link, and forward a processing result of the first application to the first application, so that the first application forwards the processing result to a second application sidecar and the second application.
In one embodiment, referring to fig. 7, the access link generating module 10 includes:
a call request parsing unit 101, configured to parse the call request to determine a sidecar port, a version number, inter-service call information, a unique identification number of an application, and a calling method in the distributed system;
and an access link generating unit 102, configured to generate an access link according to the sidecar port, the version number, the inter-service call information, the unique identification number of the application, and the calling method in the distributed system.
In one embodiment, referring to fig. 8, the first sidecar calling module 20 includes:
a first sidecar searching unit 201, configured to resolve and locate the first application sidecar through the name resolution component and the first application ID;
a first processing result forwarding unit 202, configured to receive the processing result and forward the processing result to the first application;
in an embodiment, referring to fig. 9, the first sidecar calling module 20 further includes:
a processing result receiving unit 203, configured to receive the processing result sent by the first application and forward the processing result to the second application sidecar, so that the second application sidecar forwards the processing result to the second application;
the first application and the second application are written in different programming languages.
As can be seen from the above description, in the running apparatus of the distributed runtime medium according to the embodiment of the present invention, an access link is first generated according to a call request; a first application sidecar is then called according to the access link, and the processing result of the first application is forwarded to the first application; finally, the processing result is forwarded to a second application sidecar and the second application. The invention exposes and encapsulates distributed capabilities through the HTTP/gRPC protocols and integrates a lightweight SDK for each language in the sidecar, giving the sidecar cross-language capability; through modular capability extraction plus friendly dual-protocol communication, developers can easily build resilient, multi-language distributed applications that run in the cloud and at the edge. Meanwhile, the sidecar runs as an independent process decoupled from the service; as this pattern develops, its microservice governance functions become increasingly rich, and the microservice problems under a middle-platform architecture can gradually be solved in the near future.
An embodiment of the present application further provides a specific implementation manner of an electronic device, which is capable of implementing all steps in the operation method of the distributed runtime medium in the foregoing embodiment, and referring to fig. 10, the electronic device specifically includes the following contents:
a processor 1201, a memory 1202, a communication interface 1203, and a bus 1204;
the processor 1201, the memory 1202, and the communication interface 1203 communicate with one another through the bus 1204; the communication interface 1203 is used to implement information transmission between related devices such as the server-side device and the client-side device;
the processor 1201 is configured to call the computer program in the memory 1202, and the processor executes the computer program to implement all the steps in the method for running the distributed runtime medium in the above embodiments, for example, the processor executes the computer program to implement the following steps:
step 100: generating an access link according to the calling request;
step 200: and calling a first application side car according to the access link, and forwarding a processing result of the first application to the first application so that the first application forwards the processing result to a second application side car and the second application.
Embodiments of the present application further provide a computer-readable storage medium capable of implementing all steps in the running method of the distributed runtime medium in the foregoing embodiments, where the computer-readable storage medium stores thereon a computer program, and when the computer program is executed by a processor, the computer program implements all steps of the running method of the distributed runtime medium in the foregoing embodiments, for example, when the processor executes the computer program, the processor implements the following steps:
step 100: generating an access link according to the calling request;
step 200: and calling a first application side car according to the access link, and forwarding a processing result of the first application to the first application so that the first application forwards the processing result to a second application side car and the second application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Although the present application provides method steps as in an embodiment or a flowchart, more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or client product executes, it may execute sequentially or in parallel (e.g., in the context of parallel processors or multi-threaded processing) according to the embodiments or methods shown in the figures.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, in implementing the embodiments of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, and the like. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
The embodiments of this specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The described embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of an embodiment of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is only an example of the embodiments of the present disclosure, and is not intended to limit the embodiments of the present disclosure. Various modifications and variations to the embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiments of the present specification should be included in the scope of the claims of the embodiments of the present specification.

Claims (10)

1. A method of operating a distributed runtime medium, comprising:
generating an access link according to the calling request;
and calling a first application sidecar according to the access link, and forwarding a processing result of the first application to the first application so that the first application forwards the processing result to a second application sidecar and the second application.
2. The method of claim 1, wherein generating the access link according to the invocation request comprises:
analyzing the call request to determine a sidecar port, a version number, inter-service call information, a unique identification number of an application, and a calling method in the distributed system;
and generating an access link according to the sidecar port, the version number, the inter-service call information, the unique identification number of the application, and the calling method in the distributed system.
3. The method for running the distributed runtime medium according to claim 1, wherein the invoking a first application sidecar according to the access link and forwarding a processing result of the first application to the first application comprises:
resolving and locating the first application sidecar through a name resolution component and the first application ID;
and receiving the processing result and forwarding the processing result to the first application.
4. The method of operating a distributed runtime medium of claim 1, wherein said causing the first application to forward the processing result to the second application sidecar and a second application comprises:
receiving the processing result sent by the first application and forwarding the processing result to the second application sidecar, so that the second application sidecar forwards the processing result to the second application.
5. The method of claim 1, wherein the first application and the second application are written in different programming languages.
6. An apparatus for operating a distributed runtime medium, comprising:
the access link generation module is used for generating an access link according to the calling request;
and the first sidecar calling module is used for calling the sidecar of the first application according to the access link and forwarding the processing result of the first application to the first application, so that the first application forwards the processing result to the sidecar of the second application and to the second application.
7. The apparatus for running a distributed runtime medium of claim 6, wherein the access link generation module comprises:
the call request parsing unit is used for analyzing the call request to determine a sidecar port, a version number, inter-service call information, a unique identification number of an application, and a calling method in the distributed system;
and the access link generating unit is used for generating an access link according to the sidecar port, the version number, the inter-service call information, the unique identification number of the application, and the calling method in the distributed system.
8. The apparatus for running a distributed runtime medium of claim 6, wherein the first sidecar invocation module comprises:
the first sidecar searching unit is used for resolving and locating the first application sidecar through the name resolution component and the first application ID;
a first processing result forwarding unit, configured to receive the processing result and forward the processing result to the first application;
a processing result receiving unit, configured to receive the processing result sent by the first application and forward the processing result to the second application sidecar, so that the second application sidecar forwards the processing result to the second application;
the first application and the second application are written in different programming languages.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of operation of the distributed runtime medium of any one of claims 1 to 5 are implemented when the program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of execution of the distributed runtime medium of any one of claims 1 to 5.
CN202110588125.0A 2021-05-28 2021-05-28 Operation method and device of distributed operation medium Pending CN113220461A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110588125.0A CN113220461A (en) 2021-05-28 2021-05-28 Operation method and device of distributed operation medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110588125.0A CN113220461A (en) 2021-05-28 2021-05-28 Operation method and device of distributed operation medium

Publications (1)

Publication Number Publication Date
CN113220461A true CN113220461A (en) 2021-08-06

Family

ID=77099638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110588125.0A Pending CN113220461A (en) 2021-05-28 2021-05-28 Operation method and device of distributed operation medium

Country Status (1)

Country Link
CN (1) CN113220461A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10623390B1 (en) * 2017-08-24 2020-04-14 Pivotal Software, Inc. Sidecar-backed services for cloud computing platform
US20200133789A1 (en) * 2018-10-25 2020-04-30 EMC IP Holding Company LLC Application consistent snapshots as a sidecar of a containerized application
CN112817565A (en) * 2021-01-20 2021-05-18 ***股份有限公司 Micro-service combination method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10623390B1 (en) * 2017-08-24 2020-04-14 Pivotal Software, Inc. Sidecar-backed services for cloud computing platform
US20200133789A1 (en) * 2018-10-25 2020-04-30 EMC IP Holding Company LLC Application consistent snapshots as a sidecar of a containerized application
CN112817565A (en) * 2021-01-20 2021-05-18 ***股份有限公司 Micro-service combination method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109284197B (en) Distributed application platform based on intelligent contract and implementation method
CN109933522B (en) Test method, test system and storage medium for automatic case
CN112035228B (en) Resource scheduling method and device
US20100318974A1 (en) Business object mockup architecture
WO2002001349A2 (en) System and method for coordination-centric design of software systems
CN113268319A (en) Business process customization and distributed process scheduling method based on micro-service architecture
US10089084B2 (en) System and method for reusing JavaScript code available in a SOA middleware environment from a process defined by a process execution language
Tang et al. Modeling enterprise service-oriented architectural styles
Ameur-Boulifa et al. Behavioural semantics for asynchronous components
CN110457132B (en) Method and device for creating functional object and terminal equipment
Ezenwoye et al. RobustBPEL2: Transparent autonomization in business processes through dynamic proxies
US20170286261A1 (en) System and method for providing runtime tracing for a web-based client accessing a transactional middleware platform using an extension interface
CN111522623B (en) Modularized software multi-process running system
Fortier et al. Dyninka: a FaaS framework for distributed dataflow applications
CN113448655A (en) C standard dynamic library calling method and device
US10268496B2 (en) System and method for supporting object notation variables in a process defined by a process execution language for execution in a SOA middleware environment
CN116755799A (en) Service arrangement system and method
US20160291941A1 (en) System and method for supporting javascript as an expression language in a process defined by a process execution language for execution in a soa middleware environment
Mostinckx et al. Mirror‐based reflection in AmbientTalk
CN113220461A (en) Operation method and device of distributed operation medium
CN109189382B (en) Business Process System
CN109669793B (en) Object calling method in middleware process
CN113312031A (en) Naming service interface of software communication system structure
US10223142B2 (en) System and method for supporting javascript activities in a process defined by a process execution language for execution in a SOA middleware environment
Jakóbczyk et al. Cloud-Native Architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination