CN115883655A - Service request processing method and device, electronic equipment and storage medium - Google Patents
Service request processing method and device, electronic equipment and storage medium
- Publication number
- Publication number: CN115883655A (application CN202211562193.0A)
- Authority
- CN
- China
- Prior art keywords
- service request
- configuration information
- service
- agent
- control plane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The disclosure relates to a service request processing method and apparatus, an electronic device, and a storage medium. The method obtains at least one service request, obtains configuration information from a service mesh control plane through a control plane proxy, and issues the configuration information to a plurality of agent instances in a data processor, so that the agent instances process each of the at least one service request based on the configuration information. Compared with the prior art, distributing service requests across multiple agent instances allows them to be processed in parallel, so highly concurrent service requests can be handled and the limits imposed by a centralized agent's finite processor, memory, and maximum connection count are avoided. Parallel processing across agent instances also increases service request processing speed, improves processing efficiency, and improves user experience.
Description
Technical Field
The present disclosure relates to the field of data processing, and in particular, to a service request processing method and apparatus, an electronic device, and a storage medium.
Background
A Data Processing Unit (DPU) is a new generation of data-centric, I/O-intensive computing chip. Using a software-defined technology route, it virtualizes the infrastructure resource layer, improves the efficiency of the computing system, lowers the total cost of ownership of the overall system, improves data processing efficiency, and reduces the performance loss of other computing chips.
A Service Mesh is an infrastructure layer in cloud services focused on reliable delivery of service requests among microservices; on this basis it provides traffic-control operations such as service discovery, load balancing, request routing, and rule configuration. Service meshes on current DPUs typically use a centralized agent; deploying the centralized agent on the DPU provides application services on the cloud server nodes with functions such as service governance, traffic decoupling, and configuration-based flow control.
However, the centralized agent's limited resources, such as its processor, memory, and maximum connection count, make it difficult to bear highly concurrent service requests. As the scale of the application services deployed on a cloud server node grows, not only can the service mesh's functions no longer be satisfied, but even basic service request and service traffic forwarding may fail to complete, degrading the user experience.
Disclosure of Invention
In order to solve the technical problem, the present disclosure provides a service request processing method, an apparatus, an electronic device, and a storage medium, so as to process highly concurrent service requests and improve user experience.
In a first aspect, an embodiment of the present disclosure provides a service request processing method, which is applied to cloud server nodes, where a control plane proxy is deployed on each cloud server node, and the method includes:
obtaining at least one service request;
obtaining configuration information from a service mesh control plane based on the control plane proxy;
and respectively issuing the configuration information to a plurality of agent instances in a data processor so that the plurality of agent instances in the data processor process each service request in the at least one service request based on the configuration information.
In some embodiments, the configuration information includes at least one of:
attribute information of each service request in the at least one service request, a user's configuration information for each service request in the at least one service request, and configuration information of the service mesh.
In some embodiments, the obtaining configuration information from the service mesh control plane based on the control plane proxy comprises:
synchronizing with the service mesh control plane through the control plane proxy; and
obtaining the configuration information from the service mesh control plane.
In a second aspect, an embodiment of the present disclosure provides a service request processing method applied to a data processor, where a plurality of proxy instances are deployed in a system on chip of the data processor, the method including:
receiving configuration information sent by a cloud server node based on the plurality of agent instances; the cloud server node is used for acquiring at least one service request, acquiring configuration information from a service grid control plane based on a control plane agent, and respectively issuing the configuration information to a plurality of agent instances in the data processor;
processing each of the at least one service request based on the configuration information.
In some embodiments, said processing each of said at least one service request based on said configuration information comprises:
obtaining a corresponding relation between each service request in the at least one service request and the plurality of agent instances according to the configuration information;
determining a target agent instance corresponding to each service request in the at least one service request from the corresponding relation;
and processing each service request based on the target agent instance corresponding to each service request in the at least one service request.
In some embodiments, said processing each service request based on its corresponding target agent instance comprises:
for each service request in the at least one service request, establishing a connection between the service request and the target agent instance corresponding to the service request;
and calling the target proxy instance connected with the service request to process the service request.
In a third aspect, an embodiment of the present disclosure provides a service request processing apparatus, which is applied to cloud server nodes, where a control plane proxy is respectively deployed on each cloud server node, and the apparatus includes:
a first obtaining module, configured to obtain at least one service request;
a second obtaining module, configured to obtain configuration information from a service mesh control plane based on the control plane proxy;
and the issuing module is used for respectively issuing the configuration information to a plurality of agent instances in the data processor so that the plurality of agent instances in the data processor process each service request in the at least one service request based on the configuration information.
In a fourth aspect, an embodiment of the present disclosure provides a service request processing apparatus applied to a data processor, where multiple proxy instances are deployed in a system on chip of the data processor, and the apparatus includes:
the receiving module is used for receiving the configuration information sent by the cloud server node based on the plurality of proxy instances; the cloud server node is used for acquiring at least one service request, acquiring configuration information from a service grid control plane based on a control plane agent, and respectively issuing the configuration information to a plurality of agent instances in the data processor;
a processing module configured to process each service request of the at least one service request based on the configuration information.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first or second aspect.
In a sixth aspect, the disclosed embodiments provide a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the method according to the first or second aspect.
In a seventh aspect, the disclosed embodiments also provide a computer program product comprising a computer program or instructions, which when executed by a processor, implement the method according to the first or second aspect.
According to the service request processing method and apparatus, electronic device, and storage medium of the present disclosure, at least one service request is obtained, configuration information is obtained from the service mesh control plane through the control plane proxy, and the configuration information is issued to the plurality of proxy instances in the data processor, so that those instances process each service request in the at least one service request based on the configuration information. Compared with the prior art of processing service requests through a centralized agent, distributing the service requests across multiple agent instances allows them to be processed in parallel: highly concurrent service requests can be handled, the demands of all service requests, traffic forwarding, and the mesh control layer can be met, and the limits of a centralized agent's finite processor, memory, and maximum connection count are avoided. Parallel processing also increases service request processing speed and efficiency, improving user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below; those skilled in the art can derive other drawings from these without inventive labor.
Fig. 1 is a flowchart of a service request processing method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of processing a service request based on a multi-instance proxy according to an embodiment of the present disclosure;
Fig. 3 is a flowchart of a service request processing method provided by an embodiment of the present disclosure;
Fig. 4 is a flowchart of a service request processing method according to another embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a service request processing apparatus according to an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a service request processing apparatus according to an embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
To address the limitations of the centralized agent described in the Background, embodiments of the present disclosure provide a service request processing method, which is described below with reference to specific embodiments.
Fig. 1 is a flowchart of a service request processing method provided by an embodiment of the disclosure. The method is applied to cloud server nodes, and a control plane proxy is deployed on each cloud server node. It can be understood that the service request processing method provided by the embodiment of the present disclosure may also be applied in other scenarios.
The service request processing method shown in fig. 1 is described as follows, and the method includes the following steps:
s101, at least one service request is obtained.
In some optional embodiments, the cloud server node obtains at least one service request. Specifically, a client sends at least one service request to the cloud server node, and the cloud server node receives it. Fig. 2 shows an example with two cloud server nodes: each node is deployed with one service mesh control plane agent (control plane proxy) and hosts multiple service containers (Service Pods), from which the node can obtain multiple service requests.
S102, obtaining configuration information from a service grid control plane based on the control plane agent.
After obtaining the at least one service request, the cloud server node may obtain configuration information from a serving grid control plane through the control plane proxy.
As shown in fig. 2, the service mesh control plane may obtain configuration information such as full service application node information, user configuration information, and service mesh configuration information managed by the system architecture (e.g., Kubernetes); the control plane agent in the cloud server node may then obtain this configuration information from the service mesh control plane.
In some embodiments, the configuration information includes at least one of: attribute information of each service request in the at least one service request, configuration information of each service request in the at least one service request by a user, and configuration information of a service grid.
Optionally, the attribute information of each service request includes its Internet Protocol (IP) address, physical (MAC) address, port number, communication protocol, and the like. The user's configuration information for each service request may, for example, assign a traffic weight to the request. The configuration information of the service mesh includes the correspondence between service containers and proxy instances, i.e., which proxy instance processes the service requests in each service container.
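For illustration only, the configuration information described above might be modeled as follows; the class and field names are assumptions of this sketch, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class ServiceRequestAttrs:
    """Attribute information for one service request (field names are illustrative)."""
    ip: str
    mac: str
    port: int
    protocol: str

@dataclass
class MeshConfig:
    """Configuration information a control-plane proxy might hold."""
    request_attrs: dict          # request id -> ServiceRequestAttrs
    traffic_weights: dict        # user-assigned traffic weight per request
    container_to_instance: dict  # service container -> proxy instance

cfg = MeshConfig(
    request_attrs={"req-1": ServiceRequestAttrs("10.0.0.5", "aa:bb:cc:dd:ee:ff", 8080, "TCP")},
    traffic_weights={"req-1": 0.6},
    container_to_instance={"pod-a": "envoy-0"},
)
```

Any real deployment would carry this state in the mesh's own configuration format rather than a flat structure like this.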
In some embodiments, S102 includes, but is not limited to S1021, S1022:
and S1021, synchronizing with the service grid control plane through the control plane proxy.
Before obtaining the configuration information, the cloud server node synchronizes with the service mesh control plane through the control plane proxy.
S1022, obtaining configuration information from the service grid control plane.
After synchronizing with the service mesh control plane through the control plane proxy, the cloud server node may obtain the configuration information from the control plane.
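A minimal sketch of the synchronize-then-obtain sequence of S1021 and S1022; all names here are illustrative, and a real control plane proxy would typically use an xDS-style subscription rather than a simple flag:

```python
class ControlPlaneProxy:
    """Hypothetical control-plane proxy on a cloud server node."""
    def __init__(self, control_plane):
        self.control_plane = control_plane  # dict standing in for the mesh control plane
        self.synced = False

    def sync(self):
        # S1021: synchronize with the service mesh control plane.
        self.synced = True

    def get_config(self):
        # S1022: obtain configuration only after synchronization.
        if not self.synced:
            raise RuntimeError("must synchronize with the control plane first")
        return dict(self.control_plane)  # snapshot of the control plane's state

proxy = ControlPlaneProxy({"route": "svc-a", "weight": 0.5})
proxy.sync()
config = proxy.get_config()
```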
S103, respectively issuing the configuration information to a plurality of agent instances in the data processor, so that the plurality of agent instances in the data processor process each service request in the at least one service request based on the configuration information.
As shown in fig. 2, after the cloud server node obtains the configuration information, the control plane proxy in the cloud server node sends the configuration information to a plurality of proxy instances in the data processor, and the plurality of proxy instances in the data processor receive the configuration information and process each service request in the at least one service request based on the configuration information.
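The fan-out of S103 can be sketched as follows, with a hypothetical `ProxyInstance` class standing in for the agent instances on the DPU:

```python
class ProxyInstance:
    """Hypothetical stand-in for an agent instance deployed on the DPU."""
    def __init__(self, name):
        self.name = name
        self.config = None

    def apply(self, config):
        # Receive the configuration issued by the control plane proxy.
        self.config = config

def distribute_config(config, proxy_instances):
    """Issue the same configuration to every proxy instance (sketch of S103)."""
    for instance in proxy_instances:
        instance.apply(config)

instances = [ProxyInstance(f"envoy-{i}") for i in range(3)]
distribute_config({"version": 1}, instances)
```

Each instance receives the same configuration, after which it can serve its own share of the service requests independently.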
In some embodiments, each proxy instance forms a cloud-native service mesh cell based on its established connections and service requests, and the plurality of cells on one cloud server node collectively form a cloud-native service mesh. The cloud-native service mesh can provide service discovery, load balancing, request routing, rule configuration, and similar services for each service container. When it processes external service requests or responses, or service requests between cloud server nodes, traffic can be accelerated through the DPU.
In the embodiment of the disclosure, the cloud server node obtains at least one service request, obtains configuration information from the service mesh control plane through the control plane proxy, and issues the configuration information to the plurality of proxy instances in the data processor, so that those instances process each service request based on the configuration information. Compared with the prior art of processing service requests through a centralized agent, distributing the service requests across multiple agent instances allows them to be processed in parallel: highly concurrent service requests can be handled, the demands of all service requests, traffic forwarding, and the mesh control layer can be met, and the limits of a centralized agent's finite processor, memory, and maximum connection count are avoided. Parallel processing also increases service request processing speed and efficiency, improving user experience.
Fig. 3 is a flowchart of a service request processing method provided by an embodiment of the present disclosure, which is applied to a data processor, and a plurality of agent instances are deployed in a system on chip of the data processor, as shown in fig. 3, the method includes the following steps:
s301, receiving configuration information sent by a cloud server node based on the plurality of proxy instances; the cloud server node is used for acquiring at least one service request, acquiring configuration information from a service grid control plane based on a control plane agent, and respectively issuing the configuration information to a plurality of agent instances in the data processor.
In this embodiment, a plurality of agent instances are deployed in the system on chip of the data processor; the system on chip runs the operating system deployed on the DPU. The cloud server node obtains at least one service request, obtains configuration information from the service mesh control plane through the control plane agent, and issues the configuration information to the plurality of agent instances in the data processor, which then receive it.
S302, processing each service request in the at least one service request based on the configuration information.
After receiving the configuration information sent by the cloud server node, the data processor processes each service request in the at least one service request according to the configuration information. Specifically, the data processor may configure a plurality of proxy instances based on the configuration information, and further process each service request of the at least one service request through the plurality of proxy instances.
In some embodiments, S302 includes, but is not limited to, S3021, S3022, S3023:
s3021, obtaining a corresponding relationship between each service request of the at least one service request and the plurality of agent instances according to the configuration information.
After receiving the configuration information sent by the cloud server node, the data processor may obtain, according to the configuration information, a correspondence between each service request of the at least one service request and the plurality of agent instances. One proxy instance may process one service request or may process multiple service requests, which is not limited herein. That is, the proxy instance and the service request may be in a one-to-one relationship or a one-to-many relationship.
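A toy sketch of deriving this correspondence from the configuration; the configuration keys are invented for illustration, and note that one instance may serve several requests (one-to-many):

```python
def build_correspondence(config):
    """Map each service request to its proxy instance (sketch of S3021)."""
    return {
        req: config["container_to_instance"][container]
        for req, container in config["request_container"].items()
    }

config = {
    # request id -> service container holding it (keys invented for illustration)
    "request_container": {"req-1": "pod-a", "req-2": "pod-a", "req-3": "pod-b"},
    # service container -> proxy instance on the DPU
    "container_to_instance": {"pod-a": "envoy-0", "pod-b": "envoy-1"},
}
mapping = build_correspondence(config)
```

Here "req-1" and "req-2" share "envoy-0", illustrating the one-to-many case.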
S3022, determining a target proxy instance corresponding to each service request in the at least one service request from the corresponding relation.
After obtaining the correspondence, the data processor may determine a target agent instance corresponding to each service request from the correspondence.
S3023, processing each service request based on the target proxy instance corresponding to each service request in the at least one service request.
Further, the data processor processes each service request through a target agent instance corresponding to the service request in the at least one service request.
In the embodiment of the disclosure, the plurality of proxy instances receive the configuration information sent by the cloud server node, which obtains at least one service request, obtains the configuration information from the service mesh control plane through the control plane agent, and issues it to the plurality of agent instances in the data processor; each of the at least one service request is then processed based on the configuration information. Compared with the prior art of processing service requests through a centralized agent, distributing the service requests across multiple agent instances allows them to be processed in parallel, so highly concurrent service requests can be handled, the demands of all service requests, traffic forwarding, and the mesh control layer can be met, the limits of a centralized agent's finite processor, memory, and maximum connection count are avoided, and service request processing speed, efficiency, and user experience are improved.
Fig. 4 is a flowchart of a service request processing method according to another embodiment of the disclosure, and as shown in fig. 4, the method includes the following steps:
s401, receiving configuration information sent by a cloud server node based on the plurality of agent instances; the cloud server node is used for acquiring at least one service request, acquiring configuration information from a service grid control plane based on a control plane agent, and respectively issuing the configuration information to a plurality of agent instances in the data processor.
Specifically, the implementation process and principle of S401 and S301 are consistent, and are not described herein again.
S402, obtaining the corresponding relation between each service request in the at least one service request and the plurality of agent instances according to the configuration information.
After receiving the configuration information sent by the cloud server node, the data processor may obtain, according to the configuration information, a correspondence between each service request in the at least one service request and the plurality of agent instances. One proxy instance may process one service request or may process multiple service requests, which is not limited herein. That is, the proxy instance and the service request may be in a one-to-one relationship or a one-to-many relationship.
S403, determining a target agent instance corresponding to each service request in the at least one service request from the corresponding relation.
After obtaining the correspondence, the data processor may determine a target agent instance corresponding to each service request from the correspondence.
S404, for each service request in the at least one service request, establishing connection between the service request and the target agent instance corresponding to the service request.
After determining the target agent instance corresponding to each service request, the data processor establishes a session between each service request and its target agent instance. As shown in fig. 2, each Envoy agent, i.e., a target agent instance, establishes a Transmission Control Protocol (TCP) session with the corresponding service request on the cloud server node, so that each Envoy agent operates within a reasonable traffic-bearing range and highly concurrent service requests can be borne.
S405, the target agent instance connected with the service request is called to process the service request.
After the connection is established, the data processor may invoke the target proxy instance connected to the service request to process it. Because the service requests are distributed across the plurality of agent instances and processed in parallel, the limits of a centralized agent's finite processor, memory, and maximum connection count are avoided, and service request processing speed and efficiency are improved.
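The connect-then-invoke flow of S404 and S405, with parallel processing across instances, might look like this sketch; the class names and session handling are assumptions of the illustration, not the patent's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

class TargetInstance:
    """Hypothetical proxy instance that only serves requests with a session."""
    def __init__(self, name):
        self.name = name
        self.sessions = set()

    def connect(self, request_id):
        # S404: establish a (stand-in for a TCP) session for this request.
        self.sessions.add(request_id)

    def handle(self, request_id):
        # S405: process the request over its established session.
        if request_id not in self.sessions:
            raise RuntimeError("no session established for " + request_id)
        return f"{request_id} handled by {self.name}"

def dispatch(mapping, instances):
    """Connect each request to its target instance, then process in parallel."""
    for req, name in mapping.items():
        instances[name].connect(req)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda item: instances[item[1]].handle(item[0]),
                             mapping.items()))

instances = {name: TargetInstance(name) for name in ("envoy-0", "envoy-1")}
results = dispatch({"req-1": "envoy-0", "req-2": "envoy-1"}, instances)
```

The thread pool stands in for the concurrency the DPU's multiple agent instances provide; each request is only ever handled by the instance it is connected to.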
In the embodiment of the disclosure, the plurality of agent instances receive the configuration information sent by the cloud server node, which obtains at least one service request, obtains the configuration information from the service mesh control plane through the control plane agent, and issues it to the plurality of agent instances in the data processor. According to the configuration information, the correspondence between each service request and the plurality of agent instances is obtained, the target agent instance for each service request is determined from the correspondence, a connection is established between each service request and its target agent instance, and the connected target agent instance is invoked to process the request. Compared with the prior art, distributing the service requests across multiple agent instances allows parallel processing: highly concurrent service requests can be handled, the demands of all service requests, traffic forwarding, and the mesh control layer can be met, the limits of a centralized agent's finite processor, memory, and maximum connection count are avoided, and service request processing speed, efficiency, and user experience are improved.
Fig. 5 is a schematic structural diagram of a service request processing apparatus according to an embodiment of the present disclosure. The service request processing apparatus may be the cloud server node in the above embodiments, or a component or assembly within the cloud server node. The service request processing apparatus provided in the embodiment of the present disclosure may execute the processing procedure provided in the embodiment of the service request processing method. As shown in fig. 5, the service request processing apparatus 50 includes: a first obtaining module 51, a second obtaining module 52, and an issuing module 53. The first obtaining module 51 is configured to obtain at least one service request; the second obtaining module 52 is configured to obtain configuration information from the service mesh control plane based on the control plane proxy; and the issuing module 53 is configured to issue the configuration information to the plurality of agent instances in the data processor respectively, so that those agent instances process each of the at least one service request based on the configuration information.
Optionally, the configuration information includes at least one of the following: attribute information of each of the at least one service request, user-specified configuration information for each of the at least one service request, and configuration information of the service mesh.
Optionally, when obtaining the configuration information from the service mesh control plane based on the control plane proxy, the second obtaining module 52 is specifically configured to: synchronize with the service mesh control plane through the control plane proxy, and obtain the configuration information from the service mesh control plane.
The service request processing apparatus in the embodiment shown in fig. 5 may be used to execute the technical solution of the foregoing method embodiment; its implementation principle and technical effect are similar and are not repeated here.
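The three modules of apparatus 50 can be sketched as below. This is an illustrative sketch under assumptions: the class and method names (`ControlPlaneProxy.sync`, `CloudServerNode.issue_configuration`) and the dictionary-based configuration are inventions of this example, not the patent's implementation.

```python
class ControlPlaneProxy:
    """Per-node agent that synchronizes with the service mesh control plane."""

    def __init__(self, control_plane):
        self.control_plane = control_plane

    def sync(self):
        # Synchronize with the control plane and pull the current configuration.
        return dict(self.control_plane)


class CloudServerNode:
    """Sketch of apparatus 50: two obtaining modules plus an issuing module."""

    def __init__(self, proxy, agent_instances):
        self.proxy = proxy
        self.agent_instances = agent_instances
        self.requests = []

    def obtain_requests(self, requests):
        # First obtaining module: obtain at least one service request.
        self.requests.extend(requests)

    def issue_configuration(self):
        # Second obtaining module: obtain configuration via the control plane
        # proxy. Issuing module: issue it to each agent instance respectively.
        config = self.proxy.sync()
        for instance in self.agent_instances:
            instance.update(config)
        return config


control_plane = {"route": "svc-a", "timeout_ms": 500}
agent_instances = [{}, {}, {}]  # stand-ins for agent instances on the DPU
node = CloudServerNode(ControlPlaneProxy(control_plane), agent_instances)
node.obtain_requests(["req-1", "req-2"])
issued = node.issue_configuration()
```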
Fig. 6 is a schematic structural diagram of a service request processing apparatus according to an embodiment of the present disclosure. The service request processing apparatus may be the data processor in the above embodiments, or a component or assembly within the data processor. The service request processing apparatus provided in the embodiment of the present disclosure may execute the processing procedure provided in the embodiment of the service request processing method. As shown in fig. 6, the service request processing apparatus 60 includes a receiving module 61 and a processing module 62. The receiving module 61 is configured to receive, based on the plurality of agent instances, configuration information sent by the cloud server node, where the cloud server node is configured to obtain at least one service request, obtain the configuration information from the service mesh control plane based on a control plane proxy, and issue the configuration information to the plurality of agent instances in the data processor respectively. The processing module 62 is configured to process each of the at least one service request based on the configuration information.
Optionally, when processing each of the at least one service request based on the configuration information, the processing module 62 is specifically configured to: obtain, from the configuration information, a correspondence between each of the at least one service request and the plurality of agent instances; determine, from that correspondence, the target agent instance corresponding to each service request; and process each service request based on its corresponding target agent instance.
Optionally, when processing each service request based on its corresponding target agent instance, the processing module 62 is specifically configured to: for each of the at least one service request, establish a connection between the service request and its target agent instance, and then invoke the connected target agent instance to process the request.
The service request processing apparatus in the embodiment shown in fig. 6 may be used to execute the technical solution of the foregoing method embodiment; its implementation principle and technical effect are similar and are not repeated here.
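The receive-then-process sequence of apparatus 60 can likewise be sketched as below. Here, unlike the hash example earlier, the correspondence is read explicitly from the received configuration; the `correspondence` key and the `DataProcessorApparatus` class are assumptions of this sketch, not details from the patent.

```python
class DataProcessorApparatus:
    """Sketch of apparatus 60: a receiving module and a processing module."""

    def __init__(self, agent_instances):
        self.agent_instances = agent_instances
        self.configuration = None

    def receive(self, configuration):
        # Receiving module: accept the configuration information issued
        # by the cloud server node.
        self.configuration = configuration

    def process(self, service_requests):
        # Processing module: read the request -> instance correspondence from
        # the configuration, determine each request's target instance,
        # establish a connection, then invoke the instance.
        correspondence = self.configuration["correspondence"]
        results = {}
        for request in service_requests:
            index = correspondence[request]
            target = self.agent_instances[index]
            target.setdefault("connected", set()).add(request)
            results[request] = f"{request} -> instance {index}"
        return results


agent_instances = [{}, {}]
apparatus = DataProcessorApparatus(agent_instances)
apparatus.receive({"correspondence": {"req-a": 0, "req-b": 1}})
outcome = apparatus.process(["req-a", "req-b"])
```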
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be a cloud server node or a data processor. As shown in fig. 7, the electronic device 70 may execute the processing procedure provided in the service request processing method embodiment and includes: a memory 71, a processor 72, a computer program, and a communication interface 73. The computer program is stored in the memory 71 and is configured to be executed by the processor 72 to perform the service request processing method described above.
In addition, the embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the service request processing method described in the above embodiment.
Furthermore, the embodiments of the present disclosure also provide a computer program product, which includes a computer program or instructions that, when executed by a processor, implement the service request processing method described above.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
obtaining at least one service request;
obtaining configuration information from a service mesh control plane based on the control plane proxy;
issuing the configuration information to a plurality of agent instances in a data processor respectively, so that the plurality of agent instances in the data processor process each service request in the at least one service request based on the configuration information.
Or the computer readable medium carrying one or more programs which, when executed by the electronic device, cause the electronic device to:
receiving, based on the plurality of agent instances, configuration information sent by a cloud server node; wherein the cloud server node is configured to obtain at least one service request, obtain the configuration information from a service mesh control plane based on a control plane proxy, and issue the configuration information to the plurality of agent instances in the data processor respectively;
processing each of the at least one service request based on the configuration information.
In addition, the electronic device may also perform other steps in the service request processing method as described above.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A service request processing method, applied to a cloud server node, wherein a control plane proxy is deployed on each cloud server node respectively, the method comprising:
obtaining at least one service request;
obtaining configuration information from a service mesh control plane based on the control plane proxy;
issuing the configuration information to a plurality of agent instances in a data processor respectively, so that the plurality of agent instances in the data processor process each service request in the at least one service request based on the configuration information.
2. The method of claim 1, wherein the configuration information comprises at least one of:
attribute information of each service request in the at least one service request, user-specified configuration information for each service request in the at least one service request, and configuration information of the service mesh.
3. The method of claim 1, wherein the obtaining configuration information from a service mesh control plane based on the control plane proxy comprises:
synchronizing with the service mesh control plane through the control plane proxy;
obtaining the configuration information from the service mesh control plane.
4. A service request processing method, applied to a data processor, wherein a plurality of agent instances are deployed in a system on a chip of the data processor, the method comprising:
receiving, based on the plurality of agent instances, configuration information sent by a cloud server node; wherein the cloud server node is configured to obtain at least one service request, obtain the configuration information from a service mesh control plane based on a control plane proxy, and issue the configuration information to the plurality of agent instances in the data processor respectively;
processing each of the at least one service request based on the configuration information.
5. The method of claim 4, wherein the processing each service request of the at least one service request based on the configuration information comprises:
obtaining a corresponding relation between each service request in the at least one service request and the plurality of agent instances according to the configuration information;
determining a target agent instance corresponding to each service request in the at least one service request from the corresponding relation;
and processing each service request based on the target agent instance corresponding to each service request in the at least one service request.
6. The method of claim 5, wherein the processing each of the at least one service request based on its corresponding target proxy instance comprises:
for each service request in the at least one service request, establishing a connection between the service request and the target agent instance corresponding to the service request;
and calling the target agent instance connected with the service request to process the service request.
7. A service request processing apparatus, applied to a cloud server node, wherein a control plane proxy is deployed on each cloud server node respectively, the apparatus comprising:
a first obtaining module, configured to obtain at least one service request;
a second obtaining module, configured to obtain configuration information from a service mesh control plane based on the control plane proxy;
an issuing module, configured to issue the configuration information to a plurality of agent instances in a data processor respectively, so that the plurality of agent instances in the data processor process each service request in the at least one service request based on the configuration information.
8. A service request processing apparatus, applied to a data processor, wherein a plurality of agent instances are deployed in a system on a chip of the data processor, the apparatus comprising:
a receiving module, configured to receive, based on the plurality of agent instances, configuration information sent by the cloud server node; wherein the cloud server node is configured to obtain at least one service request, obtain the configuration information from a service mesh control plane based on a control plane proxy, and issue the configuration information to the plurality of agent instances in the data processor respectively;
a processing module configured to process each service request of the at least one service request based on the configuration information.
9. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211562193.0A CN115883655B (en) | 2022-12-07 | 2022-12-07 | Service request processing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115883655A true CN115883655A (en) | 2023-03-31 |
CN115883655B CN115883655B (en) | 2024-06-07 |
Family
ID=85766312
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211562193.0A Active CN115883655B (en) | 2022-12-07 | 2022-12-07 | Service request processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115883655B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117061338A (en) * | 2023-08-16 | 2023-11-14 | 中科驭数(北京)科技有限公司 | Service grid data processing method, device and system based on multiple network cards |
CN117176819A (en) * | 2023-09-27 | 2023-12-05 | 中科驭数(北京)科技有限公司 | Service network service-based unloading method and device |
CN117395141A (en) * | 2023-12-07 | 2024-01-12 | 江苏征途技术股份有限公司 | Method for simplifying station room intelligent auxiliary and artificial intelligent visual gateway configuration |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103581283A (en) * | 2012-07-26 | 2014-02-12 | 国际商业机器公司 | Method and system for controlling support center |
CN107911390A (en) * | 2017-12-14 | 2018-04-13 | 浪潮软件股份有限公司 | Handle method, proxy server, application server and the system of service request |
CN108234223A (en) * | 2018-04-19 | 2018-06-29 | 郑州云海信息技术有限公司 | A kind of security service design method of data center's total management system |
EP3382555A1 (en) * | 2017-03-30 | 2018-10-03 | Wipro Limited | System and method for management of services in a cloud environment |
CN108847990A (en) * | 2013-03-29 | 2018-11-20 | 思杰***有限公司 | Mobile device management function is provided |
US20190190785A1 (en) * | 2016-09-30 | 2019-06-20 | Intel Corporation | Geographic service classification and routing |
CN111917870A (en) * | 2020-07-30 | 2020-11-10 | 广州津虹网络传媒有限公司 | Request processing method, system, device, electronic equipment and storage medium |
CN112000365A (en) * | 2020-08-24 | 2020-11-27 | 百度时代网络技术(北京)有限公司 | Service grid configuration method, device, equipment and medium based on micro-service architecture |
CN112395070A (en) * | 2019-08-12 | 2021-02-23 | 阿里巴巴集团控股有限公司 | Data processing system and method |
CN112738284A (en) * | 2021-04-01 | 2021-04-30 | 腾讯科技(深圳)有限公司 | Data transmission method, device, equipment and storage medium in service integration |
CN113726843A (en) * | 2021-07-13 | 2021-11-30 | 阿里巴巴新加坡控股有限公司 | Edge cloud system, data transmission method, device and storage medium |
CN113765988A (en) * | 2021-02-26 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Information processing method, information processing device, electronic equipment and storage medium |
US20220116445A1 (en) * | 2021-04-12 | 2022-04-14 | Miltiadis Filippou | Disintermediated attestation in a mec service mesh framework |
CN115834708A (en) * | 2022-11-23 | 2023-03-21 | 中科驭数(北京)科技有限公司 | Load balancing method, device, equipment and computer readable storage medium |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117061338A (en) * | 2023-08-16 | 2023-11-14 | 中科驭数(北京)科技有限公司 | Service grid data processing method, device and system based on multiple network cards |
CN117061338B (en) * | 2023-08-16 | 2024-06-07 | 中科驭数(北京)科技有限公司 | Service grid data processing method, device and system based on multiple network cards |
CN117176819A (en) * | 2023-09-27 | 2023-12-05 | 中科驭数(北京)科技有限公司 | Service network service-based unloading method and device |
CN117395141A (en) * | 2023-12-07 | 2024-01-12 | 江苏征途技术股份有限公司 | Method for simplifying station room intelligent auxiliary and artificial intelligent visual gateway configuration |
CN117395141B (en) * | 2023-12-07 | 2024-05-24 | 江苏征途技术股份有限公司 | Method for simplifying station room intelligent auxiliary and artificial intelligent visual gateway configuration |
Also Published As
Publication number | Publication date |
---|---|
CN115883655B (en) | 2024-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115883655B (en) | Service request processing method and device, electronic equipment and storage medium | |
CN109561171B (en) | Configuration method and device of virtual private cloud service | |
US20170026306A1 (en) | Systems and methods for autonomous resource discovery, management, and stitching | |
AU2020373037B2 (en) | Cloud service for cross-cloud operations | |
CN113938533B (en) | Inter-application communication method and device, electronic equipment and computer readable medium | |
CN115834708A (en) | Load balancing method, device, equipment and computer readable storage medium | |
US11689642B2 (en) | Routing application control and data-plane traffic in support of cloud-native applications | |
JP2019525604A (en) | Network function NF management method and NF management apparatus | |
CN112968965B (en) | Metadata service method, server and storage medium for NFV network node | |
CN114650281B (en) | File downloading method, device, equipment and storage medium based on complex network | |
CN117176722A (en) | Dynamic reconstruction method, device and server | |
CN113472848A (en) | Network fusion method and device of virtual machine and container and related equipment | |
CN116886496A (en) | DPU-based data processing method, device, equipment and readable storage medium | |
CN116010130B (en) | Cross-card link aggregation method, device, equipment and medium for DPU virtual port | |
CN116232884A (en) | Proxy instance management method, device, electronic equipment and storage medium | |
US11683400B1 (en) | Communication protocol for Knative Eventing's Kafka components | |
CN115499432A (en) | Family terminal computing resource management system and computing resource scheduling method | |
CN113472565B (en) | Method, apparatus, device and computer readable medium for expanding server function | |
CN115242754B (en) | Information return method, request response method, message sending method and device | |
CN111314457B (en) | Method and device for setting virtual private cloud | |
WO2022228121A1 (en) | Service providing method and apparatus | |
CN111953804B (en) | Network data transmission method, device, medium and electronic equipment | |
CN117692324A (en) | Information transmission method, apparatus, device, computer readable medium, and program product | |
CN117812159A (en) | Message transmission method, device, equipment and storage medium | |
CN116954822A (en) | Container arranging system and use method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |