CN113765816B - Flow control method, system, equipment and medium based on service grid

Info

Publication number
CN113765816B
Authority
CN
China
Prior art keywords: service, target, network card, physical network, micro
Legal status
Active
Application number
CN202110881328.9A
Other languages
Chinese (zh)
Other versions
CN113765816A (en)
Inventor
叶磊
钟成
贺环宇
庄清惠
Current Assignee
Alibaba Innovation Co
Original Assignee
Alibaba Singapore Holdings Pte Ltd
Application filed by Alibaba Singapore Holdings Pte Ltd filed Critical Alibaba Singapore Holdings Pte Ltd
Priority to CN202110881328.9A priority Critical patent/CN113765816B/en
Publication of CN113765816A publication Critical patent/CN113765816A/en
Application granted granted Critical
Publication of CN113765816B publication Critical patent/CN113765816B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0227 Filtering policies
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the application provides a flow control method, system, device, and medium based on a service grid. In the embodiment of the application, the architecture of the service grid is improved: the processing work of the data plane in the service grid is sunk into the physical network card through tight software-hardware coordination, so that the data-plane processing in the service grid no longer occupies computing resources on the host, and the host can concentrate on the micro-services themselves. In addition, the physical network card has higher forwarding performance, which can effectively improve network throughput and reduce network delay, thereby improving inter-service communication performance under the service grid.

Description

Flow control method, system, equipment and medium based on service grid
Technical Field
The present application relates to the field of cloud network technologies, and in particular, to a service grid-based flow control method, system, device, and medium.
Background
Service grid (service mesh): a dedicated infrastructure layer for handling inter-service communication. It is responsible for reliably delivering requests through the complex service topologies that make up modern cloud-native applications. In practice, the service grid is typically implemented as a set of lightweight network proxies that are deployed alongside the application code, without the application itself needing to be aware of them.
The purpose of the service grid is to manage service traffic. In the cloud-native field, a Sidecar form is generally adopted: a proxy Sidecar is built into each deployed atomic unit (Pod) to take over all service traffic entering and leaving the Pod, which occupies a large amount of memory resources. In addition, traffic control can only be completed by crossing between the kernel and user mode several times, so the processing performance is poor and the communication efficiency between services is poor.
Disclosure of Invention
Aspects of the present application provide a service grid-based flow control method, system, apparatus, and medium for improving inter-service communication performance under a service grid.
An embodiment of the present application provides a flow control system, including: a calling end and a plurality of service ends, wherein the calling end is equipped with a first physical network card and the service ends are equipped with second physical network cards; a traffic forwarding rule corresponding to a specified service is subscribed on the first physical network card and the second physical network card;
the calling end is used for initiating a service calling request to the first physical network card;
the first physical network card is used for determining a target micro-service to which the service call request is directed, and forwarding the service call request, according to the traffic forwarding rule corresponding to the target micro-service, to a second physical network card assembled on a target service end capable of providing the target micro-service;
and the second physical network card is used for initiating a call to the target micro-service on the target service end according to the resource location identifier in the service call request and the traffic forwarding rule corresponding to the target micro-service.
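Purely for illustration, and not as part of the claimed subject matter, the following Go sketch shows one possible in-memory shape for the traffic forwarding rule and the service call request referred to above; all type and field names are hypothetical.

package main

import "fmt"

// TrafficForwardingRule: hypothetical representation of a rule subscribed on a
// physical network card for one micro-service.
type TrafficForwardingRule struct {
	ServiceName string   // micro-service the rule is subscribed for
	Port        int      // service port
	PodAddrs    []string // addresses of container groups (Pods) able to provide it
}

// ServiceCallRequest: hypothetical representation of the request the calling
// end hands to its first physical network card.
type ServiceCallRequest struct {
	ServiceName string // target micro-service to which the request is directed
	Port        int
	ResourceID  string // resource location identifier, e.g. a URL path
}

func main() {
	rule := TrafficForwardingRule{ServiceName: "micro-service-B", Port: 8080,
		PodAddrs: []string{"10.0.0.12:8080", "10.0.0.13:8080"}}
	req := ServiceCallRequest{ServiceName: "micro-service-B", Port: 8080, ResourceID: "/recommend"}
	fmt.Printf("rule=%+v request=%+v\n", rule, req)
}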
The embodiment of the application also provides a communication terminal, which comprises a memory, a processor and a physical network card, wherein the physical network card is arranged on the communication terminal;
the memory is used for storing one or more computer instructions;
the processor is coupled with the memory and is used for executing the one or more computer instructions to provide the flow forwarding rule corresponding to the at least one micro-service running on the communication end to the physical network card;
the physical network card is used for distributing the input flow which flows to the communication terminal to the destination container group POD on the communication terminal based on the flow forwarding rule; based on the flow forwarding rule, forwarding the output flow sent by the communication terminal to a destination container group (POD) on the communication terminal or other communication terminals;
wherein the micro-services required for the input traffic or the output traffic are run in the destination container group POD.
The embodiment of the application also provides a flow control method based on the service grid, which is suitable for a communication end in the service grid, wherein the communication end is provided with a physical network card, the physical network card comprises a flow forwarding rule corresponding to at least one micro-service running on the communication end, and the method comprises the following steps:
Under the condition of receiving input flow, distributing the input flow to a target container group POD on the local communication end by utilizing the physical network card;
under the condition of sending out output flow, forwarding the output flow to a destination container group POD on the communication terminal or other communication terminals by using the physical network card;
wherein the micro-services required for the input traffic or the output traffic are run in the destination container group POD.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the foregoing service grid-based flow control method.
In the embodiment of the application, the architecture of the service grid is improved: the processing work of the data plane in the service grid is sunk into the physical network card through tight software-hardware coordination, so that the data-plane processing no longer occupies computing resources on the host, and the host can concentrate on the micro-services themselves; in addition, the physical network card has higher forwarding performance, which can effectively improve network throughput and reduce network delay, thereby improving inter-service communication performance under the service grid.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic diagram of a flow control system according to an exemplary embodiment of the present application;
FIG. 2 is a logical schematic diagram of an exemplary implementation of sinking processing functions of a data plane in a service grid into a physical network card;
FIG. 3 is a logical schematic diagram of another exemplary implementation of sinking processing functions of a data plane in a service grid into a physical network card;
FIG. 4 is a logical schematic of an exemplary implementation of sinking rule subscription functionality in a service grid into a physical network card;
FIG. 5 is a logical schematic of another exemplary implementation of sinking rule subscription functionality in a service grid into a physical network card;
fig. 6 is a schematic structural diagram of a communication terminal according to another exemplary embodiment of the present application;
fig. 7 is a flowchart of a flow control method based on a service grid according to another exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
At present, under the service grid, traffic control has to cross between kernel and user mode repeatedly, which occupies resources and yields poor processing performance. To this end, in some embodiments of the application: the architecture of the service grid is improved, and the processing work of the data plane in the service grid is sunk into the physical network card through tight software-hardware coordination, so that the data-plane processing no longer occupies computing resources on the host, and the host can concentrate on the micro-services themselves; in addition, the physical network card has higher forwarding performance, which can effectively improve network throughput and reduce network delay, thereby improving inter-service communication performance under the service grid.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a flow control system according to an exemplary embodiment of the present application. As shown in fig. 1, the system includes: call end 10 and several service ends 20. The calling terminal 10 and the service terminal 20 may be nodes in a cloud network, a cloud server, etc., and the physical implementation forms of the calling terminal 10 and the service terminal 20 are not limited in this embodiment.
The flow control scheme provided by this embodiment can be applied to scenarios where a service grid is used to manage traffic for micro-services. Micro-services are an architectural approach to building applications; unlike the more traditional monolithic approach, a micro-service architecture splits the application into a plurality of core functions. Each function is called a service and can be built and deployed individually, which means the services do not affect one another when working (or failing), and the services can communicate and cooperate with one another to provide the user with the desired content. A service grid is a dedicated infrastructure layer for handling inter-service communication. It is responsible for reliably delivering requests through the complex service topologies that make up modern cloud-native applications. In practice, the service grid is typically implemented as a set of lightweight network proxies that are deployed alongside the application code, without the application itself needing to be aware of them.
The flow control scheme provided by this embodiment improves the traditional service grid: it proposes sinking the processing work of the data plane in the service grid into the physical network card on the host. In fig. 1, the configuration of the flow control system is shown by taking a single inter-service communication as an example. It should be appreciated that in practice there are many more communication ends under the service grid.
Referring to fig. 1, both the calling end 10 and the service end 20 are equipped with physical network cards; for convenience of distinction, the physical network card equipped on the calling end 10 is described as a first physical network card 30 and the physical network card on the service end 20 as a second physical network card 40. In this embodiment, the physical network card may be an intelligent network card with an independent processor that supports programmable customization. The micro-services may run in container groups, which may be deployed on a host, i.e., a communication end in this embodiment (communication end is used herein as a generic term for the calling end and the service end). In terms of deployment, a communication end can comprise a plurality of container groups, the container groups share a physical network card, and a plurality of micro-services can run in one container group.
Based on this, in this embodiment, a processing program for the data plane of the service grid may be written in advance and loaded into the physical network card, so that the physical network card has the processing capability of the data plane in the service grid. In this embodiment, the data-plane processing program may be written with reference to the relevant logic of the sidecar agent in a traditional service grid, which is not described in detail here. The processing functions of the sidecar agent on the data plane in a traditional service grid can include, but are not limited to, traffic interception, layer-4 to layer-7 network packet parsing, and route forwarding, and these processing functions can be sunk by writing the related processing programs into the physical network card.
After the physical network card is given the processing capability of the data plane in the service grid, referring to fig. 1, the calling end 10 may initiate a service call request to the first physical network card 30. Here, all service call requests initiated by the calling end 10 flow to the first physical network card 30, and the physical network card may perform traffic filtering on the service call requests, that is, determine which service call requests need flow control in the sidecar mode. Compared with the traditional service grid, the calling end 10 does not need to adopt software traffic-steering mechanisms such as iptables for traffic filtering, so the resource consumption of the host in this respect can be effectively reduced.
The first physical network card 30 may obtain, in advance, the traffic forwarding rule and service registration information of each micro-service in the service grid; based on this, the first physical network card 30 may determine the target micro-service to which the service call request is directed. For example, the first physical network card 30 may parse the name and port of the micro-service to be called from the service call request. The first physical network card 30 may then forward the service call request, according to the traffic forwarding rule corresponding to the target micro-service, to the second physical network card 40 assembled on the target server 20 that can provide the target micro-service. The traffic forwarding rule of the target micro-service may include the address of at least one container group capable of providing the target micro-service; the first physical network card 30 may determine, through policies such as load balancing, a target container group for the current service call request and set the access address of the target container group in the service call request, so that the first physical network card 30 can forward the service call request to the second physical network card 40 assembled on the target server 20 where the target container group is located.
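As an illustration of the selection step just described, the sketch below assumes a simple round-robin load-balancing policy over the Pod addresses in a subscribed traffic forwarding rule; the types and the policy are hypothetical, not the claimed implementation.

package main

import (
	"fmt"
	"sync/atomic"
)

// forwardingRule: the Pods able to provide one micro-service (hypothetical).
type forwardingRule struct {
	podAddrs []string
	next     uint64 // round-robin cursor; stands in for a load-balancing policy
}

// pickTargetPod models how the first physical network card could choose a
// target container group for the current service call request.
func (r *forwardingRule) pickTargetPod() (string, error) {
	if len(r.podAddrs) == 0 {
		return "", fmt.Errorf("no Pod registered for this micro-service")
	}
	i := atomic.AddUint64(&r.next, 1)
	return r.podAddrs[int(i%uint64(len(r.podAddrs)))], nil
}

func main() {
	rule := &forwardingRule{podAddrs: []string{"10.0.0.12:8080", "10.0.0.13:8080"}}
	for k := 0; k < 3; k++ {
		addr, _ := rule.pickTargetPod()
		// The access address in the service call request would be set to addr before
		// forwarding to the second physical network card on the server hosting that Pod.
		fmt.Println("forward service call request to", addr)
	}
}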
In this embodiment, the communication process between the first physical network card 30 and the second physical network card 40 is similar to the communication process between sidecar agents in a traditional service grid: the access address in the service call request can be routed onward in a manner consistent with the communication between sidecar agents, so that the service call request arrives at the second physical network card 40 on the target server 20.
For the second physical network card 40, a call may be initiated to the target micro-service on the target server 20 according to the resource location identifier in the service call request and the traffic forwarding rule corresponding to the target micro-service. Traffic under the service grid is typically layer-4 to layer-7 traffic; for example, in this embodiment, the service call request may use a layer-4 to layer-7 network protocol. In addition, as mentioned above, there are multiple micro-services on the target service end 20, and the calling end 10 identifies the micro-service it expects to call by means of the resource location identifier in the service call request, so that the second physical network card 40 can determine the target micro-service from the resource location identifier and continue to route the service call request according to the traffic forwarding rule corresponding to the target micro-service, thereby initiating the call to the target micro-service inside the target service end.
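The routing performed by the second physical network card can be pictured with the following sketch, which matches a resource location identifier (here treated as a URL path) against hypothetical local routes; it is illustrative only.

package main

import (
	"fmt"
	"strings"
)

// localRoute maps a resource location identifier prefix to the in-Pod address
// of a micro-service on this service end (hypothetical layout).
type localRoute struct {
	pathPrefix string
	podAddr    string
}

// routeIncoming models how the second physical network card could use the
// resource location identifier in a layer-4 to layer-7 request to pick the
// target micro-service inside the target service end.
func routeIncoming(resourceID string, routes []localRoute) (string, error) {
	for _, rt := range routes {
		if strings.HasPrefix(resourceID, rt.pathPrefix) {
			return rt.podAddr, nil
		}
	}
	return "", fmt.Errorf("no local micro-service matches %q", resourceID)
}

func main() {
	routes := []localRoute{
		{pathPrefix: "/recommend", podAddr: "pod-b:8080"}, // micro-service B
		{pathPrefix: "/search", podAddr: "pod-a:8080"},    // micro-service A
	}
	addr, err := routeIncoming("/recommend/items?user=42", routes)
	fmt.Println(addr, err)
}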
The target micro-service may respond to the current service call request and may generate service response data. In this regard, the second physical network card 40 may return the service response data generated by the target micro-service to the initiator micro-service that initiated the service call request in the calling end 10. The traffic forwarding rules corresponding to the target micro-service include forwarding rules for input traffic and output traffic. In the above-mentioned process of calling the target service, the first physical network card 30 forwards the service call request according to the forwarding rule for output traffic, and the second physical network card 40 forwards the service call request according to the forwarding rule for input traffic. In the response return process of the target service, the second physical network card 40 may process the service response data according to the forwarding rule for output traffic, and the first physical network card 30 may forward the service response data according to the forwarding rule for input traffic, so as to ensure that the service response data reaches the initiator micro-service in the calling end 10. The response return process is symmetrical to the service call process and will not be described in detail here.
In summary, in this embodiment, the architecture of the service grid is improved: the processing work of the data plane in the service grid is sunk into the physical network card through tight software-hardware coordination, so that the data-plane processing no longer occupies computing resources on the host, and the host can concentrate on the micro-services themselves; in addition, the physical network card has higher forwarding performance, which can effectively improve network throughput and reduce network delay, thereby improving inter-service communication performance under the service grid.
In the above or below embodiments, the sinking of the relevant functions of the sidecar agent may be performed in a variety of implementations.
In one implementation, only the processing functions of the data plane in the service grid may be sunk into the physical network card.
In this implementation, fig. 2 is a logic diagram of an exemplary implementation of sinking the processing functions of the data plane in the service grid into the physical network card. Referring to fig. 2, this implementation may be as follows: a plurality of container groups may be deployed on the target server 20, the target micro-service may run in a target container group among the plurality of container groups, and a flow control component may be configured on the second physical network card 40 for each container group on the target server 20. The flow control component may be used to perform the processing tasks of the data plane in the service grid, including but not limited to traffic interception, layer-4 to layer-7 network packet parsing, and route forwarding.
In this exemplary implementation, the second physical network card 40 may forward the service call request to the target flow control component corresponding to the target container group when receiving the service call request, and the target flow control component may initiate the call to the target micro-service on the target server 20 according to the resource location identifier in the service call request and the traffic forwarding rule corresponding to the target service. Accordingly, in this exemplary implementation, a flow control component is configured in the second physical network card 40 for each container group on the target server 20 in a 1:1 correspondence, which is consistent with the deployment of sidecar agents in a traditional service grid. In this way, the functional changes required when sinking the processing functions of the data plane in the service grid are reduced, and the workload of the sinking process is reduced.
In this implementation, fig. 3 is a logic diagram of another exemplary implementation of sinking a processing function of a data plane in a service grid into a physical network card, and referring to fig. 3, this implementation may be: a plurality of container groups are deployed on the target service end 20, the target micro service may run on a target container group of the plurality of container groups, and a flow control component common to the plurality of container groups may be configured on the second physical network card 40.
In this exemplary implementation, the second physical network card 40 may forward the service invocation request onto the common flow control component upon receipt of the service invocation request; the shared flow control component may perform an operation of initiating a call to a target micro-service on the target server 20 according to the resource location identifier in the service call request and the flow forwarding rule corresponding to the target service. Accordingly, in this exemplary implementation, only one common flow control component needs to be configured in the second physical network card 40, which has a lower requirement on the processing capability of the physical network card, so that the hardware cost of the physical network card can be effectively saved.
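The two deployment shapes described above (one flow control component per container group as in fig. 2, or one shared component as in fig. 3) can be sketched as a simple dispatch decision; the interface and names below are hypothetical and only illustrate the choice the second physical network card makes.

package main

import "fmt"

// flowController abstracts the flow control component that performs traffic
// interception, layer-4 to layer-7 parsing and routing (hypothetical interface).
type flowController interface {
	Handle(podID, request string)
}

type perPodController struct{ podID string }

func (c perPodController) Handle(_, request string) {
	fmt.Printf("per-Pod controller for %s handles %q\n", c.podID, request)
}

type sharedController struct{}

func (sharedController) Handle(podID, request string) {
	fmt.Printf("shared controller handles %q for Pod %s\n", request, podID)
}

// dispatch models the second physical network card handing a received service
// call request to the flow control component responsible for the target Pod.
func dispatch(perPod map[string]flowController, shared flowController, podID, req string) {
	if c, ok := perPod[podID]; ok { // 1:1 deployment (fig. 2 style)
		c.Handle(podID, req)
		return
	}
	shared.Handle(podID, req) // shared deployment (fig. 3 style)
}

func main() {
	perPod := map[string]flowController{"pod-b": perPodController{podID: "pod-b"}}
	dispatch(perPod, sharedController{}, "pod-b", "GET /recommend")
	dispatch(perPod, sharedController{}, "pod-c", "GET /search")
}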
In addition, the foregoing description takes the second physical network card 40 as an example; it should be understood that the processing work of the related data plane may be sunk into the first physical network card 30 in the calling end 10 in the same manner.
In summary, in this implementation manner, the processing work of the data plane in the service grid may be sunk into the physical network card, and the processing work of the data plane such as flow interception, 4-7 layer network packet parsing, and routing forwarding is performed by one or more flow control components configured in the physical network card, which may effectively reduce the processing pressure of the communication end.
In addition to the processing work of the data plane, there is other work in the service grid that needs to be performed, such as rule subscription work.
In another implementation, the rule subscription job may also be sunk into the physical network card. In this implementation, fig. 4 is a logic diagram of an exemplary implementation of sinking rule subscription functions in a service grid into a physical network card, and referring to fig. 4, this implementation may be: configuring a rule subscription component on the second physical network card 40 for each container group on the target server 20; the target rule subscription component corresponding to the target container group can be used for acquiring the traffic forwarding rule subscribed for the target micro-service.
Based on this, in the case where a common flow control component is configured on the second physical network card 40, the target rule subscription component may provide the flow forwarding rule subscribed for the target micro service to the common flow control component. In this case, rule subscription components are configured for each container group on the communication end in the physical network card, each rule subscription component performs its own role, and the shared flow control component can acquire the flow forwarding rule from the corresponding rule subscription component as required, so as to support the shared flow control component to perform flow control on a plurality of micro services on the communication end, and the interaction efficiency under this condition is higher.
In the case that the second physical network card 40 is configured with a flow control component corresponding to each container group, the target rule subscription component may provide the traffic forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located. In this case, the deployment structure of the sidecar agent in the traditional service grid is inherited, that is, the sidecar agent in the traditional service grid is completely sunk into the physical network card in a 1:1 manner; in the physical network card, a rule subscription component and a flow control component are configured for each container group on the communication end. This implementation has better ecological compatibility, and the functions need only minor modification during the sinking process.
Fig. 5 is a logic diagram of another exemplary implementation of sinking rule subscription functionality in a service grid into a physical network card, referring to fig. 5, the implementation may be: configuring a rule subscription component shared by a plurality of container groups on the second physical network card 40; the rule subscription component may obtain traffic forwarding rules for each micro-service subscription on the target server 20.
Based on this, in the case where a common flow control component is configured on the second physical network card 40, the rule subscription component may provide the flow forwarding rule subscribed for each micro service on the target server 20 to the common flow control component. Under the condition, each micro-service on the communication end shares the rule subscription component and the flow control component, the shared rule subscription component and the flow control component can cooperate with each other to support the flow control of a plurality of micro-services on the communication end, the requirement on the hardware of the physical network card is lower, and the hardware cost of the physical network card can be saved.
In the case where the second physical network card 40 is configured with a flow control component corresponding to each container group, the rule subscription component may provide the flow forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located. In this case, a rule subscription component shared on the physical network card may provide forwarding rules required for the plurality of flow control components to support the plurality of flow control components to perform the processing of the data plane.
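As a rough sketch of the rule subscription variants just described, the following hypothetical code models a rule subscription component on the network card pushing subscribed traffic forwarding rules to whichever flow control component(s) have registered for them; names and mechanics are illustrative only.

package main

import "fmt"

// rule is a simplified traffic forwarding rule (hypothetical).
type rule struct {
	service  string
	podAddrs []string
}

// ruleSubscriber models a rule subscription component sunk into the physical
// network card; it receives rules subscribed from the control plane and pushes
// them to the flow control component(s) that need them.
type ruleSubscriber struct {
	subscribers []chan rule // flow control components listening for updates
}

func (s *ruleSubscriber) register(c chan rule) { s.subscribers = append(s.subscribers, c) }

func (s *ruleSubscriber) onRuleUpdate(r rule) {
	for _, c := range s.subscribers {
		c <- r // deliver the subscribed traffic forwarding rule
	}
}

func main() {
	sub := &ruleSubscriber{}
	flowCtl := make(chan rule, 1) // stands in for a shared or per-Pod flow control component
	sub.register(flowCtl)

	// A rule subscribed for micro-service B arrives from the control plane.
	sub.onRuleUpdate(rule{service: "micro-service-B", podAddrs: []string{"10.0.0.12:8080"}})
	fmt.Printf("flow control component received rule: %+v\n", <-flowCtl)
}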
In addition, the foregoing describes the sinking process of the rule subscription work by taking the second physical network card 40 as an example, and it should be understood that the same implementation manner may be adopted to sink the relevant rule subscription work into the first physical network card 30 in the calling end 10.
In summary, in this implementation manner, the rule subscription in the service grid may be sunk into the physical network card, and the rule subscription is performed by one or more rule subscription components configured in the physical network card, which may effectively reduce the processing pressure of the communication end.
In yet another implementation, the rule subscription work may be maintained in the communication end. In this implementation, an exemplary implementation may be: a rule subscription component is configured on the target server 20 for each container group, respectively. Based on the above, the target rule subscription component corresponding to the target container group can be used for acquiring the traffic forwarding rule subscribed for the target micro-service; the traffic forwarding rule is provided to the second physical network card 40.
In the case where a common flow control component is configured on the second physical network card 40, the target rule subscription component on the target server 20 may provide the flow forwarding rule subscribed for the target micro service to the common flow control component on the second physical network card 40. In this case, the communication end configures a rule subscription component for each container group, where each rule subscription component performs its own role, and the shared flow control component on the second physical network card 40 may obtain the flow forwarding rule from the corresponding rule subscription component as required, so as to support the shared flow control component to perform flow control on multiple micro services on the communication end.
In the case that the second physical network card 40 is configured with a flow control component corresponding to each container group, the target rule subscription component on the target server 20 may provide the traffic forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located on the second physical network card 40. In this case, the rule subscription component and the flow control component of each container group are configured one-to-one. This implementation has better ecological compatibility and requires fewer changes to the rule subscription component in the communication end.
Another exemplary implementation may be: a rule subscription component common to multiple container groups is configured on the target server 20. Based on this, the rule subscription component may obtain the traffic forwarding rule subscribed for each micro service on the target server 20, and provide the traffic forwarding rule to the second physical network card 40.
In the case where a common flow control component is configured on the second physical network card 40, the common rule subscription component on the target server 20 may provide the flow forwarding rules subscribed for each micro service on the target server 20 to the common flow control component on the second physical network card 40. In this case, the rule subscription component shared by the micro services on the communication end and the shared flow control component on the physical network card may cooperate with each other.
In the case where the second physical network card 40 is configured with a flow control component corresponding to each container group, the common rule subscription component on the target server 20 may provide the flow forwarding rule subscribed for the target micro service to the target flow control component corresponding to the target container group where the target micro service is located on the second physical network card 40. In this case, the common rule subscription component on the communication end can provide the required forwarding rules for the multiple flow control components on the physical network card, so as to support the multiple flow control components on the physical network card to execute the processing work of the data plane.
In addition, the foregoing description has been made with respect to the reservation scheme of the rule subscription work by taking the second physical network card 40 as an example, and it should be understood that the same implementation manner may be adopted to reserve the related rule subscription work in the calling end 10.
This implementation involves communication between the rule subscription component on the communication end and the flow control component on the physical network card. To ensure data security, a secure channel can be established between the rule subscription component on the communication end and the flow control component on the physical network card, over which the traffic forwarding rules can be transmitted securely, for example by means of encryption.
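The embodiment only requires some secure channel; as one possibility, the rule payload could be sealed with an authenticated cipher before being pushed from the on-host rule subscription component to the flow control component on the network card. The sketch below uses AES-GCM purely as an illustrative stand-in for such a scheme; the key handling and message format are hypothetical.

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealRule encrypts a serialized traffic forwarding rule before it crosses the
// host-to-network-card channel (illustrative only; any secure channel would do).
func sealRule(key, plaintext []byte) (nonce, ciphertext []byte, err error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return nonce, gcm.Seal(nil, nonce, plaintext, nil), nil
}

// openRule decrypts the rule on the network-card side.
func openRule(key, nonce, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32) // shared key agreed when the secure channel is set up (hypothetical)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	nonce, sealed, err := sealRule(key, []byte(`{"service":"micro-service-B","pods":["10.0.0.12:8080"]}`))
	if err != nil {
		panic(err)
	}
	ruleJSON, err := openRule(key, nonce, sealed)
	if err != nil {
		panic(err)
	}
	fmt.Println("flow control component received rule:", string(ruleJSON))
}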
In summary, in this implementation, the rule subscription in the service grid may be retained in the communication end, and may cooperate with the flow control component in the physical network card to implement flow control.
The flow control scheme provided in this embodiment is exemplarily described below using a shopping application as an example.
For example, the shopping application may be split into multiple core functions, each of which may be deployed as a micro-service, such as a micro-service A corresponding to a search function and a micro-service B corresponding to a product recommendation function. Communication may need to occur between micro-services A and B while a user is using the shopping application.
First, micro-service A and micro-service B can register with the control plane of the service grid, so that the control plane records service information such as the address and port of the physical network card assembled on the communication end where each micro-service is located, the service name, and the traffic forwarding rule. Based on this, micro-services A and B can subscribe to each other's service information.
Take micro-service A and micro-service B being HTTP services as an example.
Micro-service A can act as the request initiator and initiate a call request to micro-service B by entering a URL; the call request flows to the physical network card A' assembled on the communication end A where micro-service A is located.
After receiving the call request, the physical network card A' can parse information such as the called service name and port, and can forward the call request to the physical network card B' assembled on the communication end B where micro-service B is located according to the subscribed traffic forwarding rule of micro-service B.
After the physical network card B' receives the call request, a call can be initiated to micro-service B in communication end B, either in a transparent transmission mode or according to the URL and the traffic forwarding rule of micro-service B.
Micro-service B may respond to the call request and return service response data to micro-service A along the original path, thereby enabling communication between micro-service A and micro-service B.
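The call/response path in this shopping example can be mimicked with ordinary HTTP code. In the sketch below, a local test server stands in for micro-service B and a plain HTTP client stands in for micro-service A; it only illustrates the request and the returned service response data, not the network-card offload itself, and all endpoints are hypothetical.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

func main() {
	// Stand-in for micro-service B (product recommendation); in the embodiment it
	// would run in a Pod behind physical network card B'.
	serviceB := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// B' would have routed here using the URL (resource location identifier)
		// and the traffic forwarding rule subscribed for micro-service B.
		fmt.Fprintf(w, "recommendations for user %s", r.URL.Query().Get("user"))
	}))
	defer serviceB.Close()

	// Micro-service A (search) initiates the call by URL; in the embodiment the
	// request would first flow to physical network card A', which parses the called
	// service name/port and forwards it towards B'. Here we call the server directly.
	resp, err := http.Get(serviceB.URL + "/recommend?user=42")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("service response data returned to micro-service A:", string(body))
}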
Fig. 6 is a schematic structural diagram of a communication terminal according to another exemplary embodiment of the present application. As shown in fig. 6, the communication terminal may include a memory 60 and a processor 61, and is also equipped with a physical network card 62.
A processor 61 coupled to the memory 60 for executing a computer program in the memory 60 for providing the physical network card with traffic forwarding rules corresponding to at least one micro-service running on the communication side;
the physical network card is used for distributing the input flow flowing to the communication terminal to the target container group POD on the communication terminal based on the flow forwarding rule; based on a flow forwarding rule, forwarding output flow sent by a communication terminal to a destination container group (POD) on the communication terminal or other communication terminals;
wherein the micro-services required for incoming traffic or outgoing traffic are run in the destination container group POD.
The communication terminal provided in this embodiment may provide the flow forwarding rule corresponding to the micro service to the physical network card, based on which the flow control job may be sunk into the physical network card, and the physical network card controls the input flow and the output flow. It should be noted that, communication may be performed between micro services running on the communication end, so that in this case, two micro services in communication may share the same physical network card, that is, the physical network card may forward the output traffic initiated by the communication end to the corresponding destination container group POD on the communication end.
In an alternative embodiment, the physical network card 62 may be used to, in forwarding the output traffic sent by the communication end to the destination container group POD on the other communication end:
acquiring a first output flow, wherein the first output flow comprises a first service call request;
determining a target micro-service pointed by the first service call request;
and forwarding the first service call request to a third physical network card assembled on a target communication end capable of providing the target micro service according to the flow forwarding rule corresponding to the target micro service, so that the third physical network card initiates call to the target micro service in the target container group POD on the target communication end according to the resource positioning identifier in the first service call request and the flow forwarding rule corresponding to the target micro service.
The above procedure is the step executed when the communication end where the physical network card is located is used as the calling end in the foregoing system embodiment.
In addition, the communication end where the physical network card is located may also be used as the service end in the foregoing system embodiment, where in this case, the physical network card is used to, in a process of distributing the input traffic flowing to the communication end to the destination container group POD on the communication end:
receiving a first input flow, wherein the first input flow comprises a second service call request forwarded by a fourth physical network card;
And according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the requested target micro-service, calling the target micro-service in the target container group POD on the communication terminal.
In response to this, in an alternative embodiment, the communication components in the physical network card 62 may include respective flow control components for each container group on the communication end where it is located, based on which the processor 61 may be specifically configured to:
forwarding the second service call request to a target flow control component corresponding to a target container group where the target micro-service is located; and calling the target micro-service by utilizing the target flow control component according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the target service.
In an alternative embodiment, the communication components in the physical network card 62 may include a flow control component that is common to multiple container groups on the communication end where it is located, based on which the processor 61 may be specifically configured to:
forwarding the second service invocation request to the shared flow control component; and calling the target micro-service according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the target service by utilizing the shared flow control component.
In an alternative embodiment, the rule subscription component may be configured separately for each container group on the communication side, based on which the processor 61 may be specifically configured to:
and acquiring the traffic forwarding rule subscribed for the target micro-service from a target rule subscription component corresponding to the target container group where the target micro-service is located on the communication terminal.
In an alternative embodiment, a common rule subscription component may be configured for multiple container groups on the communication side, based on which the processor 61 may be specifically configured to:
and acquiring the traffic forwarding rule subscribed for the target micro-service from a common rule subscription component on the communication terminal.
In an alternative embodiment, the physical network card 62 may be deployed with a rule subscription component corresponding to each container group on the communication end, and based on this, the processor 61 may be specifically configured to:
acquiring a traffic forwarding rule subscribed for the target micro-service by using a target rule subscription component corresponding to a target container group in which the target micro-service is located; under the condition that a shared flow control component is configured on the physical network card, providing the flow forwarding rule subscribed for the target micro-service to the shared flow control component; under the condition that the flow control assembly corresponding to each container group is configured on the physical network card, the flow forwarding rule subscribed for the target micro-service is provided for the target flow control assembly corresponding to the target container group where the target micro-service is located.
In an alternative embodiment, the physical network card 62 may be deployed with a rule subscription component that is common to multiple container groups on the communication end where it is located, based on which the processor 61 may be specifically configured to:
acquiring a traffic forwarding rule subscribed for each micro service on a communication end by using the shared rule subscription component;
under the condition that a shared flow control component is configured on the physical network card, providing the flow forwarding rule subscribed for each micro-service to the shared flow control component;
under the condition that the flow control assembly corresponding to each container group is configured on the physical network card, the flow forwarding rule subscribed for the target micro-service is provided for the target flow control assembly corresponding to the target container group where the target micro-service is located.
In an alternative embodiment, the service invocation request may employ a seven-layer network protocol.
In an alternative embodiment, the processor 61 may be further configured to return service response data generated by the target micro-service to the initiator micro-service that initiated the second service call request via the fourth physical network card.
Further, as shown in fig. 6, the communication terminal further includes: power supply assembly 63, and the like. Only some of the components are schematically shown in fig. 6, which does not mean that the communication terminal only comprises the components shown in fig. 6.
It should be noted that, for the technical details of the embodiments of the communication end, reference may be made to the related descriptions of the calling end and the service end in the foregoing system embodiments, which are not repeated herein for the sake of brevity, but should not cause a loss of the protection scope of the present application.
Fig. 7 is a flow chart of a service grid-based flow control method according to another exemplary embodiment of the present application, which may be performed by a flow control device, which may be implemented as a combination of software and/or hardware, and which may be integrated in a communication terminal. Referring to fig. 7, a communication end is equipped with a first physical network card, where the first physical network card includes a traffic forwarding rule corresponding to at least one micro service running on the communication end, and the method may include:
step 700, under the condition that the input flow is received, distributing the input flow to a target container group POD on the local communication terminal by using a first physical network card;
step 701, under the condition of sending out output flow, forwarding the output flow to a destination container group (POD) on the communication terminal or other communication terminals by using a first physical network card;
wherein the micro-services required for incoming traffic or outgoing traffic are run in the destination container group POD.
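A minimal sketch of the dispatch performed in steps 700 and 701 is given below; the data structures are hypothetical stand-ins for the traffic forwarding rules held by the first physical network card, and the sketch only illustrates the input/output decision, not the hardware offload.

package main

import "fmt"

// direction of a piece of traffic as seen by the physical network card.
type direction int

const (
	input  direction = iota // traffic flowing to this communication end (step 700)
	output                  // traffic sent out by this communication end (step 701)
)

type traffic struct {
	dir       direction
	service   string            // micro-service required by the traffic
	localPods map[string]string // Pods on this communication end, by service name
}

// handle mirrors steps 700/701: input traffic is distributed to a destination Pod
// on this communication end, and output traffic is forwarded to a destination Pod
// locally or on another communication end, per the traffic forwarding rules.
func handle(t traffic, remoteRules map[string]string) string {
	if addr, ok := t.localPods[t.service]; ok {
		return "deliver to local Pod " + addr
	}
	if t.dir == output {
		if addr, ok := remoteRules[t.service]; ok {
			return "forward to remote communication end " + addr
		}
	}
	return "drop: no forwarding rule for " + t.service
}

func main() {
	localPods := map[string]string{"micro-service-A": "pod-a:8080"}
	remote := map[string]string{"micro-service-B": "10.0.0.12"}
	fmt.Println(handle(traffic{dir: input, service: "micro-service-A", localPods: localPods}, remote))
	fmt.Println(handle(traffic{dir: output, service: "micro-service-B", localPods: localPods}, remote))
}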
In an alternative embodiment, the step of forwarding the output traffic to the destination container group POD on the other communication end using the first physical network card may include:
acquiring a first output flow, wherein the first output flow comprises a first service call request;
determining a target micro-service pointed by the first service call request;
and forwarding the first service call request to a second physical network card assembled on a target communication end capable of providing the target micro service according to the flow forwarding rule corresponding to the target micro service, so that the second physical network card initiates call to the target micro service in the target container group POD on the target communication end according to the resource positioning identifier in the first service call request and the flow forwarding rule corresponding to the target micro service.
In the flow chart shown in fig. 7, the steps performed when the communication terminal is the calling terminal in the foregoing system embodiment are shown.
In addition, the communication end may also be used as a service end in the foregoing system embodiment, in which case, the step of distributing the input traffic to the destination container group POD on the local communication end by using the first physical network card may include:
receiving a first input flow, wherein the first input flow comprises a second service call request initiated by a third physical network card;
And according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the requested target micro-service, calling the target micro-service in the target container group POD on the communication terminal.
With this in mind, in an alternative embodiment, the first physical network card may be deployed with a flow control component corresponding to each container group on the communication end, where the method may specifically include:
forwarding the second service call request to a target flow control component corresponding to a target container group where the target micro-service is located; and calling the target micro-service by utilizing the target flow control component according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the target service.
In an alternative embodiment, the first physical network card may be deployed with a flow control component that is common to a plurality of container groups on a communication end where the first physical network card is located, and based on this, the method may specifically include:
forwarding the second service invocation request to the shared flow control component; and calling the target micro-service according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the target service by utilizing the shared flow control component.
In an alternative embodiment, a rule subscription component may be configured on the communication end for each container group, and based on this, the method may specifically include:
and acquiring the traffic forwarding rule subscribed for the target micro-service from a target rule subscription component corresponding to the target container group where the target micro-service is located on the communication terminal.
In an alternative embodiment, a common rule subscription component may be configured for a plurality of container groups on the communication end, based on which the method may specifically include:
and acquiring the traffic forwarding rule subscribed for the target micro-service from a common rule subscription component on the communication terminal.
In an alternative embodiment, a rule subscription component corresponding to each container group on the communication end of the first physical network card may be deployed in the first physical network card, and based on this, the method may specifically include:
acquiring a traffic forwarding rule subscribed for the target micro-service by using a target rule subscription component corresponding to a target container group in which the target micro-service is located; under the condition that a shared flow control component is configured on the physical network card, providing the flow forwarding rule subscribed for the target micro-service to the shared flow control component; under the condition that the flow control assembly corresponding to each container group is configured on the physical network card, the flow forwarding rule subscribed for the target micro-service is provided for the target flow control assembly corresponding to the target container group where the target micro-service is located.
In an alternative embodiment, a rule subscription component shared by a plurality of container groups on a communication end where the rule subscription component is located can be deployed in the first physical network card, and based on this, the method can specifically include:
acquiring a traffic forwarding rule subscribed for each micro service on a communication end by using the shared rule subscription component;
under the condition that a shared flow control component is configured on the physical network card, providing the flow forwarding rule subscribed for each micro-service to the shared flow control component;
under the condition that the flow control assembly corresponding to each container group is configured on the physical network card, the flow forwarding rule subscribed for the target micro-service is provided for the target flow control assembly corresponding to the target container group where the target micro-service is located.
In an alternative embodiment, the service invocation request may employ a seven-layer network protocol.
In an alternative embodiment, the method may further include returning, by the first physical network card, service response data generated by the target service to an initiator micro-service that initiates the second service invocation request in the third physical network card.
It should be noted that the execution subjects of the steps of the method provided in the above embodiment may be the same device, or the method may be executed by different devices. For example, the execution subject of steps 700 and 701 may be device A; alternatively, the execution subject of step 700 may be device A and the execution subject of step 701 may be device B; and so on.
In addition, some of the flows described in the above embodiments and drawings include a plurality of operations appearing in a specific order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or in parallel; the sequence numbers of the operations, such as 700 and 701, are merely used to distinguish the various operations, and the sequence numbers themselves do not represent any order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should be noted that the terms "first" and "second" herein are used to distinguish different requests, physical network cards, and the like; they do not denote an order, nor do they require that the "first" and "second" objects be of different types.
Accordingly, the embodiment of the present application also provides a computer readable storage medium storing a computer program, where the computer program when executed can implement each step of the above method embodiment that can be executed by the communication end.
The memory of FIG. 6 described above is used to store a computer program and may be configured to store various other data to support operations on a computing platform. Examples of such data include instructions for any application or method operating on a computing platform, contact data, phonebook data, messages, pictures, videos, and the like. The memory may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The communication component of FIG. 6 is configured to facilitate wired or wireless communication between the device where it is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G/LTE, 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply component shown in FIG. 6 provides power for the various components of the device where it is located. The power supply component may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device where it is located.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A flow control system, comprising a calling end and a plurality of service ends, wherein the calling end is provided with a first physical network card, and each service end is provided with a second physical network card; the first physical network card and the second physical network card are each subscribed to a flow forwarding rule corresponding to a specified service;
The calling end is used for initiating a service call request to the first physical network card;
the first physical network card is used for determining a target micro-service pointed by the service calling request; determining a target container group for responding to the service call request according to the flow forwarding rule corresponding to the target micro-service; forwarding the service call request to a second physical network card assembled on a target server where the target container group is located;
and the second physical network card is used for continuing to route the service call request according to the resource positioning identifier in the service call request and the flow forwarding rule corresponding to the target micro-service, so as to initiate a call to the target micro-service on the target service end, wherein the target micro-service operates on the target container group.
2. The system of claim 1, wherein the target server has a plurality of container groups disposed thereon, the target micro-service operates on a target container group of the plurality of container groups, and the second physical network card has a flow control component configured thereon for each container group on the target server;
the second physical network card is specifically configured to forward the service call request to a target flow control component corresponding to the target container group;
and the target flow control component is used for initiating a call to the target micro-service on the target service end according to the resource positioning identifier in the service call request and the flow forwarding rule corresponding to the target micro-service.
3. The system of claim 1, wherein the target service end is provided with a plurality of container groups, the target micro-service operates on a target container group of the plurality of container groups, and the second physical network card is provided with a flow control component shared by the plurality of container groups;
the second physical network card is used for forwarding the service calling request to the flow control component;
and the flow control component is used for initiating a call to the target micro-service on the target service end according to the resource positioning identifier in the service call request and the flow forwarding rule corresponding to the target micro-service.
4. A system according to any one of claims 2 or 3, wherein a rule subscription component is respectively configured on the target server for each container group;
the target rule subscription component corresponding to the target container group is used for acquiring a flow forwarding rule subscribed for the target micro-service;
and providing the flow forwarding rule to the second physical network card.
5. A system according to any one of claims 2 or 3, wherein the target server is configured with a rule subscription component common to the plurality of container groups;
the rule subscription component is used for acquiring a flow forwarding rule subscribed for each micro-service on the target server;
and providing the flow forwarding rule to the second physical network card.
6. A system according to any one of claims 2 or 3, wherein the second physical network card is configured with a rule subscription component for each container group;
the target rule subscription component corresponding to the target container group is used for acquiring a flow forwarding rule subscribed for the target micro-service;
providing the flow forwarding rule subscribed for the target micro-service to a shared flow control component under the condition that the shared flow control component is configured on the second physical network card;
and under the condition that the second physical network card is provided with flow control components corresponding to each container group, providing the flow forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located.
7. A system according to any one of claims 2 or 3, wherein a rule subscription component common to the plurality of container groups is configured on the second physical network card;
the rule subscription component is used for acquiring a flow forwarding rule subscribed for each micro-service on the target server;
providing the flow forwarding rule subscribed for each micro-service on the target server to the shared flow control component under the condition that the shared flow control component is configured on the second physical network card;
and under the condition that the second physical network card is provided with flow control components corresponding to each container group, providing the flow forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located.
8. The system of claim 1, wherein the service invocation request employs a seven-layer network protocol.
9. The system of claim 1, wherein the second physical network card is further configured to return the service response data generated by the target service to the originating micro-service that initiated the service call request at the calling end.
10. A communication terminal, comprising a memory, a processor, and a physical network card;
the memory is used for storing one or more computer instructions;
the processor is coupled with the memory and is used for executing the one or more computer instructions to provide, to the physical network card, the flow forwarding rule corresponding to the at least one micro-service running on the communication terminal;
the physical network card is configured to: continue to route, based on the flow forwarding rule, a service call request contained in input traffic flowing to the communication terminal, so as to distribute the input traffic flowing to the communication terminal to a destination container group POD on the communication terminal; and determine, based on the flow forwarding rule, a target container group POD for responding to a service call request in output traffic sent by the communication terminal, and forward the output traffic sent by the communication terminal to the target container group POD on the communication terminal or other communication terminals;
wherein the micro-services required for the input traffic or the output traffic are run in the destination container group POD.
11. The communication terminal according to claim 10, wherein the physical network card is configured to, in forwarding the output traffic sent by the communication terminal to the destination container group POD on the other communication terminal:
acquiring first output traffic, wherein the first output traffic comprises a first service call request;
determining a target micro-service pointed by the first service call request;
determining a target container group POD for responding to the first service call request according to the flow forwarding rule corresponding to the target micro-service; and forwarding the first service call request to a third physical network card assembled on a destination communication end where the target container group POD is located, so that the third physical network card continues to route the first service call request according to the resource positioning identifier in the first service call request and the flow forwarding rule corresponding to the target micro-service, to initiate a call to the target micro-service in the destination container group POD on the destination communication end.
12. The communication terminal according to claim 10, wherein the physical network card is configured to, in distributing input traffic flowing to the communication terminal to a destination container group POD on the communication terminal:
receiving first input traffic, wherein the first input traffic comprises a second service call request forwarded by a fourth physical network card;
and continuing to route the second service call request according to the resource positioning identifier in the second service call request and the flow forwarding rule corresponding to the requested target micro-service, so as to initiate a call to the target micro-service in the target container group POD on the communication terminal.
13. A service grid-based flow control method, applicable to a communication end in a service grid, wherein the communication end is equipped with a physical network card, and the physical network card contains a flow forwarding rule corresponding to at least one micro-service running on the communication end; the method comprises:
under the condition that input traffic is received, continuing to route, by the physical network card according to the forwarding rule for the input traffic, a service call request contained in the input traffic, so as to distribute the input traffic to a destination container group (POD) on the communication end;
under the condition that output traffic is sent out, determining, by the physical network card according to the forwarding rule for the output traffic, a target container group POD for responding to a service call request in the output traffic, and forwarding the output traffic to the target container group POD on the communication end or another communication end;
wherein the micro-services required for the input traffic or the output traffic are run in the destination container group POD.
14. A computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the service grid-based flow control method of claim 13.
CN202110881328.9A 2021-08-02 2021-08-02 Flow control method, system, equipment and medium based on service grid Active CN113765816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110881328.9A CN113765816B (en) 2021-08-02 2021-08-02 Flow control method, system, equipment and medium based on service grid

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110881328.9A CN113765816B (en) 2021-08-02 2021-08-02 Flow control method, system, equipment and medium based on service grid

Publications (2)

Publication Number Publication Date
CN113765816A CN113765816A (en) 2021-12-07
CN113765816B true CN113765816B (en) 2023-12-15

Family

ID=78788398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110881328.9A Active CN113765816B (en) 2021-08-02 2021-08-02 Flow control method, system, equipment and medium based on service grid

Country Status (1)

Country Link
CN (1) CN113765816B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579199B (en) * 2022-02-22 2024-04-26 阿里巴巴(中国)有限公司 Method, system and storage medium for expanding agent in service grid
CN114826906B (en) * 2022-04-13 2023-09-22 北京奇艺世纪科技有限公司 Flow control method, device, electronic equipment and storage medium
CN117061338A (en) * 2023-08-16 2023-11-14 中科驭数(北京)科技有限公司 Service grid data processing method, device and system based on multiple network cards


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10230571B2 (en) * 2014-10-30 2019-03-12 Equinix, Inc. Microservice-based application development framework
US11044193B2 (en) * 2019-08-23 2021-06-22 Vmware, Inc. Dynamic multipathing using programmable data plane circuits in hardware forwarding elements

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581042A (en) * 2013-10-30 2014-02-12 华为技术有限公司 Method and device for sending data package
US10007509B1 (en) * 2015-12-08 2018-06-26 Amazon Technologies, Inc. Container handover for device updates
CN105550130A (en) * 2015-12-14 2016-05-04 中电科华云信息技术有限公司 Container based dynamic arrangement method for application environment and system applying method
CN106375131A (en) * 2016-10-20 2017-02-01 浪潮电子信息产业股份有限公司 Uplink load balancing method of virtual network
CN107395781A (en) * 2017-06-29 2017-11-24 北京小度信息科技有限公司 Network communication method and device
US10313495B1 (en) * 2017-07-09 2019-06-04 Barefoot Networks, Inc. Compiler and hardware interactions to remove action dependencies in the data plane of a network forwarding element
CN108494607A (en) * 2018-04-19 2018-09-04 云家园网络技术有限公司 The design method and system of big double layer network framework based on container
CN110858138A (en) * 2018-08-22 2020-03-03 北京航天长峰科技工业集团有限公司 Alarm receiving and processing system based on micro-service technology
CN110149231A (en) * 2019-05-21 2019-08-20 优刻得科技股份有限公司 Update method, apparatus, storage medium and the equipment of virtual switch
CN112398687A (en) * 2020-11-13 2021-02-23 广东省华南技术转移中心有限公司 Configuration method of cloud computing network, cloud computing network system and storage medium
CN112511611A (en) * 2020-11-19 2021-03-16 腾讯科技(深圳)有限公司 Communication method, device and system of node cluster and electronic equipment
CN112910692A (en) * 2021-01-19 2021-06-04 中原银行股份有限公司 Method, system and medium for controlling service grid flow based on micro service gateway
CN113037812A (en) * 2021-02-25 2021-06-25 中国工商银行股份有限公司 Data packet scheduling method and device, electronic equipment, medium and intelligent network card

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Docker container network architecture based on Macvlan; Yang Xin; Wu Zhinan; Qian Songrong; Microcomputer Applications (Issue 05) *

Also Published As

Publication number Publication date
CN113765816A (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN113765816B (en) Flow control method, system, equipment and medium based on service grid
EP3398305B1 (en) Method and architecture for virtualized network service provision
CN109429295B (en) Method for selecting AMF, system and storage medium
JP6622394B2 (en) Managing multiple active subscriber identity module profiles
CN113760452B (en) Container scheduling method, system, equipment and storage medium
KR20120127050A (en) A Method and Apparatus of selecting network connection in M2M communication
CN108370538A (en) A kind of method and device of selection network slice
EP3417655B1 (en) Method and apparatus for selecting network slices and services
CN112491944A (en) Edge application discovery method and device, and edge application service support method and device
CN112533177A (en) Method, device, apparatus and medium for providing and discovering moving edge calculation
CN116530208A (en) Communication method, device and system
US9894700B2 (en) Direct routing of communication sessions for mobile IP communication end points
WO2022022440A1 (en) Network reconnection method, and device, system and storage medium
CN112653716B (en) Service binding method and device
WO2023143574A1 (en) Equipment selection method and apparatus
CN114980359B (en) Data forwarding method, device, equipment, system and storage medium
CN112995311B (en) Service providing method, device and storage medium
US20140341033A1 (en) Transmission management device, system, and method
CN117560726A (en) Method, device and system for managing network resources
CN114258088B (en) Method, device and system for discovering intermediate session management function device, and storage medium
WO2022022842A1 (en) Service request handling
CN112565086A (en) Distributed network system, message forwarding method, device and storage medium
CN102761914A (en) M2M platform, and method and system for processing terminal business data
CN105025468A (en) Method of realizing data transmission management, apparatus and terminal equipment
US20230132096A1 (en) Registration procedure for ensuring service based on a selection of the best available network slice of the same slice type

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40069109

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240311

Address after: #03-06, Lazada One, 51 Bras Basah Road, Singapore

Patentee after: Alibaba Innovation Co.

Country or region after: Singapore

Address before: Room 01, 45th Floor, AXA Building, 8 Shenton Way, Singapore

Patentee before: Alibaba Singapore Holdings Ltd.

Country or region before: Singapore