CN115941602A - Message processing method, system, device and storage medium - Google Patents

Message processing method, system, device and storage medium

Info

Publication number
CN115941602A
Authority
CN
China
Prior art keywords
address
message
target
container group
forwarded
Prior art date
Legal status
Pending
Application number
CN202211262426.5A
Other languages
Chinese (zh)
Inventor
丁雨霞
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd
Priority to CN202211262426.5A
Publication of CN115941602A

Landscapes

  • Data Exchanges In Wide-Area Networks

Abstract

The embodiments of the present application provide a message processing method, system, device and storage medium. The method comprises the following steps: acquiring a message to be forwarded, where the message to be forwarded contains a destination IP address; in a non-kernel environment, querying a first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address; in the non-kernel environment, selecting a target back-end container group IP address from the plurality of back-end container group IP addresses to replace the destination IP address in the message to be forwarded; and forwarding the replaced message to be forwarded. The forwarding scheme provided by the embodiments of the present application moves functions that the prior art implements in the kernel into a non-kernel environment, thereby reducing the burden on the kernel.

Description

Message processing method, system, device and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a message processing method, system, apparatus, and storage medium.
Background
With the rapid development of cloud native technology, container services based on the container cluster management system Kubernetes are more and more widely used. The container network is the cornerstone of cloud native computing: it is an essential basic component of a cloud native platform and one of the greatest challenges in building a cloud native container platform.
In current container network solutions, the data forwarding capability on the data plane depends on the kube-proxy component provided by the container cluster management system.
However, kube-proxy relies on the in-kernel firewall tool iptables or the IP Virtual Server (IPVS) kernel module, which makes existing container network solutions ill-suited to large-scale traffic scenarios.
Disclosure of Invention
In view of the above, the present application is proposed to provide a message processing method, system, device and storage medium that solve, or at least partially solve, the above problems.
Thus, in an embodiment of the present application, a message processing method is provided, including:
acquiring a message to be forwarded; the message to be forwarded contains a destination IP address;
in a non-kernel environment, querying a first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address;
in the non-kernel environment, selecting a target back-end container group IP address from the plurality of back-end container group IP addresses to replace the destination IP address in the message to be forwarded;
and forwarding the replaced message to be forwarded.
In another embodiment of the present application, a message processing system is provided, wherein the system includes: a plurality of nodes; at least one container group is arranged on the node;
the node is configured to:
acquiring a message to be forwarded; the message to be forwarded contains a destination IP address;
in a non-kernel environment, querying a first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address;
in the non-kernel environment, selecting a target back-end container group IP address from the plurality of back-end container group IP addresses to replace the destination IP address in the message to be forwarded;
and forwarding the replaced message to be forwarded.
In another embodiment of the present application, a message processing apparatus is provided. The message processing device comprises: a memory and a processor, wherein,
the memory is used for storing programs;
the processor is coupled to the memory, and configured to execute the program stored in the memory, so as to implement any of the message processing methods described above.
In still another embodiment of the present application, there is provided a computer-readable storage medium storing a computer program, which when executed by a computer, can implement the message processing method described in any one of the above.
In the technical solution provided by the embodiments of the present application, in a non-kernel environment, a plurality of back-end container group IP addresses matched with the destination IP address in the message to be forwarded are looked up, and one back-end container group IP address is selected from them to replace the destination IP address in the message to be forwarded. That is, the load balancing function that the prior art implements in the kernel is moved into the non-kernel environment, which reduces the burden on the kernel. In large-scale traffic scenarios, message forwarding performance can thereby be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a block diagram of a message processing system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a message processing method according to an embodiment of the present application;
fig. 3 is a block diagram of a message processing system according to an embodiment of the present application;
fig. 4 is an exemplary diagram of a message forwarding pipeline according to an embodiment of the present application;
fig. 5 is a first schematic diagram illustrating message processing according to an embodiment of the present application;
fig. 6 is a second schematic diagram illustrating message processing according to an embodiment of the present application;
fig. 7 is a block diagram of a message processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Further, some of the flows described in the specification, claims, and drawings of the present application include operations that occur in a particular order; these operations may also be performed out of that order or in parallel. Sequence numbers such as 101 and 102 merely distinguish the various operations and do not by themselves represent any order of execution. The flows may also include more or fewer operations, which may be performed sequentially or in parallel. It should be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, etc.; they represent neither a sequential order nor a limitation that "first" and "second" be of different types.
Before introducing the message processing method provided by the embodiment of the present application, a system architecture related to the message processing method provided by the embodiment of the present application is introduced. As shown in fig. 1, the message processing system includes: a plurality of nodes 10; at least one container group 101 is arranged on the node 10;
the node 10 is configured to:
acquiring a message to be forwarded; the message to be forwarded contains a destination IP address;
in a non-kernel environment, querying a first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address;
in the non-kernel environment, selecting a target back-end container group IP address from the plurality of back-end container group IP addresses to replace the destination IP address in the message to be forwarded;
and forwarding the replaced message to be forwarded.
In the technical solution provided by the embodiments of the present application, in a non-kernel environment, a plurality of back-end container group IP addresses matched with the destination IP address in the message to be forwarded are looked up, and one back-end container group IP address is selected from them to replace the destination IP address in the message to be forwarded. That is, functions that the prior art implements in the kernel are moved into the non-kernel environment, which reduces the burden on the kernel. In large-scale traffic scenarios, message forwarding performance can thereby be improved.
In a specific example, a programmable logic device 102 is further disposed on the node 10; each container group 101 of the at least one container group is communicatively connected to the programmable logic device 102, and the non-kernel environment described above is located in the programmable logic device 102. The programmable logic device 102 may include, but is not limited to: an FPGA (Field Programmable Gate Array) or a CPLD (Complex Programmable Logic Device).
In practice, container networks are implemented as logical networks (overlay networks) on top of physical networks (underlay networks). Here, the underlay is the physical network, composed of physical devices and physical links. Common physical devices include switches, routers, firewalls, load balancers, intrusion detection systems, and behavior management systems; these devices are connected by specific links to form a traditional physical network, which we refer to as the underlay network. The overlay is built on tunneling technology. VXLAN (Virtual Extensible Local Area Network), NVGRE (Network Virtualization using Generic Routing Encapsulation), and STT (Stateless Transport Tunneling) are three typical tunneling technologies, all of which implement a large layer-2 network over tunnels: the original layer-2 data frame is encapsulated and then transmitted through the tunnel. In short, overlay technology allows one or more logical networks, i.e., virtual networks, to be created on top of an existing physical network, effectively solving many problems of physical data centers, especially cloud data centers, and enabling data center automation and intelligence.
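To make the tunneling idea concrete, the following minimal Python sketch (illustrative only, not part of the claimed method) builds the 8-byte VXLAN header defined in RFC 7348 around an inner layer-2 frame; the frame contents and VNI value are placeholder assumptions. In a real deployment the virtual tunnel device would additionally wrap the result in an outer UDP/IP/Ethernet envelope addressed to the remote node.

    import struct

    VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """Prepend an 8-byte VXLAN header (RFC 7348) to an inner L2 frame.

        Layout: flags byte (I bit set), 3 reserved bytes, 24-bit VNI,
        1 reserved byte.
        """
        flags = 0x08  # "I" flag: the VNI field is valid
        header = struct.pack("!B3xI", flags, vni << 8)  # VNI in the top 24 bits
        return header + inner_frame

    # Example: encapsulate a dummy 64-byte inner frame for virtual network 42.
    packet = vxlan_encapsulate(b"\x00" * 64, vni=42)
    assert len(packet) == 72 and packet[0] == 0x08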
In the container network, the node 10 is located in the underlay network, and the container group Pod is located in the overlay network. The container group Pod is the smallest scheduling unit in Kubernetes, and each Pod is a running instance of an application. Kubernetes does not manage containers directly but through Pods. A Pod contains: one or more containers (typically one, unless multiple containers are tightly coupled and need to share resources); shared storage resources (e.g., data volumes), so that containers in a Pod can share storage space; a shared IP address, so that containers in the Pod can access each other; and options defining how the containers should run.
As shown in fig. 1, in the container network, a container group 101 is communicatively connected to the programmable logic device 102 through a virtual network card device. The virtual network card device consists of two virtual network card interfaces and a link connecting them; one virtual network card interface 1031 of the virtual network card device is disposed in the container group 101, and the other virtual network card interface 1032 is disposed in the programmable logic device 102. The node 10 is further provided with a virtual gateway device 104, a virtual tunnel device 105, and a physical network card 106; the virtual gateway device 104 and the virtual tunnel device 105 are disposed on the programmable logic device 102.
The specific implementation of each unit in the above system and the interaction process between the units will be described in detail in the following embodiments.
Fig. 2 is a flowchart illustrating a message processing method according to an embodiment of the present application. As shown in fig. 2, the message processing method includes the following steps:
201. and acquiring a message to be forwarded.
And the message to be forwarded contains a destination IP address.
202. And in the non-kernel environment, querying a first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address.
203. And in the non-kernel environment, selecting a target back-end container group IP address from the plurality of back-end container group IP addresses to replace the destination IP address in the message to be forwarded.
204. And forwarding the replaced message to be forwarded.
In 202, the non-kernel environment may be located in a programmable logic device.
Kubernetes provides a resource type, Service, which aggregates multiple container groups (Pods) providing the same service and exposes a unified entry address, for example a service IP address. The back-end Pod services can be accessed through this entry address of the Service. That is, there is a mapping relationship between a service IP address and a plurality of back-end container group (Pod) IP addresses, and the first matching table may be constructed based on this mapping relationship.
In an example, the first matching table may store at least one service IP address and the plurality of back-end container group IP addresses corresponding to each service IP address. Querying the first matching table means querying whether the destination IP address is stored in the first matching table. If the destination IP address is stored there, the destination IP address is a service IP address, so the plurality of back-end container group IP addresses corresponding to the destination IP address, that is, the back-end container group IP addresses matched with the destination IP address, are obtained from the first matching table.
In 203, in an example, a target backend container group IP address may be randomly selected from the plurality of backend container group IP addresses to replace the destination IP address in the to-be-forwarded message. In another example, a target backend container group IP address may be selected from the multiple backend container group IP addresses according to a preset load balancing policy (or load balancing algorithm) to replace the destination IP address in the to-be-forwarded message.
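As a concrete illustration of steps 202 and 203, the following Python sketch models the first matching table as a dictionary keyed by (service IP address, port number, protocol) and selects a back-end at random; all table contents and names are illustrative assumptions, and the real pipeline performs this lookup in hardware.

    import random

    # First matching table: (service IP, port, protocol) -> list of
    # (back-end Pod IP, Pod port) entries.  Contents are illustrative.
    FIRST_MATCHING_TABLE = {
        ("10.96.0.10", 80, "TCP"): [
            ("172.16.1.5", 8080),
            ("172.16.2.7", 8080),
            ("172.16.3.9", 8080),
        ],
    }

    def select_backend(dst_ip, dst_port, proto):
        """Steps 202-203: query the first matching table and select one
        back-end container group IP address (here uniformly at random;
        any load balancing policy could be substituted)."""
        backends = FIRST_MATCHING_TABLE.get((dst_ip, dst_port, proto))
        if backends is None:
            return None  # destination is not a service IP address
        return random.choice(backends)

    # DNAT: rewrite the destination of a message to be forwarded.
    msg = {"src": ("192.168.0.2", 40000), "dst": ("10.96.0.10", 80), "proto": "TCP"}
    backend = select_backend(msg["dst"][0], msg["dst"][1], msg["proto"])
    if backend is not None:
        msg["dst"] = backend  # replace destination IP address and port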
In the technical solution provided by the embodiments of the present application, in a non-kernel environment, a plurality of back-end container group IP addresses matched with the destination IP address in the message to be forwarded are looked up, and one back-end container group IP address is selected from them to replace the destination IP address in the message to be forwarded. That is, functions that the prior art implements in the kernel are moved into the non-kernel environment, which reduces the burden on the kernel. In large-scale traffic scenarios, message forwarding performance can thereby be improved.
In one implementation, a message processing pipeline is built in the non-kernel environment. The method may further include:
205. And in the non-kernel environment, executing the message processing pipeline to implement steps 202 and 203 described above.
In an implementation, the message processing pipeline may be written in P4 (Programming Protocol-independent Packet Processors), a protocol-independent packet processing language. P4 supports both software and hardware implementations. A software implementation means that the P4-based message processing pipeline depends on the kernel; a hardware implementation means that it does not: the pipeline is loaded into a programmable logic device, and the programmable logic device provides the non-kernel environment in which the pipeline executes.
It should be noted that, when the message processing pipeline is written in P4, it may be implemented in software when the traffic scale is small, and in hardware when the traffic scale is large. That is, writing the message processing pipeline in P4 makes flexible switching between software and hardware possible in container network scenarios.
In general, a P4 pipeline includes a parser, a matching action table component, and a deparser, where the matching action table component operates based on matching action tables.
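The division of labor among these three stages can be illustrated with a small Python analogue (a sketch only; the actual stages are compiled P4 running on the programmable logic device):

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Headers:
        src_ip: str
        dst_ip: str

    def parser(raw: dict) -> Headers:
        """Parser: extract the header fields the tables will match on."""
        return Headers(raw["src_ip"], raw["dst_ip"])

    def match_action(hdr: Headers, table: Dict[str, Callable[[Headers], Headers]]) -> Headers:
        """Matching action stage: look up a key and apply the hit action."""
        action = table.get(hdr.dst_ip)
        return action(hdr) if action else hdr

    def deparser(hdr: Headers) -> dict:
        """Deparser: reassemble the (possibly rewritten) headers."""
        return {"src_ip": hdr.src_ip, "dst_ip": hdr.dst_ip}

    # A one-entry matching action table that rewrites the destination.
    table = {"10.96.0.10": lambda h: Headers(h.src_ip, "172.16.1.5")}
    out = deparser(match_action(parser({"src_ip": "192.168.0.2", "dst_ip": "10.96.0.10"}), table))
    assert out["dst_ip"] == "172.16.1.5"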
When the message processing pipeline is written in P4, the first matching table comprises a first matching action table. Accordingly, 202 above, "in the non-kernel environment, querying the first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address", includes:
2021. And in the non-kernel environment, querying the first matching action table to obtain a target action matched with the destination IP address.
2022. And in the non-kernel environment, executing the target action to obtain a plurality of back-end container group IP addresses matched with the destination IP address.
In 2021 above, the first matching action table stores first matching items and their corresponding first processing actions; a first matching item includes a service IP address. The first matching action table may store a plurality of first matching items, and different first matching items may include different service IP addresses.
A first matching item matched with the destination IP address is looked up in the first matching action table, and the first processing action corresponding to that first matching item is taken as the target action.
In 2022 above, the target action is executed to obtain a plurality of back-end container group IP addresses matched with the destination IP address. Note: here the destination IP address is a service IP address.
Optionally, a first matching item may further include the port number and protocol number corresponding to its service IP address. In practical application, the destination port number and destination protocol number in the message to be forwarded can be determined, and a first matching item matching the destination IP address, the destination port number, and the destination protocol number is looked up in the first matching action table.
Accordingly, in 203, "in the non-kernel environment, selecting a target back-end container group IP address from the plurality of back-end container group IP addresses to replace the destination IP address in the message to be forwarded" may include:
2031. And in the non-kernel environment, executing the target action to select a target back-end container group IP address from the plurality of back-end container group IP addresses and replace the destination IP address in the message to be forwarded with the target back-end container group IP address.
Specifically, executing the target action implements the following: obtaining the plurality of back-end container group IP addresses matched with the destination IP address, selecting a target back-end container group IP address from them, replacing the destination IP address in the message to be forwarded with the target back-end container group IP address, and replacing the destination port number in the message to be forwarded with the port number corresponding to the target back-end container group IP address.
In practical application, the method may further include: after step 203, the tuple information corresponding to the message to be forwarded and the target back-end container group IP address may be stored correspondingly in a connection tracking table for subsequent use. The tuple information may be a five-tuple comprising the source IP address, destination IP address, source port number, destination port number, and protocol number of the message to be forwarded. Each piece of tuple information recorded in the connection tracking table identifies an established service connection.
In addition, after step 203, the reverse tuple information corresponding to the message to be forwarded and the original destination IP address may also be stored correspondingly in a reverse connection tracking table for subsequent use. The reverse tuple information of the message to be forwarded refers to the tuple information of the reverse flow of the message to be forwarded; it may be a reverse five-tuple, i.e., the five-tuple of the reverse flow of the message to be forwarded, in which the source IP address is the target back-end container group IP address. Each piece of reverse tuple information recorded in the reverse connection tracking table likewise identifies an established service connection.
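Continuing the illustrative Python sketch above, the two tables can be modeled as dictionaries filled in after step 203; the five-tuples and addresses are assumed values:

    # Connection tracking table: five-tuple of the forward flow ->
    # selected back-end (Pod IP, Pod port).
    CONNTRACK = {}
    # Reverse connection tracking table: five-tuple of the reverse flow ->
    # original (service IP, service port) to restore on replies.
    REVERSE_CONNTRACK = {}

    def record_connection(src, dst_service, backend, proto):
        """After step 203: remember the established service connection."""
        src_ip, src_port = src
        svc_ip, svc_port = dst_service
        be_ip, be_port = backend
        # Forward five-tuple: how the first packet of the connection looks
        # when it arrives, still addressed to the service.
        CONNTRACK[(src_ip, svc_ip, src_port, svc_port, proto)] = backend
        # Reverse five-tuple: the source is the chosen back-end Pod.
        REVERSE_CONNTRACK[(be_ip, src_ip, be_port, src_port, proto)] = dst_service

    record_connection(("192.168.0.2", 40000), ("10.96.0.10", 80),
                      ("172.16.1.5", 8080), "TCP")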
In an implementation scenario, before the step 202, the method may further include the following steps:
206. determining tuple information of the message to be forwarded;
207. And in the non-kernel environment, querying the connection tracking table to determine whether a target back-end container group IP address matched with the tuple information is stored in the connection tracking table;
208. And when no target back-end container group IP address matched with the tuple information is stored in the connection tracking table, triggering the step of querying the first matching table in the non-kernel environment to obtain a plurality of back-end container group IP addresses matched with the destination IP address.
Specifically, the connection tracking table is searched to determine whether the tuple information of the message to be forwarded is stored in it. If it is stored, the message to be forwarded is not a first packet; that is, the related service connection has already been established, so a load balancing operation does not need to be performed again according to the first matching table. If it is not stored, a load balancing operation needs to be performed according to the first matching table to establish the service connection, that is, steps 202 and 203 above need to be performed.
Optionally, the method may further include the following steps:
209. and when the connection tracking table stores the target back-end container group IP address matched with the tuple information, replacing the target IP address in the message to be forwarded with the target back-end container group IP address.
If the tuple information of the message to be forwarded is stored in the connection tracking table, acquiring a back-end container group IP address corresponding to the tuple information of the message to be forwarded from the connection tracking table, and using the back-end container group IP address as a target back-end container group IP address matched with the tuple information of the message to be forwarded; and replacing the target IP address in the message to be forwarded with the target back-end container group IP address.
In one example, the method may further include the steps of:
210. and determining whether the destination IP address is a service IP address or not according to a service IP table.
211. And when the destination IP address is a service IP address, executing the step of querying the first matching table in the non-kernel environment to obtain a plurality of back-end container group IP addresses matched with the destination IP address.
In 210 above, at least one service IP address is stored in the service IP table. The purpose of querying the service IP table is to determine whether the destination IP address belongs to the service IP addresses.
If the service IP table stores the destination IP address, the destination IP address belongs to the service IP address; if the destination IP address is not stored in the service IP table, the destination IP address does not belong to the service IP address.
In the above 211, when the destination IP address is stored in the service IP table, the above steps 202 and 203 are executed.
When the destination IP address is not stored in the service IP table, it indicates that the destination IP address is a container group IP address, and the message to be forwarded may then be a response message. Therefore, the method may further include the following steps:
212. and when the destination IP address is not the service IP address, determining the source IP address in the message to be forwarded.
213. In a non-kernel environment, querying a reverse connection tracking table to obtain a target service IP address matched with tuple information of the message to be forwarded; and replacing the source IP address in the message to be forwarded with the target service IP address.
In 213, it is queried whether the tuple information of the message to be forwarded is stored in the reverse connection tracking table. If it is stored, the message to be forwarded is a response message; otherwise, the message to be forwarded is not a response message.
And when the tuple information of the message to be forwarded is stored in the reverse connection tracking table, taking the service IP address corresponding to the tuple information stored in the reverse connection tracking table as a target service IP address matched with the tuple information of the message to be forwarded.
And replacing the source IP address in the message to be forwarded with the target service IP address, so that the requesting end can correctly recognize the response message it receives. Usually, when the requesting end sends a request, the destination IP address is a service IP address; the source IP address in the response message must then also be that service IP address, so that the requesting end regards the response message as returned by the object it requested.
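The reply-direction rewrite of steps 212 and 213 can be sketched as follows (illustrative Python; the reverse connection tracking entry is the one assumed in the earlier sketch):

    # Reverse connection tracking table (contents from the earlier sketch):
    # reverse five-tuple -> original (service IP, service port).
    REVERSE_CONNTRACK = {
        ("172.16.1.5", "192.168.0.2", 8080, 40000, "TCP"): ("10.96.0.10", 80),
    }

    def rewrite_reply_source(msg):
        """Steps 212-213: if the message matches the reverse connection
        tracking table, restore the service address as its source so the
        requesting end recognizes the reply."""
        (src_ip, src_port), (dst_ip, dst_port) = msg["src"], msg["dst"]
        key = (src_ip, dst_ip, src_port, dst_port, msg["proto"])
        service = REVERSE_CONNTRACK.get(key)
        if service is not None:
            msg["src"] = service  # replace source IP address and port number
        return msg

    reply = {"src": ("172.16.1.5", 8080), "dst": ("192.168.0.2", 40000), "proto": "TCP"}
    assert rewrite_reply_source(reply)["src"] == ("10.96.0.10", 80)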
In order to meet the requirement of users for diverse selection of message forwarding schemes, the method may further include the following steps:
214. and determining a target processing environment from the kernel environment and the non-kernel environment according to the user configuration information.
215. When the target processing environment is the non-kernel environment, steps 202 and 203 are performed.
216. When the target processing environment is the kernel environment, querying the first matching table in the kernel environment to obtain a plurality of back-end container group IP addresses matched with the destination IP address; and in the kernel environment, selecting a target back-end container group IP address from the plurality of back-end container group IP addresses to replace the destination IP address in the message to be forwarded.
The querying, obtaining, selecting, and replacing operations in the kernel environment may refer to their specific implementations in the non-kernel environment and are not described in detail here.
In practical application, the method may further include:
217. and analyzing the received message to obtain an analysis result.
218. And determining the message type of the message according to the analysis result.
219. And determining the message to be forwarded according to the message type of the message.
Steps 217, 218, and 219 may be performed in the non-kernel environment; specifically, they may be implemented by a message forwarding pipeline running in the non-kernel environment.
The message forwarding pipeline includes a parser, which can parse the message to obtain a parsing result. The message carries its message type, and the parsing result includes that message type. The message types may include: an Address Resolution Protocol (ARP) message type, an IP message type, and a tunnel encapsulation message type.
In an implementation scenario, "determining the message to be forwarded according to the message type of the message" in 219 above may include one or more of the following steps:
s10, when the message type of the message is the type of the address resolution protocol message, determining that no message to be forwarded exists, and answering the message in a substituted manner.
S11, when the message type of the message is the IP message type, the message is used as the message to be forwarded.
S12, when the message type of the message is the tunnel encapsulation message type, decapsulating the message to obtain a decapsulated message, and taking the decapsulated message as the message to be forwarded.
In S10 above, ARP is a protocol that resolves an IP address into an Ethernet MAC (Media Access Control) address, i.e., a physical address. Answering ARP messages by proxy can effectively disperse the pressure of ARP message processing: the pipeline can generate a random MAC address and return an ARP reply message. The specific implementation of proxy ARP reply can be found in the prior art and is not described in detail here.
In S12 above, at least one remote node IP address is stored in a tunnel table. When the outer source IP address in the message matches a remote node IP address in the tunnel table, the message type of the message is determined to be the tunnel encapsulation message type.
In an example, the tunnel table may further store at least one remote container group IP address for use in message encapsulation; its specific use is described below.
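The classification of steps S10 to S12 can be summarized in a short illustrative Python sketch (structures and addresses are assumed; the real parser works on raw headers in hardware):

    TUNNEL_TABLE = {"10.0.0.2", "10.0.0.3"}  # remote node IP addresses (illustrative)

    def classify_and_prepare(msg):
        """Steps S10-S12: determine the message to be forwarded, if any."""
        if msg.get("type") == "ARP":
            return None                 # S10: answered by proxy, nothing to forward
        if msg.get("outer_src") in TUNNEL_TABLE:
            return msg["inner"]         # S12: tunnel-encapsulated, decapsulate first
        return msg                      # S11: plain IP message, forward as is

    assert classify_and_prepare({"type": "ARP"}) is None
    inner = {"type": "IP", "dst": ("172.16.1.5", 8080)}
    assert classify_and_prepare({"type": "VXLAN", "outer_src": "10.0.0.2", "inner": inner}) is inner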
A message forwarding pipeline written in P4 may be referred to as a P4 pipeline for short, and each of the tables above is a matching action table. An embodiment of the present application is described below, taking a P4 pipeline loaded on a programmable logic device to implement message forwarding as an example:
the programmable logic device can receive the message through the virtual network interface card, the virtual gateway device and the virtual tunnel device which are arranged on the programmable logic device, namely the message can enter the message forwarding pipeline through the virtual network interface card, the virtual gateway device and the virtual tunnel device which are arranged on the programmable logic device. In addition, the message processed by the programmable logic device is sent out through a virtual network card interface, a virtual gateway device and a virtual tunnel device which are arranged on the programmable logic device.
The message sent by the local container group Pod through the virtual network card interface arranged on the local container group Pod enters the programmable logic device through the corresponding virtual network card interface arranged on the programmable logic device; the message sent by the remote container group Pod or the message sent by the physical network card interface on the remote node enters the programmable logic device through the virtual tunnel equipment arranged on the programmable logic device; messages sent by the local node (specifically, a physical network card interface on the local node) enter the programmable logic device through the virtual gateway device arranged on the programmable logic device.
For example, fig. 3 shows four message forwarding paths (1), (2), (3), and (4): path (1) illustrates communication between the local container group and the local node; path (2) illustrates communication between two local container groups; path (3) illustrates communication between a remote node and a local container group; path (4) illustrates communication between a local container group and a remote container group. Path (1) shown in fig. 3: messages sent by the first container group 111 pass in sequence through the first virtual network card interface 113, the second virtual network card interface 114, the first programmable logic device 117, and the first virtual gateway device 118, and reach the first physical network card 110. Path (2) shown in fig. 3: messages sent by the first container group 111 pass in sequence through the first virtual network card interface 113, the second virtual network card interface 114, the first programmable logic device 117, the fourth virtual network card interface 116, and the third virtual network card interface 115, and reach the second container group 112. Path (3) shown in fig. 3: messages sent by the second physical network card 120 pass in sequence through the second virtual gateway device 129, the second programmable logic device 127, the second virtual tunnel device 128, the first virtual tunnel device 119, the first programmable logic device 117, the fourth virtual network card interface 116, and the third virtual network card interface 115, and reach the second container group 112. Path (4) shown in fig. 3: messages sent by the second container group 112 pass through the third virtual network card interface 115, the fourth virtual network card interface 116, the first programmable logic device 117, the first virtual tunnel device 119, the second virtual tunnel device 128, the sixth virtual network card interface 124, and the fifth virtual network card interface 123, and reach the third container group 121.
The P4 pipeline relates to a plurality of matching action tables, and the specific number may be set according to actual needs, which is not specifically limited in the embodiment of the present application.
The service back-end selection table (i.e., the first matching table above) is configured to store a plurality of first matching items; the service IP addresses included in different first matching items may differ, and so may their corresponding first processing actions.
In the embodiments of the present application, both parsing the message and performing the forwarding operation on the message to be forwarded are implemented by the P4 pipeline.
The P4 pipeline runs in the programmable logic device, that is, in a non-kernel environment, and is configured to implement the following steps:
400. and acquiring a message to be forwarded.
And the message to be forwarded contains a destination IP address.
401. And searching a target first matching item matched with the destination IP address in the service back-end selection table.
402. And executing a first processing action corresponding to the target first matching item.
In 401, the destination IP address is matched against the first matching items in the service back-end selection table. If the service IP address included in a first matching item is the destination IP address, that first matching item is taken as the target first matching item.
In 402, the first processing action corresponding to the target first matching item (i.e., the target action in the foregoing) is executed, that is: a plurality of back-end container group IP addresses corresponding to the destination IP address are obtained, and a target back-end container group IP address is selected from them to replace the destination IP address in the message to be forwarded.
In practical applications, a first matching item may further include the port number and protocol number corresponding to its service IP address. Accordingly, 401 above includes: searching the service back-end selection table for a target first matching item matched with the destination IP address, the destination port number, and the destination protocol number.
Correspondingly, executing the first processing action corresponding to the target first matching item then means: obtaining a plurality of back-end container group IP addresses corresponding to the destination IP address, selecting a target back-end container group IP address from them, replacing the destination IP address in the message to be forwarded with the target back-end container group IP address, and replacing the destination port number in the message to be forwarded with the port number corresponding to the target back-end container group IP address.
In practical applications, the at least one matching action table further includes a connection tracking table. The method may further include: after step 402, adding a new second matching item and its corresponding second processing action to the connection tracking table. The new second matching item includes the tuple information of the message to be forwarded, for example a five-tuple comprising the source IP address, destination IP address, source port number, destination port number, and protocol number of the message to be forwarded. The second processing action corresponding to the new second matching item includes: modifying the destination IP address in the message to be forwarded into the selected target back-end container group IP address; specifically, modifying the destination IP address and destination port number in the message to be forwarded into the selected target back-end container group IP address and its corresponding port number.
Furthermore, after step 402 above, a new fourth matching item and its corresponding third processing action are added to the reverse connection tracking table. The new fourth matching item includes the reverse tuple information corresponding to the message to be forwarded, for example a reverse five-tuple, i.e., the five-tuple of the reverse traffic of the message to be forwarded, in which the source IP address is the selected target back-end container group IP address. The third processing action corresponding to the new fourth matching item includes: modifying the source IP address in the message to be forwarded into the original destination IP address; specifically, modifying the source IP address and source port number in the message to be forwarded into the original destination IP address and its corresponding port number.
In practical application, the connection tracking table is used to store second matching items and their corresponding second processing actions. A second matching item comprises the tuple information of an existing service connection; the second processing action comprises modifying the destination IP address in the message to be forwarded into the back-end container group IP address corresponding to the existing service connection. There may be one or more second matching items; different second matching items may include different tuple information and different corresponding second processing actions. The second processing action may specifically include: modifying the destination IP address and destination port number in the message to be forwarded into the back-end container group IP address and port number corresponding to the existing service connection. The P4 pipeline is further configured to implement:
403. and searching a target second matching item matched with the tuple information of the message to be forwarded in a connection tracking table.
404. If the target second matching item is not found, the step 401 is executed.
In 403, the tuple information of the packet to be forwarded is matched with the second matching entry in the connection tracking table.
In the above 404, if the target second matching item is not found, it indicates that the packet to be forwarded is the first packet, and therefore, a load balancing operation needs to be performed according to the service backend selection table to establish a corresponding service connection.
Optionally, the P4 pipeline is further configured to implement:
405. and if the target second matching item is found, executing a second processing action corresponding to the target second matching item.
In 405, a second processing action corresponding to the target second matching item is executed, that is, the destination IP address in the message to be forwarded is replaced with the back-end container group IP address corresponding to the tuple information of the message to be forwarded in the connection tracking table.
When a target second matching item matched with the tuple information corresponding to the message to be forwarded exists in the connection tracking table, the message to be forwarded is not a first packet; that is, the related service connection has already been established, so the load balancing operation does not need to be performed again according to the service back-end selection table to establish a new service connection, and the destination IP address in the message to be forwarded can be directly replaced with the back-end container group IP address and port number corresponding to its tuple information in the connection tracking table.
In practical application, in order to meet the requirement of a user for diversity selection of a packet forwarding scheme, the P4 pipeline includes: a first branch and a second branch; requesting a kube-proxy component on the node to execute service back-end selection operation aiming at a message to be forwarded in the first branch; the second branch relates to the service backend selection table; the at least one matching action table comprises: a branch switching table (corresponding to the service IP table above); the branch switching table is used for storing a third matching item and a corresponding switching action thereof; the third match includes a service IP address; the switching action is one of a first switching action and a second switching action; the first switching action comprises distributing a message to be forwarded to the first branch; the second switching action comprises distributing the message to be forwarded to the second branch. The branch switching table may include a plurality of third matching entries, and different third matching entries may include different service IP addresses.
Optionally, the P4 pipeline is further configured to implement:
406. and executing the matching operation of the destination IP address in the message to be forwarded in the branch switching table.
407. And when a target third matching item matched with the destination IP address in the message to be forwarded exists in the branch switching table, executing a switching action corresponding to the target third matching item.
In 406, the destination IP address in the message to be forwarded is matched against the third matching items in the branch switching table.
In 407, when the target third matching item exists in the branch switching table, it indicates that the message to be forwarded is accessing a service, and the switching action corresponding to the target third matching item is executed: if it is the first switching action, the message to be forwarded is distributed to the first branch; if it is the second switching action, the message to be forwarded is distributed to the second branch.
After the first branch receives the message to be forwarded, it sends the message to the kube-proxy component on the node, so that the kube-proxy component performs the service back-end selection operation for the message to be forwarded; after the kube-proxy component modifies the message to be forwarded, the modified message is sent back to the programmable logic device through the virtual gateway device arranged on it, so that the message re-enters the P4 pipeline in the programmable logic device.
After receiving the message to be forwarded, the second branch may perform operations related to the service backend selection table and the connection tracking table in the above embodiments, which are not described herein again.
The switching actions in the branch switching table are configured by the user. In an implementable scheme, the P4 pipeline is further configured to implement:
408. And acquiring user configuration information.
The user configuration information includes indication information indicating the target branch selected by the user from the first branch and the second branch.
409. And correspondingly filling the switching action corresponding to the target branch into the branch switching table according to the configuration information.
In one embodiment, the switching action corresponding to each third matching item in the branch switching table may be configured as the switching action corresponding to the target branch; the first branch corresponds to the first switching action, and the second branch corresponds to the second switching action.
In another specific example, the configuration information includes one or more service IP addresses and, for each service IP address, indication information indicating the target branch selected by the user for it; each service IP address in the configuration information and the switching action corresponding to its target branch are correspondingly filled into the branch switching table. In this embodiment, the user can select a branch for each service IP address individually, which is more flexible and convenient.
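A minimal Python sketch of this per-service branch configuration (branch names and table structure are assumptions for illustration):

    FIRST_BRANCH, SECOND_BRANCH = "kube-proxy", "pipeline"

    # Branch switching table: service IP address -> switching action.
    BRANCH_SWITCHING_TABLE = {}

    def apply_user_config(config):
        """Steps 408-409: fill the branch switching table from user
        configuration; each service IP may select its own branch."""
        for service_ip, target_branch in config.items():
            BRANCH_SWITCHING_TABLE[service_ip] = target_branch

    apply_user_config({
        "10.96.0.10": SECOND_BRANCH,  # handled by the P4 pipeline itself
        "10.96.0.11": FIRST_BRANCH,   # delegated to kube-proxy on the node
    })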
Optionally, the P4 pipeline is further configured to implement:
410. and when the third target matching item does not exist in the branch switching table, searching a fourth target matching item matched with the tuple information of the message to be forwarded in a backward connection tracking table.
The backward connection tracking table is used for storing a fourth matching item storage and a third processing action corresponding to the fourth matching item storage; the fourth matching item comprises reverse tuple information corresponding to the existing service connection; and the third processing action comprises the step of modifying the source IP address in the message to be forwarded into a service IP address corresponding to the existing service connection.
411. And when the target fourth matching item is found, executing a third processing action corresponding to the target fourth matching item so as to replace the source IP address in the message to be forwarded with a service IP address corresponding to the tuple information of the message to be forwarded in the reverse connection tracking table.
In the above 410, when the target third matching entry is not found, it indicates that the packet to be forwarded is not addressed to the service. The tuple information corresponding to the message to be forwarded can be matched with the fourth matching item in the backward connection tracking table.
In the above 411, when the target fourth matching entry is found, it indicates that the packet to be forwarded is reverse traffic of the existing service connection, and the source IP address in the packet to be forwarded needs to be modified into the service IP address corresponding to the existing service connection, specifically, the source IP address and the source port number in the packet to be forwarded may be modified into the service IP address and the port number corresponding to the existing service connection. Therefore, the source IP address and the source port number of the message seen by the access side can be ensured to be consistent with the destination IP address and the destination port number requested by the access side.
Optionally, the P4 pipeline is further configured to implement:
412. and analyzing the message to obtain an analysis result.
413. And determining the message type of the message according to the analysis result.
414. And determining the message to be forwarded according to the message type of the message.
The specific implementation of the steps 412, 413, and 414 can be found in the above embodiments, and will not be described in detail here.
After the destination IP address has been replaced, after the source IP address has been replaced, or when no target fourth matching item matching the tuple information of the message to be forwarded is found in the reverse connection tracking table, the P4 pipeline is further configured to implement:
415. and searching a target seventh matching item which is matched with the destination IP address in the current message in the first container group table.
The current message refers to the message to be forwarded at the current moment; its destination IP address or source IP address may already have been replaced, or it may be an unreplaced message to be forwarded. The first container group table is included in the at least one matching action table and includes seventh matching items and their corresponding sixth processing actions. A seventh matching item comprises a container group IP address located on the node, and the sixth processing action comprises: updating the time-to-live (TTL) value in the current message and recording the interface identifier of the device side virtual network card interface corresponding to that container group IP address. The device side virtual network card interface is a virtual network card interface arranged on the programmable logic device.
416. And when a target seventh matching item matched with the destination IP address in the current message exists in the first container group table, executing a corresponding sixth processing action so as to update the TTL in the current message and record the interface identifier of the device side virtual network card interface corresponding to the destination IP address in the current message.
Optionally, the P4 pipeline is further configured to implement:
417. and searching a target eighth matching item matched with the interface identifier of the device side virtual network card interface corresponding to the destination IP address in the current message in the virtual network interface table.
In addition, the at least one matching action table further includes the virtual network interface table, the virtual network interface table includes an eighth matching item and a seventh processing action corresponding thereto, and the eighth matching item includes: the interface identification of the device side virtual network interface; the seventh processing action comprises: and modifying the destination MAC address in the current message into the MAC address of the container group side virtual network card interface opposite to the device side virtual network card interface, wherein the container group side virtual network card interface is arranged on the container group to be accessed by the current message.
418. And when a target eighth matching item which is matched with the interface identifier of the device side virtual network card interface corresponding to the target IP address in the current message exists in the virtual network interface table, executing a seventh processing action corresponding to the target eighth matching item to modify the target MAC address in the current message into the MAC address of the container group side virtual network card interface corresponding to the device side virtual network card interface.
419. And deparsing the message obtained in step 418 to obtain a deparsed message, and sending the deparsed message through the virtual network card interface identified by the recorded interface identifier.
420. And when no target seventh matching item matched with the destination IP address in the current message exists in the first container group table, searching the second container group table for a target ninth matching item matched with the destination IP address in the current message.
The second container group table includes ninth matching items and their corresponding eighth processing actions. A ninth matching item includes a container group IP address located on a remote node. The eighth processing action comprises: updating the time-to-live (TTL) value in the current message and recording the device identifier of the virtual tunnel device on the local node.
421. When a target ninth matching item matched with the destination IP address in the current message exists in the second container group table, executing the eighth processing action corresponding to the target ninth matching item.
422. And searching a target sixth matching item matched with the destination IP address in the current message in the tunnel table.
423. And when a target sixth matching item matched with the destination IP address in the current message exists in the tunnel table, executing a fifth processing action corresponding to the target sixth matching item to perform tunnel encapsulation on the current message.
The tunnel encapsulation operation can be performed by the tunnel device identified by the recorded device identifier.
424. And deparsing the encapsulated message obtained in step 423, and sending it out through the tunnel device identified by the recorded device identifier.
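Steps 415 to 424 amount to a layer-3 forwarding decision: deliver locally through the paired virtual network card interface, or hand the message to the tunnel device for encapsulation. A condensed, illustrative Python sketch (table contents assumed):

    LOCAL_POD_TABLE = {"172.16.1.5": "veth-dev-1"}    # Pod IP -> device side interface
    REMOTE_POD_TABLE = {"172.16.9.8": "vtep0"}        # Pod IP -> local tunnel device
    VNIC_TABLE = {"veth-dev-1": "aa:bb:cc:dd:ee:01"}  # interface -> Pod side MAC

    def forward(msg):
        """Steps 415-424: choose an egress for the current message."""
        dst_ip = msg["dst"][0]
        if dst_ip in LOCAL_POD_TABLE:                 # 415-419: Pod on this node
            msg["ttl"] -= 1                           # sixth processing action
            iface = LOCAL_POD_TABLE[dst_ip]
            msg["dst_mac"] = VNIC_TABLE[iface]        # seventh processing action
            return ("emit", iface)
        if dst_ip in REMOTE_POD_TABLE:                # 420-424: Pod on a remote node
            msg["ttl"] -= 1                           # eighth processing action
            return ("encapsulate_and_emit", REMOTE_POD_TABLE[dst_ip])
        return ("drop", None)

    msg = {"dst": ("172.16.1.5", 8080), "ttl": 64}
    assert forward(msg) == ("emit", "veth-dev-1") and msg["ttl"] == 63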
Fig. 4 shows a schematic diagram of a message forwarding pipeline provided in an embodiment of the present application. As shown in fig. 4, the message forwarding pipeline includes:
300. and entering a message.
301. The analyzer analyzes the message to determine the message type of the message.
302. And when the message type of the message is the ARP message type, carrying out ARP answering instead.
Because the pipeline design does not rely on layer-2 MAC forwarding, the ARP messages sent by all virtual network card interfaces are answered by proxy, and ARP requests are confined to the physical machine.
303. And when the message type of the message is the tunnel encapsulation message type, matching in the tunnel table.
304. And if the packet is matched with the upper tunnel table, decapsulating to obtain the packet to be forwarded.
Note: and when the message type of the message is the IP message type, directly using the message as the message to be forwarded.
305. Match in the branch switching table.
306. If the first branch is matched, send the message to be forwarded to the Kube-proxy component; after the Kube-proxy component processes it, the message re-enters the pipeline.
307. If the second branch is matched, match in the connection tracking table.
308. If the connection tracking table is matched, perform destination address translation on the message to be forwarded directly according to the existing service connection.
309. If the connection tracking table is not matched, match in the service back-end selection table.
310. If the service back-end selection table is matched, perform destination address translation on the message to be forwarded according to the matched information, and perform connection tracking.
311. If the branch switching table is not matched, match in the reverse connection tracking table.
312. If the reverse connection tracking table is matched, perform source address translation on the message to be forwarded according to the existing service connection.
If the reverse connection tracking table is not matched, or after step 312 is completed, step 313 is performed.
313. Match in the first container group table.
314. If the first container group table is matched, update the TTL and record the interface identifier of the target virtual network card interface.
315. Match in the virtual network card interface table.
316. If the virtual network card interface table is matched, rewrite the destination MAC address in the current message according to the matched information.
317. If the first container group table is not matched, match in the second container group table.
318. If the second container group table is matched, update the TTL and record the device identifier of the tunnel device.
319. Match in the tunnel table.
320. If the tunnel table is matched, perform tunnel encapsulation on the current message.
The correct node IP address and node MAC address can be filled into the outer layer by the tunnel device.
321. The deparser deparses the current message.
As the packet shuttles through the match-action stages, header fields may be changed according to the processing actions designed for each table, and important intermediate metadata may be added. After the pipeline has finished the forwarding logic for the packet, the deparser reassembles the packet header.
322. Send the message out.
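To make the branch structure of fig. 4 concrete, the following apply block sketches steps 302 to 320 in P4_16. It is a sketch under assumptions only: every table and action name (branch_tbl, conntrack_tbl, and so on) is invented for illustration and does not come from the embodiment.

```p4
// Sketch only: the control flow of the fig. 4 pipeline as a P4_16
// apply block; all table and action names are assumed.
apply {
    if (hdr.arp.isValid()) {
        do_proxy_arp_reply();                     // 302: proxy ARP reply
    } else {
        if (hdr.vxlan.isValid()) {
            tunnel_decap_tbl.apply();             // 303/304: decapsulate
        }
        switch (branch_tbl.apply().action_run) {  // 305: branch switching
            to_kube_proxy: {
                // 306: punt to the Kube-proxy component, which
                // re-injects the message into the pipeline.
            }
            to_service_branch: {
                if (!conntrack_tbl.apply().hit) { // 307/308: tracked DNAT
                    service_backend_tbl.apply();  // 309/310: select back
                }                                 // end + track connection
            }
            default: {
                rev_conntrack_tbl.apply();        // 311/312: reverse SNAT
            }
        }
        if (pod_tbl_local.apply().hit) {          // 313/314: local pod
            vnic_tbl.apply();                     // 315/316: rewrite dMAC
        } else if (pod_tbl_remote.apply().hit) {  // 317/318: remote pod
            tunnel_tbl.apply();                   // 319/320: encapsulate
        }
    }
    // 321/322: the deparser then reassembles the header and the message
    // is sent through the recorded interface or tunnel device.
}
```

In a real implementation, the punt to Kube-proxy and the re-entry of the processed message would be architecture-specific, which is one reason the embodiment adapts the pipeline to the selected P4 model.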
In this embodiment, the pipeline performs layer-3 forwarding using IP addresses. The pipeline can divide its ingress and egress according to different P4 architectures, and the implementation is adapted to the selected P4 model.
P4 models include, but are not limited to: PSA (Portable Switch Architecture) and PNA (Portable NIC Architecture). For the specific implementation of each step in the embodiments of the present application, reference may be made to the corresponding content in the above embodiments, which is not repeated here.
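As a final illustrative sketch, a v1model-style P4_16 parser that distinguishes the three message types used by the pipeline (ARP, plain IP, and tunnel-encapsulated) might look as follows. The header definitions, struct names, and the use of VXLAN over UDP port 4789 are assumptions of this sketch.

```p4
// Sketch only: a minimal parser distinguishing ARP, IP, and
// tunnel-encapsulated messages; headers_t and metadata_t are assumed.
const bit<16> ETHERTYPE_ARP  = 0x0806;
const bit<16> ETHERTYPE_IPV4 = 0x0800;
const bit<16> UDP_PORT_VXLAN = 4789;

parser PipelineParser(packet_in pkt, out headers_t hdr,
                      inout metadata_t meta,
                      inout standard_metadata_t std_meta) {
    state start {
        pkt.extract(hdr.ethernet);
        transition select(hdr.ethernet.ether_type) {
            ETHERTYPE_ARP:  parse_arp;
            ETHERTYPE_IPV4: parse_ipv4;
            default:        accept;
        }
    }
    state parse_arp {
        pkt.extract(hdr.arp);
        transition accept;            // 302: handled by proxy ARP reply
    }
    state parse_ipv4 {
        pkt.extract(hdr.ipv4);
        transition select(hdr.ipv4.protocol) {
            17:      parse_udp;       // UDP: possibly VXLAN encapsulated
            default: accept;
        }
    }
    state parse_udp {
        pkt.extract(hdr.udp);
        transition select(hdr.udp.dst_port) {
            UDP_PORT_VXLAN: parse_vxlan;
            default:        accept;
        }
    }
    state parse_vxlan {
        pkt.extract(hdr.vxlan);       // 303: tunnel encapsulation type
        transition accept;            // (inner headers omitted here)
    }
}
```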
The technical solution provided by the embodiment of the present application is described below taking a search scenario as an example:
as shown in fig. 5, a first container group 111 provides a search service and a second container group 112 provides a database service. When a user needs to search for information, a search request may be sent through the client 40. After the search request reaches the first container group 111, the first container group 111 generates a database query message according to the search request and sends it through the first virtual network card interface 113 to the second virtual network card interface 114, from which it enters the message forwarding pipeline loaded in the first programmable logic device. After processing, the message forwarding pipeline sends the database query message through the fourth virtual network card interface to the third virtual network card interface, and the message then enters the second container group 112. After receiving the database query message, the second container group 112 executes the database query operation, obtains the query result, and returns the query result to the first container group 111 along the original path. After receiving the query result, the first container group 111 returns the search result to the client 40.
If the prior art is adopted, as shown in fig. 6, the database query message first has to enter the pipeline, then leave the pipeline and enter the Kube-proxy component; the Kube-proxy component performs the service back-end selection operation and then puts the message back into the pipeline, which finally forwards it to the second container group 112. As can be seen from fig. 6, the forwarding path of the message is long, and so is the delay.
In summary, in the technical solution provided by the embodiment of the present application, the pipeline of the Kubernetes cloud-native container network data path is described in P4, covering basic network scenarios such as overlay networking, overlay-to-overlay intercommunication, overlay-to-underlay intercommunication, and service access. By virtue of the capability of P4, the data plane is made programmable and can be controlled by software programming; this makes flexible hardware offloading and software-hardware combination possible in container network scenarios, and improves message forwarding performance; it gives the container network multi-back-end capability, with software and hardware switched on demand; and it reduces development cost and cycle time. In addition, the Kube-proxy component can be used flexibly: two pipeline branches are provided, one integrating Kube-proxy and one independent of Kube-proxy, which can be selected as required. In a practical traffic scenario, the components that manage the pipeline may rely on the services that use the pipeline, forming a circular dependency, so integrating Kube-proxy is necessary. However, Kube-proxy performs poorly and cannot be managed in a unified way, so supporting both branches at the same time improves the universality of the solution.
Fig. 7 shows a schematic structural diagram of a message forwarding apparatus according to an embodiment of the present application. As shown in fig. 7, the message forwarding apparatus includes a memory 1101 and a processor 1102. The memory 1101 may be configured to store various other data to support operations on the message forwarding device. Examples of such data include instructions for any application or method operating on the message forwarding device. The memory 1101 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The memory 1101 is used for storing programs;
the processor 1102 is coupled to the memory 1101, and configured to execute the program stored in the memory 1101, so as to implement the message processing method provided by each of the above method embodiments.
Further, as shown in fig. 7, the message forwarding apparatus further includes: a communication component 1103, a display 1104, a power component 1105, an audio component 1106, and the like. Only some components are shown schematically in fig. 7, which does not mean that the message forwarding apparatus includes only the components shown in fig. 7.
The message forwarding device can be an electronic device. Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a computer, can implement the steps or functions of the message processing method provided by each of the above method embodiments.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (14)

1. A message processing method is characterized by comprising the following steps:
acquiring a message to be forwarded; the message to be forwarded contains a destination IP address;
in the non-kernel environment, inquiring a first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address;
in a non-kernel environment, selecting a target back-end container group IP address from the back-end container group IP addresses to replace the destination IP address in the message to be forwarded;
and forwarding the replaced message to be forwarded.
2. The method of claim 1, wherein a message processing pipeline is built in the non-kernel environment; the method comprises the following steps:
in the non-kernel environment, executing the message processing pipeline to implement: the step of querying the first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address, and the step of selecting the target back-end container group IP address from the plurality of back-end container group IP addresses to replace the destination IP address in the message to be forwarded.
3. The method of claim 2, wherein the message processing pipeline is written in the protocol-independent packet processor language P4.
4. The method of claim 3, wherein the first match table comprises a first match action table;
in a non-kernel environment, querying a first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address, comprising:
in a non-kernel environment, inquiring the first matching action table to obtain a target action matched with the destination IP address;
in the non-kernel environment, executing the target action to obtain a plurality of back-end container group IP addresses matched with the destination IP address.
5. The method of any of claims 1 to 4, wherein the non-kernel environment is located in a programmable logic device.
6. The method of any of claims 1 to 4, wherein before querying a first matching table in a non-kernel environment to obtain a plurality of back-end container group IP addresses matched with the destination IP address, the method further comprises:
determining tuple information of the message to be forwarded;
in a non-kernel environment, inquiring a connection tracking table to determine whether a target back-end container group IP address matched with the tuple information is stored in the connection tracking table;
and when no target back-end container group IP address matched with the tuple information is stored in the connection tracking table, triggering the step of querying a first matching table in the non-kernel environment to obtain a plurality of back-end container group IP addresses matched with the destination IP address.
7. The method of claim 6, further comprising:
and when the connection tracking table stores the target back-end container group IP address matched with the tuple information, replacing the destination IP address in the message to be forwarded with the target back-end container group IP address.
8. The method of any of claims 1 to 4, further comprising:
determining whether the destination IP address is a service IP address or not according to a service IP table;
and when the destination IP address is a service IP address, executing the step of inquiring a first matching table in the non-kernel environment to obtain a plurality of back-end container group IP addresses matched with the destination IP address.
9. The method of claim 8, further comprising:
when the destination IP address is not the service IP address, determining a source IP address in the message to be forwarded;
in a non-kernel environment, inquiring a reverse connection tracking table to obtain a target service IP address matched with tuple information of the message to be forwarded; and replacing the source IP address in the message to be forwarded with the target service IP address.
10. The method of any of claims 1 to 4, further comprising:
determining a target processing environment from a kernel environment and a non-kernel environment according to user configuration information;
when the target processing environment is a non-kernel environment, executing the step of querying a first matching table in the non-kernel environment to obtain a plurality of back-end container group IP addresses matched with the destination IP address, and the step of selecting the target back-end container group IP address from the plurality of back-end container group IP addresses in the non-kernel environment to replace the destination IP address in the message to be forwarded.
11. The method of claim 10, further comprising:
when the target processing environment is a kernel environment, querying a first matching table in the kernel environment to obtain a plurality of back-end container group IP addresses matched with the destination IP address; and in the kernel environment, selecting a target back-end container group IP address from the back-end container group IP addresses to replace the destination IP address in the message to be forwarded.
12. A message processing system, the system comprising: a plurality of nodes; at least one container group is arranged on the node;
the node is configured to:
acquiring a message to be forwarded; the message to be forwarded contains a destination IP address;
in the non-kernel environment, inquiring a first matching table to obtain a plurality of back-end container group IP addresses matched with the destination IP address;
in a non-kernel environment, selecting a target back-end container group IP address from the back-end container group IP addresses to replace the destination IP address in the message to be forwarded;
and forwarding the replaced message to be forwarded.
13. A message processing apparatus, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled to the memory, is configured to execute the program stored in the memory to implement the message processing method according to any one of claims 1 to 11.
14. A computer-readable storage medium storing a computer program, wherein the computer program is capable of implementing the message processing method according to any one of claims 1 to 11 when executed by a computer.
CN202211262426.5A 2022-10-14 2022-10-14 Message processing method, system, device and storage medium Pending CN115941602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211262426.5A CN115941602A (en) 2022-10-14 2022-10-14 Message processing method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211262426.5A CN115941602A (en) 2022-10-14 2022-10-14 Message processing method, system, device and storage medium

Publications (1)

Publication Number Publication Date
CN115941602A true CN115941602A (en) 2023-04-07

Family

ID=86653230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211262426.5A Pending CN115941602A (en) 2022-10-14 2022-10-14 Message processing method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN115941602A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination