CN117240935A - Data plane forwarding method, device, equipment and medium based on DPU - Google Patents

Data plane forwarding method, device, equipment and medium based on DPU

Info

Publication number
CN117240935A
CN117240935A (application number CN202311139383.6A)
Authority
CN
China
Prior art keywords: data, forwarded, dpdk, dpu, data plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311139383.6A
Other languages
Chinese (zh)
Inventor
梅澳
秦阳
李玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yusur Technology Co ltd filed Critical Yusur Technology Co ltd
Priority to CN202311139383.6A priority Critical patent/CN117240935A/en
Publication of CN117240935A publication Critical patent/CN117240935A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a data plane forwarding method, a device, equipment and a medium based on a DPU (Data Processing Unit), which improve the performance and throughput of data plane applications and are suitable for low-latency DPU scenarios. The DPDK directly takes over the network card, so that data packets can be processed rapidly before reaching the data plane, reducing per-packet processing latency; this makes the method well suited to low-latency DPU scenarios. A DPDK-based user-mode protocol stack replaces the Linux kernel protocol stack: processing bypasses the operating-system kernel and takes place in user space, with direct access to the network and physical storage hardware, significantly improving the performance and throughput of data plane applications.

Description

Data plane forwarding method, device, equipment and medium based on DPU
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data plane forwarding method, device, equipment, and medium based on a DPU.
Background
Although the Linux kernel protocol stack performs well under most common network workloads, under high load, high concurrency, or large-scale data processing in a DPU scenario, performance bottlenecks arise from lock contention, memory access latency, and similar factors, and the low-latency requirement cannot be met under large-scale data pressure. When processing a packet, the Linux kernel protocol stack must trigger interrupts, memory copies, context switches, and other actions; these increase resource consumption and reduce operating-system performance, leading to increased latency, reduced throughput, and performance bottlenecks. Constrained by the Linux kernel protocol stack, the real-time processing requirements of high-speed traffic in a large-scale data center under a DPU scenario cannot be met.
Disclosure of Invention
In view of the above, the present application provides a data plane forwarding method, apparatus, device and medium based on a DPU, which can improve the performance and throughput of data plane applications and are suitable for low-latency DPU scenarios.
In order to achieve the above purpose, the technical scheme of the application is as follows:
the data plane forwarding method based on the DPU realizes forwarding of data to be forwarded on the data plane, wherein the DPU comprises an SOC and a network processor;
on the SOC side, configuring a DPDK to take over a corresponding network card, processing data to be forwarded from a client in a user space based on a user state protocol stack of the DPDK, and transmitting the data to be forwarded to a network processor;
and the network processor transmits the received data to be forwarded to the server to complete forwarding.
The processing of the data to be forwarded in user space by the DPDK-based user-mode protocol stack comprises the following steps:
the DPDK receives a packet containing the data to be forwarded, which reaches the user-mode protocol stack;
the data to be forwarded, which accords with the message rule, reaches a service grid data plane through NAT conversion;
the service grid data plane matches the destination port and the IP address according to the configuration information, establishes connection with the service end, and acquires the response data of the request;
and transmitting the data to be forwarded to the network processor according to the response data.
The response data is translated back into the source IP address and corresponding information according to the NAT mapping.
When the DPDK-based user-mode protocol stack processes data to be forwarded in user space, the user-mode protocol stack distributes the packets of the data to be forwarded and judges whether the message rule is met; if so, the data to be forwarded reaches the service grid data plane through NAT translation; otherwise, the data to be forwarded is transmitted to the network processor via the DPDK according to the routing table.
When the DPDK receives data to be forwarded for packet distribution, this is implemented through huge-page memory, a memory pool, zero copy, core binding, and a polling-based packet transmit/receive mechanism.
The DPDK-based user-mode protocol stack transparently hijacks the specified client traffic data into the service grid data plane for transparent proxying, according to custom-configured message rules.
The application also provides a DPU-based data plane forwarding device for data to be forwarded, wherein the DPU comprises an SOC and a network processor, and the SOC comprises a DPDK, a user-mode protocol stack, and a service grid data plane;
wherein, the DPDK is configured to take over the corresponding network card; the user state protocol stack processes the data to be forwarded in the user space and transmits the data to be forwarded to the network processor;
and the network processor transmits the received data to be forwarded to a server.
Wherein, the DPDK receives the data to be forwarded to carry out message distribution; in the user mode protocol stack, the data to be forwarded, which accords with the message rule, reaches a service grid data plane through NAT conversion; the service grid data plane is used for matching a destination port and an IP address according to the configuration information, establishing connection with the server to obtain request response data, and transmitting data to be forwarded to the network processor through the DPDK according to the response data; and the data to be forwarded which does not accord with the message rule is directly transmitted to the network processor through the DPDK by the routing table.
The application also provides an electronic device, which comprises a processor and a memory for storing instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute the instructions to implement the DPU-based data plane forwarding method of the present application.
The present application also provides a computer readable storage medium storing a computer program for executing the DPU-based data plane forwarding method of the present application.
The beneficial effects are that:
1. In the application, the DPDK directly takes over the network card, so that data packets can be processed rapidly before reaching the data plane, reducing per-packet processing latency; this makes the method well suited to low-latency DPU scenarios. A DPDK-based user-mode protocol stack replaces the Linux kernel protocol stack: processing bypasses the operating-system kernel and takes place in user space, with direct access to the network and physical storage hardware, significantly improving the performance and throughput of data plane applications.
2. In the preferred embodiment of the application, the DPDK-based user-mode protocol stack can transparently hijack the specified traffic into the centralized data plane for transparent proxying according to user-defined configuration, thereby facilitating the distribution management of traffic. Hijacking the traffic to the data plane in the DPU for transparent proxying saves host resources and reduces the computational consumption of the host CPU; by optimizing the distribution of the host processor's computing resources, system performance and stability are further improved.
3. In the preferred embodiment of the application, the DPDK configures huge-page memory, which reduces page-table overhead, better utilizes the CPU cache, improves memory performance, and provides faster CPU access.
Drawings
Fig. 1 is a schematic diagram of a Linux kernel compared with a DPDK-based user mode protocol stack flow processing procedure in an embodiment of the present application.
Fig. 2 is a schematic flow chart of a data plane forwarding method based on a DPU according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of processing data to be forwarded in a user space by using a DPDK-based user mode protocol stack according to an embodiment of the present application.
Fig. 4 is a schematic diagram of judging whether the judgment accords with the message rule in the user mode protocol stack according to the embodiment of the present application.
Fig. 5 is a schematic flow chart of a data plane forwarding method based on a DPU according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a data plane forwarding device architecture based on a DPU according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The application will now be described in detail by way of example with reference to the accompanying drawings.
The Linux kernel protocol stack is the set of software components in the Linux operating system responsible for handling network communications. The Linux operating system enables network connectivity, data transfer, and network communication by supporting different network protocols and functions such as routing, firewalls, Network Address Translation (NAT), and flow control. It also provides a set of APIs (such as the socket API) that allow application programs to communicate over the network, thereby providing reliable and efficient network communication capability. The Linux kernel protocol stack is a huge and complex software system involving many protocols and functions, which increases the code complexity of the kernel protocol stack and the difficulty of development and maintenance.
DP (Data Plane) refers to all functions and processes that forward data packets or data frames from one interface to another according to control-plane logic. The data plane consists of a routing table, a forwarding table, and routing logic; data packets traverse the router, and the input and output of data frames are accomplished based on control-plane logic. The data plane is also called the forwarding plane. In practical applications, for some critical services, data plane forwarding must be guaranteed to complete within a fixed time limit. The traditional manner of receiving a data packet in the Linux kernel protocol stack is interrupt-driven: after receiving a data packet, the network card driver notifies the CPU through an interrupt, and the CPU then copies the data and hands it to the kernel protocol stack. After receiving the interrupt signal, the processor interrupts the current task and jumps to the corresponding interrupt handler. Many operations are involved in this process, and the triggering and handling of interrupts introduce latency into the operating system. Under high-load, high-concurrency scenarios a large number of CPU interrupts are generated, reducing the real-time responsiveness of the operating system; if the logic of an interrupt handler is too complex, it occupies excessive processor resources and prevents the CPU from running other programs.
In addition, a context switch in the Linux kernel saves the context information of the current task and loads the context information of the new task, and managing task scheduling and state information requires additional overhead. When a large number of tasks need to be switched, task response time is prolonged; frequent context switching causes scheduling imbalance among tasks, further affecting the responsiveness of the operating system, and raises data-consistency issues. Practical application servers generally adopt paged virtual memory with a default page size of 4 KB. With such small pages, a processor with a large memory space generates a huge number of page-mapping entries. Because the cache space of the TLB (Translation Lookaside Buffer) is limited, its mapping entries are frequently replaced, producing a large number of TLB misses and degrading operating-system performance.
The DPU (Data Processing Unit, i.e. a dedicated data processor) is an SoC (System on Chip) running a Linux operating system; its main objective is to meet specialized computing requirements on the network side by taking over the network, storage, and security acceleration tasks originally borne by the CPU, thereby optimizing and improving data-center performance. In the Linux kernel, the network card driver runs in kernel mode; after receiving a data packet, the driver processes it through the kernel protocol stack and copies it into the user-mode application-layer buffer, and the time consumed by this copying exceeds half of the packet-processing flow. In a large-scale data-center scenario with massive data volumes, a large number of read and write operations and frequent copy operations take place, and enough memory must be allocated to store both source and destination data, causing heavy memory occupation. With limited memory and many data streams processed simultaneously, this leads to problems such as operating-system blocking; in a DPU high-speed traffic scenario, large-scale, high-frequency memory copy operations strongly affect the efficiency, responsiveness, and latency of the operating system, so that its performance cannot meet service requirements.
It can be seen that the DPU is a new generation of computing chip that is data-centric and I/O-intensive, supports virtualization of the infrastructure resource layer using software-defined technology routes, and has the advantages of improving computing-system efficiency, reducing the total cost of ownership of the overall system, improving data-processing efficiency, and reducing the performance loss of other computing chips. The embodiments described below with reference to the drawings provide a data plane forwarding method and apparatus based on a DPU; the method may be performed by a DPU-based data plane forwarding apparatus, which may be implemented in software and/or hardware and may generally be integrated in an electronic device.
In the embodiment of the application, as shown in Fig. 1, the DPDK-based user-mode protocol stack replaces the Linux kernel protocol stack, bypasses the operating-system kernel to process packets in user space, and directly accesses the network and physical storage hardware. This saves CPU interrupt time and memory-copy time, avoids the performance bottleneck caused by the Linux kernel protocol stack, accelerates traffic circulation for micro-services, reduces operating-system latency, and significantly improves the performance and throughput of data plane applications.
Fig. 2 is a schematic flow chart of a data plane forwarding method based on a DPU according to an embodiment of the present disclosure. As shown in fig. 2, the forwarding method includes the following steps:
step 11, on the SOC (System On Chip) side, configuring a DPDK (Data Plane Development Kit) to take over a corresponding network card, bypassing the kernel protocol stack to process data;
specifically, according to the name of the host network card, the vfio-pci driver of dpdk is mounted, the network card is disabled, the script file is used for binding the network card into the designated vifo-pci driver, and the network card is enabled to complete binding.
Step 12, processing the data to be forwarded from the client in the user space by using a user mode protocol stack based on DPDK;
step 13, transmitting the data to be forwarded to a network processor;
and step 14, the network processor transmits the received data to be forwarded to a server side to complete forwarding.
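Steps 11-14 can be illustrated with a small simulation of the forwarding pipeline: the user-mode stack processes the packet entirely in user space and hands it to the network processor, which delivers it to the server side. All class and function names below are assumptions made for illustration and do not appear in the patent.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

class UserModeStack:
    """Simulates the DPDK user-mode protocol stack on the SOC side."""
    def process(self, pkt: Packet) -> Packet:
        # Step 12: process the packet in user space, bypassing the
        # kernel protocol stack (a no-op in this sketch).
        return pkt

class NetworkProcessor:
    """Simulates the DPU's network processor."""
    def __init__(self):
        self.delivered = []
    def forward_to_server(self, pkt: Packet):
        # Step 14: transmit the data to be forwarded to the server side.
        self.delivered.append(pkt)

def forward(pkt: Packet, stack: UserModeStack, np_: NetworkProcessor):
    processed = stack.process(pkt)      # step 12
    np_.forward_to_server(processed)    # steps 13-14

np_ = NetworkProcessor()
forward(Packet("10.0.0.2", "10.0.0.9", b"hello"), UserModeStack(), np_)
print(len(np_.delivered))  # → 1
```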
Specifically, fig. 3 is a schematic diagram illustrating a flow of processing data to be forwarded in a user space by using a DPDK-based user mode protocol stack according to an embodiment of the present disclosure. As shown in fig. 3, the user mode protocol stack processes data to be forwarded in a user space, and includes the following steps:
step 301: the DPDK receives a message containing the forwarding data and reaches a user state protocol stack;
step 302: the data to be forwarded, which accords with the message rule, reaches the service grid data plane through NAT (Network Address Translation, a network protocol used to translate IP addresses between different networks, allowing multiple devices to share a common IP address);
step 303: the service grid data plane matches the destination port and the IP address according to the configuration information, establishes connection with the service end, and acquires the response data of the request;
step 304: and transmitting the data to be forwarded to the network processor according to the response data.
In one example, the response data is translated into a source IP address and corresponding information according to NAT.
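The NAT step in steps 302-304 keeps a forward mapping toward the service grid data plane and a reverse mapping used to translate the response back to the original source address, as described in the example above. The table layout, addresses, and ports below are illustrative assumptions.

```python
class NatTable:
    """Minimal NAT mapping sketch: forward and reverse translation."""
    def __init__(self):
        self._fwd = {}   # (src_ip, src_port) -> (nat_ip, nat_port)
        self._rev = {}   # (nat_ip, nat_port) -> (src_ip, src_port)

    def translate_out(self, src, nat):
        # Record the outbound translation in both directions.
        self._fwd[src] = nat
        self._rev[nat] = src
        return nat

    def translate_response(self, nat):
        # Map a response back to the original source IP and port.
        return self._rev[nat]

nat = NatTable()
nat.translate_out(("192.168.1.10", 40000), ("10.0.0.1", 5000))
print(nat.translate_response(("10.0.0.1", 5000)))  # → ('192.168.1.10', 40000)
```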
Fig. 4 is a schematic diagram illustrating whether a message rule is met in a user mode protocol stack according to an embodiment of the present disclosure. As shown in fig. 4, when the user mode protocol stack based on DPDK processes data to be forwarded in the user space, determining whether the user mode protocol stack meets a message rule includes the following steps:
step 401, distributing the message of the data to be forwarded in the user state protocol stack;
step 402, judging whether the data to be forwarded accords with a message rule; if yes, go to step 403, otherwise go to step 404;
step 403, the data to be forwarded reaches the service grid data plane through NAT translation; the service grid data plane matches the destination port and IP address according to the configuration information and establishes a connection with the server to obtain the request response data; according to the response data, the data to be forwarded is transmitted to the network processor via the DPDK according to the routing table, and the processing of the data to be forwarded at this stage ends;
step 404, the data to be forwarded is transmitted to the network processor via the DPDK through the routing table.
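The dispatch in steps 401-404 can be sketched as follows: packets matching the configured message rule take the NAT path to the service grid data plane, while all others go straight to the network processor via the routing table. The rule fields and port numbers are illustrative assumptions, not values from the patent.

```python
MESH_RULES = [{"dst_port": 8080}]   # hypothetical custom-configured message rule

def matches_rule(pkt: dict) -> bool:
    # Step 402: judge whether the packet accords with a message rule.
    return any(pkt["dst_port"] == r["dst_port"] for r in MESH_RULES)

def dispatch(pkt: dict) -> str:
    if matches_rule(pkt):
        # Step 403: NAT translation, then the service grid data plane.
        return "nat->service-grid-data-plane"
    # Step 404: routing table, then the network processor via the DPDK.
    return "routing-table->network-processor"

print(dispatch({"dst_port": 8080}))  # → nat->service-grid-data-plane
print(dispatch({"dst_port": 22}))    # → routing-table->network-processor
```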
Fig. 5 shows a flowchart of a data plane forwarding method based on a DPU according to an embodiment of the present disclosure. As shown in fig. 5, the forwarding method includes the steps of:
step 51, configuring a DPDK to take over a corresponding network card at the SOC side, and bypassing a kernel protocol to process data;
step 52, the DPDK receives the packet with the data to be forwarded from the client, which reaches the user-mode protocol stack;
step 53, the message distribution is carried out on the data to be forwarded in the user mode protocol stack;
step 54, judging whether the data to be forwarded accords with the message rule; if yes, go to step 55, otherwise go to step 56;
step 55, the data to be forwarded is converted by NAT to reach the service grid data plane, and step 57 is executed;
step 56, transmitting the data to be forwarded to the network processor through the DPDK by the routing table, and executing step 59;
step 57, the service grid data plane matches the destination port and the IP address according to the configuration information, establishes connection with the service end, and acquires the response data of the request;
and step 58, transmitting the data to be forwarded to the network processor through the DPDK by the routing table according to the response data.
And step 59, the network processor transmits the received data to be forwarded to the server side to complete forwarding.
In some embodiments, the DPDK configures huge-page memory, which reduces page-table overhead, better utilizes the CPU cache, improves memory performance, and provides faster CPU access.
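The page-table saving from huge pages can be shown with back-of-the-envelope arithmetic: mapping the same buffer with 2 MiB pages needs far fewer entries, and therefore fewer TLB slots, than with default 4 KiB pages. The 1 GiB buffer size below is an arbitrary assumption.

```python
BUF = 1 << 30          # 1 GiB of packet-buffer memory (illustrative)
PAGE_4K = 4 * 1024     # default Linux page size
PAGE_2M = 2 * 1024 * 1024  # a common huge-page size

entries_4k = BUF // PAGE_4K   # page-table entries with 4 KiB pages
entries_2m = BUF // PAGE_2M   # page-table entries with 2 MiB pages
print(entries_4k, entries_2m, entries_4k // entries_2m)  # → 262144 512 512
```

With 512x fewer entries, the limited TLB can cover the whole buffer, avoiding the frequent TLB misses described in the background section.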
In some embodiments, in order to improve the performance of the application program and the whole operating system, when the DPDK receives data to be forwarded for packet distribution, this is implemented through huge-page memory, a memory pool, zero copy, core binding, and a polling-based packet transmit/receive mechanism.
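Two of the techniques named above can be simulated in miniature: a memory pool of buffers allocated once up front (no per-packet allocation), drained by a polling receive loop that pulls bursts of packets without interrupts. The queue, pool, and burst sizes are arbitrary assumptions, and the copy shown is a sketch rather than DPDK's actual zero-copy path.

```python
from collections import deque

class MemPool:
    """Fixed pool of reusable buffers, allocated once at startup."""
    def __init__(self, n, bufsize=2048):
        self._free = deque(bytearray(bufsize) for _ in range(n))
    def get(self):
        return self._free.popleft()
    def put(self, buf):
        self._free.append(buf)

def poll_rx(rx_queue, pool, burst=32):
    """Busy-poll the RX queue, pulling up to `burst` packets per call."""
    out = []
    while rx_queue and len(out) < burst:
        data = rx_queue.popleft()
        buf = pool.get()          # reuse a preallocated buffer
        buf[:len(data)] = data    # fill the pool buffer (sketch only)
        out.append(buf)
    return out

pool = MemPool(64)
rxq = deque([b"pkt%d" % i for i in range(5)])
batch = poll_rx(rxq, pool)
print(len(batch))  # → 5
```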
In some embodiments, the DPDK-based user-mode protocol stack may configure message rules according to user definitions, transparently hijack the specified client traffic data into the service grid data plane according to those rules, and act as a transparent proxy. The service grid data plane is a centralized data plane, which facilitates the distribution management of traffic. Hijacking the traffic to the data plane in the DPU for transparent proxying saves host resources and reduces the computational consumption of the host CPU; by optimizing the distribution of the host processor's computing resources, system performance and stability are further improved.
As can be seen from the above description, the data plane forwarding method based on the DPU provided by the embodiments of the present application directly takes over the network card through the DPDK, so that the data packet can be rapidly processed before reaching the data plane, and the processing delay of the data packet is reduced, which is very suitable for the low delay scenario of the DPU.
The DPU-based data plane forwarding device of the present application may be implemented in software and/or hardware and may generally be integrated in an electronic device. Taking client traffic accessing a server as an example, the data to be forwarded is the client traffic. The architecture of the DPU-based data plane forwarding device provided by an embodiment of the present application is shown in fig. 6: the data plane forwarding device includes a DPU and a service grid control plane; the DPU comprises an SOC (System On Chip) and a network processor, the SOC comprises a DPDK, a user-mode protocol stack, and a service grid data plane; and the service grid control plane comprises the client and the server.
Wherein, the DPDK is configured to take over the corresponding network card; the user state protocol stack is used for distributing messages for receiving client flow through the DPDK, judging whether the message rule is met, if so, enabling the client flow to reach a service grid data plane through NAT conversion, otherwise, directly transmitting data to be forwarded to the network processor through the DPDK through the routing table; the service grid data plane is used for matching the destination port and the IP address according to the configuration information, establishing connection with the server to obtain request response data, and transmitting the data to be forwarded to the network processor through the DPDK according to the response data.
Specifically, the user state protocol stack transmits data to be forwarded to a network processor in the DPU through a DPDK through a routing table; the network processor transmits the data to be forwarded to the server.
The DPU-based data plane forwarding device provided by the embodiment of the application can execute the DPU-based data plane forwarding method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method.
The embodiment of the present application further provides an electronic device, and fig. 7 shows the structure of the electronic device provided by the embodiment of the present application. For example, the electronic device 70 may include a processor 71, a memory 72, and a transmission device 73, where the processor is a DPU configured to execute the DPU-based data plane forwarding method mentioned in the foregoing embodiments. The processor and the memory may be connected by a bus or in other manners; connection through a bus is taken as an example. The transmission device may be connected to the processor and the memory in a wired or wireless manner. The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the data plane forwarding method in the embodiments of the present application. The processor executes various functional applications and data processing by running the non-transitory software programs, instructions, and modules stored in the memory, i.e., implements the data plane forwarding method in the method embodiments described above. The memory may include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created by the processor, etc. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory may optionally include memory located remotely from the processor, and the remote memory may be connected to the processor through a network.
Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. One or more modules are stored in the memory and, when executed by the processor, perform the data plane forwarding method of the embodiments.
As another aspect, the present application also provides a computer-readable storage medium, which may be a computer-readable storage medium contained in the apparatus described in the above embodiments, or a standalone computer-readable storage medium that is not assembled into a device. The computer-readable storage medium may be a tangible storage medium such as random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a floppy disk, a hard disk, a removable storage disk, a CD-ROM, or any other form of storage medium known in the art. The computer-readable storage medium stores one or more programs for use by one or more processors in performing the data plane forwarding method described in the present application.
In summary, the above embodiments are only preferred embodiments of the present application, and are not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. The data plane forwarding method based on the DPU is characterized in that forwarding of data to be forwarded on the data plane is realized based on the DPU, and the DPU comprises an SOC and a network processor;
on the SOC side, configuring a DPDK to take over a corresponding network card, processing data to be forwarded from a client in a user space based on a user state protocol stack of the DPDK, and transmitting the data to be forwarded to a network processor;
and the network processor transmits the received data to be forwarded to the server to complete forwarding.
2. The method of claim 1, wherein the processing of the data to be forwarded in user space by the DPDK-based user mode protocol stack comprises:
the DPDK receives a message containing the forwarding data and reaches a user state protocol stack;
the data to be forwarded, which accords with the message rule, reaches a service grid data plane through NAT conversion;
the service grid data plane matches the destination port and the IP address according to the configuration information, establishes connection with the service end, and acquires the response data of the request;
and transmitting the data to be forwarded to the network processor according to the response data.
3. The method of claim 2, wherein the response data is translated into the source IP address and corresponding information according to NAT.
4. A method according to any one of claims 1-3, characterized in that when the DPDK-based user mode protocol stack processes data to be forwarded in user space, the data to be forwarded is distributed in the user mode protocol stack to determine whether the packet rule is met, if so, the data to be forwarded is NAT-converted to the service grid data plane, otherwise, the data to be forwarded is transmitted to the network processor via the DPDK through the routing table.
5. A method according to any one of claims 1-3, wherein the DPDK is implemented by large page memory, memory pool, zero copy, core binding and polling messaging mechanisms when receiving data to be forwarded for message distribution.
6. A method according to any one of claims 1-3, characterized in that the DPDK-based user mode protocol stack transparently hijacks the specified client traffic data into the service grid data plane for transparent proxying according to custom-configured message rules.
7. The DPU-based data plane forwarding device for data to be forwarded is characterized in that the DPU comprises an SOC and a network processor, and the SOC comprises a DPDK, a user state protocol stack and a service grid data plane;
wherein, the DPDK is configured to take over the corresponding network card; the user state protocol stack processes the data to be forwarded in the user space and transmits the data to be forwarded to the network processor;
and the network processor transmits the received data to be forwarded to a server.
8. The apparatus of claim 7, wherein the DPDK receives data to be forwarded for message distribution; in the user mode protocol stack, the data to be forwarded, which accords with the message rule, reaches a service grid data plane through NAT conversion; the service grid data plane is used for matching a destination port and an IP address according to the configuration information, establishing connection with the server to obtain request response data, and transmitting data to be forwarded to the network processor through the DPDK according to the response data; and the data to be forwarded which does not accord with the message rule is directly transmitted to the network processor through the DPDK by the routing table.
9. An electronic device, characterized in that it comprises a processor and a memory for storing instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute them to implement the DPU-based data plane forwarding method of any one of claims 1-6.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the DPU-based data plane forwarding method of any one of claims 1-6.
CN202311139383.6A 2023-09-05 2023-09-05 Data plane forwarding method, device, equipment and medium based on DPU Pending CN117240935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311139383.6A CN117240935A (en) 2023-09-05 2023-09-05 Data plane forwarding method, device, equipment and medium based on DPU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311139383.6A CN117240935A (en) 2023-09-05 2023-09-05 Data plane forwarding method, device, equipment and medium based on DPU

Publications (1)

Publication Number Publication Date
CN117240935A true CN117240935A (en) 2023-12-15

Family

ID=89097695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311139383.6A Pending CN117240935A (en) 2023-09-05 2023-09-05 Data plane forwarding method, device, equipment and medium based on DPU

Country Status (1)

Country Link
CN (1) CN117240935A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117539664A (en) * 2024-01-08 2024-02-09 北京火山引擎科技有限公司 Remote procedure call method, device and storage medium based on DPU
CN117539664B (en) * 2024-01-08 2024-05-07 北京火山引擎科技有限公司 Remote procedure call method, device and storage medium based on DPU

Similar Documents

Publication Publication Date Title
US11372802B2 (en) Virtual RDMA switching for containerized applications
US7996569B2 (en) Method and system for zero copy in a virtualized network environment
US20220224657A1 (en) Technologies for accelerating edge device workloads
US10872056B2 (en) Remote memory access using memory mapped addressing among multiple compute nodes
US9450780B2 (en) Packet processing approach to improve performance and energy efficiency for software routers
US10114792B2 (en) Low latency remote direct memory access for microservers
US9813283B2 (en) Efficient data transfer between servers and remote peripherals
RU2637428C2 (en) Scalable direct data exchange between nodes via express type peripheral components interconnection bus (pcie)
CN112422615A (en) Communication method and device
US20220391341A1 (en) Cross bus memory mapping
US20240152290A1 (en) Data writing method, data reading method, apparatus, device, system, and medium
CN117240935A (en) Data plane forwarding method, device, equipment and medium based on DPU
CN114640716A (en) Cloud network cache acceleration system and method based on fast network path
Abbasi et al. A performance comparison of container networking alternatives
Wang et al. vSocket: virtual socket interface for RDMA in public clouds
US11283723B2 (en) Technologies for managing single-producer and single consumer rings
CN109698845B (en) Data transmission method, server, unloading card and storage medium
CN113726636A (en) Data forwarding method and system of software forwarding equipment and electronic equipment
CN106790162B (en) Virtual network optimization method and system
US20230109396A1 (en) Load balancing and networking policy performance by a packet processing pipeline
US20180091447A1 (en) Technologies for dynamically transitioning network traffic host buffer queues
CN115766729A (en) Data processing method for four-layer load balancing and related device
EP4187868A1 (en) Load balancing and networking policy performance by a packet processing pipeline
US20230185624A1 (en) Adaptive framework to manage workload execution by computing device including one or more accelerators
US11792139B2 (en) Efficient packet reordering using hints

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination