CN116319303A - Network card virtualization method based on DPU cross-card link aggregation


Info

Publication number
CN116319303A
CN116319303A (application CN202310166659.3A)
Authority
CN
China
Prior art keywords
dpu, queue, virtual, virtual interface, interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310166659.3A
Other languages
Chinese (zh)
Inventor
李玮
田芸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yusur Technology Co ltd filed Critical Yusur Technology Co ltd
Priority to CN202310166659.3A
Publication of CN116319303A
Legal status: Pending (current)

Classifications

    • H04L 41/0803 Configuration setting (configuration management of networks or network elements)
    • H04L 1/22 Arrangements for detecting or preventing errors in the information received using redundant apparatus to increase reliability
    • H04L 47/125 Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • Y02D 30/50 Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to a network card virtualization method based on DPU cross-card link aggregation. The method comprises: combining a first virtual interface with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface, wherein the first virtual interface is provided by a first DPU and the second virtual interface is provided by a second DPU; and implementing message transmission between a virtual machine and the first DPU and/or the second DPU based on the multi-queue virtual interface. Because the multi-queue virtual interface is configured, messages can still be transmitted through the other DPU even if a single DPU fails, which eliminates the single point of failure and meets the high-availability requirements of a production environment.

Description

Network card virtualization method based on DPU cross-card link aggregation
Technical Field
The disclosure relates to the technical field of communication, in particular to a network card virtualization method based on DPU cross-card link aggregation.
Background
A Data Processing Unit (DPU), also known as a specialized data processor, is the third major computing-power chip in data-center scenarios, after the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU); it provides a computing engine for high-bandwidth, low-latency, data-intensive computing scenarios.
A DPU card divides its physical port into multiple virtual ports for the virtual machines running on the host. Because all of these virtual ports belong to one DPU card, a hardware failure of that card interrupts the network, affects normal communication between the host and the switch, and makes it impossible to guarantee high availability of the communication link.
Disclosure of Invention
In order to solve the above technical problem, the present disclosure provides a network card virtualization method based on DPU cross-card link aggregation, so as to meet the high-availability requirements of a production environment.
In a first aspect, an embodiment of the present disclosure provides a network card virtualization method based on DPU cross-card link aggregation, including:
combining a first virtual interface with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface, wherein the first virtual interface is provided by a first DPU, and the second virtual interface is provided by a second DPU;
based on the multi-queue virtual interface, message transmission between the virtual machine and the first DPU and/or the second DPU is realized.
In some embodiments, the implementing, based on the multi-queue virtual interface, packet transmission between a virtual machine and the first DPU and/or the second DPU includes:
acquiring running state information of the first DPU and the second DPU through a DPU manager, wherein the running state information is used for representing whether the first DPU and/or the second DPU has faults or not;
and based on the running state information, selecting a first queue or a second queue in the multi-queue virtual interface to complete message transmission through a virtual interface driver, wherein the first queue corresponds to the first virtual interface, and the second queue corresponds to the second virtual interface.
In some embodiments, the selecting the first queue or the second queue in the multi-queue virtual interface to complete the message transmission based on the running state information includes:
if it is determined, based on the running state information, that neither the first DPU nor the second DPU has a fault, the virtual machine selects the first queue or the second queue for message transmission based on the bond load balancing configuration, so as to realize traffic load balancing.
In some embodiments, the method further comprises:
if it is determined, based on the running state information, that the first DPU has a fault, selecting the second queue to complete the message transmission; or,
if it is determined, based on the running state information, that the second DPU has a fault, selecting the first queue to complete the message transmission.
In some embodiments, before the implementing, based on the multi-queue virtual interface, packet transmission between a virtual machine and the first DPU and/or the second DPU, the method further comprises:
respectively acquiring link information of the first DPU and/or the second DPU;
based on the link information, enabling a virtual switch to send traffic received on a virtual function agent of the first DPU and/or the second DPU out through the cascade port of the same DPU, and to send traffic received on a cascade port out through the virtual function agent of the same DPU.
In some embodiments, the obtaining link information of the first DPU and/or the second DPU includes:
determining the combination information of the virtual function agent and the cascade port of the first DPU to obtain first link information of a first data unit;
and determining the combination information of the virtual function agent and the cascade port of the second DPU to obtain second link information of a second data unit.
In some embodiments, the method further comprises:
aggregating a first network interface of the first DPU and a second network interface of the second DPU into a logical network port;
based on the logical network port, implementing message transmission between the switch and the first DPU and/or the second DPU.
In a second aspect, an embodiment of the present disclosure provides a network card virtualization device based on DPU cross-card link aggregation, including:
the first acquisition module is used for combining a first virtual interface with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface, wherein the first virtual interface is provided by a first DPU, and the second virtual interface is provided by a second DPU;
and the first transmission module is used for realizing message transmission between the virtual machine and the first DPU and/or the second DPU based on the multi-queue virtual interface.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method according to the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored thereon a computer program for execution by a processor to implement the method of the first aspect.
In a fifth aspect, embodiments of the present disclosure also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement a network card virtualization method based on DPU cross-card link aggregation as described above.
According to the network card virtualization method based on DPU cross-card link aggregation provided by the present disclosure, because the multi-queue virtual interface is configured, messages can still be transmitted through the other DPU via the multi-queue virtual interface even if a single DPU fails, which eliminates the single point of failure and meets the high-availability requirements of a production environment.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a communication system;
fig. 2 is a flowchart of a network card virtualization method based on DPU cross-card link aggregation according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an application scenario provided in an embodiment of the present disclosure;
fig. 4 is a flowchart of a network card virtualization method based on DPU cross-card link aggregation according to another embodiment of the present disclosure;
fig. 5 is a flowchart of a network card virtualization method based on DPU cross-card link aggregation according to another embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a network card virtualization device based on DPU cross-card link aggregation according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
Fig. 1 is a schematic diagram of a communication system. As shown in fig. 1, a Virtual Machine (VM) and a virtual switch (OVS) run in a host, and the host further contains a Data Processing Unit (DPU). The DPU splits its physical port (MAC) into several virtual ports (VFs) for use by the virtual machines running on the host. Each virtual port has a corresponding virtual port proxy (VF rep), forming a (VF, VF rep) combination. The virtual port proxy is used to deliver messages to the virtual switch. The physical port likewise forms a (MAC, uplink) combination with its corresponding cascade port (uplink), which delivers messages to the virtual switch. The virtual switch processes and forwards the received messages.
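For illustration only, the single-DPU layout of fig. 1 can be modeled with a few plain data structures, as in the following C sketch. Nothing here comes from the patent itself; the type and field names (dpu_card, vf_pair, and so on) are invented for the example.

```c
#include <stdio.h>

/* Hypothetical model of the single-DPU layout in fig. 1:
 * each virtual port (VF) is paired with a VF representor (VF rep),
 * and the physical port (MAC) is paired with its cascade port (uplink). */
struct vf_pair {
    int vf_id;      /* virtual port handed to a virtual machine  */
    int vf_rep_id;  /* representor that feeds the virtual switch */
};

struct dpu_card {
    int mac_id;             /* physical port (MAC)            */
    int uplink_id;          /* cascade port toward the switch */
    struct vf_pair vfs[8];  /* (VF, VF rep) combinations      */
    int vf_count;
};

int main(void) {
    struct dpu_card dpu = { .mac_id = 0, .uplink_id = 0, .vf_count = 1 };
    dpu.vfs[0] = (struct vf_pair){ .vf_id = 1, .vf_rep_id = 1 };

    /* Every message path runs through this one card, which is the
     * single point of failure the disclosure sets out to remove. */
    printf("DPU mac%d/uplink%d serves vf%d via vf%d-rep\n",
           dpu.mac_id, dpu.uplink_id, dpu.vfs[0].vf_id, dpu.vfs[0].vf_rep_id);
    return 0;
}
```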
The dashed line in fig. 1 shows one message transmission path. The physical port of the DPU receives a message sent by the switch through port0 and delivers it to the virtual switch through the cascade port (uplink); the virtual switch processes and distributes the packet according to certain rules and, through the virtual port proxies, distributes it to the virtual ports connected to different virtual machines (only one virtual machine and its corresponding virtual port are shown in the figure). In this case every message must pass through the single DPU, so the whole system has a single point of failure: once the DPU suffers a hardware failure, normal message transmission cannot be guaranteed.
In view of this problem, the embodiments of the present disclosure provide a network card virtualization method based on DPU cross-card link aggregation, and the method is described below with reference to specific embodiments.
Fig. 2 is a flowchart of a network card virtualization method based on DPU cross-card link aggregation according to an embodiment of the present disclosure. The method can be applied to the application scenario shown in fig. 3, where the application scenario includes a host (host), a Virtual Machine (VM), a virtual switch (OVS), a first DPU (DPU 1), a second DPU (DPU 2), and a switch. It can be appreciated that the network card virtualization method based on DPU cross-card link aggregation provided by the embodiments of the present disclosure may also be applied in other scenarios.
The network card virtualization method based on DPU cross-card link aggregation shown in fig. 2 is described below with reference to the application scenario shown in fig. 3, and the method includes the following specific steps:
s201, combining the first virtual interface and the second virtual interface through the rear end of the virtual network card to obtain a multi-queue virtual interface.
Wherein the first virtual interface is provided by a first DPU and the second virtual interface is provided by a second DPU.
In connection with fig. 3, a plurality of DPU cards, i.e., a plurality of DPUs, are configured in the host. The embodiments of the present disclosure take two DPU cards, a first DPU (DPU 1) and a second DPU (DPU 2), as an example. It can be understood that, in practical applications, any number of DPUs may be used, and the embodiments of the present disclosure do not limit this.
The virtual network card back end (VF packet) merges the first virtual interface (VF 1) of the first DPU and the second virtual interface (VF 2) of the second DPU into a multi-queue virtual interface (VF) that is directly connected to the Virtual Machine (VM). The multi-queue virtual interface contains a plurality of queues, such as the first queue and the second queue (queue 1 and queue 2) in fig. 3, each corresponding to the virtual interface of one DPU: the first queue corresponds to the first virtual interface of the first DPU, and the second queue corresponds to the second virtual interface of the second DPU.
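As a rough illustration of this step, the multi-queue virtual interface can be pictured as a front-end structure whose queues each point at the backing virtual interface of one DPU. The sketch below is hypothetical and not the disclosed driver; names such as mq_vf, vf_queue and merge_vfs are assumptions made for the example.

```c
#include <stddef.h>

/* One queue of the multi-queue virtual interface, bound to the VF
 * that a particular DPU exposes to the host. */
struct vf_queue {
    int queue_id;    /* queue index seen by the virtual machine (VM) */
    int dpu_id;      /* DPU that backs this queue (1 or 2)           */
    int backing_vf;  /* VF1 on DPU1 or VF2 on DPU2                   */
};

/* The merged interface handed to the VM by the virtual NIC back end. */
struct mq_vf {
    struct vf_queue queues[2];
    size_t nqueues;
};

/* Assemble the multi-queue VF from the two per-DPU virtual interfaces. */
static struct mq_vf merge_vfs(int vf1_on_dpu1, int vf2_on_dpu2) {
    struct mq_vf vf = { .nqueues = 2 };
    vf.queues[0] = (struct vf_queue){ .queue_id = 1, .dpu_id = 1, .backing_vf = vf1_on_dpu1 };
    vf.queues[1] = (struct vf_queue){ .queue_id = 2, .dpu_id = 2, .backing_vf = vf2_on_dpu2 };
    return vf;
}
```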
S202, based on the multi-queue virtual interface, message transmission between the virtual machine and the first DPU and/or the second DPU is achieved.
Based on the multi-queue virtual interface, a message path is established between the virtual machine and the DPUs. As shown in fig. 3, the virtual machine can receive messages from multiple DPUs (the first DPU and the second DPU) and can send messages out via different DPUs (the first DPU or the second DPU); even if a single DPU fails, messages can still be transmitted through the other DPU via the multi-queue virtual interface.
In the embodiments of the present disclosure, a first virtual interface is combined with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface, wherein the first virtual interface is provided by a first DPU and the second virtual interface is provided by a second DPU; message transmission between the virtual machine and the first DPU and/or the second DPU is then implemented based on the multi-queue virtual interface. Even if a single DPU fails, messages can still be transmitted through the other DPU via the multi-queue virtual interface, which eliminates the single point of failure and meets the high-availability requirements of a production environment.
Fig. 4 is a flowchart of a network card virtualization method based on DPU cross-card link aggregation according to another embodiment of the present disclosure. As shown in fig. 4, the method includes the following steps.
S401, combining a first virtual interface with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface.
Wherein the first virtual interface is provided by a first DPU and the second virtual interface is provided by a second DPU.
S402, acquiring running state information of the first DPU and the second DPU through a DPU manager.
Referring to fig. 3, a DPU manager (DPU driver) also runs in the host; it senses the running state of each DPU and reports the collected running state information of each DPU (the first DPU and the second DPU) to the multi-queue virtual interface controller (VF driver) in the Virtual Machine (VM).
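One possible shape for this reporting path is sketched below. It is only an illustration under assumed names (dpu_status, report_fn, dpu_manager_poll); the patent does not specify these interfaces.

```c
#include <stdbool.h>

/* Running state as collected by the host-side DPU manager. */
enum dpu_state { DPU_OK, DPU_FAULT };

struct dpu_status {
    int dpu_id;
    enum dpu_state state;
};

/* Callback installed by the VF driver inside the virtual machine. */
typedef void (*report_fn)(const struct dpu_status *status, int count);

/* The DPU manager polls each card and pushes the result to the VF driver. */
static void dpu_manager_poll(bool dpu1_alive, bool dpu2_alive, report_fn report) {
    struct dpu_status status[2] = {
        { .dpu_id = 1, .state = dpu1_alive ? DPU_OK : DPU_FAULT },
        { .dpu_id = 2, .state = dpu2_alive ? DPU_OK : DPU_FAULT },
    };
    report(status, 2);  /* running state information handed to the VF driver */
}
```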
S403, based on the running state information, selecting a first queue or a second queue in the multi-queue virtual interface through a virtual interface driver to complete message transmission.
The first queue corresponds to the first virtual interface, and the second queue corresponds to the second virtual interface.
Based on the operation state information of each DPU acquired by the DPU manager, it can be determined whether the DPU can operate normally or whether there is a fault.
When a message needs to be sent or received through the multi-queue virtual interface, the DPU that is working normally is first determined from the running state information of each DPU, the queue corresponding to that DPU is then determined, and the message transmission is completed through that queue.
Specifically, as described in the above embodiment, the multi-queue virtual interface includes a plurality of queues, such as the first queue and the second queue (queue 1 and queue 2) in fig. 3, where the first queue corresponds to the first virtual interface of the first DPU and the second queue corresponds to the second virtual interface of the second DPU. The virtual interface corresponding to the selected DPU is determined from this correspondence, and the message transmission is completed.
In some embodiments, if it is determined, based on the running state information, that the first DPU has a fault, the second virtual interface is selected to complete the message transmission; or, if it is determined, based on the running state information, that the second DPU has a fault, the first virtual interface is selected to complete the message transmission.
In some embodiments, if it is determined, based on the running state information, that neither the first DPU nor the second DPU has a fault, the virtual machine selects the first queue or the second queue for packet transmission based on the bond load balancing configuration, so as to implement traffic load balancing.
If neither the first DPU nor the second DPU has a fault, that is, both the first virtual interface and the second virtual interface of the multi-queue virtual interface can transmit messages normally, the virtual machine selects one queue from all available queues based on the bond load balancing configuration. The bond load balancing configuration is an internal decision of the Virtual Machine (VM): it obtains the state of each queue and distributes traffic evenly, per flow, among all available queues. Specifically, an available queue is selected in a load-sharing manner, and messages are then sent out through the different DPUs. Load balancing is built on top of the existing network structure and distributes tasks across multiple processing units; it provides an inexpensive, effective, and transparent way to extend the bandwidth of network devices and servers, increase throughput, strengthen network data processing capability, and improve the flexibility and availability of the network.
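The selection logic described above can be illustrated roughly as follows: use only queues whose backing DPU is healthy, and when both are healthy spread flows across them with a per-flow hash, so that failover falls out of the same rule. This is a sketch under assumed names (select_queue, flow_hash); the actual bond load balancing configuration inside the VM may differ.

```c
#include <stdbool.h>
#include <stdint.h>

/* Health of the DPU backing each queue, as reported by the DPU manager. */
struct queue_health {
    bool queue1_ok;  /* first queue  -> first virtual interface  (DPU 1) */
    bool queue2_ok;  /* second queue -> second virtual interface (DPU 2) */
};

/* Trivial per-flow hash so that packets of one flow stay on one queue. */
static uint32_t flow_hash(uint32_t src_ip, uint32_t dst_ip,
                          uint16_t src_port, uint16_t dst_port) {
    return (src_ip ^ dst_ip) ^ ((uint32_t)src_port << 16 | dst_port);
}

/* Returns 1 or 2 for the queue to use, or 0 if no DPU is available. */
static int select_queue(const struct queue_health *h,
                        uint32_t src_ip, uint32_t dst_ip,
                        uint16_t src_port, uint16_t dst_port) {
    if (h->queue1_ok && h->queue2_ok)  /* load balance across both DPUs */
        return (flow_hash(src_ip, dst_ip, src_port, dst_port) & 1) ? 2 : 1;
    if (h->queue1_ok)                  /* DPU 2 faulty: fail over to queue 1 */
        return 1;
    if (h->queue2_ok)                  /* DPU 1 faulty: fail over to queue 2 */
        return 2;
    return 0;                          /* both DPUs down */
}
```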
In the embodiments of the present disclosure, a first virtual interface is combined with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface; running state information of the first DPU and the second DPU is obtained through a DPU manager; and, based on the running state information, the first queue or the second queue in the multi-queue virtual interface is selected through the virtual interface driver to complete the message transmission. On the basis of eliminating the single point of failure and meeting the high-availability requirements of a production environment, load balancing between different DPUs is achieved, which further improves the effect of the network card virtualization method based on DPU cross-card link aggregation.
Fig. 5 is a flowchart of a network card virtualization method based on DPU cross-card link aggregation according to another embodiment of the present disclosure. As shown in fig. 5, the method includes the following steps.
S501, aggregating the first network interface of the first DPU and the second network interface of the second DPU into a logical network port.
S502, based on the logical network port, implementing message transmission between the switch and the first DPU and/or the second DPU.
Referring to fig. 3, the first DPU and the second DPU each have a corresponding network interface (MAC), connected respectively to the interfaces port 0 and port 1 of the switch. Bond link aggregation is performed between the host and the switch, so that the first network interface of the first DPU and the second network interface of the second DPU are aggregated into one logical network port.
Similarly, the interfaces (port 0 and port 1) of the switch are aggregated by bond link aggregation into a logical interface, and the logical network port of the host communicates with the logical interface of the switch over the bond link.
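For illustration, the host-side logical network port can be pictured as a small bond structure that owns the two physical MACs and picks a working member link for each outgoing frame; it is the physical-link counterpart of the queue selection shown earlier. The names (logical_port, pick_member) are invented, and on a standard Linux host the same idea would normally be realized with kernel bonding rather than hand-written code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Host-side logical network port formed by bonding the two DPU MACs. */
struct member_link {
    int  mac_id;   /* MAC0 on DPU1 or MAC1 on DPU2 */
    bool link_up;  /* physical link / card health  */
};

struct logical_port {
    struct member_link members[2];
};

/* Choose a member link for an outgoing frame: hash across healthy links,
 * which degrades to simple failover when only one link is up. */
static int pick_member(const struct logical_port *bond, uint32_t frame_hash) {
    bool up0 = bond->members[0].link_up;
    bool up1 = bond->members[1].link_up;
    if (up0 && up1)
        return (int)(frame_hash & 1);  /* spread frames over both DPU uplinks */
    if (up0)
        return 0;
    if (up1)
        return 1;
    return -1;                         /* no member available */
}
```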
S503, respectively acquiring link information of the first DPU and/or the second DPU.
Specifically, determining the combination information of the virtual function agent and the cascade port of the first DPU to obtain first link information of a first data unit; and determining the combination information of the virtual function agent and the cascade port of the second DPU to obtain second link information of a second data unit.
S504, based on the link information, enabling the virtual switch to send the traffic received on the virtual function agents of the first DPU and/or the second DPU out through the cascade ports of the same DPU, and to send the traffic received on the corresponding cascade ports out through the virtual function agents of the same DPU.
The virtual switch needs to record the link information of each DPU. The link information consists of the combinations (vf rep, uplink) formed by a virtual function agent (vf rep) and a cascade port (uplink), and is used to manage the message transmission paths. For example, a message received on the virtual function proxy (vf 1 rep) of the first DPU must be sent to the cascade port of the first DPU, and a message received on the virtual function proxy (vf 2 rep) of the second DPU must be sent to the cascade port of the second DPU; similarly, a message received on the cascade port of the first DPU must be sent to the virtual function proxy (vf 1 rep) of the first DPU, and a message received on the cascade port of the second DPU must be sent to the virtual function proxy (vf 2 rep) of the second DPU.
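The per-DPU path constraint above amounts to a small forwarding table keyed by ingress port, as in the C sketch below. The port identifiers and the lookup function are assumptions for illustration; a real deployment would express the same (vf rep, uplink) pairs as flow rules in the virtual switch.

```c
#include <stddef.h>

/* Ports visible to the virtual switch in the two-DPU setup. */
enum sw_port { VF1_REP, VF2_REP, UPLINK1, UPLINK2, PORT_NONE };

struct link_rule {
    enum sw_port ingress;
    enum sw_port egress;
};

/* Keep every message on the DPU it arrived on:
 * vf1 rep <-> uplink1 (first DPU), vf2 rep <-> uplink2 (second DPU). */
static const struct link_rule rules[] = {
    { VF1_REP, UPLINK1 },
    { VF2_REP, UPLINK2 },
    { UPLINK1, VF1_REP },
    { UPLINK2, VF2_REP },
};

static enum sw_port lookup_egress(enum sw_port ingress) {
    for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
        if (rules[i].ingress == ingress)
            return rules[i].egress;
    return PORT_NONE;  /* unknown ingress port: drop */
}
```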
S505, combining the first virtual interface with the second virtual interface through the virtual network card back end to obtain a multi-queue virtual interface.
Wherein the first virtual interface is provided by a first DPU and the second virtual interface is provided by a second DPU.
The virtual network card back end (VF packet) merges the first virtual interface (VF 1) of the first DPU and the second virtual interface (VF 2) of the second DPU into a multi-queue virtual interface (VF) that is directly connected to the Virtual Machine (VM). The multi-queue virtual interface contains a plurality of queues, such as the first queue and the second queue (queue 1 and queue 2) in fig. 3, each corresponding to the virtual interface of one DPU: the first queue corresponds to the first virtual interface of the first DPU, and the second queue corresponds to the second virtual interface of the second DPU.
S506, based on the multi-queue virtual interface, message transmission between the virtual machine and the first DPU and/or the second DPU is achieved.
Based on the multi-queue virtual interface, a message path is established between the virtual machine and the DPUs. As shown in fig. 3, the virtual machine can receive messages from multiple DPUs (the first DPU and the second DPU) and can send messages out via different DPUs (the first DPU or the second DPU); even if a single DPU fails, messages can still be transmitted through the other DPU via the multi-queue virtual interface.
In the embodiments of the present disclosure, the first network interface of the first DPU and the second network interface of the second DPU are aggregated into one logical network port; message transmission between the switch and the first DPU and/or the second DPU is implemented based on the logical network port; the link information of the first DPU and/or the second DPU is acquired respectively; based on the link information, the virtual switch sends traffic received on the virtual function agents of the first DPU and/or the second DPU out through the cascade ports of the same DPU, and sends traffic received on the corresponding cascade ports out through the virtual function agents of the same DPU; the first virtual interface is combined with the second virtual interface through the virtual network card back end to obtain a multi-queue virtual interface; and message transmission between the virtual machine and the first DPU and/or the second DPU is implemented based on the multi-queue virtual interface. In this way, multiple DPUs jointly provide one multi-queue virtual interface for the virtual machine; if a single DPU suffers a hardware failure, the other DPU continues to work normally, and the messages of the failed DPU are migrated to the other DPU for processing, which meets the high-availability requirements of a production environment.
Fig. 6 is a schematic structural diagram of a network card virtualization device based on DPU cross-card link aggregation according to an embodiment of the present disclosure. The device may be the communication system described in the above embodiments, or may be a part or component of the communication device. The device can execute the processing flow provided in the embodiments of the network card virtualization method based on DPU cross-card link aggregation. As shown in fig. 6, the network card virtualization device 60 based on DPU cross-card link aggregation includes: a first acquisition module 61 and a first transmission module 62. The first acquisition module 61 is configured to combine a first virtual interface with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface, where the first virtual interface is provided by a first DPU and the second virtual interface is provided by a second DPU. The first transmission module 62 is configured to implement message transmission between the virtual machine and the first DPU and/or the second DPU based on the multi-queue virtual interface.
Optionally, the first transmission module 62 includes an acquisition unit 621, a selection unit 622; the obtaining unit 621 is configured to obtain, by using a DPU manager, operation state information of the first DPU and the second DPU, where the operation state information is used to characterize whether the first DPU and/or the second DPU has a fault; the selecting unit 622 is configured to select, based on the running state information, a first queue or a second queue in the multi-queue virtual interface to complete message transmission through a virtual interface driver.
Optionally, the selecting unit 622 is configured to, when it is determined, based on the running state information, that no fault exists in both the first DPU and the second DPU, select, by the virtual machine, the first queue or the second queue to perform packet transmission based on bond load balancing configuration, so as to implement traffic load balancing.
Optionally, the selecting unit 622 is further configured to select the second queue to complete the message transmission when it is determined, based on the running state information, that the first DPU has a fault; or to select the first queue to complete the message transmission when it is determined, based on the running state information, that the second DPU has a fault.
Optionally, the network card virtualization device 60 based on DPU cross-card link aggregation further includes a second acquisition module 63 and a second transmission module 64. The second acquisition module 63 is configured to respectively acquire link information of the first DPU and/or the second DPU. The second transmission module 64 is configured to enable, based on the link information, the virtual switch to send traffic received on a virtual function agent of the first DPU and/or the second DPU out through the cascade port of the same DPU, and to send traffic received on a cascade port out through the virtual function agent of the same DPU.
Optionally, the second obtaining module 63 is configured to determine combined information of the virtual function agent and the cascade port of the first DPU, to obtain first link information of the first data unit; and determining the combination information of the virtual function agent and the cascade port of the second DPU to obtain second link information of a second data unit.
Optionally, the network card virtualization device 60 based on DPU cross-card link aggregation further includes an aggregation module 65 and a third transmission module 66. The aggregation module 65 is configured to aggregate the first network interface of the first DPU and the second network interface of the second DPU into a logical network port. The third transmission module 66 is configured to implement message transmission between the switch and the first DPU and/or the second DPU based on the logical network port.
The network card virtualization device based on DPU cross-card link aggregation in the embodiment shown in fig. 6 may be used to implement the technical solution of the above method embodiment, and its implementation principle and technical effects are similar, and are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be the device in which the communication system described in the above embodiments is located. The electronic device provided in the embodiments of the present disclosure can execute the processing flow provided in the embodiments of the network card virtualization method based on DPU cross-card link aggregation. As shown in fig. 7, the electronic device 70 includes: a memory 71, a processor 72, a computer program, and a communication interface 73; the computer program is stored in the memory 71 and is configured to be executed by the processor 72 to implement the network card virtualization method based on DPU cross-card link aggregation described above.
In addition, the embodiment of the disclosure further provides a computer readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the network card virtualization method based on DPU cross-card link aggregation described in the above embodiment.
In addition, the embodiment of the disclosure further provides a computer program product, which comprises a computer program or instructions, wherein the computer program or instructions implement the network card virtualization method based on the DPU cross-card link aggregation as described above when being executed by a processor.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A network card virtualization method based on DPU cross-card link aggregation, the method comprising:
combining a first virtual interface with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface, wherein the first virtual interface is provided by a first DPU, and the second virtual interface is provided by a second DPU;
based on the multi-queue virtual interface, message transmission between the virtual machine and the first DPU and/or the second DPU is realized.
2. The method according to claim 1, wherein said implementing message transmission between a virtual machine and the first DPU and/or the second DPU based on the multi-queue virtual interface comprises:
acquiring running state information of the first DPU and the second DPU through a DPU manager, wherein the running state information is used for representing whether the first DPU and/or the second DPU has faults or not;
and based on the running state information, selecting a first queue or a second queue in the multi-queue virtual interface to complete message transmission through a virtual interface driver, wherein the first queue corresponds to the first virtual interface, and the second queue corresponds to the second virtual interface.
3. The method of claim 2, wherein selecting the first queue or the second queue in the multi-queue virtual interface to complete the message transmission based on the operational status information comprises:
if it is determined, based on the running state information, that neither the first DPU nor the second DPU has a fault, the virtual machine selects the first queue or the second queue for message transmission based on the bond load balancing configuration, so as to realize traffic load balancing.
4. The method according to claim 2, wherein the method further comprises:
if it is determined, based on the running state information, that the first DPU has a fault, selecting the second queue to complete the message transmission; or,
if it is determined, based on the running state information, that the second DPU has a fault, selecting the first queue to complete the message transmission.
5. The method according to claim 1, wherein prior to said implementing a message transfer between a virtual machine and said first DPU and/or said second DPU based on said multi-queue virtual interface, said method further comprises:
respectively acquiring link information of the first DPU and/or the second DPU;
based on the link information, the virtual switch can send the traffic received by the virtual function agents of the first DPU and/or the second DPU from the cascade ports of the same DPU, and the traffic received by the corresponding cascade ports is sent from the virtual function agents of the same DPU.
6. The method of claim 5, wherein the obtaining link information for the first DPU and/or the second DPU comprises:
determining the combination information of the virtual function agent and the cascade port of the first DPU to obtain first link information of a first data unit;
and determining the combination information of the virtual function agent and the cascade port of the second DPU to obtain second link information of a second data unit.
7. The method according to claim 1, wherein the method further comprises:
aggregating a first network interface of the first DPU and a second network interface of the second DPU into a logical network port;
based on the logical network port, implementing message transmission between the switch and the first DPU and/or the second DPU.
8. A network card virtualization device based on DPU cross-card link aggregation, comprising:
the first acquisition module is used for combining a first virtual interface with a second virtual interface through a virtual network card back end to obtain a multi-queue virtual interface, wherein the first virtual interface is provided by a first DPU, and the second virtual interface is provided by a second DPU;
and the first transmission module is used for realizing message transmission between the virtual machine and the first DPU and/or the second DPU based on the multi-queue virtual interface.
9. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any of claims 1-7.
10. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any of claims 1-7.
CN202310166659.3A, priority date 2023-02-22, filing date 2023-02-22: Network card virtualization method based on DPU cross-card link aggregation; status Pending; published as CN116319303A

Priority Applications (1)

CN202310166659.3A (priority and filing date 2023-02-22): Network card virtualization method based on DPU cross-card link aggregation

Applications Claiming Priority (1)

CN202310166659.3A (priority and filing date 2023-02-22): Network card virtualization method based on DPU cross-card link aggregation

Publications (1)

CN116319303A, published 2023-06-23

Family

ID=86835365

Family Applications (1)

CN202310166659.3A (priority and filing date 2023-02-22, pending): Network card virtualization method based on DPU cross-card link aggregation

Country Status (1)

Country Link
CN (1) CN116319303A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116932332A (en) * 2023-08-08 2023-10-24 中科驭数(北京)科技有限公司 DPU running state monitoring method and device
CN116932332B (en) * 2023-08-08 2024-04-19 中科驭数(北京)科技有限公司 DPU running state monitoring method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination