CN107786458B - DPDK-based multi-port access and egress method - Google Patents


Info

Publication number
CN107786458B
Authority
CN
China
Prior art keywords
port
queue
receiving
sending
dpdk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711064766.6A
Other languages
Chinese (zh)
Other versions
CN107786458A (en)
Inventor
黄友俊
李星
吴建平
王飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CERNET Corp
Original Assignee
Next Generation Internet Major Application Technology Beijing Engineering Research Center Co ltd
CERNET Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Next Generation Internet Major Application Technology Beijing Engineering Research Center Co ltd, CERNET Corp filed Critical Next Generation Internet Major Application Technology Beijing Engineering Research Center Co ltd
Priority to CN201711064766.6A
Publication of CN107786458A
Application granted
Publication of CN107786458B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/12Avoiding congestion; Recovering from congestion
    • H04L47/125Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2458Modification of priorities while in transit
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/62Queue scheduling characterised by scheduling criteria
    • H04L47/625Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6275Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Abstract

The present disclosure provides a DPDK-based multi-port admission and egress method, including: performing balanced traffic load setting, distributing traffic outlets according to the current load condition of the ports and the service importance, and setting a weight for each receiving port receive queue and each sending port send queue, where the larger the weight, the higher the transceiving priority; setting a flow table according to which the DPDK forwards data packets based on the transceiving priority of the queues, forwarding high-priority transceiving queues first, and determining a five-tuple characterizing a data flow forwarding path, wherein the five-tuple comprises: receiving port number, receiving queue number, CPU logical core, sending port number and sending queue number; and calling an interface function provided by the DPDK to forward the data packet along the flow forwarding path determined by the receiving port number, receiving queue number, sending port number and sending queue number in the five-tuple. The method implements per-port analysis and scheduling of flow weights in a high-performance DPDK scenario.

Description

DPDK-based multi-port access and egress method
Technical Field
The invention relates to the technical field of the internet, and in particular to a DPDK-based multi-port access and egress method, which can be used in technologies such as charging gateways and load balancing, and can greatly improve the forwarding performance of multi-port access and egress.
Background
DPDK is the abbreviation of Data Plane Development Kit, developed jointly by Intel, 6WIND and other companies, with the aim of improving data-packet processing performance on the x86 platform. The traditional Linux kernel was not custom-built for high-traffic application scenarios; although it can be optimized to improve forwarding performance, its inherent interrupt mechanism limits the rate at which data packets can be read. At present, high-traffic application scenarios are mainly implemented with FPGA programming or network processors, and the development cost of both technologies is high.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
Technical problem to be solved
The present disclosure provides a DPDK-based multi-port admission and egress method to at least partially solve the above-mentioned technical problems.
(II) technical scheme
According to an aspect of the present disclosure, a method for DPDK-based multi-port admission and egress is provided, which includes: carrying out balanced flow load setting, distributing flow outlets according to the current load condition of the ports and the service importance, and setting weight for each receiving port receiving queue and each transmitting port transmitting queue, wherein the larger the weight is, the higher the receiving and transmitting priority is; setting a flow table according to which a DPDK forwards a data packet according to the receiving and sending priorities of the receiving port receiving queue and the sending port sending queue determined by the weight, carrying out priority forwarding on the receiving and sending queue with high priority, and determining a five-tuple representing a data flow forwarding path, wherein the five-tuple comprises: receiving port number, receiving queue number, CPU logic core, sending port number and sending queue number; and calling an interface function provided by the DPDK to forward the data packet according to a stream forwarding path determined by the receiving port number, the receiving queue number, the sending port number and the sending queue number in the quintuple.
In some embodiments of the present disclosure, the step of setting a flow table according to which the DPDK forwards the data packet includes: and when initial setting is carried out, setting a quintuple structure array according to a configuration file or an input parameter, wherein the quintuple structure comprises a receiving port number, a receiving queue number, a sending port number and a sending queue number, and the CPU logic core number is used as a subscript of the corresponding quintuple structure array.
In some embodiments of the present disclosure, the step of performing balanced traffic load setting includes: the weight of the receiving port receiving queue is determined by two parameters of the receiving port flow bandwidth and the service importance, and the weight of the sending port sending queue is determined by the sending port flow bandwidth.
In some embodiments of the present disclosure, the step of setting a weight for each receiving port and each transmitting port includes: providing n receiving ports, each port having one transceiving queue, with the bandwidth of each receive queue initially set to 0 and the service importance being P1, P2 … Pn in order from high to low; when the device starts to work, traffic begins to arrive at each receive queue, and the bandwidth is calculated at predetermined intervals; after working for a predetermined time, the weight of each queue is calculated as (megabytes received by the queue / 100) × service importance, and transceiving prioritization is then performed according to the weight.
In some embodiments of the present disclosure, before forwarding the data packet, the method further includes:
for a server of a multi-Node architecture, each Node comprises a group of CPU, a memory and a PCI bus, whether the condition that the local CPU accesses the Node-crossing nodes exists is checked, and if the condition that the Node-crossing nodes access exists, a user is reminded.
In some embodiments of the present disclosure, the checking whether there is a situation that the local CPU accesses across Node nodes in the program includes: and the Node-crossing nodes access the memory of the opposite terminal or the PCI equipment.
In some embodiments of the present disclosure, when forwarding the data packet, the port send-packet function is called; the function automatically corresponds to the CPU logical core on which it runs and uses the receiving port number, receiving queue number, sending port number and sending queue number in the five-tuple.
In some embodiments of the present disclosure, when the data packet is forwarded, a port send-buffer function and a port send-buffer-flush function are called.
In some embodiments of the present disclosure, a system environment for DPDK operation needs to be set before the setting of the balanced traffic load.
In some embodiments of the present disclosure, setting the system environment for DPDK operation includes: compiling and deploying the DPDK software package, setting up hugepage memory, loading the igb_uio kernel module, binding the network cards, mounting the hugetlbfs file system and setting CPU isolation.
(III) advantageous effects
From the technical scheme, the DPDK-based multi-port access method has at least one of the following beneficial effects:
(1) By defining a five-tuple — receiving port number, receiving queue number, CPU logical core, sending port number and sending queue number — that uniquely determines a stream forwarding path, the stream forwarding function can be realized in a loosely coupled manner.
(2) The method of realizing stream scheduling among multiple ports according to weights can be applied to common multi-port forwarding equipment, meeting both the high performance requirement of multiple ports and the requirement for variable stream scheduling strategies.
(3) Due to the highly modular design, analysis and scheduling of stream weights between ports are realized in a high-performance DPDK scenario, providing a high-performance, stable reference for the research and development of gateway equipment.
Drawings
Fig. 1 is a physical architecture diagram of an application scenario of a DPDK-based multi-port admission and egress method according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a method for multi-port admission and egress based on DPDK according to an embodiment of the present disclosure.
Detailed Description
The disclosure provides a DPDK-based multi-port access and egress method. The DPDK runs on general-purpose platforms of the x86 architecture; servers of this architecture are common, and a C developer can realize a high-performance data-processing program simply by calling the interfaces provided by the DPDK. The method comprises the following steps: traffic load balancing, flow table setting, Socket association checking and data packet forwarding.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. The implementation routine below clearly shows the multi-port access and egress function of the invention.
Certain embodiments of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the disclosure are shown. Indeed, various embodiments of the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements.
In one exemplary embodiment of the present disclosure, a method of DPDK-based multi-port admission and egress is provided. Fig. 1 is a physical architecture diagram of an application scenario of a DPDK-based multi-port admission and egress method according to an embodiment of the present disclosure. As shown in fig. 1, the method for multi-port admission and egress based on DPDK in this embodiment is implemented by the DPDK-based multi-port forwarding devices in the figure, which are connected to an uplink switch and a downlink switch respectively. The DPDK-based multi-port forwarding device runs a DPDK-based multi-port admission and egress program; usually a device with multiple network cards is adopted as the multi-port forwarding device, such as an industrial personal computer with four network cards.
Programs running in the multi-port forwarding device may be developed according to actual requirements, such as SDN, load balancing, traffic analysis, and the like. The present disclosure relates to a method for implementing multi-port access and high performance based on DPDK.
Based on the multi-port forwarding device, the present disclosure provides a method for a multi-port admission and egress system based on a DPDK. Fig. 2 is a flowchart of a method for multi-port admission and egress based on DPDK according to an embodiment of the present disclosure. As shown in fig. 2, the method for multi-port admission and egress based on DPDK in the embodiment of the present disclosure includes:
Step A, setting the DPDK environment: some system environment must be set up before the DPDK can work, including compiling and deploying the DPDK software package, setting up hugepage memory, loading the igb_uio kernel module, binding the network cards, mounting the hugetlbfs file system, setting CPU isolation, and so on. The specific process comprises the following steps:
(1) Setting the hugepage memory
Creating a mount point: mkdir -p /mnt/huge
Mounting the hugepage file system: mount -t hugetlbfs nodev /mnt/huge
Setting 2 GB of hugepage memory (1024 pages of 2048 kB):
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
(2) Loading the igb_uio module
insmod igb_uio.ko // load the igb_uio module
(3) Binding the network cards
Binding the two network cards at PCI addresses 02:05.0 and 02:06.0:
./dpdk-devbind.py -b igb_uio 02:05.0 02:06.0
(4) Setting CPU isolation
isolcpus=1 // kernel boot parameter that isolates logical CPU 1
The implementation of compiling and deploying the DPDK software package, the kernel module and mounting the hugetlbfs file system is clear to a person skilled in the art and is not described in detail here.
Step B, carrying out flow load balance setting;
Further, the traffic bandwidth and service importance of each port of the multi-port forwarding device may differ. Therefore, the traffic egress needs to be allocated flexibly according to the current load condition of the port and the importance of the service. Traffic load balancing mainly involves two data structures: the send queue weight and the receive queue weight. A weight is allocated to the receive queue and the send queue of each port; the larger the weight, the higher the priority. For a receiving port's receive queue the two parameters of traffic bandwidth and queue service importance are considered, while for a sending port's send queue mainly the traffic bandwidth is considered. For example, the send queue weight and receive queue weight structures are set as follows:
The send queue weight structure is as follows:
struct send_weight {
unsigned long sent_bytes; // number of bytes forwarded
}
The weight of the send queue is determined by the load condition of the sending port, i.e. the number of bytes sent; in use, a one-dimensional array can be defined according to the number of ports, for example: struct send_weight send_queues[n];
the receive queue weight structure is as follows:
struct recv_weight { // structure name reconstructed from the description; the original figure is not reproduced
unsigned long received_bytes; // number of bytes received
int priority; // service importance
}
The receive queue weight is calculated as the product of the structure's two attributes: the number of received bytes (received_bytes) and the service importance (priority).
In a concrete implementation, this step comprises the following sub-steps:
sub-step B1, setting the bandwidth of each receiving queue to be 0 initially, and the service importance is P1 and P2 … Pn according to the sequence from high to low;
substep B2, when the device starts to work, the traffic of each receiving queue starts to be generated, and the bandwidth is calculated every predetermined time;
substep B3, after operating for a predetermined time, calculating the weight of each queue as (megabytes received by the queue / 100) × service importance, then ranking by weight: the higher the weight, the higher the transceiving priority, and the higher-priority transceiving queue is forwarded first, and so on.
For example, suppose there are four receive ports, each port with one queue. Initially, the bandwidth of each receive queue is 0, and the service importances are assigned per queue. When the device starts to work, the traffic of each receive queue begins to arrive; the bandwidth may be calculated every 5 seconds. The calculation must not be too frequent, or the generation speed of the flow table is affected and forwarding performance suffers.
If receive queue 0 has currently received 400 MB with service importance 3, queue 1 has received 350 MB with importance 2, queue 2 has received 300 MB with importance 1, and queue 3 has received 200 MB with importance 4, then the weight of queue 0 is 400 × 3/100 = 12; the weight of queue 1 is 350 × 2/100 = 7; the weight of queue 2 is 300 × 1/100 = 3; and the weight of queue 3 is 200 × 4/100 = 8. Ranking the weights gives the queue priorities, from high to low: queue 0, queue 3, queue 1, queue 2, with queue 0 having the highest priority for preferential transmission, and so on.
Because stream scheduling among multiple ports is realized according to weights, the method can be applied to common multi-port forwarding equipment, meeting both the high multi-port performance requirement and the requirement for variable stream scheduling strategies.
Step C, setting the flow table according to the transceiving priorities of the receive and send queues determined by the weights. A developer may implement a dedicated interface for setting the flow table, to be called by other modules; it is invoked in step B. The traffic load balancing of step B mainly produces the generation rule of the flow table; setting the flow table in this step means executing the generated rule to update the flow table.
The DPDK essentially forwards packets according to a flow table, and the five-tuple is determined from the priorities of the receive and send queues obtained from the weights, namely: the receiving port number, receiving queue number, CPU logical core, sending port number and sending queue number uniquely identify a flow. Thus, whenever the program specifies a five-tuple, the forwarding path of the data flow is defined. Preferably, at initial setup, the five-tuple structure is set according to a configuration file or input parameters.
The meaning of each element of the five-tuple is described one by one as follows:
(1) The receiving port number refers to which network port receives data; the port number is an integer, e.g. 0 means that data is received from port 0. The receive queue number is the queue number of the receive port; e.g. if data is received from queue 0 of port 0, the first two elements of the five-tuple are 0, 0;
(2) The CPU logical core refers to the logical core number of the CPU. One reason for DPDK's high performance is that it monopolizes a CPU, preventing use by other programs; if logical core 0 is designated, CPU 0 must be isolated when setting up the DPDK environment;
(3) the number of the transmitting port is similar to that of the receiving port, and the port from which the message is transmitted is determined; the send queue number is similar to the receive queue number, which defines which queue of a port the message will be sent from.
For example, the quintuple structure is:
struct port_queue { // field names reconstructed from the description; the original figure is not reproduced
uint16_t recv_port; // receiving port number
uint16_t recv_queue; // receiving queue number
uint16_t send_port; // sending port number
uint16_t send_queue; // sending queue number
}
In use, an array variable is defined, for example struct port_queue array[16], where the array subscript is the CPU logical core number. By defining the five-tuple concept, a stream forwarding path is uniquely determined, so that the stream forwarding function can be realized in a loosely coupled manner.
Step D, Socket association checking. Socket here means the CPU socket. Socket association checking mainly concerns multi-Node servers: most servers currently adopt a multi-Node architecture, in which each Node comprises a group of CPUs, memory and a PCI bus. Suppose the current server has two Node nodes and the DPDK program runs on a CPU of Node 0. If the DPDK program accesses the memory and PCI devices of Node 0, performance is high; conversely, if it accesses the memory or PCI devices of Node 1, performance is affected.
The Socket association is checked, that is, for a server of a multi-Node architecture, whether a local-end CPU accesses an opposite-end memory or a PCI device is checked in a program. Accessing remote memory or PCI devices has a significant impact on performance. In addition to paying attention not to cross Node nodes at the time of environment configuration in step a, the following steps may be taken for checking:
substep D1, acquiring Socket ID of the port;
substep D2, acquiring Socket ID where the logic CPU operated by the current program is located;
and a substep D3, if the Socket ID of the port is not equal to the Socket ID of the logic CPU operated by the current program, indicating that the Node-crossing Node access exists. And if the Node-crossing Node access exists, reminding the user.
Specifically, the following pseudo code may also be used in the program:
If (rte_eth_dev_socket_id(port) != (int)rte_socket_id())
// cross-Node access to the PCI device
In the above code, the function rte_eth_dev_socket_id(port) obtains the Socket ID of the port, and rte_socket_id() obtains the Socket ID of the logical CPU on which the current program runs.
Step E, invoking an interface provided by the DPDK to forward the data packet. Forwarding a data packet simply requires calling an interface function provided by the DPDK, for example the port send-packet function; when called, the function automatically corresponds to the CPU logical core on which it runs, and uses four elements of the struct port_queue: receiving port number, receiving queue number, sending port number and sending queue number.
The process of forwarding the data packet is simple: after the flow table is set in step C, this step only needs to call the interface according to the flow table. Packet forwarding uses one of two interfaces: the port send-packet function, or the port send-buffer function together with the port send-buffer-flush function; preferably the latter pair, which gives better forwarding performance.
This concludes the introduction of the DPDK-based multi-port admission and egress method of the embodiment of the present disclosure.
Because a highly modular design is adopted in the method, analysis and scheduling of stream weights between ports are realized in a high-performance DPDK scenario, providing a high-performance, stable reference for the research and development of gateway equipment.
So far, the embodiments of the present disclosure have been described in detail with reference to the accompanying drawings. It should be noted that implementations not shown or described in the drawings or the text are all forms known to those of ordinary skill in the art and are not described in detail. Further, the above definitions of the various elements and methods are not limited to the specific structures, shapes or arrangements of parts mentioned in the examples, which may be easily modified or substituted by those of ordinary skill in the art.
In addition, unless steps are specifically described or must occur in sequence, the order of the steps is not limited to that listed above and may be changed or rearranged as desired by the desired design. The embodiments described above may be mixed and matched with each other or with other embodiments based on design and reliability considerations, i.e., technical features in different embodiments may be freely combined to form further embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, this disclosure is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the present disclosure as described herein, and any descriptions above of specific languages are provided for disclosure of enablement and best mode of the present disclosure.
The disclosure may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. Various component embodiments of the disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in the relevant apparatus according to embodiments of the present disclosure. The present disclosure may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present disclosure may be stored on a computer-readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Also in the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various disclosed aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, disclosed aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
The above-mentioned embodiments are intended to illustrate the objects, aspects and advantages of the present disclosure in further detail, and it should be understood that the above-mentioned embodiments are only illustrative of the present disclosure and are not intended to limit the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A DPDK-based multi-port access and egress method, comprising the following steps:
setting balanced traffic load: distributing traffic egresses according to the current load of each port and the service importance, setting a weight for each receive queue of a receiving port and each send queue of a sending port, and determining the transmit/receive priority of each receive queue and send queue from its weight, where a larger weight means a higher priority;
setting a flow table according to which the DPDK forwards data packets, based on the transmit/receive priorities of the receive and send queues, so that higher-priority queues are forwarded first, and determining a five-tuple that represents a packet forwarding path, the five-tuple comprising: receive port number, receive queue number, CPU logical core, send port number and send queue number;
and calling an interface function provided by the DPDK to forward each data packet along the forwarding path determined by the receive port number, receive queue number, send port number and send queue number in the five-tuple.
2. The method of claim 1, wherein the step of setting the flow table according to which the DPDK forwards packets comprises:
at initialization, setting up an array of five-tuple structures according to a configuration file or input parameters, wherein each five-tuple structure comprises a receive port number, a receive queue number, a send port number and a send queue number, and the CPU logical core number serves as the index into the array.
3. The method of claim 2, wherein the step of setting the balanced traffic load comprises:
determining the weight of each receiving port's receive queue from two parameters, the receiving port's traffic bandwidth and the service importance, and determining the weight of each sending port's send queue from the sending port's traffic bandwidth.
4. The method of claim 3, wherein the step of setting a weight for each receiving port and sending port comprises:
providing n receiving ports, each port provided with one receive queue and one send queue, initializing the bandwidth of each receive queue to 0, and ordering the service importance from high to low as P1, P2, ..., Pn;
when the device starts working, traffic begins to arrive on each receive queue, and the bandwidth is recalculated at a preset interval;
after a preset working time, calculating the weight of each receive queue as (number of bytes received by the queue / 100) × service importance, and then setting the transmit/receive priorities according to the weights.
5. The method of claim 1, further comprising, before forwarding the data packet, the following step:
for a server with a multi-Node architecture, in which each Node comprises a group of CPUs, memory and a PCI bus, checking whether a local CPU accesses a remote Node, and reminding the user if such cross-Node access exists.
6. The method of claim 5, wherein the cross-Node access checked for comprises: a Node accessing the memory or a PCI device of another Node.
7. The method of claim 1, wherein when forwarding the packet, a port send-packet function is called; the function automatically corresponds to the CPU logical core on which it runs and uses the receive port number, receive queue number, send port number and send queue number in the five-tuple.
8. The method of claim 1, wherein a port send-buffer function and a port send-buffer flush function are invoked when forwarding the packet.
9. The method of claim 1, wherein a system environment for DPDK operation is set up before setting the balanced traffic load.
10. The method of claim 9, wherein setting up the system environment for DPDK operation comprises: compiling and deploying the DPDK software package, configuring hugepage memory, loading the igb_uio kernel module, binding the network card, mounting the hugetlbfs file system and configuring CPU isolation.
CN201711064766.6A 2017-11-02 2017-11-02 DPDK-based multi-port access and egress method Active CN107786458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711064766.6A CN107786458B (en) 2017-11-02 2017-11-02 DPDK-based multi-port access and egress method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711064766.6A CN107786458B (en) 2017-11-02 2017-11-02 DPDK-based multi-port access and egress method

Publications (2)

Publication Number Publication Date
CN107786458A CN107786458A (en) 2018-03-09
CN107786458B true CN107786458B (en) 2021-06-25

Family

ID=61431445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711064766.6A Active CN107786458B (en) 2017-11-02 2017-11-02 DPDK-based multi-port access and egress method

Country Status (1)

Country Link
CN (1) CN107786458B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194581B (en) * 2018-08-24 2020-12-25 新华三技术有限公司 Message processing method and device
CN109787912A (en) * 2019-03-04 2019-05-21 南京邮电大学 Load-balancing method based on NAT under a kind of DPDK environment
CN110333899B (en) * 2019-06-27 2022-11-01 腾讯科技(深圳)有限公司 Data processing method, device and storage medium
CN110768829B (en) * 2019-10-23 2022-02-15 江苏省未来网络创新研究院 Method for realizing linear increase of traffic analysis service performance based on DPDK

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104539440A (en) * 2013-07-09 2015-04-22 英特尔公司 Traffic management with ingress control
CN105391635A (en) * 2015-10-16 2016-03-09 电子科技大学 Network virtualization method based on software defined network (SDN)
CN106357537A (en) * 2016-11-09 2017-01-25 北京工业大学 Link monitoring method based on SDN multipath transmission
WO2017020949A1 (en) * 2015-08-03 2017-02-09 Nokia Solutions And Networks Oy Load and software configuration control among composite service function chains

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10148570B2 (en) * 2015-12-29 2018-12-04 Amazon Technologies, Inc. Connectionless reliable transport

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN104539440A (en) * 2013-07-09 2015-04-22 英特尔公司 Traffic management with ingress control
WO2017020949A1 (en) * 2015-08-03 2017-02-09 Nokia Solutions And Networks Oy Load and software configuration control among composite service function chains
CN105391635A (en) * 2015-10-16 2016-03-09 电子科技大学 Network virtualization method based on software defined network (SDN)
CN106357537A (en) * 2016-11-09 2017-01-25 北京工业大学 Link monitoring method based on SDN multipath transmission

Also Published As

Publication number Publication date
CN107786458A (en) 2018-03-09

Similar Documents

Publication Publication Date Title
CN107786458B (en) DPDK-based multi-port access and egress method
US11645127B2 (en) Microservices architecture
Regnier et al. TCP onloading for data center servers
US11736402B2 (en) Fast data center congestion response based on QoS of VL
US8051212B2 (en) Network interface adapter with shared data send resources
CN109076029B (en) Method and apparatus for non-uniform network input/output access acceleration
US11750418B2 (en) Cross network bridging
JP2014509427A (en) Prioritizing low latency in a PCI Express multiple root I / O virtualization environment
Rizzo et al. 10 Gbit/s line rate packet processing using commodity hardware: Survey and new proposals
WO2020134153A1 (en) Distribution method, system and processing device
CN114553780A (en) Load balancing method and device and network card
He et al. Accl: Fpga-accelerated collectives over 100 gbps tcp-ip
Ara et al. Comparative evaluation of kernel bypass mechanisms for high-performance inter-container communications
CN114666276A (en) Method and device for sending message
Imputato et al. Enhancing the fidelity of network emulation through direct access to device buffers
Panić et al. Improving performance guarantees in wormhole mesh NoC designs
CN107920035B (en) Multi-core processor type device, system and vehicle for deterministic switched Ethernet
US20220276809A1 (en) Interface between control planes
CN116150082A (en) Access method, device, chip, electronic equipment and storage medium
KR20050080704A (en) Apparatus and method of inter processor communication
Richter et al. Resolving performance interference in SR-IOV setups with PCIe Quality-of-Service extensions
Hsieh et al. NF-switch: VNFs-enabled SDN switches for high performance service function chaining
CN114816651A (en) Communication method, device and system
Harbin et al. GMCB: An industrial benchmark for use in real-time mixed-criticality networks-on-chip
Huang et al. FGLB: A fine‐grained hardware intra‐server load balancer based on 100 G FPGA SmartNIC

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211207

Address after: Block B, Building 8, Tsinghua Science and Technology Park, No. 1 Zhongguancun East Road, Haidian District, Beijing 100084

Patentee after: CERNET Co.,Ltd.

Address before: 100084 B1001-C 8, building 1, Zhongguancun East Road, Haidian District, Beijing, 2.

Patentee before: NEXT GENERATION INTERNET MAJOR APPLICATION TECHNOLOGY (BEIJING) ENGINEERING RESEARCH CENTER Co.,Ltd.

Patentee before: Saier Network Co., Ltd
