CN109672575B - Data processing method and electronic equipment - Google Patents


Info

Publication number
CN109672575B
Authority
CN
China
Prior art keywords
queue
target
value
interface
message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910093781.6A
Other languages
Chinese (zh)
Other versions
CN109672575A (en)
Inventor
徐炽云
Current Assignee
New H3C Technologies Co Ltd Hefei Branch
Original Assignee
New H3C Technologies Co Ltd Hefei Branch
Priority date
Filing date
Publication date
Application filed by New H3C Technologies Co Ltd Hefei Branch filed Critical New H3C Technologies Co Ltd Hefei Branch
Priority to CN201910093781.6A priority Critical patent/CN109672575B/en
Publication of CN109672575A publication Critical patent/CN109672575A/en
Application granted granted Critical
Publication of CN109672575B publication Critical patent/CN109672575B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08: Configuration management of networks or network elements
    • H04L41/0896: Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L41/0893: Assignment of logical groups to network elements
    • H04L49/00: Packet switching elements
    • H04L49/30: Peripheral units, e.g. input or output ports
    • H04L49/3036: Shared queuing

Abstract

The application provides a data processing method and an electronic device, relating to the field of computer technology. When an NFV system is started, if the number of cores allocated to the NFV system differs from every value in a value group, a first value is determined from the value group; a number of target cores corresponding to the first value is then started, and each VF interface starts a number of target message transceiving queue groups corresponding to the first value, so that the target message transceiving queue groups started by each VF interface correspond one-to-one to the target cores. This allows the NFV system to be started even when the number of cores allocated to it differs from the number of message transceiving queue groups that a VF interface can start.

Description

Data processing method and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and an electronic device.
Background
SR-IOV (Single Root I/O Virtualization) is a hardware-based virtualization solution that allows a PCI (Peripheral Component Interconnect) device to be shared between different virtual machines or containers. A VF (Virtual Function) interface virtualized on a PF (Physical Function) port can directly use the hardware functions of the PCI device, and different VF interfaces virtualized on the same PF port can share one or more physical resources to improve efficiency; the computing and deployment overhead of network resources is then borne mainly by the hardware devices.
Disclosure of Invention
The present application aims to provide a data processing method and an electronic device, so that when the number of cores allocated to an NFV system is different from the number of packet transceiving queue groups that can be started by a VF interface, the NFV system can be started.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
In a first aspect, an embodiment of the present application provides a data processing method applied to an electronic device, where the electronic device stores a network function virtualization NFV system, the NFV system includes at least two virtual function VF interfaces, each of the at least two VF interfaces has a plurality of corresponding message transceiving queues created in advance, and the electronic device allocates a plurality of cores to the NFV system. The method includes: when the NFV system is started, if the number of cores allocated to the NFV system differs from every value in a value group, determining a first value in the value group, where the first value is smaller than the number of cores allocated to the NFV system and each value in the value group indicates the number of message transceiving queue groups that each VF interface can start; starting a number of target cores corresponding to the first value, and having each VF interface start a number of target message transceiving queue groups corresponding to the first value, so that the target message transceiving queue groups started by each VF interface correspond one-to-one to the target cores; and the target cores receiving and sending messages through their respective corresponding target message transceiving queue groups.
In a second aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a network interface card, where the memory stores a network function virtualization NFV system, the network interface card virtualizes at least two virtual function VF interfaces, each of the at least two VF interfaces has a plurality of corresponding message transceiving queues created in advance, and the electronic device allocates a plurality of cores to the NFV system. The processor is configured to, when the NFV system is started, determine a first value in a value group if the number of cores allocated to the NFV system differs from every value in the value group, where the first value is smaller than the number of cores allocated to the NFV system and each value in the value group represents the number of message transceiving queue groups that each VF interface can start; the processor is further configured to start a number of target cores corresponding to the first value, and to have each VF interface start a number of target message transceiving queue groups corresponding to the first value, so that the target message transceiving queue groups started by each VF interface correspond one-to-one to the target cores; the processor is further configured to have the target cores receive and send messages through their respective corresponding target message transceiving queue groups.
Compared with the prior art, according to the data processing method and the electronic device provided by the embodiments of the present application, when the NFV system is started, if the number of cores allocated to the NFV system differs from every value in the value group, a first value is determined from the value group, a number of target cores corresponding to the first value is started, and each VF interface starts a number of target message transceiving queue groups corresponding to the first value, so that the target message transceiving queue groups started by each VF interface correspond one-to-one to the target cores; the target cores can then send and receive messages through their corresponding target message transceiving queue groups.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; other related drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic block diagram of an NFV system according to an embodiment of the present application;
fig. 2 is a schematic application scenario diagram provided in the embodiment of the present application;
FIG. 3 is a diagram of another exemplary application scenario provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating a data processing method provided by an embodiment of the present application;
fig. 5 is a schematic flow chart of the substeps of S103 in fig. 4.
In the figure: 300-an electronic device; 301-a memory; 302-a processor; 303-bus; 304-network interface card.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
An NFV (Network Function Virtualization) system is a virtual device that can be installed on a common x86 server and, combined with the DPDK (Data Plane Development Kit) acceleration technology, can implement protocol-stack forwarding in user mode.
For example, referring to fig. 1, fig. 1 is a block diagram of an electronic device 300 according to an embodiment of the present disclosure. The electronic apparatus 300 includes a processor 302, a memory 301, a bus 303, and a Network Interface Card 304 (NIC), and the processor 302, the memory 301, and the Network Interface Card 304 communicate with each other through the bus 303.
The processor 302 is configured to execute an executable module, such as a computer program, stored in the memory 301. The processor 302 according to the embodiment of the present disclosure may be a single processing element or a combination of multiple processing elements; for example, the processor 302 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiment of the present disclosure, such as one or more digital signal processors (DSPs) or one or more Field Programmable Gate Arrays (FPGAs).
The memory 301 may be used for storing various programs and data in the electronic device 300, such as program instructions corresponding to the NFV system. The memory 301 may be a single storage device or a combination of a plurality of storage elements, and the memory 301 may include a Random Access Memory (RAM) or a non-volatile memory (non-volatile memory), such as a magnetic disk memory or a Flash memory.
The bus 303 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus 303 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 1, but this does not indicate only one bus or one type of bus.
The network interface card 304 is a network component that works mainly at the link layer; it is the interface connecting a computer to the transmission medium of a local area network. Besides providing the physical connection and electrical signal matching with the transmission medium, it handles frame sending and receiving, frame encapsulation and decapsulation, medium access control, data encoding and decoding, and data caching. It provides a PF port and, using its virtualization function, virtualizes VF interfaces on the PF port for the NFV system to use.
The memory 301 is used for storing programs, such as program instructions for implementing NFV functions, and the processor 302, after receiving the execution instructions, executes the programs stored in the memory 301 to implement the data processing method disclosed in the embodiment of the present application.
The DPDK platform provides user-mode drivers for various types of network interface cards to implement message receiving and sending. The processor 302 includes multiple cores. The network interface card provides a PF port, on which VF interfaces can be virtualized by the virtualization function, so that the NFV system receives and sends messages through the virtualized VF interfaces; at least one core of the processor 302 is allocated to the NFV system to process messages. The number of cores allocated to the NFV system may be obtained from a parameter entered by the user when the NFV system is started, or from a preset parameter. For example, if the user-entered startup parameters specify that core0 and core1 are the cores used by the NFV system, the processor 302 determines that the cores allocated to the NFV system are core0 and core1, and isolates them so that only the NFV system may use them. Allocating at least one core to the NFV system means that the processor 302 lets those cores bear the overhead of the NFV system.
However, in a virtual environment, the number of message transceiving queues that a virtual port can create is limited by the hardware port. For example, some network interface cards allow at most 128 VF interfaces across all of their PF ports; for a card with 4 PF ports, each PF port can then create at most 32 VF interfaces, and the maximum number of queues that a VF interface can start for sending and receiving messages depends on its corresponding PF port. Due to this hardware limitation, on some network interface cards a VF interface can start at most 4 send queues and 4 receive queues, and the number of receive queues must be a power of 2, i.e. only 1, 2, or 4 receive queues can be started. Therefore, in the virtual environment, if VF interfaces are virtualized for the NFV system on the PF port of such a network interface card, each VF interface of the NFV system can likewise start at most 4 send queues, and the number of receive queues it starts must also be a power of 2, i.e. 1, 2, or 4.
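The queue-count constraint above can be sketched as follows. This is a minimal illustrative sketch, not code from the patent: it enumerates the queue-group counts a VF interface could start under the stated limits (at most `max_queues` queues, with the receive-queue count a power of 2); the function name and parameter are assumptions.

```python
# Illustrative sketch: enumerate the queue-group counts a VF interface can
# start, given a hardware maximum and the power-of-2 receive-queue rule.
def startable_queue_group_counts(max_queues=4):
    counts = []
    n = 1
    while n <= max_queues:
        counts.append(n)  # powers of 2 up to the hardware maximum
        n *= 2
    return counts

print(startable_queue_group_counts())  # with max_queues=4: [1, 2, 4]
```

With the limits described in this paragraph (maximum of 4), this yields exactly the counts 1, 2, and 4 that the value group later records.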
Generally, in an NFV system to which a plurality of cores are allocated, a network card multi-queue technology may be used to bind a receive queue and a send queue corresponding to a VF interface with a specific core allocated to the NFV system, so that the core can receive and send messages through the VF interface; moreover, to avoid the concurrent operation of multiple cores on the same queue, on the same VF interface, different cores that start work need to receive and transmit messages corresponding to different receiving queues and sending queues.
Moreover, because each core may have a behavior of receiving and sending a message on each VF interface, when the NFV system is started, a receiving queue and a sending queue are bound for each core in the receiving queue and the sending queue corresponding to each started VF interface; therefore, in order to facilitate understanding, in the embodiment of the present application, a receiving queue and a sending queue started in a messaging queue corresponding to a VF interface are defined as a messaging queue group in a pairwise combination, where each messaging queue group includes a receiving queue and a sending queue, and each messaging queue group corresponds to the same VF interface and is bound to the same core.
For example, referring to fig. 2, fig. 2 is a schematic application scenario diagram provided in the embodiment of the present application. Suppose the processor 302 of the electronic device 300 includes 8 cores, core0 through core7, and the electronic device 300 allocates two of them, core0 and core1, to the NFV system. The network interface card includes 4 PF ports, PF0 through PF3, and two VF interfaces, VF0 and VF1, are virtualized on port PF1 for the NFV system. For each of VF0 and VF1, 4 corresponding message transceiving queues, comprising 4 receive queues and 4 send queues, may be pre-created in memory. Since both VF0 and VF1 may send and receive messages, two receive queues and two send queues are started among the queues pre-created for each interface. For VF0, the two started receive queues (receive queue 3 and receive queue 4 in fig. 2) are bound to core0 and core1 respectively, and the two started send queues (send queue 11 and send queue 12 in fig. 2) are bound to core0 and core1 respectively; thus receive queue 3 and send queue 11, both bound to core0, form one message transceiving queue group, while receive queue 4 and send queue 12, both bound to core1, form another. Similarly, for VF1, the two started receive queues (receive queue 5 and receive queue 6 in fig. 2) and the two started send queues (send queue 13 and send queue 14 in fig. 2) are bound to core0 and core1 respectively; thus receive queue 5 and send queue 13, bound to core0, form one message transceiving queue group, and receive queue 6 and send queue 14, bound to core1, form another.
It should be noted that, generally, when the NFV system processes a message, a VF interface receiving the message may not be the same as a VF interface sending the message. For example, in the application scenario shown in fig. 2, taking core0 to process a packet as an example, a packet received by VF0 is allocated to receive queue 3 bound to core0 in the packet transceiving queue group corresponding to VF 0; the core0, after obtaining the message received by the VF0, may send the processed message to the sending queue 13 bound to the core0 in the group of message sending and receiving queues corresponding to the VF1, and then send the message by the VF 1.
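The fig. 2 binding and the cross-interface forwarding just described can be sketched as follows. This is a hypothetical sketch: the dictionary layout and function name are assumptions, while the queue names and bindings follow the figure as described above.

```python
# Hypothetical sketch of the fig. 2 bindings: each (VF interface, core)
# pair owns one message transceiving queue group, i.e. one receive queue
# plus one send queue bound to that core.
bindings = {
    ("VF0", "core0"): {"rx": "receive queue 3", "tx": "send queue 11"},
    ("VF0", "core1"): {"rx": "receive queue 4", "tx": "send queue 12"},
    ("VF1", "core0"): {"rx": "receive queue 5", "tx": "send queue 13"},
    ("VF1", "core1"): {"rx": "receive queue 6", "tx": "send queue 14"},
}

def forward(core, rx_vf, tx_vf):
    # A message arriving on rx_vf is read from the receive queue bound to
    # `core`, processed, and written to tx_vf's send queue bound to `core`.
    return bindings[(rx_vf, core)]["rx"], bindings[(tx_vf, core)]["tx"]

# core0 receives on VF0 and sends via VF1, as in the example above:
print(forward("core0", "VF0", "VF1"))  # ('receive queue 3', 'send queue 13')
```

Note that no two cores ever touch the same queue, which is the concurrency requirement stated earlier.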
However, as described above, in a virtual environment the number of send and receive queues that a VF interface can start is limited by the PF port; for example, on some network interface cards a VF interface can start at most 4 send queues and 4 receive queues, and the number of receive queues can only be 1, 2, or 4. If the number of cores allocated to the NFV system does not match the number of message transceiving queue groups that a VF interface can start, for example if more cores are allocated than queue groups can be started, then the started queue groups cannot correspond one-to-one with the cores (and multiple cores must not operate on the same queue group concurrently), so the NFV system cannot be started successfully.
In view of the above defect, a possible implementation provided by the embodiment of the present application is as follows: when the NFV system is started, if the number of cores allocated to the NFV system differs from every value in the value group, a first value is determined from the value group; a number of target cores corresponding to the first value is then started, and each VF interface starts a number of target message transceiving queue groups corresponding to the first value, so that the target message transceiving queue groups started by each VF interface correspond one-to-one to the target cores. The target cores can then send and receive messages through their corresponding target message transceiving queue groups, and the NFV system can be started.
As described above, when the NFV system is started, a message transceiving queue group must be allocated to each started core among the queues started for each VF interface, and the allocation is done in the same manner on every VF interface. Since the VF interface that receives a message may not be the same as the VF interface that sends it, the data processing method provided in this embodiment is described below using two VF interfaces as an example.
Referring to fig. 3, fig. 3 is another schematic application scenario diagram provided in the embodiment of the present application, in which the NFV system is allocated multiple cores (5 cores in fig. 3) and, for the sake of illustration, includes 1 VF interface. The VF interface has multiple corresponding message transceiving queues pre-created; for example, as in fig. 2, 4 corresponding message transceiving queues, comprising 4 receive queues and 4 send queues, may be pre-created in memory for the VF interface. For the scenario in which 5 cores are allocated to the NFV system: since at most 4 receive queues can be started and their number must be a power of 2 (i.e. 1, 2, or 4), and since one started receive queue and one started send queue bound to the same core form a message transceiving queue group, the number of message transceiving queue groups that can be started on the VF interface can likewise only be 1, 2, or 4.
It should be noted that fig. 3 only exemplifies 1 VF interface, but each activated VF interface has the corresponding relationship as shown in fig. 3.
In addition, in the embodiment of the present application, a message transceiving queue group is a logical concept: among the receive and send queues started for one VF interface, one receive queue and one send queue bound to the same target core form one message transceiving queue group. For example, in the application scenario of fig. 2, among the two receive queues and two send queues started for VF0, receive queue 3 and send queue 11 bound to core0 form one message transceiving queue group, and receive queue 4 and send queue 12 bound to core1 form another.
Referring to fig. 4, fig. 4 shows a schematic flowchart of a data processing method provided in an embodiment of the present application, where the data processing method is applied to an electronic device 300 shown in fig. 1, the electronic device 300 stores the NFV system shown in fig. 3, and in the embodiment of the present application, the data processing method includes the following steps:
s101, starting the NFV system.
S102, judging whether the number of cores allocated to the NFV system is the same as a value in the value group; if yes, go to S105; if not, go to S103.
In the embodiment of the present application, a value group recording at least one value is stored in the electronic device 300, where each value in the value group represents a number of message transceiving queue groups that each VF interface can start. For example, in the application scenario of fig. 3, since the number of message transceiving queue groups that the VF interface can start is only 1, 2, or 4, the values included in the value group are 1, 2, and 4.
When the NFV system is started, the electronic device 300 compares the number of cores allocated to the NFV system with each value in the value group to judge whether the core count matches some value in the group. If it matches a value, the NFV system can start, on each VF interface, as many receive and send queues as there are cores, with each core corresponding to one message transceiving queue group, so that each VF interface forms a number of target message transceiving queue groups equal to the number of cores; the startup of the NFV system then completes and S105 is executed. Otherwise, if the number of cores differs from every value in the value group, S103 is executed.
For example, in the application scenario shown in fig. 3, the value group includes 1, 2, and 4. If the number of cores is 4, it equals the value "4" in the group; the NFV system can then start 4 receive queues and 4 send queues so that each VF interface forms 4 target message transceiving queue groups, and S105 is executed. If the number of cores is 5, it differs from every value in the group (the values are only 1, 2, and 4), and S103 is executed.
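The S102 decision can be sketched as a simple membership test. This is an illustrative sketch with assumed names, not code from the patent:

```python
# Illustrative sketch of S102: the NFV system completes startup directly
# only if its core count equals some value in the value group; otherwise
# the flow falls through to S103.
VALUE_GROUP = (1, 2, 4)  # queue-group counts each VF interface can start

def core_count_matches(num_cores, value_group=VALUE_GROUP):
    return num_cores in value_group

print(core_count_matches(4))  # True: proceed to S105
print(core_count_matches(5))  # False: proceed to S103
```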
S103, determining a first numerical value in the numerical value group.
When it is determined in S102 that the number of cores allocated to the NFV system differs from every value in the value group, a first value is determined in the value group. The first value represents the number of cores the NFV system will currently start, and it is smaller than the total number of cores allocated to the NFV system.
Optionally, referring to fig. 5, fig. 5 is a schematic flowchart of the sub-step of S103 in fig. 4, and as a possible implementation, S103 includes the following sub-steps:
s103-1, judging whether the number of cores distributed by the NFV system is greater than the maximum value in the numerical value group; if yes, executing S103-3; if not, S103-2 is executed.
S103-2, taking the largest value in the value group that is smaller than the number of cores allocated to the NFV system as the first value.
S103-3, taking the maximum value in the value group as the first value.
As described above, since the value group records the numbers of message transceiving queue groups that each VF interface can start, the electronic device 300 determines the first value by considering both the number of cores allocated to the NFV system and the values contained in the value group.
When determining the first value, the electronic device 300 compares the number of cores allocated to the NFV system with the maximum value in the value group. If the core count is greater than the maximum value, then even if all message transceiving queue groups are started they cannot correspond one-to-one with all the cores. Therefore, as one possible implementation, if the number of cores allocated to the NFV system is greater than the maximum value in the value group, the maximum value is taken as the first value, so as to maximize the number of cores that start working; the maximum value in the value group represents the maximum number of message transceiving queue groups that each VF interface can currently support starting.
For example, continuing the example above, the value group contains 1, 2, and 4. If the number of cores allocated to the NFV system is 5, the core count (5) is greater than the maximum value in the value group (4), and in this case 4 is taken as the first value.
In this case, taking the two VF interfaces with start sequence numbers 0 and 1 (i.e. VF0 and VF1) as an example, for the 4 message transceiving queue groups corresponding to each of VF0 and VF1, the correspondence between the four started receive queues RxQ, the four started send queues TxQ, and the cores may be as shown in table 1 below:
Table 1 An example correspondence between message transceiving queue groups and cores
Core    VF0 RxQ   VF0 TxQ   VF1 RxQ   VF1 TxQ
Core0   RxQ:00    TxQ:00    RxQ:20    TxQ:20
Core1   RxQ:01    TxQ:01    RxQ:21    TxQ:21
Core2   RxQ:02    TxQ:02    RxQ:22    TxQ:22
Core3   RxQ:03    TxQ:03    RxQ:23    TxQ:23
Core4   NA        NA        NA        NA
Here, Core0, Core1, Core2, Core3 and Core4 denote the cores numbered 0 to 4; TxQ:00, TxQ:01, TxQ:02 and TxQ:03 denote the send queues with sequence numbers 00 to 03 among the send queues corresponding to VF0; RxQ:00, RxQ:01, RxQ:02 and RxQ:03 denote the receive queues with sequence numbers 00 to 03 among the receive queues corresponding to VF0; TxQ:20, TxQ:21, TxQ:22 and TxQ:23 denote the send queues with sequence numbers 20 to 23 among the send queues corresponding to VF1; RxQ:20, RxQ:21, RxQ:22 and RxQ:23 denote the receive queues with sequence numbers 20 to 23 among the receive queues corresponding to VF1; NA indicates that the core is not responsible for any queue.
It should be noted that, in some other embodiments of the present application, if the number of cores allocated to the NFV system is greater than the maximum value in the value group, a value other than the maximum may be selected as the first value. For example, with the value group containing 1, 2 and 4 and 5 cores allocated to the NFV system, not only 4 but also 1 or 2 may be selected as the first value, as long as the value selected from the value group is smaller than the number of cores allocated to the NFV system.
Also, it is worth noting that in the above example, since the number of cores allocated to the NFV system differs from every value in the value group, the core count can only be greater than or smaller than the maximum value in the value group, never equal to it.
Conversely, if the number of cores allocated to the NFV system is smaller than the maximum value in the value group and all the message transceiving queue groups were started, some of the started queue groups would have no corresponding core to process their messages, and the messages received by those queue groups would be discarded.
As a possible implementation, if the number of cores allocated to the NFV system is smaller than the maximum value in the value group, the maximum among all values in the value group that are smaller than the core count is taken as the first value, so as to maximize the number of cores the NFV system can start.
For example, continuing the example above, the value group contains 1, 2 and 4. If the number of cores allocated to the NFV system is 3, the core count (3) is smaller than the maximum value in the value group (4); the values in the value group smaller than the core count are 1 and 2, so the largest of them, namely 2, is taken as the first value.
In this case, again taking the two VF interfaces with start sequence numbers 0 and 1 (i.e. VF0 and VF1) as an example, for the message transceiving queue groups corresponding to each of VF0 and VF1, the correspondence between the two started receive queues RxQ, the two started send queues TxQ, and the cores may be as shown in table 2 below:
Table 2 Another example correspondence between message transceiving queue groups and cores
Core    VF0 RxQ   VF0 TxQ   VF1 RxQ   VF1 TxQ
Core0   RxQ:00    TxQ:00    RxQ:20    TxQ:20
Core1   RxQ:01    TxQ:01    RxQ:21    TxQ:21
Core2   NA        NA        NA        NA
Here, Core0, Core1 and Core2 denote the cores numbered 0, 1 and 2; TxQ:00 and TxQ:01 denote the send queues with sequence numbers 00 and 01 among the send queues corresponding to VF0; RxQ:00 and RxQ:01 denote the receive queues with sequence numbers 00 and 01 among the receive queues corresponding to VF0; TxQ:20 and TxQ:21 denote the send queues with sequence numbers 20 and 21 among the send queues corresponding to VF1; RxQ:20 and RxQ:21 denote the receive queues with sequence numbers 20 and 21 among the receive queues corresponding to VF1; NA indicates that the core is not responsible for any queue.
It should be noted that in some other embodiments of the present application, if the number of cores allocated to the NFV system is smaller than the maximum value in the value group, any other value in the value group that is smaller than the core count may also be used as the first value. For example, with the qualifying values being 1 and 2, either 2 or 1 may be selected as the first value.
S104, start a number of target cores corresponding to the first value, and have each VF interface start a number of target message transceiving queue groups corresponding to the first value, so that the target message transceiving queue groups started by each VF interface correspond one-to-one with the target cores.
S105, the target cores receive and send messages through their corresponding target message transceiving queue groups.
That is, according to the first value determined from the value group, the corresponding number of target cores is started, and each VF interface starts the corresponding number of receive queues and send queues. Among all the receive queues and send queues started for each VF interface, the receive queue and send queue corresponding to the same target core form one target message transceiving queue group, so that the target message transceiving queue groups correspond one-to-one with the target cores.
For example, with the value group containing 1, 2 and 4: if the first value is determined to be 4, then 4 cores are started and each VF interface starts 4 receive queues and 4 send queues; among them, the receive queue and send queue corresponding to the same core are combined into one target message transceiving queue group, so that the 4 cores correspond one-to-one with the 4 target message transceiving queue groups started by each VF interface. If the first value is 2, then 2 cores are started and each VF interface starts 2 receive queues and 2 send queues, which are likewise combined per core into target message transceiving queue groups, so that the 2 cores correspond one-to-one with the 2 target message transceiving queue groups started by each VF interface.
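The grouping described above can be sketched as follows. This is an illustrative model only; the function name and the queue-naming scheme ("RxQ:" plus VF index plus queue index) are hypothetical and do not match any particular hardware numbering:

```python
def build_queue_groups(first_value, vf_interfaces):
    """For each target core, pair one started receive queue and one
    started send queue per VF interface into a target message
    transceiving queue group, so cores and groups map one-to-one."""
    groups = {}  # core index -> list of (vf, rx_queue_name, tx_queue_name)
    for core in range(first_value):
        groups[core] = [(vf, f"RxQ:{vf}{core}", f"TxQ:{vf}{core}")
                        for vf in vf_interfaces]
    return groups
```

With a first value of 2 and two VF interfaces, each of the 2 target cores ends up with one queue group per VF interface, mirroring the correspondence in table 2.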
For example, in the application scenario shown in fig. 2, the electronic device 300 allocates two cores, core0 and core1, to the NFV system and activates two VF interfaces, VF0 and VF1, so that 2 receive queues and 2 send queues are started among the message transceiving queues created in advance for each of VF0 and VF1 (in fig. 2, VF0 starts receive queue 3, receive queue 4, send queue 11 and send queue 12, and VF1 starts receive queue 5, receive queue 6, send queue 13 and send queue 14). The 2 receive queues and 2 send queues started by VF0 correspond to core0 and core1 respectively: receive queue 3 and send queue 11 correspond to core0, and receive queue 4 and send queue 12 correspond to core1. Thus, among the queues started by VF0, receive queue 3 and send queue 11 form one target message transceiving queue group, and receive queue 4 and send queue 12 form another. Similarly, for VF1, receive queue 5 and send queue 13 correspond to core0 and form one target message transceiving queue group, while receive queue 6 and send queue 14 correspond to core1 and form another.
In this way, after the target message transceiving queue groups started on each VF interface correspond one-to-one with the target cores, each target core starts a thread, and that thread is bound to the core's corresponding target message transceiving queue groups. Each target core then receives messages through the receive queue and sends messages through the send queue of its corresponding queue groups, cycling through receiving and sending, so that the NFV system can be started.
As described above, one message transceiving queue group consists of one receive queue and one send queue corresponding to one VF interface. It should therefore be noted that, when a target core transmits and receives messages through its corresponding target message transceiving queue groups, the VF interface that receives a given message may differ from the VF interface that sends it, so the receive queue through which the target core receives a message and the send queue through which it sends that message may not belong to the same target message transceiving queue group.
The binding between a target core's thread and its target message transceiving queue group is described by example using the correspondence of table 2. Illustratively, when the message transceiving queues are created in memory, the receive queue and send queue contained in each message transceiving queue group each have an address in memory. When a target core's thread is bound to a target message transceiving queue group, the memory addresses of the receive queue and send queue contained in that group are sent to the target core. The target core then receives messages according to the memory address of the bound receive queue and sends messages according to the memory address of the bound send queue.
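The address-passing step of the binding can be sketched as follows. The function and callback names are hypothetical, and Python's `id()` merely stands in for a queue's address in memory:

```python
def bind_queue_groups(core_id, groups, send_to_core):
    """Send the in-memory addresses of the receive and send queues in
    each of a target core's queue groups to that core's thread."""
    for rx_queue, tx_queue in groups:
        # id() stands in for the queue's address in memory
        send_to_core(core_id, {"rx_addr": id(rx_queue),
                               "tx_addr": id(tx_queue)})
```

The core's thread would then poll the receive address and write to the send address, without any further lookup of the queue group.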
It should be noted that, when the number of target cores corresponding to the first value is started, the cores allocated to the NFV system other than the target cores have no corresponding message transceiving queue groups with which to transmit or receive messages. As a possible implementation, those remaining cores may be left unstarted, so as to avoid a target message transceiving queue group being accessed concurrently by cores other than its corresponding target core.
Moreover, it is worth noting that the target message transceiving queue groups started on each VF interface correspond one-to-one with the target cores; however, for each target core, the number of target message transceiving queue groups it corresponds to is related to the number of activated VF interfaces. For example, in the application scenario shown in fig. 2, with two VF interfaces activated, core0 corresponds to two target message transceiving queue groups, and so does core1.
Based on the above design, in the data processing method provided in this embodiment of the present application, when the NFV system is started, if the number of cores allocated to the NFV system differs from every value in the value group, a first value is determined from the value group; the number of target cores corresponding to the first value is then started, and each VF interface starts the corresponding number of target message transceiving queue groups, so that the queue groups started by each VF interface correspond one-to-one with the target cores, and the target cores can transmit and receive messages through their corresponding target message transceiving queue groups.
In some application scenarios, when the NFV system processes a packet, a VF interface for receiving the packet may not be the same as a VF interface for sending the packet, for example, in the application scenario shown in fig. 2, taking core0 as an example to process the packet, the packet received by VF0 may be sent by VF1 after being allocated to core0 for processing.
Therefore, optionally, as a possible implementation, the at least two VF interfaces included in the NFV system include a first VF interface and a second VF interface, and the first VF interface and the second VF interface each start a number of target message transceiving queue groups corresponding to the first value.
Thus, when a target core in the NFV system transmits and receives messages through its corresponding target message transceiving queue groups: the target core processes a message received by its receive queue in the target message transceiving queue group started by the first VF interface and then allocates it to its send queue in the target message transceiving queue group started by the second VF interface, so that the message received by the first VF interface is, after being processed by the target core, sent out through the second VF interface.
For example, in the application scenario shown in fig. 2, taking core0 as the target core, VF0 as the first VF interface and VF1 as the second VF interface: when a message is received by receive queue 3 (the queue corresponding to core0 in the target message transceiving queue group started by VF0), core0 processes the message and then allocates it to send queue 13 (the queue corresponding to core0 in the target message transceiving queue group started by VF1), so that the message is sent out by VF1 after being processed by core0.
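The cross-interface forwarding just described can be modelled with a small sketch; the class and method names are hypothetical, and plain deques stand in for hardware receive and send queues:

```python
from collections import deque


class TargetCore:
    """A target core bound to one receive queue and one send queue
    per VF interface (one queue group per interface)."""

    def __init__(self, rx_by_vf, tx_by_vf):
        self.rx_by_vf = rx_by_vf  # vf index -> receive queue (deque)
        self.tx_by_vf = tx_by_vf  # vf index -> send queue (deque)

    def forward(self, in_vf, out_vf, process):
        """Drain in_vf's receive queue, process each message, and
        enqueue the result on out_vf's send queue. in_vf and out_vf
        may be the same interface or different ones."""
        while self.rx_by_vf[in_vf]:
            msg = self.rx_by_vf[in_vf].popleft()
            self.tx_by_vf[out_vf].append(process(msg))
```

Calling `forward(0, 1, ...)` mirrors the fig. 2 example (receive on VF0, send on VF1), while `forward(0, 0, ...)` mirrors the same-interface case described next.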
It is worth noting that in some other possible application scenarios of this embodiment, the target core may instead allocate a message received by its receive queue in the target message transceiving queue group started by the first VF interface, after processing, to its send queue in the same queue group started by the first VF interface; that is, the message received by the first VF interface is also sent out through the first VF interface after being processed by the target core.
For example, in the application scenario shown in fig. 2, again taking core0 as the target core and VF0 as the first VF interface: core0 receives a message in receive queue 3 of the target message transceiving queue group started by VF0, processes it, and allocates it to send queue 11 of the same queue group, so that the message is sent out by VF0 after being processed by core0.
In some application scenarios, the NFV system needs to process not only data messages belonging to services but also certain protocol messages, such as those of the Open Shortest Path First (OSPF) protocol. Generally, to maintain an OSPF connection it is necessary to ensure that messages remain reachable between routers. If the volume of data between devices is large, so that the number of data messages a device receives exceeds its throughput, or if the software in the device cannot process them fast enough, the hello messages used for OSPF keep-alive may be discarded. This causes the OSPF connection to drop, the NFV system can no longer transmit and receive messages normally, and the forwarding service of the NFV system is interrupted.
Therefore, optionally, as a possible implementation, tasks are allocated among all the started target cores of the NFV system, where the started target cores include a control core and forwarding cores.
Also, the data volume of protocol messages is generally small compared with that of data messages. Therefore, in the embodiment of the present application, protocol messages are allocated to the receive queue in the target message transceiving queue group corresponding to the control core, so that the control core processes them, and data messages are allocated to the receive queues in the target message transceiving queue groups corresponding to the forwarding cores, so that the forwarding cores process them.
As a possible implementation, the hardware Flow Director and RSS (Receive Side Scaling) functions on the network interface card may be used to distribute the protocol messages and data messages. However, in the prior art RSS does not support splitting off protocol messages separately, so in the method provided in this embodiment, as a possible implementation, the flow director may direct protocol messages to the receive queue in the target message transceiving queue group corresponding to the control core, while RSS distributes data messages to the receive queues in the target message transceiving queue groups corresponding to the forwarding cores. Protocol messages are thus processed separately by the control core, which ensures that they are not lost and that the NFV system does not go down.
For example, in the schematic diagram shown in fig. 3, assume the NFV system includes 5 cores and each VF interface has 4 receive queues and 4 send queues, and it is determined that 4 cores (core0, core1, core2 and core3 in fig. 3) correspond one-to-one with 4 message transceiving queue groups, with core0 serving as the control core and core1, core2 and core3 serving as forwarding cores. When the NFV system receives messages, the flow director directs protocol messages to the receive queue corresponding to core0, so that protocol messages are processed by core0 alone, and RSS distributes data messages to the receive queues corresponding to core1, core2 and core3, so that data messages are processed jointly by core1, core2 and core3.
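The fig. 3 distribution can be sketched in software terms; this is a behavioural model only (the function name and packet representation are hypothetical), where a direct type check plays the role of the flow director and a hash over a flow key plays the role of RSS:

```python
def dispatch(packet, control_core, forwarding_cores, rx_queues):
    """Route a packet to a core's receive queue: protocol packets go
    to the control core (flow-director behaviour); data packets are
    spread over the forwarding cores by a flow hash (RSS behaviour).
    Returns the chosen core index."""
    if packet["type"] == "protocol":
        core = control_core
    else:
        core = forwarding_cores[hash(packet["flow"]) % len(forwarding_cores)]
    rx_queues[core].append(packet)
    return core
```

Because the hash is computed per flow, all packets of one data flow land on the same forwarding core, while every protocol packet bypasses the hash entirely.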
As a possible implementation, because the flow director and RSS are hardware functions on the network interface card, which is a device distinct from the cores, the following is done so that the flow director can direct protocol messages to the receive queue in the queue group corresponding to the control core and RSS can distribute data messages to the receive queues in the queue groups corresponding to the forwarding cores: when the target cores and the target message transceiving queue groups corresponding in number to the first value are started, the control core sends the correspondence between each target core and each target message transceiving queue group to the network interface card. The network interface card then uses the flow director to direct protocol messages to the receive queue in the target message transceiving queue group corresponding to the control core, and uses RSS to distribute data messages to the receive queues in the target message transceiving queue groups corresponding to the forwarding cores.
It can be understood that in some other possible application scenarios of this embodiment, protocol messages and data messages may be split in other ways. For example, in the application scenario shown in fig. 2, a program instruction for identifying the message type is stored; after a message is sent to memory by the VF interface and before it is allocated to a receive queue, its type is identified by that instruction. If the message is identified as a protocol message, it is allocated to the receive queue in the target message transceiving queue group corresponding to the control core; if it is identified as a data message, it is allocated to a receive queue in a target message transceiving queue group corresponding to a forwarding core. The message type may be identified by matching a specific field of the message; for example, if the EtherType of a message is 0x0806, the message is determined to be an ARP (Address Resolution Protocol) message and thus a protocol message.
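The field-matching classification just described can be sketched as follows. Only the ARP EtherType (0x0806) from the text is used; the function name is hypothetical, and a real implementation would match additional protocol fields:

```python
ETHERTYPE_ARP = 0x0806  # ARP, per the example in the text


def classify(ethertype):
    """Classify a message by its EtherType field: ARP is treated as a
    protocol message, anything else as a data message in this sketch."""
    return "protocol" if ethertype == ETHERTYPE_ARP else "data"
```

A message classified as "protocol" would go to the control core's receive queue, and one classified as "data" to a forwarding core's receive queue.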
It should be noted that, as a possible implementation, when the NFV system is started, S101, S102, S103 and S104 above may be executed by the control core. For example, in the NFV system shown in fig. 3, the system is allocated 5 cores, namely core0, core1, core2, core3 and core4, with core0 as the control core; each VF interface is created with 4 receive queues and 4 send queues in advance, and the number of message transceiving queue groups that can be started may be 1, 2 or 4, so the value group contains 1, 2 and 4. When the NFV system is started, core0 determines that the allocated core count, 5, differs from every value in the value group; according to the core count of 5 and the values in the value group, 4 is illustratively taken as the first value, each VF interface starts 4 receive queues and 4 send queues, and 4 cores are started, for example core0, core1, core2 and core3. The 4 started receive queues and 4 started send queues then correspond one-to-one with core0 through core3, and the receive queue and send queue corresponding to each core are combined pairwise into 4 target message transceiving queue groups, so that each of core0 through core3 has one target message transceiving queue group on each VF interface.
Based on the above design, in the data processing method provided in the embodiments of the present application, protocol messages are allocated separately to the control core for processing and data messages are allocated to the forwarding cores for processing, which ensures that protocol messages are not lost and avoids traffic interruption on the device.
Based on the above, please continue to refer to fig. 1, in the electronic device 300 shown in fig. 1, when the data processing method is implemented:
the processor 302 is configured to, when the NFV system is started, determine a first value in the value group if the number of cores allocated to the NFV system is different from each value in the value group, where the first value is smaller than the number of cores allocated to the NFV system, and each value in the value group indicates the number of message transceiving queue groups that can be started on each VF interface;
the processor 302 is further configured to start a number of target cores corresponding to the first value, and start a number of target packet transceiving queue groups corresponding to the first value for each VF interface, so that each target packet transceiving queue group started corresponding to each VF interface corresponds to each target core one to one;
The processor 302 is further configured to cause the target cores to receive and transmit messages through their respective corresponding target message transceiving queue groups.
Optionally, as a possible implementation manner, when determining the first value in the set of values, the processor 302 is specifically configured to:
if the number of cores allocated for the NFV system is greater than the maximum value in the set of values, the maximum value in the set of values is taken as the first value.
Optionally, as a possible implementation manner, when determining the first value in the set of values, the processor 302 is specifically configured to:
if the number of cores allocated for the NFV system is less than the maximum value in the set of values, the maximum value of all the values in the set of values that are less than the number of cores allocated for the NFV system is taken as the first value.
Optionally, as a possible implementation manner, the at least two VF interfaces include a first VF interface and a second VF interface, and each packet transceiving queue group includes a receiving queue and a sending queue;
the first VF interface and the second VF interface respectively start a target message transceiving queue group with the quantity corresponding to the first numerical value;
when the target core receives and transmits the packet through the corresponding target packet receiving and transmitting queue group, the processor 302 is specifically configured to:
and the target core distributes the message received by the receiving queue corresponding to the target core in the target message receiving and sending queue started by the first VF interface to the target message receiving and sending queue corresponding to the target core in the target message receiving and sending queue started by the second VF interface so as to send the message by the second VF interface.
Optionally, as a possible implementation manner, the target core includes a control core and a forwarding core, and each packet transceiving queue group includes a receiving queue and a sending queue;
the network interface card 304 is configured to allocate the protocol packet to a receive queue in a target packet transceiving queue group corresponding to the control core, so that the control core processes the protocol packet;
the network interface card 304 is further configured to allocate the data packet to a receive queue in the target packet transceiving queue group corresponding to the forwarding core, so that the forwarding core processes the data packet.
Optionally, as a possible implementation, the network interface card 304 includes a flow director and receive side scaling (RSS);
the flow director is configured to direct protocol messages to the receive queue in the target message transceiving queue group corresponding to the control core;
the RSS is configured to allocate the data packet to a receive queue in a target packet transceiving queue group corresponding to the forwarding core.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
To sum up, in the data processing method and electronic device provided in the embodiments of the present application, when the NFV system is started, if the number of cores allocated to the NFV system differs from every value in the value group, a first value is determined from the value group; the number of target cores corresponding to the first value is then started, and each VF interface starts the corresponding number of target message transceiving queue groups, so that the queue groups started by each VF interface correspond one-to-one with the target cores, the target cores can transmit and receive messages through their corresponding queue groups, and the NFV system can be started. Compared with the prior art, the NFV system can thus be started even when the number of cores allocated to it differs from the number of message transceiving queue groups that the VF interfaces can start. In addition, protocol messages are allocated separately to the control core for processing and data messages to the forwarding cores, which ensures that protocol messages are not lost and avoids traffic interruption on the device.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A data processing method, applied to an electronic device storing a network function virtualization (NFV) system, wherein the NFV system includes at least two virtual function (VF) interfaces, each of the at least two VF interfaces has a plurality of packet transceiving queue groups created in advance, the at least two VF interfaces include a first VF interface and a second VF interface, each packet transceiving queue group includes a receive queue and a send queue, the first VF interface and the second VF interface each start a number of target packet transceiving queue groups corresponding to a first value, and the electronic device allocates a plurality of cores to the NFV system, the method comprising:
when the NFV system is started, if the number of cores allocated to the NFV system differs from every value in a value group, determining the first value in the value group, where the first value is smaller than the number of cores allocated to the NFV system, and each value in the value group represents a number of packet transceiving queue groups that each VF interface is able to start;
starting a number of target cores corresponding to the first value, and starting, by each VF interface, a number of target packet transceiving queue groups corresponding to the first value, so that the target packet transceiving queue groups started by each VF interface correspond one-to-one to the target cores; and
distributing, by the target core, a packet received by the receive queue corresponding to the target core in the target packet transceiving queue groups started by the first VF interface to a send queue in a target packet transceiving queue group started by the second VF interface, so that the packet is sent through the second VF interface.
2. The method of claim 1, wherein the step of determining the first value in the value group comprises:
if the number of cores allocated to the NFV system is greater than the maximum value in the value group, taking the maximum value in the value group as the first value.
3. The method of claim 1 or 2, wherein the step of determining the first value in the value group comprises:
if the number of cores allocated to the NFV system is less than the maximum value in the value group, taking, as the first value, the largest of the values in the value group that are less than the number of cores allocated to the NFV system.
4. The method of claim 1, wherein the target cores comprise a control core and a forwarding core, and each target packet transceiving queue group comprises a receive queue and a send queue;
the method further comprising:
distributing a protocol packet to the receive queue in the target packet transceiving queue group corresponding to the control core, so that the control core processes the protocol packet; and
distributing a data packet to the receive queue in the target packet transceiving queue group corresponding to the forwarding core, so that the forwarding core processes the data packet.
5. The method of claim 4, wherein the electronic device further comprises a flow director and a receive-side scaling (RSS) unit;
the flow director directs the protocol packet to the receive queue in the target packet transceiving queue group corresponding to the control core; and
the RSS unit distributes the data packet to the receive queue in the target packet transceiving queue group corresponding to the forwarding core.
6. An electronic device, comprising a processor, a memory, and a network interface card, wherein a network function virtualization (NFV) system is stored in the memory, at least two virtual function (VF) interfaces are virtualized on the network interface card, each of the at least two VF interfaces has a plurality of packet transceiving queue groups created in advance, the at least two VF interfaces include a first VF interface and a second VF interface, each packet transceiving queue group includes a receive queue and a send queue, the first VF interface and the second VF interface each start a number of target packet transceiving queue groups corresponding to a first value, and the electronic device allocates a plurality of cores to the NFV system;
the processor is configured to, when the NFV system is started, determine the first value in a value group if the number of cores allocated to the NFV system differs from every value in the value group, where the first value is smaller than the number of cores allocated to the NFV system, and each value in the value group represents a number of packet transceiving queue groups that each VF interface is able to start;
the processor is further configured to start a number of target cores corresponding to the first value, and start, for each VF interface, a number of target packet transceiving queue groups corresponding to the first value, so that the target packet transceiving queue groups started by each VF interface correspond one-to-one to the target cores; and
the processor is further configured to distribute, by the target core, a packet received by the receive queue corresponding to the target core in the target packet transceiving queue groups started by the first VF interface to a send queue corresponding to the target core in the target packet transceiving queue groups started by the second VF interface, so that the packet is sent through the second VF interface.
7. The electronic device of claim 6, wherein, in determining the first value in the value group, the processor is specifically configured to:
if the number of cores allocated to the NFV system is greater than the maximum value in the value group, take the maximum value in the value group as the first value.
8. The electronic device of claim 6 or 7, wherein, in determining the first value in the value group, the processor is specifically configured to:
if the number of cores allocated to the NFV system is less than the maximum value in the value group, take, as the first value, the largest of the values in the value group that are less than the number of cores allocated to the NFV system.
9. The electronic device of claim 6, wherein the target cores comprise a control core and a forwarding core, and each target packet transceiving queue group comprises a receive queue and a send queue;
the network interface card is configured to distribute a protocol packet to the receive queue in the target packet transceiving queue group corresponding to the control core, so that the control core processes the protocol packet; and
the network interface card is further configured to distribute a data packet to the receive queue in the target packet transceiving queue group corresponding to the forwarding core, so that the forwarding core processes the data packet.
10. The electronic device of claim 9, wherein the network interface card comprises a flow director and a receive-side scaling (RSS) unit;
the flow director is configured to direct the protocol packet to the receive queue in the target packet transceiving queue group corresponding to the control core; and
the RSS unit is configured to distribute the data packet to the receive queue in the target packet transceiving queue group corresponding to the forwarding core.
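The packet-steering scheme of claims 4-5 and 9-10 can be illustrated with the following sketch. It is not the patented implementation: the flow-director match and the RSS hash are toy stand-ins (real hardware matches protocol header fields and hashes the full flow tuple), and all names shown here are hypothetical.

```python
def dispatch_packet(packet, control_core, forwarding_cores, rx_queues):
    """Steer a packet to one core's receive queue: protocol packets go
    to the control core (the flow-director role), while data packets
    are hashed across the forwarding cores (the RSS role)."""
    if packet["is_protocol"]:
        core = control_core                       # flow director: exact-match rule
    else:
        # toy RSS hash: the same flow tuple always maps to the same core,
        # so per-flow packet ordering is preserved on one forwarding core
        idx = (packet["src"] + packet["dst"]) % len(forwarding_cores)
        core = forwarding_cores[idx]
    rx_queues[core].append(packet)
    return core
```

Because protocol packets land only on the control core's queue, they never compete with data traffic on the forwarding cores, which is the basis for the "protocol packets are not lost" property stated in the summary.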
CN201910093781.6A 2019-01-30 2019-01-30 Data processing method and electronic equipment Active CN109672575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910093781.6A CN109672575B (en) 2019-01-30 2019-01-30 Data processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN109672575A CN109672575A (en) 2019-04-23
CN109672575B true CN109672575B (en) 2022-03-08

Family

ID=66150085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910093781.6A Active CN109672575B (en) 2019-01-30 2019-01-30 Data processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN109672575B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110333899B (en) * 2019-06-27 2022-11-01 腾讯科技(深圳)有限公司 Data processing method, device and storage medium
CN111277514B (en) * 2020-01-21 2023-07-18 新华三技术有限公司合肥分公司 Message queue distribution method, message forwarding method and related devices
CN112968965B (en) * 2021-02-25 2022-12-09 网宿科技股份有限公司 Metadata service method, server and storage medium for NFV network node

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2938033A1 (en) * 2014-03-19 2015-09-24 Nec Corporation Reception packet distribution method, queue selector, packet processing device, and recording medium
US9875208B2 (en) * 2014-10-03 2018-01-23 Futurewei Technologies, Inc. Method to use PCIe device resources by using unmodified PCIe device drivers on CPUs in a PCIe fabric with commodity PCI switches
CN106713185B (en) * 2016-12-06 2019-09-13 瑞斯康达科技发展股份有限公司 A kind of load-balancing method and device of multi-core CPU
CN108984327B (en) * 2018-07-27 2020-12-01 新华三技术有限公司 Message forwarding method, multi-core CPU and network equipment
CN109284192B (en) * 2018-09-29 2021-10-12 网宿科技股份有限公司 Parameter configuration method and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant