CN113708965A - High-performance component-based data packet processing framework

High-performance component-based data packet processing framework

Info

Publication number
CN113708965A
CN113708965A (application CN202110971582.8A)
Authority
CN
China
Prior art keywords
data packet
virtual network
network function
vlink
vnode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110971582.8A
Other languages
Chinese (zh)
Other versions
CN113708965B (en)
Inventor
李宗垚
张梦清
彭璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Computer Technology and Applications
Original Assignee
Beijing Institute of Computer Technology and Applications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Computer Technology and Applications filed Critical Beijing Institute of Computer Technology and Applications
Priority to CN202110971582.8A
Publication of CN113708965A
Application granted
Publication of CN113708965B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 - Network analysis or design
    • H04L 41/145 - Network analysis or design involving simulating, designing, planning or modelling of a network
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 - Configuration management of networks or network elements
    • H04L 41/0803 - Configuration setting
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 - Arrangements for monitoring or testing data switching networks
    • H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0805 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters, by checking availability
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/2866 - Architectures; Arrangements
    • H04L 67/30 - Profiles
    • H04L 67/303 - Terminal profiles
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/34 - Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 - Reducing energy consumption in communication networks
    • Y02D 30/50 - Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a high-performance component-based data packet processing framework and belongs to the field of computer networks. The invention provides basic models for virtual network function components built from two component types, Vnode and Vlink, from which virtual network function components can be rapidly developed. A set of simple virtual network function components implemented on Vlink and Vnode is provided, including a packet classifier, layer-2 switching, simple routing and a packet queue; using an ini-style configuration language, a user can compose these basic components into usable network functions without developing any new components and reach performance comparable to dedicated hardware products. The network functions and service chains composed of the virtual network function components were tested with the pktgen packet generation tool; the experimental results show that network functions built with this method achieve high performance and good scalability, and service chains composed of different network functions retain high performance.

Description

High-performance component-based data packet processing framework
Technical Field
The invention belongs to the field of computer networks, and particularly relates to a high-performance component-based data packet processing framework.
Background
Network security has become an important part of national security, and securing a network requires a variety of network security devices to cope with increasingly diverse network attacks.
However, as the number of hardware network security devices grows, several problems emerge. First, as network attacks become more complex and varied, new security devices may have to be introduced to counter each new attack, and it becomes increasingly difficult to find space and suitable power for new devices in a limited network center. Second, more and more security policies must be integrated into dedicated hardware, which greatly increases hardware design difficulty; new attacks appear continuously while hardware development cycles are long, so hardware security devices quickly fall behind the state of attack-and-defense technology and can even hinder the emergence of new security techniques. Third, devices from different manufacturers are operated very differently: using multiple vendors leaves network administrators exhausted from adapting to various network operating systems and neglecting network security itself, while standardizing on a single vendor creates vendor lock-in and therefore more serious security risks. Finally, to guarantee high availability at network peaks, a network center must reserve enough hardware devices, which wastes a great deal of resources when they are idle. The concept of network function virtualization points toward a solution to these problems: it uses standard virtualization technology to decouple software implementations of network functions from the underlying hardware and consolidates software-implemented virtual network functions onto general-purpose hardware devices.
The most important problem in using network function virtualization to move network services from dedicated hardware to software is performance: the order-of-magnitude gap between software and hardware has become the biggest bottleneck for the development of network function virtualization. Fortunately, recent high-speed packet forwarding libraries (DPDK, netmap, PF_RING) and new domestically produced hardware technologies (multi-core/many-core processors) make it possible for software-implemented virtual network functions to approach hardware performance. However, it is difficult for upper-layer application developers to build high-performance, highly available network applications directly on these low-level libraries or hardware; using the libraries and hardware drivers requires a clear understanding of the underlying architecture, so network function developers must possess a certain amount of hardware-architecture knowledge, which slows the process of turning network devices into software.
Another obstacle to implementing network functions in software is translating the requirements of a network service into instances actually deployed on a common server. As with the problems faced by earlier hardware solutions, in existing frameworks turning the underlying high-speed packet forwarding libraries and hardware technologies into a genuinely usable virtual network function requires repeated tuning of a series of parameters such as batch size and degree of parallelism, which seriously hinders deploying virtual network functions on general-purpose hardware. The diverse underlying forwarding libraries and software/hardware drivers bring higher performance but also high complexity, and it is difficult for developers to exploit them to build realistic network functions. Services required by users take a long time to deploy on general-purpose hardware, leaving network function virtualization in a dilemma: it neither reaches the high performance of hardware nor achieves rapid development and deployment of network services.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is how to provide a high-performance component-based data packet processing framework that addresses the inability of existing network function virtualization both to reach the high performance of hardware and to support rapid development and deployment of network services.
(II) Technical solution
In order to solve the above technical problem, the present invention provides a high-performance component-based data packet processing framework comprising a hardware layer, a packet processing and accelerator driver layer, a virtual network function component layer, a virtual network function layer and a network service layer, wherein:
hardware layer: general-purpose hardware provides computing, network, storage and accelerator resources for the other layers;
packet processing and accelerator driver layer: manages hardware resources and provides a high-speed packet processing and management library for upper-layer network applications;
virtual network function component layer: virtual network function components (VNFCs) are divided into two basic models, Vnode and Vlink; a Vnode handles CPU-centric packet operations, while a Vlink describes the association between Vnodes and handles packet flow along Vnode function chains and between Vnodes and drivers; a user can add custom virtual network function components through the interface functions of the Vnode and Vlink templates;
virtual network function layer: a virtual network function consists of one or more virtual network function components; each virtual network function is a software implementation of a network middlebox, and a user builds custom virtual network functions from the components;
network service layer: the network service layer is an integrated function that can be used directly by the service support system, and it describes the orchestration topology and dependency relationships of the virtual network functions.
Further, the Vnode includes the following seven parts:
packet header filter: when a batch of packets passes through the packet header filter, user-defined packet header fields are extracted as packet annotations and passed to the querier for processing, while the complete packets are cached in a memory buffer;
rule table: the rule table is an implementation of a flow table; its keys are arbitrary combinations of packet information, and the action ID is a unique identifier that indexes a predefined action method body;
querier: the querier looks up the rule table using specific packet header fields and finds the matching action ID, which corresponds to a predefined packet processing action;
action table: the action table stores the one-to-one correspondence between action IDs and action method bodies;
action executor: the action executor traverses the predefined rule table to match action IDs for a batch of packets, executes the method bodies matched to those IDs over the whole batch, and then marks the packet annotations according to the execution results;
distributor: the distributor determines which connection point a packet should pass through; it checks the tag on the packet annotation and redirects the packet flow so that the packet passes through the corresponding connection point;
connection points: connection points are the start and end points of a VNFC; each VNFC may have one or more input connection points and three or more output connection points.
Further, the keys in the rule table are any combination of packet information, namely Macda (destination MAC address), Vlan, IPsa (source IP address) and payload.
Further, each VNFC has exactly one rule table; when the rule table is empty, the VNFC uses the default rule, i.e., packets bypass this VNFC.
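For illustration only, the sketch below shows one way such a rule entry and its default-rule lookup could be expressed in C; the field layout, sizes and function names are assumptions made for readability, not data structures published in the patent.

#include <stdint.h>
#include <string.h>

/* Hypothetical rule-table entry: the key is a combination of packet
 * information (destination MAC, VLAN ID, source IP), and the action ID
 * indexes a predefined action method body in the action table. */
struct rule_entry {
    uint8_t  macda[6];   /* destination MAC address */
    uint16_t vlan;       /* VLAN identifier */
    uint32_t ipsa;       /* source IPv4 address */
    uint32_t action_id;  /* index into the action table */
};

/* Linear lookup over a small rule table; a real implementation would more
 * likely use a hash table. Returns the matching action ID, or default_action
 * when the table is empty or no rule matches, i.e. the packet bypasses this
 * VNFC. */
static uint32_t rule_table_lookup(const struct rule_entry *table, size_t n,
                                  const uint8_t macda[6], uint16_t vlan,
                                  uint32_t ipsa, uint32_t default_action)
{
    for (size_t i = 0; i < n; i++) {
        if (memcmp(table[i].macda, macda, 6) == 0 &&
            table[i].vlan == vlan && table[i].ipsa == ipsa)
            return table[i].action_id;
    }
    return default_action;
}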
Further, only three types of labels are placed on a packet: discard, forward to Vlink, and forward to Vnode.
Further, the connection points are registered in a configuration file; each connection point has a unique identifier that specifies its details; a connection point is a method interface, and connections between connection points are method calls.
Further, the Vlink includes three parts:
receive/output ring queues: the receive/output ring queues are implemented on top of a packet processing framework and achieve zero-copy packet processing through the shared memory management provided by the framework and the direct memory access (DMA) technology supported by the CPU; when a Vlink interacts with hardware, the receive/output ring queues are implemented in a half-software, half-hardware manner using the input/output queues provided by the hardware driver;
offload method: the offload method offloads packet payloads arriving from the Vlink's egress queue into a queue provided by a hardware driver or software library, receives the packets processed by the hardware or software library by polling, and places them into the receive queue;
connection points: a Vlink has only two connection points, one connecting to the outputs of a limited number of predecessor nodes and one connecting to the input of the successor node.
Further, a user configures the network services required by a specific business from the bottom up using an ini-style configuration file: first, the user explicitly selects component types from the virtual network component library; then hardware resources are allocated to each component; finally, the connection relationships between the components' connection points are determined, and this group of configuration items serves as the descriptor of the virtual network function.
Furthermore, the framework combines pipelining with run-to-completion: within an individual virtual network function, all execution logic runs to completion on the same CPU core, while different virtual network functions, executing different packet processing logic, are deployed on different cores in a pipeline.
Further, the framework forms a usable network by parsing the network service description file and connecting the virtual network function components, with the following specific steps:
checking input parameters: check whether the parameters parsed from the configuration file are empty and whether the numbers of input and output ports are reasonable and match;
creating a virtual network function: register the identifier of the virtual network function and allocate hardware resources to it according to the configuration file;
creating a Vlink component: register the identifier of the Vlink component, and call the Vlink creation function with the batch size parameter from the configuration file to allocate resources for the Vlink component and initialize it;
creating a Vnode component: register the identifier of the Vnode component, initialize and populate the rule table using the Vnode component's rule-table creation method according to the rule-table file path in the configuration file, and call the Vnode creation method with the hexadecimal packet header mask from the configuration file to complete the creation of the packet header filter, action table, action executor and distributor;
connecting input and output ports: the connection points of the connected Vnode and Vlink are registered with each other;
starting the network service: create and start a default control node, which communicates with the virtual network functions through a message queue.
(III) Advantageous effects
Compared with the prior art, the invention provides basic models for virtual network function components built from two component types, Vnode and Vlink, from which virtual network function components can be rapidly developed. A set of simple virtual network function components implemented on Vlink and Vnode is provided, including a packet classifier, layer-2 switching, simple routing and a packet queue; using an ini-style configuration language, a user can compose these basic components into usable network functions without developing any new components and reach performance comparable to dedicated hardware products. The network functions and service chains composed of the virtual network function components were tested with the pktgen packet generation tool: network functions built with this method achieve high performance and good scalability, and service chains composed of different network functions retain high performance.
Drawings
FIG. 1 is an overall architecture diagram of the present invention;
FIG. 2 is a view of the structure of the Vnode model of the present invention;
FIG. 3 is a block diagram of the Vlink model of the present invention;
FIG. 4 compares the throughput of the present invention with that of the Linux protocol stack;
FIG. 5 illustrates the throughput of the present invention in a single-port network card scenario;
FIG. 6 illustrates the throughput of the present invention in a full-port network card scenario;
FIG. 7 illustrates the throughput of the different network functions implemented by the present invention.
Detailed Description
In order to make the objects, contents and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
The invention aims to provide a data packet processing framework that uses existing software and hardware technologies to rapidly convert existing security network service requirements into network functions deployed on general-purpose hardware, with performance equivalent to hardware network devices.
1. Overall framework
FIG. 1 is a diagram of the overall architecture of the present invention; as shown in FIG. 1, the overall architecture comprises the following five layers:
Hardware layer: general-purpose hardware provides computing, network, storage and accelerator resources for the other layers.
Packet processing and accelerator driver layer: manages hardware resources and provides a high-speed packet processing and management library for upper-layer network applications.
Virtual network function component layer: a virtual network function component (VNFC) is a constituent part of a virtual network function and implements a subset of that function. We divide virtual network function components into two basic models, Vnode and Vlink. A Vnode handles CPU-centric packet operations, such as rule-table lookup and packet header parsing; a Vlink describes the association between Vnodes and mainly handles packet flow along Vnode function chains and between Vnodes and drivers (network card drivers and acceleration engine drivers). A user can add custom virtual network function components through the interface functions of the Vnode and Vlink templates.
Virtual network function layer: a virtual network function may be composed of one or more virtual network function components. Each virtual network function is a software implementation of a real-world network middlebox. A user can build custom virtual network functions from the components through system configuration.
Network service layer: the network service layer is an integrated function that can be used directly by the service support system, and it describes the orchestration topology and dependency relationships of the virtual network functions.
2. Vnode model
The Vnode encapsulates the rule-table lookup process. A typical Vnode, as shown in FIG. 2, comprises the following seven parts (a code sketch of how they cooperate follows the list):
Packet header filter: when a batch of packets passes through the packet header filter, user-defined packet header fields are extracted as packet annotations and passed to the querier for processing, while the complete packets are cached in a memory buffer.
Rule table: a rule table is one implementation of a flow table. The keys in the rule table may be any combination of packet information, such as Macda (destination MAC address), Vlan, IPsa (source IP address), payload, etc. The action ID is a unique identifier that indexes a predefined action method body. Each VNFC has exactly one rule table, and when the rule table is empty the VNFC uses the default rule, i.e., packets bypass this VNFC.
Querier: the querier looks up the rule table using specific packet header fields and finds the matching action IDs, which correspond to predefined packet processing actions.
Action table: the action table stores the one-to-one correspondence between action IDs and action method bodies.
Action executor: the action executor traverses the predefined rule table to match action IDs for a batch of packets, then executes the method bodies matched to those IDs over the whole batch and marks the packet annotations according to the execution results. In our design, only three types of labels are placed on a packet: discard, forward to Vlink, and forward to Vnode.
Distributor: the distributor decides which connection point each packet should go through; it examines the tag on the packet annotation and redirects the packet flow so that the packet passes through the corresponding connection point.
Connection points: connection points are the start and end points of a VNFC. Each VNFC may have one or more input connection points and three or more output connection points. Before using connection points, we must register them in a configuration file; each connection point has a unique identifier that specifies its details. Typically, a connection point is a method interface, and connections between connection points are method calls.
3. Vlink model
The Vlink sends packets from the send queue of the predecessor node to the receive queue of the successor node, and encapsulates the offloading of packet payloads onto the accelerator driver behind a uniformly defined interface. As shown in FIG. 3, a Vlink mainly comprises three parts (a code sketch follows the list):
Receive/output ring queues: the receive/output ring queues are implemented on top of a packet processing framework. They achieve zero-copy packet processing using the shared memory management provided by the packet processing framework and the direct memory access (DMA) technology supported by the CPU. When a Vlink is used to interact with hardware, the receive/output ring queues are implemented in a half-software, half-hardware manner using the input/output queues provided by the hardware driver.
Offload method: the offload method offloads packet payloads arriving from the Vlink's egress queue into a queue provided by a hardware driver or software library, receives the packets processed by the hardware or software library by polling, and places them into the receive queue (the receive queue seen by the packet processing logic).
Connection points: the connection points in a Vlink are similar to those in a Vnode; the only difference is that a Vlink has exactly two, one connecting to the outputs of a limited number of predecessor nodes and one connecting to the input of the successor node.
4. Configuring network services
We prefabricate a series of basic components, including layer-2 switching, layer-3 switching, routing, a firewall, a security gateway and the like implemented on the Vnode model, and multi-queue network card channels, encryption offload queues, accelerator offload queues, software buffer queues and the like implemented on the Vlink model. The user configures the network service required by a specific business from the bottom up using an ini-style configuration file. First, the user explicitly selects component types from the virtual network component library. Then hardware resources such as network cards, CPUs and accelerators are allocated to each component; the user can configure the components according to hardware resource affinity to obtain better performance. Finally, the connection relationships between the components' connection points are determined, and this group of configuration items serves as the descriptor of the virtual network function.
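For illustration, an ini-style descriptor for a single virtual network function might look like the snippet below. The section names, keys, file path and mask value are assumptions made for readability, since the patent does not publish the concrete configuration schema.

; Hypothetical descriptor: one IP-router VNF fed by a NIC queue (key names assumed)
[vnf.router0]
core        = 2                      ; CPU core, chosen with hardware affinity in mind

[vlink.nic0]
type        = nic_queue              ; Vlink backed by a multi-queue network card channel
port        = 0
queue       = 0
batch_size  = 32

[vnode.route]
type        = simple_route
rule_table  = /etc/vnf/route_rules.tbl
header_mask = 0xFFFFFFFFFFFF         ; hexadecimal packet header mask

[connections]
nic0.out    = route.in0              ; connection points wired by identifier
route.out0  = nic0.in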
The framework of this method combines pipelining with run-to-completion. For an individual virtual network function, all execution logic runs to completion on the same CPU core, which makes full use of cached data and facilitates operations such as data prefetching. Between virtual network functions, a pipeline is used: different virtual network functions execute different packet processing logic and are deployed on different cores, which lets a network administrator independently optimize the execution logic of each virtual network function and yields better overall performance. At the same time, deploying multiple virtual network functions on multiple cores as a pipeline reduces switching between threads and further improves performance. Based on this packet forwarding model, an actual network service is divided into virtual network functions connected to each other through a network function connection graph, and a complete network service can be built simply by configuring the connection relationships of the virtual network functions. We call the ini configuration file that describes the complete network function the network service description.
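A sketch of this execution model, under the same hypothetical ring API as above: each virtual network function is a run-to-completion loop pinned to one core (affinity setup omitted), and the Vlink rings between the loops form the pipeline across cores. Names and signatures are assumptions, not the framework's real interface.

#include <stdbool.h>
#include <stddef.h>

#define BATCH 32

struct packet;
struct ring;
size_t ring_dequeue_burst(struct ring *r, struct packet **pkts, size_t max);
size_t ring_enqueue_burst(struct ring *r, struct packet **pkts, size_t n);

/* A VNF owns an ordered chain of Vnodes and runs them to completion on one core. */
struct vnf {
    struct ring *in;                               /* Vlink from the previous pipeline stage */
    struct ring *out;                              /* Vlink to the next pipeline stage */
    size_t (*run_chain)(struct packet **, size_t); /* all Vnode logic, run to completion */
    volatile bool stop;
};

/* Worker body for one pipeline stage; one such thread is pinned per CPU core,
 * so packets flow core-to-core through the Vlink rings. */
static void *vnf_worker(void *arg)
{
    struct vnf *f = arg;
    struct packet *batch[BATCH];

    while (!f->stop) {
        size_t n = ring_dequeue_burst(f->in, batch, BATCH);
        if (n == 0)
            continue;                      /* busy-poll; data stays cache-hot */
        n = f->run_chain(batch, n);        /* entire VNF logic on this core */
        ring_enqueue_burst(f->out, batch, n);
    }
    return NULL;
}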
5. Running network services
The framework forms a usable network by parsing the network service description file and connecting the virtual network function components, with the following specific steps (a code sketch follows the list):
Checking input parameters: check whether the parameters parsed from the configuration file are empty and whether the numbers of input and output ports are reasonable and match.
Creating a virtual network function: register the identifier of the virtual network function, and allocate hardware resources such as CPU, network card and accelerator to it according to the configuration file.
Creating a Vlink component: register the identifier of the Vlink component, and call the Vlink creation function with the batch size parameter from the configuration file (32 by default if not set) to allocate resources for the Vlink component and initialize it.
Creating a Vnode component: register the identifier of the Vnode component. The action table in the Vnode is not currently open to user customization; the default action table has only three modes: discard the packet, send the packet to a port, and send the packet to the rule table, which is sufficient for most network functions. The rule table is initialized and populated using the Vnode component's rule-table creation method according to the rule-table file path in the configuration file. The Vnode creation method is then called with the hexadecimal packet header mask from the configuration file to complete the creation of the packet header filter, action table, action executor and distributor.
Connecting input and output ports: the connection points of the connected Vnode and Vlink are registered with each other.
Starting the network service: the current framework provides a simple command-line tool that uses the control node to control the virtual network functions (change policy, start, stop, etc.) at run time.
6. Framework performance testing
Two servers are connected directly: one runs pktgen to generate 64 B, 128 B, 256 B, 512 B and 1024 B packets as the test traffic source, and the other runs the program under test. Each server has four CPU nodes and two full-duplex dual-port 40 Gb network cards, and runs a Linux system.
Scenario 1: comparison of the present invention with the Linux protocol stack
The test server runs, in turn, the Linux network protocol stack and an IP router built with the present invention. As shown in FIG. 4, in the best case the throughput of the Linux network protocol stack is only 6.3 Gb/s, whereas the IP router built with the present invention already reaches 6.8 Gb/s at a 64 B packet size and reaches the line rate of the network card when the packet size reaches 1024 B. Compared with the Linux network protocol stack, the performance of the IP router built with the present invention is improved by a factor of 8 to 10.
Scenario 2: performance and scalability of the present invention
The test server runs the IP router built with the present invention. The first test runs a single IP router instance, with the server using a single network card port and a single CPU node; the second test runs four IP router instances, with the server using four network card ports and all four CPU nodes. As shown in FIG. 5, when the packet size reaches 512 B the throughput of the IP router reaches the rate at which the data source generates packets, and as the packet size continues to increase the processing speed of the IP router reaches the line rate of the network card port. Statistics indicate that the average packet size on the network is 744 B, so in practical use the IP router can guarantee forwarding at network card line rate. As shown in FIG. 6, when the IP router scales from a single network card port to multiple ports and from a single core to multiple cores, its performance is not affected and it reaches network card line rate at a 512 B packet size; owing to a limitation of the network card model, when both ports of a single network card are used simultaneously the line rate reaches only 50 Gb/s.
Scenario 3: performance of different network functions and service chains implemented with the present invention
We use the framework of the invention to build an IP router (RT), a firewall (FW), a flow classifier (FC) and a layer-2 switch (L2fwd), which are common in networks. We first test the performance of these network functions running independently: two instances are created for each network function and configured to run on different CPU cores of the same node, with each instance handling one port of that CPU node's dual-port network card. Then eight instances of the four network functions (on eight CPU cores) are connected in series into two service chains (FW-FC-RT-L2fwd), each responsible for one port of the node's dual-port network card. As can be seen from FIG. 7, a single network function essentially reaches network card line rate when processing 512 B packets, and when the packet size exceeds 512 B its packet-processing capability essentially matches the rate at which the network card can receive and send packets. FIG. 7 also shows that the performance of the chained network functions is only slightly lower than that of the worst-performing network function in the chain. Routing a packet from one network function to another within the application is done by passing pointers rather than copying the whole packet, so this step incurs only a small communication overhead.
Compared with the prior art, the invention provides basic models for virtual network function components built from two component types, Vnode and Vlink, from which virtual network function components can be rapidly developed. A set of simple virtual network function components implemented on Vlink and Vnode is provided, including a packet classifier, layer-2 switching, simple routing and a packet queue; using an ini-style configuration language, a user can compose these basic components into usable network functions without developing any new components and reach performance comparable to dedicated hardware products. The network functions and service chains composed of the virtual network function components were tested with the pktgen packet generation tool: network functions built with this method achieve high performance and good scalability, and service chains composed of different network functions retain high performance.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A high-performance component-based data packet processing framework, characterized by comprising a hardware layer, a packet processing and accelerator driver layer, a virtual network function component layer, a virtual network function layer and a network service layer, wherein:
hardware layer: general-purpose hardware provides computing, network, storage and accelerator resources for the other layers;
packet processing and accelerator driver layer: manages hardware resources and provides a high-speed packet processing and management library for upper-layer network applications;
virtual network function component layer: virtual network function components (VNFCs) are divided into two basic models, Vnode and Vlink; a Vnode handles CPU-centric packet operations, while a Vlink describes the association between Vnodes and handles packet flow along Vnode function chains and between Vnodes and drivers; a user can add custom virtual network function components through the interface functions of the Vnode and Vlink templates;
virtual network function layer: a virtual network function consists of one or more virtual network function components; each virtual network function is a software implementation of a network middlebox, and a user builds custom virtual network functions from the components;
network service layer: the network service layer is an integrated function that can be used directly by the service support system, and it describes the orchestration topology and dependency relationships of the virtual network functions.
2. The high-performance component-based data packet processing framework of claim 1, wherein the Vnode comprises the following seven parts:
packet header filter: when a batch of packets passes through the packet header filter, user-defined packet header fields are extracted as packet annotations and passed to the querier for processing, while the complete packets are cached in a memory buffer;
rule table: the rule table is an implementation of a flow table; its keys are arbitrary combinations of packet information, and the action ID is a unique identifier that indexes a predefined action method body;
querier: the querier looks up the rule table using specific packet header fields and finds the matching action ID, which corresponds to a predefined packet processing action;
action table: the action table stores the one-to-one correspondence between action IDs and action method bodies;
action executor: the action executor traverses the predefined rule table to match action IDs for a batch of packets, executes the method bodies matched to those IDs over the whole batch, and then marks the packet annotations according to the execution results;
distributor: the distributor determines which connection point a packet should pass through; it checks the tag on the packet annotation and redirects the packet flow so that the packet passes through the corresponding connection point;
connection points: connection points are the start and end points of a VNFC; each VNFC may have one or more input connection points and three or more output connection points.
3. The high-performance component-based data packet processing framework of claim 2, wherein the keys in the rule table are any combination of packet information, namely Macda (destination MAC address), Vlan, IPsa (source IP address) and payload.
4. The high-performance component-based data packet processing framework of claim 2, wherein each VNFC has exactly one rule table, and when the rule table is empty the VNFC uses the default rule, i.e., packets bypass this VNFC.
5. The high-performance component-based data packet processing framework of claim 2, wherein only three types of labels are placed on a packet: discard, forward to Vlink, and forward to Vnode.
6. The high-performance component-based data packet processing framework of claim 2, wherein the connection points are registered in a configuration file, each connection point has a unique identifier that specifies its details, a connection point is a method interface, and connections between connection points are method calls.
7. The high-performance component-based data packet processing framework of any one of claims 1 to 6, wherein the Vlink comprises three parts:
receive/output ring queues: the receive/output ring queues are implemented on top of a packet processing framework and achieve zero-copy packet processing through the shared memory management provided by the framework and the direct memory access (DMA) technology supported by the CPU; when a Vlink interacts with hardware, the receive/output ring queues are implemented in a half-software, half-hardware manner using the input/output queues provided by the hardware driver;
offload method: the offload method offloads packet payloads arriving from the Vlink's egress queue into a queue provided by a hardware driver or software library, receives the packets processed by the hardware or software library by polling, and places them into the receive queue;
connection points: a Vlink has only two connection points, one connecting to the outputs of a limited number of predecessor nodes and one connecting to the input of the successor node.
8. The high-performance component-based data packet processing framework of claim 7, wherein a user configures the network services required by a specific business from the bottom up using an ini-style configuration file: first, the user explicitly selects component types from the virtual network component library; then hardware resources are allocated to each component; finally, the connection relationships between the components' connection points are determined, and this group of configuration items serves as the descriptor of the virtual network function.
9. The high-performance component-based data packet processing framework of claim 8, wherein the framework combines pipelining with run-to-completion: within an individual virtual network function, all execution logic runs to completion on the same CPU core, while different virtual network functions, executing different packet processing logic, are deployed on different cores in a pipeline.
10. The high-performance component-based data packet processing framework of claim 9, wherein the framework forms a usable network by parsing the network service description file and connecting the virtual network function components, with the following specific steps:
checking input parameters: check whether the parameters parsed from the configuration file are empty and whether the numbers of input and output ports are reasonable and match;
creating a virtual network function: register the identifier of the virtual network function and allocate hardware resources to it according to the configuration file;
creating a Vlink component: register the identifier of the Vlink component, and call the Vlink creation function with the batch size parameter from the configuration file to allocate resources for the Vlink component and initialize it;
creating a Vnode component: register the identifier of the Vnode component, initialize and populate the rule table using the Vnode component's rule-table creation method according to the rule-table file path in the configuration file, and call the Vnode creation method with the hexadecimal packet header mask from the configuration file to complete the creation of the packet header filter, action table, action executor and distributor;
connecting input and output ports: the connection points of the connected Vnode and Vlink are registered with each other;
starting the network service: create and start a default control node, which communicates with the virtual network functions through a message queue.
CN202110971582.8A 2021-08-24 2021-08-24 High-performance component-based data packet processing system Active CN113708965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110971582.8A CN113708965B (en) 2021-08-24 2021-08-24 High-performance component-based data packet processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110971582.8A CN113708965B (en) 2021-08-24 2021-08-24 High-performance component-based data packet processing system

Publications (2)

Publication Number Publication Date
CN113708965A true CN113708965A (en) 2021-11-26
CN113708965B CN113708965B (en) 2023-04-07

Family

ID=78654212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110971582.8A Active CN113708965B (en) 2021-08-24 2021-08-24 High-performance component-based data packet processing system

Country Status (1)

Country Link
CN (1) CN113708965B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170324612A1 (en) * 2014-06-25 2017-11-09 Hewlett Packard Enterprise Development Lp Network function virtualization
CN104506620A (en) * 2014-12-23 2015-04-08 西安电子科技大学 Extensible automatic computing service platform and construction method for same
CN108292245A (en) * 2015-11-24 2018-07-17 Nec实验室欧洲有限公司 For managing and the method and network of layout virtual network function and network application
WO2017092568A1 (en) * 2015-11-30 2017-06-08 华为技术有限公司 Configuration method and device for virtual network function
CN108667777A (en) * 2017-03-31 2018-10-16 华为技术有限公司 A kind of service chaining generation method and network function composer NFVO
CN109150567A (en) * 2017-06-19 2019-01-04 中兴通讯股份有限公司 Monitoring method, equipment and the readable storage medium storing program for executing of virtual network function module
CN107911158A (en) * 2017-09-27 2018-04-13 西安空间无线电技术研究所 A kind of method of service architecture and offer service based on virtual data plane
US20210250299A1 (en) * 2020-02-07 2021-08-12 Huazhong University Of Science And Technology Container-based network functions virtualization platform
CN111683074A (en) * 2020-05-29 2020-09-18 国网江苏省电力有限公司信息通信分公司 NFV-based secure network architecture and network security management method
CN112104491A (en) * 2020-09-04 2020-12-18 中国电子科技集团公司第二十研究所 Service-oriented network virtualization resource management method
CN112306628A (en) * 2020-10-12 2021-02-02 上海交通大学 Virtual network function resource management framework based on multi-core server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG Aijun et al., "Network Function Virtualization: Virtualization-Based Middleboxes", ZTE Technology Journal (《中兴通讯技术》) *

Also Published As

Publication number Publication date
CN113708965B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Vieira et al. Fast packet processing with ebpf and xdp: Concepts, code, challenges, and applications
CN108475244B (en) Accelerating network packet processing
CN111371779B (en) Firewall based on DPDK virtualization management system and implementation method thereof
CN105721535B (en) For carrying out calculating equipment, method and the machine readable storage medium of parallel processing to the service function in service function chain
CN108833299B (en) Large-scale network data processing method based on reconfigurable switching chip architecture
CN102334112B (en) Method and system for virtual machine networking
US8806058B1 (en) Packet forwarding path programming using a high-level description language
US8953584B1 (en) Methods and apparatus for accessing route information in a distributed switch
Chen et al. P4SC: Towards high-performance service function chain implementation on the P4-capable device
WO2024016927A1 (en) Programmable network element compiling system and compiling method
Jouet et al. Bpfabric: Data plane programmability for software defined networks
US9077659B2 (en) Packet routing for embedded applications sharing a single network interface over multiple virtual networks
US20130208722A1 (en) Packet routing with analysis assist for embedded applications sharing a single network interface over multiple virtual networks
CN112166579A (en) Multi-server architecture cluster providing virtualized network functionality
Eran et al. Design patterns for code reuse in HLS packet processing pipelines
RU2584471C1 (en) DEVICE FOR RECEIVING AND TRANSMITTING DATA WITH THE POSSIBILITY OF INTERACTION WITH OpenFlow CONTROLLER
CN113708965B (en) High-performance component-based data packet processing system
Mariño et al. Loopback strategy for in-vehicle network processing in automotive gateway network on chip
Wang et al. OXDP: Offloading XDP to SmartNIC for Accelerating Packet Processing
CN107103058A (en) Big data service combining method and composite service combined method based on Artifact
Ruf et al. A scalable high-performance router platform supporting dynamic service extensibility on network and host processors
JP2004520641A (en) Event bus architecture
Lixin et al. Software-Defined Protocol Independent Parser based on FPGA
Han System design for software packet processing
Shen et al. Paragraph: Subgraph-level network function composition with delay balanced parallelism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant