CN116803067A - Communication method and system supporting multiple protocol stacks

Publication number: CN116803067A
Application number: CN202180090912.0A
Authority: CN (China)
Prior art keywords: message, memory, protocol stack, data plane, controller
Legal status: Pending
Inventor: 屈明广
Assignee (original and current): Huawei Technologies Co., Ltd.
Other languages: Chinese (zh)

Classifications

  • Computer And Data Communications (AREA)

Abstract

Embodiments of this application provide a communication method and system supporting multiple protocol stacks. In the method, a network port controller forwards each received message to the corresponding protocol stack for processing according to the message type: management plane messages are processed by an Ethernet protocol stack, and data plane messages are processed by a data plane protocol stack. By supporting both the Ethernet protocol stack and the data plane protocol stack on the same network port, the latency requirements of data plane messages can be met while compatibility is preserved, enabling high-performance transmission of data plane messages.

Description

Communication method and system supporting multiple protocol stacks

Technical Field
Embodiments of this application relate to the field of communications, and in particular to a communication method and system supporting multiple protocol stacks.
Background
In the field of autonomous driving, an autonomous driving system typically includes, but is not limited to, a plurality of sensors, the communication interconnect bus hardware, a core processor, and an autonomous driving software system running on the core processor. While the system is running, data from the various sensors must be fed into the autonomous driving software system over the corresponding communication buses. Illustratively, based on communication performance requirements, this data can optionally be divided into two categories: data plane communication and management plane communication.
Data plane communication: data generated by sensors outside the SoC (System on Chip), such as Lidar (laser radar) and Radar (for example, millimeter-wave radar). Such data may be used directly by the various algorithms in the autonomous driving software system. Because the data volume is large, the bandwidth requirements are generally high (for example, bandwidths above 2 Gbps are required).
Management plane communication: management plane services such as device configuration, device state monitoring, and compressed image transmission that need to run on the SoC. These services have relatively low communication performance requirements, but they require a user programming interface and network device management functionality that are compatible with POSIX (Portable Operating System Interface).
Data plane communication data places very high real-time requirements on the autonomous driving system: if external sensor data cannot be delivered to the autonomous driving algorithms within a bounded time, the safety and reliability of autonomous driving are directly compromised. If an Ethernet protocol stack (also referred to as a standard protocol stack) and an Ethernet port driver are used to receive data plane communication data, every data packet must traverse the heavyweight Ethernet protocol stack, which additionally incurs data copying and the complex message-processing overhead of that stack. The Ethernet protocol stack therefore cannot meet the autonomous driving system's hard requirement for deterministic transmission latency of data plane communication data.
Disclosure of Invention
To solve the above technical problems, embodiments of this application provide a communication method and system supporting multiple protocol stacks. In the method, the network port controller forwards messages of different types to the corresponding protocol stacks for processing according to the message type, achieving fast processing of data plane messages while preserving compatibility for management plane messages.
In a first aspect, an embodiment of this application provides a communication system supporting multiple protocol stacks. The communication system comprises a network port controller, an Ethernet protocol stack, and a data plane protocol stack. The network port controller is configured to determine that a received first message is a management plane message and to output the first message to the Ethernet protocol stack. The Ethernet protocol stack is configured to, in response to receiving the first message, output the first message to a first application. The network port controller is further configured to determine that a received second message is a data plane message and to store the second message in a first memory. The data plane protocol stack is configured to parse the second message in the first memory to obtain the location information of a specified field of the second message in the first memory, and to output that location information to a second application, so that the second application obtains the specified field from the first memory according to the location information. The communication system in this embodiment can therefore dispatch messages of different types to the corresponding protocol stacks: management plane messages are handled by the standard Ethernet protocol stack, while data plane messages are transmitted and received by the data plane protocol stack, so that the transmission requirements of the different messages can be met. Management plane messages are not latency-sensitive but must remain compatible, so processing them with the Ethernet protocol stack satisfies their compatibility requirement. Data plane messages, by contrast, are latency-sensitive; to let them reach the application quickly, the data plane protocol stack processes them in a simplified way, which satisfies their latency requirement. In addition, the communication system allows a single network port controller to support multiple protocol stacks, improving the utilization of network port resources: by running two communication stacks concurrently on the same network port, the system meets its high-performance requirements while preserving compatibility.
Illustratively, processing a management plane message with the Ethernet protocol stack requires at least two copies of the data.
Illustratively, the ethernet protocol stack is compatible with the POSIX user programming interface and network device management functions.
The data plane protocol stack may be, for example, the UIO protocol stack in the following embodiments.
Illustratively, the specified field is optionally a data field.
Illustratively, the location information optionally includes a start address of the data field and length information.
By way of example, the communication system may include a plurality of data plane protocol stacks, each data plane protocol stack corresponding to one or more applications.
The first message and the second message may come from the same external device (for example, a radar) or from different external devices; this application does not limit this.
In one possible implementation, the network port controller contains a correspondence between feature fields and message types and is specifically configured to: determine, based on that correspondence, that the message type corresponding to the feature field of the first message is a management plane message. The network port controller in this embodiment can thus dispatch messages by type, sending management plane messages to the Ethernet protocol stack and data plane messages to the data plane protocol stack for processing.
For example, the network port controller may include a hardware flow table that records the correspondence between feature fields and message types.
In one possible implementation, the network port controller contains a correspondence between feature fields and message types and is specifically configured to: determine, based on that correspondence, that the message type corresponding to the feature field of the second message is a data plane message. As above, the network port controller can thus dispatch messages by type, sending management plane messages to the Ethernet protocol stack and data plane messages to the data plane protocol stack for processing.
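As a rough illustration, the feature-field lookup described in the two implementations above can be sketched as follows. The table layout, field names, and the fallback policy are assumptions for illustration only; the application merely states that the network port controller records a correspondence between feature fields (for example, the five-tuple, as in the hardware flow table mentioned above) and message types.

    #include <stdint.h>

    enum msg_type { MSG_MGMT_PLANE, MSG_DATA_PLANE };

    struct five_tuple {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  proto;
    };

    struct flow_entry {
        struct five_tuple key;   /* feature field */
        enum msg_type     type;  /* message type bound to that field */
    };

    static int tuple_eq(const struct five_tuple *a, const struct five_tuple *b)
    {
        return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
               a->src_port == b->src_port && a->dst_port == b->dst_port &&
               a->proto == b->proto;
    }

    /* Returns the message type for a received message; unmatched traffic is
     * treated here as management plane so it falls back to the Ethernet stack
     * (the fallback choice is an assumption). */
    static enum msg_type classify(const struct flow_entry *table, int n,
                                  const struct five_tuple *pkt)
    {
        for (int i = 0; i < n; i++)
            if (tuple_eq(&table[i].key, pkt))
                return table[i].type;
        return MSG_MGMT_PLANE;
    }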
In one possible implementation, the network port controller contains a first hardware queue that corresponds to the management plane protocol stack, and the network port controller is specifically configured to place the received first message in the first hardware queue after determining that the first message is a management plane message. The network port controller in this embodiment can thus place each message in the queue bound to the corresponding protocol stack, so that the same network port is multiplexed, i.e. a single network port can support the transmission and reception of messages for multiple protocol stacks. For example, the network port controller may place the management plane message in the first hardware queue and forward the messages in the first hardware queue to the Ethernet protocol stack bound to that queue for processing.
In one possible implementation, the network port controller is specifically configured to output the first message in the first hardware queue to the Ethernet protocol stack, i.e. to deliver the messages of the first hardware queue to the Ethernet protocol stack bound to that queue for processing.
In one possible implementation, the network port controller contains a second hardware queue that corresponds to the data plane protocol stack, and the network port controller is specifically configured to place the received second message in the second hardware queue after determining that the second message is a data plane message. As above, the network port controller places each message in the queue bound to the corresponding protocol stack, so that the same network port can support the transmission and reception of messages for multiple protocol stacks. For example, the network port controller may place the data plane message in the second hardware queue and forward the messages in the second hardware queue to the data plane protocol stack bound to that queue for processing.
Illustratively, the network port controller may contain a plurality of hardware queues corresponding to a plurality of data plane protocol stacks.
In one possible implementation, the network port controller is specifically configured to output at least one message in the second hardware queue to the first memory, the at least one message including the second message. The network port controller can thus transmit and receive messages in batches, i.e. it can hand several queued messages to the data plane protocol stack together, reducing the per-message cost of traversing the protocol stack. The network port controller in this embodiment also achieves zero-copy of the data: it outputs the at least one message directly to the memory, so the application can read the data from the memory without the overhead of a data copy.
In one possible implementation, the network port controller is further configured to write, into a second memory, the location information of each of the at least one message in the first memory, and to report an interrupt to the data plane protocol stack. The data plane protocol stack is specifically configured to, in response to the received interrupt, obtain from the second memory the location information of each of the at least one message, read the at least one message in the first memory based on that location information, and determine the location information of the specified field of each of the at least one message. Embodiments of this application can therefore implement an interrupt pass-through mechanism in which the interrupt is delivered directly to the user-mode thread in a single step, avoiding the performance loss caused by non-deterministic interrupt-scheduling latency.
For example, the data plane protocol stack may process one message at a time, i.e. it sends the location information of the specified field of each message to the application one by one, and the application reads from the memory each time it receives the location information of one specified field.
Alternatively, the data plane protocol stack may process several messages at a time, i.e. it sends the location information of the specified fields of the several messages to the application together, and the application can read the specified fields of those messages from the memory in one pass, further reducing system overhead.
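The batch hand-off described above can be pictured with a small sketch. The record layout and function names below are hypothetical; the application only specifies that the data plane protocol stack passes the location information of each message's specified field (for example a start address and length, per the examples above) to the application, which then reads the field directly from the first memory.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical per-message record: where the specified (payload/data) field
     * of one message lies inside the shared first memory (the MBUF memory). */
    struct field_loc {
        size_t offset;   /* start of the specified field, relative to the mapping */
        size_t length;   /* length of the specified field in bytes */
    };

    /* Application-side consumption of one batch: the application reads each
     * specified field in place, so the message data is never copied out of the
     * shared memory (zero-copy). */
    static void app_consume_batch(const uint8_t *mbuf_base,
                                  const struct field_loc *locs, int n,
                                  void (*handle)(const uint8_t *data, size_t len))
    {
        for (int i = 0; i < n; i++)
            handle(mbuf_base + locs[i].offset, locs[i].length);
    }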
In a second aspect, an embodiment of the present application provides a communication method supporting multiple protocol stacks. The method is applied to a communication system supporting multiple protocol stacks, wherein the communication system comprises a network port controller, an Ethernet protocol stack and a data plane protocol stack; the network port controller determines that the received first message is a management plane message; the network port controller outputs the first message to the Ethernet protocol stack; the Ethernet protocol stack responds to the received first message and outputs the first message to the first application; the network port controller determines that the received second message is a data plane message; the network port controller stores the second message into the first memory; the data plane protocol stack analyzes the second message in the first memory to obtain the position information of the appointed field of the second message in the first memory; the data plane protocol stack outputs the position information of the specified field to the second application, so that the second application obtains the specified field from the first memory according to the position information of the specified field.
In one possible implementation manner, the network port controller includes a correspondence between a feature field and a message type, and determines that the received first message is a management plane message, including: based on the corresponding relation between the characteristic field and the message type, determining the message type corresponding to the characteristic field of the first message as a management plane message.
In one possible implementation manner, the network port controller includes a correspondence between a feature field and a message type, and determines that the received second message is a data plane message, including: based on the corresponding relation between the characteristic field and the message type, determining the message type corresponding to the characteristic field of the second message as a data plane message.
In one possible implementation, the network port controller contains a first hardware queue corresponding to the management plane protocol stack, and after the network port controller determines that the received first message is a management plane message, the method further includes: placing the received first message in the first hardware queue.
In one possible implementation, the network port controller outputting the first message to the Ethernet protocol stack includes: outputting the first message in the first hardware queue to the Ethernet protocol stack.
In one possible implementation, the network port controller contains a second hardware queue corresponding to the data plane protocol stack, and after the network port controller determines that the received second message is a data plane message, the method further includes: placing the received second message in the second hardware queue.
In one possible implementation manner, the network port controller stores the second message into the first memory, including: outputting at least one message in the second hardware queue to the first memory, wherein the at least one message comprises the second message.
In one possible implementation, after the network port controller saves the second message to the first memory, the method further includes: the network port controller writes, into the second memory, the location information of each of the at least one message in the first memory; and the network port controller reports an interrupt to the data plane protocol stack. The data plane protocol stack parsing the second message in the first memory to obtain the location information of the specified field of the second message in the first memory includes: the data plane protocol stack, in response to the received interrupt, acquires the location information of each of the at least one message from the second memory; and the data plane protocol stack reads the at least one message in the first memory based on the acquired location information and determines the location information of the specified field of each of the at least one message.
Any implementation manner of the second aspect and the second aspect corresponds to any implementation manner of the first aspect and the first aspect, respectively. The technical effects corresponding to the second aspect and any implementation manner of the second aspect may be referred to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which are not described herein.
In a third aspect, an embodiment of the present application provides a chip. The chip comprises at least one processor and a network port controller. The portal controller and the processor may implement the method of the first aspect and any implementation manner of the first aspect.
Any implementation manner of the third aspect and any implementation manner of the third aspect corresponds to any implementation manner of the first aspect and any implementation manner of the first aspect, respectively. The technical effects corresponding to the third aspect and any implementation manner of the third aspect may be referred to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which are not described herein.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium. The computer readable storage medium stores a computer program which, when run on a computer or processor, causes the computer or processor to perform the method of the first aspect or any one of the possible implementations of the first aspect.
Any implementation manner of the fourth aspect and any implementation manner of the fourth aspect corresponds to any implementation manner of the first aspect and any implementation manner of the first aspect, respectively. Technical effects corresponding to any implementation manner of the fourth aspect may be referred to the technical effects corresponding to any implementation manner of the first aspect, and are not described herein.
In a fifth aspect, embodiments of the present application provide a computer program product. The computer program product comprises a software program which, when executed by a computer or processor, causes the method of the first aspect or any one of the possible implementations of the first aspect to be performed.
Any implementation manner of the fifth aspect and any implementation manner of the fifth aspect corresponds to any implementation manner of the first aspect and any implementation manner of the first aspect, respectively. Technical effects corresponding to any implementation manner of the fifth aspect may be referred to the technical effects corresponding to any implementation manner of the first aspect, and are not described herein.
Drawings
FIG. 1 is a schematic diagram of a host structure according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an exemplary data plane software stack;
FIG. 3 is a schematic diagram of an exemplary initialization flow;
FIG. 4a is a schematic diagram illustrating processing of a received message by a network port controller;
FIG. 4b is a schematic diagram illustrating processing of a received message by a network port controller;
FIG. 4c is a schematic diagram illustrating processing of a received message by a network port controller;
FIG. 4d is a schematic diagram illustrating the interaction flow of the modules in the receiving direction;
FIG. 4e is a schematic diagram illustrating processing of a received message by a data plane software stack;
FIG. 4f is a schematic diagram illustrating processing of a received message by a data plane software stack;
FIG. 4g is a schematic diagram illustrating processing of a received message by a data plane software stack;
FIG. 4h is a schematic diagram illustrating processing of a received message by an application;
FIG. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone.
The terms first and second and the like in the description and in the claims of embodiments of the application, are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
The communication method in the embodiments of this application can be applied to an autonomous driving system. By way of example, the autonomous driving system may include a host, external devices, and the like. The communication method in the embodiments of this application can also be applied to other scenarios that have requirements on both compatibility and timeliness of data processing; this application does not limit this.
Fig. 1 is a schematic diagram of a host structure according to an embodiment of the present application. Referring to fig. 1, exemplary hosts include, but are not limited to: an application layer, a kernel layer, a management plane software stack running on the kernel, at least one data plane software stack (e.g., data plane software stacks 1-n) running on the kernel, a physical network port, etc.
Illustratively, the application layer optionally includes one or more application programs, for example: task scheduling, primary-backup communication, MDC (Mobile Data Center) perception algorithms, an authentication module (which may also be referred to as an authentication application), and one or more algorithm applications. In the embodiments of this application, the algorithm applications may also be referred to as data plane applications, such as APP0 to APPn shown in FIG. 1, and may include, but are not limited to: fusion algorithm applications, perception algorithm applications, regulation algorithm applications, and the like.
Illustratively, in embodiments of the present application, the software stacks may be divided into a management plane software stack and a data plane software stack (e.g., data plane protocol stacks 1-n shown in FIG. 1). Illustratively, the management plane software stack and the data plane software stack are isolated from each other and are not affected by each other, and can run concurrently.
Illustratively, the management plane software stack may be used to process management plane communication data. For example, the physical network port receives management plane data (may also be referred to as a management plane packet, or management plane information) input by an external device (e.g., radar), which is not limited by the present application. The physical network port may output the management plane data to the management plane protocol stack. The management plane protocol stack carries out corresponding processing on the management plane data and outputs the processed data to the application layer.
Illustratively, the management plane software stack is optionally run on a Linux kernel (which may also be referred to as an operating system kernel) of the kernel layer. The management plane software stack includes, but is not limited to: an Ethernet protocol stack, an Ethernet port driver and an Ethernet driver framework.
Illustratively, the Ethernet protocol stack, the Ethernet port driver, and the Ethernet driver framework are in kernel mode. Illustratively, the management plane software stack is compatible with the POSIX standard user programming interface.
Illustratively, the process flow of the management plane data by the management plane software stack may refer to the description in the existing protocol standard. The present application is not described in detail.
Still referring to FIG. 1, the host may include one or more data plane software stacks. Illustratively, each data plane software stack may be bound to one or more applications in the application layer. In the embodiments of this application, the case where each data plane software stack is bound to one application in the application layer is taken as an example.
Illustratively, the UIO (User Input Output, user-mode input/output) protocol stack in the data plane software stack may be used to parse a data plane message stored in the memory, obtaining the address and length, in that memory, of the payload field of the message (which may also be referred to as the valid data field), and to send the address and length corresponding to the payload to the application layer. The application layer can then read the payload field of the data plane message directly from the memory while ignoring the other parts of the message (e.g., the header). That is, in the embodiments of this application, the UIO protocol stack provides a simplified processing path for data plane messages: decapsulation simply removes the header of the data plane message and presents the application layer with the header-free payload field.
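A minimal sketch of this simplified decapsulation follows, assuming for illustration that the data plane message is a plain Ethernet II frame whose payload begins right after the 14-byte Ethernet header; the application does not fix the exact frame layout, and the CRC handling here is an assumption.

    #include <stddef.h>
    #include <stdint.h>

    #define ETH_HDR_LEN 14u   /* dst MAC (6) + src MAC (6) + EtherType (2) */
    #define ETH_FCS_LEN 4u    /* trailing CRC, if the hardware leaves it in place */

    /* Given where a raw data plane message sits in the MBUF memory, compute where
     * its payload field sits. Returns 0 on success, -1 if the frame is too short. */
    static int uio_parse_payload(size_t msg_off, size_t msg_len, int has_fcs,
                                 size_t *payload_off, size_t *payload_len)
    {
        size_t tail = has_fcs ? ETH_FCS_LEN : 0;

        if (msg_len < ETH_HDR_LEN + tail)
            return -1;

        *payload_off = msg_off + ETH_HDR_LEN;        /* skip the header ...        */
        *payload_len = msg_len - ETH_HDR_LEN - tail; /* ... and any trailing CRC   */
        return 0;
    }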
Illustratively, the UIO network port driver may include a module running on the Linux kernel of the kernel layer (e.g., uio_k_drv below) and a module running on the AOS (Automotive Operating System) kernel of the kernel layer (e.g., uio_u_drv below). The UIO network port driver is used to abstract the underlying hardware (e.g., the network port controller) so that applications can access (or invoke) it. The UIO network port driver is also responsible for transmitting and receiving messages, that is, receiving messages from external devices and sending messages from upper-layer applications to external devices.
Illustratively, the UIO driver framework may be used to provide functions such as a low-level API (Application Programming Interface) compatible with both the AOS kernel and the Linux kernel.
The components included in the application layer, the kernel layer, and the software stack shown in fig. 1 do not constitute a specific limitation on the device. In other embodiments of the application, the apparatus may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components.
Fig. 2 is a schematic diagram of the structure of an exemplary software stack. Referring to fig. 2, for example, in an embodiment of the present application, a physical network port may include a plurality of hardware queues, which may also be referred to as message queues. Illustratively, the plurality of queues may be divided into a management plane queue and a data plane queue. The management plane queue is used for caching management plane data. The data plane queue is used for caching data plane data. Illustratively, the data plane queues optionally include at least one data plane sub-queue. Illustratively, each data plane sub-queue may bind a corresponding data plane software stack.
For example, as shown in FIG. 2, the physical network port includes queues 0 through n. Illustratively, in the embodiments of this application, each queue is bound to one software stack. For example, queue 0 corresponds to the management plane software stack, and queue 0 is optionally a management plane queue; that is, the messages in queue 0 are handled by the management plane software stack. Illustratively, queue 1 corresponds to data plane software stack 1, queue n corresponds to data plane software stack n, and queues 1 through n are optionally data plane queues. Correspondingly, the messages in queue 1 are transmitted and received by data plane software stack 1, and the messages in queue n are transmitted and received by data plane software stack n.
Illustratively, in the embodiments of this application, the management plane software stack may be bound to a plurality of APPs (applications). Although FIG. 2 only shows the management plane software stack corresponding to APP0, it may in fact correspond to APP0 and several other APPs, outputting the messages in queue 0 to the corresponding APPs and sending the messages of those APPs to other devices.
Illustratively, in an embodiment of the present application, a single data plane software stack corresponds to one APP. For example, referring to fig. 2, data plane software stack 1 corresponds to APP1, and data plane software stack n corresponds to APPn. That is, the data plane software stack 1 may perform a transmitting/receiving process on the packet of APP1, and the data plane software stack n may perform a transmitting/receiving process on the packet of APPn.
Still referring to FIG. 2, a data plane software stack 1 is illustratively taken. The data plane software stack 1 includes, but is not limited to: the device comprises a UIO protocol stack, a UIO network port driver and a UIO driving frame.
In an exemplary embodiment of this application, the UIO network port driver may be divided into two parts: a user-mode part and a kernel-mode part. For convenience of description, the user-mode part is hereinafter abbreviated as UIO_U_DRV and the kernel-mode part as UIO_K_DRV.
Illustratively, the UIO network port driver runs on the AOS kernel of the kernel layer (the kernel layer is not shown separately in FIG. 2 and the following figures, so as to show the interactions between the physical network port and the management plane and data plane software stacks more clearly). Illustratively, UIO_K_DRV optionally comprises two components: uio_k_drv, which runs on the Linux kernel of the kernel layer, and uio_ak_drv, which runs in the AOS kernel.
Referring to fig. 2, fig. 3 is an exemplary initialization flow diagram. By way of example, the initialization procedure shown in fig. 3 can also be understood as a preparation procedure. Referring to fig. 3, the method specifically includes:
S301, the UIO_K_DRV creates a UIO device and creates the shared memory.
In an exemplary embodiment of this application, the UIO network port driver may further include nic_drv, a common network port driver module running inside the Linux kernel. Illustratively, nic_drv holds an initialization function that it runs at startup; the initialization may include memory allocation, data structure initialization, and the like. When nic_drv runs its initialization function, it calls uio_k_drv to perform initialization.
Illustratively, in response to the call from nic_drv, uio_k_drv runs its own initialization function, in which it creates the UIO device and the shared memory.
Illustratively, the UIO device is optionally a hardware device, for example the network port controller. It can be understood that, in the embodiments of this application, uio_k_drv virtualizes the underlying hardware (e.g., the network port controller) as the UIO device, so that the underlying hardware is exposed to the user-mode UIO network port driver. The user-mode UIO network port driver can then operate the underlying hardware, for example opening the UIO device through the related instructions.
Illustratively, uio_k_drv optionally applies to the MBUF (Memory Buffer) module for MBUF memory.
Illustratively, the MBUF module provides services such as allocation and reclamation of a shared memory pool for application programs and the UIO communication stack, provides memory blocks with physically contiguous addresses for the UIO network port driver, and provides the UIO driver with an API for converting virtual addresses into physical addresses. The specific role of MBUF is described in prior art embodiments and is not detailed here.
Illustratively, the MBUF memory is used to store data. In the subsequent flow, the network port controller can write data into the MBUF memory and the application program can read that data directly from it; likewise, the application program's data can be written into the MBUF memory and the network port controller can read it directly from there and send it. This achieves zero-copy of the data without CPU (Central Processing Unit) involvement, reducing CPU overhead.
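The MBUF module itself is not detailed in this application, so the following is only a rough sketch of the kind of interface it could expose (all names and the layout are hypothetical): a pool of fixed-size blocks carved out of one physically contiguous region, with allocate/free and a virtual-to-physical conversion helper for the UIO network port driver.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical MBUF pool shared by the network port controller and the
     * applications, so that received data never has to be copied. */
    struct mbuf_pool {
        uint8_t  *virt_base;   /* virtual base of the contiguous region     */
        uint64_t  phys_base;   /* physical base programmed into the device  */
        size_t    block_size;  /* size of one MBUF block                    */
        size_t    nr_blocks;   /* number of blocks in the pool              */
        uint32_t *free_stack;  /* indices of free blocks                    */
        size_t    free_top;    /* number of entries currently on the stack  */
    };

    /* Allocate one block; returns its virtual address or NULL if exhausted. */
    static void *mbuf_alloc(struct mbuf_pool *p)
    {
        if (p->free_top == 0)
            return NULL;
        return p->virt_base + (size_t)p->free_stack[--p->free_top] * p->block_size;
    }

    /* Return a block to the pool. */
    static void mbuf_free(struct mbuf_pool *p, void *blk)
    {
        size_t idx = ((uint8_t *)blk - p->virt_base) / p->block_size;
        p->free_stack[p->free_top++] = (uint32_t)idx;
    }

    /* Virtual-to-physical conversion helper handed to the UIO driver (a simple
     * linear offset here; on the hardware side the described system relies on
     * the SMMU for translation, as explained later). */
    static uint64_t mbuf_virt_to_phys(const struct mbuf_pool *p, const void *v)
    {
        return p->phys_base + (uint64_t)((const uint8_t *)v - p->virt_base);
    }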
Illustratively, uio_k_drv may also apply to the kernel for, and create, the shared memory. Optionally, the shared memory includes, but is not limited to: communication memory between user mode and kernel mode, hardware register memory, BD (buffer descriptor) memory, and the like.
Illustratively, uio_k_drv may call a UIO device registration function provided by the operating system kernel so that the UIO device is registered with the kernel. It can be understood that uio_k_drv returns the identification information of the created UIO device (for example, its device name) to the user-mode UIO network port driver. The user-mode UIO network port driver can then perform related operations on the UIO device, such as opening it, according to that identification information.
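S301 maps naturally onto the standard Linux UIO framework, and the sketch below assumes that framework (uio_register_device and struct uio_info) purely for illustration; the memory sizes, names, and the use of dma_alloc_coherent are invented, since the text only says that UIO_K_DRV calls a registration function provided by the kernel and creates the shared memories.

    /* Kernel-module sketch of S301, assuming the standard Linux UIO framework. */
    #include <linux/module.h>
    #include <linux/uio_driver.h>
    #include <linux/dma-mapping.h>

    #define BD_MEM_SIZE   (64 * 1024)
    #define COMM_MEM_SIZE (4 * 1024)

    static struct uio_info nic_uio_info;

    static int nic_uio_setup(struct device *dev, phys_addr_t reg_base, size_t reg_size)
    {
        dma_addr_t bd_phys, comm_phys;
        void *bd_virt, *comm_virt;

        /* Shared memories created by UIO_K_DRV: BD memory and the
         * user-mode/kernel-mode communication memory. */
        bd_virt = dma_alloc_coherent(dev, BD_MEM_SIZE, &bd_phys, GFP_KERNEL);
        comm_virt = dma_alloc_coherent(dev, COMM_MEM_SIZE, &comm_phys, GFP_KERNEL);
        if (!bd_virt || !comm_virt)
            return -ENOMEM;

        nic_uio_info.name    = "uio_nic";     /* device name returned to user mode */
        nic_uio_info.version = "0.1";
        nic_uio_info.irq     = UIO_IRQ_NONE;  /* interrupts handled elsewhere here */

        /* Expose the hardware register window and the shared memories so that
         * UIO_U_DRV can mmap() them later (S305). The DMA address is treated as
         * the physical address for simplicity. */
        nic_uio_info.mem[0].name    = "regs";
        nic_uio_info.mem[0].memtype = UIO_MEM_PHYS;
        nic_uio_info.mem[0].addr    = reg_base;
        nic_uio_info.mem[0].size    = reg_size;

        nic_uio_info.mem[1].name    = "bd";
        nic_uio_info.mem[1].memtype = UIO_MEM_PHYS;
        nic_uio_info.mem[1].addr    = bd_phys;
        nic_uio_info.mem[1].size    = BD_MEM_SIZE;

        nic_uio_info.mem[2].name    = "comm";
        nic_uio_info.mem[2].memtype = UIO_MEM_PHYS;
        nic_uio_info.mem[2].addr    = comm_phys;
        nic_uio_info.mem[2].size    = COMM_MEM_SIZE;

        /* Register the UIO device with the operating system kernel. */
        return uio_register_device(dev, &nic_uio_info);
    }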
S302, the application program opens the UIO device.
Illustratively, the user-mode UIO network port driver, i.e. UIO_U_DRV, may provide API interface functions to the application program, through which the application program can perform the corresponding operations on the UIO device. For example, the application program may issue an open instruction through an API interface function provided by UIO_U_DRV; the open instruction optionally carries the corresponding UIO device name and instructs UIO_U_DRV to open the UIO device with that name.
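On the user side, S302 then reduces to something like the following, assuming the UIO device created above appears as a character device node such as /dev/uio0 (the device path and the thin wrapper are assumptions):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* User-mode sketch of S302: the application asks UIO_U_DRV to open the UIO
     * device by name, which here is assumed to resolve to a device node such as
     * /dev/uio0 created by the kernel UIO framework. */
    static int uio_open(const char *dev_path)
    {
        int fd = open(dev_path, O_RDWR);
        if (fd < 0)
            perror("open UIO device");
        return fd;   /* UIO_U_DRV would authenticate the caller before use (S303) */
    }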
S303, the UIO_U_DRV authenticates the application program.
Illustratively, the uio_u_drv receives an operation instruction, such as an open instruction, issued by the application. To ensure device security, the uio_u_drv optionally authenticates the application to detect whether the application has permission to operate on the UIO device.
For example, an authentication module, which may also be referred to as a permission control module, may be included in the autopilot operating system. Alternatively, the authentication module may be located in the application layer shown in fig. 1. Illustratively, the uio_u_drv may invoke an authentication module to authenticate the application to check whether the application is legitimate. Optionally, if the application program is illegal, the uio_u_drv denies the application program access to the UIO device based on the result returned by the authentication module. Optionally, if the application is legal, the uio_u_drv allows the application to access the UIO device based on the result returned by the authentication module.
S304, the UIO_U_DRV binds the process with the SMMU module.
Illustratively, the SMMU (System Memory Management Unit) is a hardware module within the SoC dedicated to translation between virtual addresses and physical addresses. It can be understood that this module provides the underlying hardware with the ability to translate between user-mode virtual addresses and hardware physical addresses. For example, when the network port controller needs to read data from the MBUF memory, it obtains the virtual address of that data in the MBUF memory and can call the SMMU module, which translates the user-mode virtual address into the corresponding physical address. The SMMU module then places the physical address on the bus so that the storage device fetches the corresponding data based on the physical address and transmits it to the network port controller.
The binding procedure of the process and the SMMU module is briefly described below. Illustratively, the uio_u_drv calls an interface provided by the SMMU module in response to a received operation instruction of the application program to open the UIO device, and outputs a process ID of the application program to the SMMU module.
Illustratively, the SMMU module returns to uio_u_drv the SSID (substream identifier) of the process after it has been bound to the SMMU module. The SMMU module can be understood as implementing address translation by looking up the page table of a memory address, and the SSID identifies the page table to which a virtual address belongs. For example, each application program has an address page table in the operating system; the SMMU module can find the page table corresponding to the application process based on the SSID and retrieve the correspondence between virtual addresses and physical addresses from that page table.
Illustratively, uio_u_drv receives the SSID returned by the SMMU module and assigns it to the network port controller, so that the network port controller can call the SMMU module to perform address translation based on the SSID. It should be noted that the embodiments of this application only briefly describe the function of the SMMU module; for details, reference may be made to implementations of the SMMU module in prior art embodiments, which are not described here.
S305, UIO_U_DRV maps the shared memory in the kernel mode to the user mode space.
Illustratively, as described above, the kernel-mode uio_k_drv creates the MBUF memory and multiple shared memories in S301. It can be understood that each memory has a corresponding kernel-mode virtual address, and the kernel-mode UIO network port driver can read and write each memory based on its virtual address.
In the embodiments of this application, UIO_U_DRV can map the kernel-mode memories to user mode so that the user-mode UIO driver can read and write each memory. Specifically, uio_u_drv may call the mmap function provided by the operating system to map the memories created by uio_k_drv into user mode. It can be understood that each memory then has a corresponding user-mode virtual address, and the user-mode UIO driver can access these memories based on those virtual addresses.
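S305 follows the usual mmap() pattern for UIO devices. In the standard Linux UIO framework, mapping region i is selected by passing i times the page size as the mmap offset; the region indices below match the illustrative registration sketch earlier and are assumptions.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map UIO memory region `idx` (registers, BD memory, communication memory, ...)
     * into the user-mode address space. */
    static void *uio_map_region(int fd, int idx, size_t size)
    {
        long page = sysconf(_SC_PAGESIZE);
        void *va = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                        fd, (off_t)idx * page);
        if (va == MAP_FAILED) {
            perror("mmap UIO region");
            return NULL;
        }
        return va;   /* user-mode virtual address of the shared memory */
    }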
Illustratively, the communication memory between user mode and kernel mode may be used to store the number of transmit/receive queues, the hardware interrupt numbers, the number of threads, the base address of the BD (buffer descriptor) memory, and so on. Based on the mapping, uio_u_drv can obtain the corresponding information, such as a hardware interrupt number, from this communication memory. Each queue corresponds to one hardware interrupt number; for example, as shown in FIG. 2, queue 1 corresponds to hardware interrupt number 1 and queue n corresponds to hardware interrupt number n. The hardware interrupt numbers are generated by the system, which will not be repeated below.
S306, the UIO_U_DRV performs interrupt registration with the UIO_AK_DRV.
For example, after uio_u_drv obtains information such as the hardware interrupt number from the communication memory between user mode and kernel mode, it may invoke an interrupt registration function provided by uio_ak_drv to perform interrupt registration.
For example, uio_u_drv may output the acquired information, such as the hardware interrupt number, to uio_ak_drv. Illustratively, in response to the received hardware interrupt number and other information, uio_ak_drv applies to the operating system kernel for an interrupt and registers an interrupt handling function. For example, the operating system may assign a corresponding software interrupt number to the hardware interrupt number, and uio_ak_drv may register the interrupt handling based on that interrupt number. In the subsequent message transmission and reception flow, after a message has been transmitted or received, the network port controller can report the hardware interrupt number of the queue to the operating system, and the operating system can determine the corresponding software interrupt number and the interrupt event corresponding to that software interrupt number, so that the UIO device (e.g., the network port controller) interrupt can be perceived. It should be noted that the software interrupt number is allocated to ensure system security: the hardware interrupt number is used only for hardware transmission, while processing between the kernels is handled based on the software interrupt number.
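The application does not specify the AOS kernel's interrupt API, so the sketch below uses the familiar Linux request_irq() call purely as a stand-in to illustrate S306: a handler is registered against the queue's interrupt number, and when it fires it only signals the event (via the event ID) so that the bound data plane thread can later be woken. The event-notification helper is hypothetical.

    #include <linux/interrupt.h>
    #include <linux/errno.h>

    struct queue_irq_ctx {
        int event_id;     /* event ID obtained during S307 */
    };

    /* Hypothetical: notifies the event scheduling module, which wakes the thread. */
    extern void event_sched_notify(int event_id);

    static irqreturn_t queue_rx_tx_isr(int irq, void *data)
    {
        struct queue_irq_ctx *ctx = data;

        event_sched_notify(ctx->event_id);   /* pass the interrupt on toward user mode */
        return IRQ_HANDLED;
    }

    static int register_queue_interrupt(unsigned int irq, struct queue_irq_ctx *ctx)
    {
        /* `irq` corresponds to the hardware interrupt number of one hardware
         * queue (queue 1 ... queue n). */
        return request_irq(irq, queue_rx_tx_isr, 0, "uio_nic_queue", ctx);
    }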
S307, UIO_U_DRV starts the thread and the UIO device.
Illustratively, the uio_u_drv starts a data plane thread. The data plane thread may be configured to wait for the arrival of a transmit-receive packet event by the portal hardware to perform a transmit-receive packet related process flow in the thread function.
Illustratively, uio_u_drv starts a management plane thread. The management plane thread calls the poll function to block while waiting for various management plane event messages from the operating system, such as network port link down, link up, or network port failure events. For example, the management plane thread may apply to the event scheduling module in uio_ak_drv for an event ID and an interrupt number, and may pass the event ID and interrupt number to uio_u_drv. Accordingly, uio_u_drv may pass the event ID and the interrupt number to the operating system kernel.
For example, the data plane thread may output its event ID to the event scheduling module, which may maintain the correspondence between event IDs and the thread IDs of the data plane threads.
Illustratively, uio_u_drv enables the UIO device (i.e., the network port controller). For example, uio_u_drv may write to the queue interrupt-enable register of the network port controller to enable the transmit/receive hardware interrupt of the queue (e.g., queue 1 in FIG. 2) that corresponds to the protocol stack to which uio_u_drv belongs. In response to this operation by UIO_U_DRV, the network port controller can start transmitting and receiving messages.
Illustratively, after the preparation flow in FIG. 3 is finished, the data plane thread and the management plane thread are suspended and enter a sleep state, waiting for transmit/receive events to arrive.
With reference to FIG. 2 and FIG. 3, FIG. 4a is a schematic diagram illustrating the processing of a received message by the network port controller. Referring to FIG. 4a, as an example, external device 1 (e.g., a radar) sends message 1 to the host, where message 1 is a management plane message. The dashed lines in FIG. 4a schematically show the transmission path of management plane messages. Illustratively, the network port controller receives message 1. The network port controller may be preconfigured with a hardware flow table which, for example, records the correspondence between address information and communication types (including data plane communication and management plane communication). For example, the network port controller may look up the hardware flow table based on the address information carried in the message, such as the five-tuple, and obtain the communication type corresponding to the matched five-tuple, so as to determine whether the message is a data plane message or a management plane message.
For example, if the network port controller determines that message 1 is a management plane message, it may place the message in the queue corresponding to the management plane protocol stack, i.e., queue 0 in FIG. 2. The management plane protocol stack may process the messages in queue 0 accordingly and output the data to the corresponding application, such as APP0.
With reference to FIG. 2 and FIG. 3, FIG. 4b is a schematic diagram illustrating the processing of a received message by the network port controller. Referring to FIG. 4b, as an example, external device 1 (e.g., a radar) sends message 2 to the host, where message 2 is a data plane message. The dashed lines in FIG. 4b schematically show the transmission path of data plane messages.
Illustratively, the network port controller determines that message 2 is a data plane message. The network port controller may place the message in the corresponding queue based on the binding relationship between queues, UIO protocol stacks, and application programs; for example, based on the correspondence between external device 1 and queue 1, the network port controller may place the message from external device 1 in queue 1 after detecting that it is a data plane message.
Still referring to FIG. 4b, a data plane message (e.g., message 2) in queue 1 is transmitted and received by data plane software stack 1, so that APP1 obtains message 2.
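Putting FIG. 4a and FIG. 4b together, the receive-side dispatch performed by the network port controller can be summarized in a short sketch. The queue numbering matches FIG. 2, while the helper names (including the flow-table lookup sketched earlier) are assumptions.

    #include <stdint.h>

    enum msg_type { MSG_MGMT_PLANE, MSG_DATA_PLANE };

    /* Hypothetical helpers: flow-table lookup (see the earlier classification
     * sketch) and a per-external-device mapping to a data plane queue (1..n). */
    extern enum msg_type classify_by_flow_table(const void *pkt, unsigned len);
    extern int data_plane_queue_for(const void *pkt);
    extern void enqueue(int queue_idx, const void *pkt, unsigned len);

    /* Receive-side dispatch: management plane messages go to queue 0 (bound to
     * the Ethernet protocol stack); data plane messages go to the queue bound
     * to the matching data plane software stack. */
    static void on_frame_received(const void *pkt, unsigned len)
    {
        if (classify_by_flow_table(pkt, len) == MSG_MGMT_PLANE)
            enqueue(0, pkt, len);                           /* FIG. 4a: message 1 */
        else
            enqueue(data_plane_queue_for(pkt), pkt, len);   /* FIG. 4b: message 2 */
    }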
The processing of data plane messages is described in detail below with reference to specific embodiments. Referring to FIG. 4c, as described in the steps of FIG. 3, uio_k_drv creates the MBUF memory and one or more shared memories (e.g., the BD memory in FIG. 4c) during the preparation phase. After the network port controller has received the complete message 2, it can store message 2 in the MBUF memory. For example, the network port controller may read the virtual address of the MBUF memory from the BD memory based on the kernel-mode virtual address of the BD memory. For example, the network port controller may output message 2, the SSID (a concept detailed above), and the virtual address of the MBUF memory to the SMMU module. Illustratively, as described above, the SMMU module may locate the page table identified by the SSID and retrieve the corresponding hardware address for the virtual address from that page table. For example, the SMMU module may output the data and the hardware address to the bus, which transmits the data to a memory device such as DDR (Double Data Rate) memory. The DDR memory writes message 2 into the MBUF memory location indicated by the hardware address.
Illustratively, after the network port controller completes the write of message 2, it updates the information about the data recorded in the BD memory. For example, the network port controller moves the write pointer in the BD memory to indicate the number of messages it has currently written into the MBUF memory; one BD indicates one message stored in that memory. Correspondingly, in the subsequent flow, uio_u_drv can determine, from the movement of the pointer in the BD memory, how many messages the network port controller has written into the MBUF memory, and can also obtain, based on the write pointer, the information corresponding to each message, including but not limited to the start address of the message, the length of the message, and so on.
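The role of the BD memory in FIG. 4c can be pictured as a simple descriptor ring. The descriptor layout and ring-update logic below are assumptions consistent with the description: one BD per received message, recording where the message was written in the MBUF memory and its length, with a write pointer advanced by the network port controller.

    #include <stdint.h>

    /* Hypothetical buffer descriptor (BD): one per message stored in MBUF memory. */
    struct bd {
        uint64_t msg_addr;   /* start address of the message in MBUF memory */
        uint32_t msg_len;    /* length of the message in bytes              */
        uint32_t flags;      /* e.g. "descriptor valid"                     */
    };

    /* Hypothetical BD ring shared between the network port controller (producer)
     * and the data plane thread (consumer). */
    struct bd_ring {
        struct bd         *bds;       /* BD memory                          */
        uint32_t           size;      /* number of descriptors              */
        volatile uint32_t  write_ptr; /* advanced by the controller         */
        volatile uint32_t  read_ptr;  /* advanced by the data plane thread  */
    };

    /* Producer side: after DMA-ing one message into MBUF memory, the controller
     * records it in the next BD and moves the write pointer. */
    static void bd_ring_produce(struct bd_ring *r, uint64_t msg_addr, uint32_t msg_len)
    {
        uint32_t w = r->write_ptr;

        r->bds[w].msg_addr = msg_addr;
        r->bds[w].msg_len  = msg_len;
        r->bds[w].flags    = 1u;                 /* mark valid              */
        r->write_ptr       = (w + 1u) % r->size; /* publish to the consumer */
    }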
With reference to fig. 4c, fig. 4d is a schematic diagram illustrating a message receiving process flow. Referring to fig. 4d, the method specifically includes:
S401, the network port controller reports an interrupt to the operating system kernel.
For example, as shown in FIG. 4c, after the network port controller has written at least one message into the memory, it may generate a hardware interrupt to trigger the operating system kernel to execute the subsequent steps. As described above, the network port controller acquires the hardware interrupt number corresponding to each queue during the preparation phase. When the network port controller completes the write operation for message 2 of queue 1, it may report hardware interrupt number 1, which corresponds to queue 1, to the operating system kernel (e.g., the Linux kernel in the kernel layer).
Optionally, in the embodiments of this application, the network port controller may generate one interrupt only after receiving several messages, for example two or more, and then trigger the other modules to process those messages. The UIO protocol stack can thus process several messages at a time, which reduces the number of interrupts and, in turn, the per-interrupt processing overhead.
S402, the operating system kernel outputs a software interrupt number to the UIO_K_DRV.
Illustratively, the operating system kernel responds to the received hardware interrupt number sent by the network port controller. As described above, the operating system kernel maintains the correspondence between hardware interrupt numbers and software interrupt numbers; it can therefore obtain the corresponding software interrupt number based on the received hardware interrupt number. Illustratively, the operating system outputs the software interrupt number to uio_k_drv to invoke the interrupt handling function in uio_k_drv (see the concepts above).
S403, the UIO_K_DRV outputs the event ID to the event scheduling module.
Illustratively, the uio_k_drv, in response to the received software interrupt number, may obtain an event ID corresponding to the software interrupt number based on the interrupt handling function.
Illustratively, the uio_k_drv outputs an event ID to the event scheduling control module to indicate that there is currently an interrupt event corresponding to the event ID.
S404, the UIO_K_DRV disables the queue interrupt.
For example, uio_k_drv may instruct the operating system kernel to disable the interrupt of the queue while uio_k_drv and the other modules are processing the current interrupt. That is, no further interrupt is generated when new messages arrive in the queue, which prevents the current processing flow from being disturbed by subsequent interrupts while the current interrupt is being handled. It can be understood that processing the messages corresponding to each interrupt carries a certain overhead for the autonomous driving system; if the current processing flow were interrupted by a subsequent interrupt and the receive flow executed repeatedly, the scheduling overhead would increase. Disabling the interrupt therefore effectively reduces scheduling overhead.
S405, the event scheduling module determines the corresponding thread based on the event ID.
Illustratively, as described above, the event scheduling module records the correspondence between event IDs and threads. For example, in the embodiment of the present application, one event ID corresponds to one thread, and a single thread may process multiple queued messages.
Illustratively, the event scheduling module, in response to the received event ID, determines the thread corresponding to the event ID.
S406, the event scheduling module wakes up the data plane thread.
Illustratively, as described above, the data plane threads in the uio_u_drv are in a dormant state after the preparation flow. For example, after determining that the data plane thread needs to be awakened, the event scheduling module may awaken the data plane thread, so that the data plane thread performs packet transceiving processing.
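The event scheduling in S405 and S406 can be pictured with the small user-space sketch below: each queue's /dev/uioX descriptor is associated with an event entry, and when the kernel driver signals an event, the data plane thread registered for that entry is woken through a condition variable. All structure and function names (event_entry, wake_data_plane_thread, event_scheduler_loop) are illustrative assumptions, and the epoll-based design is one possible realization rather than the design mandated by the patent.

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <unistd.h>

struct event_entry {
	int uio_fd;              /* /dev/uioX descriptor for one queue       */
	int event_id;            /* event ID reported by the kernel driver   */
	pthread_mutex_t lock;
	pthread_cond_t cond;     /* the data plane thread sleeps on this     */
	int pending;
};

/* S406: wake the data plane thread registered for this event entry. */
static void wake_data_plane_thread(struct event_entry *e)
{
	pthread_mutex_lock(&e->lock);
	e->pending = 1;
	pthread_cond_signal(&e->cond);
	pthread_mutex_unlock(&e->lock);
}

/* S405: each /dev/uioX fd was added to 'epfd' during the preparation
 * phase with data.ptr pointing at its event_entry, so a ready fd maps
 * directly to the thread that must be woken. */
void event_scheduler_loop(int epfd)
{
	struct epoll_event evs[16];

	for (;;) {
		int ready = epoll_wait(epfd, evs, 16, -1);
		for (int i = 0; i < ready; i++) {
			struct event_entry *e = evs[i].data.ptr;
			uint32_t cnt;
			/* Reading /dev/uioX consumes the pending interrupt count. */
			if (read(e->uio_fd, &cnt, sizeof(cnt)) == (ssize_t)sizeof(cnt))
				wake_data_plane_thread(e);
		}
	}
}
```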
S407, the data plane thread performs packet receiving and sending processing.
Illustratively, the data plane thread may read the interrupt status register of the queue to determine whether it is a Transmit (TX) interrupt or a Receive (RX) interrupt.
For example, if the data plane thread determines that the interrupt is a receive interrupt (the transmit interrupt is described in the embodiments below), it may be determined that the network port controller has successfully received at least one message, for example, one message or two or more messages. Illustratively, as described above, the network port controller writes the at least one message into the MBUF memory, and indicates the number of received messages, the addresses of the messages in the memory, the lengths of the messages, and the like by moving the write pointer in the BD memory.
To better explain how the data plane software stack processes data plane messages, only the processing of the message 2 by the data plane software stack is described here. In other embodiments, if other messages are stored in the MBUF memory, they are processed in the same manner as the message 2, and details are not repeated.
Referring to fig. 4e, illustratively, as described in S305 above, the uio_u_drv maps the kernel-mode MBUF memory and the shared memory to user mode, that is, the data plane thread in the uio_u_drv can read the information in the MBUF memory and the shared memory based on their virtual addresses in user mode.
Still referring to fig. 4e, illustratively, the data plane thread may read the relevant information indicated by the read pointer (including the address and length of the message 2 in the MBUF memory) by moving the read pointer in the BD memory until the read pointer coincides with the write pointer. It should be noted that for the manner of reading pointers in the BD memory, reference may be made to the related content in the prior art, and details are not repeated in this application.
With continued reference to fig. 4e, the data plane thread outputs the obtained information such as the address and length of the message 2 in the MBUF memory to the UIO protocol stack.
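The read-pointer walk described above can be sketched as follows. The descriptor layout (bd_desc) and the single-producer, single-consumer ring discipline with a power-of-two size are assumptions chosen to mirror common NIC drivers, not details taken from the patent.

```c
#include <stddef.h>
#include <stdint.h>

struct bd_desc {            /* one buffer descriptor in the BD memory        */
	uint64_t mbuf_addr;     /* address of the message in the MBUF memory     */
	uint32_t len;           /* message length                                */
	uint32_t flags;
};

struct bd_ring {
	volatile uint32_t *write_ptr; /* advanced by the network port controller */
	uint32_t read_ptr;            /* advanced by the data plane thread       */
	uint32_t size;                /* number of descriptors (power of two)    */
	struct bd_desc *desc;         /* descriptor array mapped to user mode    */
};

/* Collect up to 'max' received descriptors; returns how many were read.
 * The walk stops when the read pointer coincides with the write pointer. */
size_t bd_ring_collect(struct bd_ring *r, struct bd_desc *out, size_t max)
{
	size_t n = 0;
	uint32_t wr = *r->write_ptr;           /* snapshot of the write pointer */

	while (r->read_ptr != wr && n < max) {
		out[n++] = r->desc[r->read_ptr & (r->size - 1)];
		r->read_ptr++;
	}
	return n;
}
```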
Referring to fig. 4f, the UIO protocol stack may read the message 2 in the MBUF memory based on the obtained address and length of the message 2 in the MBUF memory, for example. The UIO protocol stack may detect the address and length of the payload field of the message 2. The message may optionally include a header and a payload field, and may further include fields such as a CRC, which is not limited in this application. In this way, the address and length of the payload field of the message 2 in the MBUF memory can be obtained.
Referring to fig. 4g, the UIO protocol stack may send the acquired address and length of the payload field of the message 2 to APP1, for example.
Referring to fig. 4h, for example, APP1 may read the payload field from the MBUF memory based on the address and length of the payload field input by the UIO protocol stack. That is, in the embodiment of the present application, the UIO protocol stack may strip the header of the message 2, so that the upper-layer application directly obtains the data portion of the message.
It should be noted that, in the embodiment of the present application, only the payload field is used as an example for illustration. In other embodiments, the UIO protocol stack may negotiate with the APP in advance to determine which fields in the message are of interest to the APP, and the UIO protocol stack may send the address and length of the fields of interest to the APP.
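As an illustration of the header stripping in fig. 4f to fig. 4h, the following hedged sketch computes the payload address and length of a frame held in the MBUF memory. The fixed 14-byte Ethernet II header and the optional 4-byte CRC are assumptions about the frame layout; in practice the fields of interest would be whatever the UIO protocol stack and the APP negotiated.

```c
#include <stddef.h>
#include <stdint.h>

#define ETH_HDR_LEN 14u   /* destination MAC + source MAC + EtherType        */
#define ETH_CRC_LEN 4u    /* frame check sequence, if it is kept in memory   */

struct payload_ref {
	const uint8_t *addr;  /* virtual address of the payload in MBUF memory   */
	size_t len;           /* payload length                                  */
};

/* Locate the payload of a frame without copying it: the application can
 * later read the payload directly from the MBUF memory using 'out'. */
int locate_payload(const uint8_t *msg, size_t msg_len, int has_crc,
                   struct payload_ref *out)
{
	size_t tail = has_crc ? ETH_CRC_LEN : 0;

	if (msg_len < ETH_HDR_LEN + tail)
		return -1;                        /* malformed frame */
	out->addr = msg + ETH_HDR_LEN;        /* header stripped, zero copy */
	out->len  = msg_len - ETH_HDR_LEN - tail;
	return 0;
}
```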
In an exemplary embodiment of the present application, as described above, after receiving a plurality of messages, the network port controller may report a hardware interrupt, that is, the MBUF memory already stores a plurality of messages, and correspondingly, the BD memory records relevant information (for example, an address, a length, etc. of the message in the MBUF memory) of each of the plurality of messages.
In one example, the data plane thread and the UIO protocol stack may process the plurality of messages in the MBUF memory one by one in the processing manner of the message 2 described above. For example, after the data plane thread obtains information such as the address and length of one message, it sends the information to the UIO protocol stack. The UIO protocol stack parses the message and sends the address and length of the payload field to the APP, and then the other messages in the MBUF memory are processed in sequence.
In another example, the data plane thread and the UIO protocol stack may process multiple messages at a time. For example, the data plane thread may obtain information such as the address and length of each of the plurality of messages from the BD memory, and send the addresses and lengths corresponding to the messages to the UIO protocol stack. The UIO protocol stack reads the address and length of the payload field of each of the plurality of messages, and sends the addresses and lengths of the payload fields of the messages to the APP together.
In the embodiment of the application, the application can directly read the payload field from the MBUF memory based on the address and length, in the MBUF memory, of the payload field input by the data plane software stack. This realizes zero-copy transmission of the message, without the multiple message copies required in the processing flow of the management plane protocol stack, namely the Ethernet software protocol stack.
Optionally, after the application program reads the message, it may instruct the UIO network port driver to release the memory area that stores the message in the MBUF memory, so as to reclaim the memory area and save memory resources.
In the embodiment of the application, for an interrupt reported by the network port controller, the UIO network port driver responds to the interrupt with the highest priority, and in the interrupt handling function notifies the event scheduler, which in turn notifies the corresponding thread in the user-mode UIO network port driver to process the interrupt event in a timely manner. Therefore, the requirement for latency-deterministic data communication in the in-vehicle autonomous driving field is met through this interrupt pass-through mode.
S408, the data plane thread enables hardware interrupts.
For example, after the data plane thread completes the interrupt processing, it may instruct the operating system kernel to re-enable the interrupt. The operating system kernel, in response to the instruction of the data plane thread, may allow the network port controller to continue reporting interrupts, and the above flow is executed repeatedly.
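With the standard Linux UIO framework, S408 can be as simple as writing a 32-bit value to the queue's /dev/uioX descriptor, which invokes the kernel driver's irqcontrol callback. Treating the value 1 as "enable" is a driver convention assumed here for illustration, not something specified by the patent.

```c
#include <stdint.h>
#include <unistd.h>

/* Re-enable the queue interrupt after processing (S408).  Returns 0 on
 * success, -1 on a short or failed write. */
int uio_irq_enable(int uio_fd)
{
	int32_t enable = 1;   /* interpreted by the kernel driver's irqcontrol() */

	return write(uio_fd, &enable, sizeof(enable)) == (ssize_t)sizeof(enable)
	       ? 0 : -1;
}
```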
Illustratively, the module interaction process for the receiving direction is shown in fig. 4d. For the sending direction, that is, when an application program needs to send data to an external device, the specific flow may be as follows. Referring to fig. 2, taking APP1 as an example, APP1 writes the data into the MBUF memory, and outputs related information such as the address of the data in the MBUF memory and the data length to the corresponding software stack, namely the data plane software stack 1. The UIO protocol stack transparently transmits the acquired related information to the UIO network port driver. Illustratively, the UIO network port driver may update the pointer in the BD memory based on the related information. For the specific updating manner, reference may be made to the prior art, which is not limited in this application.
For example, the network port controller may obtain information such as the virtual address of the data in the MBUF memory and the data length based on the pointer in the BD memory. Illustratively, the network port controller may retrieve the data from the MBUF memory via the SMMU module. The specific details are similar to those of the data receiving process and are not described here again.
For example, after obtaining the data, the network port controller may process the data accordingly, for example, perform Ethernet encapsulation on the data to obtain a corresponding message. The network port controller places the message in the queue 1 and sends the message.
For example, after the message in the queue 1 is sent, the network port controller may report the hardware interrupt number corresponding to the queue 1 to the operating system kernel; for details, refer to the descriptions of S402 to S406. After the uio_u_drv (specifically, the data plane thread) wakes up, it may determine that the current interrupt event is a transmit interrupt, and may further determine that the data in the MBUF memory has been sent. Illustratively, the uio_u_drv may release the buffer region that stores the data, and re-enable the hardware interrupt, that is, perform S408.
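For the send direction, the pointer update performed by the UIO network port driver can be sketched in the same spirit as the receive-side ring above. The descriptor fields and the convention that advancing the write pointer hands the descriptor to the controller are assumptions made for illustration only.

```c
#include <stdint.h>

struct tx_desc {                  /* one TX buffer descriptor in the BD memory */
	uint64_t mbuf_addr;           /* address of the data in the MBUF memory    */
	uint32_t len;                 /* data length                               */
	uint32_t flags;
};

struct tx_ring {
	volatile uint32_t *write_ptr; /* consumed by the network port controller   */
	uint32_t size;                /* number of descriptors (power of two)      */
	struct tx_desc *desc;         /* descriptor array                          */
};

/* Post one frame for transmission; returns the descriptor index used. */
uint32_t tx_ring_post(struct tx_ring *r, uint64_t mbuf_addr, uint32_t len)
{
	uint32_t wr = *r->write_ptr;
	struct tx_desc *d = &r->desc[wr & (r->size - 1)];

	d->mbuf_addr = mbuf_addr;     /* where the application wrote the data */
	d->len = len;
	d->flags = 0;
	*r->write_ptr = wr + 1;       /* hand the descriptor to the controller */
	return wr;
}
```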
An apparatus provided by an embodiment of the present application is described below. As shown in fig. 5:
Fig. 5 is a schematic structural diagram of a communication device according to an embodiment of the present application. As shown in fig. 5, the communication device 500 may include a processor 501, a transceiver 505, and optionally a memory 502.
The transceiver 505 may be referred to as a transceiver unit, a transceiver circuit, or the like, and is configured to implement a transceiver function. The transceiver 505 may include a receiver and a transmitter. The receiver may also be referred to as a receiving circuit, etc., and is configured to implement a receiving function; the transmitter may also be referred to as a transmitting circuit, etc., and is configured to implement a transmitting function.
The memory 502 may store a computer program or software code or instructions 504, which computer program or software code or instructions 504 may also be referred to as firmware. The processor 501 may control the MAC layer and the PHY layer by running a computer program or software code or instructions 503 therein or by calling a computer program or software code or instructions 504 stored in the memory 502 to implement the communication method provided by the embodiments of the present application. The processor 501 may be a central processing unit (central processing unit, CPU), and the memory 502 may be, for example, a read-only memory (ROM), or a random access memory (random access memory, RAM).
The processor 501 and transceiver 505 described in the present application may be implemented on an integrated circuit (integrated circuit, IC), analog IC, radio frequency integrated circuit RFIC, mixed signal IC, application specific integrated circuit (application specific integrated circuit, ASIC), printed circuit board (printed circuit board, PCB), electronic device, etc.
The communication device 500 may further include an antenna 506, and the modules included in the communication device 500 are only exemplary, and the present application is not limited thereto.
As described above, the communication device in the above embodiment description may be an automatic driving system, but the scope of the communication device described in the present application is not limited thereto, and the structure of the communication device may not be limited by fig. 5. The communication means may be a stand-alone device or may be part of a larger device. For example, the implementation form of the communication device may be:
(1) A stand-alone integrated circuit IC, or chip, or a system-on-a-chip or subsystem; (2) A set of one or more ICs, optionally including storage means for storing data, instructions; (3) modules that may be embedded within other devices; (4) an in-vehicle apparatus, etc.; (5) others, and so forth.
For the case where the implementation form of the communication device is a chip or a chip system, reference may be made to the schematic structural diagram of the chip shown in fig. 6. The chip shown in fig. 6 includes a processor 601 and an interface 602. There may be one or more processors 601, and there may be a plurality of interfaces 602. Optionally, the chip or chip system may include a memory 603.
For all relevant content of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not described herein again.
Based on the same technical idea, the embodiments of the present application also provide a computer-readable storage medium storing a computer program, the computer program containing at least one piece of code executable by a computer to control the computer to implement the above-mentioned method embodiments.
Based on the same technical idea, the embodiments of the present application also provide a computer program for implementing the above-mentioned method embodiments when the computer program is executed by a terminal device.
The program may be stored in whole or in part on a storage medium that is packaged with the processor, or in part or in whole on a memory that is not packaged with the processor.
Based on the same technical idea, the embodiment of the application also provides a chip, which includes a network port controller and a processor. The network port controller and the processor may cooperate to implement the above method embodiments.
The steps of the methods or algorithms described in connection with the present disclosure may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read-Only Memory, ROM), an erasable programmable read-only memory (Erasable Programmable ROM, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable ROM, EEPROM), a register, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be an integral part of the processor. The processor and the storage medium may reside in an ASIC.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative rather than restrictive. Those of ordinary skill in the art may make many variations without departing from the spirit of the present application and the scope of the claims, and such variations shall all fall within the protection of the present application.

Claims (16)

  1. A communication system supporting multiple protocol stacks is characterized by comprising a network port controller, an Ethernet protocol stack and a data plane protocol stack;
    The network port controller is used for:
    determining that the received first message is a management plane message;
    outputting the first message to the Ethernet protocol stack;
    the ethernet protocol stack is configured to:
    responding to the received first message, and outputting the first message to a first application;
    the network port controller is further configured to:
    determining that the received second message is a data plane message;
    storing the second message into a first memory;
    the data plane protocol stack is configured to:
    analyzing the second message in the first memory to obtain the position information of the appointed field of the second message in the first memory;
    and outputting the position information of the specified field to a second application, so that the second application acquires the specified field from the first memory according to the position information of the specified field.
  2. The system according to claim 1, wherein the network port controller includes a correspondence between a feature field and a message type, and the network port controller is specifically configured to:
    determine, based on the correspondence between the feature field and the message type, that the message type corresponding to the feature field of the first message is a management plane message.
  3. The system according to claim 1, wherein the network port controller includes a correspondence between a feature field and a message type, and the network port controller is specifically configured to:
    determine, based on the correspondence between the feature field and the message type, that the message type corresponding to the feature field of the second message is a data plane message.
  4. The system according to claim 2, wherein the network port controller comprises a first hardware queue, the first hardware queue corresponding to the management plane protocol stack, and the network port controller is specifically configured to:
    after determining that the type of the first message is a management plane message, place the received first message in the first hardware queue.
  5. The system of claim 4, wherein the network port controller is specifically configured to:
    output the first message in the first hardware queue to the Ethernet protocol stack.
  6. The system of claim 3, wherein the network port controller includes a second hardware queue, the second hardware queue corresponding to the data plane protocol stack, and the network port controller is specifically configured to:
    after determining that the type of the second message is a data plane message, place the received second message in the second hardware queue.
  7. The system of claim 6, wherein the network port controller is specifically configured to:
    output at least one message in the second hardware queue to the first memory, wherein the at least one message comprises the second message.
  8. The system of claim 7, wherein
    the network port controller is further configured to:
    writing the position information of each message in the at least one message in the first memory into a second memory;
    reporting an interrupt to the data plane protocol stack;
    the data plane protocol stack is specifically configured to:
    in response to the received interrupt, acquiring the position information of each message in the at least one message from the second memory;
    and reading the at least one message in the first memory based on the acquired position information of each message in the at least one message, and determining the position information of the appointed field of each message in the at least one message.
  9. A communication method supporting multiple protocol stacks, wherein the method is applied to a communication system supporting multiple protocol stacks, and the communication system comprises a network port controller, an Ethernet protocol stack and a data plane protocol stack;
    The network port controller determines that the received first message is a management plane message;
    the network port controller outputs the first message to the Ethernet protocol stack;
    the Ethernet protocol stack responds to the received first message and outputs the first message to a first application;
    the network port controller determines that the received second message is a data plane message;
    the network port controller stores the second message into a first memory;
    the data plane protocol stack analyzes the second message in the first memory to obtain the position information of the appointed field of the second message in the first memory;
    and the data plane protocol stack outputs the position information of the specified field to a second application, so that the second application acquires the specified field from the first memory according to the position information of the specified field.
  10. The method of claim 9, wherein the network port controller includes a correspondence between a feature field and a message type, and the network port controller determining that the received first message is a management plane message comprises:
    determining, based on the correspondence between the feature field and the message type, that the message type corresponding to the feature field of the first message is a management plane message.
  11. The method of claim 9, wherein the network port controller includes a correspondence between a feature field and a message type, and the network port controller determining that the received second message is a data plane message comprises:
    determining, based on the correspondence between the feature field and the message type, that the message type corresponding to the feature field of the second message is a data plane message.
  12. The method of claim 10, wherein the network port controller includes a first hardware queue, the first hardware queue corresponding to the management plane protocol stack, and after the network port controller determines that the received first message is a management plane message, the method further comprises:
    placing the received first message in the first hardware queue.
  13. The method of claim 12, wherein the network port controller outputting the first message to the Ethernet protocol stack comprises:
    outputting the first message in the first hardware queue to the Ethernet protocol stack.
  14. The method of claim 11, wherein the network port controller includes a second hardware queue, the second hardware queue corresponding to the data plane protocol stack, and after the network port controller determines that the received second message is a data plane message, the method further comprises:
    placing the received second message in the second hardware queue.
  15. The method of claim 14, wherein the network port controller storing the second message in the first memory comprises:
    outputting at least one message in the second hardware queue to the first memory, wherein the at least one message comprises the second message.
  16. The method of claim 15, wherein after the network port controller stores the second message in the first memory, the method further comprises:
    the network port controller writes the position information of each message in the at least one message in the first memory into a second memory;
    the network port controller reports an interrupt to the data plane protocol stack;
    the data plane protocol stack analyzes the second message in the first memory to obtain the position information of the appointed field of the second message in the first memory, and the method comprises the following steps:
    the data plane protocol stack responds to the received interrupt and acquires the position information of each message in the at least one message from the second memory;
    the data plane protocol stack reads the at least one message in the first memory based on the obtained position information of each message in the at least one message, and determines the position information of the appointed field of each message in the at least one message.
CN202180090912.0A 2021-05-31 2021-05-31 Communication method and system supporting multiple protocol stacks Pending CN116803067A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/097148 WO2022251998A1 (en) 2021-05-31 2021-05-31 Communication method and system supporting multiple protocol stacks

Publications (1)

Publication Number Publication Date
CN116803067A true CN116803067A (en) 2023-09-22

Family

ID=84323748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180090912.0A Pending CN116803067A (en) 2021-05-31 2021-05-31 Communication method and system supporting multiple protocol stacks

Country Status (2)

Country Link
CN (1) CN116803067A (en)
WO (1) WO2022251998A1 (en)

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN116521603A (en) * 2023-06-30 2023-08-01 北京大禹智芯科技有限公司 Method for realizing MCTP protocol based on FPGA
CN117395329B (en) * 2023-12-13 2024-02-06 井芯微电子技术(天津)有限公司 Method, device and storage medium for receiving and transmitting Ethernet two-layer protocol message
CN118034958A (en) * 2024-04-07 2024-05-14 阿里云计算有限公司 Task state notification system and method for multi-process scene

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
WO2019101332A1 (en) * 2017-11-24 2019-05-31 Nokia Solutions And Networks Oy Mapping of identifiers of control plane and user plane
CN108200086B (en) * 2018-01-31 2020-03-17 四川九洲电器集团有限责任公司 High-speed network data packet filtering device
CN110535813B (en) * 2018-05-25 2022-04-22 网宿科技股份有限公司 Method and device for processing coexistence of kernel mode protocol stack and user mode protocol stack
CN110753008A (en) * 2018-07-24 2020-02-04 普天信息技术有限公司 Network data processing device and method based on DPAA
CN112422453B (en) * 2020-12-09 2022-05-24 新华三信息技术有限公司 Message processing method, device, medium and equipment

Also Published As

Publication number Publication date
WO2022251998A1 (en) 2022-12-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination