CN114827053A - Core granulation network processor architecture - Google Patents

Core granulation network processor architecture

Info

Publication number
CN114827053A
CN114827053A (application CN202210702099.4A)
Authority
CN
China
Prior art keywords
module
core
array
fpga acceleration
network
Prior art date
Legal status
Pending
Application number
CN202210702099.4A
Other languages
Chinese (zh)
Inventor
杨惠
李韬
孙志刚
吕高锋
刘汝霖
熊智挺
卓超
全巍
李存禄
赵国鸿
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202210702099.4A
Publication of CN114827053A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/10 Packet switching elements characterised by the switching fabric construction
    • H04L 49/30 Peripheral units, e.g. input or output ports
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/03 Protocol definition or specification
    • H04L 69/22 Parsing or analysis of headers

Abstract

The invention discloses a core granulation network processor architecture comprising a core granulation network, an FPGA acceleration module, a multi-core processor array and a configurable switching chip. The core granulation network comprises three data planes, and the FPGA acceleration module, the multi-core processor array and the configurable switching chip each carry one of the data planes. The configurable switching chip is connected to both the multi-core processor array and the FPGA acceleration module, and the multi-core processor array is connected to the FPGA acceleration module. The configurable switching chip realizes fast data forwarding; the FPGA acceleration module realizes function acceleration; the multi-core processor array performs deep processing. The architecture can effectively reduce communication overhead and time overhead, and thereby achieves an optimized layout of performance and functions and improved processing performance for the core granulation network processing architecture.

Description

Core granulation network processor architecture
Technical Field
The invention relates to the technical field of computer network communication, in particular to a core granulation network processor architecture.
Background
Traditional high-performance network processors adopt the mainstream run-to-completion architecture and are generally programmed in assembly-like microcode, which makes them poorly programmable. To improve NP programmability, a network processing ecosystem has been built up. In the commercial NP market, PowerPC and MIPS have essentially exited the stage. Commercial high-performance network processors, such as Freescale's LX2160 and Marvell's CN9XXX series, widely adopt an internal structure of three major parts — a general multi-core processing module, configurable interfaces, and hardware accelerator engines — interconnected by a high-performance internal system bus in a single-chip integration. Performance and function are guaranteed at two levels, by the advanced system architecture and by the acceleration engines, at the cost of heavy research and development expenditure on repeated tape-out verification. As instruction sets have gradually converged, the multi-core processor module has largely unified on the ARM instruction set, widely integrating high-performance general-purpose cores supporting the ARM v8 architecture. However, a mature CPU team of roughly 80 people is needed to maintain such a design, and integrating an ARM processor core on a single chip also raises ARM intellectual-property issues. In the interface module, the network processor can multiplex controllers (Ethernet, SATA, PCIe, etc.) and high-speed serial bus (SerDes) ports in a configurable fashion, improving controller utilization and saving area. High-speed serial bus designs, however, are often accompanied by crosstalk, noise, jitter, and other problems that lower the tape-out success rate.
The hardware accelerator part likewise requires multiple tape-outs to mature, and is harder to understand and apply than the general multi-core part. These factors all lengthen the iteration cycle of the NP, making it impossible to keep pace with, and iterate as quickly as, the advance of devices and application requirements. Development cycles in the telecommunications industry are generally long: a network processor chip needs at least five years from project initiation to mass production, and its supply period can last as long as fifteen years.
The industry has also proposed building core routers on programmable switching chips, adopting a pipeline structure in place of a network processor to simplify the data plane. A programmable switching chip integrates a flexibly extensible programmable pipeline, providing larger buffers, larger table entries, and better telemetry and programming functions. However, switching chips offer weak support for hardware programming and poor flexibility for implementing rapidly emerging new technologies. Mainstream single-chip network processors, including switching-chip replacements, struggle to achieve a good compromise among high performance, determinism, flexibility, and lead time.
Therefore, a core granular network processor architecture that can effectively exploit the flexible reconfiguration of multiple chips and provide an optimized layout of performance and functions, thereby supporting faster iteration cycles and on-demand service deployment, is urgently needed in the art.
Disclosure of Invention
The invention aims to provide a core granulation network processor architecture that solves the technical problems of a poor layout of architectural functions and performance, insufficient flexibility, and inability to support fast iteration cycles and dynamically changing service deployment.
Based on the above purposes, the technical scheme provided by the invention is as follows:
a core granulation network processor architecture comprises a core granulation network, an FPGA acceleration module, a multi-core processor array and a configurable switching chip;
the core granulation network comprises three data planes, and the FPGA acceleration module, the multi-core processor array and the configurable switching chip respectively correspondingly bear one of the data planes;
the configurable switching chip is respectively connected with the multi-core processor array and the FPGA acceleration module;
the multi-core processor array is connected with the FPGA acceleration module;
the configurable switching chip is used for realizing rapid data forwarding;
the FPGA acceleration module is used for realizing function acceleration;
the array of multicore processors is used for deep processing.
Preferably, the configurable switching chip comprises: the device comprises a first network interface, an analysis module, a matching module, an execution module and a second network interface;
the first network interface is used for acquiring to-be-processed data of a message type input by a user;
the analysis module is used for analyzing the message to be processed;
the matching module is used for judging whether the analyzed message hits a preset rule table or not;
the execution module is used for processing the analyzed message which hits the preset rule table;
and the second network interface is used for sending the processed message to a user.
Preferably, a buffer area is also included;
the analyzing module is configured to analyze the to-be-processed packet specifically as follows:
the analysis module is used for analyzing the message to be processed into metadata and a data packet;
the analysis module is also used for sending the metadata to the matching module;
the analysis module is further configured to send the data packet to the buffer.
Preferably, the matching module is further configured to sequentially forward the analyzed packet that does not hit the preset rule table to the FPGA acceleration module and the multi-core processor array according to a preset forwarding rule.
Preferably, the matching module is further configured to update the preset rule table after the FPGA acceleration module and the multicore processor array complete processing.
Preferably, the FPGA acceleration module comprises a logic module;
the logic module comprises a first logic module and a second logic module;
the first logic module is used for storing processing logic related to control;
the second logic module is used for storing user-defined processing logic.
Preferably, the array of multicore processors comprises a plurality of processor cores;
the processor core is used for formulating the preset rule table of the message;
the processor core is further configured to send the formulated preset rule table to the FPGA acceleration module or the configurable switching chip.
Preferably, a plurality of the processor cores in the array of multicore processors are configured in a pipeline processing mode;
or a plurality of the processor cores in the multi-core processor array are configured to be in an RTC processing mode.
Preferably, the step of processing the analyzed packet hit in the preset rule table by the execution module specifically includes:
the execution module is used for dividing the analyzed message which hits the preset rule table into metadata and a data packet;
the execution module is used for sending the metadata and the data packet to the FPGA acceleration module;
the FPGA acceleration module is used for processing the data packet according to processing logic in the logic module and sending the processed data packet and the metadata to the multi-core processor array;
and the multi-core processor array is used for taking out the metadata and forwarding the processed data packet to the execution module.
In the core granulation network processor architecture provided by the invention, the core granulation network is divided into three data planes, and the FPGA acceleration module, the multi-core processor array and the configurable switching chip each carry one of them. The configurable switching chip is connected to both the multi-core processor array and the FPGA acceleration module, and the multi-core processor array is connected to the FPGA acceleration module. The configurable switching chip mainly realizes fast data forwarding, the FPGA acceleration module mainly realizes function acceleration, and the multi-core processor array mainly performs deep data processing. In practical application, by defining a standardized data message format specification, the architecture can effectively shield differences between the underlying physical-layer/link-layer transmission channels and achieve modular decoupling, thereby supporting flexible combined configuration of its processing resources. During operation, data messages are processed by the configurable switching chip, and only some messages are forwarded to the FPGA acceleration module or the multi-core processor array according to the processing action. This reduces the communication and time overhead of having the FPGA acceleration module or the multi-core processor array process data messages, and thereby achieves an optimized layout of performance and functions and improved processing performance for the core granulation network processing architecture.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a core granular network processor architecture according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a core granular network processor architecture according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of another core granular network processor architecture according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiments of the present invention are written in a progressive manner.
The embodiment of the invention provides a core granulation network processor architecture. It mainly solves the technical problems in the prior art of a poor layout of architectural functions and performance, insufficient flexibility, and inadequate support for fast iteration cycles and dynamically changing service deployment.
A core granulation network processor architecture comprises a core granulation network, an FPGA acceleration module, a multi-core processor array and a configurable switching chip;
the core granulation network comprises three data planes, and the FPGA acceleration module, the multi-core processor array and the configurable switching chip respectively correspondingly bear one of the data planes;
the configurable switching chip is respectively connected with the multi-core processor array and the FPGA acceleration module;
the multi-core processor array is connected with the FPGA acceleration module;
the configurable switching chip is used for realizing rapid data forwarding;
the FPGA acceleration module is used for realizing function acceleration;
the multi-core processor array is used for deep processing.
In the core granulation network processor architecture provided by the invention, the core granulation network is divided into three data planes, and the FPGA acceleration module, the multi-core processor array and the configurable switching chip each carry one of them. The configurable switching chip is connected to both the multi-core processor array and the FPGA acceleration module, and the multi-core processor array is connected to the FPGA acceleration module. The configurable switching chip mainly realizes fast data forwarding, the FPGA acceleration module mainly realizes function acceleration, and the multi-core processor array mainly performs deep data processing. In practical application, by defining a standardized data message format specification, the architecture can effectively shield differences between the underlying physical-layer/link-layer transmission channels and achieve modular decoupling, thereby supporting flexible combined configuration of its processing resources. During operation, data messages are processed by the configurable switching chip, and only some messages are forwarded to the FPGA acceleration module or the multi-core processor array according to the processing action. This reduces the communication and time overhead of having the FPGA acceleration module or the multi-core processor array process data messages, and thereby achieves an optimized layout of performance and functions and improved processing performance for the core granulation network processing architecture.
It should be noted that, in this embodiment, the configurable switching chip is specifically a protocol independent pipeline implemented based on the configurable switching chip; the FPGA acceleration module is a reconfigurable hardware acceleration engine based on an FPGA; the array of multicore processors is embodied as an array of general purpose multicore processors.
Preferably, the configurable switching chip comprises: the device comprises a first network interface, an analysis module, a matching module, an execution module and a second network interface;
the first network interface is used for acquiring to-be-processed data of a message type input by a user;
the analysis module is used for analyzing the message to be processed;
the matching module is used for judging whether the analyzed message hits a preset rule table or not;
the execution module is used for processing the analyzed message which hits the preset rule table;
the second network interface is used for sending the processed message to the user.
In practical application, the protocol-independent pipeline implemented on the configurable switching chip comprises a first network interface, an analysis module, a matching module, an execution module and a second network interface. User input data reaches the analysis module through the first network interface in message form; the analysis module parses the message to be processed and passes the parsed message to the matching module; the matching module judges whether the parsed message hits the rule table preset in the matching module, and passes messages that hit the preset rule table to the execution module; the execution module processes the message and sends it to the user through the second network interface.
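The parse, match and execute stages described above can be sketched in software. The following Python fragment is an illustrative model only — the function and field names (`parse`, `pipeline`, `dst`, the `|`-delimited toy framing) are invented for the sketch and are not part of the patent: a message is parsed into metadata, the metadata is matched against a rule table, and a hit executes the table's action while a miss is punted to the slow path.

```python
def parse(raw: bytes):
    """Split a raw message into metadata (header fields) and payload."""
    header, _, payload = raw.partition(b"|")   # toy framing: header|payload
    return {"dst": header.decode()}, payload

def pipeline(raw: bytes, rule_table: dict):
    metadata, payload = parse(raw)             # analysis module
    action = rule_table.get(metadata["dst"])   # matching module: table lookup
    if action is None:
        return ("miss", metadata, payload)     # punted to FPGA / CPU slow path
    return (action, metadata, payload)         # execution module: fast forward

rules = {"10.0.0.1": "forward_port_2"}
print(pipeline(b"10.0.0.1|payload", rules)[0])  # hit  -> forward_port_2
print(pipeline(b"10.0.0.9|payload", rules)[0])  # miss -> miss
```

Only the hit path stays on the switching chip; the miss result is what the later paragraphs hand to the FPGA acceleration module and multi-core processor array.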
Preferably, a buffer area is also included;
the analysis module is used for analyzing the message to be processed, and specifically comprises the following steps:
the analysis module is used for analyzing the message to be processed into metadata and a data packet;
the analysis module is also used for sending the metadata to the matching module;
the analysis module is also used for sending the data packet to the buffer area.
In practical application, a buffer area is additionally provided: the analysis module parses the message to be processed into metadata and a data packet, then sends the metadata to the matching module and the data packet to the buffer area.
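As a rough software analogy of this split (all names here — `analyse`, `buffer_area`, the 4-byte toy header — are invented for illustration), the parser hands only compact metadata to the matching stage while the bulk payload waits in the buffer:

```python
from collections import deque

buffer_area = deque()                    # stands in for the hardware buffer area

def analyse(message: bytes):
    """Parse a message into metadata (sent on to matching) and a data
    packet (parked in the buffer area), as the description outlines."""
    header, payload = message[:4], message[4:]   # toy 4-byte header
    metadata = {"proto": header[0], "length": len(payload)}
    buffer_area.append(payload)          # payload does not travel with metadata
    return metadata

md = analyse(bytes([6, 0, 0, 0]) + b"hello")
print(md)              # {'proto': 6, 'length': 5}
print(buffer_area[0])  # b'hello'
```

Keeping the payload out of the match stage is the point of the design: only the small metadata record flows through the rule lookup.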
Preferably, the matching module is further configured to sequentially forward the analyzed packet that misses the preset rule table to the FPGA acceleration module and the multicore processor array according to a preset forwarding rule.
In the actual application process, the matching module further forwards the analyzed messages which do not hit the preset rule table to the FPGA acceleration module and the multi-core processor array according to the preset forwarding rule.
Preferably, the matching module is further configured to update the preset rule table after the FPGA acceleration module and the multi-core processor array are processed.
In practical application, after receiving a parsed message that misses the preset rule table, the FPGA-based reconfigurable hardware acceleration engine and the general multi-core processor array process the message according to its preset rules. After processing is completed, they send completion information to the matching module, which updates the preset rule table accordingly.
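This miss-then-install behavior resembles a flow cache. The sketch below is a hypothetical model (the names `rule_table`, `slow_path` and `handle`, and the port-selection rule, are invented): the first message of a flow misses and undergoes deep processing, the resulting action is installed in the rule table, and later messages of the same flow hit the fast path.

```python
rule_table = {}

def slow_path(metadata: dict) -> str:
    """Stand-in for the FPGA engine / multi-core array processing a missed
    message; returns the forwarding action to install for this flow."""
    return "forward_port_%d" % (len(metadata["flow"]) % 4)  # illustrative rule

def handle(metadata: dict) -> str:
    action = rule_table.get(metadata["flow"])
    if action is None:                          # table miss
        action = slow_path(metadata)            # deep processing off the fast path
        rule_table[metadata["flow"]] = action   # matching module updates the table
    return action

first = handle({"flow": "A"})   # miss: slow path runs, rule installed
second = handle({"flow": "A"})  # hit: served from the updated rule table
print(first == second, "A" in rule_table)  # True True
```

The table update is what keeps the expensive processing resources off the per-packet critical path after the first miss.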
Preferably, the FPGA acceleration module comprises a logic module;
the logic module comprises a first logic module and a second logic module;
the first logic module is used for storing control-related processing logic;
the second logic module is used for storing user-defined processing logic.
In practical application, the FPGA acceleration module comprises a logic module consisting of a first logic module and a second logic module. The first logic module stores platform-related processing logic: it shields the message processing logic tied to the bottom layer and the platform, and realizes data and control message interaction with the other two processing resources. The second logic module stores user-defined processing logic, supporting various new protocols and accelerating customized message processing functions.
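The two-region split — fixed platform logic plus a reloadable user region — can be modeled as a dispatch table. This is a loose software analogy, not the FPGA implementation; every name (`platform_logic`, `user_logic`, `load_user_protocol`, `accelerate`, the protocol strings) is invented for illustration:

```python
# First region: fixed, platform-related logic (shields bottom-layer details).
platform_logic = {"ipv4": lambda pkt: ("route", pkt)}
# Second region: user-defined logic, reloadable at run time.
user_logic = {}

def load_user_protocol(name, handler):
    """Emulate reconfiguring the second logic module with a new protocol."""
    user_logic[name] = handler

def accelerate(proto: str, pkt):
    handler = platform_logic.get(proto) or user_logic.get(proto)
    if handler is None:
        raise ValueError("no logic loaded for protocol %r" % proto)
    return handler(pkt)

load_user_protocol("new_proto", lambda pkt: ("custom", pkt.upper()))
print(accelerate("ipv4", b"x"))        # ('route', b'x')    platform path
print(accelerate("new_proto", "abc"))  # ('custom', 'ABC')  user-defined path
```

The design choice mirrored here is that new protocols are added by loading handlers into the second region, while the first region never changes.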
Preferably, the array of multicore processors comprises a plurality of processor cores;
the processor core is used for formulating a preset rule table of the message;
the processor core is also used for sending the formulated preset rule table to the FPGA acceleration module or the configurable exchange chip.
In practical application, the general multi-core processor array comprises a plurality of processor cores. A processor core formulates the preset rule table for a message and, once it is formulated, issues the preset rule table to the FPGA-based reconfigurable hardware acceleration engine or to the protocol-independent pipeline based on the configurable switching chip, so that subsequent messages that can be handled in hardware are no longer sent up to the CPU, which improves processing performance.
Preferably, a plurality of processor cores in the array of the multi-core processor are configured in a pipeline processing mode;
or a plurality of processor cores in the multi-core processor array are configured to be in an RTC processing mode.
In practical application, the general multi-core processor array can be configured in a pipeline (Pipeline) processing mode, in which each core implements one network function and the cores are arranged according to a function service chain; or it can be configured in an RTC (run-to-completion) mode, in which each core independently runs all the network functions, with no data interaction between cores.
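The two core arrangements can be contrasted with a toy service chain. The sketch below is illustrative only — the three functions (`decap`, `filt`, `mark`) and both mode implementations are invented for the example, and real cores would run concurrently rather than in a loop — but it shows the structural difference: in pipeline mode each "core" applies one stage, while in RTC mode each core runs the whole chain on its own share of the traffic.

```python
# Three toy network functions standing in for the function service chain.
decap = lambda p: p[1:]                            # strip a 1-byte tunnel header
filt  = lambda p: p if p and p[0] != 0 else None   # drop packets addressed to 0
mark  = lambda p: b"\xff" + p                      # tag accepted packets

chain = [decap, filt, mark]

def pipeline_mode(packets):
    """Pipeline: each 'core' implements one function; packets traverse cores."""
    out = []
    for p in packets:
        for stage in chain:            # stage == one core in the service chain
            p = stage(p)
            if p is None:
                break                  # dropped mid-chain
        if p is not None:
            out.append(p)
    return out

def rtc_mode(packets, n_cores=4):
    """RTC: each core runs the whole chain on its own share of the packets,
    with no data interaction between cores."""
    shards = [packets[i::n_cores] for i in range(n_cores)]
    out = []
    for shard in shards:                   # each shard = one core's workload
        out.extend(pipeline_mode(shard))   # full chain per packet, per core
    return out

pkts = [bytes([1, 2]), bytes([1, 0, 3])]
print(pipeline_mode(pkts))                                    # [b'\xff\x02']
print(sorted(rtc_mode(pkts)) == sorted(pipeline_mode(pkts)))  # True
```

Both modes produce the same set of output packets; they differ in how work is laid across cores, which is exactly the configuration choice the text describes.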
Preferably, the step of processing the analyzed packet hit in the preset rule table by the execution module specifically includes:
the execution module is used for dividing the analyzed message which hits the preset rule table into metadata and a data packet;
the execution module is used for sending the metadata and the data packet to the FPGA acceleration module;
the FPGA acceleration module is used for processing the data packet according to the processing logic in the logic module and sending the processed data packet and the metadata to the multi-core processor array;
and the multi-core processor array takes out the metadata and forwards the processed data packet to the execution module.
In practical application, the execution module divides the parsed message that hits the preset rule table into metadata and a data packet, and sends both to the FPGA-based reconfigurable hardware acceleration engine. The engine processes the data packet according to the processing logic in the logic module and sends the processed data packet together with the metadata to the general multi-core processor array. The multi-core processor array extracts the metadata and forwards the processed data packet to the execution module.
In the embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. The above-described device embodiments are merely illustrative, and for example, the division of modules is only one logical function division, and other division manners may be implemented in practice, such as: multiple modules or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be electrical, mechanical or other.
In addition, each functional module in each embodiment of the present invention may be all integrated in one multi-core processor array, or each module may be separately used as one device, or two or more modules may be integrated in one device; each functional module in each embodiment of the present invention may be implemented in a form of hardware, or may be implemented in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by program instructions and related hardware, where the program instructions may be stored in a computer-readable storage medium, and when executed, the program instructions perform the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
A core granular network processor architecture provided by the present invention is described in detail above. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A core granulation network processor architecture is characterized by comprising a core granulation network, an FPGA acceleration module, a multi-core processor array and a configurable switching chip;
the core granulation network comprises three data planes, and the FPGA acceleration module, the multi-core processor array and the configurable switching chip respectively correspondingly bear one of the data planes;
the configurable switching chip is respectively connected with the multi-core processor array and the FPGA acceleration module;
the multi-core processor array is connected with the FPGA acceleration module;
the configurable switching chip is used for realizing rapid data forwarding;
the FPGA acceleration module is used for realizing function acceleration;
the array of multicore processors is used for deep processing.
2. The core granular network processor architecture of claim 1, wherein the configurable switching chip comprises: the device comprises a first network interface, an analysis module, a matching module, an execution module and a second network interface;
the first network interface is used for acquiring to-be-processed data of a message type input by a user;
the analysis module is used for analyzing the message to be processed;
the matching module is used for judging whether the analyzed message hits a preset rule table or not;
the execution module is used for processing the analyzed message which hits the preset rule table;
and the second network interface is used for sending the processed message to a user.
3. The core granular network processor architecture as recited in claim 2, further comprising a buffer;
the analyzing module is configured to analyze the to-be-processed packet specifically as follows:
the analysis module is used for analyzing the message to be processed into metadata and a data packet;
the analysis module is also used for sending the metadata to the matching module;
the analysis module is further configured to send the data packet to the buffer.
4. The core granulation network processor architecture of claim 3, wherein the matching module is further configured to sequentially forward the parsed messages that miss a preset rule table to the FPGA acceleration module and the multi-core processor array according to a preset forwarding rule.
5. The core granulation network processor architecture of claim 4, wherein the matching module is further configured to update the preset rule table after processing by both the FPGA acceleration module and the multi-core processor array is completed.
6. The core granular network processor architecture of claim 2, wherein the FPGA acceleration module comprises a logic module;
the logic module comprises a first logic module and a second logic module;
the first logic module is used for storing control-related processing logic;
the second logic module is used for storing user-defined processing logic.
7. The core granulation network processor architecture of claim 6, wherein the multi-core processor array comprises a plurality of processor cores;
the processor cores are used for formulating the preset rule table for messages;
the processor cores are further configured to send the formulated preset rule table to the FPGA acceleration module or the configurable switching chip.
8. The core granulation network processor architecture of claim 7, wherein a plurality of the processor cores in the multi-core processor array are configured in a pipelined processing mode;
or a plurality of the processor cores in the multi-core processor array are configured in a run-to-completion (RTC) processing mode.
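The two core configurations in claim 8 differ in how stages map to cores: in pipeline mode each core owns one processing stage and packets flow core to core, while in run-to-completion (RTC) mode one core carries a packet through every stage. The sketch below illustrates the difference; the stage names (`decap`, `lookup`, `encap`) are invented for illustration.

```python
# Hypothetical illustration of claim 8's two multi-core configurations.
# Each "stage" appends its name to a packet's processing trace.

STAGES = [
    lambda p: p + ["decap"],
    lambda p: p + ["lookup"],
    lambda p: p + ["encap"],
]

def pipeline_mode(packets):
    # Each stage is owned by one core; all packets pass stage by stage.
    for stage in STAGES:
        packets = [stage(p) for p in packets]
    return packets

def rtc_mode(packets):
    # Each core runs one packet through every stage to completion.
    out = []
    for p in packets:
        for stage in STAGES:
            p = stage(p)
        out.append(p)
    return out
```

Both modes produce the same per-packet result; they trade off differently in cache locality and load balance, which is presumably why the claim leaves the choice configurable.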
9. The core granulation network processor architecture as recited in claim 8, wherein the execution module processes the analyzed message that hits the preset rule table as follows:
the execution module is used for dividing the analyzed message that hits the preset rule table into metadata and a data packet;
the execution module is further used for sending the metadata and the data packet to the FPGA acceleration module;
the FPGA acceleration module is used for processing the data packet according to the processing logic in the logic module and sending the processed data packet and the metadata to the multi-core processor array;
and the multi-core processor array is used for extracting the metadata and forwarding the processed data packet to the execution module.
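The hit path of claim 9 can be sketched end to end: the execution module hands metadata and packet to the FPGA acceleration module, which applies the logic-module processing and passes both onward; the multi-core array keeps the metadata and returns the processed packet. All function names and the uppercase-transform "logic" below are illustrative assumptions, not the patent's API.

```python
# Hypothetical sketch of claim 9's hit path.

def fpga_accelerate(metadata, packet, logic):
    # FPGA applies the logic module's processing to the data packet.
    return metadata, logic(packet)

def multicore_forward(metadata, packet, collected_meta):
    # The multi-core array extracts (keeps) the metadata and forwards
    # the processed packet back toward the execution module.
    collected_meta.append(metadata)
    return packet

collected = []
meta, pkt = fpga_accelerate({"flow": 7}, b"abc", logic=bytes.upper)
result = multicore_forward(meta, pkt, collected)
```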
CN202210702099.4A 2022-06-21 2022-06-21 Core granulation network processor architecture Pending CN114827053A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210702099.4A CN114827053A (en) 2022-06-21 2022-06-21 Core granulation network processor architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210702099.4A CN114827053A (en) 2022-06-21 2022-06-21 Core granulation network processor architecture

Publications (1)

Publication Number Publication Date
CN114827053A true CN114827053A (en) 2022-07-29

Family

ID=82521643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210702099.4A Pending CN114827053A (en) 2022-06-21 2022-06-21 Core granulation network processor architecture

Country Status (1)

Country Link
CN (1) CN114827053A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170134541A1 (en) * 2015-11-06 2017-05-11 Huawei Technologies Co., Ltd. Packet processing method, apparatus, and device
US20200028776A1 (en) * 2018-07-20 2020-01-23 Netsia, Inc. SYSTEM AND METHOD FOR A TRANSLATOR SUPPORTING MULTIPLE SOFTWARE DEFINED NETWORK (SDN) APPLICATION PROGRAMMING INTERFACES (APIs)
CN112291118A (en) * 2020-12-25 2021-01-29 南京华飞数据技术有限公司 Multi-core data processing device and method based on FPGA
CN112929376A (en) * 2021-02-10 2021-06-08 恒安嘉新(北京)科技股份公司 Flow data processing method and device, computer equipment and storage medium
CN113254081A (en) * 2021-06-16 2021-08-13 中国人民解放军国防科技大学 Mirror image reading and writing system and method for control path in exchange chip
CN113285892A (en) * 2020-02-20 2021-08-20 华为技术有限公司 Message processing system, message processing method, machine-readable storage medium, and program product


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He Lubei et al.: "RESSP: FPGA-based Reconfigurable SDN Switching Fabric", Computer Science *
Yang Hui: "Yinhe Hengxin Agile Switching Chip for High-end Mobile Equipment", Journal of Computer Research and Development *

Similar Documents

Publication Publication Date Title
US9569579B1 (en) Automatic pipelining of NoC channels to meet timing and/or performance
CN108809854B (en) Reconfigurable chip architecture for large-flow network processing
US9699079B2 (en) Streaming bridge design with host interfaces and network on chip (NoC) layers
CN100527697C (en) Means and a method for switching data packets or frames
CN101304322B (en) Network equipment and packet forwarding method
US20100191911A1 (en) System-On-A-Chip Having an Array of Programmable Processing Elements Linked By an On-Chip Network with Distributed On-Chip Shared Memory and External Shared Memory
WO2017128953A1 (en) Server virtualization network sharing apparatus and method
US20170063609A1 (en) Dynamically configuring store-and-forward channels and cut-through channels in a network-on-chip
CN102624738A (en) Serial port server, protocol conversion chip and data transmission method
CN109412897B (en) Shared MAC (media Access control) implementation system and method based on multi-core processor and FPGA (field programmable Gate array)
CN106411872A (en) Method and device for compressing messages based on data message classification
Sun et al. Republic: Data multicast meets hybrid rack-level interconnections in data center
CN114827053A (en) Core granulation network processor architecture
CN107181702B (en) Device for realizing RapidIO and Ethernet fusion exchange
CN105893036B (en) A kind of Campatible accelerator extended method of embedded system
CN102308538B (en) Message processing method and device
Wang et al. Design and implementation of FC-AE-ASM data acquisition and forwarding system
CN106789706B (en) Network shunting system based on TCAM
Tam et al. Efficient scheduling of complete exchange on clusters
WO2022110112A1 (en) Message processing method and device
EP3136251B1 (en) Flit transmission method and device of network on chip
WO2018196833A1 (en) Message sending method and message receiving method and apparatus
Amorim et al. Performance evaluation of single-and multi-hop wireless networks-on-chip with NAS Parallel Benchmarks
US11934337B2 (en) Chip and multi-chip system as well as electronic device and data transmission method
CN203827362U (en) Switch supporting function expansion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220729