CN114415985A - Storage data processing unit based on a data-control separation architecture - Google Patents

Storage data processing unit based on a data-control separation architecture

Info

Publication number
CN114415985A
Authority
CN
China
Prior art keywords
data
data processing
processing unit
storage
operations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210333980.1A
Other languages
Chinese (zh)
Inventor
张雪庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202210333980.1A
Publication of CN114415985A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 - Improving I/O performance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 - Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 - Data buffering arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 - Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0658 - Controller construction arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a storage data processing unit based on a data-control separation architecture, comprising a processor and a hardware acceleration engine. The hardware acceleration engine implements the protocol offload operations and the data processing operations of the data plane; the processor implements the software processing operations of the control plane and the processing operations of the data plane other than the data processing operations. With this scheme, the storage data processing unit separates the data flow from the control flow at the hardware level, so that the control flow and the data flow do not interfere with each other and the impact on storage performance is reduced. In addition, a hardware acceleration engine dedicated to data processing operations is set apart inside the storage data processing unit, so that data processing operations are carried out more efficiently and the processing performance of the storage data processing unit in the data storage field is improved.

Description

Storage data processing unit based on a data-control separation architecture
Technical Field
The invention relates to the technical field of data storage, and in particular to a storage data processing unit based on a data-control separation architecture.
Background
Market demand is driving the global volume of stored data to grow rapidly at the ZB (zettabyte) scale. The performance of individual storage hard disks, the memory-access bandwidth of the Central Processing Unit (CPU) and the bandwidth of the network interfaces used for storage have also improved significantly, and customers place ever higher demands on the I/O (Input/Output) performance of storage systems, such as higher bandwidth, higher IOPS (Input/Output Operations Per Second) and lower latency. However, semiconductor processes in the post-Moore era are advancing slowly and the annual growth of single-core compute power has stalled (from roughly 52% to 3.5%), which poses a huge performance challenge for storage system design.
The current mainstream storage system architecture is CPU-centric and suited to traditional storage device usage scenarios: with the CPU at the center, front-end interface cards (such as network cards and FC (Fibre Channel) cards), Graphics Processing Units (GPUs), memory and other computing, storage and communication devices are attached to the CPU through a high-speed bus, all computation and control are initiated by the CPU, and the CPU plays the key controlling role. However, as the post-Moore era arrives and the growth of single-core CPU compute power slows, the CPU becomes the bottleneck in improving storage system performance.
Therefore, how to improve the processing performance of the processor in the field of data storage is a problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a storage data processing unit based on a data-control separation architecture so as to improve the processing performance of the processor in the field of data storage.
In order to achieve the above object, the present invention provides a storage data processing unit based on a data-control separation architecture, comprising:
a processor and a hardware acceleration engine;
the hardware acceleration engine is used for realizing protocol offload operations and data processing operations of a data plane; the processor is configured to implement software processing operations of a control plane and processing operations in the data plane other than the data processing operations.
The hardware acceleration engine comprises a protocol acceleration engine, and the protocol acceleration engine is used for realizing protocol processing operations and hardware acceleration operations for data consistency.
The protocol acceleration engine is specifically used for realizing TCP protocol processing operations and NVMe over Fabric protocol processing operations.
The hardware acceleration engine comprises a data flow acceleration engine, and the data flow acceleration engine is used for realizing at least one of a data handling operation, a data encoding operation, a data transcoding operation, a memory comparison operation, a data query operation and a data insertion operation.
Wherein the processor is an ARM processor.
Wherein the software processing operations realized by the ARM processor comprise at least one of: a storage configuration service operation, a chassis management operation, a log collection operation, an exception handling operation, a firmware upgrade operation, a user security operation and a production diagnostic operation.
Wherein the processing operations realized by the ARM processor in the data plane comprise at least one of: NVMe oF Target management operations, IO multipath management operations, cache management operations, Disk management operations and protocol processing operations.
Wherein the processor realizes the software processing operations of the control plane and the processing operations in the data plane other than the data processing operations through different processor cores.
Wherein the user mode of the storage data processing unit comprises the data plane, the control plane and a common module.
Wherein the common module is configured to implement: a task scheduling management operation, a memory management operation and a driver management operation.
As can be seen from the above solutions, the storage data processing unit based on a data-control separation architecture provided in the embodiment of the present invention includes a processor and a hardware acceleration engine. The hardware acceleration engine implements the protocol offload operations and the data processing operations of the data plane, while the processor implements the software processing operations of the control plane and the processing operations of the data plane other than the data processing operations. With this scheme, the storage data processing unit separates the data flow from the control flow at the hardware level, so that the control flow and the data flow do not interfere with each other and the impact on storage performance is reduced. In addition, a hardware acceleration engine dedicated to data processing operations is set apart inside the storage data processing unit, so that data processing operations are carried out more efficiently and the processing performance of the storage data processing unit in the data storage field is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a storage data processing unit based on a data-control separation architecture according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a system according to an embodiment of the present invention;
FIG. 3a is a control flow diagram of a control plane disclosed in an embodiment of the present invention;
FIG. 3b is a schematic data flow diagram of the data plane according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a storage data processing unit based on a data-control separation architecture, which is used for improving the processing performance of the processor in the field of data storage.
Referring to fig. 1, an embodiment of the present invention provides a storage data processing unit based on a data-control separation architecture, including: a processor 11 and a hardware acceleration engine 12;
the hardware acceleration engine 12 is configured to implement a protocol offload operation and a data processing operation of a data plane; the processor 11 is used to implement software processing operations of the control plane and processing operations in the data plane other than data processing operations.
The storage data processing unit in this scheme is an SPU (Storage Processing Unit). The SPU is applied in a novel, data-centric storage architecture; it comprises a processor and a hardware acceleration engine and adopts a data-control separation architecture design, with various I/O hardware acceleration technologies implemented inside the SPU. Combined with the CPU handling the software control flow, the GPU handling AI (Artificial Intelligence) and graphics workloads, and the SSD handling storage, sustained, near-linear growth of storage system performance can be achieved.
Specifically, in this embodiment an ARM processor is used as the CPU to implement the software processing operations of the control plane and the processing operations of the data plane other than the data processing operations, because ARM processors are widely applied, mature and low in power consumption; however, the scheme is not limited to an ARM processor, and other processors may be selected as required in practical applications. The hardware acceleration engine refers to an IP core inside the storage data processing unit.
In this embodiment, the hardware acceleration engine is hardware provided for storage application scenarios and dedicated to accelerated data processing. When a conventional CPU performs data processing operations, a CPU core must be occupied for the processing and the data processing efficiency is low; the dedicated hardware acceleration engine, by contrast, performs these operations without occupying CPU cores.
Further, in the conventional scheme both the data flow and the control flow are executed by the CPU, that is, the CPU cores must operate on the data flow and the control flow at the same time. If a hardware fault occurs and the control flow goes wrong, the CPU cores can no longer handle the data flow normally, which degrades the overall storage performance. In this application, by contrast, the storage data processing unit is based on a data-control separation architecture: the processor implements the software processing operations of the control plane and the processing operations of the data plane other than the data processing operations on different processor cores. This data-control separation distinguishes the data flow from the control flow at the hardware level, so that the control flow and the data flow do not affect each other and the impact on storage performance is reduced.
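By way of illustration only, the following minimal C sketch shows one way such core-level separation can be expressed in software, pinning a control-plane thread and a data-plane thread to different processor cores through POSIX thread affinity; the core numbers and the empty thread bodies are placeholders, not details defined herein.

    /* Illustrative sketch: control plane and data plane on separate cores. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    static void *control_plane(void *arg)
    {
        (void)arg;
        /* configuration services, chassis management, log collection, ... */
        return NULL;
    }

    static void *data_plane(void *arg)
    {
        (void)arg;
        /* NVMe-oF target handling, IO multipath, cache and disk management, ... */
        return NULL;
    }

    static pthread_t spawn_on_core(void *(*fn)(void *), int core)
    {
        pthread_t tid;
        pthread_attr_t attr;
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set); /* bind before start */
        pthread_create(&tid, &attr, fn, NULL);
        pthread_attr_destroy(&attr);
        return tid;
    }

    int main(void)
    {
        pthread_t ctrl = spawn_on_core(control_plane, 0); /* control flow on core 0 */
        pthread_t data = spawn_on_core(data_plane, 1);    /* data flow on core 1    */
        pthread_join(ctrl, NULL);
        pthread_join(data, NULL);
        return 0;
    }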
In summary, in this scheme the storage data processing unit distinguishes the data flow from the control flow at the hardware level on the basis of the data-control separation architecture, which prevents the control flow and the data flow from affecting each other, reduces the impact on storage performance and maximally optimizes the internal I/O performance of the storage data processing unit. In addition, a hardware acceleration engine dedicated to data processing operations is set apart inside the storage data processing unit, so that data processing operations are carried out more efficiently. Under the heavy IO processing demands of the storage field, the separation architecture and the hardware acceleration engine enable fast data flow and support processing of many concurrent I/Os in parallel, which improves the processing performance of the storage data processing unit in the data storage field, raises I/O processing efficiency and saves equipment cost.
In this embodiment, the hardware acceleration engine comprises a protocol acceleration engine, which implements protocol processing operations and hardware acceleration operations for data consistency; specifically, the protocol acceleration engine implements TCP protocol processing operations and NVMe over Fabric protocol processing operations.
It should be noted that the network card itself has communication capability. In this embodiment a protocol acceleration engine is disposed in the network card and supports protocol offload features, such as hardware offload of TCP (Transmission Control Protocol) processing, NVMe over Fabric processing and T10-DIF (Data Integrity Field) data consistency operations. In other words, operations that the traditional scheme performs in software are moved into hardware (the protocol acceleration engine), which saves software computing resources and increases processing speed. For example, a TCP packet carries a header, checksum data and so on; in the traditional scheme the CPU must process these in software (removing the header, verifying data consistency, etc.), whereas in this application they can be handled directly by the protocol acceleration engine in the network card. This is equivalent to offloading the protocol processing to hardware and thus saves processor resources.
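As a further illustration, the host side of such offloading can be pictured as filling a job descriptor for the protocol acceleration engine; every field and flag name in the following C sketch is a hypothetical placeholder, since no descriptor or register layout is defined herein.

    #include <stdint.h>

    /* Hypothetical offload flags; names and bit positions are illustrative only. */
    enum proto_offload_flags {
        OFFLOAD_TCP     = 1u << 0,  /* strip/verify TCP headers and checksums   */
        OFFLOAD_NVME_OF = 1u << 1,  /* parse NVMe over Fabric capsules          */
        OFFLOAD_T10_DIF = 1u << 2,  /* generate/verify the Data Integrity Field */
    };

    /* Hypothetical job descriptor handed from the host driver to the engine. */
    struct proto_engine_desc {
        uint64_t buf_addr;     /* DMA address of the packet/capsule buffer       */
        uint32_t buf_len;      /* buffer length in bytes                         */
        uint32_t flags;        /* bitmask of enum proto_offload_flags            */
        uint32_t dif_ref_tag;  /* T10-DIF reference tag when OFFLOAD_T10_DIF set */
        uint32_t status;       /* written back by the engine on completion       */
    };

    /* Request full protocol offload for one received buffer. */
    static void request_protocol_offload(struct proto_engine_desc *d,
                                         uint64_t dma_addr, uint32_t len)
    {
        d->buf_addr    = dma_addr;
        d->buf_len     = len;
        d->flags       = OFFLOAD_TCP | OFFLOAD_NVME_OF | OFFLOAD_T10_DIF;
        d->dif_ref_tag = 0;
        d->status      = 0;
    }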
In addition, the hardware acceleration engine in this embodiment further includes a data flow acceleration engine, which is configured to implement at least one of a data handling operation, a data encoding operation, a data transcoding operation, a memory comparison operation, a data query operation and a data insertion operation.
Specifically, when the six operations above are implemented by data flow acceleration engines, each data flow acceleration engine can be configured to perform a different operation, which raises data processing efficiency and therefore data storage performance. By executing data handling (data movement) operations, the data flow acceleration engine accelerates data copying and replaces the memcpy operation traditionally completed by the CPU; data of a given size is moved by the engine, and the supported media include memory, non-volatile memory and the like. The data encoding operations executed by the data flow acceleration engine specifically include erasure coding, data encryption, data compression and so on. Performing erasure coding in the engine replaces the original architecture's scheme of computing the data on the CPU, and because a high-speed interconnected shared bus is used, data is rarely moved between modules during erasure coding, the overhead is extremely low and fast access on the I/O path is achieved. When the data flow acceleration engine implements both data encryption and data compression, the two can be merged on the data path; that is, if both encryption and compression are required, the I/O paths may be merged, which is shorter and better than the previous separate processing logic. By executing memory comparison operations, the data flow acceleration engine accelerates data comparison and replaces the memcmp operation traditionally completed by the CPU; data of a given size is compared by the engine, and the supported comparisons include all-zero detection, all-ones detection, memory comparison with difference extraction and the like. The comparison operations further support data deduplication, data consistency protection and so on.
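As an illustrative sketch only, the six operations could be exposed to software as job descriptors submitted to the data flow acceleration engine instead of running memcpy() or memcmp() on a CPU core; all identifiers in the following C code are assumptions, not an interface defined herein.

    #include <stddef.h>
    #include <stdint.h>

    /* The six data processing operations of the data flow acceleration engine. */
    enum dfa_op {
        DFA_COPY,       /* data handling: accelerated copy, replaces CPU memcpy  */
        DFA_ENCODE,     /* erasure coding / encryption / compression             */
        DFA_TRANSCODE,
        DFA_COMPARE,    /* all-zero, all-ones, compare-and-diff, replaces memcmp */
        DFA_QUERY,
        DFA_INSERT,
    };

    /* Hypothetical job descriptor; the engine writes the result/status field. */
    struct dfa_job {
        enum dfa_op op;
        uint64_t    src;      /* source DMA address                      */
        uint64_t    dst;      /* destination DMA address (if applicable) */
        size_t      len;
        uint32_t    result;
    };

    /* Fill a copy job that would otherwise be a CPU-side memcpy(). */
    static void dfa_prepare_copy(struct dfa_job *job,
                                 uint64_t src, uint64_t dst, size_t len)
    {
        job->op     = DFA_COPY;
        job->src    = src;
        job->dst    = dst;
        job->len    = len;
        job->result = 0;
    }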
In summary, by providing an independent data flow acceleration engine, the scheme achieves efficient data handling, data encoding, data transcoding, memory comparison, data query and data insertion operations and reduces the latency of the I/O path to the microsecond level.
Referring to fig. 2, which is a schematic structural diagram of a system according to an embodiment of the present invention, the user mode of the storage data processing unit in this embodiment includes a control plane 21, a data plane 22 and a common module 23. The control plane 21 contains the software processing operations implemented by the ARM processor, specifically: storage configuration service operations, chassis management operations, log collection operations, exception handling operations, firmware upgrade operations, user security operations and production diagnostic operations. The data plane 22 contains the six acceleration co-processing operations implemented by the data flow acceleration engine, as well as the NVMe oF Target management operations, IO multipath management operations, cache management operations, Disk management operations and protocol processing operations implemented by the ARM processor. The common module is used to implement task scheduling management operations, memory management operations and driver management operations.
Specifically, the cores of the ARM processor in the storage data processing unit are mainly used to complete control plane tasks, using their computational capability to execute relatively complex storage operations. The storage configuration service operations mainly involve complex logic tasks such as the human-machine user interface and reading and writing configuration files; the chassis management operations mainly involve management, exception handling and alarm reporting for the hardware devices inside the chassis, whether shared or not; the log collection operations mainly involve collecting and dumping the FW dump inside the SPU and collecting the log files of the OS (operating system) inside the SPU and of the hardware accelerators; the exception handling operations mainly cover handling of various software and hardware exception scenarios; the firmware upgrade operations are mainly responsible for upgrading the FW inside the SPU; the user security operations mainly involve the security policies and security mechanisms of the software-hardware co-design; and the production diagnostic operations are mainly responsible for software diagnosis of the system during the production stage.
In the data plane 22, the six acceleration co-processing operations implemented by the data flow acceleration engine specifically include data handling, data encoding, data transcoding, memory comparison, data query and data insertion operations. NVMe oF Target (destination) management refers to performing protocol parsing and processing of NVMe (non-volatile memory host controller interface specification) protocol packets at the destination end (receiving node) of the NVMe over Fabric protocol. IO multipath management means that during data transmission the received data is sent to the corresponding storage medium over the corresponding IO path. Cache management means reading frequently accessed data from the underlying storage medium into memory in order to improve storage performance, so that upper-layer applications can read the corresponding data directly from memory without accessing the underlying storage medium. Disk management refers to managing the storage media. Protocol processing refers to the Initiator of the NVMe/SAS/SATA protocols, that is, the data sending end: it packs and encapsulates the data according to the protocol type, writes the data through the driver layer into the kernel driver and finally writes it to the hardware medium.
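By way of illustration only, the initiator-side protocol processing described above can be pictured as a dispatch over the backend protocol type; the constants and helper functions in the following C sketch are assumptions, not an interface defined herein.

    #include <stddef.h>
    #include <stdint.h>

    enum backend_proto { PROTO_NVME, PROTO_SAS, PROTO_SATA };

    struct io_request {
        enum backend_proto proto;   /* which initiator/driver handles the write */
        uint64_t           lba;     /* logical block address on the medium      */
        const void        *data;
        size_t             len;
    };

    /* Hypothetical per-protocol encapsulation helpers provided by the driver layer. */
    int nvme_submit_write(uint64_t lba, const void *data, size_t len);
    int sas_submit_write(uint64_t lba, const void *data, size_t len);
    int sata_submit_write(uint64_t lba, const void *data, size_t len);

    /* Pack/encapsulate according to protocol type and hand off to the driver. */
    static int protocol_process_write(const struct io_request *req)
    {
        switch (req->proto) {
        case PROTO_NVME: return nvme_submit_write(req->lba, req->data, req->len);
        case PROTO_SAS:  return sas_submit_write(req->lba, req->data, req->len);
        case PROTO_SATA: return sata_submit_write(req->lba, req->data, req->len);
        }
        return -1;
    }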
In the common module 23, the task scheduling management operations are used to manage task scheduling policies, such as task conflict management and task priority management. The memory management operations perform memory management: during initialization the chip requests memory and places it in user space, and memory resource pooling, memory sharing, zero-copy data transfer and similar operations can be realized through memory management. The driver management operations manage the user-mode drivers, covering the dedicated drivers of all hardware in the chip, and drive the hardware acceleration engines, network cards and the like to realize data read/write and control functions. In addition, whereas the conventional scheme implements task scheduling management, memory management and driver management in kernel mode, in this scheme the common module is implemented in user mode in order to make the storage data processing unit more efficient. For example, because the drivers are placed in user mode, data can be processed in an interrupt-free polling mode without blocking, and tasks are completed efficiently.
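As an illustrative sketch only, the interrupt-free polling mode can be pictured as a user-mode driver thread that spins on a completion queue instead of sleeping on interrupts; the queue layout and phase-bit convention below are assumptions borrowed from common completion-queue designs, not details defined herein.

    #include <stdint.h>

    /* Hypothetical completion entry; the hardware toggles 'phase' when it is valid. */
    struct completion {
        uint32_t id;
        uint32_t status;
        uint8_t  phase;
    };

    /* Busy-poll the completion queue: never blocks, never waits for an interrupt. */
    static void poll_completions(volatile struct completion *cq, unsigned depth,
                                 void (*handle)(uint32_t id, uint32_t status))
    {
        unsigned head = 0;
        uint8_t expected_phase = 1;

        for (;;) {
            volatile struct completion *c = &cq[head];
            if (c->phase != expected_phase)
                continue;                     /* nothing new yet: keep polling  */
            handle(c->id, c->status);
            if (++head == depth) {            /* wrap around and flip the phase */
                head = 0;
                expected_phase ^= 1;
            }
        }
    }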
Referring to fig. 3a, a control flow diagram of the control plane provided by the embodiment of the present invention, and to fig. 3b, a data flow diagram of the data plane provided by the embodiment of the present invention: in fig. 3a and 3b, the RNIC (RDMA-aware network interface controller) is a network interface controller supporting RDMA (Remote Direct Memory Access); the protocol acceleration engine provides hardware acceleration for TCP protocol offload, NVMe over Fabric offload and the T10-DIF (Data Integrity Field) data consistency protocol; Core is a core of the ARM processor; L3 Cache is the level-three cache; DRAM (Dynamic Random Access Memory) is the dynamic random access memory; the data flow acceleration engine implements the data handling, data encoding, data transcoding, memory comparison, data query and data insertion operations; and the SATA (Serial ATA), PCIe (Peripheral Component Interconnect Express) and SAS (Serial Attached SCSI) interfaces are used to connect to storage media such as SSDs (Solid State Disks) and HDDs (Hard Disk Drives).
In fig. 3a, the control plane mainly configures and operates the related modules (cache, memory, network card, etc.) through the Core. For example, when the network card is configured, the related software on the Core must control the network card to perform its initialization or configuration work, and these operations involve processing in memory; this is the first control flow line in fig. 3a. On the second control flow line in fig. 3a, initializing or configuring an acceleration engine, or configuring the controller of a storage medium, likewise goes through the Core and also involves processing in memory. In the data flow shown in fig. 3b, a data packet is first acquired from the front end through the network card; the protocol acceleration engine performs protocol processing on the packet to obtain the corresponding data and stores it in memory; the data flow acceleration engine then performs the data processing operations on the data; and the processed data is written to the storage medium through the corresponding interface, completing the data writing process. The data reading process is the reverse and is not described again here.
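By way of illustration only, the write path of fig. 3b can be condensed into the following C sketch, in which each helper function stands for one of the hardware blocks named above; none of these function names are defined herein.

    #include <stddef.h>

    /* Hypothetical stage helpers, one per hardware block in fig. 3b. */
    void *nic_receive_packet(size_t *len);              /* network card / RNIC           */
    void *proto_engine_unwrap(void *pkt, size_t *len);  /* TCP / NVMe-oF / T10-DIF       */
    void *dram_stage(void *data, size_t len);           /* land the payload in DRAM      */
    void *dfa_process(void *buf, size_t len);           /* data flow acceleration engine */
    int   media_write(void *buf, size_t len);           /* SATA / SAS / PCIe interface   */

    /* One pass through the write path; the read path runs the stages in reverse. */
    int write_path_once(void)
    {
        size_t len;
        void *pkt  = nic_receive_packet(&len);
        void *data = proto_engine_unwrap(pkt, &len);
        void *buf  = dram_stage(data, len);
        void *out  = dfa_process(buf, len);
        return media_write(out, len);
    }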
In conclusion, the storage data processing unit in this scheme distinguishes the data flow from the control flow at the hardware level on the basis of the data-control separation architecture, which prevents the control flow and the data flow from affecting each other, reduces the impact on storage performance and maximally optimizes the internal I/O performance of the storage data processing unit. Data processing operations are carried out efficiently by the hardware acceleration engine; under the heavy IO processing demands of the storage field, the separation architecture and the hardware acceleration engine enable fast data flow and support processing of many concurrent I/Os in parallel, which improves the processing performance of the storage data processing unit in the data storage field, raises I/O processing efficiency and saves equipment cost. Moreover, implementing the common module in user mode further improves the processing efficiency of the storage data processing unit.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A storage data processing unit based on a data-control separation architecture, characterized by comprising:
a processor and a hardware acceleration engine;
the hardware acceleration engine is used for realizing protocol offload operations and data processing operations of a data plane; the processor is configured to implement software processing operations of a control plane and processing operations in the data plane other than the data processing operations.
2. The storage data processing unit of claim 1, wherein the hardware acceleration engine comprises a protocol acceleration engine to implement protocol processing operations, and hardware acceleration operations for data consistency.
3. The storage data processing unit of claim 2, wherein the protocol acceleration engine is specifically configured to implement TCP protocol processing operations and NVMe over Fabric protocol processing operations.
4. The storage data processing unit of claim 3, wherein the hardware acceleration engine comprises a data flow acceleration engine configured to implement at least one of a data handling operation, a data encoding operation, a data transcoding operation, a memory comparison operation, a data query operation, and a data insertion operation.
5. The storage data processing unit of claim 1, wherein the processor is an ARM processor.
6. The storage data processing unit of claim 5,
the software processing operation realized by the ARM processor comprises the following steps: at least one of a storage configuration service operation, a chassis management operation, a log collection operation, an exception handling operation, a firmware upgrade operation, a user security operation, a production diagnostic operation.
7. The storage data processing unit of claim 6,
the processing operations implemented by the ARM processor in the data plane include: at least one of NVMe oF Target management operations, IO multipath management operations, cache management operations, Disk management operations, protocol processing operations.
8. The storage data processing unit of claim 1, wherein the processor implements the software processing operations of the control plane and the processing operations in the data plane other than the data processing operations through different processor cores.
9. The storage data processing unit of any of claims 1 to 8, wherein the user mode of the storage data processing unit comprises the data plane, the control plane and a common module.
10. The storage data processing unit of claim 9, wherein the common module is configured to implement: task scheduling management operation, memory management operation and driver management operation.
CN202210333980.1A 2022-03-31 2022-03-31 Stored data processing unit based on numerical control separation architecture Pending CN114415985A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210333980.1A CN114415985A (en) 2022-03-31 2022-03-31 Stored data processing unit based on numerical control separation architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210333980.1A CN114415985A (en) 2022-03-31 2022-03-31 Stored data processing unit based on numerical control separation architecture

Publications (1)

Publication Number Publication Date
CN114415985A true CN114415985A (en) 2022-04-29

Family

ID=81263269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210333980.1A Pending CN114415985A (en) 2022-03-31 2022-03-31 Stored data processing unit based on numerical control separation architecture

Country Status (1)

Country Link
CN (1) CN114415985A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110892380A (en) * 2017-07-10 2020-03-17 芬基波尔有限责任公司 Data processing unit for stream processing
CN110109852A (en) * 2019-04-03 2019-08-09 华东计算技术研究所(中国电子科技集团公司第三十二研究所) System and method for realizing TCP/IP protocol by hardware
CN114201421A (en) * 2022-02-17 2022-03-18 苏州浪潮智能科技有限公司 Data stream processing method, storage control node and readable storage medium

Similar Documents

Publication Publication Date Title
KR101744465B1 (en) Method and apparatus for storing data
CN111722786B (en) Storage system based on NVMe equipment
US9213500B2 (en) Data processing method and device
CN103019622B (en) The storage controlling method of a kind of data, controller, physical hard disk, and system
CN103336745B (en) FC HBA (fiber channel host bus adapter) based on SSD (solid state disk) cache and design method thereof
CN106066890B (en) Distributed high-performance database all-in-one machine system
CN103002046B (en) Multi-system data copying remote direct memory access (RDMA) framework
CN103049220A (en) Storage control method, storage control device and solid-state storage system
CN103403667A (en) Data processing method and device
CN115033188B (en) Storage hardware acceleration module system based on ZNS solid state disk
WO2023000770A1 (en) Method and apparatus for processing access request, and storage device and storage medium
CN107194811A (en) A kind of high frequency transaction quantization system based on FPGA
CN113687977B (en) Data processing device for improving computing performance based on RAID controller
CN113687978B (en) Data processing method for memory array controller
KR102471966B1 (en) Data input and output method using storage node based key-value srotre
CN109375868B (en) Data storage method, scheduling device, system, equipment and storage medium
CN100383721C (en) Isomeric double-system bus objective storage controller
WO2023185639A1 (en) Data interaction system and method based on nvme hard disk
CN102929813A (en) Method for designing peripheral component interconnect express (PCI-E) interface solid hard disk controller
CN115079936A (en) Data writing method and device
WO2023020136A1 (en) Data storage method and apparatus in storage system
WO2023124304A1 (en) Chip cache system, data processing method, device, storage medium, and chip
CN114415985A (en) Stored data processing unit based on numerical control separation architecture
US20060277326A1 (en) Data transfer system and method
CN115878311A (en) Computing node cluster, data aggregation method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220429