WO2021082877A1 - Method and apparatus for accessing a solid-state drive - Google Patents

Method and apparatus for accessing a solid-state drive (访问固态硬盘的方法及装置)

Info

Publication number
WO2021082877A1
WO2021082877A1 (PCT/CN2020/119841, CN2020119841W)
Authority
WO
WIPO (PCT)
Prior art keywords
queue
ssd
data
network card
nvme
Prior art date
Application number
PCT/CN2020/119841
Other languages
English (en)
French (fr)
Inventor
程韬
何益
李立
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP20881444.2A (EP4040279A4)
Publication of WO2021082877A1
Priority to US17/730,798 (US20220253238A1)

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
              • G06F3/0601: Interfaces specially adapted for storage systems
                • G06F3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
                  • G06F3/061: Improving I/O performance
                    • G06F3/0611: Improving I/O performance in relation to response time
                • G06F3/0628: Interfaces specially adapted for storage systems making use of a particular technique
                  • G06F3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
                    • G06F3/0656: Data buffering arrangements
                    • G06F3/0659: Command handling arrangements, e.g. command buffers, queues, command scheduling
                • G06F3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
                  • G06F3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
                  • G06F3/0671: In-line storage system
                    • G06F3/0673: Single storage device
                      • G06F3/0679: Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Definitions

  • This application relates to the storage field, and in particular to a method and apparatus for accessing a solid-state drive (SSD).
  • In many storage scenarios the disk and the host are not in the same chassis; the disk is instead attached remotely over a network.
  • In that setting, the SSD is accessed by having the network interface card (NIC) first write the data to the host, after which the host writes the data to the SSD.
  • The host central processing unit (CPU) must take part in every input/output (I/O) operation, consuming host memory and memory bandwidth, and each write crosses the Peripheral Component Interconnect Express (PCIe) bus several times, which increases latency.
  • The present application provides a method and apparatus for accessing a solid-state drive that reduce the number of PCIe interactions and lower the latency.
  • In a first aspect, the present application provides a method for accessing a solid-state drive. The method is applied to a storage node, the storage node includes a network card and a solid-state drive SSD, and the network card includes a memory. The method includes:
  • the network card receives a data save request sent by a client, where the data save request includes data to be written;
  • the network card writes the data to be written into the memory of the network card;
  • the SSD obtains the data to be written from the memory of the network card and writes it into the SSD.
  • Because the network card writes the data to be written into its own memory and the SSD obtains it directly from there, the data does not pass through the CPU or host memory and a single DMA completes the write. This avoids consuming memory and memory bandwidth, relaxes the configuration requirements on the CPU and memory, reduces the number of PCIe interactions, and lowers the latency.
  • In one possible design, the memory of the network card holds an I/O queue, the SSD holds an NVMe I/O queue, and the data save request also includes an SSD write command. Before the SSD obtains the data to be written from the memory of the network card, the method further includes:
  • the network card writes the SSD write command into the NVMe I/O queue according to the queue information of the NVMe I/O queue, and notifies the SSD that the pending SSD write command is in the NVMe I/O queue.
  • In this design, the NIC writes the data to be written into its own memory, writes the SSD write command into the SSD's NVMe I/O queue, and notifies the SSD of the pending command; the SSD then writes the data from the NIC's memory into the SSD. The data does not pass through the CPU or host memory, a single DMA completes the transfer, consumption of memory and memory bandwidth is avoided, the configuration requirements on the CPU and memory shrink, the number of PCIe interactions falls, and the latency drops.
  • In one possible design, after the data to be written has been written into the SSD, the method further includes:
  • the SSD writes a write response message into the I/O queue of the network card according to the queue information of the I/O queue, and notifies the network card that the write response message is in the I/O queue, the write response message indicating whether the SSD write command completed successfully;
  • the network card sends the write response message to the client.
  • In one possible design, the method further includes:
  • the network card receives the queue information of the NVMe I/O queue, which includes the first address and depth of the NVMe I/O queue;
  • the SSD receives the queue information of the I/O queue, which includes the first address and depth of the I/O queue.
  • In this design, the SSD receives the queue information of the NIC's I/O queue and the NIC receives the queue information of the SSD's NVMe I/O queue. When the NIC subsequently receives a data save request, it can write the SSD write command in the request into the SSD's NVMe I/O queue according to that queue's information; after receiving a data read request, it can likewise write the read instruction into the SSD's NVMe I/O queue.
  • In a second aspect, the present application provides a storage node. The storage node includes a network card and a solid-state drive SSD, and the network card includes a memory, where:
  • the network card is configured to receive a data save request sent by a client, where the data save request includes data to be written;
  • the network card is further configured to write the data to be written into the memory of the network card;
  • the SSD is configured to obtain the data to be written from the memory of the network card and write it into the SSD.
  • In one possible design, the memory of the network card holds an I/O queue, the SSD holds an NVMe I/O queue, and the data save request further includes an SSD write command;
  • the network card is further configured to: write the SSD write command into the NVMe I/O queue according to the queue information of the NVMe I/O queue, and notify the SSD that the pending SSD write command is in the NVMe I/O queue.
  • the SSD is further configured to: after the data to be written has been written into the SSD, write a write response message into the I/O queue of the network card according to the queue information of the I/O queue, and notify the network card that the write response message is in the I/O queue, the write response message indicating whether the SSD write command completed successfully;
  • the network card is further configured to send the write response message to the client.
  • the network card is further configured to: receive the queue information of the NVMe I/O queue, which includes the first address and depth of the NVMe I/O queue;
  • the SSD is further configured to: receive the queue information of the I/O queue, which includes the first address and depth of the I/O queue.
  • Figure 1 is a schematic diagram of a system structure to which this application applies;
  • Figure 2 is a schematic flowchart of the initialization process of the method for accessing a solid-state drive provided by an embodiment of this application;
  • Figure 3 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application;
  • Figure 4 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application;
  • Figure 5 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application;
  • Figure 6 is a schematic diagram of the queue deletion process.
  • In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation; any embodiment or solution described as "exemplary" or "for example" should not be construed as preferable to or more advantageous than other embodiments or solutions. Rather, such words are intended to present the related concepts in a concrete manner.
  • In the related art, when the SSD and the CPU are not in the same chassis, the SSD is accessed by having the NIC first write the data into the storage node's memory (for example, dynamic random access memory (Dynamic Random Access Memory, DRAM)), after which the storage node's CPU writes the data into the SSD.
  • The CPU must take part in every I/O operation, which consumes the storage node's memory and memory bandwidth; moreover, data is written from the NIC to the CPU and then from the CPU to the SSD through multiple PCIe interactions, so the number of PCIe interactions is large and the latency is high.
  • To solve this problem, this application provides a method and apparatus for accessing a solid-state drive, applied to a storage node that includes a CPU, memory, a NIC, and an SSD.
  • During initialization, the NIC receives the queue information of the SSD's Non-Volatile Memory Express (NVMe) I/O queue and the SSD receives the queue information of the NIC's I/O queue, so the NIC can write NVMe I/O commands directly into the SSD's NVMe I/O queue according to that queue information.
  • The data address in each NVMe I/O command is the physical address of the NIC's memory, so the data does not pass through the CPU or host memory and a single direct memory access (DMA) transfer completes the operation.
  • Figure 1 is a schematic diagram of a system structure to which this application applies.
  • The system of this application includes a client and a storage node; the client can be a host on the user side.
  • The storage node includes CPU 11, memory 12, NIC 13, and SSD 14. CPU 11 and NIC 13 are connected by a PCIe bus, as are CPU 11 and SSD 14, and NIC 13 and SSD 14.
  • The memory of NIC 13 is on-chip double data rate synchronous dynamic random-access memory (on-chip Double Data Rate SDRAM, DDR), and the memory of SSD 14 includes on-chip DDR and flash.
  • The flow of the method for accessing the SSD in this application is: the client issues an NVMe I/O command to NIC 13, and NIC 13 can write the NVMe I/O command directly into the SSD's NVMe I/O queue according to the queue information of the NVMe I/O queue learned during initialization.
  • The data address in the NVMe I/O command is the physical address of the NIC's memory, so the data does not pass through the CPU or host memory; a single DMA completes the transfer, avoiding consumption of memory and memory bandwidth, relaxing the configuration requirements on the CPU and memory, reducing the number of PCIe interactions, and lowering the latency.
  • Figure 2 is a schematic flowchart of the initialization process of the method for accessing a solid-state drive provided by an embodiment of this application. As shown in Figure 2, the method of this embodiment may include the following.
  • The NIC receives the queue information of the SSD's NVMe I/O queue sent by the CPU; the queue information includes the first address and depth of the NVMe I/O queue.
  • The NIC determines the NVMe I/O queue according to that queue information. When the NIC later receives an NVMe I/O instruction, it writes the instruction into the NVMe I/O queue at the queue's first address and notifies the SSD that a pending NVMe I/O instruction is in the queue; the queue depth lets it determine whether the NVMe I/O queue is full.
  • The SSD receives the NIC's I/O queue information sent by the CPU; it includes the first address and depth of the I/O queue.
  • The SSD determines the NIC's I/O queue accordingly. When the SSD has an I/O instruction to deliver, it writes the instruction into the I/O queue at the queue's first address and notifies the NIC that a pending I/O instruction is in the queue; the queue depth lets it determine whether the I/O queue is full.
  • The SSD's NVMe I/O queue is created by the CPU, and the NIC's I/O queue is created by the Remote Direct Memory Access (RDMA) driver running on the CPU.
  • Through this initialization, the SSD receives the queue information of the NIC's I/O queue and the NIC receives the queue information of the SSD's NVMe I/O queue. When the NIC subsequently receives a data save request, it can write the SSD write command in the request into the SSD's NVMe I/O queue according to that queue's information; after receiving a data read request, it can likewise write the read instruction into the SSD's NVMe I/O queue.
  • Figure 3 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application.
  • The method of this embodiment is applied to a storage node, and the storage node includes a CPU, memory, a NIC, and an SSD. The method may include the following.
  • The NIC receives a data save request sent by the client via RDMA; the data save request includes the data to be written.
  • The NIC writes the data to be written into the memory of the NIC.
  • The memory of the NIC holds an I/O queue, the SSD holds an NVMe I/O queue, and the data save request also includes an SSD write command.
  • The SSD obtains the data to be written from the NIC's memory and writes it into the SSD.
  • After the data has been written into the SSD, the method of this embodiment may further include:
  • the SSD writes a write response message into the NIC's I/O queue according to the queue information of the I/O queue and notifies the NIC that the write response message is in the I/O queue, the write response message indicating whether the SSD write command completed successfully;
  • the NIC sends the write response message to the client.
  • For reads, the NIC receives a data read request sent by the client.
  • The data read request includes the information of the data to be read and an SSD read command.
  • The information of the data to be read includes the namespace (Namespace, NS) and logical block address (Logical Block Address, LBA) where the data to be read resides, and the length of the data to be read.
  • The SSD reads the data according to the information of the data to be read and writes the read data into the NIC's memory.
  • The SSD writes a read response message into the NIC's I/O queue according to the queue information of the I/O queue and notifies the NIC that a read response message is in the I/O queue; the read response message indicates whether the SSD read command completed successfully.
  • In summary, the NIC receives the data save request sent by the client, the request includes the data to be written, the NIC writes the data into its own memory, and the SSD obtains the data from the NIC's memory and writes it into the SSD.
  • Because the NIC writes the data to be written into its own memory and the SSD obtains it directly from there, the data does not pass through the CPU or host memory and a single DMA completes the write, avoiding consumption of memory and memory bandwidth, relaxing the configuration requirements on the CPU and memory, reducing the number of PCIe interactions, and lowering the latency.
  • In the write flow, the NIC records the data save request and writes the data to be written into the NIC's memory (that is, the on-chip DDR) by DMA.
  • The NIC engine can ring the doorbell of the NVMe I/O queue at the SSD to signal that a pending SSD write command is in the SSD's I/O queue.
  • After receiving the SSD write command, the SSD writes the data to be written from the NIC's memory into the SSD.
  • The SSD writes a write response message into the NIC's I/O queue according to the queue information of the I/O queue and notifies the NIC that a write response message is in the I/O queue; the write response message indicates whether the SSD write command completed successfully.
  • The NIC does not know by itself when the write response message lands in its I/O queue; the SSD must ring the doorbell of the NIC-side I/O queue so that the NIC processes the data the SSD returned to it.
  • After processing the SSD's write response message, the NIC sends the write response message to the client.
  • Because the SSD write command is written into the SSD's NVMe I/O queue, the SSD is notified of the pending command, and the SSD then writes the data from the NIC's memory into the SSD, the data does not pass through the CPU or host memory; a single DMA completes the transfer, avoiding consumption of memory and memory bandwidth, relaxing the configuration requirements on the CPU and memory, reducing the number of PCIe interactions, and lowering the latency.
  • Figure 5 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application.
  • The method of this embodiment is applied to the storage node shown in Figure 1 and details the read flow; it may include the following.
  • The client sends a data read request to the NIC.
  • The data read request includes the information of the data to be read and an SSD read command; the information of the data to be read includes the NS and LBA where the data resides, and the length of the data to be read.
  • After receiving the data read request, the NIC obtains the information of the data to be read.
  • The NIC writes a read instruction into the SSD's NVMe I/O queue according to the queue information of the NVMe I/O queue; the read instruction carries the SSD read command and the information of the data to be read, and the NIC notifies the SSD that a pending read instruction is in the NVMe I/O queue.
  • The SSD reads the data according to the information of the data to be read and writes the read data into the NIC's memory by DMA.
  • The NIC receives the read response message, learns that the data has been read successfully over the direct path, and sends the read data to the client.
  • The NIC then sends I/O success status information to the client.
  • Figure 6 is a schematic diagram of the queue deletion process. As shown in Figure 6, the method of this embodiment may include the following.
  • The NIC receives an NVMe I/O queue delete request sent by the CPU; the delete request carries the identifier of the NVMe I/O queue.
  • The NIC marks the NVMe I/O queue corresponding to the identifier as disabled according to the NVMe I/O queue delete request.
  • When the CPU deletes an NVMe I/O queue, it obtains the queue's identifier and sends the NVMe I/O queue delete request, carrying that identifier, to the NIC.
  • The NIC marks the corresponding NVMe I/O queue as disabled, meaning the queue will not be used for data transfer until it is created again.
  • The CPU may send an NVMe I/O queue delete request to the NIC upon a network failure, a NIC failure, or a disk failure.
  • The SSD receives an I/O queue delete request sent by the CPU; the delete request carries the identifier of the I/O queue.
  • The SSD marks the I/O queue corresponding to the identifier as disabled according to the I/O queue delete request.
  • The CPU may send an I/O queue delete request to the SSD upon a network failure, a NIC failure, or a disk failure.
  • The computer program product includes one or more computer instructions.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
  • The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another.
  • For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means.
  • The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media.
  • The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This application provides a method and apparatus for accessing a solid-state drive. The method is applied to a storage node, the storage node includes a network card and a solid-state drive SSD, and the network card includes a memory. The method includes: the network card receives a data save request sent by a client, where the data save request includes data to be written; the network card writes the data to be written into the memory of the network card; and the SSD obtains the data to be written from the memory of the network card and writes it into the SSD.

Description

Method and apparatus for accessing a solid-state drive
Technical field
This application relates to the storage field, and in particular to a method and apparatus for accessing a solid-state drive.
Background
In most storage scenarios today, the disk and the host are often not in the same chassis; instead, the disk is attached remotely over a network. With a remote solid-state drive (Solid State Disk, SSD), the SSD is accessed by having the network interface card (NIC) first write the data to the host, after which the host writes the data to the SSD. Clearly, on the one hand, the host central processing unit (Central Processing Unit, CPU) must take part in every input/output (Input/Output, I/O) operation, consuming host memory and memory bandwidth; on the other hand, data is written from the NIC to the host and then from the host to the SSD through multiple Peripheral Component Interconnect Express (PCIe) interactions, so the number of PCIe interactions is large and the latency is high.
Summary
This application provides a method and apparatus for accessing a solid-state drive, which can reduce the number of PCIe interactions and lower the latency.
In a first aspect, this application provides a method for accessing a solid-state drive. The method is applied to a storage node, the storage node includes a network card and a solid-state drive SSD, and the network card includes a memory. The method includes:
the network card receives a data save request sent by a client, where the data save request includes data to be written;
the network card writes the data to be written into the memory of the network card;
the SSD obtains the data to be written from the memory of the network card and writes it into the SSD.
With the method for accessing a solid-state drive provided by the first aspect, the NIC receives the data save request sent by the client, the request includes the data to be written, the NIC writes the data into the NIC's memory, and the SSD obtains the data from the NIC's memory and writes it into the SSD. Because the NIC writes the data into its own memory and the SSD obtains it directly from there, the data does not pass through the CPU or host memory and a single DMA completes the transfer. This avoids consuming memory and memory bandwidth, relaxes the configuration requirements on the CPU and memory, reduces the number of PCIe interactions, and lowers the latency.
In one possible design, the memory of the network card holds an I/O queue, the SSD holds an NVMe I/O queue, and the data save request further includes an SSD write command; before the SSD obtains the data to be written from the memory of the network card, the method further includes:
the network card writes the SSD write command into the NVMe I/O queue according to the queue information of the NVMe I/O queue, and notifies the SSD that the pending SSD write command is in the NVMe I/O queue.
With the method for accessing a solid-state drive provided by this implementation, the NIC writes the data to be written into the NIC's memory, writes the SSD write command into the SSD's NVMe I/O queue, and notifies the SSD of the pending SSD write command in the NVMe I/O queue; finally, the SSD writes the data from the NIC's memory into the SSD. The data therefore does not pass through the CPU or host memory; a single DMA completes the transfer, avoiding consumption of memory and memory bandwidth, relaxing the configuration requirements on the CPU and memory, reducing the number of PCIe interactions, and lowering the latency.
In one possible design, after the data to be written has been written into the SSD, the method further includes:
the SSD writes a write response message into the I/O queue of the network card according to the queue information of the I/O queue, and notifies the network card that the write response message is in the I/O queue, where the write response message indicates whether the SSD write command completed successfully;
the network card sends the write response message to the client.
In one possible design, the method further includes:
the network card receives the queue information of the NVMe I/O queue, which includes the first address and the depth of the NVMe I/O queue;
the SSD receives the queue information of the I/O queue, which includes the first address and the depth of the I/O queue.
With the method for accessing a solid-state drive provided by this implementation, the SSD receives the queue information of the NIC's I/O queue and the NIC receives the queue information of the SSD's NVMe I/O queue. When the NIC subsequently receives a data save request, it can write the SSD write command in the request into the SSD's NVMe I/O queue according to that queue's information; after receiving a data read request, it can likewise write the read instruction into the SSD's NVMe I/O queue.
In a second aspect, this application provides a storage node. The storage node includes a network card and a solid-state drive SSD, and the network card includes a memory, where:
the network card is configured to receive a data save request sent by a client, where the data save request includes data to be written;
the network card is further configured to write the data to be written into the memory of the network card;
the SSD is configured to obtain the data to be written from the memory of the network card and write it into the SSD.
In one possible design, the memory of the network card holds an I/O queue, the SSD holds an NVMe I/O queue, and the data save request further includes an SSD write command;
the network card is further configured to: write the SSD write command into the NVMe I/O queue according to the queue information of the NVMe I/O queue, and notify the SSD that the pending SSD write command is in the NVMe I/O queue.
In one possible design, the SSD is further configured to: after the data to be written has been written into the SSD, write a write response message into the I/O queue of the network card according to the queue information of the I/O queue, and notify the network card that the write response message is in the I/O queue, where the write response message indicates whether the SSD write command completed successfully;
the network card is further configured to: send the write response message to the client.
In one possible design, the network card is further configured to: receive the queue information of the NVMe I/O queue, which includes the first address and the depth of the NVMe I/O queue;
the SSD is further configured to: receive the queue information of the I/O queue, which includes the first address and the depth of the I/O queue.
For the beneficial effects of the storage node provided by the second aspect and each of its possible designs, see the beneficial effects brought by the first aspect and each of its possible implementations; they are not repeated here.
Brief description of the drawings
Figure 1 is a schematic diagram of a system structure to which this application applies;
Figure 2 is a schematic flowchart of the initialization process of the method for accessing a solid-state drive provided by an embodiment of this application;
Figure 3 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application;
Figure 4 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application;
Figure 5 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application;
Figure 6 is a schematic diagram of the queue deletion process.
Detailed description
In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation; any embodiment or solution described as "exemplary" or "for example" should not be construed as preferable to or more advantageous than other embodiments or solutions. Rather, such words are intended to present the related concepts in a concrete manner.
In the related art, when the SSD and the central processing unit (Central Processing Unit, CPU) are not in the same chassis, the SSD is accessed by having the NIC first write the data into the storage node's memory (for example, dynamic random access memory (Dynamic Random Access Memory, DRAM)), after which the storage node's CPU writes the data into the SSD. The CPU must take part in every I/O operation, which consumes the storage node's memory and memory bandwidth; moreover, data is written from the NIC to the CPU and then from the CPU to the SSD through multiple PCIe interactions, so the number of PCIe interactions is large and the latency is high. To solve this problem, this application provides a method and apparatus for accessing a solid-state drive, applied to a storage node that includes a CPU, memory, a NIC, and an SSD. During initialization, the NIC receives the queue information of the SSD's Non-Volatile Memory Express (NVMe) I/O queue and the SSD receives the queue information of the NIC's I/O queue, so the NIC can write NVMe I/O commands directly into the SSD's NVMe I/O queue according to that queue information, and the data address in each NVMe I/O command is the physical address of the NIC's memory. The data therefore does not pass through the CPU or host memory; a single direct memory access (DMA) transfer completes the operation, avoiding consumption of memory and memory bandwidth, relaxing the configuration requirements on the CPU and memory, reducing the number of PCIe interactions, and lowering the latency. The specific process of the method for accessing a solid-state drive provided by this application is described in detail below with reference to the drawings.
Figure 1 is a schematic diagram of a system structure to which this application applies. As shown in Figure 1, the system of this application includes a client and a storage node; the client can be a host on the user side. The storage node includes CPU 11, memory 12, NIC 13, and SSD 14. CPU 11 and NIC 13 are connected by a PCIe bus, as are CPU 11 and SSD 14, and NIC 13 and SSD 14. The memory of NIC 13 is on-chip double data rate synchronous dynamic random-access memory (on-chip Double Data Rate SDRAM, DDR), and the memory of SSD 14 includes on-chip DDR and flash. As shown in Figure 1, the flow of the method for accessing the SSD in this application is: the client issues an NVMe I/O command to NIC 13, and NIC 13 can write the NVMe I/O command directly into the SSD's NVMe I/O queue according to the queue information of the NVMe I/O queue learned during initialization. The data address in the NVMe I/O command is the physical address of the NIC's memory, so the data does not pass through the CPU or host memory; a single DMA completes the transfer, avoiding consumption of memory and memory bandwidth, relaxing the configuration requirements on the CPU and memory, reducing the number of PCIe interactions, and lowering the latency. The specific process is described in detail below with reference to the drawings.
Figure 2 is a schematic flowchart of the initialization process of the method for accessing a solid-state drive provided by an embodiment of this application. As shown in Figure 2, the method of this embodiment may include:
S101: The NIC receives the queue information of the SSD's NVMe I/O queue sent by the CPU; the queue information includes the first address and the depth of the NVMe I/O queue.
S102: The NIC determines the NVMe I/O queue according to the queue information of the NVMe I/O queue.
Specifically, once the NIC has the queue information, it can locate the SSD's NVMe I/O queue. When the NIC later receives an NVMe I/O instruction, it writes the instruction into the NVMe I/O queue at the queue's first address and notifies the SSD that a pending NVMe I/O instruction is in the queue; the queue depth lets it determine whether the NVMe I/O queue is full.
S103: The SSD receives the NIC's I/O queue information sent by the CPU; the NIC's I/O queue information includes the first address and the depth of the I/O queue.
S104: The SSD determines the I/O queue according to the queue information of the I/O queue.
Specifically, once the SSD has the NIC's I/O queue information, it can locate the NIC's I/O queue. When the SSD has an I/O instruction to deliver, it writes the instruction into the I/O queue at the queue's first address and notifies the NIC that a pending I/O instruction is in the queue; the queue depth lets it determine whether the I/O queue is full.
The SSD's NVMe I/O queue is created by the CPU, and the NIC's I/O queue is created by the Remote Direct Memory Access (RDMA) driver running on the CPU. In this embodiment, through the initialization process the SSD receives the queue information of the NIC's I/O queue and the NIC receives the queue information of the SSD's NVMe I/O queue. When the NIC subsequently receives a data save request, it can write the SSD write command in the request into the SSD's NVMe I/O queue according to that queue's information; after receiving a data read request, it can likewise write the read instruction into the SSD's NVMe I/O queue.
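The queue information exchanged during this initialization is deliberately small: a first (base) address and a depth per queue. As a purely illustrative sketch (the patent publishes no data structures, so every name below is hypothetical), the following C fragment shows how an engine on either side could track a peer queue from that information, use the depth to decide whether the queue is full, and ring the peer's doorbell after writing a command at the first address:

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical record of the queue information exchanged at
     * initialization: the queue's first address and its depth, plus the
     * producer/consumer indexes and the peer's doorbell register. */
    struct queue_info {
        uint8_t           *base;     /* first address of the queue */
        uint16_t           depth;    /* number of slots in the ring */
        uint16_t           tail;     /* producer index (next free slot) */
        uint16_t           head;     /* consumer index */
        volatile uint32_t *doorbell; /* peer-side doorbell (MMIO) */
    };

    /* The ring is full when advancing the tail would collide with the
     * head; this is what the queue depth is used to decide. */
    static bool queue_full(const struct queue_info *q)
    {
        return (uint16_t)((q->tail + 1) % q->depth) == q->head;
    }

    /* Write one 64-byte command into the peer queue at its first address,
     * then ring the doorbell so the peer knows a command is pending. */
    static int submit_cmd(struct queue_info *q, const void *cmd64)
    {
        if (queue_full(q))
            return -1;                                     /* caller retries */
        memcpy(q->base + (size_t)q->tail * 64, cmd64, 64); /* PCIe write */
        q->tail = (uint16_t)((q->tail + 1) % q->depth);
        *q->doorbell = q->tail;                            /* notify the peer */
        return 0;
    }

The same bookkeeping applies symmetrically when the SSD writes response messages into the NIC's I/O queue.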
Figure 3 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application. The method of this embodiment is applied to a storage node, and the storage node includes a CPU, memory, a NIC, and an SSD. As shown in Figure 3, the method of this embodiment may include:
S201: The NIC receives a data save request sent by the client via RDMA; the data save request includes the data to be written.
S202: The NIC writes the data to be written into the NIC's memory.
S203: The SSD obtains the data to be written from the NIC's memory and writes it into the SSD.
Further, in this embodiment the NIC's memory holds an I/O queue, the SSD holds an NVMe I/O queue, and the data save request also includes an SSD write command. Before the SSD obtains the data to be written from the NIC's memory in S203, the method of this embodiment may further include:
S204: The NIC writes the SSD write command into the SSD's NVMe I/O queue according to the queue information of the NVMe I/O queue, and notifies the SSD that a pending SSD write command is in the NVMe I/O queue.
Further, after the data to be written has been written into the SSD in S203, the method of this embodiment may further include:
S206: The SSD writes a write response message into the NIC's I/O queue according to the queue information of the NIC's I/O queue, and notifies the NIC that a write response message is in the I/O queue; the write response message indicates whether the SSD write command completed successfully.
S207: The NIC sends the write response message to the client.
The above is the process of writing data into the SSD. The method of this embodiment may also include a process of reading data from the SSD. Further, the method of this embodiment may include:
S208: The NIC receives a data read request sent by the client; the data read request includes the information of the data to be read and an SSD read command. The information of the data to be read includes the namespace (Namespace, NS) and logical block address (Logical Block Address, LBA) where the data resides, and the length of the data to be read.
S209: The NIC writes a read instruction into the SSD's NVMe I/O queue according to the queue information of the NVMe I/O queue; the read instruction carries the SSD read command and the information of the data to be read, and the NIC notifies the SSD that a pending read instruction is in the NVMe I/O queue.
S210: The SSD reads the data according to the information of the data to be read and writes the read data into the NIC's memory.
Optionally, after S210 the method of this embodiment may further include:
S211: The SSD writes a read response message into the NIC's I/O queue according to the queue information of the I/O queue, and notifies the NIC that a read response message is in the I/O queue; the read response message indicates whether the SSD read command completed successfully.
S212: The NIC sends the read data to the client.
S213: The NIC sends I/O success status information to the client.
With the method for accessing a solid-state drive provided by this embodiment, the NIC receives the data save request sent by the client, the request includes the data to be written, the NIC writes the data into the NIC's memory, and the SSD obtains the data from the NIC's memory and writes it into the SSD. Because the data does not pass through the CPU or host memory, a single DMA completes the write, avoiding consumption of memory and memory bandwidth, relaxing the configuration requirements on the CPU and memory, reducing the number of PCIe interactions, and lowering the latency.
The technical solution of the method embodiment shown in Figure 3 is described in detail below using specific embodiments.
Figure 4 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application. The method of this embodiment is applied to the storage node shown in Figure 1. As shown in Figure 4, this embodiment details the data write process. The method of this embodiment may include:
S301: The client sends a data save request to the NIC; the data save request includes the data to be written and an SSD write command.
S302: The NIC records the data save request and writes the data to be written into the NIC's memory (that is, the on-chip DDR) by DMA.
S303: The NIC writes the SSD write command into the SSD's NVMe I/O queue according to the queue information of the NVMe I/O queue, and notifies the SSD that a pending SSD write command is in the NVMe I/O queue.
Specifically, the NIC's engine can ring the doorbell of the NVMe I/O queue at the SSD to signal that a pending SSD write command is in the SSD's I/O queue.
S304: After receiving the SSD write command, the SSD writes the data to be written from the NIC's memory into the SSD.
S305: The SSD writes a write response message into the NIC's I/O queue according to the queue information of the I/O queue, and notifies the NIC that a write response message is in the I/O queue; the write response message indicates whether the SSD write command completed successfully.
Specifically, the NIC does not know by itself that a write response message has been written into its I/O queue; the SSD must ring the doorbell of the NIC-side I/O queue to tell the NIC to process the data the SSD returned to it.
S306: After processing the SSD's write response message, the NIC sends the write response message to the client.
In this embodiment, after the NIC receives the data save request sent by the client, the NIC writes the data to be written into the NIC's memory, writes the SSD write command into the SSD's NVMe I/O queue, and notifies the SSD of the pending SSD write command in the NVMe I/O queue; finally, the SSD writes the data from the NIC's memory into the SSD. The data therefore does not pass through the CPU or host memory; a single DMA completes the transfer, avoiding consumption of memory and memory bandwidth, relaxing the configuration requirements on the CPU and memory, reducing the number of PCIe interactions, and lowering the latency.
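The distinctive step in this write flow is S303: the command the NIC places in the SSD's NVMe I/O queue points at the NIC's own on-chip DDR rather than at host DRAM, which is why a single DMA suffices. The C sketch below shows what building such a command could look like; the field subset follows the public NVMe submission-entry layout, but the function and its parameters are illustrative assumptions, not the patent's implementation:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative 64-byte NVMe submission-queue entry (a subset of
     * fields, laid out per the public NVMe base specification). */
    struct nvme_sqe {
        uint8_t  opcode;      /* 0x01 = Write, 0x02 = Read */
        uint8_t  flags;
        uint16_t cid;         /* command id echoed in the response */
        uint32_t nsid;        /* namespace (NS) holding the target LBAs */
        uint64_t rsvd;
        uint64_t mptr;
        uint64_t prp1, prp2;  /* physical addresses of the data buffer */
        uint64_t slba;        /* starting logical block address (LBA) */
        uint16_t nlb;         /* number of logical blocks, zero-based */
        uint16_t cdw12_15[7]; /* remaining command dwords, zeroed here */
    };

    /* Build an SSD write command whose data pointer is the physical
     * address of the NIC's on-chip DDR: the SSD then pulls the payload
     * straight out of the NIC's memory in one DMA, bypassing the CPU
     * and host memory. */
    void build_ssd_write(struct nvme_sqe *sqe, uint16_t cid, uint32_t nsid,
                         uint64_t slba, uint16_t nblocks, uint64_t nic_ddr_phys)
    {
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = 0x01;                    /* NVMe Write */
        sqe->cid    = cid;
        sqe->nsid   = nsid;
        sqe->prp1   = nic_ddr_phys;            /* data sits in NIC memory */
        sqe->slba   = slba;
        sqe->nlb    = (uint16_t)(nblocks - 1); /* NVMe counts zero-based */
    }

A read is symmetric: opcode 0x02 with prp1 again pointing into the NIC's DDR, so the SSD DMA-writes the requested data directly into the NIC's memory, as the read flow below describes.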
Figure 5 is a flowchart of an embodiment of a method for accessing a solid-state drive provided by this application. The method of this embodiment is applied to the storage node shown in Figure 1. As shown in Figure 5, this embodiment details the data read process. The method of this embodiment may include:
S401: The client sends a data read request to the NIC; the data read request includes the information of the data to be read and an SSD read command. The information of the data to be read includes the NS and LBA where the data resides and the length of the data to be read.
S402: After receiving the data read request, the NIC obtains the information of the data to be read.
S403: The NIC writes a read instruction into the SSD's NVMe I/O queue according to the queue information of the NVMe I/O queue; the read instruction carries the SSD read command and the information of the data to be read, and the NIC notifies the SSD that a pending read instruction is in the NVMe I/O queue.
S404: The SSD reads the data according to the information of the data to be read and writes the read data into the NIC's memory by DMA.
S405: After the write completes, the SSD writes a read response message into the NIC's I/O queue according to the queue information of the I/O queue, and notifies the NIC that a read response message is in the I/O queue; the read response message indicates whether the SSD read command completed successfully, so that the NIC can do the follow-up processing.
S406: The NIC receives the read response message, learns that the data has been read successfully over the direct path, and sends the read data to the client.
S407: The NIC sends I/O success status information to the client.
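On the response side the roles reverse: the SSD is the producer and the NIC is the consumer, with the SSD writing a response message into the NIC's I/O queue and ringing a NIC-side doorbell (S405) before the NIC forwards data and status to the client (S406, S407). A possible shape of the NIC's drain routine, again purely illustrative: the message layout and the empty-slot sentinel standing in for NVMe's phase-tag convention are assumptions, and the doorbell would typically be what triggers this routine:

    #include <stdint.h>
    #include <stdbool.h>

    #define SLOT_EMPTY 0xFFFFu /* assumed sentinel: slot not yet written */

    /* Hypothetical response message the SSD writes into the NIC's I/O
     * queue to report whether a read or write command succeeded. */
    struct io_rsp {
        uint16_t cid;     /* which command this answers */
        uint16_t status;  /* 0 = success, SLOT_EMPTY = free slot */
        uint32_t length;  /* payload bytes now in the NIC's on-chip DDR */
    };

    /* Drain one response if present; returns false when none is pending.
     * 'head' walks a ring of 'depth' slots at the queue's first address. */
    bool nic_poll_response(volatile struct io_rsp *ring, uint16_t depth,
                           uint16_t *head, struct io_rsp *out)
    {
        volatile struct io_rsp *slot = &ring[*head];
        if (slot->status == SLOT_EMPTY)
            return false;                /* nothing to process */
        out->cid    = slot->cid;         /* copy out before recycling */
        out->status = slot->status;
        out->length = slot->length;
        slot->status = SLOT_EMPTY;       /* hand the slot back to the SSD */
        *head = (uint16_t)((*head + 1) % depth);
        return true;
    }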
Figure 6 is a schematic diagram of the queue deletion process. As shown in Figure 6, the method of this embodiment may include:
S501: The NIC receives an NVMe I/O queue delete request sent by the CPU; the delete request carries the identifier of the NVMe I/O queue.
S502: The NIC marks the NVMe I/O queue corresponding to the identifier as disabled according to the NVMe I/O queue delete request.
Specifically, when deleting an NVMe I/O queue, the CPU obtains the queue's identifier and sends the NVMe I/O queue delete request, carrying that identifier, to the NIC. The NIC marks the corresponding NVMe I/O queue as disabled, meaning the queue will no longer be used for data transfer until it is created again.
In this embodiment, the CPU may send the NVMe I/O queue delete request to the NIC upon a network failure, a NIC failure, or a disk failure.
S503: The SSD receives an I/O queue delete request sent by the CPU; the delete request carries the identifier of the I/O queue.
S504: The SSD marks the I/O queue corresponding to the identifier as disabled according to the I/O queue delete request.
Likewise, the CPU sends the I/O queue delete request, carrying the identifier of the I/O queue, to the SSD, and the SSD marks the corresponding I/O queue as disabled, meaning the queue will no longer be used for data transfer.
In this embodiment, the CPU may send the I/O queue delete request to the SSD upon a network failure, a NIC failure, or a disk failure.
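Deletion here is soft: the queue's memory is not reclaimed, the queue is only flagged unusable until it is created again. A small illustrative sketch (names hypothetical) of how the NIC, or symmetrically the SSD, could handle a delete request that carries a queue identifier:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical per-queue state kept by the side that owns the queue. */
    struct managed_queue {
        uint16_t id;      /* identifier carried in the delete request */
        bool     enabled; /* false = disabled, no further data transfer */
    };

    /* Mark the queue matching the identifier as disabled rather than
     * freeing it; it stays unusable until it is created again. */
    int handle_queue_delete(struct managed_queue *queues, int count,
                            uint16_t id)
    {
        for (int i = 0; i < count; i++) {
            if (queues[i].id == id) {
                queues[i].enabled = false;
                return 0;
            }
        }
        return -1; /* no queue with that identifier */
    }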
A person of ordinary skill in the art will understand that the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive, Solid State Disk (SSD)).

Claims (8)

  1. A method for accessing a solid-state drive, wherein the method is applied to a storage node, the storage node comprises a network card and a solid-state drive SSD, the network card comprises a memory, and the method comprises:
    the network card receiving a data save request sent by a client, wherein the data save request comprises data to be written;
    the network card writing the data to be written into the memory of the network card;
    the SSD obtaining the data to be written from the memory of the network card and writing it into the SSD.
  2. The method according to claim 1, wherein the memory of the network card holds an I/O queue, the SSD holds an NVMe I/O queue, and the data save request further comprises an SSD write command; before the SSD obtains the data to be written from the memory of the network card, the method further comprises:
    the network card writing the SSD write command into the NVMe I/O queue according to queue information of the NVMe I/O queue, and notifying the SSD that the pending SSD write command is in the NVMe I/O queue.
  3. The method according to claim 2, wherein after the data to be written is written into the SSD, the method further comprises:
    the SSD writing a write response message into the I/O queue of the network card according to queue information of the I/O queue, and notifying the network card that the write response message is in the I/O queue, wherein the write response message indicates whether the SSD write command completed successfully;
    the network card sending the write response message to the client.
  4. The method according to claim 2, wherein the method further comprises:
    the network card receiving the queue information of the NVMe I/O queue, wherein the queue information of the NVMe I/O queue comprises a first address and a depth of the NVMe I/O queue;
    the SSD receiving the queue information of the I/O queue, wherein the queue information of the I/O queue comprises a first address and a depth of the I/O queue.
  5. A storage node, wherein the storage node comprises a network card and a solid-state drive SSD, and the network card comprises a memory, wherein:
    the network card is configured to receive a data save request sent by a client, wherein the data save request comprises data to be written;
    the network card is further configured to write the data to be written into the memory of the network card;
    the SSD is configured to obtain the data to be written from the memory of the network card and write it into the SSD.
  6. The storage node according to claim 5, wherein the memory of the network card holds an I/O queue, the SSD holds an NVMe I/O queue, and the data save request further comprises an SSD write command;
    the network card is further configured to: write the SSD write command into the NVMe I/O queue according to queue information of the NVMe I/O queue, and notify the SSD that the pending SSD write command is in the NVMe I/O queue.
  7. The storage node according to claim 6, wherein
    the SSD is further configured to: after the data to be written is written into the SSD, write a write response message into the I/O queue of the network card according to queue information of the I/O queue, and notify the network card that the write response message is in the I/O queue, wherein the write response message indicates whether the SSD write command completed successfully;
    the network card is further configured to: send the write response message to the client.
  8. The storage node according to claim 6, wherein
    the network card is further configured to: receive the queue information of the NVMe I/O queue, wherein the queue information of the NVMe I/O queue comprises the first address and the depth of the NVMe I/O queue;
    the SSD is further configured to: receive the queue information of the I/O queue, wherein the queue information of the I/O queue comprises the first address and the depth of the I/O queue.
PCT/CN2020/119841 2019-10-28 2020-10-07 Method and apparatus for accessing a solid-state drive WO2021082877A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20881444.2A 2019-10-28 2020-10-07 Method and apparatus for accessing solid state disk
US17/730,798 US20220253238A1 (en) 2019-10-28 2022-04-27 Method and apparatus for accessing solid state disk

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911031211.0 2019-10-28
CN201911031211.0A CN112732166B (zh) 2019-10-28 2019-10-28 访问固态硬盘的方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/730,798 Continuation US20220253238A1 (en) 2019-10-28 2022-04-27 Method and apparatus for accessing solid state disk

Publications (1)

Publication Number Publication Date
WO2021082877A1 (zh)

Family

ID=75589097

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/119841 WO2021082877A1 (zh) 2019-10-28 2020-10-07 访问固态硬盘的方法及装置

Country Status (4)

Country Link
US (1) US20220253238A1 (zh)
EP (1) EP4040279A4 (zh)
CN (1) CN112732166B (zh)
WO (1) WO2021082877A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116909484A * 2023-08-02 2023-10-20 中科驭数(北京)科技有限公司 Data processing method, apparatus, and device, and computer-readable storage medium


Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6246682B1 (en) * 1999-03-05 2001-06-12 Transwitch Corp. Method and apparatus for managing multiple ATM cell queues
US6831916B1 (en) * 2000-09-28 2004-12-14 Balaji Parthasarathy Host-fabric adapter and method of connecting a host system to a channel-based switched fabric in a data network
US7353301B2 (en) * 2004-10-29 2008-04-01 Intel Corporation Methodology and apparatus for implementing write combining
US8892820B2 (en) * 2010-03-19 2014-11-18 Netapp, Inc. Method and system for local caching of remote storage data
US9263102B2 (en) * 2010-09-28 2016-02-16 SanDisk Technologies, Inc. Apparatus, system, and method for data transformations within a data storage device
JP2014063497A * 2012-09-21 2014-04-10 Plx Technology Inc PCI Express switch having logical device functionality
CN106688217B * 2014-03-08 2021-11-12 狄亚曼提公司 Method and system for converged networking and storage
US9934177B2 (en) * 2014-11-04 2018-04-03 Cavium, Inc. Methods and systems for accessing storage using a network interface card
KR102430187B1 * 2015-07-08 2022-08-05 삼성전자주식회사 Method for implementing an RDMA NVMe device
CN113407244A * 2016-03-01 2021-09-17 华为技术有限公司 Cascade board, and system and method for SSD remote shared access
WO2018119738A1 (en) * 2016-12-28 2018-07-05 Intel Corporation Speculative read mechanism for distributed storage system
US10698808B2 (en) * 2017-04-25 2020-06-30 Samsung Electronics Co., Ltd. Garbage collection—automatic data placement
US10761752B1 (en) * 2017-05-23 2020-09-01 Kmesh, Inc. Memory pool configuration for allocating memory in a distributed network
US10831650B2 (en) * 2018-03-07 2020-11-10 Exten Technologies, Inc. Systems and methods for accessing non-volatile memory and write acceleration cache
CN109117386B * 2018-07-12 2021-03-09 中国科学院计算技术研究所 System and method for network remote read/write of secondary storage
CN110896406A * 2018-09-13 2020-03-20 华为技术有限公司 Data storage method and apparatus, and server
CN109491809A * 2018-11-12 2019-03-19 西安微电子技术研究所 Communication method for reducing high-speed bus latency
US10592447B1 (en) * 2018-12-28 2020-03-17 EMC IP Holding Company LLC Accelerated data handling in cloud data storage system
US10860223B1 (en) * 2019-07-18 2020-12-08 Alibaba Group Holding Limited Method and system for enhancing a distributed storage system by decoupling computation and network tasks
CN110908600B * 2019-10-18 2021-07-20 华为技术有限公司 Data access method and apparatus, and first computing device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110213854A1 (en) * 2008-12-04 2011-09-01 Yaron Haviv Device, system, and method of accessing storage
US20160147676A1 (en) * 2014-11-20 2016-05-26 Samsung Electronics Co., Ltd. Peripheral component interconnect (pci) device and system including the pci
CN106210041A * 2016-07-05 2016-12-07 杭州华为数字技术有限公司 Data writing method and server-side network interface card
CN107003943A * 2016-12-05 2017-08-01 华为技术有限公司 Method for controlling data read/write commands in an NVMe over Fabric architecture, storage device, and system
CN107077426A * 2016-12-05 2017-08-18 华为技术有限公司 Method, device, and system for controlling data read/write commands in an NVMe over Fabric architecture
CN108369530A * 2016-12-05 2018-08-03 华为技术有限公司 Method, device, and system for controlling data read/write commands in a non-volatile high-speed transmission bus architecture
CN109936513A * 2019-02-18 2019-06-25 网宿科技股份有限公司 FPGA-based data packet processing method, smart NIC, and CDN server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4040279A4 *

Also Published As

Publication number Publication date
EP4040279A4 (en) 2022-12-14
EP4040279A1 (en) 2022-08-10
CN112732166B (zh) 2024-06-18
CN112732166A (zh) 2021-04-30
US20220253238A1 (en) 2022-08-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20881444

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020881444

Country of ref document: EP

Effective date: 20220505