CN111913892B - Providing open channel storage devices using CMBs - Google Patents


Publication number: CN111913892B
Authority: CN (China)
Prior art keywords: command, address, storage device, cache unit, host
Legal status: Active
Application number: CN201910385222.2A
Original language: Chinese (zh)
Other versions: CN111913892A (en)
Inventors: 贾舒, 孙通, 郑宏亮
Current assignee: Beijing Starblaze Technology Co., Ltd.
Original assignee: Beijing Starblaze Technology Co., Ltd.
Application filed by Beijing Starblaze Technology Co., Ltd.
Priority applications: CN201910385222.2A; CN202111356021.3A; PCT/CN2020/093100
Publications: CN111913892A (application); CN111913892B (grant)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668Details of memory controller
    • G06F13/1673Details of memory controller using buffers


Abstract

The present application relates to storage technology, and in particular to a storage device comprising a command interface, a control component, and an NVM. The storage device provides both a storage space and a memory space to a host, and a cache unit of the memory space stores a first-type address. The control component receives, through the command interface, an IO command indicating a cache unit index, obtains the first-type address from the cache unit according to the cache unit index in the IO command, and accesses the NVM according to the first-type address. The application thus provides a method, a host, and a storage device for accessing the storage device by physical address without modifying the operating system kernel, so that applications can obtain the advantages of open-channel solid-state storage devices while avoiding the risk of kernel modification.

Description

Providing open channel storage devices using CMBs
Technical Field
The present application relates to storage technology, and in particular, to providing an Open Channel (Open Channel) storage device using a CMB (controller memory buffer).
Background
Fig. 1 is a block diagram of a prior-art storage device. The storage device 102 is coupled to a host to provide storage capacity to the host. The host and the storage device 102 may be coupled in various ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, or a wireless communication network. The host may be an information processing device capable of communicating with the storage device in the manners above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device 102 includes an interface 103, a control component 104, one or more NVM chips 105, and a DRAM (Dynamic Random Access Memory) 110.
Common NVMs include NAND flash memory, phase-change memory, FeRAM (Ferroelectric RAM), MRAM (Magnetoresistive RAM), RRAM (Resistive RAM), XPoint memory, and the like.
The interface 103 may be adapted to exchange data with the host by means such as SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, or Fibre Channel.
The control component 104 is used to control data transfer among the interface 103, the NVM chip 105, and the DRAM 110, and is also used for storage management, host-logical-address-to-flash-physical-address mapping, wear leveling, bad block management, and the like. The control component 104 can be implemented in software, hardware, firmware, or a combination thereof; for example, it can take the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof. The control component 104 may also include a processor or controller in which software is executed to manipulate the hardware of the control component 104 to process IO (Input/Output) commands. The control component 104 may also be coupled to the DRAM 110 and may access its data; FTL tables and/or cached IO command data may be stored in the DRAM.
The control component 104 includes a flash interface controller (also called a media interface controller or flash channel controller). The flash interface controller is coupled to the NVM chip 105, issues commands to the NVM chip 105 in a manner conforming to the interface protocol of the NVM chip 105 to operate it, and receives the command execution results output from the NVM chip 105. Known NVM chip interface protocols include "Toggle", "ONFI", etc.
In some storage devices, mapping information from logical addresses to physical addresses is maintained using an FTL (Flash Translation Layer). The logical addresses constitute the storage space of the storage device as perceived by a host accessing it, while a physical address is an address for accessing a physical storage unit of the storage device. In the related art, address mapping may also be implemented using an intermediate address form: for example, the logical address is mapped to an intermediate address, which is in turn further mapped to a physical address. In these cases, the read/write commands received by the storage device indicate logical addresses. A table structure storing the mapping information from logical addresses to physical addresses is called an FTL table.
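As a minimal illustration of the mapping just described, an FTL table can be modeled as a lookup from logical address to physical address. The class and the tuple layout of a physical address below are hypothetical, chosen only for the sketch, and are not taken from the patent:

```python
# Illustrative FTL table: logical address (LBA) -> physical address (PPA).
# The (channel, die, plane, page) tuple layout for a PPA is an assumption.
class FlashTranslationLayer:
    def __init__(self):
        self.table = {}               # LBA -> PPA mapping maintained by the FTL

    def map(self, lba, ppa):
        self.table[lba] = ppa         # record where the logical block is stored

    def translate(self, lba):
        return self.table[lba]        # resolve a read/write command's LBA

ftl = FlashTranslationLayer()
ftl.map(lba=0x10, ppa=(2, 5, 0, 7))   # e.g. (channel, die, plane, page)
assert ftl.translate(0x10) == (2, 5, 0, 7)
```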
In some storage devices, the FTL is provided by a host coupled to the storage device: the FTL table is stored in the host's memory, and the FTL is provided by software executed on the host's CPU. In still other cases, a storage management device disposed between the host and the storage device provides the FTL. In these cases, the physical addresses constitute the storage space of the storage device as perceived by the host, and the read/write commands received by the storage device indicate physical addresses.
According to the NVMe protocol, a host accesses a storage device using logical addresses, and a storage address space provided by the storage device to the host is a logical address space composed of logical addresses.
More recently, Open-Channel Solid State Drives (open-channel solid-state storage devices) have also been provided. The open-channel solid-state storage device specification is available at http://lightnvm.io/docs/OCSD-2_0-20180129.pdf and is incorporated herein by reference in its entirety. The specification provides an extension on the basis of the NVMe protocol: according to it, the host accesses the storage device using physical addresses, and the storage address space provided by the storage device to the host is an address space composed of physical addresses.
To implement the open-channel solid-state storage device specification, the storage device must support the IO commands defined by the specification, while a driver is added to the host's operating system kernel (e.g., the LightNVM subsystem, available from https://openchannels. …).
Fig. 2 is a schematic diagram of a solid-state memory device using an open channel in the prior art.
A host is coupled with the open channel solid state storage device. The NVMe device driver and the LightNVM subsystem are installed in the host's operating system, and applications run in the host's user space and access the storage device (the open channel SSD). The NVMe device driver is a driver running in the operating system kernel for accessing NVMe storage devices (available from https://nvmexpress. …); the LightNVM subsystem is a driver running in the kernel for accessing the open channel SSD.
By way of example, an application accesses the storage device using a logical address. The FTL running at the host translates the logical address into a physical address and provides it to the LightNVM subsystem. The LightNVM subsystem, through the NVMe device driver, generates an IO command conforming to the open-channel solid-state storage device specification and provides it to the open channel SSD, which executes it. The IO command indicates, for example, a host memory address, a physical address on the storage device, and the length of the data to be accessed. Optionally, as in the NVMe protocol, the IO command further indicates metadata to be additionally written to the non-volatile storage medium together with the data; the metadata is, for example, a logical address associated with the stored data, or check information for protecting the data.
As yet another example, the application provides a physical address to the LightNVM subsystem to access the storage device.
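The IO command fields enumerated above can be gathered into a small sketch. The field names below are illustrative only; they are not the NVMe or open-channel wire format:

```python
# Hypothetical grouping of the IO command fields described in the text.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OpenChannelIoCommand:
    host_memory_addr: int              # source/destination buffer in host memory
    physical_addr: int                 # physical address (PPA) on the device
    length: int                        # length of the data to access, in bytes
    metadata: Optional[bytes] = None   # e.g. associated logical address, check info

cmd = OpenChannelIoCommand(host_memory_addr=0x7F000000, physical_addr=0x42000,
                           length=4096, metadata=b"lba=16")
assert cmd.length == 4096 and cmd.metadata == b"lba=16"
```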
Disclosure of Invention
NVMe device drivers are generally considered to be mature drivers and have been widely validated. However, using an open channel solid-state storage device requires adding a LightNVM subsystem or a driver providing similar functionality to the kernel, or using a new version of the operating system kernel including the LightNVM subsystem. Modifying or updating the operating system kernel presents risks. Data centers, internet operators, and the like are reluctant to assume the risk of modifying the kernel.
When using an open-channel solid-state storage device, the host must know the characteristics of the device related to its non-volatile storage medium, e.g., that an erase operation must be performed before data is written to the storage medium. However, the system calls that prior-art operating systems provide for accessing storage devices offer only read() (pread()) and write() (pwrite()) operations, so applications in user space lack sufficient capability to operate an open-channel solid-state storage device without updating the operating system kernel.
According to the embodiment of the application, the method, the host and the storage device for accessing the storage device by using the physical address are provided under the condition that the kernel of the operating system is not modified, so that the application program can obtain the advantages provided by the open channel solid-state storage device, and the risk caused by modifying the kernel is avoided.
According to a first aspect of the present application, there is provided a first storage device according to the first aspect of the present application, comprising: a command interface, a control component, and an NVM; the storage device provides a storage space and a memory space to a host; a cache unit of the memory space stores a first-type address; the control component receives an IO command indicating a cache unit index through the command interface, obtains the first-type address from the cache unit according to the cache unit index in the IO command, and accesses the NVM according to the first-type address.
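The device-side flow of this first storage device can be sketched as follows. This is a simulation under assumed names and sizes; the 64-byte cache unit size and the layout of the cache unit are illustrative, not specified by the application:

```python
# Simulated device-side flow: the control component uses the cache unit index
# carried by the IO command to fetch a first-type (physical) address from the
# CMB, then accesses the NVM at that address.
CACHE_UNIT_SIZE = 64                      # assumed size of one cache unit, bytes

cmb = bytearray(16 * CACHE_UNIT_SIZE)     # stands in for the controller memory buffer
nvm = {0x2000: b"stored-data"}            # stands in for NVM, keyed by physical address

def host_stage_address(index, physical_addr):
    """Host writes the physical address into cache unit `index` of the CMB."""
    off = index * CACHE_UNIT_SIZE
    cmb[off:off + 8] = physical_addr.to_bytes(8, "little")

def device_execute_read(cache_unit_index):
    """Control component: fetch the first-type address, then read the NVM."""
    off = cache_unit_index * CACHE_UNIT_SIZE
    physical_addr = int.from_bytes(cmb[off:off + 8], "little")
    return nvm[physical_addr]

host_stage_address(index=3, physical_addr=0x2000)
assert device_execute_read(3) == b"stored-data"
```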
According to the first storage device of the first aspect of the present application, there is provided the second storage device of the first aspect of the present application, wherein the cache unit index indicated by the IO command is a substitute for the address of the second type of the IO command.
According to the first or second storage device of the first aspect of the present application, there is provided the third storage device of the first aspect of the present application, wherein the first-type address is a physical address indexing the NVM of the storage device, and the second-type address is a logical address indexing the storage space.
According to one of the first to third storage devices of the first aspect of the present application, there is provided the fourth storage device of the first aspect of the present application, wherein if the IO command indicates a second-type address to the storage device, the control component converts the second-type address into a first-type address according to a Flash Translation Layer (FTL), and accesses the NVM according to the first-type address.
According to one of the first to third storage devices of the first aspect of the present application, there is provided the fifth storage device of the first aspect of the present application, wherein if the IO command indicates a first-type address to the storage device, the control component accesses the NVM according to the first-type address.
According to one of the first to third storage devices of the first aspect of the present application, there is provided the sixth storage device of the first aspect of the present application, wherein if the IO command indicates a key of a KV storage device to the storage device, the control component converts the key into a first-type address according to a Flash Translation Layer (FTL), and accesses the NVM according to the first-type address.
According to one of the first to sixth storage devices of the first aspect of the present application, there is provided the seventh storage device of the first aspect of the present application, wherein the memory space provided by the storage device to the host is a controller memory buffer defined according to the NVMe protocol or a memory space defined according to the PCIe protocol.
According to a seventh storage device of the first aspect of the present application, there is provided the eighth storage device of the first aspect of the present application, wherein the controller memory buffer is a non-volatile storage space.
According to one of the first to eighth storage devices of the first aspect of the present application, there is provided the ninth storage device of the first aspect of the present application, wherein the first type of address is a physical address complying with the specification of the open channel solid state storage device.
According to the ninth storage device of the first aspect of the present application, there is provided the tenth storage device of the first aspect of the present application, wherein the IO command is generated by operating a storage device driver using a pread()/pwrite() system call.
According to the tenth storage device of the first aspect of the present application, there is provided the eleventh storage device of the first aspect of the present application, wherein, when the pread()/pwrite() system call is used, the address parameter "__offset" is set to the cache unit index, and the storage device driver generates the second-type address of the IO command from the address parameter "__offset".
According to the tenth or eleventh storage device of the first aspect of the present application, there is provided the twelfth storage device of the first aspect of the present application, wherein if the IO command is a read command, the read command is generated by the storage device driver by calling the pread() system call.
According to one of the tenth to twelfth storage devices of the first aspect of the present application, there is provided the thirteenth storage device of the first aspect of the present application, wherein if the IO command is a write command, the write command is generated by the storage device driver by calling the pwrite() system call.
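The pread()/pwrite() usage described above can be sketched as follows, with a temporary file standing in for the storage device's device node. The BLOCK_SIZE scaling and the function names are assumptions for the sketch; in the claimed scheme the storage device driver would derive the IO command's address field from this offset:

```python
# Passing a cache unit index through the offset argument of pread()/pwrite().
# A temporary file simulates the device node; sizes and names are illustrative.
import os
import tempfile

BLOCK_SIZE = 4096   # assumed granularity used to encode the index as an offset

def write_via_index(fd, cache_unit_index, data):
    # the cache unit index occupies the place of a logical byte offset
    return os.pwrite(fd, data, cache_unit_index * BLOCK_SIZE)

def read_via_index(fd, cache_unit_index, length):
    return os.pread(fd, length, cache_unit_index * BLOCK_SIZE)

tmp = tempfile.TemporaryFile()
fd = tmp.fileno()
write_via_index(fd, cache_unit_index=2, data=b"payload")
result = read_via_index(fd, 2, 7)
assert result == b"payload"
```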
According to one of the first to thirteenth storage devices of the first aspect of the present application, there is provided the fourteenth storage device of the first aspect of the present application, wherein the first-type address associated with the IO command is written into the cache unit corresponding to the cache unit index used for generating the IO command.
According to one of the first to fourteenth storage devices of the first aspect of the present application, there is provided the fifteenth storage device of the first aspect of the present application, wherein the control component generates a completion command in response to completion of IO command processing; the completion command includes the cache unit index indicated by the corresponding IO command, so that the host releases the cache unit according to the cache unit index included in the completion command.
According to one of the first to fourteenth storage devices of the first aspect of the present application, there is provided the sixteenth storage device of the first aspect of the present application, wherein in response to a failure in processing an IO command, the control component writes error information corresponding to the IO command into the cache unit indicated by the IO command, so that the host obtains the error information from the cache unit after knowing a processing result of the failure.
According to a fifteenth storage device of the first aspect of the present application, there is provided the seventeenth storage device of the first aspect of the present application, wherein the completion command is a command compliant with NVMe specification.
According to one of the first to seventeenth storage devices of the first aspect of the present application, there is provided the eighteenth storage device of the first aspect of the present application, wherein the cache unit further stores metadata associated with the IO command.
According to the eighteenth storage device of the first aspect of the present application, there is provided the nineteenth storage device of the first aspect of the present application, wherein the metadata associated with the IO command records the second-type address of the data accessed by the IO command and/or check information for the data.
According to one of the first to nineteenth storage devices of the first aspect of the present application, there is provided the twentieth storage device of the first aspect of the present application, wherein the cache unit further stores extension information associated with the IO command; the control component obtains the extension information from the cache unit according to the cache unit index in the IO command, and learns the extended meaning of the IO command from the extension information.
According to the twentieth storage device of the first aspect of the present application, there is provided the twenty-first storage device of the first aspect of the present application, wherein the extension information represents an erase command.
According to the twentieth storage device of the first aspect of the present application, there is provided the twenty-second storage device of the first aspect of the present application, wherein the extension information represents the timing at which the completion command corresponding to the IO command is provided to the host, and the control component provides the completion command to the host according to the indicated timing.
According to a twenty-second storage device of the first aspect of the present application, there is provided the twenty-third storage device of the first aspect of the present application, wherein the timing of providing the completion command to the host is to provide the completion command corresponding to the write command to the host immediately after the data to be written by the write command is moved to the cache unit, or to provide the completion command corresponding to the write command to the host after the data to be written by the write command is stored in the NVM.
According to a second aspect of the present application, there is provided a first method of accessing a storage device according to the second aspect of the present application, comprising the steps of: adding a first-type address to a cache unit of a memory space of the storage device, wherein the memory space is a host-accessible memory space provided by the storage device; replacing the second-type address that the IO command would indicate with the cache unit index, to generate the IO command; and sending the IO command to the storage device.
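The three steps of this method can be sketched from the host side as follows. The command layout, names, and unit size are hypothetical, chosen only for the sketch:

```python
# Host-side sketch of the three claimed steps:
# (1) stage the first-type address in a CMB cache unit, (2) put the cache unit
# index where the second-type address would go, (3) submit the IO command.
CACHE_UNIT_SIZE = 64
cmb = bytearray(8 * CACHE_UNIT_SIZE)      # device-provided, host-accessible memory
submission_queue = []                     # stands in for an NVMe submission queue

def build_and_send(cache_unit_index, physical_addr, length):
    off = cache_unit_index * CACHE_UNIT_SIZE
    cmb[off:off + 8] = physical_addr.to_bytes(8, "little")    # step 1
    io_cmd = {"addr": cache_unit_index, "length": length}     # step 2: index, not LBA
    submission_queue.append(io_cmd)                           # step 3
    return io_cmd

cmd = build_and_send(cache_unit_index=1, physical_addr=0x42000, length=4096)
assert cmd["addr"] == 1 and submission_queue[-1] is cmd
```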
According to the first method of accessing a storage device of the second aspect of the present application, there is provided the second method of accessing a storage device of the second aspect of the present application, wherein the first-type addresses are physical addresses indexing the NVM of the storage device, and the second-type addresses are logical addresses indexing the storage space of the storage device.
According to the first or second method of accessing a storage device of the second aspect of the present application, there is provided the third method of accessing a storage device of the second aspect of the present application, wherein the first-type address is a physical address complying with the open-channel solid-state storage device specification.
According to one of the first to third methods of accessing a storage device of the second aspect of the present application, there is provided the fourth method of accessing a storage device of the second aspect of the present application, wherein a storage device driver is operated using a pread()/pwrite() system call to generate the IO command.
According to the fourth method of accessing a storage device of the second aspect of the present application, there is provided the fifth method of accessing a storage device of the second aspect of the present application, wherein, when the pread()/pwrite() system call is used, the address parameter "__offset" is set to the cache unit index, and the storage device driver generates the IO command using the address parameter "__offset".
According to one of the first to fifth methods of accessing a storage device of the second aspect of the present application, there is provided the sixth method of accessing a storage device of the second aspect of the present application, wherein the first-type address is written to the cache unit of the storage device in the same manner as accessing host memory.
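Writing the first-type address "in the same manner as accessing host memory" can be sketched with an anonymous memory mapping standing in for the CMB that real hardware would expose through a PCIe BAR. The mapping, unit size, and offsets are assumptions for the sketch:

```python
# Writing the first-type address into a CMB cache unit as a plain memory store.
# An anonymous mmap stands in for the PCIe BAR mapping of the real CMB.
import mmap

CACHE_UNIT_SIZE = 64
cmb_map = mmap.mmap(-1, 16 * CACHE_UNIT_SIZE)   # placeholder for the mapped BAR

def store_physical_addr(index, ppa):
    cmb_map.seek(index * CACHE_UNIT_SIZE)
    cmb_map.write(ppa.to_bytes(8, "little"))    # memory store, not an IO command

store_physical_addr(5, 0xABCD)
cmb_map.seek(5 * CACHE_UNIT_SIZE)
assert int.from_bytes(cmb_map.read(8), "little") == 0xABCD
```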
According to one of the first to sixth methods of accessing a storage device of the second aspect of the present application, there is provided the seventh method of accessing a storage device of the second aspect of the present application, wherein the first-type address is generated from the logical address of the accessed storage space, or a URI or a key is converted into the first-type address, so as to store the first-type address in a cache unit of the storage device.
According to one of the first to seventh methods for accessing a storage device of the second aspect of the present application, there is provided an eighth method for accessing a storage device of the second aspect of the present application, wherein the FTL table is queried according to the logical address of the accessed storage space to obtain the first type address.
According to one of the first to eighth methods for accessing a storage device of the second aspect of the present application, there is provided a ninth method for accessing a storage device of the second aspect of the present application, wherein in response to the IO command processing being completed, a cache unit indicated by the IO command is released according to a completion command generated by the storage device for the IO command.
According to a ninth method for accessing a storage device of the second aspect of the present application, there is provided the tenth method for accessing a storage device of the second aspect of the present application, wherein the cache unit indicated by the cache unit index is released according to the cache unit index included in the completion command.
According to the ninth method of accessing a storage device of the second aspect of the present application, there is provided the eleventh method of accessing a storage device of the second aspect of the present application, wherein, in response to obtaining the completion command, the cache unit is released according to the correspondence between the IO command sent to the storage device and the cache unit index used for generating the IO command.
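Such correspondence-based release can be sketched with a simple free list of cache unit indices. The names and pool size are hypothetical:

```python
# Sketch of correspondence-based cache unit release: the host remembers which
# cache unit index each in-flight IO command used, and frees that unit when
# the command's completion is obtained.
free_units = list(range(8))        # indices of currently available CMB cache units
in_flight = {}                     # command id -> cache unit index

def submit(cmd_id):
    idx = free_units.pop()         # claim a cache unit for this command
    in_flight[cmd_id] = idx
    return idx

def on_completion(cmd_id):
    free_units.append(in_flight.pop(cmd_id))   # release the unit for reuse

idx = submit(cmd_id=42)
assert idx not in free_units
on_completion(42)
assert idx in free_units and 42 not in in_flight
```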
According to one of the first to eighth methods for accessing a storage device of the second aspect of the present application, there is provided a twelfth method for accessing a storage device of the second aspect of the present application, wherein in response to an IO command processing failure, error information corresponding to the IO command is acquired from a cache unit of the storage device.
According to one of the first to twelfth methods of accessing a storage device of the second aspect of the present application, there is provided the thirteenth method of accessing a storage device of the second aspect of the present application, wherein, in response to adding the first-type address to the cache unit, metadata associated with the IO command is also added to the cache unit.
According to a thirteenth method for accessing a storage device of the second aspect of the present application, there is provided the fourteenth method for accessing a storage device of the second aspect of the present application, wherein the metadata associated with the IO command records the address of the second type of the data accessed by the IO command and/or the verification information of the data accessed by the IO command.
According to one of the first to fourteenth methods for accessing a storage device of the second aspect of the present application, a fifteenth method for accessing a storage device of the second aspect of the present application is provided, wherein in response to adding the first type address to the cache unit, extension information associated with the IO command is further added to the cache unit, so that the storage device knows the extension meaning of the IO command according to the extension information.
According to a fifteenth method of accessing a memory device of the second aspect of the present application, there is provided the sixteenth method of accessing a memory device of the second aspect of the present application, wherein the extension information is extension information representing an erase command.
According to a fifteenth method for accessing a storage device of the second aspect of the present application, there is provided the seventeenth method for accessing a storage device of the second aspect of the present application, wherein the extension information is extension information representing a timing at which the storage device provides a completion command corresponding to the IO command to the host.
According to the fifteenth method of accessing a storage device of the second aspect of the present application, there is provided the eighteenth method of accessing a storage device of the second aspect of the present application, wherein the timing at which the storage device provides the completion command to the host is: providing the completion command corresponding to the write command to the host immediately after the data to be written by the write command is moved to the cache unit, or providing the completion command corresponding to the write command to the host after the data to be written by the write command is stored in the NVM.
According to a third aspect of the present application, there is provided a first computer according to the third aspect of the present application, comprising: a processor and a storage device, the processor performing one of the methods described above.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments described in the present application; other drawings can be derived from them by those skilled in the art.
FIG. 1 is a block diagram of a prior art storage device;
FIG. 2 is a schematic diagram of a prior art solid state memory device using open channels;
FIG. 3 is a schematic diagram of operating a storage device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of operating a storage device according to yet another embodiment of the present application;
FIG. 5 is a schematic diagram of operating a storage device according to yet another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
FIG. 3 is a schematic diagram of operating a storage device according to an embodiment of the present application.
A storage device, such as a Solid State Drive (SSD), is coupled to a host. The SSD provides the host with an NVMe command interface and a CMB (Controller Memory Buffer). The NVMe command interface includes a plurality of command queues. The host provides the IO commands according to the embodiment of the application to the SSD through the NVMe command interface and the CMB.
IO commands of the NVMe protocol indicate logical addresses of the storage device, whereas IO commands that access an open channel solid state storage device indicate physical addresses complying with the open channel solid state storage device specification. Note that the version of the "Open-Channel Solid State Drives Specification" (Open Channel Solid State storage device specification 2.0) published on January 29, 2018 uses "logical block address (LBA)" to denote the address in an IO command provided to the open channel storage device, and the meaning of that address is the same as that of "logical address" in the embodiments of the present application. For clarity, in this application, unless specifically stated otherwise, "logical address" or "LBA" is consistent with a logical address of the NVMe protocol, while "physical address", "PPA" or "PBA" means an address of an IO command complying with the open channel solid state storage device specification, or an address of an addressable physical storage unit of the non-volatile storage medium of the storage device.
In contrast, the IO command according to an embodiment of the present application indicates a CMB index. The CMB index identifies a cache unit in the CMB, and the cache unit stores the physical address with which the IO command accesses the SSD. Optionally, metadata and extension information associated with the IO command are also stored in the cache unit of the CMB. The metadata associated with the IO command records, for example, the logical address of the data accessed by the IO command and/or verification information of the data. The extension information associated with the IO command records, for example, an extended opcode indicating an extended meaning of the IO command. For example, the extended opcode indicates an erase command, or indicates the timing at which the host receives the completion command of the write command: after the data corresponding to the write command is written into a cache of the storage device, or after it is written into the nonvolatile storage medium.
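The contents of one CMB cache unit as described above can be pictured as a small record. The sketch below is only an illustration: the field names, widths, and opcode encodings are assumptions of this illustration, not part of the application or of any specification.

```c
#include <stdint.h>

/* Hypothetical layout of one CMB cache unit: the physical address the
 * IO command will access, optional metadata (logical address and
 * verification information), and optional extension information
 * (an extended opcode). All names and widths are assumed. */
enum ext_opcode {
    EXT_NONE  = 0,   /* IO command keeps its formal NVMe meaning */
    EXT_ERASE = 1,   /* command is to be treated as an erase     */
};

struct cmb_cache_unit {
    uint64_t phys_addr;   /* PPA/PBA per the open-channel spec       */
    uint64_t meta_lba;    /* metadata: logical address of the data   */
    uint32_t meta_crc;    /* metadata: verification information      */
    uint32_t ext_opcode;  /* extension information (enum ext_opcode) */
};

/* Host side: fill an allocated cache unit before issuing the IO command. */
static inline void fill_cache_unit(struct cmb_cache_unit *u, uint64_t ppa,
                                   uint64_t lba, uint32_t crc, uint32_t ext)
{
    u->phys_addr  = ppa;
    u->meta_lba   = lba;
    u->meta_crc   = crc;
    u->ext_opcode = ext;
}
```

In this sketch the storage device would read the record back from the CMB using the cache unit index carried by the IO command.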
Optionally, the IO command according to the embodiment of the application reuses the IO command of the NVMe protocol: the part of the NVMe IO command that indicates the logical address is replaced with the CMB index, while the other structures of the NVMe IO command are left unchanged. The IO command of the embodiment of the application can therefore be generated by the NVMe device driver simply by replacing the logical address provided to the driver with the CMB index.
The CMB is a host-accessible memory (Memory) space provided by a storage device, as defined in the NVMe protocol. Optionally, the CMB may be non-volatile. Still optionally, the CMB in a storage device according to embodiments of the present application is replaced by a host-accessible memory space provided by the storage device as defined by another protocol (e.g., PCIe).
The kernel space of the host operating system runs the NVMe device driver. The user space of the host runs the application. The application accesses the storage device in a prior art manner such as a file.
The user space further runs an IO command management unit, which generates the IO command according to the embodiment of the present application from the application program's access to, for example, a file. As an example, the IO command management unit further includes an FTL table to generate a physical address for the SSD from the address of the storage space accessed by the application program. Alternatively, the application program accesses the storage space with a Uniform Resource Identifier (URI) or a keyword, and the IO command management unit converts the URI or the keyword into the physical address for the SSD.
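The user-space FTL lookup described above can be sketched as a simple table from logical address to physical address. Real FTL implementations are far richer; the table size, sentinel value, and flat-array form below are assumptions made only to illustrate the lookup step performed by the IO command management unit.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal sketch of a user-space FTL: a flat table mapping a logical
 * address (used as index) to a physical address for the SSD. */
#define FTL_ENTRIES 1024
#define PPA_INVALID UINT64_MAX

static uint64_t ftl_table[FTL_ENTRIES];

/* Record a logical-to-physical mapping. */
static void ftl_update(uint64_t lba, uint64_t ppa)
{
    if (lba < FTL_ENTRIES)
        ftl_table[lba] = ppa;
}

/* Translate the address accessed by the application into the physical
 * address that will be written into the CMB cache unit. */
static uint64_t ftl_lookup(uint64_t lba)
{
    if (lba >= FTL_ENTRIES)
        return PPA_INVALID;
    return ftl_table[lba];
}
```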
To generate the IO command, the IO command management unit allocates an available cache unit from the CMB. The CMB appears to the host as an accessible memory space. According to the embodiment of the application, the IO command management unit is responsible for allocation, use and release of the cache unit of the CMB.
The IO command management unit fills the physical address generated for the IO command into the cache unit of the allocated CMB (whose index is the CMB ID).
The IO command management unit generates NVMe commands using the NVMe device driver in the kernel. To generate the NVMe command, the IO command management unit replaces the logical address of the storage space that would normally be provided to the NVMe device driver with the index (CMB ID) of the cache unit filled with the physical address to be accessed by the IO command.
The NVMe device driver generates an NVMe command carrying the cache unit index of the CMB and provides the NVMe command to the SSD through an NVMe command interface of the SSD.
The control component of the SSD acquires the IO command from the NVMe command interface, accesses the CMB according to the CMB cache unit index indicated by the IO command, obtains the physical address from the CMB cache unit, and accesses the nonvolatile storage medium of the SSD according to the physical address. If the IO command is a read command, the control component transfers the data read from the nonvolatile storage medium to the storage space indicated by the host address of the IO command; if the IO command is a write command, the control component moves the data of the storage space indicated by the host address of the IO command to the nonvolatile storage medium. After the IO command processing is completed, the control component of the SSD also generates a completion command indicating that the IO command processing is completed. The completion command is, for example, a completion command compliant with the NVMe specification.
In response to receiving the completion command corresponding to the IO command, the host releases the CMB cache unit used by the IO command. Optionally, the index of the CMB cache unit to be released is indicated in the completion command.
As one example, the IO command management unit operates the NVMe device driver using a system call such as pread()/pwrite() to generate NVMe commands. The parameter "__offset" of the pread()/pwrite() system call normally represents a logical address; in an embodiment according to the present application, the parameter "__offset" is instead filled with the index (CMB ID) of the CMB cache unit when calling pread()/pwrite(). In response to the system call, the NVMe device driver (or other similar storage device driver) generates the logical address field in the NVMe command from the parameter "__offset" provided by the pread()/pwrite() system call, thereby replacing the logical address in the IO command with the index of the cache unit (CMB ID).
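The argument substitution described above can be sketched with the standard pread() signature. The helper names below are hypothetical, and the call is shown only to illustrate passing the cache unit index through the offset parameter; no real device path or buffer size is implied.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdint.h>
#include <unistd.h>
#include <sys/types.h>

/* The "__offset" parameter of pread()/pwrite() normally carries the
 * byte offset that the NVMe driver turns into the command's logical
 * address field. Here the host passes the CMB cache-unit index
 * instead, so the driver emits that index in the address field. */
static off_t cmb_id_as_offset(uint32_t cmb_id)
{
    /* The index goes where a logical address would normally go. */
    return (off_t)cmb_id;
}

/* Hypothetical wrapper: issue a read whose "logical address" is in
 * fact the index of the CMB cache unit holding the physical address. */
static ssize_t issue_read_via_cmb(int fd, void *buf, size_t len,
                                  uint32_t cmb_id)
{
    return pread(fd, buf, len, cmb_id_as_offset(cmb_id));
}
```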
Still by way of example, in the prior art, in response to a pread() system call, the NVMe device driver generates a read command to send to the storage device, and in response to a pwrite() system call, the NVMe device driver generates a write command to send to the storage device. According to an embodiment of the present application, the IO command management unit adds extension information to the cache unit of the CMB, so that although the IO command sent to the storage device is generated using a system call such as pread()/pwrite(), the storage device learns what the IO command actually represents from the extension information in the CMB cache unit corresponding to the IO command. For example, a command E (whose own opcode indicates a read command) sent to the storage device is generated using the pread() system call, while the extension information in the CMB's cache unit indicates that command E is an erase command. Thus, according to embodiments of the present application, the storage device is instructed to perform an erase operation by using, for example, a pread() system call.
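The device-side decision described above — the extension information overriding the command's formal opcode — can be sketched as a small dispatch function. The enum values are assumed encodings for illustration only.

```c
/* Device-side sketch: the SSD receives what is formally a read or
 * write command, but consults the extension information from the
 * referenced CMB cache unit to learn the command's real meaning. */
enum ext_code { EXT_NONE = 0, EXT_ERASE = 1 };  /* assumed encoding */
enum op       { OP_READ, OP_WRITE, OP_ERASE };

static enum op effective_op(int formal_is_read, enum ext_code ext)
{
    /* Extension information takes precedence over the formal opcode. */
    if (ext == EXT_ERASE)
        return OP_ERASE;
    return formal_is_read ? OP_READ : OP_WRITE;
}
```

With EXT_NONE the command keeps its NVMe meaning, so ordinary reads and writes pass through unchanged.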
According to the SSD of the embodiments of the application, two interfaces are provided to the host: a first interface, such as the NVMe command interface, and a second interface, such as the CMB. For each IO command, the host issues part of the IO command through the first interface and the other part through the second interface. The SSD combines the parts received from the two interfaces to obtain a complete IO command and processes it. In one embodiment, the first interface is a legacy storage device interface, so a prior art storage device driver is used to send part of the IO command to the SSD through the first interface; the second interface is a memory interface, so the other part of the IO command is also sent to the SSD using the prior art. There is thus no need to run additional drivers in the kernel space of the host. In the SSD, the part of the IO command provided through the first interface corresponds to the other part provided through the second interface, and the SSD combines the parts from each interface to obtain and process the complete IO command.
The IO command management unit accesses the NVMe device driver and the CMB through a system call such as ioctl, through asynchronous IO (Linux Asynchronous I/O), or through a user-space library provided by the Linux operating system kernel.
FIG. 4 is a schematic diagram of operating a storage device according to yet another embodiment of the present application.
The host is coupled to the SSD and accesses the storage device address space provided by the SSD through a read command.
The read command provided by the host to the storage device indicates, for example, a physical address that conforms to the open channel solid state storage device specification. The read command provided to the IO command management unit running in the user space of the host indicates, for example, the memory space address. The memory space address is, for example, a logical address, a URI, or a physical address. The read command provided to the IO command management unit further indicates a host address for receiving data read from the SSD according to the read command.
Referring to fig. 4, the host includes storage (shown as host memory). The memory includes a data cache and, optionally, a host address cache. The host address of the read command indicates the address of the data cache. In some cases, the data cache includes a plurality of contiguous or non-contiguous memory spaces; the plurality of memory spaces of the data cache are then indicated using an address list including a plurality of entries, and this address list is stored in the host address cache. In that case the read command indicates the host address with the address of the host address cache. The SSD acquires the address list from the host address cache according to the read command, and transfers the data read out from the SSD to the plurality of storage spaces corresponding to the host address according to the address list.
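The host address list described above can be sketched as an array of (address, length) entries, loosely analogous to NVMe PRP/SGL lists. The entry layout is an assumption of this sketch, not a format defined by the application.

```c
#include <stdint.h>
#include <stddef.h>

/* One entry of the host address list kept in the host address cache:
 * a region of the (possibly non-contiguous) data cache. */
struct addr_entry {
    uint64_t host_addr;  /* start of one region of the data cache */
    uint32_t length;     /* bytes in this region                  */
};

/* Device side: total bytes the listed regions can receive, i.e. how
 * much read data can be scattered across the data cache. */
static uint64_t addr_list_capacity(const struct addr_entry *list, size_t n)
{
    uint64_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += list[i].length;
    return total;
}
```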
The SSD includes a command queue, a completion queue, and a CMB. By way of example, the command queue is an SQ (Submission Queue) defined by the NVMe protocol, the completion queue is a CQ (Completion Queue) defined by the NVMe protocol, and the CMB is the Controller Memory Buffer defined by the NVMe protocol. The SSD also includes a control section for processing IO commands and an NVM (non-volatile storage medium).
The command queue includes a plurality of entries, each entry for accommodating an IO command. The completion queue includes a plurality of entries, each entry for accommodating a completion command corresponding to an IO command. The CMB includes a plurality of cache units, each for accommodating a storage device address (physical address) corresponding to an IO command. Optionally, the cache unit is further configured to store metadata and extension information associated with the IO command. Still optionally, the cache unit is further configured to accommodate an IO command processing result to be delivered to the host through the completion command.
According to the embodiment of the application, the IO command management unit of the host allocates the cache unit of the CMB, and provides the index of the allocated cache unit of the CMB to each IO command.
If the address received by the IO command management unit for the IO command is a logical address or a URI, the IO command management unit further generates a storage device address, such as a physical address complying with the open channel solid-state storage device specification, from the logical address or the URI.
The IO command management unit writes the storage device address for the IO command into the buffer unit of the CMB allocated to the IO command (indicated by an arrow with sequence number (1) in fig. 4). Optionally, the IO command management unit further writes metadata or extension information corresponding to the IO command into a cache unit of the CMB allocated to the IO command. The metadata corresponding to the IO command is, for example, the logical address accessed by the IO command.
The CMB is presented to the host as a section of memory space, so the IO command management unit writes the storage device address into the cache unit of the CMB in the same way it accesses host memory.
The IO command management unit also writes into the command queue of the SSD a read command that indicates the index of the CMB cache unit allocated for the read command, the storage device address for the read command having already been written into that cache unit (indicated by the arrow with sequence number (2) in fig. 4). For example, the IO command management unit generates a read command conforming to the NVMe protocol through the NVMe device driver of the host, and uses the index of the CMB cache unit as the logical address of the NVMe read command for the SSD. As another example, the address received by the IO command management unit for the read command is a URI, and the IO command management unit replaces the URI of the read command added to the command queue with the index of the CMB cache unit storing the physical address corresponding to the URI. Still by way of example, the IO command management unit uses a pread() system call to write the read command to the command queue of the SSD, and when calling pread(), replaces the parameter "__offset" representing the logical address with the index of the CMB cache unit.
The control unit of the storage device acquires the read command from the command queue. In response to recognizing that the read command indicates an index of a CMB cache unit, the control component accesses the cache unit of the CMB according to the index, obtains, for example, a physical address from it, reads data from the NVM of the SSD according to the physical address, and moves the data to the data cache in the host indicated by the host address of the read command. Optionally, if it is recognized that the read command does not indicate the index of a CMB cache unit, but instead a logical address, a physical address, or a key for a KV storage device, the control unit reads data from the nonvolatile storage medium according to the logical address, the physical address, or the key indicated by the read command.
The control section also generates a completion command for the read command and writes it to the completion queue in response to completion of processing of the read command acquired from the command queue. By way of example, the index of the CMB cache unit indicated by the read command is also written into the completion command. The completion command also indicates the processing result of the read command, such as success, failure, and/or error type. Alternatively or additionally, in response to a failure in processing the read command, the control component writes error information corresponding to the read command into the CMB cache unit indicated by the read command, so that the host has an opportunity to acquire the error information and decide how to perform error processing. If the read command is processed successfully, optionally, no additional information is written into the CMB cache unit indicated by the read command.
The IO command management unit of the host acquires the completion command indicating the IO command processing result from the completion queue through the NVMe driver (indicated by the arrow with sequence number (3) in fig. 4), and obtains the processing result of the read command from the completion command. In one example, the read command is processed successfully, and the IO command management unit releases the CMB cache unit indicated by the completion command. Optionally, the index of the CMB cache unit is not written into the completion command; instead, the IO command management unit records the correspondence between each issued IO command and the index of its CMB cache unit, and, in response to receiving the completion command, releases the CMB cache unit corresponding to the command ID of the completion command.
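The alternative release scheme just described — recording which CMB cache unit each command ID used, then freeing it on completion — can be sketched as follows. Table size, sentinel value, and the modulo indexing are assumptions of this sketch.

```c
#include <stdint.h>

/* Host-side bookkeeping: which CMB cache unit each in-flight command
 * ID is using, so the unit can be released on completion even when the
 * completion command does not carry the cache-unit index. */
#define MAX_CMDS 256
#define NO_UNIT  UINT32_MAX

static uint32_t cmd_to_unit[MAX_CMDS];

/* Record the mapping when the IO command is issued. */
static void note_issue(uint16_t cmd_id, uint32_t cmb_unit)
{
    cmd_to_unit[cmd_id % MAX_CMDS] = cmb_unit;
}

/* On completion: return the cache unit to release and clear the slot. */
static uint32_t on_completion(uint16_t cmd_id)
{
    uint32_t unit = cmd_to_unit[cmd_id % MAX_CMDS];
    cmd_to_unit[cmd_id % MAX_CMDS] = NO_UNIT;
    return unit;
}
```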
In yet another example, processing of the read command fails; the IO command management unit accesses the CMB cache unit according to the CMB cache unit index acquired from the completion command, reads out the error information of the read command from the CMB cache unit (indicated by the arrow with sequence number (4) in fig. 4), and releases the CMB cache unit corresponding to the completion command.
FIG. 5 is a schematic diagram of operating a storage device according to yet another embodiment of the present application.
The write command provided by the host to the storage device indicates, for example, a physical address that conforms to the open channel solid state storage device specification. The write command provided to the IO command management unit running in the user space of the host indicates the memory space address. The memory space address is, for example, a logical address, a URI, or a physical address. The write command provided to the IO command management unit also indicates a host address for storing data to be written to the SSD.
The write command indicates the host address with, for example, the address of the host address cache. The SSD acquires the address list from the host address cache according to the write command, moves the data of the plurality of storage spaces corresponding to the host address to the SSD according to the address list, and writes the data into the nonvolatile storage medium indicated by the physical address. Optionally, one or more of the storage spaces corresponding to the host address are provided by the CMB, and the host has already moved part or all of the data to be written by the write command into the storage space provided by the CMB before adding the write command to the command queue.
The SSD includes a command queue, a completion queue, and a CMB. The CMB includes a plurality of cache units, each for accommodating a storage device address (physical address) corresponding to an IO command. Optionally, the cache unit is further configured to store metadata and extension information associated with the IO command. Still optionally, the cache unit is further configured to accommodate an IO command processing result to be delivered to the host through the completion command.
The IO command management unit of the host allocates the cache unit of the CMB and provides the index of the allocated cache unit of the CMB to each IO command.
The IO command management unit writes the storage device address for the write command into the buffer unit of the CMB allocated for the write command (indicated by an arrow with sequence number (1) in fig. 5). Optionally, the IO command management unit further writes the metadata or the extension information corresponding to the write command into the cache unit of the CMB allocated to the write command. The metadata corresponding to the write command is, for example, a logical address accessed by the write command or check information corresponding to data to be written by the write command.
The IO command management unit also writes into the command queue of the SSD a write command that indicates the index of the CMB cache unit allocated for the write command, the physical address for the write command having already been written into that cache unit (indicated by the arrow with sequence number (2) in fig. 5). Still by way of example, the IO command management unit uses a pwrite() system call to write the write command to the command queue of the SSD, and when calling pwrite(), replaces the parameter "__offset" representing the logical address with the index of the CMB cache unit.
The control section of the storage device acquires the write command from the command queue and, according to the host address of the write command, moves the data to be written to the SSD from the data cache of the host to the SSD. Alternatively, if the data to be written to the SSD by the write command is already located in the CMB, the operation of moving the data from the data cache may be omitted.
In response to recognizing that the write command indicates an index of the cache unit of the CMB, the control section further accesses the cache unit of the CMB according to the index, acquires, for example, a physical address therefrom, and writes, according to the physical address, data to be written to the SSD by the write command to the nonvolatile storage medium indicated by the physical address.
Optionally, if it is recognized that the write command does not indicate the index of the cache unit of the CMB, but indicates a logical address, a physical address, or a key for the KV storage device, the control component writes the data to be written in the SSD by the write command into the nonvolatile storage medium of the SSD according to the logical address, the physical address, or the key for the KV storage device indicated by the write command.
The control section also generates a completion command for the write command in response to completion of processing of the write command acquired from the command queue, and writes it to the completion queue. By way of example, the index of the CMB cache unit indicated by the write command is also written into the completion command. The completion command also indicates the processing result of the write command.
The IO command management unit of the host acquires the completion command indicating the IO command processing result from the completion queue through the NVMe driver (indicated by the arrow with sequence number (3) in fig. 5), and obtains the processing result of the write command from the completion command. In one example, the write command is processed successfully, and the IO command management unit releases the CMB cache unit indicated by the completion command.
In yet another example, processing of the write command fails; the IO command management unit accesses the CMB cache unit according to the CMB cache unit index acquired from the completion command, reads out the error information of the write command from the CMB cache unit (indicated by the arrow with sequence number (4) in fig. 5), and releases the CMB cache unit corresponding to the completion command.
In still another example, the IO command management unit further writes extension information to the CMB cache unit to indicate whether the SSD provides the host with a completion command corresponding to the write command immediately after the SSD moves the data to be written by the write command to the SSD, or provides the host with the completion command corresponding to the write command after storing the data to be written by the write command to the nonvolatile storage medium of the SSD. In response, after the SSD acquires the write command, the SSD accesses the CMB cache unit according to the cache unit index of the CMB indicated by the write command, and fetches the extension information from the CMB cache unit to determine a timing of providing a completion command corresponding to the write command to the host.
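The completion-timing choice carried in the extension information, as described above, amounts to a two-way flag the device consults before posting the completion command. The flag values and function below are illustrative assumptions.

```c
/* Timing of the completion command for a write, selected by extension
 * information in the CMB cache unit: report as soon as the data
 * reaches the device cache, or only once it is stored in the NVM. */
enum cmpl_timing { CMPL_ON_CACHE = 0, CMPL_ON_NVM = 1 };  /* assumed */

/* Device side: may the completion command be posted now, given where
 * the write data currently resides? */
static int should_complete_now(enum cmpl_timing t, int data_in_cache,
                               int data_in_nvm)
{
    if (t == CMPL_ON_CACHE)
        return data_in_cache;
    return data_in_nvm;
}
```

CMPL_ON_CACHE trades durability for lower write latency; CMPL_ON_NVM only acknowledges once the data is persistent.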
According to one embodiment of the present application, an application running in the host wishes to issue an erase command to the SSD through the IO command management unit. The IO command management unit allocates a CMB cache unit and writes into the allocated cache unit a physical address and extension information representing an erase command. The IO command management unit then issues a so-called "read command" or "write command" to the SSD through a pread() or pwrite() system call. The parameter "__offset" of the "read command" or "write command" indicates the cache unit index of the allocated CMB. On the other side, although the IO command acquired by the SSD is a "read command" or a "write command" in form, the SSD recognizes that the command indicates an index of a CMB cache unit, acquires the extension information and the physical address from the corresponding CMB cache unit, and, based on the acquired extension information, recognizes that the command indicates an erase operation and performs the erase operation on the specified physical address.
Thus, the IO command management unit issues commands other than read commands or write commands to the SSD through system calls such as pread() and/or pwrite(). In this way, the host is also able to issue custom private commands to the SSD. The meaning of the private command, and optionally the parameters required for executing it, are indicated by the extension information stored in the CMB cache unit; the SSD recognizes the meaning of the private command from the extension information and processes it accordingly.
Although the present application has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the application, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. An open channel memory device comprising: a command interface, a control unit and an NVM; the storage device provides a storage space and a memory space for a host;
the cache unit of the memory space stores a first type of address; the control component receives an IO command indicating a cache unit index through a command interface, acquires a first-class address from a cache unit according to the cache unit index in the IO command, and accesses the NVM according to the first-class address, wherein the cache unit index indicated by the IO command is a substitute for a second-class address of the IO command, the first-class address is a physical address of the storage device, and the second-class address is a logical address.
2. The open channel memory device of claim 1, wherein the control component accesses the NVM according to the first type address if the IO command indicates the first type address for the memory device.
3. The open channel memory device according to claim 1 or 2, wherein if the IO command indicates a key for the KV memory device for the memory device, the control part converts the key into a first type address according to a Flash Translation Layer (FTL), and accesses the NVM according to the first type address.
4. The open channel memory device of claim 1 or 2, wherein the IO command is generated by operating a storage device driver using a pread()/pwrite() system call.
5. The open channel memory device according to claim 4, wherein, when using the pread()/pwrite() system call, the address parameter "__offset" is set to the cache unit index, and the storage device driver generates the second type address of the IO command from the address parameter "__offset".
6. The open channel storage device of claim 1 or 2, wherein the cache unit further stores metadata associated with the IO command.
7. The open channel storage device according to claim 1 or 2, wherein the cache unit further stores extension information associated with the IO command, and the control unit obtains the extension information from the cache unit according to a cache unit index in the IO command and knows an extension meaning of the IO command according to the extension information.
8. A method of accessing an open channel storage device, comprising the steps of:
adding a first type of address to a cache unit of a memory space of an open channel storage device, wherein the first type of address is a physical address of the storage device, and the memory space is a memory space which can be accessed by a host and is provided by the storage device;
replacing a second type of address indicated by the IO command by the index of the cache unit to generate the IO command, wherein the second type of address is a logic address;
and sending the IO command to the storage device so that the storage device obtains the first type address according to the index of the cache unit and accesses the NVM according to the first type address.
9. A computer, comprising: a processor and a storage device, the processor performing the method of claim 8.
CN201910385222.2A 2019-05-09 2019-05-09 Providing open channel storage devices using CMBs Active CN111913892B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910385222.2A CN111913892B (en) 2019-05-09 2019-05-09 Providing open channel storage devices using CMBs
CN202111356021.3A CN114064522A (en) 2019-05-09 2019-05-09 Computer with a memory card
PCT/CN2020/093100 WO2020224662A1 (en) 2019-05-09 2020-05-29 Storage device that provides open channel by means of cmb

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910385222.2A CN111913892B (en) 2019-05-09 2019-05-09 Providing open channel storage devices using CMBs

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111356021.3A Division CN114064522A (en) 2019-05-09 2019-05-09 Computer with a memory card

Publications (2)

Publication Number Publication Date
CN111913892A CN111913892A (en) 2020-11-10
CN111913892B true CN111913892B (en) 2021-12-07

Family

ID=73050530

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910385222.2A Active CN111913892B (en) 2019-05-09 2019-05-09 Providing open channel storage devices using CMBs
CN202111356021.3A Pending CN114064522A (en) 2019-05-09 2019-05-09 Computer with a memory card

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111356021.3A Pending CN114064522A (en) 2019-05-09 2019-05-09 Computer with a memory card

Country Status (2)

Country Link
CN (2) CN111913892B (en)
WO (1) WO2020224662A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112486870A (en) * 2020-11-16 2021-03-12 深圳宏芯宇电子股份有限公司 Computer system and computer system control method
CN113722248B (en) * 2021-07-28 2023-08-22 湖南国科微电子股份有限公司 Command processing method and command processing device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107391391A (en) * 2017-07-19 2017-11-24 深圳大普微电子科技有限公司 Method, system, and solid-state drive for implementing data copy in the FTL of a solid-state drive
CN109213433A (en) * 2017-07-07 2019-01-15 华为技术有限公司 Method and apparatus for writing data to a flash memory device
CN109313687A (en) * 2016-01-24 2019-02-05 Syed Kamran Hassan Computer security based on artificial intelligence
CN109726138A (en) * 2017-10-31 2019-05-07 慧荣科技股份有限公司 Data storage device and non-volatile memory operating method

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US7305524B2 (en) * 2004-10-08 2007-12-04 International Business Machines Corporation Snoop filter directory mechanism in coherency shared memory system
CN101266538B (en) * 2008-05-06 2010-09-08 普天信息技术研究院有限公司 Intelligent memory card interface access control method
CN102436423B (en) * 2011-10-13 2014-09-03 浙江大学 Controller and method for protecting off-chip NorFlash core data
CN103810113B (en) * 2014-01-28 2016-07-06 华中科技大学 Hybrid memory system of non-volatile memory and dynamic random access memory
US9813504B2 (en) * 2015-08-03 2017-11-07 Citrix Systems, Inc. Virtualizing device management services on a multi-session platform
CN106919339B (en) * 2015-12-25 2020-04-14 华为技术有限公司 Hard disk array and method for processing operation request by hard disk array
US10481799B2 (en) * 2016-03-25 2019-11-19 Samsung Electronics Co., Ltd. Data storage device and method including receiving an external multi-access command and generating first and second access commands for first and second nonvolatile memories
US9940980B2 (en) * 2016-06-30 2018-04-10 Futurewei Technologies, Inc. Hybrid LPDDR4-DRAM with cached NVM and flash-nand in multi-chip packages for mobile devices
CN107783917B (en) * 2016-08-26 2024-05-17 北京忆芯科技有限公司 Method and device for generating NVM chip interface command
CN107818052B (en) * 2016-09-13 2020-07-21 华为技术有限公司 Memory access method and device
CN108614671B (en) * 2016-12-12 2023-02-28 北京忆恒创源科技股份有限公司 Key-data access method based on namespace and solid-state storage device
US10073640B1 (en) * 2017-03-10 2018-09-11 Toshiba Memory Corporation Large scale implementation of a plurality of open channel solid state drives

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN109313687A (en) * 2016-01-24 2019-02-05 Syed Kamran Hassan Computer security based on artificial intelligence
CN109213433A (en) * 2017-07-07 2019-01-15 华为技术有限公司 Method and apparatus for writing data to a flash memory device
CN107391391A (en) * 2017-07-19 2017-11-24 深圳大普微电子科技有限公司 Method, system, and solid-state drive for implementing data copy in the FTL of a solid-state drive
CN109726138A (en) * 2017-10-31 2019-05-07 慧荣科技股份有限公司 Data storage device and non-volatile memory operating method

Non-Patent Citations (1)

Title
Reconstruction of flash storage and *** construction techniques; Lu Youyou et al.; Journal of Computer Research and Development (《计算机研究与发展》); 2018-12-31; Vol. 56, No. 1; pp. 22-34 *

Also Published As

Publication number Publication date
CN114064522A (en) 2022-02-18
WO2020224662A1 (en) 2020-11-12
CN111913892A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN106354615B (en) Solid state disk log generation method and device
KR101301840B1 (en) Method of data processing for non-volatile memory
US11675698B2 (en) Apparatus and method and computer program product for handling flash physical-resource sets
US20200356491A1 (en) Data storage device and method for loading logical-to-physical mapping table thereof
KR20200072639A (en) Storage device and operating method thereof
US11449270B2 (en) Address translation method and system for KV storage device
CN111913892B (en) Providing open channel storage devices using CMBs
CN111324414B (en) NVM storage media emulator
CN108628762B (en) Solid-state storage device and IO command processing method thereof
CN110515861B (en) Memory device for processing flash command and method thereof
CN110865945B (en) Extended address space for memory devices
CN112148626A (en) Storage method and storage device for compressed data
CN110968527A (en) FTL provided caching
CN111290974A (en) Cache elimination method for storage device and storage device
WO2018041258A1 (en) Method for processing de-allocation command, and storage device
CN110532199B (en) Pre-reading method and memory controller thereof
CN112578993B (en) Method and memory device for processing programming errors of multi-plane NVM
KR20210142863A (en) Apparatus and method for increasing operation efficiency in a memory system
CN112579328A (en) Method for processing programming error and storage device
CN113051189A (en) Method and storage device for providing different data protection levels for multiple namespaces
CN111258491B (en) Method and apparatus for reducing read command processing delay
US20240168876A1 (en) Solving submission queue entry overflow using metadata or data pointers
CN110321057B (en) Storage device with cache to enhance IO performance certainty
CN118193053A (en) NVMe command processing method and related products thereof
CN117311594A (en) Method for managing NVM chip and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant