CN117836751A - Enhancing memory performance using memory access command queues in a memory device


Info

Publication number
CN117836751A
Authority
CN
China
Prior art keywords
memory access
queue
memory
access command
die
Prior art date
Legal status
Pending
Application number
CN202280057426.3A
Other languages
Chinese (zh)
Inventor
S. N. Sankaranarayanan
Current Assignee
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date
Filing date
Publication date
Application filed by Micron Technology Inc
Publication of CN117836751A

Classifications

    • G06F 3/0659: Command handling arrangements, e.g., command buffers, queues, command scheduling
    • G06F 3/0679: Non-volatile semiconductor memory device, e.g., flash memory, one-time programmable memory [OTP]
    • G06F 3/061: Improving I/O performance
    • G06F 3/0622: Securing storage systems in relation to access
    (All four codes fall under G06F 3/06: digital input from, or digital output to, record carriers, e.g., RAID; interfaces specially adapted for storage systems.)

Abstract

Systems and methods are disclosed that include a processing device operably coupled to a memory device. The processing device performs operations comprising: receiving a memory access command; determining a physical address associated with the memory access command; determining a plane of a die on the memory device referenced by the physical address; inserting the memory access command into a queue associated with the plane; and processing the memory access command from the queue.

Description

Enhancing memory performance using memory access command queues in a memory device
Technical Field
Embodiments of the present disclosure relate generally to memory subsystems and, more particularly, to improving memory performance using memory access command queues in memory devices.
Background
The memory subsystem may include one or more memory devices that store data. A memory device may be, for example, a non-volatile memory device or a volatile memory device. In general, a host system may utilize a memory subsystem to store data at the memory devices and to retrieve data from the memory devices.
Drawings
The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.
FIG. 1 illustrates an example computing system including a memory subsystem according to some embodiments of the disclosure.
FIG. 2 is a flowchart of an example method of utilizing a memory access command queue, according to some embodiments of the present disclosure.
FIG. 3 illustrates a die structured with four plane queues, according to some embodiments of the present disclosure.
FIG. 4 illustrates an example computing system including a memory access command manager and a memory device, according to some embodiments of the disclosure.
FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.
Detailed Description
Aspects of the present disclosure relate to improving memory performance using a memory access command queue in a memory device. The memory subsystem may be a storage device, a memory module, or a combination of a storage device and a memory module. Examples of memory devices and memory modules are described below in connection with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more components, such as a memory device that stores data. The host system may provide data for storage at the memory subsystem and may request retrieval of data from the memory subsystem.
The memory subsystem may include a high density non-volatile memory device, where data needs to be retained when no power is supplied to the memory device. One example of a non-volatile memory device is a NAND memory device. Other examples of non-volatile memory devices are described below in connection with FIG. 1. A non-volatile memory device is a package of one or more dies. Each die may be composed of one or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane is composed of a set of physical blocks. Each block is made up of a set of pages. Each page is made up of a set of memory cells ("cells"). A cell is an electronic circuit that stores information. Depending on the cell type, a cell may store one or more bits of binary information and have various logic states related to the number of bits stored. The logic states may be represented by binary values, such as "0" and "1", or a combination of such values.
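By way of illustration only (this sketch is not part of the disclosed embodiments), the die/plane/block/page hierarchy described above can be modeled in C; the geometry constants and type names below are hypothetical:

```c
#include <stdint.h>

/* Hypothetical geometry, chosen only to make the nesting concrete. */
#define PAGE_SIZE        4096
#define PAGES_PER_BLOCK  128
#define BLOCKS_PER_PLANE 1024
#define PLANES_PER_DIE   4
#define DIES_PER_PACKAGE 16

struct page   { uint8_t cells[PAGE_SIZE]; };              /* set of memory cells */
struct block  { struct page pages[PAGES_PER_BLOCK]; };    /* set of pages */
struct plane  { struct block blocks[BLOCKS_PER_PLANE]; }; /* set of physical blocks */
struct die    { struct plane planes[PLANES_PER_DIE]; };   /* one or more planes */
struct nv_pkg { struct die dies[DIES_PER_PACKAGE]; };     /* package of one or more dies */
```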
The memory device may include a plurality of memory cells arranged in a two-dimensional grid. The memory cells are etched onto a silicon wafer in an array of columns (hereinafter also referred to as bit lines) and rows (hereinafter also referred to as word lines). A word line may refer to one or more rows of memory cells of a memory device that are used with one or more bit lines to generate an address for each of the memory cells. The intersection of a bit line and a word line constitutes the address of a memory cell. Hereinafter, a block refers to a unit of the memory device used to store data, and may include a group of memory cells, a group of word lines, a word line, or individual memory cells. One or more blocks may be grouped together to form planes of the memory device to allow concurrent operations on each plane. The memory device may include circuitry to perform concurrent memory page accesses for two or more memory planes. For example, a memory device may include respective access line driver circuitry and power circuitry for each plane of the memory device to facilitate concurrent access to pages of two or more memory planes, including different page types.
Memory access operations may be performed by the memory subsystem. A memory access operation may be a host initiated operation or a memory subsystem controller initiated operation. For example, the host system may initiate a memory access operation (e.g., a write operation, a read operation, an erase operation, etc.) on the memory subsystem. The host system may send memory access commands (e.g., write commands, read commands) to the memory subsystem to, for example, store data on and read data from the memory devices of the memory subsystem. The data to be read or written, as specified by a host request, is hereinafter referred to as "host data". A host request may contain logical address information (e.g., Logical Block Address (LBA), namespace) for the host data, which is the location that the host system associates with the host data. Logical address information (e.g., LBA, namespace) may be part of the metadata of the host data. Metadata may also contain error handling data (e.g., ECC codewords, parity codes), a data version (e.g., used to distinguish the age of written data), a valid bitmap (indicating which LBAs or logical transfer units contain valid data), and so forth. Memory access operations initiated by the memory subsystem controller may involve maintenance operations such as garbage collection, wear leveling, bad block management, block refresh operations, and the like.
In some memory subsystems, a local media controller of a memory device communicates with a die or set of dies via a single communication channel, such as an Open NAND Flash Interface (ONFI) channel. In particular, the local media controller may issue read commands, write commands, and erase commands to the set of dies over the communication channel. In the case of a read command, the local media controller may receive the requested data from the die over the communication channel. Typically, a die can only cache one single-plane memory access operation at a time (or two operations of a multi-plane memory access operation). Thus, for a plane-level read command, the die cannot receive and queue additional memory access operations until the pending memory access operation is processed. For example, in response to receiving a read command from a local media controller, a die of a memory device may queue the read command, process the read command by retrieving the data required by the read command, and then attempt to send the retrieved data to the local media controller. However, if the communication channel is busy (e.g., the local media controller is communicating with other dies of the set), the die cannot clock out the retrieved data until the communication channel is idle. As such, the die remains in an idle state while waiting, which results in reduced efficiency of the memory device.
Aspects of the present disclosure address the above and other deficiencies by implementing a command queue in a memory device. Each die of the memory device may include two or more planes, where each plane includes a set of physical blocks. Each die may be structured to contain a per-plane command queue maintained by the local media controller. The plane command queue may be used to track and store multiple memory access command entries (e.g., one or more read commands, write commands, or any combination thereof) for each plane of the die. The die may process the memory access command entries in the order received (e.g., first in first out ("FIFO")), based on the priority of the entries, or based on other processing schemes. For example, some memory access operations may be prioritized over other memory access operations. In some embodiments, the die may distinguish between high priority memory access commands and low priority memory access commands. For example, memory access commands with higher priority may be processed out of the plane command queue before memory access commands with lower priority, even though the lower priority memory access commands were received by the die before the higher priority memory access commands. In some embodiments, memory access commands issued by the host system may be characterized as high priority memory access commands, and memory access commands issued by the memory subsystem controller for managing media or background activity may be characterized as low priority memory access commands.
In some embodiments, memory access command entries may be inserted into a plane command queue in parallel with memory access commands from the plane command queue being processed, without one process affecting the other. For example, a write command from a plane command queue may be processed while a received read command is inserted into the plane command queue.
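A minimal sketch of such a per-plane queue follows; the ring-buffer layout, the two-entry depth, the command fields, and the priority-aware pop are illustrative assumptions rather than the disclosed implementation. Because insertion advances only the tail while processing advances only the head, a new entry can be recorded while a pending command is being executed:

```c
#include <stdbool.h>
#include <stdint.h>

#define PLANE_QUEUE_DEPTH 2            /* e.g., two entries per plane queue */

enum cmd_op       { OP_READ, OP_WRITE, OP_ERASE };
enum cmd_priority { PRIO_LOW, PRIO_HIGH };

struct mem_cmd {
    uint32_t          id;              /* unique identifier appended by the controller */
    uint64_t          phys_addr;
    enum cmd_op       op;
    enum cmd_priority prio;
};

struct plane_queue {
    struct mem_cmd entries[PLANE_QUEUE_DEPTH];
    uint8_t head;                      /* next entry to process */
    uint8_t tail;                      /* next free slot */
    uint8_t count;
};

/* Insertion touches only the tail, so it can run while the entry at the
 * head is being processed. Fails when no empty entry is available. */
static bool plane_queue_insert(struct plane_queue *q, struct mem_cmd cmd)
{
    if (q->count == PLANE_QUEUE_DEPTH)
        return false;
    q->entries[q->tail] = cmd;
    q->tail = (q->tail + 1) % PLANE_QUEUE_DEPTH;
    q->count++;
    return true;
}

/* Pop the next command: FIFO order, except that a high priority entry
 * is served before an older low priority one. */
static bool plane_queue_pop(struct plane_queue *q, struct mem_cmd *out)
{
    if (q->count == 0)
        return false;
    uint8_t pick = q->head;
    for (uint8_t i = 0; i < q->count; i++) {
        uint8_t idx = (q->head + i) % PLANE_QUEUE_DEPTH;
        if (q->entries[idx].prio == PRIO_HIGH) { pick = idx; break; }
    }
    if (pick != q->head) {             /* move the chosen entry to the head */
        struct mem_cmd tmp = q->entries[q->head];
        q->entries[q->head] = q->entries[pick];
        q->entries[pick] = tmp;
    }
    *out = q->entries[q->head];
    q->head = (q->head + 1) % PLANE_QUEUE_DEPTH;
    q->count--;
    return true;
}
```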
Advantages of the present disclosure include, but are not limited to, improved performance of the memory device and/or improved quality of service of the host system. For example, multiple memory access operations may be queued on each die of a memory device. This allows each die to handle multiple memory access operations while the communication channel is busy. Thus, embodiments of the present disclosure reduce the amount of time that a die will be idle when a communication channel is busy, which reduces latency and improves performance of the memory device.
FIG. 1 illustrates an example computing system 100 including a memory subsystem 110, according to some embodiments of the disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination thereof.
Memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include Solid State Drives (SSDs), flash drives, Universal Serial Bus (USB) flash drives, embedded Multi-Media Controller (eMMC) drives, Universal Flash Storage (UFS) drives, Secure Digital (SD) cards, and Hard Disk Drives (HDDs). Examples of memory modules include Dual Inline Memory Modules (DIMMs), Small Outline DIMMs (SO-DIMMs), and various types of Non-Volatile Dual Inline Memory Modules (NVDIMMs).
Computing system 100 may be a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., an airplane, an unmanned aerial vehicle, a train, an automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., an embedded computer included in a vehicle, industrial equipment, or a networked commercial device), or any such computing device that includes memory and a processing device.
The computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, host system 120 is coupled to different types of memory subsystems 110. FIG. 1 illustrates one example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.
Host system 120 may include a processor chipset and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system 120 uses the memory subsystem 110, for example, to write data to the memory subsystem 110 and to read data from the memory subsystem 110.
Host system 120 may be coupled to memory subsystem 110 via a physical host interface. Examples of physical host interfaces include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a Double Data Rate (DDR) memory bus, Small Computer System Interface (SCSI), a Dual Inline Memory Module (DIMM) interface (e.g., a DIMM socket interface that supports Double Data Rate (DDR)), and the like. The physical host interface may be used to transfer data between host system 120 and memory subsystem 110. When memory subsystem 110 is coupled with host system 120 through a physical host interface (e.g., a PCIe bus), host system 120 may additionally utilize an NVM Express (NVMe) interface to access components (e.g., memory device 130). The physical host interface may provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120. FIG. 1 illustrates a memory subsystem 110 as an example. In general, the host system 120 may access multiple memory subsystems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections.
The memory devices 130, 140 may include any combination of different types of non-volatile memory devices and/or volatile memory devices. Volatile memory devices (e.g., memory device 140) may be, but are not limited to, Random Access Memory (RAM), such as Dynamic Random Access Memory (DRAM) and Synchronous Dynamic Random Access Memory (SDRAM).
Some examples of non-volatile memory devices, such as memory device 130, include NAND-type flash memory and write-in-place memory, such as a three-dimensional cross-point ("3D cross-point") memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory cells can perform bit storage based on a change of bulk resistance, in conjunction with a stackable, cross-gridded data access array. In addition, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).
Each of the memory devices 130 may include one or more arrays of memory cells. One type of memory cell, for example, a Single Level Cell (SLC), can store one bit per cell. Other types of memory cells, such as Multi-Level Cells (MLCs), Triple-Level Cells (TLCs), Quad-Level Cells (QLCs), and Penta-Level Cells (PLCs), can store multiple bits per cell. In some embodiments, each of the memory devices 130 may include one or more arrays of memory cells, such as SLCs, MLCs, TLCs, QLCs, PLCs, or any combination thereof. In some embodiments, a particular memory device may include an SLC portion, an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of memory device 130 may be grouped into pages, which may refer to logical units of the memory device used to store data. For some types of memory (e.g., NAND), pages can be grouped to form blocks.
Although non-volatile memory components such as 3D cross-point arrays of non-volatile memory cells and NAND-type flash memory (e.g., 2D NAND, 3D NAND) are described, memory device 130 may be based on any other type of non-volatile memory, such as Read Only Memory (ROM), Phase Change Memory (PCM), self-selecting memory, other chalcogenide-based memory, Ferroelectric Transistor Random Access Memory (FeTRAM), Ferroelectric Random Access Memory (FeRAM), Magnetic Random Access Memory (MRAM), Spin Transfer Torque (STT)-MRAM, Conductive Bridging RAM (CBRAM), Resistive Random Access Memory (RRAM), Oxide-based RRAM (OxRAM), Negative-OR (NOR) flash memory, and Electrically Erasable Programmable Read Only Memory (EEPROM).
The memory subsystem controller 115 (or simply controller 115) may communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130, among other such operations. The memory subsystem controller 115 may include hardware such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory subsystem controller 115 may be a microcontroller, dedicated logic circuitry (e.g., a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), etc.), or other suitable processor.
Memory subsystem controller 115 may be a processing device that includes one or more processors (e.g., processor 117) configured to execute instructions stored in local memory 119. In the illustrated example, the local memory 119 of the memory subsystem controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control the operation of the memory subsystem 110, including processing communications between the memory subsystem 110 and the host system 120.
In some embodiments, local memory 119 may contain memory registers that store memory pointers, retrieved data, and the like. Local memory 119 may also include Read Only Memory (ROM) for storing microcode. Although the example memory subsystem 110 in fig. 1 has been shown as including the memory subsystem controller 115, in another embodiment of the present disclosure, the memory subsystem 110 does not include the memory subsystem controller 115, but instead relies on external control (e.g., provided by an external host or by a processor or controller separate from the memory subsystem).
In general, the memory subsystem controller 115 may receive commands or operations from the host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory device 130. The memory subsystem controller 115 may be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and Error Correction Code (ECC) operations, encryption operations, cache operations, and address translation between logical addresses (e.g., logical Block Addresses (LBAs), namespaces) and physical addresses (e.g., physical MU addresses, physical block addresses) associated with the memory device 130. The memory subsystem controller 115 may further include host interface circuitry to communicate with the host system 120 via a physical host interface. The host interface circuitry may translate commands received from the host system into command instructions to access the memory device 130, and translate responses associated with the memory device 130 into information for the host system 120.
Memory subsystem 110 may also contain additional circuitry or components not shown. In some embodiments, memory subsystem 110 may include caches or buffers (e.g., DRAMs) and address circuitry (e.g., row decoders and column decoders) that may receive addresses from memory subsystem controller 115 and decode the addresses to access memory device 130.
In some embodiments, memory device 130 includes a local media controller 135 that operates in conjunction with memory subsystem controller 115 to perform operations on one or more memory cells of memory device 130. An external controller (e.g., memory subsystem controller 115) may manage memory device 130 externally (e.g., perform media management operations on memory device 130). In some embodiments, the memory device 130 is a managed memory device that includes a raw memory device 130 with control logic (e.g., local controller 132) on the die and a controller (e.g., memory subsystem controller 115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.
Memory subsystem 110 includes a memory access command manager 113 that can issue memory access commands to memory device 130. In some embodiments, the memory subsystem controller 115 includes at least a portion of the memory access command manager 113. In some embodiments, the memory access command manager 113 is part of the host system 120, an application program, or an operating system. In other embodiments, local media controller 135 includes at least a portion of memory access command manager 113 and is configured to perform the functionality described herein.
Memory access command manager 113 may maintain a memory access command data structure (hereinafter "controller command data structure") to track memory access commands received from memory subsystem controller 115 and/or host system 120. The controller command data structure may include a per-die command queue for each die of the memory device 130, a per-plane command queue for each plane of each die of the memory device 130, a per-die-set queue for one or more die sets of the memory device 130, or any combination thereof. For example, each command queue may store pending memory access commands issued to a corresponding plane of memory device 130 and/or memory device 140 (memory access commands of host system 120 and memory access commands of memory subsystem controller 115). The memory access command manager 113 may determine into which command queue to insert a received memory access command by determining which physical address or address range the memory access command requests access to.
The memory access command manager 113 may append a unique identifier to each memory access command. The unique identifier may allow the memory subsystem controller 115 to track memory access commands. Memory access command manager 113 may send queued memory access commands to memory device 130 and/or memory device 140. In response to a memory access command being processed (e.g., read data received from memory device 130 and/or memory device 140 and sent to host system 120, data written to memory device 130 and/or memory device 140, etc.), memory access command manager 113 may evict the corresponding memory access command from the controller command data structure.
In some embodiments, the memory access command manager 113 may process the memory access commands in the order received (e.g., FIFO). By implementing a FIFO structure, the memory access command manager 113 can prevent problems associated with data dependencies. A data dependency arises when a memory access command references data operated on by a previous memory access command. For example, the memory access command manager may receive a write command for data stored at a physical address on the memory device 130, followed by a read command for data from the same physical address. If the read command were executed before the write command, the read command would return incorrect data to the host system 120, because the write command has not yet been processed.
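As an illustration of the hazard that FIFO ordering prevents, the following check (reusing the struct plane_queue and struct mem_cmd types sketched above; the function itself is an assumption, not part of the disclosure) detects a read that would overtake an older queued write to the same physical address if commands were reordered freely:

```c
/* Returns true if a read to `addr` would overtake an older, still queued
 * write to the same address. Under strict FIFO processing the write is
 * always executed first, so the read never returns stale data. */
static bool read_would_pass_write(const struct plane_queue *q, uint64_t addr)
{
    for (uint8_t i = 0; i < q->count; i++) {
        uint8_t idx = (q->head + i) % PLANE_QUEUE_DEPTH;
        if (q->entries[idx].op == OP_WRITE &&
            q->entries[idx].phys_addr == addr)
            return true;
    }
    return false;
}
```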
In some embodiments, the memory access command manager 113 may maintain and manage two or more controller command data structures. Each of the two or more controller command data structures may include a separate per-plane command queue for the same set of dies of memory device 130 or for different sets of dies of memory device 130. For example, in some embodiments, the memory access command manager 113 may maintain and manage high priority data structures and low priority data structures to prioritize some memory access commands over other memory access commands (of lower priority). The high priority memory access commands may be processed by the memory access command manager 113 (e.g., sent to the memory device 130 and/or the memory device 140) before the low priority memory access commands, even though the low priority memory access commands were received by the memory access command manager 113 before the high priority memory access commands. The high priority memory access commands may be memory access commands issued by the host system 120, while the low priority memory access commands may be memory access commands issued by a memory subsystem controller (e.g., memory access commands related to maintenance operations such as garbage collection, wear leveling, bad block management, block refresh operations, etc.). High priority memory access commands and low priority memory access commands are used as illustrative examples. Each memory access command may contain a priority identifier (e.g., a high priority identifier, a low priority identifier, etc.). The priority identifier may indicate whether the received memory access command is stored in a high priority data structure or a low priority data structure.
In some embodiments, memory access command manager 113 may use a traffic arbiter to arbitrate between sending high priority memory access commands from the high priority data structure and low priority memory access commands from the low priority data structure to memory device 130 and/or memory device 140. In some embodiments, the memory access command manager 113 may implement a ratio scheme to alternate between sending a certain number of high priority memory access commands and sending a certain number of low priority memory access commands. For example, for every five memory access commands, the memory access command manager 113 may send four high priority memory access commands and one low priority memory access command (an 80% ratio). The ratio may be predetermined during manufacture and/or calibration of the memory subsystem 110 or defined by a user of the memory subsystem 110. In some embodiments, the ratio scheme may be initiated based on a predetermined condition. For example, the memory access command manager 113 can sample the high priority data structure and the low priority data structure to determine the number of high priority memory access commands in the high priority data structure and the number of low priority memory access commands in the low priority data structure. In response to the criteria being met, the memory access command manager 113 can enable or disable the ratio scheme. The criteria may include a ratio of high priority memory access commands to low priority memory access commands exceeding a threshold, a number of high priority memory access commands in the high priority data structure exceeding a threshold, a number of low priority memory access commands in the low priority data structure exceeding a threshold, or any combination thereof.
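A sketch of such a traffic arbiter appears below, using the 4:1 (80%) ratio from the example; the window bookkeeping, the enable criterion, and all identifiers are assumptions made for illustration:

```c
#include <stdbool.h>

struct ratio_arbiter {
    unsigned high_per_window;   /* e.g., 4 high priority commands ... */
    unsigned low_per_window;    /* ... then 1 low priority command (80% ratio) */
    unsigned sent_high;
    unsigned sent_low;
    bool     ratio_enabled;
};

/* Returns true to pull the next command from the high priority data
 * structure, false to pull it from the low priority data structure. */
static bool arbiter_pick_high(struct ratio_arbiter *a,
                              unsigned high_pending, unsigned low_pending)
{
    if (high_pending == 0) return false;   /* nothing high to send */
    if (low_pending == 0)  return true;    /* nothing low to send */
    if (!a->ratio_enabled) return true;    /* strict priority when disabled */

    if (a->sent_high < a->high_per_window) { a->sent_high++; return true; }
    if (a->sent_low  < a->low_per_window)  { a->sent_low++;  return false; }
    a->sent_high = 1;                      /* start a new window */
    a->sent_low  = 0;
    return true;
}

/* Example criterion: enable the ratio scheme when the number of queued
 * low priority commands exceeds a threshold (one of several criteria
 * mentioned above; the specific choice here is illustrative). */
static void arbiter_sample(struct ratio_arbiter *a,
                           unsigned low_pending, unsigned low_threshold)
{
    a->ratio_enabled = (low_pending > low_threshold);
}
```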
Memory subsystem 110 further includes a queue manager 137 that can manage memory access command queues for memory device 130. In some embodiments, queue manager 137 is part of memory access command manager 113 and/or memory subsystem controller 115. In some embodiments, at least a portion of queue manager 137 is part of host system 120, an application, or an operating system. In other embodiments, local media controller 135 comprises at least a portion of queue manager 137 and is configured to perform the functions described herein.
The queue manager 137 may manage one or more command queue sets for each die of the memory device 130 or for each plane of each die of the memory device 130. Similar to memory access command manager 113, queue manager 137 may maintain a memory access command data structure (hereinafter "memory device command data structure") to track outstanding memory access commands received from memory subsystem controller 115. The memory device command data structure may include a per-plane command queue, a per-die command queue, or any combination thereof for each die of the memory device 130. Each command queue may store pending memory access commands issued to a corresponding plane of memory device 130. The queue manager 137 may determine into which command queue to insert a memory access command received from the memory access command manager 113 by determining which physical address or address range the memory access command requests access to. For example, a host command may provide a logical address that is translated to a physical address by the memory subsystem controller 115. The physical address may identify the die, plane, etc. As an illustrative example, memory device 130 may include 16 dies, each die including four planes. Thus, the queue manager 137 can use the memory device command data structure to manage memory access commands for 64 planes (4 planes per die x 16 dies = 64 planes).
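The lookup from physical address to plane queue can be illustrated with the 16-die, four-plane geometry above; the bit layout of the physical address is purely an assumption (real layouts are device specific):

```c
#include <stdint.h>

#define NUM_DIES   16
#define NUM_PLANES 4    /* 4 planes per die x 16 dies = 64 plane queues */

struct phys_loc { uint8_t die; uint8_t plane; };

/* Hypothetical layout: [ die : 4 bits | plane : 2 bits | block/page offset ]. */
static struct phys_loc decode_phys_addr(uint64_t phys_addr)
{
    struct phys_loc loc;
    loc.plane = (uint8_t)((phys_addr >> 20) & 0x3);
    loc.die   = (uint8_t)((phys_addr >> 22) & 0xF);
    return loc;
}

/* Index into a flat array of 64 per-plane command queues. */
static unsigned plane_queue_index(struct phys_loc loc)
{
    return (unsigned)loc.die * NUM_PLANES + loc.plane;
}
```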
Each plane may contain its own memory access command queue (hereinafter "plane queue") capable of storing at least one memory access command. For example, each plane queue may contain a single entry for storing one memory access command, two entries for storing two memory access commands, or more than two entries for storing a corresponding number of memory access commands. Each plane queue is handled autonomously by the die (e.g., without input from the queue manager 137). For example, in response to processing one of two pending memory access commands in the plane queue, the die may process the second pending memory access command without an instruction from the queue manager 137.
The queue manager 137 may receive the memory access command from the memory access command manager 113, determine the physical address referenced by the memory access command, and insert the memory access command into the appropriate command queue in the memory device command data structure. The queue manager 137 may further translate the memory access commands into a set of die and channel operations and send the memory access commands to the appropriate die on the memory device 130 for insertion into the appropriate plane queues.
In some embodiments, the queue manager 137 can send memory access commands to and receive data from the die using one or more communication channels. The communication channel may include, for example, an Open NAND Flash Interface (ONFI) channel, or any other channel capable of enabling communication between the queue manager 137 and the die. In some embodiments, multiple communication channels may be used, with each communication channel being connected to a particular set of dies. For example, memory device 130 may include two sets of dies, a first set including dies 0, 2, 4, and 6, and a second set including dies 1, 3, 5, and 7. The queue manager 137 can communicate with the first set of dies using one communication channel and with the second set of dies using another communication channel.
The queue manager 137 can use one or more schemes to process memory access commands from the memory device command data structure. In some embodiments, the queue manager 137 may process the memory access commands using a round robin scheme, wherein the queue manager 137 cycles through sending the memory access commands to each die or to each die in a set. For example, with respect to the first set of dies, queue manager 137 can send memory access commands to die 0, then die 2, then die 4, then die 6, then again die 0, and so on. In some embodiments, the queue manager 137 may perform a loop for sending memory access commands followed by a loop for retrieving data from the die set (e.g., data requested by read commands). In some embodiments, the queue manager 137 may send memory access commands in parallel over multiple communication channels.
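The round-robin loop over a die set might be sketched as follows; channel_send_next_cmd() is an assumed helper rather than an interface defined by this disclosure:

```c
#include <stddef.h>

/* Assumed helper: pop the next queued command for `die` and place it
 * on this die set's communication channel. */
void channel_send_next_cmd(unsigned die);

/* A die set wired to one communication channel, e.g., dies { 0, 2, 4, 6 }. */
struct die_set {
    const unsigned *dies;
    size_t count;
    size_t next;    /* round-robin cursor */
};

/* One sending pass over the set; a separate pass could then retrieve
 * data the dies have made ready (e.g., data requested by read commands). */
static void round_robin_send(struct die_set *set)
{
    for (size_t i = 0; i < set->count; i++) {
        unsigned die = set->dies[set->next];
        set->next = (set->next + 1) % set->count;
        channel_send_next_cmd(die);
    }
}
```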
In some embodiments, each die may perform memory access command setup operations (e.g., receiving memory access commands from the queue manager 137 and inserting the memory access commands into the appropriate plane queues) and data transfer operations (e.g., processing memory access commands from the plane queues) simultaneously. In particular, a new memory access command can be inserted into a plane queue while a pending memory access command from that plane queue is being processed. In an illustrative example, a plane queue of a die may include a command to be processed. While processing the pending memory access command, the die may receive a new memory access command from the queue manager 137. The die may insert the new memory access command into the plane queue without interrupting execution of the pending memory access command. In some embodiments, the die may receive and insert a new memory access command into the plane queue as long as the communication channel to the memory subsystem controller 115 is free and the plane queue has at least one empty entry available to record the new memory access command.
In some embodiments, a single memory access command may include a range of memory addresses spanning multiple planes of the die (hereinafter referred to as a "multi-plane memory access command"). In response to receiving the multi-plane memory access command, the queue manager 137 may insert the multi-plane memory access command into two or more plane queues corresponding to an address range of the multi-plane memory access command. For example, in response to receiving a read command containing an address range located in the address space of plane 1 and plane 2, queue manager 137 may insert the read command into the plane queue of plane 1 and into the plane queue of plane 2. With respect to multi-plane read commands, each plane may return a portion of the data requested by the read command. In response, queue manager 137 can perform an assembly operation to merge the partial data retrieved from each plane. With respect to multi-plane write commands, the queue manager 137 may divide the write command into a plurality of write commands, each of which contains a portion of the write command's data. Each of the plurality of write commands may be inserted into an appropriate plane queue.
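The fan-out step for a multi-plane read can be sketched as follows; both helper functions are assumptions introduced for the example:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_PLANES 4

/* Assumed helpers, not interfaces defined by this disclosure: */
bool plane_covers_range(unsigned plane, uint64_t addr, uint64_t len);
void queue_read_for_plane(unsigned plane, uint64_t addr, uint64_t len);

/* Insert a multi-plane read into every plane queue whose address space
 * overlaps the command's range; each plane later returns its portion of
 * the data, which the queue manager merges in the assembly operation. */
static void dispatch_multi_plane_read(uint64_t addr, uint64_t len)
{
    for (unsigned p = 0; p < NUM_PLANES; p++)
        if (plane_covers_range(p, addr, len))
            queue_read_for_plane(p, addr, len);
}
```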
In some embodiments, queue manager 137 can implement a rule set for multi-plane memory access commands. In some embodiments, the queue manager 137 may send a multi-plane memory access command to the die only when the plane queue of each corresponding plane addressed by the multi-plane memory access command is empty (e.g., the plane queue has no pending memory access command or data to clock out). In some embodiments, when a plane queue contains a multi-plane command, the queue manager 137 may insert only single plane memory access commands into the plane queue. In some embodiments, the queue manager 137 may insert only high priority memory access commands into a plane queue with pending memory access commands.
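Combined into a single admission predicate (an illustrative combination; the disclosure presents these rules as separate embodiments), the rule set might read as follows, reusing the queue types sketched earlier:

```c
/* Returns true if a command may be inserted into plane queue `q` under
 * the rules above. Reuses struct plane_queue and enum cmd_priority. */
static bool insertion_allowed(const struct plane_queue *q,
                              bool cmd_is_multi_plane,
                              bool queue_holds_multi_plane,
                              enum cmd_priority prio)
{
    /* Multi-plane commands are sent only to empty plane queues. */
    if (cmd_is_multi_plane)
        return q->count == 0;
    /* While a multi-plane command is queued, only single plane
     * commands may be inserted alongside it (this one is). */
    if (queue_holds_multi_plane)
        return true;
    /* A queue that already holds pending commands accepts only
     * high priority commands. */
    if (q->count > 0)
        return prio == PRIO_HIGH;
    return true;
}
```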
In some embodiments, the queue manager 137 and/or the processing unit of the die may suspend processing command operations. For example, the queue manager 137 and/or the processing unit of the die may suspend writing data to one or more blocks, reading data from one or more blocks, erasing data from one or more blocks, and so forth. In some embodiments, the queue manager 137 and/or the processing unit of the die may impose a limit on how many memory access commands may be suspended per die and/or per plane. For example, during an operation in which die 0 sends data from a low priority memory access command to queue manager 137, the operation may be suspended so that data from a high priority memory access command on die 2 can be sent to queue manager 137 over the communication channel. Once the high priority memory access command is serviced, the processing unit of the die may resume the original command operation by sending the data from die 0 to queue manager 137.
In some embodiments, a processing unit of the die (e.g., local media controller 135) may evict memory access commands located in a plane queue. For example, the plane queue may store a write command addressed to a particular address space. The die may then insert a new write command into the plane queue that is addressed to the same address space as the previously received write command. Because the new write command supersedes the earlier one, the die may evict the previously received write command.
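That eviction can be sketched as follows, again reusing the earlier queue types; the in-place compaction of the ring buffer is an implementation assumption:

```c
/* Evict a queued write whose physical address matches the new write's
 * address; the new write supersedes it. Reuses struct plane_queue,
 * struct mem_cmd, OP_WRITE, and PLANE_QUEUE_DEPTH from the earlier sketch. */
static void evict_superseded_write(struct plane_queue *q,
                                   const struct mem_cmd *new_write)
{
    for (uint8_t i = 0; i < q->count; i++) {
        uint8_t idx = (q->head + i) % PLANE_QUEUE_DEPTH;
        if (q->entries[idx].op == OP_WRITE &&
            q->entries[idx].phys_addr == new_write->phys_addr) {
            for (uint8_t j = i; j + 1 < q->count; j++) {   /* close the gap */
                uint8_t a = (q->head + j) % PLANE_QUEUE_DEPTH;
                uint8_t b = (q->head + j + 1) % PLANE_QUEUE_DEPTH;
                q->entries[a] = q->entries[b];
            }
            q->count--;
            q->tail = (uint8_t)((q->head + q->count) % PLANE_QUEUE_DEPTH);
            return;
        }
    }
}
```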
In some embodiments, the processing unit of the die may prioritize execution of one or more data access commands in the plane queue based on the priority of the data access commands. For example, queue manager 137 may first insert a low priority read command into a plane queue and then insert a high priority read command into the plane queue. The processing unit of the die may identify the priority of each read command and execute the high priority read command before executing the low priority read command.
In some embodiments, the processing unit of the die may send data from multiple read commands in a different order than the order in which the read commands were received. In some embodiments, the die may clock out the data based on priority. For example, a die may receive two read commands. The die may insert each read command into the appropriate plane queue. The read commands may be inserted into the same plane queue or into different plane queues. The die may process the previously received read command (which has low priority), but because the communication channel is busy, the data for that read command cannot be clocked out. The die may then process the later received read command (which has high priority). In response to the communication channel becoming idle, the die may first clock out the data of the later received read command, because the later received read command has a higher priority than the previously received read command. Further details regarding the operation of the plane queues and queue manager 137 are described below.
FIG. 2 is a flowchart of an example method 200 of utilizing a memory access command queue, according to some embodiments of the present disclosure. The method 200 may be performed by processing logic that may comprise hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, the hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 200 is performed by queue manager 137 of FIG. 1. Although shown in a particular order or sequence, the sequence of the processes may be modified unless otherwise specified. Thus, it should be understood that the illustrated embodiments are merely examples, that the illustrated processes may be performed in a different order, and that some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are also possible.
At operation 210, processing logic may receive a memory access command. For example, processing logic may receive write commands, read commands, erase commands, and the like. The memory access command may be host initiated or memory subsystem controller initiated. In some embodiments, the memory access command may contain a unique identifier that allows the memory subsystem controller 115 to track the memory access command. In some embodiments, the memory access command may include a priority identifier (e.g., a high priority identifier, a low priority identifier, etc.).
At operation 220, processing logic may determine a physical address associated with the memory access command. In some embodiments, the physical address may be located in the address space of a single plane. In some embodiments, the physical address may be located in the address space of multiple planes on the die.
At operation 230, processing logic may determine a plane of the die on the memory device referenced by the physical address. For example, processing logic may use a table to determine which plane of which die contains the address space.
At operation 240, processing logic may insert the memory access command into a queue associated with the plane. In some embodiments, the queue may store two or more memory access commands.
At operation 250, processing logic may process the memory access command from the queue. In one example, in response to the memory access command being a write command, processing logic may write the data of the write command to an address space on the plane. In another example, in response to the memory access command being a read command, processing logic may retrieve the data requested by the read command from an address space on the plane and send the retrieved data to the memory subsystem controller.
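Taken together, operations 210 through 250 amount to the control flow sketched below; get_plane_queue() and process_next_command() are assumed helpers, and the busy-wait stands in for whatever back-pressure mechanism an implementation would actually use:

```c
/* Reuses struct mem_cmd, struct plane_queue, plane_queue_insert(),
 * struct phys_loc, decode_phys_addr(), and plane_queue_index() from
 * the earlier sketches. */
struct plane_queue *get_plane_queue(unsigned index);   /* assumed lookup */
void process_next_command(struct plane_queue *q);      /* assumed executor */

static void handle_memory_access_command(struct mem_cmd cmd)        /* 210 */
{
    struct phys_loc loc = decode_phys_addr(cmd.phys_addr);          /* 220, 230 */
    struct plane_queue *q = get_plane_queue(plane_queue_index(loc));
    while (!plane_queue_insert(q, cmd))                             /* 240 */
        ;   /* simplified: wait for a queue entry to free up */
    process_next_command(q);                                        /* 250 */
}
```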
FIG. 3 shows a diagram 300 of a die structured with four plane queues, according to some embodiments of the present disclosure. In some embodiments, die 310 may include plane A 332, plane B 342, plane C 352, and plane D 362. Each of the planes may include a plane queue. As shown, plane A 332 contains queue A 334, plane B 342 contains queue B 344, plane C 352 contains queue C 354, and plane D 362 contains queue D 364. Each plane queue contains two entries into which memory access commands can be inserted for processing. As shown, queue A 334 contains entry A-1 336 and entry A-2 338, queue B 344 contains entry B-1 346 and entry B-2 348, queue C 354 contains entry C-1 356 and entry C-2 358, and queue D 364 contains entry D-1 366 and entry D-2 368. Data cache A 372, data cache B 374, data cache C 376, and data cache D 378 may each contain one or more registers for moving data between each plane's queue and its blocks.
FIG. 4 illustrates an example computing system 400 including a memory access command manager 413 and a memory device 475, according to some embodiments of the disclosure. The memory device 475 may include eight dies (die-0 480, die-1 481, die-2 482, die-3 483, die-4 484, die-5 485, die-6 486, die-7 487). Each die may contain two or more planes. Each die may include a corresponding set of plane queues (e.g., die-0 queue 460, die-1 queue 461, die-2 queue 462, die-3 queue 463, die-4 queue 464, die-5 queue 465, die-6 queue 466, die-7 queue 467). Each plane queue may store pending memory access commands issued by the memory access command manager 413 to the planes of the corresponding die of the memory device 475. Each plane queue contains at least two entries into which memory access commands can be inserted for processing.
Computing system 400 may further comprise a host system (not shown) capable of communicating with memory access command manager 413. A host system (e.g., host system 120) may send a memory access command to memory access command manager 413.
Memory access command manager 413 may be similar to memory access command manager 113. Memory access command manager 413 may include high priority data structure 420, low priority data structure 425, queue manager 452, traffic arbiter 448, and traffic arbiter 470. The high priority data structure 420 may be structured to store high priority commands for each die of the memory device 475. As shown, high priority data structure 420 includes eight command queues, one command queue per die of memory device 475 (e.g., die-0 queue 430, die-1 queue 431, die-2 queue 432, die-3 queue 433, die-4 queue 434, die-5 queue 435, die-6 queue 436, and die-7 queue 437). The low priority data structure 425 also includes eight command queues, one command queue per die of memory device 475 (e.g., die-0 queue 440, die-1 queue 441, die-2 queue 442, die-3 queue 443, die-4 queue 444, die-5 queue 445, die-6 queue 446, and die-7 queue 447). Each memory access command may contain a priority identifier (e.g., a high priority identifier, a low priority identifier, etc.) to indicate whether the received memory access command is stored in high priority data structure 420 or low priority data structure 425. The traffic arbiter 448 may arbitrate between sending high priority memory access commands from the high priority data structure 420 and low priority memory access commands from the low priority data structure 425 to the queue manager 452. For example, the traffic arbiter 448 may implement a ratio scheme to alternate between sending a certain number of high priority memory access commands and sending a certain number of low priority memory access commands.
The queue manager 452 can manage a memory device command data structure (not shown) to track outstanding memory access commands received from the traffic arbiter 448. The traffic arbiter 470 may process memory access commands from the memory device command data structure using one or more schemes. In some embodiments, the traffic arbiter 470 may process the memory access commands using a round robin scheme, where the traffic arbiter 470 cycles through sending the memory access commands to each die or to each die in a set. For example, with respect to the first set of dies, the traffic arbiter 470 can send a memory access command to die-0 480, then die-2 482, then die-4 484, then die-6 486, then again die-0 480, and so on. In some embodiments, the traffic arbiter 470 may perform a loop for sending memory access commands followed by a loop for retrieving data from the set of dies (e.g., data requested by read commands). The processing device of memory device 475 may insert each memory access command into the appropriate plane queue of the appropriate die.
In some embodiments, the traffic arbiter 470 may send memory access commands to and receive data from the dies of the memory device 475 using the communication channel 472. In some embodiments, multiple communication channels may be used, with each communication channel being connected to a particular set of dies. For example, a first set of dies may include die-0 480, die-2 482, die-4 484, and die-6 486, and a second set of dies may include die-1 481, die-3 483, die-5 485, and die-7 487. Traffic arbiter 470 may communicate with the first set of dies using one communication channel and with the second set of dies using another communication channel.
FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In some embodiments, computer system 500 may correspond to a host system (e.g., host system 120 of FIG. 1) that contains or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1), or it may be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to memory access command manager 113 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
The machine may be a Personal Computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a network appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. In addition, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 500 includes a processing device 502, a main memory 504 (e.g., read Only Memory (ROM), flash memory, dynamic Random Access Memory (DRAM), such as Synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static Random Access Memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530. The processing device 502 represents one or more general-purpose processing devices, such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a Complex Instruction Set Computing (CISC) microprocessor, a Reduced Instruction Set Computing (RISC) microprocessor, a Very Long Instruction Word (VLIW) microprocessor, or a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 502 may also be one or more special purpose processing devices, such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), a network processor, or the like. The processing device 502 is configured to execute the instructions 526 for performing the operations and steps discussed herein. Computer system 500 may further include a network interface device 508 that communicates over a network 520.
The data storage system 518 may include a machine-readable storage medium 524 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. The instructions 526 may also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. The machine-readable storage medium 524, the data storage system 518, and/or the main memory 504 may correspond to the memory subsystem 110 of fig. 1.
In one embodiment, the instructions 526 include instructions for implementing functionality corresponding to the memory access command manager 113 of FIG. 1. While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media storing the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. Accordingly, the term "machine-readable storage medium" shall be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.
The present disclosure also relates to an apparatus for performing the operations herein. Such an apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
The present disclosure may be provided as a computer program product or software that may include a machine-readable medium having stored thereon instructions that may be used to program a computer system (or other electronic device) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, and the like.
In the foregoing specification, embodiments of the disclosure have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims (20)

1. A system, comprising:
a memory device; and
a processing device operably coupled with the memory device to perform operations comprising:
receiving a memory access command;
determining a physical address associated with the memory access command;
determining a plane of a die on the memory device referenced by the physical address;
inserting the memory access command into a queue associated with the plane; and
processing the memory access command from the queue.
2. The system of claim 1, wherein the processing device is to perform further operations comprising:
determining a priority of the memory access command; and
processing the memory access command based on the priority.
3. The system of claim 1, wherein the processing device is to perform further operations comprising:
evicting, in response to receiving the memory access command referencing the physical address, another memory access command in the queue that references the same physical address.
4. The system of claim 1, wherein the processing device is to perform further operations comprising:
processing a further memory access command from the queue, wherein the further memory access command was inserted into the queue after the memory access command; and
sending data associated with the further memory access command before sending data associated with the memory access command.
5. The system of claim 1, wherein the processing device is to perform further operations comprising:
suspending processing of the memory access command from the queue;
processing an additional memory access command from the queue; and
resuming processing of the memory access command from the queue.
6. The system of claim 1, wherein the processing device is to perform further operations comprising:
receiving the memory access command from at least one of a high priority data structure or a low priority data structure, wherein each data structure comprises at least one of: a per-die command queue for each die of the memory device, a per-plane command queue for each plane of each die of the memory device, or a per-die-set queue for one or more die sets of the memory device.
7. The system of claim 6, wherein the memory access commands are received based on a scheme that adjusts between sending a first amount of high priority memory access commands and a second amount of low priority memory access commands.
8. A method, comprising:
receiving a memory access command;
determining an address range associated with the memory access command;
determining at least two planes of a die on the memory device referenced by the address range;
inserting a different portion of the memory access command into each of a plurality of queues, wherein each of the plurality of queues is associated with a respective plane of the at least two planes; and
processing the memory access command from each of the plurality of queues.
9. The method as recited in claim 8, further comprising:
determining a priority of the memory access command; and
processing the memory access command based on the priority.
10. The method as recited in claim 8, further comprising:
evicting, in response to receiving a memory access command referencing a physical address, another memory access command, located in a queue of the plurality of queues, that references the same physical address.
11. The method as recited in claim 8, further comprising:
processing a further memory access command from a queue of the plurality of queues, wherein the further memory access command was inserted into the queue after the memory access command; and
sending data associated with the further memory access command before sending data associated with the memory access command.
12. The method as recited in claim 8, further comprising:
suspending processing of the memory access command from a queue of the plurality of queues;
processing an additional memory access command from the queue; and
resuming processing of the memory access command from the queue.
13. The method as recited in claim 8, further comprising:
receiving the memory access command from at least one of a high priority data structure or a low priority data structure, wherein each data structure comprises at least one of: a per-die command queue for each die of the memory device, a per-plane command queue for each plane of each die of the memory device, or a per-die-set queue for one or more die sets of the memory device.
14. The method of claim 13, wherein the memory access commands are received based on a scheme that adjusts between sending a first amount of high priority memory access commands and a second amount of low priority memory access commands.
15. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device operably coupled to a memory device, cause the processing device to perform operations comprising:
receiving a memory access command;
determining a physical address associated with the memory access command;
determining a plane of a die on the memory device referenced by the physical address;
inserting the memory access command into a queue associated with the plane; and
processing the memory access command from the queue.
16. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is to perform further operations comprising:
determining a priority of the memory access command; and
processing the memory access command based on the priority.
17. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is to perform further operations comprising:
evicting, in response to receiving the memory access command referencing the physical address, another memory access command in the queue that references the same physical address.
18. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is to perform further operations comprising:
processing a further memory access command from the queue, wherein the further memory access command was inserted into the queue after the memory access command; and
sending data associated with the further memory access command before sending data associated with the memory access command.
19. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is to perform further operations comprising:
suspending processing of the memory access command from the queue;
processing an additional memory access command from the queue; and
resuming processing of the memory access command from the queue.
20. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is to perform further operations comprising:
receiving the memory access command from at least one of a high priority data structure or a low priority data structure, wherein each data structure comprises at least one of: a per-die command queue for each die of the memory device, a per-plane command queue for each plane of each die of the memory device, or a per-die-set queue for one or more die sets of the memory device.
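
Illustrative sketches

The sketches below are illustrative only; they are not part of the claims or the specification. Each is a minimal Python model of a claimed behavior, and every name, address layout, and parameter in them is a hypothetical assumption introduced for illustration. This first sketch models the per-plane queuing of claim 1, assuming (purely for the sketch) that a physical address encodes its plane and die in fixed fields:

    from collections import deque

    PLANES_PER_DIE = 4   # hypothetical geometry
    DIES = 2

    # one command queue per (die, plane) pair, as recited in claim 1
    queues = {(d, p): deque() for d in range(DIES) for p in range(PLANES_PER_DIE)}

    def die_and_plane(physical_address):
        """Derive the die and plane referenced by an address (assumed layout)."""
        plane = physical_address % PLANES_PER_DIE
        die = (physical_address // PLANES_PER_DIE) % DIES
        return die, plane

    def submit(command):
        """Insert a memory access command into the queue for its plane."""
        queues[die_and_plane(command["addr"])].append(command)

    def process(die, plane):
        """Process the oldest command queued for the given plane, if any."""
        q = queues[(die, plane)]
        return q.popleft() if q else None

    submit({"op": "read", "addr": 13})
    print(process(*die_and_plane(13)))   # -> {'op': 'read', 'addr': 13}

Commands aimed at different planes land in different queues, so one busy plane need not block commands destined for the others.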
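
Claim 3 recites evicting a queued command that references the same physical address as a newly received command. A minimal sketch, assuming a single queue and dictionary-shaped commands, in which an older write to an address is superseded by a newer one:

    from collections import deque

    queue = deque()

    def submit_with_eviction(command):
        """Insert a command, first evicting any queued command to the same address."""
        for queued in list(queue):
            if queued["addr"] == command["addr"]:
                queue.remove(queued)   # the older command is superseded
        queue.append(command)

    submit_with_eviction({"op": "write", "addr": 7, "data": "old"})
    submit_with_eviction({"op": "write", "addr": 7, "data": "new"})
    print(list(queue))   # only the newer write to address 7 remains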
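
Claim 4 recites that data for a command inserted into the queue later may be sent before data for a command inserted earlier; completion order need not follow insertion order. A hedged sketch, with hypothetical per-command completion times:

    from collections import deque

    queue = deque([
        {"name": "large-read", "time_units": 5},   # inserted first
        {"name": "small-read", "time_units": 1},   # inserted later
    ])

    # send each command's data as it completes, not in insertion order
    for cmd in sorted(queue, key=lambda c: c["time_units"]):
        print("data sent for", cmd["name"])   # small-read's data is sent first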
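
Claim 5 recites suspending a command, processing an additional command from the queue, and then resuming the suspended command. A minimal sketch, assuming commands that complete in discrete units of work:

    from collections import deque

    queue = deque([
        {"name": "long-erase", "remaining": 3},
        {"name": "urgent-read", "remaining": 1},
    ])

    def process_step(command):
        """Advance a command by one unit of work; True once it completes."""
        command["remaining"] -= 1
        return command["remaining"] == 0

    current = queue.popleft()
    process_step(current)                 # partially process the long command
    suspended = current                   # suspend it
    process_step(queue.popleft())         # process the additional command to completion
    print("urgent-read done")
    while not process_step(suspended):    # resume the suspended command
        pass
    print(suspended["name"], "done")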
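
Claims 6 and 7 recite receiving commands from high priority and low priority data structures under a scheme that adjusts between the amounts taken from each. A sketch, assuming a hypothetical fixed ratio (one low priority command after every three high priority commands) so that background work is not starved:

    from collections import deque

    high, low = deque(), deque()   # high and low priority data structures

    # hypothetical scheme: after every HIGH_PER_LOW high priority commands,
    # take one low priority command
    HIGH_PER_LOW = 3

    def next_command(step):
        """Pick the next command, adjusting between the two priority sources."""
        want_low = (step % (HIGH_PER_LOW + 1) == HIGH_PER_LOW) and low
        source = low if want_low else (high if high else low)
        return source.popleft() if source else None

    for i in range(4):
        high.append(f"host-read-{i}")
    low.append("background-scan-0")

    for step in range(5):
        print(next_command(step))   # three host reads, one scan, then the rest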
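
Claim 8 recites splitting a command whose address range spans more than one plane into portions, each inserted into the queue for its plane. A sketch, assuming a hypothetical striped layout with PLANE_SIZE consecutive addresses per plane:

    from collections import deque

    PLANES = 4
    PLANE_SIZE = 16   # hypothetical addresses per plane stripe
    queues = [deque() for _ in range(PLANES)]

    def submit_range(op, start, length):
        """Split an address range into contiguous per-plane portions and queue each."""
        end = start + length
        addr = start
        while addr < end:
            plane = (addr // PLANE_SIZE) % PLANES
            stripe_end = (addr // PLANE_SIZE + 1) * PLANE_SIZE
            portion_end = min(end, stripe_end)
            queues[plane].append({"op": op, "start": addr, "len": portion_end - addr})
            addr = portion_end

    submit_range("read", start=12, length=10)   # spans planes 0 and 1
    for plane, q in enumerate(queues):
        if q:
            print(plane, list(q))

The portions can then be processed independently, one per plane, as in the single-plane sketch above.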