CN114546903A - Storage device and storage system including the same - Google Patents
- Publication number
- CN114546903A CN114546903A CN202111368417.XA CN202111368417A CN114546903A CN 114546903 A CN114546903 A CN 114546903A CN 202111368417 A CN202111368417 A CN 202111368417A CN 114546903 A CN114546903 A CN 114546903A
- Authority
- CN
- China
- Prior art keywords
- command
- host
- core
- storage device
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1605—Handling requests for interconnection or transfer for access to memory bus based on arbitration
- G06F13/1652—Handling requests for interconnection or transfer for access to memory bus based on arbitration in a multiprocessor architecture
- G06F13/1657—Access to multiple memories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
- G06F13/1694—Configuration of memory controller to different memory types
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0658—Controller construction arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0688—Non-volatile semiconductor memory arrays
Abstract
A storage device includes: a non-volatile memory; a plurality of cores; a host interface configured to receive a first setting command, an I/O command, and an ADMIN command from a host; and a storage controller including a command distribution module configured to be set to a first state according to the first setting command, and to distribute the I/O command to the plurality of cores according to the set first state. Each of the plurality of cores may be configured to perform, on the non-volatile memory, the operation indicated by the I/O command and the operation indicated by the ADMIN command in response to the distributed commands.
Description
Cross Reference to Related Applications
Korean patent application No. 10-2020-.
Technical Field
Embodiments relate to a storage device and a storage system including the same.
Background
The storage device may be used for various purposes depending on the environment of the storage system in which it operates. For example, the storage device may be used for gaming, document work, or viewing high-definition video. The storage device may include a multi-core processor to improve its performance.
Disclosure of Invention
An embodiment relates to a storage device, comprising: a non-volatile memory; a plurality of cores; a host interface configured to receive a first setting command, an I/O command, and an ADMIN command from a host; and a storage controller including a command distribution module configured to be set to a first state according to the first setting command, and to distribute the I/O command to the plurality of cores according to the set first state. Each of the plurality of cores may be configured to perform, on the non-volatile memory, the operation indicated by the I/O command and the operation indicated by the ADMIN command in response to the distributed commands.
Embodiments are also directed to a storage device, comprising: a non-volatile memory; and a storage controller configured to receive a first setting command from the host at a first point in time, configured to perform an operation indicated by the I/O command on the non-volatile memory in response to the I/O command provided from the host, and configured not to perform the operation indicated by the ADMIN command on the non-volatile memory in response to the ADMIN command provided from the host.
Embodiments are also directed to a storage system comprising: a host; a first storage device including a first non-volatile memory, a plurality of first cores configured to control the first non-volatile memory, and a first storage controller configured to output a first state to the host in response to a first status command provided from the host, the first state including information indicating how a first ADMIN command and a first I/O command provided from the host are distributed; and a second storage device including a second non-volatile memory, a plurality of second cores configured to control the second non-volatile memory, and a second storage controller configured to output a second state to the host in response to a second status command provided from the host, the second state including information indicating how a second ADMIN command and a second I/O command provided from the host are distributed. The host may be configured to provide a third I/O command to one of the first storage device and the second storage device based on the first state and the second state.
Drawings
Features will become apparent to those skilled in the art from the detailed description of exemplary embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram illustrating a storage system according to some example embodiments;
- FIG. 2 is a diagram for explaining the operation of the storage system of FIG. 1;
- FIGS. 3 and 4 are diagrams for explaining the setting command of FIG. 2;
FIGS. 5 to 8 are diagrams for explaining the operation of the command distribution module of FIG. 1;
FIG. 9 is a block diagram illustrating a storage system according to some other example embodiments;
- FIGS. 10 and 11 are diagrams for explaining the operation of the storage system of FIG. 9;
FIG. 12 is a block diagram for explaining the nonvolatile memory of FIG. 1;
FIG. 13 is a diagram for explaining a 3D V-NAND structure applicable to the nonvolatile memory of FIG. 1;
fig. 14 is a diagram for explaining a BVNAND structure applicable to the nonvolatile memory of fig. 1;
FIG. 15 is a block diagram illustrating a storage system according to some other example embodiments;
FIG. 16 is a block diagram illustrating a storage system according to some other example embodiments;
- FIGS. 17 to 19 are block diagrams for explaining the operation of a storage system according to some other example embodiments;
FIG. 20 is a block diagram illustrating the operation of a storage system according to some other example embodiments;
FIG. 21 is a diagram for explaining the status command of FIG. 20;
FIG. 22 is a block diagram illustrating the operation of a storage system according to some other example embodiments; and
fig. 23 is a diagram illustrating a data center to which a storage system according to some other example embodiments is applied.
Detailed Description
FIG. 1 is a block diagram illustrating a storage system according to some example embodiments.
Referring to fig. 1, a storage system 1 according to some example embodiments may include a host 100 and a storage device 200.
The host 100 may be or comprise, for example, a PC (personal computer), a laptop computer, a mobile phone, a smartphone, a tablet PC, etc.
The host 100 may include a host controller 110 and a host memory 120. The host memory 120 may serve as a buffer memory for temporarily storing data to be transmitted to the storage device 200 or data transmitted from the storage device 200.
According to example embodiments, the host controller 110 and the host memory 120 may be implemented as separate semiconductor chips. Alternatively, in an example embodiment, the host controller 110 and the host memory 120 may be integrated on the same semiconductor chip. As an example, the host controller 110 may be one of a plurality of modules provided in an application processor, and the application processor may be implemented as a system on chip (SoC). Further, the host memory 120 may be an embedded memory provided within the application processor, or a non-volatile memory or memory module disposed outside the application processor.
The host controller 110 may manage an operation of storing data (e.g., write data) of the host memory 120 in the non-volatile memory (NVM) 220 of the storage device 200, or storing data (e.g., read data) of the non-volatile memory 220 of the storage device 200 in the host memory 120.
The storage device 200 may be a storage medium that stores data in response to requests from the host 100. As an example, the storage device 200 may be an SSD (solid state drive), an embedded memory, a removable external memory, or the like. When the storage device 200 is an SSD, the storage device 200 may be a device conforming to the NVMe (non-volatile memory express) standard. When the storage device 200 is an embedded memory or an external memory, the storage device 200 may be a device conforming to the UFS (universal flash storage) or eMMC (embedded multimedia card) standard. The host 100 and the storage device 200 may each generate and transmit packets conforming to the adopted standard protocol.
The storage device 200 may include a storage controller 210 and a non-volatile memory (NVM) 220.
When the non-volatile memory 220 of the storage device 200 includes flash memory, the flash memory may include a 2D NAND memory array or a 3D (vertical) NAND (VNAND) memory array. As another example, the storage device 200 may include various other types of non-volatile memory, such as MRAM (magnetic RAM), spin-transfer torque MRAM, CBRAM (conductive bridge RAM), FeRAM (ferroelectric RAM), PRAM (phase-change RAM), and RRAM (resistive RAM).
The storage controller 210 may include a command distribution module 211, a plurality of cores (multi-core) 212, a flash translation layer (FTL) 214, a packet manager 215, a buffer memory 216, an ECC (error correction code) engine 217, an AES (advanced encryption standard) engine 218, a host interface (host I/F) 219_1, and a memory interface (memory I/F) 219_2. The command distribution module 211, the plurality of cores 212, the flash translation layer 214, the packet manager 215, the buffer memory 216, the ECC engine 217, and the AES engine 218 may be electrically connected to each other through the bus 205.
The storage controller 210 may further include a working memory (not shown) into which the flash translation layer 214 is loaded; write and read operations on the non-volatile memory may be controlled by the plurality of cores 212 executing the flash translation layer.
Host interface 219_1 may send packets to host 100 and receive packets from host 100. The packet transmitted from the host 100 to the host interface 219_1 may include a command or data to be recorded in the nonvolatile memory 220, etc., and the packet to be transmitted from the host interface 219_1 to the host 100 may include a response to the command, data read from the nonvolatile memory 220, etc. The memory interface 219_2 may transmit data to be recorded in the nonvolatile memory 220 to the nonvolatile memory 220 or receive data read from the nonvolatile memory 220. The memory interface 219_2 may be implemented to comply with a standard protocol such as Toggle or ONFI.
The state of the command distribution module 211 may be set according to a setting command provided from the host 100. The command distribution module 211 may distribute commands to the plurality of cores 212 according to the set state. For example, the command distribution module 211 may distribute ADMIN commands and/or I/O (input/output) commands provided from the host 100 to the plurality of cores 212. This is explained in detail below with reference to figs. 2 to 8.
The plurality of cores 212 may perform operations indicated by the command distributed from the command distribution module 211. For example, the plurality of cores 212 may perform a write operation according to a write command distributed from the command distribution module 211, and may perform a read operation according to a read command distributed from the command distribution module 211.
Each of the plurality of cores 212 may be or include a Central Processing Unit (CPU), a controller, an Application Specific Integrated Circuit (ASIC), and the like. The plurality of cores 212 may be homogeneous or heterogeneous multi-cores.
The flash translation layer 214 can perform various functions such as address mapping, wear leveling, and garbage collection. The address mapping operation is an operation of changing a logical address received from the host into a physical address for actually storing data in the nonvolatile memory 220. Wear leveling is a technique for ensuring that blocks in non-volatile memory 220 are used evenly to prevent excessive degradation of particular blocks, and may be implemented, for example, by firmware techniques that balance erase counts of physical blocks. Garbage collection is a technique for ensuring available capacity in the non-volatile memory 220 by a method of copying useful data of a block to a new block and then erasing an existing block.
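The address-mapping operation described above can be sketched in a few lines. This is an illustrative toy model, not the patent's FTL: all names are invented, and a real FTL would also implement wear leveling, garbage collection, and power-loss recovery.

```python
# Toy flash translation layer: maps logical block addresses (LBAs) from
# the host onto physical pages. Because flash pages cannot be updated in
# place, an overwrite remaps the LBA to a fresh page and the old page
# becomes garbage (to be reclaimed later by garbage collection).
class SimpleFTL:
    def __init__(self, num_pages):
        self.l2p = {}                       # logical-to-physical mapping table
        self.free_pages = list(range(num_pages))

    def write(self, lba, data, flash):
        phys = self.free_pages.pop(0)       # take the next free physical page
        flash[phys] = data
        self.l2p[lba] = phys                # remap; prior page is now stale

    def read(self, lba, flash):
        return flash[self.l2p[lba]]         # translate LBA, then fetch

flash = {}
ftl = SimpleFTL(num_pages=8)
ftl.write(0, b"hello", flash)
ftl.write(0, b"world", flash)               # overwrite lands on a new page
print(ftl.read(0, flash))                   # b'world'
```

Note that after the overwrite, two physical pages hold data but only one is reachable through the mapping table; reclaiming the stale one is exactly the garbage-collection task described above.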
The packet manager 215 may generate packets according to the interface protocol negotiated with the host 100, or may parse various types of information from packets received from the host 100. In addition, the buffer memory 216 may temporarily store data to be written to the nonvolatile memory 220 or data read from the nonvolatile memory 220. The buffer memory 216 may be provided inside the storage controller 210 or outside it.
The ECC engine 217 may perform error detection and correction functions on data read from the non-volatile memory 220. For example, the ECC engine 217 may generate parity bits for data to be written to the non-volatile memory 220, and the parity bits generated in this manner may be stored in the non-volatile memory 220 along with the written data. When reading data from the non-volatile memory 220, the ECC engine 217 may correct errors of the read data using parity bits read from the non-volatile memory 220 together with the read data, and may output error-corrected data.
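The generate-parity-on-write, correct-on-read cycle can be illustrated with a minimal Hamming(7,4) code. This is a teaching stand-in only; production SSD ECC engines typically use far stronger codes such as BCH or LDPC, and the patent does not specify which code its ECC engine uses.

```python
# Hamming(7,4): 3 parity bits protect 4 data bits; any single-bit error
# in the 7-bit codeword can be located and corrected.
def encode(d):                        # d = [d1, d2, d3, d4], bits 0/1
    p1 = d[0] ^ d[1] ^ d[3]           # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]           # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]           # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):                       # c = 7-bit codeword, maybe 1 bit flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # repair the single-bit error
    return [c[2], c[4], c[5], c[6]]   # recovered data bits

cw = encode([1, 0, 1, 1])             # "write": data + generated parity
cw[4] ^= 1                            # simulate a single-bit read error
print(correct(cw))                    # [1, 0, 1, 1]
```

The structure mirrors the paragraph above: parity is computed and stored alongside the data on write, and on read the parity recomputed from the (possibly corrupted) word pinpoints and repairs the error.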
Fig. 2 is a diagram for explaining an operation of the memory system of fig. 1. Fig. 3 and 4 are diagrams for explaining the set command of fig. 2.
Referring to fig. 2, the host 100 may provide a setting command to the command distribution module 211 (S130). The setting command is a command by which the host 100 configures the storage device 200, and may include information about the state of the command distribution module 211 (e.g., as a feature command).
For example, referring to fig. 3, when the host interface (219_1 of fig. 1) is NVMe, the setting command may be a Set Features command 1000. The Set Features command 1000 may include a field 1100 containing a feature identifier. The feature identifier indicates the feature of the storage device 200 that the host 100 intends to set, and the field 1100 may include information about the state of the command distribution module 211.
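As a sketch, the first dwords of such a command could be packed the way a host driver builds an NVMe admin submission queue entry. The Set Features opcode (0x09) and the placement of the feature identifier in the low byte of CDW10 follow the NVMe specification; the feature ID value 0xC0 and the helper name are hypothetical, standing in for whatever vendor-specific feature would carry the command distribution state.

```python
# Pack a 64-byte NVMe submission queue entry for Set Features.
# Layout (per NVMe): CDW0 = opcode | command ID << 16; CDW10 bits 07:00
# hold the feature identifier (FID); CDW11 holds the feature value.
import struct

NVME_ADMIN_SET_FEATURES = 0x09

def build_set_features(feature_id, value, cid=1):
    cdw0 = NVME_ADMIN_SET_FEATURES | (cid << 16)
    cdw10 = feature_id & 0xFF           # FID field
    cdw11 = value                       # feature-specific value (the state)
    # 16 little-endian dwords = 64 bytes; unused dwords zeroed
    return struct.pack("<16I", cdw0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                       cdw10, cdw11, 0, 0, 0, 0)

# Hypothetical vendor-specific FID 0xC0 carrying "state 1" (normal state)
sqe = build_set_features(feature_id=0xC0, value=1)
print(len(sqe))                         # 64
```

The device-side counterpart would parse CDW10/CDW11 out of the entry and hand the state value to the command distribution module.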
In another example, referring to fig. 4, when the host interface (219_1 of fig. 1) is SAS (serial attached SCSI), the setting command may be a Mode Select command 1500. The Mode Select command 1500 may include a region 1510 containing a page code. The page code indicates the mode page of the storage device 200 that the host 100 intends to select, and the region 1510 may include information about the state of the command distribution module 211.
In another example (not shown), when the host interface (219_1 of fig. 1) is SATA, the setting command may be a Set Features command, whose subcommand value may include information about the state of the command distribution module 211. The present disclosure is not limited thereto; the setting command may be any command that includes information about the state of the command distribution module 211.
Referring back to fig. 2, the state of the command distribution module 211 may be set according to the setting command (S140). The states may include, for example, first to fourth states that are different from each other. The present disclosure is not limited thereto; the definition and the number of the states may vary according to the setting command of the host 100. A detailed description is given below with reference to figs. 5 to 8.
The command distribution module 211 may issue a response notifying the host 100 that the state has been set according to the setting command (S150).
The host 100 may provide the I/O command (PERF command) and the ADMIN command to the command distribution module 211 (S160).
In an example embodiment, an I/O command (PERF command) refers to a command directing an operation of receiving data from the host 100 or outputting data to the host 100. The I/O commands (PERF commands) may include, for example, write commands and/or read commands.
In an example embodiment, the ADMIN command refers to a command used by the host 100 to manage the storage device (200 of fig. 1). The ADMIN command may include, for example, a read command or a write command for firmware metadata. Such a command may be generated when an event such as a sudden power-off (SPO) occurs.
When the host interface (219_1 of fig. 1) is NVMe, the I/O command (PERF command) may be an NVM I/O command among the NVMe commands, and the ADMIN command may be an Admin command among the NVMe commands.
The command distribution module 211 may distribute an I/O command (PERF command) and an ADMIN command to the plurality of cores 212 according to the set state (S170).
The plurality of cores 212 may perform the operation indicated by the distributed I/O command (PERF command) and the operation indicated by the distributed ADMIN command (S180).
The plurality of cores 212 may issue a response to notify the host 100 that the operation indicated by the distributed I/O command (PERF command) and the operation indicated by the distributed ADMIN command have been performed (S190).
In a storage system according to some example embodiments, the host 100 may provide the setting command according to the environment of the storage system or the like; commands thus need not be distributed evenly across the cores 212, but may instead be distributed to the cores 212 according to the setting command. Accordingly, the utilization of each core 212 can be improved, and commands can be processed efficiently in a variety of storage-system environments.
Fig. 5 to 8 are diagrams for explaining the operation of the command distribution module of fig. 1.
Referring to fig. 5, the command distribution module 211 may be set to a first state 211_a. The first state 211_a may be, for example, a normal state, and is described as such hereinafter.

The normal state 211_a may be the default state of the command distribution module 211. When the host 100 does not provide a setting command, the command distribution module 211 may be in the normal state 211_a.
The command distribution module 211 in the normal state 211_a may distribute ADMIN commands (ADMIN CMD) to one core 212_1 (ADMIN core) of the plurality of cores 212 and may distribute I/O commands (PERF CMD) to the remaining cores 212_2 to 212_n (PERF cores). The present disclosure is not limited thereto, and the command distribution module 211 may distribute ADMIN commands (ADMIN CMD) to two or more cores among the plurality of cores 212.
The plurality of cores 212_1 to 212_n may thus be divided into a core 212_1 that performs operations indicated by ADMIN commands (ADMIN CMD) for managing the storage device and cores 212_2 to 212_n that perform operations indicated by I/O commands (PERF CMD) provided from the host. Accordingly, even if the pattern of I/O commands (PERF CMD) provided from the host changes, since the cores 212_2 to 212_n to which the I/O commands (PERF CMD) are distributed are fixed, the operations indicated by the I/O commands (PERF CMD) can be performed more stably.
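The normal-state distribution described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosure: the class name and the round-robin policy for the PERF cores are assumptions (the patent does not specify how I/O commands are balanced among the remaining cores).

```python
from itertools import cycle

class NormalStateDistributor:
    """Hypothetical normal-state (211_a) dispatcher: one ADMIN core, the rest PERF."""
    def __init__(self, num_cores):
        self.admin_core = 0                           # core 212_1 in fig. 5
        self.perf_cores = cycle(range(1, num_cores))  # cores 212_2 .. 212_n

    def distribute(self, cmd_type):
        if cmd_type == "ADMIN":
            return self.admin_core
        return next(self.perf_cores)                  # assumed round-robin for PERF

dist = NormalStateDistributor(num_cores=4)
targets = [dist.distribute(t) for t in ["ADMIN", "PERF", "PERF", "PERF", "PERF"]]
print(targets)  # [0, 1, 2, 3, 1]
```

Because the PERF cores are fixed regardless of the host's command pattern, a change in I/O traffic never displaces the ADMIN core's work.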
Referring to fig. 6, the command distribution module 211 may be set to a second state 211_b. The second state 211_b may be, for example, a maximum operation state. Hereinafter, the second state 211_b will be described as the maximum operation state.
For example, a bottleneck may occur in the storage device due to firmware overhead. In this case, the host may provide a set command including the maximum operation state 211_b to the command distribution module 211.
The command distribution module 211 in the maximum operation state 211_b may distribute I/O commands (PERF CMD) provided from the host to all of the plurality of cores 212_1 to 212_n and may not distribute ADMIN commands. Accordingly, I/O commands (PERF CMD) may be distributed to all of the plurality of cores 212_1 to 212_n, and the plurality of cores 212_1 to 212_n may perform only the operations indicated by the I/O commands (PERF CMD) provided from the host. Therefore, the performance of the storage device can be further improved.
However, a set command provided from the host to set the state of the command distribution module 211 may still be processed. Thus, the command distribution module 211 may receive a set command and may be set to another state.
Referring to fig. 7, the command distribution module 211 may be set to a third state 211_c. The third state 211_c may be, for example, a low latency state. Hereinafter, the third state 211_c will be described as the low latency state.
For example, a delay may occur in a write operation or a read operation of the nonvolatile memory (220 of fig. 1). When a WRITE command (WRITE CMD) and a READ command (READ CMD) are distributed to the same core, execution of the operation indicated by the READ command (READ CMD) may be delayed by execution of the operation indicated by the WRITE command (WRITE CMD), or vice versa. In this case, the host may provide a set command including the low latency state 211_c to the command distribution module 211.
The command distribution module 211 in the low latency state 211_c may distribute ADMIN commands (ADMIN CMD) to one core 212_1 of the plurality of cores 212_1 to 212_n, may distribute WRITE commands (WRITE CMD) to some other cores 212_2 to 212_m, and may distribute READ commands (READ CMD) to the remaining cores 212_m+1 to 212_n. The present disclosure is not limited thereto, and the command distribution module 211 may distribute ADMIN commands (ADMIN CMD) to two or more cores among the plurality of cores 212_1 to 212_n.
Accordingly, the plurality of cores 212_1 to 212_n may be divided into a core 212_1 that performs operations indicated by ADMIN commands (ADMIN CMD) for managing the storage device, cores 212_2 to 212_m that perform operations indicated by WRITE commands (WRITE CMD) provided from the host, and cores 212_m+1 to 212_n that perform operations indicated by READ commands (READ CMD) provided from the host. Accordingly, latency can be reduced when performing an operation indicated by a READ command (READ CMD) or a WRITE command (WRITE CMD).
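The low-latency partitioning above can be sketched as separate write and read core pools so that reads never queue behind writes on the same core. This is an illustrative sketch; the function name, pool sizes, and round-robin policy within each pool are assumptions, not part of the disclosure.

```python
from itertools import cycle

def make_low_latency_distributor(num_cores, num_write_cores):
    """Hypothetical low-latency-state (211_c) dispatcher: ADMIN / WRITE / READ pools."""
    write_pool = cycle(range(1, 1 + num_write_cores))         # cores 212_2 .. 212_m
    read_pool = cycle(range(1 + num_write_cores, num_cores))  # cores 212_m+1 .. 212_n

    def distribute(cmd_type):
        if cmd_type == "ADMIN":
            return 0                  # core 212_1 handles ADMIN commands
        if cmd_type == "WRITE":
            return next(write_pool)   # writes stay in their own pool
        return next(read_pool)        # reads never contend with writes

    return distribute

distribute = make_low_latency_distributor(num_cores=6, num_write_cores=2)
print([distribute(t) for t in ["ADMIN", "WRITE", "READ", "WRITE", "READ"]])
# [0, 1, 3, 2, 4]
```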
Referring to fig. 8, the command distribution module 211 may be set to a fourth state 211_d. The fourth state 211_d may be, for example, a low power state. Hereinafter, the fourth state 211_d will be described as the low power state.
For example, the storage device may be in an idle state. In this case, the host may provide a set command including the low power state 211_d to the command distribution module 211.
The command distribution module 211 in the low power state 211_d may distribute ADMIN commands (ADMIN CMD) and I/O commands (PERF CMD) to one core 212_1 of the plurality of cores 212_1 to 212_n. The present disclosure is not limited thereto, and the command distribution module 211 may distribute ADMIN commands (ADMIN CMD) and I/O commands (PERF CMD) to only some of the plurality of cores 212_1 to 212_n.
Thus, for example, only the core 212_1 among the plurality of cores 212_1 to 212_n may perform the operations indicated by ADMIN commands (ADMIN CMD) and I/O commands (PERF CMD), while the remaining cores remain idle. Therefore, the power consumption of the storage device can be reduced.
The definition of the states and the number of states are not limited to those shown in fig. 5 to 8 and may vary. Depending on the requirements of the host 100 and/or the state of the storage device, the host 100 may provide the command distribution module 211 with a set command including a state different from those shown in fig. 5 to 8. For example, the host 100 may provide the command distribution module 211 with a set command including a state in which I/O commands provided from the host 100 are not distributed to at least one of the plurality of cores 212_1 to 212_n, and garbage collection commands generated within the storage device are distributed to that core instead.
Fig. 9 is a block diagram for explaining a storage system according to some other example embodiments. Fig. 10 and 11 are diagrams for explaining the operation of the storage system of fig. 9. Points different from those described with reference to fig. 1 to 8 will be mainly explained.
Referring to fig. 9, the storage system 2 according to some other example embodiments may further include a self-configuration module 213. The command distribution module 211, the plurality of cores (multi-core) 212, the self-configuration module 213, the Flash Translation Layer (FTL) 214, the packet manager 215, the buffer memory 216, the ECC engine (ECC) 217, and the AES engine (AES) 218 may be electrically connected to each other through the bus 205.
The self-configuration module 213 may monitor the plurality of cores 212. The self-configuration module 213 may monitor, for example, the type of I/O commands provided from the host 100, the latency of I/O commands, the size of pending requests in the plurality of cores 212, the queue depth of the plurality of cores 212, or the interval at which I/O commands are provided from the host 100. The queue depth may be the number of pending commands in the plurality of cores 212. Here, the size of the pending requests may be the product of the size of a pending command in the plurality of cores 212 and the queue depth (number of commands).
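The monitored quantity just defined is a simple product, which can be checked with a small worked example. The function name and the sample values (128 KB commands, queue depth 32) are illustrative assumptions, not from the disclosure.

```python
def pending_request_size(cmd_size_bytes, queue_depth):
    """Size of pending requests = per-command size x queue depth (number of commands)."""
    return cmd_size_bytes * queue_depth

# Hypothetical sample: 128 KB pending commands at a queue depth of 32.
size = pending_request_size(cmd_size_bytes=128 * 1024, queue_depth=32)
print(size)                       # 4194304 bytes
print(size >= 4 * 1024 * 1024)    # True: meets the 4 MB threshold of fig. 10
```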
The self-configuration module 213 may monitor the plurality of cores 212 to generate a set command. The command distribution module 211 may receive the set command from the self-configuration module 213 and may set its state according to the set command. Hereinafter, a detailed description will be given with reference to fig. 10 and 11.
The self-configuration module 213 may be implemented as software or firmware, such as an application program executed on the storage device 200.
Referring to fig. 9 and 10, the self-configuration module 213 may generate a set command according to predetermined conditions. The initial state of the command distribution module 211 may be the normal state 211_a.
In an example embodiment, when the monitoring result indicates that the pending requests in the plurality of cores 212 have a size of 4 MB or more, the latency of read commands is 10 ms or less, and the latency of write commands is 3 s or less, the self-configuration module 213 may generate a set command including the maximum operation state 211_b. As a result, the command distribution module 211 may be set to the maximum operation state 211_b and may distribute commands to the plurality of cores 212 according to the maximum operation state 211_b.
For example, when the monitoring result indicates that no I/O command has been provided from the host 100 to the storage device 200 for a preset time, the self-configuration module 213 may generate a set command including the low power state 211_d. Accordingly, the command distribution module 211 may be set to the low power state 211_d and may distribute commands to the plurality of cores 212 according to the low power state 211_d. The preset time may be, for example, 5 minutes, but the present disclosure is not limited thereto.
For example, when the latency of read commands exceeds 10 ms and the latency of write commands exceeds 3 s, the self-configuration module 213 may generate a set command including the normal state 211_a. As a result, the command distribution module 211 may be set to the normal state 211_a and may distribute commands to the plurality of cores 212 according to the normal state 211_a.
In an example embodiment, when the monitoring result indicates that write commands and read commands are provided from the host 100 to the storage device 200 in a mixed pattern, the queue depth of read commands is 10 or less, and the queue depth of write commands is 10 or less, the self-configuration module 213 may generate a set command including the low latency state 211_c. Thus, the command distribution module 211 may be set to the low latency state 211_c and may distribute commands to the plurality of cores 212 according to the low latency state 211_c.
In an example embodiment, when the monitoring result indicates that the queue depth of read commands exceeds 10 and the queue depth of write commands exceeds 10, the self-configuration module 213 may generate a set command including the normal state 211_a. Accordingly, the command distribution module 211 may be set to the normal state 211_a and may distribute commands to the plurality of cores 212 according to the normal state 211_a.
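The self-configuration rules above can be sketched as a single selection function. The thresholds (4 MB, 10 ms, 3 s, queue depth 10, 5 minutes idle) are taken from the conditions described for fig. 10, but the rule ordering, metric names, and function name are illustrative assumptions; the disclosure does not specify a precedence among the conditions.

```python
def select_state(metrics):
    """Hypothetical sketch of the self-configuration module's state selection."""
    if metrics["idle_seconds"] >= 5 * 60:          # no host I/O for a preset time
        return "LOW_POWER"
    if (metrics["pending_bytes"] >= 4 * 2**20      # pending requests >= 4 MB
            and metrics["read_latency_ms"] <= 10
            and metrics["write_latency_s"] <= 3):
        return "MAX_OPERATION"
    if (metrics["mixed_rw"]                        # mixed read/write traffic
            and metrics["read_qd"] <= 10
            and metrics["write_qd"] <= 10):
        return "LOW_LATENCY"
    return "NORMAL"                                # e.g. queue depths exceed 10

m = {"idle_seconds": 0, "pending_bytes": 8 * 2**20, "read_latency_ms": 4,
     "write_latency_s": 1, "mixed_rw": True, "read_qd": 3, "write_qd": 2}
print(select_state(m))  # MAX_OPERATION
```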
The definitions of the states 211_a, 211_b, 211_c, and 211_d, the number of states, and the conditions set for each state are not limited to those shown in fig. 10 and may vary. The definitions of the states, the number of states, and the conditions for each state may be preset by a manufacturer at the time of manufacturing the storage controller and/or may be preset by a user.
Referring to fig. 11, the command distribution module 211 may receive a set command (Set Features command) from the host 100 (S230) and may receive a state from the self-configuration module 213 (S235). The command distribution module 211 may receive the set command from the host 100 and the state from the self-configuration module 213 at the same time. In this case, the state of the command distribution module 211 may be set according to the set command provided from the host 100 (S240). That is, the set command provided from the host 100 may override the state provided from the self-configuration module 213. However, the present disclosure is not limited thereto, and depending on the configuration, the state provided from the self-configuration module 213 may be prioritized over the set command provided from the host 100.
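The priority rule just described can be sketched as a small resolution function: when a host set command and a self-configuration state arrive together, the host's set command wins by default, though the opposite priority is also possible. The function name, state labels, and the `host_has_priority` flag are illustrative assumptions.

```python
def resolve_state(host_state, self_config_state, host_has_priority=True):
    """Pick the state to apply when the host and the self-configuration
    module both supply one (hypothetical sketch of S235-S240)."""
    if host_state is not None and self_config_state is not None:
        return host_state if host_has_priority else self_config_state
    # Only one source supplied a state: use whichever is present.
    return host_state if host_state is not None else self_config_state

print(resolve_state("LOW_LATENCY", "NORMAL"))                           # LOW_LATENCY
print(resolve_state("LOW_LATENCY", "NORMAL", host_has_priority=False))  # NORMAL
print(resolve_state(None, "LOW_POWER"))                                 # LOW_POWER
```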
Subsequently, as described above with reference to fig. 2, the command distribution module 211 may issue a response notifying the host 100 that the state has been set according to the set command (S250), and the host 100 may provide I/O commands and ADMIN commands to the command distribution module 211 (S260). The command distribution module 211 may distribute the I/O commands and the ADMIN commands to the plurality of cores according to the set state (S270). Each core may perform the operations indicated by the distributed commands (S280), and the plurality of cores may issue responses notifying the host 100 that the operations indicated by the distributed commands have been performed (S290).
Fig. 12 is a block diagram for explaining the nonvolatile memory of fig. 1.
Referring to fig. 12, the nonvolatile memory 300 may include a control logic circuit 320, a memory cell array 330, a page buffer unit 340, a voltage generator 350, and a row decoder 360. Although not shown in detail in fig. 12, the nonvolatile memory 300 may further include a memory interface circuit 310, and may further include column logic, a pre-decoder, a temperature sensor, a command decoder, an address decoder, and the like.
The control logic circuit 320 may generally control various operations within the nonvolatile memory 300. The control logic circuit 320 may output various control signals in response to a command CMD and/or an address ADDR from the memory interface circuit 310. For example, the control logic circuit 320 may output a voltage control signal CTRL_vol, a row address X-ADDR, and a column address Y-ADDR.
The memory cell array 330 may include a plurality of memory blocks BLK1 through BLKz (z is a positive integer), and each of the plurality of memory blocks BLK1 through BLKz may include a plurality of memory cells. The memory cell array 330 may be connected to the page buffer unit 340 through a bit line BL, and may be connected to the row decoder 360 through a word line WL, a string select line SSL, and a ground select line GSL.
In example embodiments, the memory cell array 330 may include a three-dimensional memory cell array, and the three-dimensional memory cell array may include a plurality of NAND strings. Each NAND string may include memory cells respectively connected to word lines vertically stacked on a substrate. U.S. Patent No. 7,679,133, U.S. Patent No. 8,553,466, U.S. Patent No. 8,654,587, U.S. Patent No. 8,559,235, and U.S. Patent Application Publication No. 2011/0233648 are incorporated herein by reference. In example embodiments, the memory cell array 330 may include a two-dimensional memory cell array, and the two-dimensional memory cell array may include a plurality of NAND strings disposed along row and column directions.
The page buffer unit 340 may include a plurality of page buffers PB1 through PBn (n is an integer of 3 or more), and the plurality of page buffers PB1 through PBn may be respectively connected to the memory cells through a plurality of bit lines BL. The page buffer unit 340 may select at least one bit line among the bit lines BL in response to the column address Y-ADDR. The page buffer unit 340 may operate as a write driver or a sense amplifier according to an operation mode. For example, during a program operation, the page buffer unit 340 may apply a bit line voltage corresponding to data to be programmed to a selected bit line. During a read operation, the page buffer unit 340 may sense a current or voltage of a selected bit line to detect data stored in a memory cell.
The voltage generator 350 may generate various types of voltages for performing a program operation, a read operation, and an erase operation based on the voltage control signal CTRL _ vol. For example, the voltage generator 350 may generate a program voltage, a read voltage, a program verify voltage, an erase voltage, etc. as the word line voltage VWL.
The row decoder 360 may select one of the plurality of word lines WL in response to the row address X-ADDR and may select one of the plurality of string selection lines SSL. For example, during a program operation, the row decoder 360 may apply a program voltage and a program verify voltage to the selected word line, and during a read operation, the row decoder 360 may apply a read voltage to the selected word line.
Fig. 13 is a diagram for explaining a 3D V-NAND structure applicable to the nonvolatile memory of fig. 1. When the nonvolatile memory of fig. 1 is implemented as a 3D V-NAND type flash memory, each of a plurality of memory blocks BLK1 through BLKz constituting the memory cell array 330 of the nonvolatile memory may be represented by an equivalent circuit as shown in fig. 13.
Referring to fig. 13, the memory block BLKi may be a three-dimensional memory block formed on a substrate in a three-dimensional structure. For example, a plurality of memory NAND strings included in the memory block BLKi may be formed in a direction perpendicular to the substrate.
The memory block BLKi may include a plurality of memory NAND strings NS11, NS12, NS13, NS21, NS22, NS23, NS31, NS32, and NS33 connected between the bit lines BL1, BL2, and BL3 and the common source line CSL. The memory NAND strings NS11 to NS33 may each include a string selection transistor SST, a plurality of memory cells MC1, MC2, ..., and MC8, and a ground selection transistor GST. Although fig. 13 illustrates that the plurality of memory NAND strings NS11 to NS33 each include eight memory cells MC1, MC2, ..., and MC8, example embodiments are not limited thereto.
The string selection transistors SST may be connected to corresponding string selection lines SSL1, SSL2, and SSL3. The plurality of memory cells MC1, MC2, ..., and MC8 may be connected to corresponding gate lines GTL1, GTL2, ..., and GTL8. The gate lines GTL1, GTL2, ..., and GTL8 may correspond to word lines, and some of the gate lines GTL1, GTL2, ..., and GTL8 may correspond to dummy word lines. The ground selection transistors GST may be connected to corresponding ground selection lines GSL1, GSL2, and GSL3. The string selection transistors SST may be connected to corresponding bit lines BL1, BL2, and BL3, and the ground selection transistors GST may be connected to the common source line CSL.
Word lines of the same height (e.g., WL1) may be connected in common, and the ground selection lines GSL1, GSL2, and GSL3 and the string selection lines SSL1, SSL2, and SSL3 may be separated from each other. Although fig. 13 illustrates that the memory block BLKi is connected to eight gate lines GTL1, GTL2, ..., and GTL8 and three bit lines BL1, BL2, and BL3, example embodiments are not limited thereto.
Fig. 14 is a diagram for explaining a BVNAND structure applicable to the nonvolatile memory of fig. 1.
Referring to fig. 14, the nonvolatile memory 300 may have a C2C (chip-to-chip) structure. The C2C structure may refer to a structure in which an upper chip including a CELL region CELL is fabricated on a first wafer, a lower chip including a peripheral circuit region PERI is fabricated on a second wafer different from the first wafer, and the upper chip and the lower chip are then bonded to each other through a bonding operation. As an example, the bonding operation may refer to an operation of electrically connecting a bonding metal formed on the uppermost metal layer of the upper chip and a bonding metal formed on the uppermost metal layer of the lower chip. For example, when the bonding metal is formed of copper (Cu), the bonding operation may be performed in a Cu-to-Cu bonding manner. Alternatively, the bonding metal may be formed of aluminum or tungsten.
Each of the peripheral circuit region PERI and the CELL region CELL of the nonvolatile memory 300 according to some example embodiments may include an outer pad bonding region PA, a word line bonding region WLBA, and a bit line bonding region BLBA.
The peripheral circuit region PERI may include a first substrate 1210, an interlayer insulating layer 1215, a plurality of circuit elements 1220a, 1220b, and 1220c formed on the first substrate 1210, first metal layers 1230a, 1230b, and 1230c respectively connected to the plurality of circuit elements 1220a, 1220b, and 1220c, and second metal layers 1240a, 1240b, and 1240c formed on the first metal layers 1230a, 1230b, and 1230c. In example embodiments, the first metal layers 1230a, 1230b, and 1230c may be formed of tungsten having a relatively high resistance, and the second metal layers 1240a, 1240b, and 1240c may be formed of copper having a relatively low resistance.
In this specification, although only the first metal layers 1230a, 1230b, and 1230c and the second metal layers 1240a, 1240b, and 1240c are shown and described, example embodiments are not limited thereto, and one or more additional metal layers may be further formed on the second metal layers 1240a, 1240b, and 1240c. At least a portion of the one or more metal layers formed on the second metal layers 1240a, 1240b, and 1240c may be formed of aluminum or the like having a lower resistance than the copper forming the second metal layers 1240a, 1240b, and 1240c.
An interlayer insulating layer 1215 may be formed on the first substrate 1210 to cover the plurality of circuit elements 1220a, 1220b, and 1220c, the first metal layers 1230a, 1230b, and 1230c, and the second metal layers 1240a, 1240b, and 1240c, and may include an insulating material such as silicon oxide and silicon nitride.
Lower bonding metals 1271b and 1272b may be formed on the second metal layer 1240b of the word line bonding region WLBA. In the word line bonding region WLBA, the lower bonding metals 1271b and 1272b of the peripheral circuit region PERI may be electrically connected to upper bonding metals 1371b and 1372b of the CELL region CELL by bonding. The lower bonding metals 1271b and 1272b and the upper bonding metals 1371b and 1372b may be formed of aluminum, copper, tungsten, or the like.
The CELL region CELL may provide at least one memory block. The CELL region CELL may include a second substrate 1310 and a common source line 1320 (corresponding to CSL of fig. 13). A plurality of word lines 1331 to 1338 (collectively 1330, corresponding to WL1 to WL8 of fig. 13) may be stacked on the second substrate 1310 in a third direction z perpendicular to the upper surface of the second substrate 1310. String selection lines and ground selection lines may be disposed above and below the word lines 1330, respectively, and the word lines 1330 may be disposed between the string selection lines and the ground selection lines.
In the bit line bonding area BLBA, a channel structure CH may extend in a direction perpendicular to the upper surface of the second substrate 1310 and may penetrate the word lines 1330, the string selection lines, and the ground selection lines. The channel structure CH may include a data storage layer, a channel layer, a buried insulating layer, and the like, and the channel layer may be electrically connected to a first metal layer 1350c and a second metal layer 1360c. For example, the first metal layer 1350c may be a bit line contact, and the second metal layer 1360c may be a bit line (corresponding to BL1 through BL3 of fig. 13). In example embodiments, the bit line 1360c may extend in a second direction y parallel to the upper surface of the second substrate 1310.
In the example embodiment shown in fig. 14, a region where the channel structure CH, the bit line 1360c, and the like are disposed may be defined as a bit line bonding region BLBA. The bit line 1360c may be electrically connected to a circuit element 1220c (which may provide a page buffer 1393) in the peripheral circuit region PERI in the bit line bonding region BLBA. As an example, the bit line 1360c may be connected to the upper bonding metals 1371c and 1372c in the CELL region CELL, and the upper bonding metals 1371c and 1372c may be connected to the lower bonding metals 1271c and 1272c connected to the circuit element 1220c of the page buffer 1393.
In the word line bonding area WLBA, the word lines 1330 may extend in a first direction x parallel to the upper surface of the second substrate 1310 and may be connected to a plurality of cell contact plugs 1341 to 1347 (collectively 1340). The word lines 1330 and the cell contact plugs 1340 may be connected at pads provided by at least some of the word lines 1330 extending along the first direction x by different lengths. A first metal layer 1350b and a second metal layer 1360b may be sequentially connected to upper portions of the cell contact plugs 1340 connected to the word lines 1330. The cell contact plugs 1340 may be connected to the peripheral circuit region PERI through the upper bonding metals 1371b and 1372b of the CELL region CELL and the lower bonding metals 1271b and 1272b of the peripheral circuit region PERI in the word line bonding region WLBA.
The cell contact plug 1340 may be electrically connected to the circuit element 1220b (which may provide the row decoder 1394) in the peripheral circuit region PERI. In an example embodiment, the operating voltage of the circuit element 1220b providing the row decoder 1394 may be different from the operating voltage of the circuit element 1220c providing the page buffer 1393. As an example, the operating voltage of the circuit element 1220c providing the page buffer 1393 may be higher than the operating voltage of the circuit element 1220b providing the row decoder 1394.
A common source line contact plug 1380 may be formed in the outer pad bonding region PA. The common source line contact plug 1380 may be formed of a conductive material such as a metal, a metal compound, or polysilicon, and may be electrically connected to the common source line 1320. A first metal layer 1350a and a second metal layer 1360a may be sequentially stacked on an upper portion of the common source line contact plug 1380. As an example, a region where the common source line contact plug 1380, the first metal layer 1350a, and the second metal layer 1360a are placed may be defined as the outer pad bonding region PA.
I/O pads 1205 and 1305 may be formed in the outer pad bonding area PA. A lower insulating film 1201 covering the lower surface of the first substrate 1210 may be formed under the first substrate 1210, and a first I/O pad 1205 may be formed on the lower insulating film 1201. The first I/O pad 1205 may be connected to at least one of the plurality of circuit elements 1220a, 1220b, and 1220c placed in the peripheral circuit region PERI through a first I/O contact plug 1203, and may be separated from the first substrate 1210 by the lower insulating film 1201. A side insulating film may be formed between the first I/O contact plug 1203 and the first substrate 1210, and may electrically separate the first I/O contact plug 1203 and the first substrate 1210.
An upper insulating film 1301 covering the upper surface of the second substrate 1310 may be formed over the second substrate 1310. A second I/O pad 1305 may be formed on the upper insulating film 1301. The second I/O pad 1305 may be connected to at least one of the plurality of circuit elements 1220a, 1220b, and 1220c in the peripheral circuit region PERI through the second I/O contact plug 1303.
In some example embodiments, the second substrate 1310, the common source line 1320, and the like may not be formed in the region where the second I/O contact plug 1303 is formed. In addition, the second I/O pad 1305 may not overlap the word line 1330 in the third direction z. Referring to fig. 14, the second I/O contact plug 1303 is separated from the second substrate 1310 in a direction parallel to the upper surface of the second substrate 1310, and may penetrate the interlayer insulating layer 1315 of the CELL region CELL to be connected to the second I/O pad 1305.
In some example embodiments, the first I/O pad 1205 and the second I/O pad 1305 may be selectively formed. As an example, the nonvolatile memory 300 according to some example embodiments may include only the first I/O pad 1205 on the first substrate 1210 or may include only the second I/O pad 1305 on the second substrate 1310. Alternatively, the non-volatile memory 300 may include both the first I/O pad 1205 and the second I/O pad 1305.
In each of the outer pad bonding region PA and the bit line bonding region BLBA included in the CELL region CELL and the peripheral circuit region PERI, a metal pattern of the uppermost metal layer may exist as a dummy pattern, or the uppermost metal layer may be omitted.
In the nonvolatile memory 300 according to some example embodiments, the lower metal pattern 1273a having the same shape as the upper metal pattern 1372a of the CELL region CELL may be formed on the uppermost metal layer of the peripheral circuit region PERI to correspond to the upper metal pattern 1372a formed in the uppermost metal layer of the CELL region CELL in the outer pad bonding region PA. The lower metal pattern 1273a formed on the uppermost metal layer of the peripheral circuit region PERI may not be connected to another contact in the peripheral circuit region PERI. Similarly, an upper metal pattern having the same shape as the lower metal pattern of the peripheral circuit region PERI may be formed on the upper metal layer of the CELL region CELL to correspond to the lower metal pattern formed in the uppermost metal layer of the peripheral circuit region PERI in the outer pad bonding region PA.
Lower bonding metals 1271b and 1272b may be formed on the second metal layer 1240b of the word line bonding area WLBA. In the word line bonding region WLBA, the lower bonding metals 1271b and 1272b of the peripheral circuit region PERI may be electrically connected to the upper bonding metals 1371b and 1372b of the CELL region CELL by bonding.
In the bit line bonding region BLBA, an upper metal pattern 1392 having the same shape as the lower metal pattern 1252 of the peripheral circuit region PERI may be formed on the uppermost metal layer of the CELL region CELL to correspond to the lower metal pattern 1252 formed on the uppermost metal layer of the peripheral circuit region PERI. No contact may be formed on the upper metal pattern 1392 formed on the uppermost metal layer of the CELL region CELL.
Fig. 15 is a block diagram for explaining a memory system according to some other example embodiments. Points different from those described with reference to fig. 1 to 14 will be mainly explained.
Referring to fig. 15, in the storage system 3 according to some other example embodiments, the command distribution module 212_1 may be implemented as software or firmware, such as an application program executed on the storage device 200. Thus, any one of the plurality of cores 212 may serve as the command distribution module 212_1.
FIG. 16 is a block diagram illustrating a memory system according to some other example embodiments. Points different from those described with reference to fig. 1 to 14 will be mainly explained.
Referring to fig. 16, storage system 4 according to some other example embodiments may also include machine learning logic 230.
For example, the machine learning logic 230 may analyze the hourly pattern of I/O commands provided from the host 100. Based on this analysis, the machine learning logic 230 may set the command distribution module 211 to an appropriate state for each hour. Accordingly, the state of the command distribution module 211 may be set in advance according to the operation of the host 100, and the storage system 4 according to some other example embodiments may thus process commands provided from the host 100 more efficiently.
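The hour-by-hour scheduling described above can be sketched as a learned table mapping hours to states. The schedule contents, hours, and function name here are entirely hypothetical, standing in for whatever pattern the machine learning logic would actually learn.

```python
# Hypothetical learned schedule: hour of day -> state to apply in advance.
learned_schedule = {
    9: "MAX_OPERATION",   # heavy I/O assumed at the start of the workday
    13: "LOW_LATENCY",    # mixed read/write traffic assumed at midday
    2: "LOW_POWER",       # host assumed idle overnight
}

def state_for_hour(hour, schedule, default="NORMAL"):
    """Return the pre-learned state for the given hour, falling back to normal."""
    return schedule.get(hour, default)

print(state_for_hour(9, learned_schedule))   # MAX_OPERATION
print(state_for_hour(17, learned_schedule))  # NORMAL
```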
Although the machine learning logic 230 may be included within the storage controller 210, the present disclosure is not limited thereto, and the machine learning logic 230 may be implemented within the storage system 4 as a component separate from the storage controller 210.
Figs. 17, 18, and 19 are block diagrams for explaining the operation of a storage system according to some other example embodiments. Points different from those described with reference to figs. 1 to 14 will be mainly explained.
Referring to fig. 17, a storage system 5a according to some other example embodiments may include a host 100 and a plurality of storage devices 400, 500, and 600.
The host 100 may provide Set commands (Set feature cmd_1, Set feature cmd_2, and Set feature cmd_3) to the plurality of storage devices 400, 500, and 600 according to the purpose of each of the storage devices 400, 500, and 600. At least some of the Set commands (Set feature cmd_1, Set feature cmd_2, and Set feature cmd_3) may include states different from each other, or may include the same state.
For example, the host 100 may use the second storage device 500 as a backup storage device of the first storage device 400, and may use the third storage device 600 as a spare storage device of the first storage device 400. Accordingly, the host 100 may provide a first Set command (Set feature cmd_1) including a normal state to the first storage device 400, a second Set command (Set feature cmd_2) including a low power state to the second storage device 500, and a third Set command (Set feature cmd_3) including a low power state to the third storage device 600. The first Set command may have a format according to the host interface.
Referring to fig. 18, in the storage system 5b, the first command distribution module 411 of the first storage device 400 may be set to the normal state, and the second command distribution module 511 of the second storage device 500 and the third command distribution module 611 of the third storage device 600 may be set to the low power state. As a result, the plurality of first cores of the first storage device 400 may be divided into the core 412_1, to which the ADMIN command (ADMIN CMD_1) is distributed, and the cores 412_2 to 412_n, to which the I/O commands (PERF CMD_1) are distributed. The plurality of second cores of the second storage device 500 may be divided into the core 512_1, to which the ADMIN command (ADMIN CMD_2) and the I/O command (PERF CMD_2) are distributed, and the cores 512_2 to 512_n, to which no commands are distributed. The plurality of third cores of the third storage device 600 may be divided into the core 612_1, to which the ADMIN command (ADMIN CMD_3) and the I/O command (PERF CMD_3) are distributed, and the cores 612_2 to 612_n, to which no commands are distributed. Accordingly, the second storage device 500 and the third storage device 600 may be maintained in a standby state to reduce power consumption.
Referring to fig. 19, when a backup is to be performed, the host 100 may provide a second Set command (Set feature cmd_2') including a maximum operation state to the second storage device 500 and a third Set command (Set feature cmd_3') including a maximum operation state to the third storage device 600. In the storage system 5c, the second command distribution module 511 and the third command distribution module 611 may be set to the maximum operation state. Accordingly, the second storage device 500 and the third storage device 600 may perform the backup operation more rapidly.
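The core division described for figs. 18 and 19 may be sketched as a state-dependent routing function. This is an illustrative sketch only: the function names, the round-robin policy for the maximum operation state, and the convention that core 0 handles management are assumptions for the example, not details given in the disclosure.

```python
import itertools

def make_distributor(num_cores, state):
    """Return a function that maps a command kind ("ADMIN" or "PERF")
    to a core index, following the state of the command distribution
    module: normal keeps ADMIN on core 0 and spreads I/O over the rest,
    low power routes everything to core 0 so other cores stay idle,
    and maximum operation spreads every command over all cores.
    """
    io_cores = itertools.cycle(range(1, num_cores))   # cores 1..n-1
    all_cores = itertools.cycle(range(num_cores))
    def distribute(cmd_kind):
        if state == "low_power":
            return 0                       # single core handles all commands
        if state == "normal":
            return 0 if cmd_kind == "ADMIN" else next(io_cores)
        return next(all_cores)             # maximum operation state
    return distribute
```

Under this sketch, setting a device to the low power state leaves cores 1..n-1 without distributed commands, matching the standby behavior described above.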
In the storage system 5c according to some example embodiments, the host 100 may use the storage devices 400, 500, and 600 in consideration of the purpose for which the host 100 uses the storage devices 400, 500, and 600, the state of the host 100, and the like. Accordingly, the storage devices 400, 500, and 600 may operate more efficiently.
Figs. 20, 21, and 22 are block diagrams for explaining the operation of a storage system according to some other example embodiments. Fig. 21 is a diagram for explaining the status commands of fig. 20. Points different from those described with reference to figs. 1 to 14 will be mainly explained.
Referring to fig. 20, the host 100 may provide status commands (Get feature cmd_1, Get feature cmd_2, and Get feature cmd_3) to each of the plurality of storage devices 400, 500, and 600. The status commands (Get feature cmd_1, Get feature cmd_2, and Get feature cmd_3) may be commands requesting information from the corresponding storage devices 400, 500, and 600, and may request information on the states of the command distribution modules 411, 511, and 611.
For example, referring to fig. 21, when the host interface (219_1 of fig. 1) is NVMe, the status commands (Get feature cmd_1, Get feature cmd_2, and Get feature cmd_3) may be the Get Features command 2000. The Get Features command 2000 may include a region 2100 that includes a feature identifier. The feature identifier may indicate the feature that the host 100 intends to request from the storage device 200. The region 2100 including the feature identifier may include information on the states of the command distribution modules 411, 511, and 611.
In another example, when the host interface (219_1 of fig. 1) is SAS, the status command may be a mode sense command. The page code of the mode sense command may include information on the states of the command distribution modules 411, 511, and 611.
In another example, when the host interface (219_1 of fig. 1) is SATA, the status command may be a get feature command. The sub-command value of the get feature command may include information on the states of the command distribution modules 411, 511, and 611. However, the present disclosure is not limited thereto, and the status commands (Get feature cmd_1, Get feature cmd_2, and Get feature cmd_3) may be any commands that request information on the states of the command distribution modules 411, 511, and 611.
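The interface-dependent choice of status command above may be summarized in a small lookup table. This is a hedged sketch: the dictionary values are descriptive labels for the commands named in the examples, not wire-level command formats, and the function name is an assumption introduced here.

```python
# Which status command reads the command distribution module's state,
# keyed by host interface, per the NVMe / SAS / SATA examples above.
STATUS_COMMAND_BY_INTERFACE = {
    "NVMe": "Get Features (feature identifier selects the state)",
    "SAS": "mode sense (page code selects the state)",
    "SATA": "get feature (sub-command value selects the state)",
}

def status_command_for(host_interface):
    """Return a description of the status command for a host interface;
    other interfaces would use whatever command can request the state."""
    try:
        return STATUS_COMMAND_BY_INTERFACE[host_interface]
    except KeyError:
        raise ValueError(f"no status command listed for: {host_interface}")
```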
Referring again to fig. 20, each of the plurality of storage devices 400, 500, and 600 may provide its State (State_1, State_2, or State_3) to the host 100 according to the corresponding status command (Get feature cmd_1, Get feature cmd_2, or Get feature cmd_3).
Referring to fig. 22, the host 100 may provide an additional I/O command (PERF CMD) to any one of the plurality of storage devices 400, 500, and 600 based on the states of the plurality of storage devices 400, 500, and 600. The host 100 may provide the additional I/O command (PERF CMD) to the storage device having a lower load or utilization rate, for example, the second storage device 500.
For example, the first State (State_1) and the third State (State_3) may be maximum operation states, and the second State (State_2) may be a low power state. The host 100 may then provide the additional I/O command (PERF CMD) to the second storage device 500. Accordingly, a load due to the additional I/O command (PERF CMD) may not be applied to the first storage device 400 or the third storage device 600, and the additional I/O command (PERF CMD) may be provided to the second storage device 500 to be processed more rapidly. Therefore, in the storage system 6b according to some example embodiments, additional I/O commands (PERF CMD) may be distributed and processed in consideration of the states of the storage devices 400, 500, and 600.
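The host-side choice in fig. 22 amounts to ranking devices by their reported state and picking the least loaded one. The sketch below is illustrative: the load ordering (low power < normal < maximum operation) and the function name are assumptions consistent with the example above, not definitions from the disclosure.

```python
# Hypothetical load ranking for the reported states: a device in the
# low power state has idle cores available, while one in the maximum
# operation state is already fully occupied.
LOAD_RANK = {"low_power": 0, "normal": 1, "max_operation": 2}

def pick_target(device_states):
    """device_states: dict mapping device name -> reported state.
    Return the device whose state implies the lowest load, which is
    where the host would send an additional I/O command (PERF CMD)."""
    return min(device_states, key=lambda dev: LOAD_RANK[device_states[dev]])
```

With State_1 and State_3 at maximum operation and State_2 at low power, this choice sends the additional command to the second storage device, as in the example.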
Fig. 23 is a diagram illustrating a data center to which a storage system according to some other example embodiments is applied.
Referring to fig. 23, a data center 3000 may be a facility that collects various data and provides services, and may also be referred to as a data storage center. The data center 3000 may be a system for operating a search engine and a database, and may be a computing system used by enterprises such as banks or by government agencies. The data center 3000 may include application servers 3100 to 3100n and storage servers 3200 to 3200m. The number of application servers 3100 to 3100n and the number of storage servers 3200 to 3200m may be variously selected according to embodiments, and may be different from each other.
The application server 3100 or the storage server 3200 may include at least one of processors 3110 and 3210 and memories 3320 and 3220. Taking the storage server 3200 as an example, the processor 3210 may control the overall operation of the storage server 3200, and may access the memory 3220 to execute commands and/or data loaded into the memory 3220. The memory 3220 may be a DDR SDRAM (double data rate synchronous DRAM), an HBM (high bandwidth memory), an HMC (hybrid memory cube), a DIMM (dual in-line memory module), an Optane DIMM, or an NVMDIMM (non-volatile DIMM). In some example embodiments, the number of processors 3210 and the number of memories 3220 included in the storage server 3200 may be variously selected, and may be different from each other. In an example embodiment, a processor 3210 and a memory 3220 may provide a processor-memory pair. The processor 3210 may include a single-core processor or a multi-core processor. The description of the storage server 3200 may similarly apply to the application server 3100. In some example embodiments, the application server 3100 may not include a storage device 3150. The storage server 3200 may include at least one storage device 3250, and the number of storage devices 3250 included in the storage server 3200 may be variously selected according to embodiments.
The application servers 3100 to 3100n and the storage servers 3200 to 3200m may communicate with each other through a network 3300. The network 3300 may be implemented using FC (Fibre Channel), Ethernet, or the like. FC is a medium for relatively high-speed data transmission, and an optical switch providing high performance and high availability may be used. The storage servers 3200 to 3200m may be provided as file storage, block storage, or object storage according to the access type of the network 3300.
In an example embodiment, the network 3300 may be a storage-only network such as a SAN (storage area network). For example, the SAN may be an FC-SAN that uses an FC network and is implemented according to the FCP (FC protocol). In another example, the SAN may be an IP-SAN that uses a TCP/IP network and is implemented according to the iSCSI (SCSI over TCP/IP, or Internet SCSI) protocol. In another example, the network 3300 may be a general network such as a TCP/IP network. For example, the network 3300 may be implemented according to protocols such as FCoE (FC over Ethernet), NAS (network attached storage), and NVMe-oF (NVMe over Fabrics).
Hereinafter, the application server 3100 and the storage server 3200 will be mainly described. The description of the application server 3100 is also applicable to another application server 3100n, and the description of the storage server 3200 is also applicable to another storage server 3200 m.
The application server 3100 may store data requested by a user or a client to be stored in one of the storage servers 3200 to 3200m through the network 3300. In addition, the application server 3100 may obtain data requested to be read from one of the storage servers 3200 to 3200m by a user or a client through the network 3300.
The application server 3100 according to some example embodiments may provide the setting command to the storage servers 3200 to 3200m according to the situation or requirement of the application server 3100. The states of the storage servers 3200 to 3200m may be set according to a setting command. In other example embodiments, the application server 3100 may provide status commands to the storage servers 3200 to 3200m to read the status of the storage servers. The application server 3100 may provide an additional I/O command to at least one of the storage servers 3200 to 3200m based on the status of the storage servers 3200 to 3200 m.
In an example embodiment, the application server 3100 may be implemented as a Web server, a DBMS (database management system), or the like.
The application server 3100 may access a memory 3320n or a storage device 3150n included in another application server 3100n through the network 3300, and may access the memories 3220 to 3220m or the storage devices 3250 to 3250m included in the storage servers 3200 to 3200m through the network 3300. Accordingly, the application server 3100 may perform various operations on data stored in the application servers 3100 to 3100n and/or the storage servers 3200 to 3200m. For example, the application server 3100 may execute commands for moving or copying data between the application servers 3100 to 3100n and/or the storage servers 3200 to 3200m. The data may be moved from the storage devices 3250 to 3250m of the storage servers 3200 to 3200m to the memories 3320 to 3320n of the application servers 3100 to 3100n either directly or via the memories 3220 to 3220m of the storage servers 3200 to 3200m. The data moved through the network 3300 may be data encrypted for security and privacy.
Taking the storage server 3200 as an example, an interface (I/F) 3254 may provide a physical connection between the processor 3210 and a controller 3251, as well as a physical connection between a NIC (network interface controller) 3240 and the controller 3251. For example, the interface 3254 may be implemented in a DAS (direct attached storage) type in which the storage device 3250 is directly connected with a dedicated cable. In addition, for example, the interface 3254 may be implemented as one of various interface types, such as ATA (advanced technology attachment), SATA (serial ATA), e-SATA (external SATA), SCSI (small computer system interface), SAS (serial attached SCSI), PCI (peripheral component interconnect), PCIe (PCI express), NVMe (NVM express), IEEE 1394, USB (universal serial bus), SD (secure digital) card, MMC (multimedia card), eMMC (embedded multimedia card), UFS (universal flash storage), eUFS (embedded universal flash storage), and CF (compact flash) card interfaces.
The storage server 3200 may also include a switch 3230 and a NIC 3240. The switch 3230 may selectively connect the processor 3210 and the storage device 3250, or may selectively connect the NIC 3240 and the storage device 3250, according to the control of the processor 3210.
In an example embodiment, the NIC 3240 may include a network interface card, a network adapter, and the like. The NIC 3240 may be connected to the network 3300 through a wired interface, a wireless interface, a Bluetooth interface, an optical interface, and the like. The NIC 3240 may include an internal memory, a DSP, a host bus interface, and the like, and may be connected to the processor 3210 and/or the switch 3230 through the host bus interface. The host bus interface may be implemented as one of the examples of the interface 3254 described above. In an example embodiment, the NIC 3240 may be integrated with at least one of the processor 3210, the switch 3230, and the storage device 3250.
In the storage servers 3200 to 3200m or the application servers 3100 to 3100n, a processor may send a command to the storage devices 3150 to 3150n and 3250 to 3250m or the memories 3320 to 3320n and 3220 to 3220m to program or read data. In this case, the data may be data that has been error-corrected by an ECC (error correction code) engine. The data may be data processed through data bus inversion (DBI) or data masking (DM), and may include CRC (cyclic redundancy check) information. The data may be data encrypted for security and privacy.
The storage devices 3150 to 3150n and 3250 to 3250m may transmit control signals and command/address signals to NAND flash memory devices (NAND) 3252 to 3252m in response to a read command received from a processor. When data is read from the NAND flash memory devices 3252 to 3252m, an RE (read enable) signal may be input as a data output control signal for outputting the data to a DQ bus. A DQS (data strobe) may be generated using the RE signal. The command and address signals may be latched into a page buffer according to a rising edge or a falling edge of a WE (write enable) signal.
The controller 3251 may generally control the operation of the storage device 3250. In an example embodiment, the controller 3251 may include an SRAM (static random access memory). The controller 3251 may write data in the NAND flash memory device 3252 in response to a write command, or may read data from the NAND flash memory device 3252 in response to a read command. For example, write commands and/or read commands may be provided from processor 3210 in storage server 3200, processor 3210m in another storage server 3200m, or processors 3110-3110 n in application servers 3100-3100 n.
The Controller (CTRL)3251 according to some example embodiments may include a plurality of cores. The state of the controller 3251 may be set according to a setting command provided from the application servers 3100 and 3100n, and a write command and/or a read command may be distributed to a plurality of cores according to the set state.
The DRAM 3253 may temporarily store (buffer) data to be written to the NAND flash memory device 3252 or data read from the NAND flash memory device 3252. In addition, the DRAM 3253 may store metadata. The metadata may be data generated by the controller 3251 to manage user data and the NAND flash memory device 3252. The storage device 3250 may include an SE (secure element) for security or privacy.
By way of summation and review, a method of controlling a multi-core processor should operate a storage system efficiently according to various purposes.
As described above, embodiments may provide a storage device that more efficiently distributes commands to multiple cores. Embodiments may also provide a storage system including a storage device that more efficiently distributes commands to a plurality of cores.
Example embodiments are disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments, unless otherwise specifically indicated. It will therefore be understood by those skilled in the art that various changes in form and details may be made without departing from the spirit and scope of the invention as set forth in the appended claims.
Claims (20)
1. A memory device, comprising:
a non-volatile memory;
a plurality of cores;
a host interface configured to receive a first setup command, an input/output command, and a management command from a host; and
a memory controller including a command distribution module configured to be set to a first state according to the first set command and distribute the input/output command to the plurality of cores according to the set first state,
wherein each of the plurality of cores is configured to perform an operation indicated by the input/output command and an operation indicated by the management command on the non-volatile memory in response to the distributed input/output command.
2. The storage device of claim 1, wherein:
the plurality of cores includes a first core and a second core different from each other, and
the command distribution module is configured to distribute the input/output command to the first core and not distribute the input/output command to the second core.
3. The storage device of claim 2, wherein the command distribution module is configured to distribute the management command to the second core and not to distribute the management command to the first core.
4. The storage device of claim 3, wherein the management command comprises a read command of metadata and a write command of metadata.
5. The storage device of claim 2, wherein the command distribution module is configured to not distribute the management command to the plurality of cores.
6. The storage device of claim 1, wherein:
the plurality of cores includes a first core and a second core that are different from each other,
the input/output commands include write commands and read commands, and
the command distribution module is configured to distribute the write command to the first core and distribute the read command to the second core.
7. The storage device of claim 6, wherein:
the plurality of cores further includes a third core different from the first core and the second core, and
the command distribution module is configured to distribute the management command to the third core and is configured not to distribute the management command to the first core and the second core.
8. The storage device of claim 1, wherein:
the first setting command is a command having a format according to the host interface and including a feature to be set by the host among the features of the storage device, and
the first state is included in the features of the storage device.
9. The storage device of claim 1, further comprising:
a self-configuration module configured to provide a second setup command to the command distribution module,
wherein the command distribution module is configured to be set to a second state according to the second setting command.
10. The storage device of claim 9, wherein the self-configuration module is configured to monitor a type of the input/output command, a latency of the input/output command, a size of the input/output command, a queue depth of the plurality of cores, or an interval at which the input/output command is provided from the host, to generate the second setup command.
11. The storage device of claim 1, further comprising:
a self-configuration module configured to provide a second setup command to the command distribution module,
wherein, when the first setup command and the second setup command are provided at the same time, the command distribution module is configured to be set to the first state according to the first setup command.
12. A memory device, comprising:
a non-volatile memory; and
a storage controller configured to receive a first set command from a host at a first point in time, configured to perform, in response to an input/output command provided from the host, an operation indicated by the input/output command on the non-volatile memory, and configured, in response to a management command provided from the host, not to perform an operation indicated by the management command on the non-volatile memory.
13. The storage device of claim 12, wherein:
the storage controller further includes a host interface configured to receive the first set command, the input/output command, and the management command from the host, and
the first set command is a command that has a format according to the host interface and includes a feature to be set by the host among features of the storage device.
14. The storage device of claim 12, wherein the storage controller is configured to:
receive a second set command different from the first set command from the host at a second point in time later than the first point in time, and
performing the operation indicated by the management command on the non-volatile memory after the second point in time.
15. The storage device of claim 14, wherein:
the storage controller is configured to control the non-volatile memory, and further includes a first core and a second core different from each other,
the first core is configured to perform an operation indicated by the input/output command on the non-volatile memory, and
the second core is configured to perform the operation indicated by the management command on the non-volatile memory.
16. The storage device of claim 15, wherein:
the storage controller is configured to control the non-volatile memory, and further includes a third core different from the first core and the second core,
the input/output commands include read commands and write commands,
the first core is configured to perform an operation indicated by the read command on the non-volatile memory, and
the third core is configured to perform an operation indicated by the write command on the non-volatile memory.
17. A storage system, comprising:
a host;
a first storage device, comprising:
a first non-volatile memory device having a first memory cell,
a plurality of first cores configured to control the first non-volatile memory, an
A first storage controller configured to output a first state in response to a first status command provided from the host, the first state including information that a first management command and a first input/output command provided from the host are distributed to the plurality of first cores; and
a second storage device, comprising:
a second non-volatile memory device for storing a second non-volatile memory,
a plurality of second cores configured to control the second non-volatile memory, an
A second storage controller configured to output a second state in response to a second status command provided from the host, the second state including information that a second management command and a second input/output command provided from the host are distributed to the plurality of second cores,
wherein the host is configured to provide a third input/output command to one of the first storage device and the second storage device based on the first state and the second state.
18. The storage system of claim 17, wherein:
the first state includes information that the first management command is not provided and the first input/output command is distributed to the plurality of first cores,
the second state includes information to distribute the second management command and the second input/output command to one of the plurality of second cores, and
the host is configured to provide the third input/output command to the second storage device.
19. The storage system of claim 17, wherein:
the plurality of first cores includes a third core and a fourth core,
the first state includes information to provide the first management command to the third core and to distribute the first input/output command to the fourth core,
the second state includes information to distribute the second management command and the second input/output command to one of the plurality of second cores, and
the host is configured to provide the third input/output command to the second storage device.
20. The storage system of claim 17, wherein:
the first storage controller further comprises a first host interface configured to receive the first status command, the first management command, and the first input/output command from the host,
the second storage controller further comprises a second host interface configured to receive the second status command, the second management command, and the second input/output command from the host,
the first status command is a command having a format according to the first host interface and including a feature requested by the host among the features of the first storage device, and
the second status command is a command having a format according to the second host interface and includes a feature requested by the host among the features of the second storage device.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0154438 | 2020-11-18 | ||
KR1020200154438A KR20220067795A (en) | 2020-11-18 | 2020-11-18 | Storage device and storage system including the same |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114546903A | 2022-05-27 |
Family
ID=81586651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111368417.XA Pending CN114546903A (en) | 2020-11-18 | 2021-11-18 | Storage device and storage system including the same |
Country Status (3)
Country | Link |
---|---|
US (1) | US11789652B2 (en) |
KR (1) | KR20220067795A (en) |
CN (1) | CN114546903A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20220067795A (en) * | 2020-11-18 | 2022-05-25 | 삼성전자주식회사 | Storage device and storage system including the same |
Family Cites Families (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6978353B2 (en) * | 2002-10-18 | 2005-12-20 | Sun Microsystems, Inc. | Low overhead snapshot in a storage array using a tree-of-slabs metadata |
US7240172B2 (en) * | 2003-02-07 | 2007-07-03 | Sun Microsystems, Inc. | Snapshot by deferred propagation |
KR101226685B1 (en) | 2007-11-08 | 2013-01-25 | 삼성전자주식회사 | Vertical type semiconductor device and Method of manufacturing the same |
US8756369B2 (en) | 2008-09-26 | 2014-06-17 | Netapp, Inc. | Priority command queues for low latency solid state drives |
KR101691092B1 (en) | 2010-08-26 | 2016-12-30 | 삼성전자주식회사 | Nonvolatile memory device, operating method thereof and memory system including the same |
US8553466B2 (en) | 2010-03-04 | 2013-10-08 | Samsung Electronics Co., Ltd. | Non-volatile memory device, erasing method thereof, and memory system including the same |
US9536970B2 (en) | 2010-03-26 | 2017-01-03 | Samsung Electronics Co., Ltd. | Three-dimensional semiconductor memory devices and methods of fabricating the same |
EP2587374A4 (en) | 2010-06-25 | 2016-11-09 | Fujitsu Ltd | Multi-core system and scheduling method |
KR101682666B1 (en) | 2010-08-11 | 2016-12-07 | 삼성전자주식회사 | Nonvolatile memory devicwe, channel boosting method thereof, programming method thereof, and memory system having the same |
TWI447646B (en) * | 2011-11-18 | 2014-08-01 | Asmedia Technology Inc | Data transmission device and method for merging multiple instruction |
US9395924B2 (en) | 2013-01-22 | 2016-07-19 | Seagate Technology Llc | Management of and region selection for writes to non-volatile memory |
KR101694310B1 (en) | 2013-06-14 | 2017-01-10 | 한국전자통신연구원 | Apparatus and method for monitoring based on a multi-core processor |
KR101481898B1 (en) | 2013-06-25 | 2015-01-14 | 광운대학교 산학협력단 | Apparatus and method for scheduling command queue of solid state drive |
US20150039815A1 (en) * | 2013-08-02 | 2015-02-05 | OCZ Storage Solutions Inc. | System and method for interfacing between storage device and host |
US9383926B2 (en) * | 2014-05-27 | 2016-07-05 | Kabushiki Kaisha Toshiba | Host-controlled garbage collection |
US9367254B2 (en) * | 2014-06-27 | 2016-06-14 | HGST Netherlands B.V. | Enhanced data verify in data storage arrays |
US9881014B1 (en) * | 2014-06-30 | 2018-01-30 | EMC IP Holding Company LLC | Snap and replicate for unified datapath architecture |
KR102336443B1 (en) * | 2015-02-04 | 2021-12-08 | 삼성전자주식회사 | Storage device and user device supporting virtualization function |
US10038744B1 (en) * | 2015-06-29 | 2018-07-31 | EMC IP Holding Company LLC | Intelligent core assignment |
KR102371916B1 (en) * | 2015-07-22 | 2022-03-07 | 삼성전자주식회사 | Storage device for supporting virtual machines, storage system including the storage device, and method of the same |
US20170109092A1 (en) * | 2015-10-15 | 2017-04-20 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Selective initialization of storage devices for a logical volume |
US10515192B2 (en) * | 2016-02-02 | 2019-12-24 | Vmware, Inc. | Consistent snapshots and clones in an asymmetric virtual distributed file system |
TWI617103B (en) * | 2016-02-29 | 2018-03-01 | Toshiba Memory Corp | Electronic machine |
US20170315878A1 (en) * | 2016-04-29 | 2017-11-02 | Netapp, Inc. | Method for low overhead, space tracking, high performance snapshots and clones by transfer of extent ownership |
US10474374B2 (en) | 2016-05-24 | 2019-11-12 | Samsung Electronics Co., Ltd. | Method and apparatus for storage device latency/bandwidth self monitoring |
US10235066B1 (en) * | 2017-04-27 | 2019-03-19 | EMC IP Holding Company LLC | Journal destage relay for online system checkpoint creation |
TWI645295B (en) * | 2017-06-20 | 2018-12-21 | Silicon Motion, Inc. | Data storage device and data storage method |
US10719474B2 (en) | 2017-10-11 | 2020-07-21 | Samsung Electronics Co., Ltd. | System and method for providing in-storage acceleration (ISA) in data storage devices |
KR102410671B1 (en) * | 2017-11-24 | 2022-06-17 | Samsung Electronics Co., Ltd. | Storage device, host device controlling storage device, and operation method of storage device |
US11836134B2 (en) * | 2018-03-20 | 2023-12-05 | Vmware, Inc. | Proactive splitting and merging of nodes in a Bε-tree |
KR102637166B1 (en) * | 2018-04-17 | 2024-02-16 | Samsung Electronics Co., Ltd. | Network storage device storing large amount of data |
US10932202B2 (en) | 2018-06-15 | 2021-02-23 | Intel Corporation | Technologies for dynamic multi-core network packet processing distribution |
US10705965B2 (en) * | 2018-07-23 | 2020-07-07 | EMC IP Holding Company LLC | Metadata loading in storage systems |
US10872004B2 (en) * | 2018-11-15 | 2020-12-22 | Intel Corporation | Workload scheduling and coherency through data assignments |
US11487706B2 (en) * | 2019-05-02 | 2022-11-01 | EMC IP Holding Company, LLC | System and method for lazy snapshots for storage cluster with delta log based architecture |
US11625084B2 (en) * | 2019-08-15 | 2023-04-11 | Intel Corporation | Method of optimizing device power and efficiency based on host-controlled hints prior to low-power entry for blocks and components on a PCI express device |
US11347725B2 (en) * | 2020-01-14 | 2022-05-31 | EMC IP Holding Company LLC | Efficient handling of highly amortized metadata page updates in storage clusters with delta log-based architectures |
US11157177B2 (en) * | 2020-03-16 | 2021-10-26 | EMC IP Holding Company LLC | Hiccup-less failback and journal recovery in an active-active storage system |
KR20220067795A (en) * | 2020-11-18 | 2022-05-25 | Samsung Electronics Co., Ltd. | Storage device and storage system including the same |
US11726663B2 (en) * | 2021-01-13 | 2023-08-15 | EMC IP Holding Company LLC | Dependency resolution for lazy snapshots in storage cluster with delta log based architecture |
- 2020-11-18 KR KR1020200154438A patent/KR20220067795A/en active Search and Examination
- 2021-07-28 US US17/387,011 patent/US11789652B2/en active Active
- 2021-11-18 CN CN202111368417.XA patent/CN114546903A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20220156007A1 (en) | 2022-05-19 |
US11789652B2 (en) | 2023-10-17 |
KR20220067795A (en) | 2022-05-25 |
Similar Documents
Publication | Title |
---|---|
KR20190090635A (en) | Data storage device and operating method thereof |
US20220102224A1 (en) | Test method of storage device implemented in multi-chip package (MCP) and method of manufacturing an MCP including the test method |
KR20220043821A (en) | Test method of storage device implemented in multi-chip package (MCP) |
US11789652B2 (en) | Storage device and storage system including the same |
EP4184509B1 (en) | Non-volatile memory device, storage device and method of manufacturing the same using wafer-to-wafer bonding |
US20220197510A1 (en) | Storage device for executing processing code and operating method of the storage device |
KR20230068935A (en) | Storage device and method for operating the device |
KR20230030344A (en) | Three-dimensional (3D) storage device using wafer-to-wafer bonding |
US11836117B2 (en) | Storage device, storage system, and method of operating the storage system |
US11921625B2 (en) | Storage device for graph data |
US20230141409A1 (en) | Storage device and operating method thereof |
US12014772B2 (en) | Storage controller and storage device including the same |
US11842076B2 (en) | Storage system and operating method for same |
US20240231687A9 (en) | Computational storage device, method for operating the computational storage device and method for operating host device |
US20240134568A1 (en) | Computational storage device, method for operating the computational storage device and method for operating host device |
US20240220151A1 (en) | Computational storage device and method for operating the device |
EP4398111A1 (en) | Computational storage device and method for operating the device |
US20230038363A1 (en) | Three-dimensional storage device using wafer-to-wafer bonding |
US20230114199A1 (en) | Storage device |
US20230292449A1 (en) | Storage device |
US20240193105A1 (en) | Computational storage device and method of operating the same |
US20240194273A1 (en) | Nonvolatile memory device, storage device including the same, and method of operating the same |
US20230139519A1 (en) | Storage device supporting multi-tenant operation and methods of operating same |
US20230221885A1 (en) | Storage system and computing system including the same |
US20230146540A1 (en) | Storage device and an operating method of a storage controller thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||