CN117908766A - Method and system for efficient access to solid state drives - Google Patents

Info

Publication number
CN117908766A
Authority
CN
China
Prior art keywords
data
cache
memory device
request
memory
Legal status
Pending
Application number
CN202311247532.0A
Other languages
Chinese (zh)
Inventor
玛丽·麦·阮
瑞卡·皮塔楚玛尼
奇亮奭
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority claimed from US 18/080,211 (published as US 2024/0134801 A1)
Application filed by Samsung Electronics Co Ltd
Publication of CN117908766A

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Methods and systems for efficient access to solid state drives are provided. With a memory device in a first state, a request is received at the memory device from a host device. In a case where the request is a read request, first data is read from a cache of the memory device based on the read request and output to the host device, the cache having been loaded with data while the memory device was in a second state. In a case where the request is a write request, a block of the cache is modified to evict cache data, the cache data and corresponding data from the cache are written to a flash memory of the memory device as a single write request, and second data is written to the block of the cache based on the received write request.

Description

Method and system for efficient access to solid state drives
The present application is based on and claims priority to U.S. provisional patent application serial No. 63/417,504, filed on October 19, 2022, and U.S. patent application serial No. 18/080,211, filed on December 13, 2022, which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to Solid State Drives (SSDs), and more particularly, to methods and systems for efficiently accessing SSDs.
Background
Big data applications handle relatively large data sets. SSDs are widely deployed in cloud infrastructure for big data services. SSDs are well suited for big data applications because they provide fast storage performance and are efficient and cost-effective. In particular, input/output (I/O)-intensive operations may be accelerated by using an SSD architecture. The above statements are provided for illustrative purposes only and are not intended to constitute an admission of prior art.
Disclosure of Invention
Embodiments enable efficient access to SSD hardware caches.
According to an embodiment, a method is provided in which a write request is received at a memory device from a host device. A block of a cache of the memory device is modified to evict cache data. The cache data and corresponding data from the cache are written to a flash memory of the memory device as a single write request. First data is written to the block of the cache based on the received write request.
According to an embodiment, a memory device is provided that includes a cache, a flash memory, and a controller including a cache manager. The controller is configured to receive a write request from a host device. The cache manager is configured to modify a block of the cache to evict cache data. The cache manager is further configured to write the cache data and corresponding data from the cache to the flash memory as a single write request. The cache manager is further configured to write first data to the block of the cache based on the received write request.
According to an embodiment, a method is provided in which, with a memory device in a first state, a request is received at the memory device from a host device. In a case where the request is a read request, first data is read from a cache of the memory device based on the read request, and the first data is output to the host device. With the memory device in a second state, the cache is loaded with data. In a case where the request is a write request, a block of the cache is modified to evict cache data, the cache data and corresponding data from the cache are written to a flash memory of the memory device as a single write request, and second data is written to the block of the cache based on the received write request.
According to an embodiment, a memory device is provided that includes a cache, a flash memory, and a controller including a cache manager. The controller is configured to receive a request from a host device with the memory device in a first state. The cache manager is configured to, in a case where the request is a read request, read first data from the cache based on the read request and output the first data to the host device. The cache manager is further configured to, in a case where the request is a write request, modify a block of the cache to evict cache data, write the cache data and corresponding data from the cache to the flash memory as a single write request, and write second data to the block of the cache based on the received write request.
Drawings
The above and other aspects, features and advantages of certain embodiments of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings in which:
FIG. 1 is a diagram illustrating a cache replacement policy;
FIG. 2 is a diagram illustrating an SSD controller, according to an embodiment;
FIG. 3 is a diagram illustrating an SSD controller in read-only cache mode, according to an embodiment;
FIG. 4 is a diagram illustrating offline preloading of data, according to an embodiment;
FIG. 5 is a diagram illustrating an SSD controller in a write mode, according to an embodiment;
FIG. 6 is a flow diagram illustrating a method for efficiently accessing an SSD hardware cache, according to an embodiment;
FIG. 7 is a block diagram illustrating an electronic device in a network environment, according to an embodiment; and
FIG. 8 is a diagram illustrating a storage system, according to an embodiment.
Detailed Description
Hereinafter, embodiments of the present disclosure are described in detail with reference to the accompanying drawings. It should be noted that although the same elements are shown in different drawings, the same elements will be denoted by the same reference numerals. In the following description, only specific details such as detailed configurations and components are provided to aid in a general understanding of embodiments of the present disclosure. Accordingly, it will be apparent to those skilled in the art that various changes and modifications of the embodiments described herein can be made without departing from the scope of the disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. The terms described below are terms defined in consideration of functions in the present disclosure, and may be different according to users, intention or habit of the users. Accordingly, the definition of the terms should be determined based on the contents throughout the specification.
The present disclosure is capable of various modifications and various embodiments, among which embodiments are described in detail below with reference to the drawings. It should be understood, however, that the disclosure is not limited to the embodiments, but includes all modifications, equivalents, and alternatives falling within the scope of the disclosure.
Although terms including ordinal numbers (such as first, second, etc.) may be used to describe various elements, structural elements are not limited by the terms. The terms are only used to distinguish one element from another element. For example, a first structural element may be referred to as a second structural element without departing from the scope of the present disclosure. Similarly, the second structural element may also be referred to as a first structural element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated items.
The terminology used herein is for the purpose of describing various embodiments of the disclosure only and is not intended to be limiting of the disclosure. The singular is intended to include the plural unless the context clearly indicates otherwise. In this disclosure, it is to be understood that the terms "comprises" or "comprising" indicate the presence of a feature, quantity, step, operation, structural element, component, or combination thereof, and do not preclude the presence or addition of one or more other features, quantities, steps, operations, structural elements, components, or combinations thereof.
Unless defined differently, all terms used herein have the same meaning as understood by those skilled in the art to which the present disclosure pertains. Unless explicitly defined in the present disclosure, terms such as those defined in a general dictionary should be construed to have the same meaning as the context in the relevant art and should not be construed to have an ideal or excessively formal meaning.
The electronic device according to one embodiment may be one of various types of electronic devices that utilize a storage device. The electronic device may use any suitable storage standard (such as, for example, Peripheral Component Interconnect Express (PCIe), Non-Volatile Memory Express (NVMe), NVMe over fabrics (NVMe-oF), Advanced eXtensible Interface (AXI), Ultra Path Interconnect (UPI), Ethernet, Transmission Control Protocol/Internet Protocol (TCP/IP), Remote Direct Memory Access (RDMA), RDMA over Converged Ethernet (RoCE), Fibre Channel (FC), InfiniBand (IB), Serial Advanced Technology Attachment (SATA), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), and/or Internet Wide Area RDMA Protocol (iWARP), etc., or any combination thereof). In some embodiments, the interconnect interface may be implemented with one or more memory semantic and/or memory coherence interfaces and/or protocols, including one or more Compute Express Link (CXL) protocols (such as CXL.mem, CXL.io, and/or CXL.cache), Gen-Z, Coherent Accelerator Processor Interface (CAPI), and/or Cache Coherent Interconnect for Accelerators (CCIX), etc., or any combination thereof. Any of the memory devices may be implemented with one or more of any type of memory device interface, including Double Data Rate (DDR), DDR2, DDR3, DDR4, DDR5, Low-Power DDR (LPDDRx), Open Memory Interface (OMI), NVLink, High Bandwidth Memory (HBM), HBM2, and/or HBM3, among others. The electronic device may include, for example, a portable communication device (e.g., a smart phone), a computer, a portable multimedia device, a portable medical device, a camera, a wearable device, or a household appliance. However, the electronic device is not limited to the above-described devices.
The terminology used in the present disclosure is not intended to be limiting of the disclosure but is intended to include various changes, equivalents, or alternatives to the corresponding embodiments. With respect to the description of the drawings, like reference numerals may be used to refer to like or related elements. The singular form of a noun corresponding to an item may include one or more of the things unless the context clearly indicates otherwise. As used herein, each of the phrases such as "A or B", "at least one of A and B", "at least one of A or B", "A, B or C", "at least one of A, B and C", and "at least one of A, B or C" may include all possible combinations of the items listed together in a corresponding one of the phrases. As used herein, terms such as "first," "second," and the like may be used to distinguish a corresponding component from another component, but are not intended to limit the components in other respects (e.g., importance or order). If an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively," as being "coupled with," "coupled to," "connected with," or "connected to" another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., by wire), wirelessly, or via a third element.
As used herein, the term "module" may include units implemented in hardware, software, firmware, or a combination thereof, and may be used interchangeably with other terms (e.g., "logic," "logic block," "component," and "circuitry"). A module may be a single integrated component or their smallest unit or component adapted to perform one or more functions. For example, according to one embodiment, a module may be implemented in the form of an Application Specific Integrated Circuit (ASIC), a coprocessor, or a Field Programmable Gate Array (FPGA).
Machine Learning (ML) use cases, such as, for example, Deep Learning Recommendation Model (DLRM) inference and training, involve accessing data at a small granularity (e.g., 64 bytes rather than 4 kilobytes) and following a highly skewed Pareto distribution. Latency-sensitive use cases require fast I/O access, but reading from NAND flash can be relatively slow.
Because each read/write request is provided to the NAND flash memory, standard cache replacement policies in the SSD may result in cache pollution. Thus, many small write requests may be detrimental to SSD performance and endurance. As shown in FIG. 1, the same number of requests received at the SSD controller is provided to the NAND flash memory.
FIG. 1 is a diagram illustrating a cache replacement policy. SSD controller 102 of the SSD (or memory device) includes a Host Interface Layer (HIL) 104, which may include a host controller that interfaces to the host side and an SSD interface that provides an abstract Application Programming Interface (API) to the host controller. Within SSD controller 102, the HIL 104 communicates with a buffer manager 106, a processor 108, and a cache manager 110. The buffer manager 106, processor 108, and cache manager 110 also communicate with a plurality of flash cores (Fcore) 112 that interface with pages of a NAND flash memory 114 of the SSD. The buffer manager 106 communicates with a write buffer 116 of the SSD. The cache manager 110 communicates with a cache 118 (e.g., Dynamic Random Access Memory (DRAM)) of the SSD.
Each read request or write request received from a host application by the HIL 104 of the SSD controller 102 is provided to the NAND flash memory 114 of the SSD. For example, when ten write requests received via the HIL 104 are provided to the buffer manager 106, each of the ten write requests is provided to the NAND flash memory 114, resulting in cache pollution.
The disclosed embodiments optimize the cache manager 110 such that requested data is more efficiently read from or written to the SSD via the cache 118.
FIG. 2 is a diagram illustrating an SSD controller according to an embodiment. SSD controller 202 includes HIL 204 in communication with processor 208 and cache manager 210 within SSD controller 202. The processor 208 and cache manager 210 communicate with a plurality of flash cores 212 that interface with pages of a NAND flash memory 214 of the SSD. The cache manager 210 includes a buffer manager 206, a write buffer 216, a data preloader 220, and a list 222 of dirty cache blocks (or dirty blocks). The cache manager 210 communicates with a cache 218 of the SSD.
When SSD controller 202 is in read-only cache mode, data is preloaded into the cache while the SSD is offline. Thus, cache evictions and/or updates do not need to be performed during runtime, resulting in less cache pollution. The read-only cache mode is described in more detail below with respect to FIGS. 3 and 4.
Further, when SSD controller 202 is in write mode, write requests to the NAND flash memory 214 are minimized by writing data to the cache 218 at a small granularity (e.g., 64 bytes). The data for each write request to the cache 218 has a size smaller than the page size of the NAND flash memory 214. For example, ten write requests received from a host application via the HIL 204 are provided to the cache manager 210. Data is written to the cache 218 in accordance with the ten write requests. A single write request for multiple cache blocks (or chunks) corresponding to a single page of the NAND flash memory 214 may then be provided from the cache manager 210 to the NAND flash memory 214. The write mode of SSD controller 202 is described in more detail below with respect to FIG. 5.
FIG. 3 is a diagram illustrating an SSD controller in read-only cache mode, according to an embodiment. In read-only cache mode, the data preloader 220 of the cache manager 210 may be user-programmed and heuristic-based to preload the cache 218 offline at regular intervals. For example, the cache 218 may be loaded according to a heuristic-based program. Faster cache accesses are enabled when there are no cache evictions and/or updates at run time. Preloading of the cache also results in less cache pollution and a higher cache hit rate than a Least Recently Used (LRU) cache or a Least Frequently Used (LFU) cache of similar size.
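For illustration only, a minimal sketch of such a heuristic-based preloader follows; the names (e.g., flash.read_block, access_counts) and the keep-the-most-frequently-accessed-blocks heuristic are assumptions rather than the patent's implementation.

```python
import heapq

def preload_cache(cache, flash, access_counts, num_cache_blocks):
    """Fill the cache with the most frequently accessed blocks while the SSD
    is offline, so no evictions or updates are needed at run time."""
    # access_counts: block address -> observed access count (heuristic input)
    hottest = heapq.nlargest(num_cache_blocks, access_counts, key=access_counts.get)
    cache.clear()
    for block_addr in hottest:
        cache[block_addr] = flash.read_block(block_addr)  # copy block into the DRAM cache
```

Any other heuristic (e.g., one programmed by the user) could be substituted for the access-count ranking without changing the overall flow.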
For example, a read request may be sent from a host application to SSD controller 202. The cache manager 210 determines whether the data corresponding to the read request (e.g., data 0 to data N) is in the preloaded cache 218. If the data is in cache 218, cache manager 210 retrieves the data from preloaded cache 218 and the requested data is provided to the host application. When the data is not in the preloaded cache 218, the data is read from the NAND flash memory 214, stored in the cache 218, and then provided to the host application. Thus, preloading data offline to the cache 218 may prevent delays caused by NAND flash data retrieval during runtime.
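The read path just described can be outlined as follows; this is an illustrative sketch only, and the helper names (a simple mapping for the cache, flash.read_block) are assumptions, not the patent's firmware.

```python
def handle_read(cache, flash, block_addr):
    """Serve a read from the preloaded cache when possible; otherwise fall back
    to NAND flash and keep a copy in the cache for later reads."""
    data = cache.get(block_addr)
    if data is None:                        # cache miss: slower NAND path
        data = flash.read_block(block_addr)
        cache[block_addr] = data            # store so later reads hit the cache
    return data                             # returned to the host application
```

Because the cache was populated offline, the common case is a hit that avoids the slower NAND path entirely.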
FIG. 4 is a diagram illustrating offline preloading of data, according to an embodiment. Normalized cumulative accesses are plotted against cumulative accessed blocks. Before stabilizing, the normalized cumulative accesses increase over the first 0.2 of the cumulative accessed blocks; offline preloading of the cache is shown at 402.
FIG. 5 is a diagram illustrating an SSD controller in a write mode according to an embodiment. SSD controller 202 minimizes the number of write requests provided to NAND flash memory 214. For example, N write requests (e.g., write 0 to write N-1) may be provided from the host application to the HIL 204 and processor 208 of the SSD controller 202. The cache manager 210 writes data corresponding to these write requests only to the cache 218. The dirty bit is set to 1 for each block of cache 218 with the newly written data. The list 222 of dirty cache blocks in the cache manager 210 maintains a list of dirty blocks in the cache 218 for each page of the NAND flash memory 214.
When the cache 218 is filled with data corresponding to N write requests and a subsequent write request write N is received, the data from the cache block must be evicted in order to store the data of the write request write N. For example, data from cache block 0 may be selected for eviction from cache 218 to NAND flash 214. The cache manager 210 refers to the list 222 of dirty cache blocks and determines other dirty cache blocks belonging to the same page of the NAND flash 214 as cache block 0. For example, as shown in FIG. 5, blocks 0, 2, N-1, and N belong to page 0. When data is evicted from block 0, data from all dirty blocks belonging to the same page (e.g., page 0) as block 0 is written to the NAND flash memory 214 along with data from block 0 as data corresponding to a single write request. The cache manager 210 sends a single write request to update page 0 of the NAND flash memory 214.
The dirty bits for all blocks of page 0 are set to 0 in the cache 218 to indicate that the corresponding data is in the NAND flash memory 214. For example, as shown in cache detail 502 of FIG. 5, when the blocks of the cache 218 are filled with data from write requests write 0 through write N-1, each block is assigned a valid bit of 1 and a dirty bit of 1. As shown in updated cache detail 504 of FIG. 5, after evicting the data from block 0 and writing the data from all dirty blocks of page 0 to the NAND flash memory 214, block 0 is assigned a valid bit of 0, and blocks 2, N-1, and N are assigned a dirty bit of 0. Thus, the cache 218 indicates that the data from block 0 has been evicted and that the data from blocks 2, N-1, and N has been written to the NAND flash memory 214 of the SSD.
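Purely as an illustration of this write path, the independent sketch below accepts small writes into the cache and, on eviction, flushes every dirty block of the victim's flash page in one page write; the structure names (e.g., dirty_per_page, standing in for the list 222 of dirty cache blocks), the capacity values, the eviction choice, and the flash.write_page interface are assumptions and not taken from the patent.

```python
from collections import defaultdict

CACHE_CAPACITY = 4        # cache blocks held in DRAM; illustrative value only
BLOCKS_PER_PAGE = 4       # cache blocks per NAND flash page; illustrative value only

cache = {}                          # block address -> [data, dirty bit]
dirty_per_page = defaultdict(set)   # flash page -> addresses of dirty cache blocks

def page_of(block_addr):
    return block_addr // BLOCKS_PER_PAGE

def handle_write(flash, block_addr, data):
    """Write data to the cache; when the cache is full, evict one block and
    flush all dirty blocks of that block's flash page as a single page write."""
    if block_addr not in cache and len(cache) >= CACHE_CAPACITY:
        victim = next(iter(cache))                  # e.g., block 0 in FIG. 5
        page = page_of(victim)
        siblings = dirty_per_page[page]             # dirty blocks of the same page
        if siblings:
            # One write request updates the whole page (e.g., page 0 in FIG. 5).
            flash.write_page(page, {b: cache[b][0] for b in siblings})
            for b in siblings:
                cache[b][1] = 0                     # dirty bits of the page set to 0
            dirty_per_page[page].clear()
        del cache[victim]                           # valid bit of the victim set to 0
    cache[block_addr] = [data, 1]                   # small-granularity write, dirty bit set to 1
    dirty_per_page[page_of(block_addr)].add(block_addr)
```

Coalescing the dirty blocks of a page in this way is what reduces the number of back-end writes relative to forwarding every small write request to the NAND flash memory.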
Thus, fewer back-end writes are required for NAND flash 214 when writing data corresponding to the evicted block and all dirty blocks of the same page. While these asynchronous writes minimize the number of writes to the NAND flash memory 214, they may also result in potential inconsistencies. Accordingly, the SSD controller 202 selectively performs a first mode for asynchronous writing and a second mode for synchronous writing in which each write request is provided to the NAND flash memory.
FIG. 6 is a flow diagram illustrating a method for efficiently accessing an SSD hardware cache, according to an embodiment. At 602, a request is received from a host application at an SSD controller. The request may be a read request or a write request. When the request is a read request, data is read from the cache based on the read request at 604. When the SSD is offline, the data may be preloaded into the cache at predefined intervals. At 606, the data is provided from the cache to the host application.
When the request is a write request, at 608, the cache data is evicted from the cached block (e.g., the cached block is modified to evict the cache data) when the cache is full. At 610, the cache data and corresponding data are written from the cache to the NAND flash memory as data corresponding to a single write request. The cache manager may maintain a list of cached dirty blocks for each page of the NAND flash memory. The corresponding data may be from one or more blocks that belong to the same flash page as the block from which the cache data was evicted. In particular, one or more of the blocks may be dirty blocks having a dirty bit set to 1 in the cache. After writing the cache data and corresponding data to the NAND flash memory, dirty bits of one or more blocks may be set to 0 in the cache. At 612, data is written to the cached block based on the received write request.
The cache type may be set to be associative or direct-mapped, and the cache replacement policy may include LRU, LFU, etc.
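As a small, hedged illustration of the direct-mapped option mentioned above (the constants and function name are assumptions, not values from the patent):

```python
NUM_SLOTS = 1024      # number of cache blocks; illustrative value only
BLOCK_SIZE = 64       # bytes per cache block, matching the small write granularity above

def cache_slot(byte_addr):
    """Direct-mapped placement: each block maps to exactly one candidate cache slot."""
    block_addr = byte_addr // BLOCK_SIZE
    return block_addr % NUM_SLOTS
```

An associative cache would instead allow a block to occupy any of several slots, at the cost of a wider lookup, which is where replacement policies such as LRU or LFU come into play.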
Fig. 7 illustrates a block diagram of an electronic device 701 in a network environment 700, according to an embodiment. Referring to fig. 7, an electronic device 701 in a network environment 700 may communicate with an electronic device 702 via a first network 798 (e.g., a short-range wireless communication network) or with an electronic device 704 or server 708 via a second network 799 (e.g., a long-range wireless communication network). The electronic device 701 may communicate with the electronic device 704 via a server 708. The electronic device 701 may include a processor 720, a memory 730, an input device 750, a sound output device 755, a display device 760, an audio module 770, a sensor module 776, an interface 777, a haptic module 779, a camera module 780, a power management module 788, a battery 789, a communication module 790, a connection terminal 778, a Subscriber Identity Module (SIM) 796, or an antenna module 797. In one embodiment, at least one of the components (e.g., the display device 760 or the camera module 780) may be omitted from the electronic device 701, or one or more other components may be added to the electronic device 701. In one embodiment, some of the components may be implemented as a single Integrated Circuit (IC). For example, a sensor module 776 (e.g., a fingerprint sensor, iris sensor, or illuminance sensor) may be embedded in a display device 760 (e.g., a display).
Processor 720 may execute, for example, software (e.g., program 740) to control at least one other component (e.g., a hardware or software component) of the electronic device 701 coupled with the processor 720, and may perform various data processing or computations. The processor may correspond to a high-performance CPU (HCPU), or a combination of HCPU, embedded, and/or NAND CPUs of the SSD. As at least part of the data processing or computations, the processor 720 may load commands or data received from a host or another component (e.g., the sensor module 776 or the communication module 790) into the volatile memory 732, process the commands or data stored in the volatile memory 732, and store the resulting data in the non-volatile memory 734. The processor 720 may include a main processor 721 (e.g., a CPU or Application Processor (AP)) and an auxiliary processor 723 (e.g., a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a sensor hub processor, or a Communication Processor (CP)) that may operate independently of, or in conjunction with, the main processor 721. Additionally or alternatively, the auxiliary processor 723 may be adapted to consume less power than the main processor 721 or to perform certain functions. The auxiliary processor 723 may be implemented separately from the main processor 721 or as part of the main processor 721.
The auxiliary processor 723 may replace the main processor 721 when the main processor 721 is in an inactive (e.g., sleep) state, or control at least some of functions or states related to at least one component (e.g., the display device 760, the sensor module 776, or the communication module 790) among the components of the electronic device 701 together with the main processor 721 when the main processor 721 is in an active state (e.g., executing an application). According to one embodiment, the auxiliary processor 723 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., a camera module 780 or a communication module 790) that is functionally related to the auxiliary processor 723.
Memory 730 may store various data used by at least one component of electronic device 701, such as processor 720 or sensor module 776. The various data may include, for example, software (e.g., program 740) and input data or output data for commands associated therewith. Memory 730 may include volatile memory 732 or nonvolatile memory 734.
Programs 740 may be stored as software in memory 730 and may include, for example, an Operating System (OS) 742, middleware 744, or applications 746.
The input device 750 may receive commands or data from outside the electronic device 701 (e.g., a user) to be used by additional components of the electronic device 701 (e.g., the processor 720). Input device 750 may include, for example, a microphone, a mouse, or a keyboard.
The sound output device 755 may output a sound signal to the outside of the electronic device 701. The sound output device 755 may include, for example, a speaker or a receiver. The speaker may be used for general purposes (such as playing multimedia or recording) and the receiver may be used to receive incoming calls. According to one embodiment, the receiver may be implemented separate from or as part of the speaker.
The display device 760 may visually provide information to the outside of the electronic device 701 (e.g., a user). The display device 760 may include, for example, a display, a hologram device, or a projector, and a control circuit for controlling a corresponding one of the display, the hologram device, and the projector. According to one embodiment, the display device 760 may include touch circuitry adapted to detect touches or sensor circuitry (e.g., pressure sensors) adapted to measure the strength of forces caused by touches.
The audio module 770 may convert sound into an electrical signal and vice versa. According to one embodiment, the audio module 770 may obtain sound via the input device 750, or output sound via the sound output device 755 or a headphone of the external electronic device 702 directly (e.g., by wire) or wirelessly coupled with the electronic device 701.
The sensor module 776 may detect an operational state (e.g., power or temperature) of the electronic device 701 or an environmental state (e.g., a state of a user) external to the electronic device 701 and then generate an electrical signal or data value corresponding to the detected state. The sensor module 776 may include, for example, a gesture sensor, a gyroscope sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an Infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
Interface 777 may support one or more specific protocols for electronic device 701 to interface directly (e.g., wired) or wirelessly with external electronic device 702. According to one embodiment, interface 777 may comprise, for example, a High Definition Multimedia Interface (HDMI), a Universal Serial Bus (USB) interface, a Secure Digital (SD) card interface, or an audio interface.
The connection terminal 778 may include a connector via which the electronic device 701 may be physically connected with the external electronic device 702. According to one embodiment, the connection terminal 778 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).
The haptic module 779 may convert the electrical signal into mechanical stimulus (e.g., vibration or movement) or electrical stimulus that may be recognized by the user via a tactile or kinesthetic sensation. According to one embodiment, the haptic module 779 may include, for example, a motor, a piezoelectric element, or an electrostimulator.
The camera module 780 may capture still images or moving images. According to one embodiment, the camera module 780 may include one or more lenses, an image sensor, an image signal processor, or a flash.
The power management module 788 may manage power supplied to the electronic device 701. The power management module 788 may be implemented as at least a portion of a Power Management Integrated Circuit (PMIC), for example.
The battery 789 may supply power to at least one component in the electronic device 701. According to one embodiment, the battery 789 may include, for example, a non-rechargeable primary battery, a rechargeable secondary battery, or a fuel cell.
The communication module 790 may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 701 and an external electronic device (e.g., the electronic device 702, the electronic device 704, or the server 708), and performing communication via the established communication channel. The communication module 790 may include one or more communication processors that may operate independently of the processor 720 (e.g., an AP) and support direct (e.g., wired) or wireless communication. According to one embodiment, the communication module 790 may include a wireless communication module 792 (e.g., a cellular communication module, a short-range wireless communication module, or a Global Navigation Satellite System (GNSS) communication module) or a wired communication module 794 (e.g., a Local Area Network (LAN) communication module or a Power Line Communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network 798 (e.g., a short-range communication network, such as Bluetooth™, Wireless Fidelity (Wi-Fi) Direct, or the Infrared Data Association (IrDA) standard) or the second network 799 (e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., a LAN or Wide Area Network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single IC), or as multiple components (e.g., multiple ICs) separate from one another. The wireless communication module 792 may use subscriber information (e.g., an International Mobile Subscriber Identity (IMSI)) stored in the SIM 796 to identify and authenticate the electronic device 701 in a communication network, such as the first network 798 or the second network 799.
The antenna module 797 may transmit signals or power to or receive signals or power from outside of the electronic device 701 (e.g., an external electronic device). According to one embodiment, the antenna module 797 may include one or more antennas, and thus, at least one antenna suitable for a communication scheme used in a communication network, such as the first network 798 or the second network 799, may be selected by, for example, the communication module 790 (e.g., the wireless communication module 792). The signal or power may then be transmitted or received between the communication module 790 and the external electronic device via the selected at least one antenna.
At least some of the above components may be combined with each other and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., bus, general Purpose Input and Output (GPIO), serial Peripheral Interface (SPI), or Mobile Industrial Processor Interface (MIPI)).
According to one embodiment, commands or data may be sent or received between the electronic device 701 and the external electronic device 704 via the server 708 in conjunction with the second network 799. Each of the electronic devices 702 and 704 may be the same type as the type of the electronic device 701 or a different type of device. All or some of the operations to be performed at the electronic device 701 may be performed at one or more of the external electronic devices 702, 704, or 708. For example, if the electronic device 701 should perform a function or service automatically or in response to a request from a user or another device, the electronic device 701 may request one or more external electronic devices to perform at least a portion of the function or service instead of, or in addition to, the function or service. The external electronic device or devices receiving the request may execute at least a portion of the requested function or service, or additional functions or additional services related to the request, and transmit the result of the execution to the electronic device 701. The electronic device 701 may provide the results as at least a portion of a reply to the request with or without further processing of the results. To this end, for example, cloud computing, distributed computing, or client-server computing techniques may be used.
One embodiment may be implemented as software (e.g., program 740) comprising one or more instructions stored in a storage medium (e.g., internal memory 736 or external memory 738) readable by a machine (e.g., electronic device 701). For example, a processor of the electronic device 701 may invoke at least one of the one or more instructions stored in the storage medium and execute it under the control of the processor with or without the use of one or more other components. Thus, the machine may be operated to perform at least one function in accordance with the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. The term "non-transitory" indicates that the storage medium is a tangible device and does not include a signal (e.g., electromagnetic waves), but the term does not distinguish between a case where data is semi-permanently stored in the storage medium and a case where data is temporarily stored in the storage medium.
According to one embodiment, the disclosed methods may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), distributed (e.g., downloaded or uploaded) online via an application store (e.g., Play Store™), or distributed directly between two user devices (e.g., smartphones). If distributed online, at least a portion of the computer program product may be temporarily generated or at least temporarily stored in a machine-readable storage medium, such as a memory of the manufacturer's server, a server of the application store, or a relay server.
According to one embodiment, each of the above-described components (e.g., a module or program) may include a single entity or multiple entities. One or more of the above components may be omitted, or one or more other components may be added. Alternatively or additionally, multiple components (e.g., modules or programs) may be integrated into a single component. In this case, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as the one or more functions of each of the plurality of components were performed by the corresponding one of the plurality of components prior to integration. Operations performed by modules, programs, or additional components may be performed sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be performed in a different order or omitted, or one or more other operations may be added.
FIG. 8 shows a diagram of a storage system 800 according to an embodiment. The storage system 800 includes a host 802 and a storage 804. Although one host and one storage device are depicted, storage system 800 may include multiple hosts and/or multiple storage devices. The storage 804 may be an SSD, a Universal Flash Storage (UFS) device, or the like. The storage 804 includes a controller 806 and a storage medium 808 coupled to the controller 806. The controller 806 may be an SSD controller, a UFS controller, or the like. The storage medium 808 may include volatile memory, non-volatile memory, or both, and may include one or more flash memory chips (or other storage media). The controller 806 may include one or more processors, one or more error correction circuits, one or more Field Programmable Gate Arrays (FPGAs), one or more host interfaces, one or more flash bus interfaces, and the like, or a combination thereof. The controller 806 may be configured to facilitate transfer of data/commands between the host 802 and the storage medium 808. The host 802 sends data/commands to the storage 804 for receipt by the controller 806 and processing in conjunction with the storage medium 808. As described herein, the methods, processes, and algorithms may be implemented on a storage device controller (such as controller 806). The arbiter, command fetcher, and command processor may be implemented in the controller 806 of the storage 804, and the processor and buffer may be implemented in the host 802.
Although specific embodiments of the present disclosure have been described in the detailed description thereof, the disclosure may be modified in various forms without departing from the scope of the disclosure. Thus, the scope of the disclosure should be determined not only by the embodiments described, but by the appended claims and their equivalents.

Claims (20)

1. A method of accessing a memory device, comprising:
Receiving a write request at a memory device from a host device;
Modifying a block of a cache of the memory device to evict cache data;
Writing the cache data and corresponding data from the cache to a flash memory of the memory device as data corresponding to a single write request; and
Writing first data to the block of the cache based on the received write request.
2. The method of claim 1, wherein the cache is full based on receiving the write request.
3. The method of claim 1, wherein a cache manager of a controller of the memory device maintains a list of dirty blocks for each flash page of the flash memory.
4. The method of claim 1, wherein the corresponding data is from one or more blocks belonging to a same flash page of the flash memory as the block from which the cache data was evicted.
5. The method of claim 4, wherein the one or more blocks are indicated in a cache as dirty blocks.
6. The method of claim 5, further comprising: after writing the cache data and the corresponding data to the flash memory, removing an indication that the one or more blocks are dirty blocks.
7. The method of claim 1, wherein a size of the first data is smaller than a size of a flash page of the flash memory.
8. The method of any of claims 1 to 7, further comprising:
loading second data into the cache with the memory device in a first state;
receiving a read request from the host device with the memory device in a second state;
reading the second data from the cache based on the read request; and
outputting the second data to the host device.
9. The method of claim 8, wherein the cache is loaded according to a heuristic-based program.
10. A memory device, comprising:
a cache;
a flash memory; and
a controller configured to receive a write request from a host device,
wherein the controller includes a cache manager configured to:
modify a block of the cache to evict cache data;
write the cache data and corresponding data from the cache to the flash memory as data corresponding to a single write request; and
write first data to the block of the cache based on the received write request.
11. The memory device of claim 10, wherein the cache is full based on receiving the write request.
12. The memory device of claim 10, wherein the cache manager is further configured to maintain a list of dirty blocks for each flash page of the flash memory.
13. The memory device of claim 10, wherein the corresponding data is from one or more blocks belonging to a same flash page of the flash memory as the block from which the cache data was evicted.
14. The memory device of claim 13, wherein the one or more blocks are indicated as dirty blocks.
15. The memory device of claim 14, wherein the cache manager is further configured to remove, after writing the cache data and the corresponding data to the flash memory, the indication that the one or more blocks are dirty blocks.
16. The memory device of claim 10, wherein a size of the first data is smaller than a size of a flash page of the flash memory.
17. The memory device of any one of claims 10 to 16, wherein the controller is further configured to:
load second data into the cache with the memory device in a first state;
receive a read request from the host device with the memory device in a second state;
read the second data from the cache based on the read request; and
output the second data to the host device.
18. The memory device of claim 17, wherein the cache is loaded according to a heuristic-based program.
19. A method of accessing a memory device, comprising:
receiving, with a memory device in a first state, a request at the memory device from a host device;
in a case where the request is a read request, reading first data from a cache of the memory device based on the read request and outputting the first data to the host device, wherein the first data is loaded into the cache with the memory device in a second state; and
in a case where the request is a write request, modifying a block of the cache to evict cache data, writing the cache data and corresponding data from the cache to a flash memory of the memory device as data corresponding to a single write request, and writing second data to the block of the cache based on the received write request.
20. A memory device, comprising:
a cache;
a flash memory; and
a controller configured to receive a request from a host device with the memory device in a first state,
wherein the controller includes a cache manager configured to:
in a case where the request is a read request, read first data from the cache based on the read request and output the first data to the host device, wherein the first data is loaded into the cache with the memory device in a second state; and
in a case where the request is a write request, modify a block of the cache to evict cache data, write the cache data and corresponding data from the cache to the flash memory as data corresponding to a single write request, and write second data to the block of the cache based on the received write request.
CN202311247532.0A 2022-10-19 2023-09-26 Method and system for efficient access to solid state drives Pending CN117908766A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/417,504 2022-10-19
US18/080,211 2022-12-13
US18/080,211 US20240134801A1 (en) 2022-10-19 2022-12-13 Methods and system for efficient access to solid state drive

Publications (1)

Publication Number Publication Date
CN117908766A

Family

ID=90688413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311247532.0A Pending CN117908766A (en) 2022-10-19 2023-09-26 Method and system for efficient access to solid state drives



Legal Events

Date Code Title Description
PB01 Publication