US20230142948A1 - Techniques for managing context information for a storage device - Google Patents
- Publication number
- US20230142948A1 (application US 18/150,783)
- Authority
- US
- United States
- Prior art keywords
- silo
- volatile memory
- silos
- tier
- memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F3/0647—Migration mechanisms
- G06F3/065—Replication mechanisms
- G06F3/0613—Improving I/O performance in relation to throughput
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
- G06F3/064—Management of blocks
- G06F3/068—Hybrid storage device
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
- G06F2213/28—DMA
Definitions
- the described embodiments set forth techniques for managing context information for a non-volatile memory (e.g., a solid-state drive (SSD) of a computing device).
- the techniques involve segmenting the context information to increase the granularity by which it is transmitted between volatile and non-volatile memories, which can substantially enhance operational efficiency.
- Solid state drives (SSDs)—which utilize “flash” memory—can provide various advantages over standard magnetic-based hard disk drives (HDDs), such as considerably faster Input/Output (I/O) performance.
- I/O latency speeds provided by SSDs typically outperform those of HDDs because the I/O latency speeds of SSDs are less-affected when data is fragmented across the memory sectors of SSDs. This occurs because HDDs include a read head component that must be relocated each time data is read/written, which produces a latency bottleneck as the average contiguity of written data is reduced over time.
- SSDs, which are not bridled by read head components, can preserve I/O performance even as data fragmentation levels increase. SSDs also provide the benefit of increased impact tolerance (as there are no moving parts) and, in general, virtually limitless form factor potential.
- the size of the context information scales directly with the amount of data managed by the SSD.
- large-sized context information for a given SSD can lead to performance bottlenecks with regard to both (i) writing the context information (e.g., from a volatile memory) into the SSD, and (ii) restoring the context information when an inadvertent shutdown renders the context information out-of-date. Consequently, there exists a need for an improved technique for managing context information for data stored on SSDs to ensure that acceptable performance metrics remain intact even as the size of the context information scales with the ever-increasing capacities of SSDs.
- the described embodiments set forth techniques for managing context information for a non-volatile memory (e.g., a solid-state drive (SSD) of a computing device).
- the techniques involve partitioning the context information into a collection of “silos” that increase the granularity by which the context information is transferred between volatile and non-volatile memories. In this manner, periodic saves of the context information—as well as restorations of the context information in response to inadvertent shutdowns—can be performed more efficiently.
- one embodiment sets forth a method for managing context information for data stored within a non-volatile memory of a computing device.
- the method includes the initial steps of (1) loading, into a volatile memory of the computing device, the context information from the non-volatile memory, where the context information is separated into a plurality of silos, and (2) writing transactions into a log stored within the non-volatile memory (e.g., transactions generated by the computing device). Additionally, the method includes performing additional steps each time a particular condition is satisfied, e.g., whenever a threshold number of transactions are processed by the computing device and written into the log.
- the additional steps include (3) identifying a next silo of the plurality of silos to be written into the non-volatile memory (i.e., relative to a last-written silo), (4) updating the next silo to reflect the transactions that apply to the next silo, and (5) writing the next silo into the non-volatile memory.
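The save loop set forth in steps (1)-(5) above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; all names (ContextManager, record_transaction, and the transaction dictionary format) are assumptions, and the actual write into the non-volatile memory is stubbed out.

```python
# Illustrative sketch of the silo-save loop: transactions are logged, and
# every `threshold` transactions the next silo (round-robin from the
# last-written one) is updated and written out. All names are hypothetical.

class ContextManager:
    def __init__(self, num_silos, threshold):
        self.silos = [{"index": i, "entries": {}} for i in range(num_silos)]
        self.log = []               # transaction log (log band)
        self.last_written = -1      # index of the last silo written out
        self.threshold = threshold  # transactions per silo write

    def record_transaction(self, txn):
        """Step (2): write each transaction into the log."""
        self.log.append(txn)
        if len(self.log) % self.threshold == 0:
            self.save_next_silo()

    def save_next_silo(self):
        # Step (3): identify the next silo, round-robin from the last one.
        nxt = (self.last_written + 1) % len(self.silos)
        silo = self.silos[nxt]
        # Step (4): apply only the logged transactions that target this silo.
        for txn in self.log:
            if txn["silo"] == nxt:
                silo["entries"][txn["lba"]] = txn["physical"]
        # Step (5): write the silo into non-volatile memory (stubbed here).
        self.last_written = nxt
        return nxt
```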
- Another embodiment sets forth a method for restoring context information when an inadvertent shutdown of a computing device occurs.
- the method can include the steps of (1) identifying the context information within a non-volatile memory of the computing device (e.g., an SSD of the computing device), where the context information is separated into a plurality of silos, and (2) accessing a log stored within the non-volatile memory, where the log reflects a collection of transactions issued by the computing device.
- the method includes performing the following steps for each silo of the plurality of silos: (3) loading the silo into the volatile memory, and (4) in response to identifying, within the log, that at least one transaction (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory: updating the silo to reflect the at least one transaction.
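The restore procedure above can be sketched in the same illustrative style. The log format and the per-silo last_write_seq field are assumptions introduced for the example; the point illustrated is that a silo is only updated by transactions that both apply to it and postdate its last write.

```python
# Hypothetical sketch of the silo-by-silo restore procedure; names and
# log format are illustrative assumptions, not the patent's encoding.

def restore_context(silos_on_disk, log):
    """Reload each silo and replay only the log transactions that
    (i) apply to that silo and (ii) occurred after its last write."""
    restored = []
    for silo in silos_on_disk:                      # step (3): load each silo
        entries = dict(silo["entries"])
        for txn in log:                             # step (4): replay the log
            applies = txn["silo"] == silo["index"]
            newer = txn["seq"] > silo["last_write_seq"]
            if applies and newer:
                entries[txn["lba"]] = txn["physical"]
        restored.append({"index": silo["index"], "entries": entries})
    return restored
```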
- other embodiments include a non-transitory computer readable storage medium configured to store instructions that, when executed by a processor included in a computing device, cause the computing device to carry out the various steps of any of the foregoing methods. Further embodiments include a computing device that is configured to carry out the various steps of any of the foregoing methods.
- FIG. 1 illustrates a block diagram of different components of a system that is configured to implement the various techniques described herein, according to some embodiments.
- FIGS. 2 A- 2 C illustrate conceptual diagrams of example scenarios in which different silos can be transmitted, in a unified manner, between a volatile memory and a non-volatile memory by way of direct memory access (DMA), according to some embodiments.
- FIG. 3 sets forth a conceptual diagram of the manner in which data stored in non-volatile memory can be accessed through logical base addresses (LBAs) using the indirection techniques described herein, according to some embodiments.
- FIG. 4 illustrates a conceptual diagram of an example scenario that sets forth the manner in which first and second tier entries associated with a given silo can be used to reference data stored within a non-volatile memory, according to some embodiments.
- FIGS. 5 A- 5 F provide conceptual diagrams of an example scenario in which the various techniques described herein—i.e., the silo-based partitions and indirection paradigms—can be utilized to improve the overall operational efficiency of a computing device.
- FIG. 6 illustrates a method for managing context information for data stored within a non-volatile memory of a computing device, according to some embodiments.
- FIG. 7 illustrates a method for restoring context information when an inadvertent shutdown of a computing device occurs, according to some embodiments.
- FIG. 8 illustrates a detailed view of a computing device that can be used to implement the various components described herein, according to some embodiments.
- the embodiments disclosed herein set forth techniques for managing context information for data stored within a non-volatile memory (e.g., a solid-state storage device (SSD)) managed by a computing device.
- the techniques involve partitioning the context information into a collection of “silos” in order to increase the granularity by which the context information is transmitted between a volatile memory (e.g., a random-access memory (RAM)) of the computing device and the non-volatile memory of the computing device.
- the silos of which the context information is comprised can be sequentially accessed/restored (e.g., based on logged transactional information), which further reduces latency in comparison to restoring the context information in its entirety.
- FIG. 1 illustrates a block diagram 100 of a computing device 102 —e.g., a smart phone, a tablet, a laptop, a desktop, a server, etc.—that is configured to implement the various techniques described herein.
- the computing device 102 can include a processor 104 that, in conjunction with a volatile memory 106 (e.g., a dynamic random access memory (DRAM)) and a storage device 114 (e.g., a solid-state drive (SSD)), enables different software entities to execute on the computing device 102 .
- the processor 104 can be configured to load, from the storage device 114 into the volatile memory 106 , various components for an operating system (OS) 108 .
- the OS 108 can enable the computing device 102 to provide a variety of useful functions, e.g., loading/executing various applications 110 (e.g., user applications). It should be understood that the various hardware components of the computing device 102 illustrated in FIG. 1 are presented at a high level in the interest of simplification, and that a more detailed breakdown is provided below in conjunction with FIG. 8 .
- the storage device 114 can include a controller 116 that is configured to orchestrate the overall operation of the storage device 114 .
- the controller 116 can be configured to process input/output (I/O) requests—referred to herein as “transactions”—issued by the OS 108 /applications 110 to the storage device 114 .
- the controller 116 can include a parity engine for establishing various parity information for the data stored by the storage device 114 to improve overall recovery scenarios.
- the controller 116 can include additional entities that enable the implementation of the various techniques described herein without departing from the scope of this disclosure. It is further noted that these entities can be combined or split into additional entities without departing from the scope of this disclosure. It is additionally noted that the various entities described herein can be implemented using software-based or hardware-based approaches without departing from the scope of this disclosure.
- the storage device 114 can include a non-volatile memory 118 (e.g., flash memory) that is composed of a collection of dies.
- different “bands” can be established within the non-volatile memory 118 , where each band spans the collection of dies.
- one or more of the dies can be reserved by the storage device 114 —e.g., for overprovisioning-based techniques—without departing from the scope of this disclosure, such that a given band can span a subset of the dies that are available within the non-volatile memory 118 .
- the overall “width” of a band can be defined by the number of dies that the band spans.
- the overall “height” of the band can be defined by a number of “stripes” into which the band is separated.
- each stripe within the band can be separated into a collection of pages, where each page is disposed on a different die of the non-volatile memory 118 .
- when a given band spans five different dies—and is composed of five different stripes—a total of twenty-five (25) pages are included in the band, where each column of pages is disposed on the same die.
- the data within a given band can be separated across the non-volatile memory 118 in a manner that enables redundancy-based protection to be established without significantly impacting the overall performance of the storage device 114 .
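The band geometry described above reduces to simple arithmetic, sketched here with hypothetical helper names:

```python
# Worked example of the band geometry described above: a band's "width" is
# the number of dies it spans and its "height" is its number of stripes,
# so the band holds width x height pages. Helper names are illustrative.

def band_pages(num_dies, num_stripes):
    """Total pages in a band: one page per die per stripe."""
    return num_dies * num_stripes

def page_location(page, num_dies):
    """Pages are laid out stripe by stripe; each column of pages
    (a fixed die index) is disposed on the same die."""
    stripe, die = divmod(page, num_dies)
    return stripe, die

# The example from the text: five dies and five stripes yield 25 pages.
assert band_pages(5, 5) == 25
```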
- the aforementioned bands managed by the storage device 114 can include a log band 120 , an indirection band 122 , and a data band 124 .
- transactional information associated with the indirection band 122 /data band 124 —e.g., details associated with I/O requests processed by the controller 116 —can be written into the log band 120 .
- this transactional information can be utilized to restore the content of the indirection band 122 when an inadvertent shutdown of the computing device 102 renders at least a portion of the content out-of-date.
- the content stored in the indirection band 122 can include context information 112 that serves as a mapping table for data that is stored within the data band 124 .
- the context information 112 can be transmitted between the volatile memory 106 and the non-volatile memory 118 using direct memory access (DMA) 150 .
- DMA 150 can enable the processor 104 to play little or no role in the data transmissions between the volatile memory 106 and the non-volatile memory 118 , which can improve efficiency.
- any technique can be utilized to transmit data between the volatile memory 106 and the non-volatile memory 118 without departing from the scope of this disclosure. In any case, as shown in FIG. 1, the context information 112 can be separated into a collection of silos 130 .
- each silo 130 can include metadata 132 and a context information subset 134 .
- the metadata 132 for a given silo 130 can include descriptive information about the silo 130 , e.g., an index of the silo 130 (relative to the other silos 130 ), a size of the silo 130 , and so on.
- the context information subset 134 for a given silo 130 can include a respective portion of the context information 112 to which the silo 130 corresponds.
- the context information 112 can be organized into a hierarchy that includes first and second depth levels.
- the first depth level can correspond to a collection of first-tier entries
- the second depth level can correspond to a collection of second-tier entries.
- the first and second-tier entries can store data in accordance with different encoding formats that coincide with the manner in which the non-volatile memory 118 is partitioned into different sectors. For example, when each sector represents a 4 KB sector of memory, each first-tier entry can correspond to a contiguous collection of two hundred fifty-six (256) sectors.
- the value of a given first-tier entry can indicate whether the first-tier entry (1) directly refers to a physical location (e.g., an address of a starting sector) within the non-volatile memory 118 , or (2) directly refers (e.g., via a pointer) to one or more second-tier entries.
- when condition (1) is met, it is implied that all (e.g., the two hundred fifty-six (256)) sectors associated with the first-tier entry are contiguously written, which can provide a compression ratio of 1/256.
- this compression ratio can be achieved because the first-tier entry stores a pointer to a first sector of the two hundred fifty-six (256) sectors associated with the first-tier entry, where no second-tier entries are required.
- information included in the first-tier entry indicates (i) one or more second-tier entries that are associated with the first-tier entry, as well as (ii) how the information in the one or more second-tier entries should be interpreted.
- each second-tier entry can refer to one or more sectors, thereby enabling data to be disparately stored across the sectors of the non-volatile memory 118 .
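The two first-tier entry cases can be modeled as follows. The sector count (256) matches the example in the text, but the entry layout, field names, and lookup logic are illustrative assumptions rather than the patent's actual encoding:

```python
# Illustrative model of the two first-tier entry cases described above.
# Field names and structure are assumptions, not the patent's encoding.

SECTORS_PER_T1 = 256  # each first-tier entry covers 256 sectors (per the text)

def resolve(t1_entry, lba_offset):
    """Resolve an offset within a first-tier entry to a physical sector."""
    if t1_entry["pass_through"]:
        # Case (1): the entry points at the first of 256 contiguous sectors,
        # so one entry stands in for 256 mappings (compression ratio 1/256).
        return t1_entry["start_sector"] + lba_offset
    # Case (2): the entry points at second-tier entries, each mapping a
    # sub-range of the 256 sectors to its own physical location, which
    # allows the data to be disparately stored across sectors.
    for t2 in t1_entry["tier2"]:
        if t2["offset"] <= lba_offset < t2["offset"] + t2["count"]:
            return t2["start_sector"] + (lba_offset - t2["offset"])
    raise KeyError("unmapped offset")
```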
- a more detailed description of the first-tier entries and second-tier entries is provided below in conjunction with FIGS. 3-4.
- FIG. 1 provides a high-level overview of the manner in which the computing device 102 can be configured to implement the techniques described herein. A more detailed explanation of these techniques will now be provided below in conjunction with FIGS. 2A-2C, 3-4, 5A-5F, and 6-8.
- FIGS. 2 A- 2 C illustrate conceptual diagrams of example scenarios in which different silos 130 can be transmitted, in a unified manner, between the volatile memory 106 and the non-volatile memory 118 by way of direct memory access 150 , according to some embodiments.
- FIGS. 2 A- 2 C illustrate that the context information subset 134 of a given silo 130 —i.e., the first and second-tier entries that correspond to the silo 130 —can be stored separately from one another, yet remain capable of being transmitted between the volatile memory 106 and the non-volatile memory 118 in a unified manner.
- the techniques set forth herein enable the context information subset 134 of the silo 130 to be transmitted between the volatile memory 106 and the non-volatile memory 118 in the form of a snapshot-like image despite representing only a portion of the context information 112 .
- a Tier 1 space 202 can be configured to store the different first-tier entries that correspond to the silos 130 .
- the Tier 1 space 202 can be configured to represent a span of logical base addresses (LBAs), where the first-tier entries of each silo 130 correspond to a respective portion of the LBAs.
- each silo 130 can correspond to a respective 1/32 of the LBAs.
- the Tier 1 space 202 can be fixed in size, whereas a Tier 2 space 204 can be dynamically expanded/contracted to accommodate second-tier entries as they are established/removed over time.
- the embodiments can involve expanding the Tier 2 space 204 for all silos 130 even when only a single silo 130 is seeking to store additional second-tier entries (e.g., that cannot fit within existing Tier 2 space 204 ). For example, as indicated in FIG. 2 A , adding a new column into the Tier 2 space 204 effectively expands the Tier 2 space 204 for all of the silos 130 .
- the column can be removed from the Tier 2 space 204 .
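The shared expansion behavior described above can be sketched as a grid of columns common to all silos. The class, its field names, and the slot counts are hypothetical; the point illustrated is that growing the Tier 2 space for one silo grows it for every silo:

```python
# Illustrative model of the shared Tier 2 space: a grid of columns common
# to all silos, expanded when any one silo overflows its capacity. The
# class name, fields, and slot counts are assumptions for this sketch.

class Tier2Space:
    def __init__(self, num_silos, columns=1, slots_per_column=4):
        self.num_silos = num_silos
        self.columns = columns
        self.slots_per_column = slots_per_column
        self.used = [0] * num_silos  # second-tier entries held per silo

    def capacity_per_silo(self):
        """Every silo sees the same capacity: one row per column."""
        return self.columns * self.slots_per_column

    def add_entries(self, silo, count):
        # Expanding for one silo adds a column for all silos.
        while self.used[silo] + count > self.capacity_per_silo():
            self.columns += 1
        self.used[silo] += count
```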
- a first example can involve transmitting the silo 130 - 0 —specifically, the context information subset 134 - 0 of the silo 130 - 0 —between the volatile memory 106 and the non-volatile memory 118 using direct memory access 150 .
- the first example can involve transmitting, in a unified manner, (1) the first-tier entries associated with the silo 130 - 0 —illustrated in FIG. 2 A as Silo_ 0 Tier 1 entries 208 - 0 —and (2) the second-tier entries associated with the silo 130 - 0 —illustrated in FIG. 2 A as Silo_ 0 Tier 2 entries 210 - 0 .
- the overall layout of the context information subset 134 (i.e., Silo_ 0 Tier 1 entries 208 - 0 /Silo_ 0 Tier 2 entries 210 - 0 ) can be maintained when transmitted between the volatile memory 106 and the non-volatile memory 118 such that little operational overhead is required.
- the context information subset 134 - 0 (of the silo 130 - 0 ) can be written into a corresponding area of the context information 112 in the indirection band 122 without requiring a reorganization/reformatting of the context information subset 134 - 0 .
- the context information subset 134 - 0 can be written into an available area of the volatile memory 106 (e.g., allocated for the context information 112 ) without requiring a reorganization/reformatting of the context information subset 134 - 0 .
- the silos 130 can be transmitted between the volatile memory 106 and the non-volatile memory 118 in a unified/snapshot-like manner, thereby substantially enhancing efficiency.
- the direct memory access 150 techniques described herein can enable both the volatile memory 106 and the non-volatile memory 118 to directly-transmit the context information subsets 134 of the silos 130 between one another without requiring intensive involvement of the processor 104 , thereby further enhancing operational efficiency.
- FIGS. 2 B- 2 C provide further examples of silo 130 transfers between the volatile memory 106 and the non-volatile memory 118 .
- FIGS. 2 B- 2 C further-convey the notion that the context information subsets 134 of different silos 130 can be separately stored from one another, yet remain capable of being transmitted between the volatile memory 106 and the non-volatile memory 118 in a unified manner.
- FIG. 2 B illustrates an additional example that involves transmitting the silo 130 - 1 —specifically, the context information subset 134 - 1 of the silo 130 - 1 —between the volatile memory 106 and the non-volatile memory 118 using direct memory access 150 .
- FIG. 2C illustrates another example that involves transmitting the silo 130-J—specifically, the context information subset 134-J of the silo 130-J—between the volatile memory 106 and the non-volatile memory 118 using direct memory access 150 .
- FIGS. 2 A- 2 C illustrate conceptual diagrams of example scenarios in which different silos 130 can be transmitted, in a unified manner, between the volatile memory 106 and the non-volatile memory 118 by way of direct memory access 150 , according to some embodiments. It is noted that direct memory access 150 is not a requirement of the embodiments set forth herein, and that any approach can be utilized when transferring the silos 130 between the volatile memory 106 and the non-volatile memory 118 .
- FIG. 3 sets forth a conceptual diagram 300 of the manner in which data stored in non-volatile memory 118 (e.g., in the data band 124 ) can be accessed through logical base addresses (LBAs) using the indirection techniques described herein, according to some embodiments.
- an example LBA encoding scheme 302 can include a Tier 1 index 304 , a silo index 306 , and a Tier 1 offset 308 . It is noted that the number of bits allocated to each of the Tier 1 index 304 , the silo index 306 , and the Tier 1 offset 308 is not drawn to scale in FIG. 3 , and that these fields can be assigned any number of bits without departing from the scope of this disclosure.
- the Tier 1 index 304 /silo index 306 can collectively refer to a particular group of first-tier entries (e.g., Silo_ 0 Tier 1 entries 208 - 0 ) associated with a particular silo 130
- the Tier 1 offset 308 can refer to a particular first-tier entry within the particular group of first-tier entries (e.g., Silo_ 0 Tier 1 entry 208 - 0 - 0 ).
- each first-tier entry can refer to a physical location (e.g., via an address of a starting sector) within the non-volatile memory 118 .
- alternatively, each first-tier entry can refer to at least one second-tier entry (e.g., the Silo_ 0 Tier 2 entry 210 - 0 - 0 - 0 within the Silo_ 0 Tier 2 entries 210 - 0 - 0 ), where each second-tier entry can refer to one or more sectors of the non-volatile memory 118 .
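A hypothetical decoding of the LBA scheme of FIG. 3 is sketched below. The patent notes that the fields are not drawn to scale and gives no bit widths, so the widths here are arbitrary assumptions chosen only to make the example concrete:

```python
# Hypothetical decoding of the LBA scheme in FIG. 3. The bit widths are
# arbitrary assumptions (the patent assigns none); the field order follows
# the figure: Tier 1 index, then silo index, then Tier 1 offset.

SILO_BITS = 5       # assumed: supports 32 silos
T1_OFFSET_BITS = 8  # assumed: 256 first-tier entries per group

def decode_lba(lba):
    """Split an LBA into (tier1_index, silo_index, tier1_offset), where
    tier1_index/silo_index select a group of first-tier entries and
    tier1_offset selects one entry within that group."""
    t1_offset = lba & ((1 << T1_OFFSET_BITS) - 1)
    silo = (lba >> T1_OFFSET_BITS) & ((1 << SILO_BITS) - 1)
    t1_index = lba >> (T1_OFFSET_BITS + SILO_BITS)
    return t1_index, silo, t1_offset
```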
- FIG. 4 illustrates a conceptual diagram 400 of an example scenario that sets forth the manner in which first and second tier entries associated with a given silo 130 —in particular, the silo 130 - 0 —can be used to reference data stored within different sectors 402 of the non-volatile memory 118 , according to some embodiments.
- the Silo_ 0 Tier 1 entry 208 - 0 - 5 can represent a pass-through first-tier entry that corresponds to a contiguous span of sectors 402 (as previously described herein).
- at least one of the Silo_ 0 Tier 1 entries 208 - 0 in particular, the Silo_ 0 Tier 1 entry 208 - 0 - 1 —references at least one of the Silo_ 0 Tier 2 entries 210 - 0 - 0 —in particular, the Silo_ 0 Tier 2 entry 210 - 0 - 0 - 0 .
- the Silo_ 0 Tier 2 entry 210 - 0 - 0 - 0 along with any other Silo_ 0 Tier 2 entries 210 - 0 - 0 that correspond to the Silo_ 0 Tier 1 entry 208 - 0 - 1 —establish an indirect reference between the Silo_ 0 Tier 1 entry 208 - 0 - 1 and at least one sector 402 of the non-volatile memory 118 .
- the indirection techniques described herein enable each LBA to refer to content stored in the non-volatile memory 118 through only one or two levels of hierarchy, thereby providing a highly efficient architecture on which the various techniques described herein can be implemented.
- FIGS. 5 A- 5 F provide conceptual diagrams of an example scenario in which the various techniques described herein—i.e., the silo-based partitions and indirection paradigms—can be utilized to improve the overall operational efficiency of the computing device 102 .
- the example scenario illustrated in FIGS. 5 A- 5 B involves efficiently writing four (4) of six (6) total silos 130 from the volatile memory 106 into the non-volatile memory 118 as transactions are received and carried out by the controller 116 .
- whereas the example scenario illustrated in FIGS. 5 A- 5 B involves writing the silos 130 into the non-volatile memory 118 , the example scenario illustrated in FIGS. 5 C- 5 F involves the controller 116 (1) encountering an inadvertent shutdown that compromises the overall coherency of the six silos 130 in the non-volatile memory 118 , and (2) efficiently carrying out a procedure to restore the coherency of the six silos 130 .
- it is noted that the example scenario of FIGS. 5 A- 5 F involves six silos 130 in the interest of simplifying this disclosure, and that any number of silos 130 can be implemented without departing from the scope of this disclosure.
- a first step in FIG. 5 A occurs after previous transactions 501 are processed and cause the silo 130 - 5 to be the last-written silo 130 from the volatile memory 106 to the non-volatile memory 118 .
- the silo 130 - 4 is the last-written silo 130 relative to the silo 130 - 5
- the silo 130 - 3 is the last-written silo 130 relative to silo 130 - 4 , and so on.
- a round-robin approach is utilized such that a successive silo 130 (relative to a previous silo 130 ) is written from the volatile memory 106 into the non-volatile memory 118 in accordance with different conditions being met, e.g., a threshold number of transactions being received, an amount of time lapsing, a particular functionality being executed (e.g., garbage collection, defragmentation, etc.), and the like.
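- the round-robin selection just described can be sketched as follows (a hedged illustration with assumed names and an assumed transaction-count trigger; elapsed time or maintenance events such as garbage collection could equally serve as conditions):

```python
NUM_SILOS = 6  # matches the six-silo example scenario

def next_silo(last_written):
    """Return the index of the successive silo relative to the last-written one."""
    return (last_written + 1) % NUM_SILOS

def should_save(pending_transactions, threshold=64):
    """One possible trigger condition: a threshold number of logged
    transactions has been reached since the previous context save."""
    return pending_transactions >= threshold
```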
- the first step involves the controller 116 receiving and processing a number of transactions 502 .
- each transaction can represent one or more I/O requests that are directed toward the storage device 114 .
- a transaction 502 can involve writing, modifying, or removing data from the data band 124 within the non-volatile memory 118 .
- the foregoing example is not meant to be limiting, and that the transactions described herein encompass any form of I/O operation(s) directed toward the non-volatile memory 118 of the storage device 114 .
- transactional information associated with each of the transactions 502 can be recorded within the log band 120 within the non-volatile memory 118 .
- the transactional information can include pointers to the context information 112 stored within the indirection band 122 .
- these pointers can enable an efficient restoration of the context information 112 to be carried out in response to inadvertent shutdowns of the computing device 102 , the details of which are described below in conjunction with FIGS. 5 C- 5 F .
- different log files can be managed within the log band 120 , and can be used to store transactional information associated with the transactions as they are processed.
- redundant copies of log file portions can be stored within the log band 120 , thereby improving the efficacy of recovery procedures even when severe failure events take place.
- for each log file portion stored on a first die of the non-volatile memory 118 , a copy of the log file portion can be stored on a second (i.e., different) die of the non-volatile memory 118 .
- each log file portion can be recovered even when the first or the second die fails within the non-volatile memory 118 .
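- the die-level redundancy described above can be illustrated with the following sketch, in which the die-selection scheme (primary die chosen from the portion number, copy placed on the next die) is an assumption made purely for the example:

```python
def write_log_portion(dies, portion_id, data, num_dies):
    """Write a log portion to two different dies so it survives either die failing."""
    primary = portion_id % num_dies
    secondary = (primary + 1) % num_dies  # guaranteed to be a different die
    dies[primary][portion_id] = data
    dies[secondary][portion_id] = data
    return primary, secondary

def read_log_portion(dies, portion_id, failed_die, num_dies):
    """Recover a log portion from whichever copy resides on a healthy die."""
    primary = portion_id % num_dies
    secondary = (primary + 1) % num_dies
    die = secondary if primary == failed_die else primary
    return dies[die][portion_id]
```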
- the controller 116 can be configured to carry out a context save 504 in response to identifying that a threshold number of transactions have been processed. It is noted, however, that the controller 116 can be configured to carry out context saves in response to other conditions being satisfied. For example, the controller 116 can be configured to periodically carry out context saves regardless of the number of transactions that have been processed. In another example, the controller 116 can be configured to carry out context saves in response to different types of events being completed, e.g., garbage collection events, defragmentation events, and so on. It is noted that the foregoing examples are not meant to represent an exhaustive list, and that any number of conditions, associated with any aspects of the operation of the computing device 102 , can cause the controller 116 to carry out context saves described herein.
- the context save 504 can involve (1) updating the silo 130 - 0 to reflect the transactions 502 , and (2) writing the silo 130 - 0 from the volatile memory 106 into the non-volatile memory 118 .
- writing the silo 130 - 0 can involve transmitting all or a portion of the information associated with the silo 130 - 0 , e.g., the metadata 132 - 0 , the context information subset 134 - 0 , etc., into a corresponding area within the context information 112 stored within the indirection band 122 .
- the silo 130 - 0 can be placed into a locked state prior to the silo 130 - 0 being updated/written from the volatile memory 106 into the non-volatile memory 118 to ensure that the state of the silo 130 - 0 is not inappropriately modified.
- the context save 504 can involve writing information into the log band 120 to indicate whether the silo 130 - 0 was successfully written into the non-volatile memory 118 . For example, when the silo 130 - 0 is successfully written from the volatile memory 106 to the non-volatile memory 118 , the controller 116 can generate a key that corresponds to the silo 130 - 0 , and place the key into the log band 120 .
- the log band 120 can be parsed at a later time to identify the last-written silo 130 among the silos 130 .
- the indication of the last-written silo 130 enables the recovery techniques described herein to be implemented in an efficient manner.
- the second step illustrated in FIG. 5 A involves (1) writing transactions 506 into the log band 120 , and (2) in accordance with a context save 508 , updating the silo 130 - 1 /writing the silo 130 - 1 from the volatile memory 106 into the non-volatile memory 118 .
- the third step of FIG. 5 B involves (1) writing transactions 510 into the log band 120 , and (2) in accordance with a context save 512 , updating the silo 130 - 2 /writing the silo 130 - 2 from the volatile memory 106 into the non-volatile memory 118 .
- the fourth step of FIG. 5 B involves (1) writing transactions 514 into the log band 120 , and (2) in accordance with a context save 516 , updating the silo 130 - 3 /writing the silo 130 - 3 from the volatile memory 106 into the non-volatile memory 118 .
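- taken together, the steps above can be sketched as a single context-save routine. The names and data shapes are illustrative, but the locking, selective update, persistence, and key-logging behavior mirror the description above:

```python
def context_save(silo, pending, log_band, nonvolatile):
    """Lock the silo, fold in its pending transactions, persist it, and
    log a key identifying it as the last-written silo."""
    silo["locked"] = True  # ensure the silo is not inappropriately modified mid-save
    try:
        for txn in pending:
            if txn["silo"] == silo["id"]:  # only transactions that apply to this silo
                silo["entries"][txn["lba"]] = txn["sector"]
        nonvolatile[silo["id"]] = dict(silo["entries"])  # write silo into non-volatile memory
        log_band.append({"key": silo["id"]})  # record success for later recovery parsing
    finally:
        silo["locked"] = False
    return silo
```

Parsing the log band for the most recent key is what later allows the last-written silo to be identified during recovery.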
- FIGS. 5 A- 5 B provide a detailed understanding of the benefits that can be achieved through segmenting the context information 112 when writing the context information 112 from the volatile memory 106 into the non-volatile memory 118 .
- these benefits can also apply to recovery scenarios in which the context information 112 is rendered out-of-date and needs to be restored in accordance with the transaction information stored in the log band 120 .
- an inadvertent shutdown of the computing device 102 can cause a scenario in which (1) at least one transaction that affects a particular silo 130 has been written into the log band 120 , and (2) the silo 130 has not been written from the volatile memory 106 into the non-volatile memory 118 .
- the silo 130 stored within the non-volatile memory 118 is out-of-date, as the state of the silo 130 does not appropriately reflect the at least one transaction. Accordingly, it is necessary to restore the silo 130 to an up-to-date state (in accordance with the at least one transaction) to ensure that the storage device 114 —and the computing device 102 as a whole—are operating correctly.
- FIG. 5 C continues the example scenario illustrated in FIGS. 5 A- 5 B , and involves a fifth step in which an inadvertent shutdown 520 of the computing device 102 occurs (1) after transactions 518 are written into the log band 120 , but (2) before the silo 130 - 4 is written from the volatile memory 106 into the non-volatile memory 118 .
- a sixth step illustrated in FIG. 5 C involves the controller 116 initializing a recovery procedure (e.g., during a boot, reboot, wakeup, etc., of the computing device 102 ) to restore the context information 112 .
- the sixth step involves the controller 116 identifying that the silo 130 - 3 was the last silo 130 that was written from the volatile memory 106 into the non-volatile memory 118 .
- the controller 116 can reference the log band 120 —e.g., the transaction logs, the keys stored therein, etc.—to identify that the silo 130 - 3 was the last-written silo 130 .
- the controller 116 can load the silo 130 - 4 into the volatile memory 106 .
- the controller 116 loads the silo 130 - 4 because the silo 130 - 4 is the most out-of-date silo 130 relative to the other silos 130 , with the assumption that the silos 130 are written in a sequential, circular, and repetitive fashion (e.g., as described in FIGS. 5 A- 5 B ).
- the controller 116 can identify (e.g., within the transaction information stored in the log band 120 ) any transactions that (1) apply to the silo 130 - 4 , and (2) occurred after the silo 130 - 4 was last-written from the volatile memory 106 into the non-volatile memory 118 .
- the controller 116 can “replay” the transactions against the silo 130 - 4 —in particular, the context information subset 134 - 4 of the silo 130 - 4 —in accordance with the transactions. This can involve, for example, updating first/second tier entries included in the context information subset 134 - 4 so that they reference the appropriate areas of the non-volatile memory 118 (in accordance with the transactions).
- when the transactions have been effectively replayed, the silo 130 - 4 is in an up-to-date state, and the silo 130 - 4 can optionally be written from the volatile memory 106 into the non-volatile memory 118 . Additionally, the transaction information stored in the log band 120 can be updated to reflect that the silo 130 - 4 has been successfully written. In this manner, if another inadvertent shutdown occurs during the recovery procedure, the same updates made to the silo 130 - 4 during the restoration of the sixth step of FIG. 5 C will not need to be carried out again, thereby increasing efficiency. Alternatively, the silo 130 - 4 will be written from the volatile memory 106 into the non-volatile memory 118 in due course, e.g., when the computing device 102 resumes normal operation after the recovery procedure is completed.
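- the replay described above can be sketched as follows, assuming each logged transaction carries a sequence number and that the silo's last write is identified by a sequence number recovered from the log band 120 (both assumptions made for the example):

```python
def replay_silo(silo, log, last_write_seq):
    """Reapply logged transactions that (1) apply to this silo and
    (2) occurred after the silo was last written out."""
    for txn in log:
        if txn["silo"] == silo["id"] and txn["seq"] > last_write_seq:
            # Update the first/second tier entry for this LBA so that it
            # references the appropriate area of the non-volatile memory.
            silo["entries"][txn["lba"]] = txn["sector"]
    return silo
```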
- FIGS. 5 D- 5 F illustrate steps seven through eleven of the recovery procedure, which involve restoring each of the remaining five silos 130 - 5 , 130 - 0 , 130 - 1 , 130 - 2 , and 130 - 3 .
- step seven illustrated in FIG. 5 D illustrates a recovery procedure for the silo 130 - 5 that is carried out by the controller 116 .
- step eight illustrated in FIG. 5 D illustrates a recovery procedure for the silo 130 - 0 that is carried out by the controller 116 .
- step nine illustrated in FIG. 5 E illustrates a recovery procedure for the silo 130 - 1 that is carried out by the controller 116 .
- step ten illustrated in FIG. 5 E illustrates a recovery procedure for the silo 130 - 2 that is carried out by the controller 116 .
- step eleven illustrated in FIG. 5 F illustrates a recovery procedure for the silo 130 - 3 that is carried out by the controller 116 .
- each of the six silos 130 have been properly restored, whereupon the computing device 102 /storage device 114 can enter back into a normal operating mode and process new transactions 550 .
- FIGS. 6 - 7 illustrate method diagrams that can be carried out to implement the various techniques described herein, which will now be described below in greater detail.
- FIG. 6 illustrates a method 600 for managing context information for data stored within a non-volatile memory of a computing device, according to some embodiments.
- the method 600 begins at step 602 , and involves loading context information into a volatile memory (of the computing device) from the non-volatile memory, where the context information is separated into a plurality of silos (e.g., as described above in conjunction with FIGS. 2 A- 2 C ).
- Step 604 involves writing transactions into a log stored within the non-volatile memory (e.g., as described above in conjunction with FIGS. 5 A- 5 B ).
- Step 606 involves determining whether at least one condition is satisfied (e.g., the conditions described above in conjunction with FIG. 5 A ). If, at step 606 , it is determined that the at least one condition is satisfied, then the method 600 proceeds to step 608 . Otherwise, the method 600 proceeds back to step 604 , where transactions are received/written into the log (until the at least one condition is satisfied).
- Step 608 involves identifying a next silo of the plurality of silos to be written into the non-volatile memory (e.g., as described above in conjunction with FIGS. 5 A- 5 B ).
- Step 610 involves updating the next silo to reflect the transactions that apply to the next silo (e.g., as described above in conjunction with FIGS. 5 A- 5 B ).
- Step 612 involves writing the next silo into the non-volatile memory (e.g., as described above in conjunction with FIGS. 5 A- 5 B ).
- the method can return to step 604 , such that the silos are updated in a round-robin fashion in accordance with the transactions that are processed.
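- the steps of the method 600 can be sketched as a single loop. This is a simplified illustration with assumed data shapes (silos modeled as plain dictionaries), not the controller 116 's actual implementation:

```python
def manage_context(silos, incoming, log, nonvolatile, threshold=2):
    """Log transactions and, each time the threshold condition is met,
    flush the next silo in round-robin order."""
    last_written = -1
    pending = 0
    for txn in incoming:
        log.append(txn)  # step 604: write the transaction into the log
        pending += 1
        if pending >= threshold:  # step 606: condition satisfied?
            last_written = (last_written + 1) % len(silos)  # step 608: next silo
            silo = silos[last_written]
            for t in log:
                if t["silo"] == last_written:
                    silo[t["lba"]] = t["sector"]  # step 610: update the silo
            nonvolatile[last_written] = dict(silo)  # step 612: write it out
            pending = 0
    return nonvolatile
```

Note that transactions affecting a silo that arrive after its write remain unflushed until the round-robin returns to that silo, which is precisely the staleness that the restoration procedure of the method 700 addresses.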
- FIG. 7 illustrates a method 700 for restoring context information when an inadvertent shutdown of a computing device occurs, according to some embodiments.
- the method 700 begins at step 702 , and involves identifying, during a startup procedure (e.g., a boot, a reboot, a wakeup, etc.), context information within a non-volatile memory, where the context information is separated into a plurality of silos (e.g., as described above in conjunction with FIGS. 2 A- 2 C ).
- Step 704 involves accessing a log stored within the non-volatile memory (e.g., as described above in conjunction with FIGS. 5 C- 5 F ).
- Step 706 involves carrying out steps 708 - 714 for each silo of the plurality of silos.
- step 708 involves loading the silo into the volatile memory (e.g., as described above in conjunction with FIGS. 5 C- 5 F ).
- step 710 involves determining whether at least one transaction in the log (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory (e.g., as described above in conjunction with FIGS. 5 C- 5 F ).
- step 710 If, at step 710 , it is determined that at least one transaction in the log (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory, then the method 700 proceeds to step 712 . Otherwise, the method 700 proceeds back to step 706 , which involves processing a next silo (if any) of the plurality of silos, or the method 700 ends.
- Step 712 involves updating the silo to reflect the at least one transaction (e.g., as described above in conjunction with FIGS. 5 C- 5 F ).
- Step 714 involves writing the silo into the non-volatile memory (e.g., as described above in conjunction with FIGS. 5 C- 5 F ).
- the method can proceed back to step 706 , which involves processing a next silo (if any) of the plurality of silos, or ending the method 700 .
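- the steps of the method 700 can be sketched as follows (simplified, with assumed data shapes; the hypothetical `load_silo` and `write_back` callables stand in for the memory transfers described above):

```python
def restore_context(silo_ids, load_silo, log, last_write_seq, write_back):
    """For each silo, load it and, if the log holds newer transactions
    that apply to it, replay them and write the silo back out."""
    for sid in silo_ids:  # step 706: process each silo of the plurality
        silo = load_silo(sid)  # step 708: load the silo into volatile memory
        newer = [t for t in log
                 if t["silo"] == sid and t["seq"] > last_write_seq[sid]]  # step 710
        if newer:
            for t in newer:
                silo[t["lba"]] = t["sector"]  # step 712: update the silo
            write_back(sid, silo)  # step 714: write the silo back out
```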
- this disclosure primarily involves the controller 116 carrying out the various techniques described herein for the purpose of unified language and simplification.
- other entities can be configured to carry out these techniques without departing from this disclosure.
- other software components (e.g., the OS 108 , applications 110 , firmware(s), etc.) executing on the computing device 102 can be configured to carry out all or a portion of the techniques described herein without departing from the scope of this disclosure.
- other hardware components included in the computing device 102 can be configured to carry out all or a portion of the techniques described herein without departing from the scope of this disclosure.
- all or a portion of the techniques described herein can be offloaded to another computing device without departing from the scope of this disclosure.
- FIG. 8 illustrates a detailed view of a computing device 800 that can be used to implement the various components described herein, according to some embodiments.
- the computing device 800 can include a processor 802 that represents a microprocessor or controller for controlling the overall operation of computing device 800 .
- the computing device 800 can also include a user input device 808 that allows a user of the computing device 800 to interact with the computing device 800 .
- the user input device 808 can take a variety of forms, such as a button, keypad, dial, touch screen, audio input interface, visual/image capture input interface, input in the form of sensor data, etc.
- the computing device 800 can include a display 810 (screen display) that can be controlled by the processor 802 to display information to the user.
- a data bus 816 can facilitate data transfer between at least a storage device 840 , the processor 802 , and a controller 813 .
- the controller 813 can be used to interface with and control different equipment through an equipment control bus 814 .
- the computing device 800 can also include a network/bus interface 811 that couples to a data link 812 . In the case of a wireless connection, the network/bus interface 811 can include a wireless transceiver.
- the computing device 800 also includes a storage device 840 , which can comprise a single disk or a plurality of disks (e.g., SSDs), and includes a storage management module that manages one or more partitions within the storage device 840 .
- storage device 840 can include flash memory, semiconductor (solid state) memory or the like.
- the computing device 800 can also include a Random-Access Memory (RAM) 820 and a Read-Only Memory (ROM) 822 .
- the ROM 822 can store programs, utilities or processes to be executed in a non-volatile manner.
- the RAM 820 can provide volatile data storage, and stores instructions related to the operation of the computing device 800 .
- the various aspects, embodiments, implementations or features of the described embodiments can be used separately or in any combination.
- Various aspects of the described embodiments can be implemented by software, hardware or a combination of hardware and software.
- the described embodiments can also be embodied as computer readable code on a computer readable medium.
- the computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, hard disk drives, solid state drives, and optical data storage devices.
- the computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Abstract
Disclosed herein are techniques for managing context information for data stored within a non-volatile memory of a computing device. According to some embodiments, the method can include (1) loading, into a volatile memory of the computing device, the context information from the non-volatile memory, where the context information is separated into a plurality of silos, (2) writing transactions into a log stored within the non-volatile memory, and (3) each time a condition is satisfied: (i) identifying a next silo of the plurality of silos to be written into the non-volatile memory, (ii) updating the next silo to reflect the transactions that apply to the next silo, and (iii) writing the next silo into the non-volatile memory. In turn, when an inadvertent shutdown of the computing device occurs, the silos of which the context information is comprised can be sequentially accessed and restored in an efficient manner.
Description
- The present application is a continuation of U.S. patent application Ser. No. 15/721,081, entitled “TECHNIQUES FOR MANAGING CONTEXT INFORMATION FOR A STORAGE DEVICE,” filed Sep. 29, 2017, the content of which is incorporated by reference herein in its entirety for all purposes.
- The described embodiments set forth techniques for managing context information for a non-volatile memory (e.g., a solid-state drive (SSD) of a computing device). In particular, the techniques involve segmenting the context information to increase the granularity by which it is transmitted between volatile and non-volatile memories, which can substantially enhance operational efficiency.
- Solid state drives (SSDs) are a type of storage device that share a similar footprint with (and provide similar functionality as) traditional magnetic-based hard disk drives (HDDs). Notably, standard SSDs—which utilize “flash” memory—can provide various advantages over standard HDDs, such as considerably faster Input/Output (I/O) performance. For example, average I/O latency speeds provided by SSDs typically outperform those of HDDs because the I/O latency speeds of SSDs are less-affected when data is fragmented across the memory sectors of SSDs. This occurs because HDDs include a read head component that must be relocated each time data is read/written, which produces a latency bottleneck as the average contiguity of written data is reduced over time. Moreover, when fragmentation occurs within HDDs, it becomes necessary to perform resource-expensive defragmentation operations to improve or restore performance. In contrast, SSDs, which are not bridled by read head components, can preserve I/O performance even as data fragmentation levels increase. SSDs also provide the benefit of increased impact tolerance (as there are no moving parts), and, in general, virtually limitless form factor potential. These advantages—combined with the increased availability of SSDs at consumer-affordable prices—make SSDs a preferable choice for mobile devices such as laptops, tablets, and smart phones.
- Despite the foregoing benefits provided by SSDs, some drawbacks remain that have yet to be addressed. In particular, for a given SSD, the size of the organizational data for managing data stored by the SSD—referred to herein as “context information”—scales directly with the amount of data managed by the SSD. This presents a problem given that the overall storage capacities of SSDs are only increasing with time, thereby leading to increased size requirements for the context information. For example, large-sized context information for a given SSD can lead to performance bottlenecks with regard to both (i) writing the context information (e.g., from a volatile memory) into the SSD, and (ii) restoring the context information when an inadvertent shutdown renders the context information out-of-date. Consequently, there exists a need for an improved technique for managing context information for data stored on SSDs to ensure that acceptable performance metrics remain intact even as the size of the context information scales with the ever-increasing capacities of SSDs.
- The described embodiments set forth techniques for managing context information for a non-volatile memory (e.g., a solid-state drive (SSD) of a computing device). In particular, the techniques involve partitioning the context information into a collection of “silos” that increase the granularity by which the context information is transferred between volatile and non-volatile memories. In this manner, periodic saves of the context information—as well as restorations of the context information in response to inadvertent shutdowns—can be performed more efficiently.
- Accordingly, one embodiment sets forth a method for managing context information for data stored within a non-volatile memory of a computing device. According to some embodiments, the method includes the initial steps of (1) loading, into a volatile memory of the computing device, the context information from the non-volatile memory, where the context information is separated into a plurality of silos, and (2) writing transactions into a log stored within the non-volatile memory (e.g., transactions generated by the computing device). Additionally, the method includes performing additional steps each time a particular condition is satisfied, e.g., whenever a threshold number of transactions are processed by the computing device and written into the log. In particular, the additional steps include (3) identifying a next silo of the plurality of silos to be written into the non-volatile memory (i.e., relative to a last-written silo), (4) updating the next silo to reflect the transactions that apply to the next silo, and (5) writing the next silo into the non-volatile memory.
- Another embodiment sets forth a method for restoring context information when an inadvertent shutdown of a computing device occurs. According to some embodiments, the method can include the steps of (1) identifying the context information within a non-volatile memory of the computing device (e.g., an SSD of the computing device), where the context information is separated into a plurality of silos, and (2) accessing a log stored within the non-volatile memory, where the log reflects a collection of transactions issued by the computing device. Additionally, the method includes performing the following steps for each silo of the plurality of silos: (3) loading the silo into the volatile memory, and (4) in response to identifying, within the log, that at least one transaction (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory: updating the silo to reflect the at least one transaction.
- Other embodiments include a non-transitory computer readable storage medium configured to store instructions that, when executed by a processor included in a computing device, cause the computing device to carry out the various steps of any of the foregoing methods. Further embodiments include a computing device that is configured to carry out the various steps of any of the foregoing methods.
- Other aspects and advantages of the embodiments described herein will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the described embodiments.
- The included drawings are for illustrative purposes and serve only to provide examples of possible structures and arrangements for the disclosed inventive apparatuses and methods for providing wireless computing devices. These drawings in no way limit any changes in form and detail that may be made to the embodiments by one skilled in the art without departing from the spirit and scope of the embodiments. The embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements.
- FIG. 1 illustrates a block diagram of different components of a system that is configured to implement the various techniques described herein, according to some embodiments.
- FIGS. 2A-2C illustrate conceptual diagrams of example scenarios in which different silos can be transmitted, in a unified manner, between a volatile memory and a non-volatile memory by way of direct memory access (DMA), according to some embodiments.
- FIG. 3 sets forth a conceptual diagram of the manner in which data stored in non-volatile memory can be accessed through logical base addresses (LBAs) using the indirection techniques described herein, according to some embodiments.
- FIG. 4 illustrates a conceptual diagram of an example scenario that sets forth the manner in which first and second tier entries associated with a given silo can be used to reference data stored within a non-volatile memory, according to some embodiments.
- FIGS. 5A-5F provide conceptual diagrams of an example scenario in which the various techniques described herein—i.e., the silo-based partitions and indirection paradigms—can be utilized to improve the overall operational efficiency of a computing device.
- FIG. 6 illustrates a method for managing context information for data stored within a non-volatile memory of a computing device, according to some embodiments.
- FIG. 7 illustrates a method for restoring context information when an inadvertent shutdown of a computing device occurs, according to some embodiments.
- FIG. 8 illustrates a detailed view of a computing device that can be used to implement the various components described herein, according to some embodiments.
- Representative applications of apparatuses and methods according to the presently described embodiments are provided in this section. These examples are being provided solely to add context and aid in the understanding of the described embodiments. It will thus be apparent to one skilled in the art that the presently described embodiments can be practiced without some or all of these specific details. In other instances, well known process steps have not been described in detail in order to avoid unnecessarily obscuring the presently described embodiments. Other applications are possible, such that the following examples should not be taken as limiting.
- The embodiments disclosed herein set forth techniques for managing context information for data stored within a non-volatile memory (e.g., a solid-state storage device (SSD)) managed by a computing device. In particular, the techniques involve partitioning the context information into a collection of “silos” in order to increase the granularity by which the context information is transmitted between a volatile memory (e.g., a random-access memory (RAM)) of the computing device and the non-volatile memory of the computing device. For example, direct memory access (DMA) can be utilized to sequentially write different ones of the silos from the volatile memory into the non-volatile memory, which substantially reduces latency in comparison to writing the context information in its entirety. Moreover, when an inadvertent shutdown of the computing device occurs—and the context information is not up-to-date within the non-volatile memory—the silos of which the context information is comprised can be sequentially accessed/restored (e.g., based on logged transactional information), which further reduces latency in comparison to restoring the context information in its entirety.
-
FIG. 1 illustrates a block diagram 100 of a computing device 102—e.g., a smart phone, a tablet, a laptop, a desktop, a server, etc.—that is configured to implement the various techniques described herein. As shown in FIG. 1, the computing device 102 can include a processor 104 that, in conjunction with a volatile memory 106 (e.g., a dynamic random access memory (DRAM)) and a storage device 114 (e.g., a solid-state drive (SSD)), enables different software entities to execute on the computing device 102. For example, the processor 104 can be configured to load, from the storage device 114 into the volatile memory 106, various components for an operating system (OS) 108. In turn, the OS 108 can enable the computing device 102 to provide a variety of useful functions, e.g., loading/executing various applications 110 (e.g., user applications). It should be understood that the various hardware components of the computing device 102 illustrated in FIG. 1 are presented at a high level in the interest of simplification, and that a more detailed breakdown is provided below in conjunction with FIG. 8. - According to some embodiments, and as shown in
FIG. 1, the storage device 114 can include a controller 116 that is configured to orchestrate the overall operation of the storage device 114. For example, the controller 116 can be configured to process input/output (I/O) requests—referred to herein as “transactions”—issued by the OS 108/applications 110 to the storage device 114. According to some embodiments, the controller 116 can include a parity engine for establishing various parity information for the data stored by the storage device 114 to improve overall recovery scenarios. It is noted that the controller 116 can include additional entities that enable the implementation of the various techniques described herein without departing from the scope of this disclosure. It is further noted that these entities can be combined or split into additional entities without departing from the scope of this disclosure. It is additionally noted that the various entities described herein can be implemented using software-based or hardware-based approaches without departing from the scope of this disclosure. - In any case, as shown in
FIG. 1, the storage device 114 can include a non-volatile memory 118 (e.g., flash memory) that is composed of a collection of dies. According to some embodiments, different “bands” can be established within the non-volatile memory 118, where each band spans the collection of dies. It is noted that one or more of the dies can be reserved by the storage device 114—e.g., for overprovisioning-based techniques—without departing from the scope of this disclosure, such that a given band can span a subset of the dies that are available within the non-volatile memory 118. In this regard, the overall “width” of a band can be defined by the number of dies that the band spans. Continuing with this notion, the overall “height” of the band can be defined by a number of “stripes” into which the band is separated. Additionally, and according to some embodiments, each stripe within the band can be separated into a collection of pages, where each page is disposed on a different die of the non-volatile memory 118. For example, when a given band spans five different dies—and is composed of five different stripes—a total of twenty-five (25) pages are included in the band, where each column of pages is disposed on the same die. In this manner, the data within a given band can be separated across the non-volatile memory 118 in a manner that enables redundancy-based protection to be established without significantly impacting the overall performance of the storage device 114. - As shown in
FIG. 1, the aforementioned bands managed by the storage device 114 can include a log band 120, an indirection band 122, and a data band 124. According to some embodiments, transactional information associated with the indirection band 122/data band 124—e.g., details associated with I/O requests processed by the controller 116—can be written into the log band 120. As described in greater detail herein, this transactional information can be utilized to restore the content of the indirection band 122 when an inadvertent shutdown of the computing device 102 renders at least a portion of the content out-of-date. - According to some embodiments, the content stored in the
indirection band 122 can includecontext information 112 that serves as a mapping table for data that is stored within thedata band 124. As shown inFIG. 1 , thecontext information 112 can be transmitted between thevolatile memory 106 and thenon-volatile memory 118 using direct memory access (DMA) 150. In particular, theDMA 150 can enable theprocessor 104 to play little or no role in the data transmissions between thevolatile memory 106 and thenon-volatile memory 118, which can improve efficiency. It is noted, however, that any technique can be utilized to transmit data between thevolatile memory 106 and thenon-volatile memory 118 without departing from the scope of this disclosure. In any case, as shown inFIG. 1 , thecontext information 112 can be segmented into a collection ofsilos 130, which, as described in greater detail herein, increases the granularity by which thecontext information 112 can be transmitted between thevolatile memory 106 and thenon-volatile memory 118. According to some embodiments, and as shown inFIG. 1 , eachsilo 130 can include metadata 132 and a context information subset 134. According to some embodiments, the metadata 132 for a givensilo 130 can include descriptive information about thesilo 130, e.g., an index of the silo 130 (relative to the other silos 130), a size of thesilo 130, and so on. Additionally, the context information subset 134 for a givensilo 130 can include a respective portion of thecontext information 112 to which thesilo 130 corresponds. - According to some embodiments, and as described in greater detail herein, the
context information 112 can be organized into a hierarchy that includes first and second depth levels. In particular, the first depth level can correspond to a collection of first-tier entries, while the second depth level can correspond to a collection of second-tier entries. According to some embodiments, the first and second-tier entries can store data in accordance with different encoding formats that coincide with the manner in which thenon-volatile memory 118 is partitioned into different sectors. For example, when each sector represents a 4 KB sector of memory, each first-tier entry can correspond to a contiguous collection of two hundred fifty-six (256) sectors. In this regard, the value of a given first-tier entry can indicate whether the first-tier entry (1) directly refers to a physical location (e.g., an address of a starting sector) within thenon-volatile memory 118, or (2) directly refers (e.g., via a pointer) to one or more second-tier entries. According to some embodiments, when condition (1) is met, it is implied that all (e.g., the two-hundred fifty-six (256)) sectors associated with the first-tier entry are contiguously written, which can provide a compression ratio of 1/256. More specifically, this compression ratio can be achieved because the first-tier entry stores a pointer to a first sector of the two hundred fifty-six (256) sectors associated with the first-tier entry, where no second-tier entries are required. Alternatively, when condition (2) is met, information included in the first-tier entry indicates (i) one or more second-tier entries that are associated with the first-tier entry, as well as (ii) how the information in the one or more second-tier entries should be interpreted. Using this approach, each second-tier entry can refer to one or more sectors, thereby enabling data to be disparately stored across the sectors of thenon-volatile memory 118. 
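The two conditions above can be condensed into a short lookup sketch. This is a simplified illustration under assumed encodings (the "direct"/"indirect" tags are not the patent's format): a direct first-tier entry stores only the starting sector of a fully contiguous run of 256 sectors, while an indirect entry points at per-sector second-tier entries.

```python
SECTORS_PER_T1 = 256  # each first-tier entry covers 256 contiguous 4 KB sectors

# Hypothetical sketch of the two-tier lookup. A first-tier entry is modeled
# as a (kind, payload) tuple: "direct" carries the starting physical sector
# of a contiguous run (the 1/256 compression case, condition (1)), while
# "indirect" carries a per-offset mapping standing in for second-tier
# entries (condition (2)).

def resolve(lba: int, tier1: list) -> int:
    """Return the physical sector for `lba` via the two-tier hierarchy."""
    t1_index, offset = divmod(lba, SECTORS_PER_T1)
    kind, payload = tier1[t1_index]
    if kind == "direct":
        # All 256 sectors are contiguous: payload is the starting sector.
        return payload + offset
    # Indirect: payload maps each offset to its (possibly scattered) sector.
    return payload[offset]

# Example: entry 0 is a contiguous run starting at physical sector 1000;
# entry 1 scatters its sectors via second-tier entries.
tier1 = [
    ("direct", 1000),
    ("indirect", {0: 5000, 1: 7777, 2: 4242}),
]
```

Note how the direct case needs no second-tier storage at all, which is where the 1/256 compression ratio comes from.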
A more detailed description of the first-tier entries and second-tier entries is provided below in conjunction with FIGS. 3-4. - Accordingly,
FIG. 1 provides a high-level overview of the manner in which the computing device 102 can be configured to implement the techniques described herein. A more detailed explanation of these techniques will now be provided below in conjunction with FIGS. 2A-2C, 3-4, 5A-5F, and 6-8. -
FIGS. 2A-2C illustrate conceptual diagrams of example scenarios in whichdifferent silos 130 can be transmitted, in a unified manner, between thevolatile memory 106 and thenon-volatile memory 118 by way ofdirect memory access 150, according to some embodiments. In particular,FIGS. 2A-2C illustrate that the context information subset 134 of a givensilo 130—i.e., the first and second-tier entries that correspond to thesilo 130—can be separately-stored from one another, yet remain capable of being transmitted between thevolatile memory 106 and thenon-volatile memory 118 in a unified manner. In other words, the techniques set forth herein enable the context information subset 134 of thesilo 130 to be transmitted between thevolatile memory 106 and thenon-volatile memory 118 in the form of a snapshot-like image despite representing only a portion of thecontext information 112. - According to some embodiments, and as shown in
FIG. 2A , aTier 1space 202 can be configured to store the different first-tier entries that correspond to thesilos 130. In particular, theTier 1space 202 can be configured to represent a span of logical base addresses (LBAs), where the first-tier entries of eachsilo 130 correspond to a respective portion of the LBAs. For example, when thecontext information 112 is separated into thirty-two (32)different silos 130, eachsilo 130 can correspond to a respective 1/32 of the LBAs. In this regard, theTier 1space 202 can be fixed in size, whereas aTier 2space 204 can be dynamically expanded/contracted to accommodate second-tier entries as they are established/removed over time. Notably, it is important to ensure that the context information subset 134 for a givensilo 130—which includes first and second-tier entries—can continue to be transmitted in a unified operation even as theTier 2space 204 fluctuates over time. To achieve this result, the embodiments can involve expanding theTier 2space 204 for allsilos 130 even when only asingle silo 130 is seeking to store additional second-tier entries (e.g., that cannot fit within existingTier 2 space 204). For example, as indicated inFIG. 2A , adding a new column into theTier 2space 204 effectively expands theTier 2space 204 for all of thesilos 130. Similarly, when a particular column in theTier 2space 204 is no longer needed—e.g., when all second-tier entries for all of thesilos 130 are eliminated (e.g., through data deletions, defragmentation operations, etc.)—the column can be removed from theTier 2space 204. - In any case, in
FIG. 2A , a first example can involve transmitting the silo 130-0—specifically, the context information subset 134-0 of the silo 130-0—between thevolatile memory 106 and thenon-volatile memory 118 usingdirect memory access 150. In particular, the first example can involve transmitting, in a unified manner, (1) the first-tier entries associated with the silo 130-0—illustrated inFIG. 2A asSilo_0 Tier 1 entries 208-0—and (2) the second-tier entries associated with the silo 130-0—illustrated inFIG. 2A asSilo_0 Tier 2 entries 210-0. According to some embodiments, the overall layout of the context information subset 134 (i.e.,Silo_0 Tier 1 entries 208-0/Silo_0 Tier 2 entries 210-0) can be maintained when transmitted between thevolatile memory 106 and thenon-volatile memory 118 such that little operational overhead is required. For example, when written from thevolatile memory 106 into thenon-volatile memory 118, the context information subset 134-0 (of the silo 130-0) can be written into a corresponding area of thecontext information 112 in theindirection band 122 without requiring a reorganization/reformatting of the context information subset 134-0. Conversely, when read from thenon-volatile memory 118 into thevolatile memory 106, the context information subset 134-0 can be written into an available area of the volatile memory 106 (e.g., allocated for the context information 112) without requiring a reorganization/reformatting of the context information subset 134-0. In this manner, thesilos 130 can be transmitted between thevolatile memory 106 and thenon-volatile memory 118 in a unified/snapshot-like manner, thereby substantially enhancing efficiency. 
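The expand-for-all-silos behavior of the Tier 2 space described above can be sketched as follows. The class and method names are assumptions for illustration only; the point is that adding a column grows every silo's Tier 2 region in lockstep, so each silo remains transmittable as one unified snapshot.

```python
# Hypothetical sketch of the Tier 2 space behavior: because every silo must
# stay transmittable as a single snapshot-like image, adding a column to
# the Tier 2 space expands it for *all* silos, even when only one silo
# needed the extra room.

class Tier2Space:
    def __init__(self, num_silos: int):
        self.num_silos = num_silos
        self.columns = 0
        # One list of second-tier slots per silo; all stay the same length.
        self.slots = [[] for _ in range(num_silos)]

    def expand(self):
        """Add one column for every silo, not just the one that needs it."""
        self.columns += 1
        for silo_slots in self.slots:
            silo_slots.append(None)  # freshly allocated, unused slot

    def store(self, silo: int, entry):
        """Place a second-tier entry for `silo`, expanding all silos if full."""
        silo_slots = self.slots[silo]
        if None not in silo_slots:
            self.expand()
        silo_slots[silo_slots.index(None)] = entry
```

A column could likewise be removed once every silo's entries in it are eliminated, mirroring the contraction case described above.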
Moreover, the direct memory access 150 techniques described herein can enable both the volatile memory 106 and the non-volatile memory 118 to directly transmit the context information subsets 134 of the silos 130 between one another without requiring intensive involvement of the processor 104, thereby further enhancing operational efficiency. - Additionally,
FIGS. 2B-2C provide further examples ofsilo 130 transfers between thevolatile memory 106 and thenon-volatile memory 118. In particular,FIGS. 2B-2C further-convey the notion that the context information subsets 134 ofdifferent silos 130 can be separately stored from one another, yet remain capable of being transmitted between thevolatile memory 106 and thenon-volatile memory 118 in a unified manner. For example,FIG. 2B illustrates an additional example that involves transmitting the silo 130-1—specifically, the context information subset 134-1 of the silo 130-1—between thevolatile memory 106 and thenon-volatile memory 118 usingdirect memory access 150. Further,FIG. 2C illustrates another example that involves transmitting the silo 130-J—specifically, the context information subset 134-J of the silo 130-J—between thevolatile memory 106 and thenon-volatile memory 118 usingdirect memory access 150. - Accordingly,
FIGS. 2A-2C illustrate conceptual diagrams of example scenarios in which different silos 130 can be transmitted, in a unified manner, between the volatile memory 106 and the non-volatile memory 118 by way of direct memory access 150, according to some embodiments. It is noted that direct memory access 150 is not a requirement of the embodiments set forth herein, and that any approach can be utilized when transferring the silos 130 between the volatile memory 106 and the non-volatile memory 118. -
FIG. 3 sets forth a conceptual diagram 300 of the manner in which data stored in non-volatile memory 118 (e.g., in the data band 124) can be accessed through logical base addresses (LBAs) using the indirection techniques described herein, according to some embodiments. In particular, and as shown inFIG. 3 , an example LBA encoding scheme 302 can include aTier 1index 304, asilo index 306, and aTier 1 offset 308. It is noted that the number of bits allocated to each of theTier 1index 304, thesilo index 306, and theTier 1 offset 308 are not drawn to scale inFIG. 3 , and that these values can be assigned any number of bits without departing from the scope of this disclosure. In any case, as shown inFIG. 3 , theTier 1index 304/silo index 306 can collectively refer to a particular group of first-tier entries (e.g.,Silo_0 Tier 1 entries 208-0) associated with aparticular silo 130, and theTier 1 offset 308 can refer to a particular first-tier entry within the particular group of first-tier entries (e.g.,Silo_0 Tier 1 entry 208-0-0). As previously described herein, and as illustrated inFIG. 3 , each first-tier entry can refer to a physical location (e.g., via an address of a starting sector) within thenon-volatile memory 118. Alternatively, and as illustrated inFIG. 3 , each first-tier entry can refer to at least one second-tier entry (e.g., theSilo_0 Tier 2 entry 210-0-0-0 within theSilo_0 Tier 2 entries 210-0-0), where each second-tier entry can refer to one or more sectors of thenon-volatile memory 118. - It is noted that a more detailed breakdown of various indirection techniques that can be utilized by the embodiments set forth herein can be found in U.S. patent application Ser. No. 14/710,495, filed May 12, 2015, entitled “METHODS AND SYSTEM FOR MAINTAINING AN INDIRECTION SYSTEM FOR A MASS STORAGE DEVICE,” the content of which is incorporated by reference herein in its entirety.
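The bit-field layout of the LBA encoding scheme 302 can be illustrated with a small decode helper. The widths chosen below (five silo bits, eight offset bits) and the placement of the Tier 1 index in the most-significant bits are pure assumptions; as noted above, the patent assigns no particular number of bits to any field.

```python
# Hypothetical decode of the LBA encoding scheme 302: a Tier 1 index,
# a silo index, and a Tier 1 offset packed into one integer. Bit widths
# are illustrative assumptions only.

SILO_BITS = 5       # e.g., up to thirty-two silos
T1_OFFSET_BITS = 8  # e.g., 256 first-tier entries per silo group

def decode_lba(lba: int):
    """Split an LBA into (tier1_index, silo_index, tier1_offset) fields."""
    tier1_offset = lba & ((1 << T1_OFFSET_BITS) - 1)
    silo_index = (lba >> T1_OFFSET_BITS) & ((1 << SILO_BITS) - 1)
    tier1_index = lba >> (T1_OFFSET_BITS + SILO_BITS)
    return tier1_index, silo_index, tier1_offset
```

The Tier 1 index and silo index together select a group of first-tier entries, and the offset selects one entry within that group, matching the description of FIG. 3.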
- To provide additional understanding of the indirection techniques described herein,
FIG. 4 illustrates a conceptual diagram 400 of an example scenario that sets forth the manner in which first and second tier entries associated with a givensilo 130—in particular, the silo 130-0—can be used to reference data stored withindifferent sectors 402 of thenon-volatile memory 118, according to some embodiments. In particular, and as shown inFIG. 4 ,several Silo_0 Tier 1 entries 208-0 associated with the silo 130-0 are depicted, where at least one of theSilo_0 Tier 1 entries 208-0—in particular, theSilo_0 Tier 1 entry 208-0-5—does not reference anySilo_0 Tier 2 entries 210-0-0. Instead, theSilo_0 Tier 1 entry 208-0-5 directly-references aparticular sector 402 of thenon-volatile memory 118. According to this example, theSilo_0 Tier 1 entry 208-0-5 can represent a pass-through first-tier entry that corresponds to a contiguous span of sectors 402 (as previously described herein). As also illustrated inFIG. 4 , at least one of theSilo_0 Tier 1 entries 208-0—in particular, theSilo_0 Tier 1 entry 208-0-1—references at least one of theSilo_0 Tier 2 entries 210-0-0—in particular, theSilo_0 Tier 2 entry 210-0-0-0. In this regard, theSilo_0 Tier 2 entry 210-0-0-0—along with anyother Silo_0 Tier 2 entries 210-0-0 that correspond to theSilo_0 Tier 1 entry 208-0-1—establish an indirect reference between theSilo_0 Tier 1 entry 208-0-1 and at least onesector 402 of thenon-volatile memory 118. Accordingly, indirection techniques described herein enable each LBA to refer to content stored in thenon-volatile memory 118 through only one or two levels of hierarchy, thereby providing a highly-efficient architecture on which the various techniques described herein can be implemented. - At this juncture,
FIGS. 5A-5F provide conceptual diagrams of an example scenario in which the various techniques described herein—i.e., the silo-based partitions and indirection paradigms—can be utilized to improve the overall operational efficiency of thecomputing device 102. In particular, the example scenario illustrated inFIGS. 5A-5B involves efficiently writing four (4) of six (6)total silos 130 from thevolatile memory 106 into thenon-volatile memory 118 as transactions are received and carried out by thecontroller 116. Moreover, the example scenario illustrated inFIGS. 5C-5F involves the controller 116 (1) encountering an inadvertent shutdown that compromises the overall coherency of the sixsilos 130 in thenon-volatile memory 118, and (2) efficiently carrying out a procedure to restore the coherency of the sixsilos 130. It is noted that the example scenario set forth inFIGS. 5A-5F involves sixsilos 130 in the interest of simplifying this disclosure, and that any number ofsilos 130 can be implemented without departing from the scope of this disclosure. - To provide a detailed understanding of the circular manner in which the
silos 130 are written from thevolatile memory 106 into thenon-volatile memory 118, a first step inFIG. 5A occurs afterprevious transactions 501 are processed and cause the silo 130-5 to be the last-writtensilo 130 from thevolatile memory 106 to thenon-volatile memory 118. In this regard, the silo 130-4 is the last-writtensilo 130 relative to the silo 130-5, the silo 130-3 is the last-writtensilo 130 relative to silo 130-4, and so on. In this manner, a round-robin approach is utilized such that a successive silo 130 (relative to a previous silo 130) is written from thevolatile memory 106 into thenon-volatile memory 118 in accordance with different conditions being met, e.g., a threshold number of transactions being received, an amount of time lapsing, a particular functionality being executed (e.g., garbage collection, defragmentation, etc.), and the like. - Accordingly, and as shown in
FIG. 5A , the first step involves thecontroller 116 receiving and processing a number oftransactions 502. As previously noted herein, each transaction can represent one or more I/O requests that are directed toward thestorage device 114. For example, atransaction 502 can involve writing, modifying, or removing data from thedata band 124 within thenon-volatile memory 118. It is noted that the foregoing example is not meant to be limiting, and that the transactions described herein encompass any form of I/O operation(s) directed toward thenon-volatile memory 118 of thestorage device 114. As shown inFIG. 5A , transactional information associated with each of thetransactions 502 can be recorded within thelog band 120 within thenon-volatile memory 118. According to some embodiments, the transactional information can include pointers to thecontext information 112 stored within theindirection band 122. In particular, these pointers can enable an efficient restoration of thecontext information 112 to be carried out in response to inadvertent shutdowns of thecomputing device 102, the details of which are described below in conjunction withFIGS. 5C-5F . According to some embodiments, different log files can be managed within thelog band 120, and can be used to store transactional information associated with the transactions as they are processed. Moreover, redundant copies of log file portions can be stored within thelog band 120, thereby improving the efficacy of recovery procedures even when severe failure events take place. For example, for each log file portion stored on a first die of thenon-volatile memory 118, a copy of the log file portion can be stored on a second (i.e., different) die of thenon-volatile memory 118. In this manner, each log file portion can be recovered even when the first or the second die fails within thenon-volatile memory 118. - As shown in
FIG. 5A , thecontroller 116 can be configured to carry out a context save 504 in response to identifying that a threshold number of transactions have been processed. It is noted, however, that thecontroller 116 can be configured to carry out context saves in response to other conditions being satisfied. For example, thecontroller 116 can be configured to periodically carry out context saves regardless of the number of transactions that have been processed. In another example, thecontroller 116 can be configured to carry out context saves in response to different types of events being completed, e.g., garbage collection events, defragmentation events, and so on. It is noted that the foregoing examples are not meant to represent an exhaustive list, and that any number of conditions, associated with any aspects of the operation of thecomputing device 102, can cause thecontroller 116 to carry out context saves described herein. - In any case, as shown in
FIG. 5A , the context save 504 can involve (1) updating the silo 130-0 to reflect thetransactions 502, and (2) writing the silo 130-0 from thevolatile memory 106 into thenon-volatile memory 118. In particular, and as previously described above in conjunction withFIGS. 2A-2C , writing the silo 130-0 can involve transmitting all or a portion of the information associated with the silo 130-0, e.g., the metadata 132-0, the context information subset 134-0, etc., into a corresponding area within thecontext information 112 stored within theindirection band 122. According to some embodiments, the silo 130-0 can be placed into a locked state prior to the silo 130-0 being updated/written from thevolatile memory 106 into thenon-volatile memory 118 to ensure that the state of the silo 130-0 is not inappropriately modified. Additionally, the context save 504 can involve writing information into thelog band 120 to indicate whether the silo 130-0 was successfully written into thenon-volatile memory 118. For example, when the silo 130-0 is successfully written from thevolatile memory 106 to thenon-volatile memory 118, thecontroller 116 can generate a key that corresponds to the silo 130-0, and place the key into thelog band 120. In this manner, thelog band 120 can be parsed at a later time to identify the last-writtensilo 130 among thesilos 130. As described below in greater detail in conjunction withFIGS. 5C-5F , the indication of the last-writtensilo 130 enables the recovery techniques described herein to be implemented in an efficient manner. - Additionally, the second step illustrated in
FIG. 5A —as well as the third and fourth steps illustrated inFIG. 5B —provide additional understanding for thesilo 130 write techniques set forth herein. For example, the second step inFIG. 5A involves (1) writingtransactions 506 into thelog band 120, and (2) in accordance with a context save 508, updating the silo 130-1/writing the silo 130-1 from thevolatile memory 106 into thenon-volatile memory 118. Additionally, the third step ofFIG. 5B involves (1) writingtransactions 510 into thelog band 120, and (2) in accordance with a context save 512, updating the silo 130-2/writing the silo 130-2 from thevolatile memory 106 into thenon-volatile memory 118. Further, the fourth step ofFIG. 5B involves (1) writingtransactions 514 into thelog band 120, and (2) in accordance with a context save 516, updating the silo 130-3/writing the silo 130-3 from thevolatile memory 106 into thenon-volatile memory 118. - Accordingly, the various steps illustrated in
FIGS. 5A-5B provide a detailed understanding of the benefits that can be achieved through segmenting thecontext information 112 when writing thecontext information 112 from thevolatile memory 106 into thenon-volatile memory 118. As previously described herein, these benefits can also apply to recovery scenarios in which thecontext information 112 is rendered out-of-date and needs to be restored in accordance with the transaction information stored in thelog band 120. For example, an inadvertent shutdown of thecomputing device 102 can cause a scenario in which (1) at least one transaction that affects aparticular silo 130 has been written into thelog band 120, and (2) thesilo 130 has not been written from thevolatile memory 106 into thenon-volatile memory 118. In this scenario, thesilo 130 stored within thenon-volatile memory 118 is out-of-date, as the state of thesilo 130 does not appropriately reflect the at least one transaction. Accordingly, it is necessary to restore thesilo 130 to an up-to-date state (in accordance with the at least one transaction) to ensure that thestorage device 114—and thecomputing device 102 as a whole—are operating correctly. - Accordingly,
FIG. 5C continues the example scenario illustrated in FIGS. 5A-5B, and involves a fifth step in which an inadvertent shutdown 520 of the computing device 102 occurs (1) after transactions 518 are written into the log band 120, but (2) before the silo 130-4 is written from the volatile memory 106 into the non-volatile memory 118. In turn, a sixth step illustrated in FIG. 5C involves the controller 116 initializing a recovery procedure (e.g., during a boot, reboot, wakeup, etc., of the computing device 102) to restore the context information 112. In particular, and as shown in FIG. 5C, the sixth step involves the controller 116 identifying that the silo 130-3 was the last silo 130 that was written from the volatile memory 106 into the non-volatile memory 118. For example, as previously described above, the controller 116 can reference the log band 120—e.g., the transaction logs, the keys stored therein, etc.—to identify that the silo 130-3 was the last-written silo 130. In turn, to carry out the recovery procedure, the controller 116 can load the silo 130-4 into the volatile memory 106. In particular, the controller 116 loads the silo 130-4 because the silo 130-4 is the most out-of-date silo 130 relative to the other silos 130, with the assumption that the silos 130 are written in a sequential, circular, and repetitive fashion (e.g., as described in FIGS. 5A-5B). In this regard, it can be efficient to restore the silo 130-4 first, as it is likely that the silo 130-4 will require the most updates relative to the other silos 130. - Accordingly, as shown in
FIG. 5C —and after the silo 130-4 is loaded into thevolatile memory 106—thecontroller 116 can identify, e.g., within the transaction information stored in thelog band 120—any transactions that (1) apply to the silo 130-4, and (2) occurred after the silo 130-4 was last-written from thevolatile memory 106 into thenon-volatile memory 118. In turn, if thecontroller 116 identifies any transactions using the foregoing criteria, thecontroller 116 can “replay” the transactions against the silo 130-4—in particular, the context information subset 134-4 of the silo 130-4—in accordance with the transactions. This can involve, for example, updating first/second tier entries included in the context information subset 134-4 so that they reference the appropriate areas of the non-volatile memory 118 (in accordance with the transactions). - According to some embodiments, when the transactions have been effectively replayed, the silo 130-4 is in an up-to-date state, and the silo 130-4 can optionally be written from the
volatile memory 106 into thenon-volatile memory 118. Additionally, the transaction information stored in thelog band 120 can be updated to reflect that the silo 130-4 has been successfully written. In this manner, if another inadvertent shutdown occurs during the recovery procedure, the same updates made to the silo 130-4 during the restoration of the sixth step ofFIG. 5C will not need to be carried out again, thereby increasing efficiency. Alternatively, the silo 130-4 will be written from thevolatile memory 106 into thenon-volatile memory 118 in due course, e.g., when thecomputing device 102 resumes normal operation after the recovery procedure is completed. - In any case, at this juncture, it is noted that the transactions that occurred after the silo 130-3 was written from the
volatile memory 106 into thenon-volatile memory 118 can potentially apply to one or more of the remaining five silos 130-5, 130-0, 130-1, 130-2, and 130-3. Accordingly,FIGS. 5D-5F illustrate steps seven through eleven of the recovery procedure, which involve restoring each of the remaining five silos 130-5, 130-0, 130-1, 130-2, and 130-3. For example, step seven illustrated inFIG. 5D illustrates a recovery procedure for the silo 130-5 that is carried out by thecontroller 116. Additionally, step eight illustrated inFIG. 5D illustrates a recovery procedure for the silo 130-0 that is carried out by thecontroller 116. Additionally, step nine illustrated inFIG. 5E illustrates a recovery procedure for the silo 130-1 that is carried out by thecontroller 116. Additionally, step ten illustrated inFIG. 5E illustrates a recovery procedure for the silo 130-2 that is carried out by thecontroller 116. Additionally, step eleven illustrated inFIG. 5F illustrates a recovery procedure for the silo 130-3 that is carried out by thecontroller 116. In turn, at step twelve illustrated inFIG. 5F , each of the sixsilos 130 have been properly restored, whereupon thecomputing device 102/storage device 114 can enter back into a normal operating mode and processnew transactions 550. - Accordingly,
FIGS. 5A-5F provide conceptual diagrams of an example scenario in which the various techniques described herein—i.e., the silo-based partitions and indirection paradigms—can be utilized to improve the overall operational efficiency of the computing device 102. To provide further context, FIGS. 6-7 illustrate method diagrams that can be carried out to implement the various techniques described herein, which will now be described below in greater detail. -
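The circular write and restore ordering from the FIGS. 5A-5F scenario can be recapped in a few lines. The helper names are assumptions for this sketch: silos are written round-robin, and recovery starts with the successor of the last-written silo (the most out-of-date one) and wraps around.

```python
# Illustrative recap of the circular ordering in FIGS. 5A-5F, using the
# six-silo example scenario. Helper names are hypothetical.

NUM_SILOS = 6

def next_silo(last_written: int) -> int:
    """Silo written next under the round-robin context-save policy."""
    return (last_written + 1) % NUM_SILOS

def restore_order(last_written: int) -> list:
    """Order in which silos are restored after an inadvertent shutdown:
    the most out-of-date silo first, then the rest in circular sequence."""
    return [(last_written + 1 + i) % NUM_SILOS for i in range(NUM_SILOS)]

# In the FIG. 5C scenario, silo 130-3 was last written, so silo 130-4 is
# restored first, followed by 130-5, 130-0, 130-1, 130-2, and 130-3.
```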
FIG. 6 illustrates a method 600 for managing context information for data stored within a non-volatile memory of a computing device, according to some embodiments. As shown in FIG. 6, the method 600 begins at step 602, and involves loading context information into a volatile memory (of the computing device) from the non-volatile memory, where the context information is separated into a plurality of silos (e.g., as described above in conjunction with FIGS. 2A-2C). Step 604 involves writing transactions into a log stored within the non-volatile memory (e.g., as described above in conjunction with FIGS. 5A-5B). Step 606 involves determining whether at least one condition is satisfied (e.g., the conditions described above in conjunction with FIG. 5A). If, at step 606, it is determined that the at least one condition is satisfied, then the method 600 proceeds to step 608. Otherwise, the method 600 proceeds back to step 604, where transactions are received/written into the log (until the at least one condition is satisfied). - Step 608 involves identifying a next silo of the plurality of silos to be written into the non-volatile memory (e.g., as described above in conjunction with
FIGS. 5A-5B). Step 610 involves updating the next silo to reflect the transactions that apply to the next silo (e.g., as described above in conjunction with FIGS. 5A-5B). Step 612 involves writing the next silo into the non-volatile memory (e.g., as described above in conjunction with FIGS. 5A-5B). In turn, the method can return to step 604, such that the silos are updated in a round-robin fashion in accordance with the transactions that are processed. -
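The round-robin persistence loop of method 600 can be sketched as follows. This is an illustrative approximation, not the disclosed implementation: the Transaction and Silo types, the fixed flush_threshold condition (standing in for the at-least-one-condition check of step 606), and the nv_write callback are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    lba: int
    data: bytes

@dataclass
class Silo:
    lba_start: int
    lba_count: int
    mapping: dict = field(default_factory=dict)  # in-volatile-memory state

    def covers(self, lba: int) -> bool:
        return self.lba_start <= lba < self.lba_start + self.lba_count

def process_transactions(silos, incoming, nv_write, flush_threshold=4):
    """Journal transactions; whenever `flush_threshold` new entries have
    accumulated, update the next silo in round-robin order and persist it."""
    log, next_silo, pending = [], 0, 0
    for txn in incoming:
        log.append(txn)                                   # step 604
        pending += 1
        if pending < flush_threshold:                     # step 606
            continue
        pending = 0
        silo = silos[next_silo]                           # step 608
        for queued in log:
            if silo.covers(queued.lba):
                silo.mapping[queued.lba] = queued.data    # step 610
        nv_write(silo)                                    # step 612
        next_silo = (next_silo + 1) % len(silos)          # round-robin advance
    return log, next_silo
```

Note that the log is retained across flushes in this sketch, since at any point only the most recently flushed silo reflects all of its applicable transactions; the remaining silos still depend on the log for recovery.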
FIG. 7 illustrates a method 700 for restoring context information when an inadvertent shutdown of a computing device occurs, according to some embodiments. As shown in FIG. 7, the method 700 begins at step 702, and involves identifying, during a startup procedure (e.g., a boot, a reboot, a wakeup, etc.), context information within a non-volatile memory, where the context information is separated into a plurality of silos (e.g., as described above in conjunction with FIGS. 2A-2C). Step 704 involves accessing a log stored within the non-volatile memory (e.g., as described above in conjunction with FIGS. 5C-5F). Step 706 involves carrying out steps 708-714 for each silo of the plurality of silos. In particular, step 708 involves loading the silo into the volatile memory (e.g., as described above in conjunction with FIGS. 5C-5F). In turn, step 710 involves determining whether at least one transaction in the log (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory (e.g., as described above in conjunction with FIGS. 5C-5F). If, at step 710, it is determined that at least one transaction in the log (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory, then the method 700 proceeds to step 712. Otherwise, the method 700 proceeds back to step 706, which involves processing a next silo (if any) of the plurality of silos, or the method 700 ends. Step 712 involves updating the silo to reflect the at least one transaction (e.g., as described above in conjunction with FIGS. 5C-5F). At step 714, the controller 116 writes the silo into the non-volatile memory (e.g., as described above in conjunction with FIGS. 5C-5F). In turn, the method can proceed back to step 706, which involves processing a next silo (if any) of the plurality of silos, or ending the method 700. - It is noted that this disclosure primarily involves the
controller 116 carrying out the various techniques described herein for the purpose of unified language and simplification. However, it is noted that other entities can be configured to carry out these techniques without departing from this disclosure. For example, other software components (e.g., the OS 108, applications 110, firmware(s), etc.) executing on the computing device 102 can be configured to carry out all or a portion of the techniques described herein without departing from the scope of this disclosure. Moreover, other hardware components included in the computing device 102 can be configured to carry out all or a portion of the techniques described herein without departing from the scope of this disclosure. Further, all or a portion of the techniques described herein can be offloaded to another computing device without departing from the scope of this disclosure. -
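The per-silo recovery replay of method 700 can be sketched as follows. This is again a hedged approximation under assumed names: monotonically increasing sequence numbers stand in for determining whether a transaction occurred after a silo's last write into the non-volatile memory, and nv_write stands in for the write of step 714.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    seq: int          # monotonically increasing sequence number
    lba: int
    data: bytes

@dataclass
class Silo:
    lba_start: int
    lba_count: int
    last_write_seq: int                       # seq at last flush to NVM
    mapping: dict = field(default_factory=dict)

    def covers(self, lba: int) -> bool:
        return self.lba_start <= lba < self.lba_start + self.lba_count

def recover(silos, log, nv_write):
    """Restore each silo after an inadvertent shutdown (steps 706-714)."""
    for silo in silos:                         # steps 706/708: load the silo
        replay = [t for t in log               # step 710: applies to silo and
                  if silo.covers(t.lba)        # is newer than its last write
                  and t.seq > silo.last_write_seq]
        if not replay:
            continue                           # silo is already up to date
        for txn in replay:
            silo.mapping[txn.lba] = txn.data   # step 712: update the silo
        silo.last_write_seq = max(t.seq for t in replay)
        nv_write(silo)                         # step 714: persist the silo
```

Only silos with newer applicable transactions are rewritten, which is what bounds the recovery work to the transactions logged since each silo's last flush.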
FIG. 8 illustrates a detailed view of a computing device 800 that can be used to implement the various components described herein, according to some embodiments. In particular, the detailed view illustrates various components that can be included in the computing device 102 illustrated in FIG. 1. As shown in FIG. 8, the computing device 800 can include a processor 802 that represents a microprocessor or controller for controlling the overall operation of the computing device 800. The computing device 800 can also include a user input device 808 that allows a user of the computing device 800 to interact with the computing device 800. For example, the user input device 808 can take a variety of forms, such as a button, keypad, dial, touch screen, audio input interface, visual/image capture input interface, input in the form of sensor data, etc. Still further, the computing device 800 can include a display 810 (screen display) that can be controlled by the processor 802 to display information to the user. A data bus 816 can facilitate data transfer between at least a storage device 840, the processor 802, and a controller 813. The controller 813 can be used to interface with and control different equipment through an equipment control bus 814. The computing device 800 can also include a network/bus interface 811 that couples to a data link 812. In the case of a wireless connection, the network/bus interface 811 can include a wireless transceiver. - The
computing device 800 also includes a storage device 840, which can comprise a single disk or a plurality of disks (e.g., SSDs), and includes a storage management module that manages one or more partitions within the storage device 840. In some embodiments, the storage device 840 can include flash memory, semiconductor (solid state) memory or the like. The computing device 800 can also include a Random-Access Memory (RAM) 820 and a Read-Only Memory (ROM) 822. The ROM 822 can store programs, utilities or processes to be executed in a non-volatile manner. The RAM 820 can provide volatile data storage, and stores instructions related to the operation of the computing device 102. - The various aspects, embodiments, implementations or features of the described embodiments can be used separately or in any combination. Various aspects of the described embodiments can be implemented by software, hardware or a combination of hardware and software. The described embodiments can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, hard disk drives, solid state drives, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Claims (20)
1. A method for restoring context information when an inadvertent shutdown of a computing device occurs, the method comprising, at the computing device:
identifying context information within a non-volatile memory, wherein the context information is separated into a plurality of silos, and each silo of the plurality of silos consists of a respective and distinct range of contiguous logical base addresses (LBAs) associated with the non-volatile memory;
accessing a transaction log that is isolated from the context information, wherein the transaction log is stored within the non-volatile memory; and
for each silo of the plurality of silos:
loading the silo into a volatile memory, and
in response to identifying, within the transaction log, that at least one transaction (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory:
updating the silo to reflect the at least one transaction.
2. The method of claim 1, further comprising, for each silo of the plurality of silos, and subsequent to updating the silo to reflect the at least one transaction:
writing the silo into the non-volatile memory.
3. The method of claim 1, wherein direct memory access (DMA) is utilized to write the plurality of silos between the volatile memory and the non-volatile memory.
4. The method of claim 1, wherein each silo of the plurality of silos includes:
metadata associated with the silo;
a first memory structure that includes a plurality of first-tier entries; and
a second memory structure that includes a plurality of second-tier entries.
5. The method of claim 4, wherein, for a given silo of the plurality of silos, each first-tier entry of the plurality of first-tier entries references (1) an area of memory within the non-volatile memory, or (2) at least one second-tier entry within the second memory structure.
6. The method of claim 5, wherein, for a given silo of the plurality of silos, each second-tier entry of the plurality of second-tier entries references an area of memory within the non-volatile memory.
7. The method of claim 4, wherein the first memory structure and the second memory structure are not contiguously stored within the volatile memory and/or the non-volatile memory.
8. A non-transitory computer readable storage medium configured to store instructions that, when executed by a processor included in a computing device, cause the computing device to restore context information when an inadvertent shutdown of the computing device occurs, by carrying out steps that include:
identifying context information within a non-volatile memory, wherein the context information is separated into a plurality of silos, and each silo of the plurality of silos consists of a respective and distinct range of contiguous logical base addresses (LBAs) associated with the non-volatile memory;
accessing a transaction log that is isolated from the context information, wherein the transaction log is stored within the non-volatile memory; and
for each silo of the plurality of silos:
loading the silo into a volatile memory, and
in response to identifying, within the transaction log, that at least one transaction (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory:
updating the silo to reflect the at least one transaction.
9. The non-transitory computer readable storage medium of claim 8, wherein the steps further include, for each silo of the plurality of silos, and subsequent to updating the silo to reflect the at least one transaction:
writing the silo into the non-volatile memory.
10. The non-transitory computer readable storage medium of claim 8, wherein direct memory access (DMA) is utilized to write the plurality of silos between the volatile memory and the non-volatile memory.
11. The non-transitory computer readable storage medium of claim 8, wherein each silo of the plurality of silos includes:
metadata associated with the silo;
a first memory structure that includes a plurality of first-tier entries; and
a second memory structure that includes a plurality of second-tier entries.
12. The non-transitory computer readable storage medium of claim 11, wherein, for a given silo of the plurality of silos, each first-tier entry of the plurality of first-tier entries references (1) an area of memory within the non-volatile memory, or (2) at least one second-tier entry within the second memory structure.
13. The non-transitory computer readable storage medium of claim 12, wherein, for a given silo of the plurality of silos, each second-tier entry of the plurality of second-tier entries references an area of memory within the non-volatile memory.
14. The non-transitory computer readable storage medium of claim 11, wherein the first memory structure and the second memory structure are not contiguously stored within the volatile memory and/or the non-volatile memory.
15. A computing device configured to restore context information when an inadvertent shutdown of the computing device occurs, the computing device comprising a processor configured to cause the computing device to carry out steps that include:
identifying context information within a non-volatile memory, wherein the context information is separated into a plurality of silos, and each silo of the plurality of silos consists of a respective and distinct range of contiguous logical base addresses (LBAs) associated with the non-volatile memory;
accessing a transaction log that is isolated from the context information, wherein the transaction log is stored within the non-volatile memory; and
for each silo of the plurality of silos:
loading the silo into a volatile memory, and
in response to identifying, within the transaction log, that at least one transaction (i) applies to the silo, and (ii) occurred after a last write of the silo into the non-volatile memory:
updating the silo to reflect the at least one transaction.
16. The computing device of claim 15, wherein the steps further include, for each silo of the plurality of silos, and subsequent to updating the silo to reflect the at least one transaction:
writing the silo into the non-volatile memory.
17. The computing device of claim 15, wherein direct memory access (DMA) is utilized to write the plurality of silos between the volatile memory and the non-volatile memory.
18. The computing device of claim 15, wherein each silo of the plurality of silos includes:
metadata associated with the silo;
a first memory structure that includes a plurality of first-tier entries; and
a second memory structure that includes a plurality of second-tier entries.
19. The computing device of claim 18, wherein, for a given silo of the plurality of silos, each first-tier entry of the plurality of first-tier entries references (1) an area of memory within the non-volatile memory, or (2) at least one second-tier entry within the second memory structure.
20. The computing device of claim 19, wherein, for a given silo of the plurality of silos, each second-tier entry of the plurality of second-tier entries references an area of memory within the non-volatile memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/150,783 US20230142948A1 (en) | 2017-09-29 | 2023-01-05 | Techniques for managing context information for a storage device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/721,081 US11579789B2 (en) | 2017-09-29 | 2017-09-29 | Techniques for managing context information for a storage device |
US18/150,783 US20230142948A1 (en) | 2017-09-29 | 2023-01-05 | Techniques for managing context information for a storage device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/721,081 Continuation US11579789B2 (en) | 2017-09-29 | 2017-09-29 | Techniques for managing context information for a storage device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230142948A1 (en) | 2023-05-11 |
Family
ID=65896112
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/721,081 Active 2037-12-05 US11579789B2 (en) | 2017-09-29 | 2017-09-29 | Techniques for managing context information for a storage device |
US18/150,783 Pending US20230142948A1 (en) | 2017-09-29 | 2023-01-05 | Techniques for managing context information for a storage device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/721,081 Active 2037-12-05 US11579789B2 (en) | 2017-09-29 | 2017-09-29 | Techniques for managing context information for a storage device |
Country Status (1)
Country | Link |
---|---|
US (2) | US11579789B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10853199B2 (en) * | 2018-09-19 | 2020-12-01 | Apple Inc. | Techniques for managing context information for a storage device while maintaining responsiveness |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090327589A1 (en) * | 2008-06-25 | 2009-12-31 | Stec, Inc. | Table journaling in flash storage devices |
US20100241806A1 (en) * | 2009-03-19 | 2010-09-23 | Fujitsu Limited | Data backup method and information processing apparatus |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8112574B2 (en) * | 2004-02-26 | 2012-02-07 | Super Talent Electronics, Inc. | Swappable sets of partial-mapping tables in a flash-memory system with a command queue for combining flash writes |
US8346719B2 (en) * | 2007-05-17 | 2013-01-01 | Novell, Inc. | Multi-node replication systems, devices and methods |
CN103488580A (en) * | 2012-06-14 | 2014-01-01 | 建兴电子科技股份有限公司 | Method for establishing address mapping table of solid-state memory |
US9037820B2 (en) * | 2012-06-29 | 2015-05-19 | Intel Corporation | Optimized context drop for a solid state drive (SSD) |
US9460177B1 (en) * | 2012-09-25 | 2016-10-04 | Emc Corporation | Managing updating of metadata of file systems |
US8966160B2 (en) | 2012-09-28 | 2015-02-24 | Intel Corporation | Storage device trimming |
US10102146B2 (en) * | 2015-03-26 | 2018-10-16 | SK Hynix Inc. | Memory system and operating method for improving rebuild efficiency |
US9927985B2 (en) * | 2016-02-18 | 2018-03-27 | SK Hynix Inc. | Method of dynamic table journaling |
KR20170131796A (en) * | 2016-05-20 | 2017-11-30 | 에스케이하이닉스 주식회사 | Memory system and operating method of memory system |
US10289544B2 (en) * | 2016-07-19 | 2019-05-14 | Western Digital Technologies, Inc. | Mapping tables for storage devices |
US10528463B2 (en) * | 2016-09-28 | 2020-01-07 | Intel Corporation | Technologies for combining logical-to-physical address table updates in a single write operation |
US10853199B2 (en) * | 2018-09-19 | 2020-12-01 | Apple Inc. | Techniques for managing context information for a storage device while maintaining responsiveness |
- 2017-09-29: US15/721,081 filed (US11579789B2, Active)
- 2023-01-05: US18/150,783 filed (US20230142948A1, Pending)
Also Published As
Publication number | Publication date |
---|---|
US11579789B2 (en) | 2023-02-14 |
US20190102101A1 (en) | 2019-04-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |