WO2010066098A1 - Method and apparatus for constructing a high-speed solid-state storage disk using larger-capacity DRAM to participate in flash media management - Google Patents

Method and apparatus for constructing a high-speed solid-state storage disk using larger-capacity DRAM to participate in flash media management Download PDF

Info

Publication number
WO2010066098A1
WO2010066098A1 (PCT/CN2009/001379)
Authority
WO
WIPO (PCT)
Prior art keywords
dram
management
module
cache
area
Prior art date
Application number
PCT/CN2009/001379
Other languages
English (en)
French (fr)
Inventor
王树峰
Original Assignee
深圳市晶凯电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市晶凯电子技术有限公司
Publication of WO2010066098A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 Improving I/O performance
    • G06F3/0613 Improving I/O performance in relation to throughput
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/20 Employing a main memory using a specific memory technology
    • G06F2212/202 Non-volatile memory
    • G06F2212/2022 Flash memory

Definitions

  • This invention relates to erasable programmable read-only memories, and more particularly to accessing, addressing or allocation within a memory system or architecture; in particular, it relates to a method and apparatus for constructing a solid-state storage hard disk by combining dynamic random access memory (DRAM) with Flash management.
  • HHD: hard disk drives; used here to refer to traditional mechanical hard disks, hereinafter HHD
  • I/O: data input/output
  • This whole-block write mode and mechanism dictate a limit on the number of writes. In particular, the MLC developed to increase capacity raises integration density and cuts the cost per unit capacity well below SLC, but the write-programming time is correspondingly longer and the number of effective writes also drops sharply.
  • As the wafer (Wafer) process keeps shrinking (from 70nm to 56nm, then 50nm, 43nm, even 34nm), capacity rises while bad-point regions grow and the number of effective write-programming cycles falls. All of this raises the question of whether new processes and new low-cost models can feasibly be applied to SSD manufacturing.
  • Dynamic random access memory (DRAM) is an actively developing technology, driven by the constant demand for more speed and capacity in providing memory for the CPU.
  • Pure-DRAM SSDs in several interface forms (such as PCI-E, SATA-II and PATA) have also appeared (as shown in FIG. 3).
  • Their capacity, constrained by price, is mostly in the 4GB, 8GB or 16GB range.
  • Because this form must be continuously powered to keep its internal stored data from being lost, it is used only in occasions with especially high speed requirements; to ensure data is not lost, the data must be written back at the software level to a backup disk when powering down or shutting down.
  • The technical problem to be solved by the present invention, in view of the above deficiencies in the prior art, is to provide a method for constructing a solid-state storage hard disk by combining DRAM with Flash management that both exploits the high-speed, balanced I/O capability of DRAM and effectively puts Flash's large-capacity storage to work, combining the two organically and freeing the SSD from the dependence on write-cycle counts of a Flash-only design and from the dependence on power-loss protection of a DRAM-only design.
  • In the initialization stage after production of the solid-state storage disk SSD, the DRAM module is tested offline so as to construct a defect point area table;
  • each memory logical address of the DRAM module is mapped, after combination with the defect point area table, onto a good physical address of the DRAM module;
  • a hardware-implemented ECC error-correction scheme performs online monitoring and scanning, registering the addresses of unstable areas in the DRAM module into the defect point area table in real time so that they participate in the new mapping management.
  • The partitioning in step E's "dividing the storage space by partition and by level" divides the cache region into a memory area of the super cache, a write area of the super cache, and a traditional cache area of the super cache.
  • The super-cache operating-system memory area buffers the memory cache that the host operating system opens on this solid-state storage hard disk for storing page files (Page files); the super-cache write area is used to temporarily store data to be written to this solid-state storage hard disk.
  • The leveling in "dividing the storage space by partition and by level" divides the super-cache operating-system memory area into a level-1 direct area and a level-2 compression area; the traditional cache area of the super cache is divided into a level-1 group-associative area and a level-2 fully-associative area.
  • Step E's "efficient algorithms of multiple adaptively adjusted caching strategies internally manage the region and each partition" means: the super-cache operating-system memory area is managed with a level-1 direct / level-2 compression scheme and algorithm; the traditional cache area of the super cache with a level-1 group-associative / level-2 fully-associative scheme and algorithm; and the write area of the super cache with data-classification management.
  • The storage-space division described in step E is dynamically adjusted according to empirical values from application statistics; that is, the cache space division originally made according to default values is adjusted to a division based on those empirical values.
  • When grade-A DRAM without bad-point regions is used, steps F, G and H need not be performed.
  • By building an ultra-large multi-level cache system from larger-capacity DRAM and Flash and architecting the data I/O and switching between servers, network storage and disk arrays on top of this high-speed cache, an inexpensive redundant disk array RAID-type massive high-speed storage system can be built, enhancing RAID management capability and reducing cost.
  • The present invention further provides a high-speed solid-state storage disk device built with larger-capacity DRAM participating in flash media management, used as the storage device of a computer or server, comprising a flash media module and an interface circuit module and, in particular, also a larger-capacity dynamic random access memory DRAM module, a hard disk controller in which the DRAM participates in managing the flash media, and the dual backup power management module required by the DRAM module.
  • The hard disk controller in which the DRAM participates in managing the flash media is connected to the DRAM module and the Flash module respectively through address/data buses; it is connected to the interface circuit module through the composite bus; the dual backup power management module is electrically connected to the hard disk controller in which the DRAM participates in managing the flash media.
  • The hard disk controller in which the DRAM participates in managing the flash media comprises a CPU-program memory, a DRAM manager with super-cache policy management and DMA channel, m flash media channel controllers, and n DRAM management blocks with ECC check channel chip selects;
  • the CPU-program memory connects to the DRAM manager / super-cache policy management / DMA channel over the composite bus and to the flash media channel controllers over the control bus; the flash media channel controllers connect to the flash media module over the data bus;
  • the DRAM manager / super-cache policy management / DMA channel connects over the address/data bus to the DRAM management blocks and ECC check channel chip selects, which in turn connect over the address/data bus to the DRAM module.
  • The working mode of the dual backup power management module combines capacitive energy storage and battery power supply. When the solid-state storage disk device's host is in its normal working state, the dual backup power management module is in the charging state and the full-charge protection state; when the computer is shut down or loses power, the module powers the solid-state storage disk device, and a signal line triggers the hard disk controller in which the DRAM participates in managing the flash media to finish writing the data with the write-back flag set in the DRAM module's super cache area back to the flash media module.
  • The two backup power sources are not divided into primary and secondary; they supply power according to their floating voltages at the time of use. If one of the two fails, the other can independently meet the power needed for the super cache in the solid-state storage disk device to complete write-back to the maximum extent, plus the power needed for the backup-power alarm.
  • The circuit design of the dual backup power management includes the combination of a gold capacitor bank and a lithium battery to improve safety and reliability; the battery in the dual backup power management module's circuit is replaceable.
  • The gold capacitors used in the dual backup power management module are super-large capacitors.
  • The hard disk interfaces usable by the interface circuit module include SATAII, SATAIII, e-SATA, PATA, PCI, PCI-E, USB 2.0 and USB 3.0.
  • The invention has the advantages of minimizing Flash write-back while achieving speed-up through a high I/O hit rate; read/write speed is greatly improved because it does not depend on the write speed of Flash.
  • The system response speed of a solid-state storage hard disk SSD built according to the present invention is greatly improved, and responsive region-level policy management can be applied according to different data I/O requests, so that one massive high-speed storage system can accommodate management strategies for multiple kinds of data I/O requests at once.
  • FIG. 1 is a structural block diagram of the solid-state storage hard disk built by combining dynamic random access memory DRAM with Flash management according to the present invention;
  • FIG. 2 is a structural block diagram of a prior-art Flash-SSD solid-state drive;
  • FIG. 3 is a structural block diagram of a prior-art DRAM-SSD solid-state drive;
  • FIG. 4 shows one form of the DRAM hardware structure of the invention, in which the chaining rule of the defect-point areas is: bad-area X first address < bad-area X tail address < bad-area Y first address < ... < bad-area R first address < bad-area Z first address;
  • FIG. 5 is a block diagram of the division of a composite cache strategy with the DRAM as super cache according to the present invention;
  • FIG. 6 is a schematic diagram of the level-1 group-associative cache strategy of the super cache's traditional cache area participating in the mapping of the logical units of the Flash storage area;
  • FIG. 7 is a schematic diagram of the level-2 fully-associative cache strategy of the super cache's traditional cache area participating in the mapping of the logical units of the Flash storage area;
  • FIG. 8 is a flow block diagram of the cooperative operation of the different cache-area policies of the super cache according to the present invention;
  • FIG. 9 is a schematic diagram of one circuit implementation example of the dual backup power management module according to the present invention;
  • FIG. 10 is a program flow diagram of power protection and super-cache write-back to the Flash storage area on power loss or power-off;
  • FIG. 11 is a schematic diagram of one implementation of building an inexpensive redundant disk array RAID-type massive high-speed storage system with larger-capacity DRAM compositely participating in Flash management according to the present invention.
  • DRAM: dynamic random access memory
  • The dual backup power management module 38 provides protective backup power for writing the data in the DRAM module 35 back to the flash media module 37 during shutdown or power loss;
  • the memory logical addresses of the DRAM module 35 are mapped, after combination with the defect point area table, onto good physical addresses of the DRAM module 35.
  • The present invention further provides a high-speed solid-state storage disk device in which larger-capacity DRAM participates in flash media management, used as the storage device of a computer or server, comprising a flash media module 37 and an interface circuit module 31 and, in particular, also a larger-capacity dynamic random access memory DRAM module 35, a hard disk controller 39 in which the DRAM participates in managing the flash media, and the dual backup power management module 38 required by the DRAM module 35.
  • The hard disk controller 39 in which the DRAM participates in managing the flash media is connected to the DRAM module 35 and the Flash module 37 respectively through address/data buses 32 and 33; it is connected to the interface circuit module 31 through the composite bus;
  • the dual backup power management module 38 is electrically connected to the hard disk controller 39 in which the DRAM participates in managing the flash media.
  • The hard disk controller 39 comprises a CPU-program memory 391, a DRAM manager with super-cache policy management and DMA channel 392, m flash media channel controllers 394, and n DRAM management blocks with ECC check channel chip selects 393;
  • the CPU-program memory 391 connects to the DRAM manager / super-cache policy management / DMA channel 392 over the composite bus and to the flash media channel controllers 394 over the control bus; the flash media channel controllers 394 connect to the flash media module 37 over the data bus 33;
  • the DRAM manager / super-cache policy management / DMA channel 392 connects over the address/data bus to the DRAM management blocks and ECC check channel chip selects 393, which connect over the address/data bus 32 onto the DRAM module 35.
  • The working mode of the dual backup power management module 38 combines capacitive energy storage with battery power supply. When the solid-state storage disk device's host is in its normal working state, the module 38 is in the charging state and the full-charge protection state; when the computer shuts down or loses power, the module 38 powers the solid-state storage disk device, and a signal line triggers the hard disk controller 39 to finish writing the data with the write-back flag set in the super cache area of the DRAM module 35 back to the flash media module 37.
  • The two backup power sources are not divided into primary and secondary; they supply power according to their floating voltages in use. If one of the two fails, the other can independently meet the power needed for the super cache in the solid-state storage disk device to complete write-back to the maximum extent, plus the power needed for the backup-power alarm.
  • The circuit design of the dual backup power management includes a combination of a gold capacitor bank and a lithium battery to improve the safety and reliability of the power supply; the battery in the circuit is replaceable.
  • The gold capacitors used in the circuit of the dual backup power management module are super-large capacitors.
  • The hard disk interfaces used by the interface circuit module 31 include SATAII, SATAIII, e-SATA, PATA, PCI, PCI-E, USB 2.0 and USB 3.0.
  • One form of hardware design for the DRAM management block and ECC check channel chip select 392 is shown in FIG. 4.
  • According to the DRAM memory addressing structure, the DRAM can be classified by block or by bit before application: the portions where the defect-point areas 354 are numerous are excluded from MMU addressing, while the portions whose defect-point areas 354 are discrete or few can enter MMU addressing directly.
  • Referring to FIGS. 1 and 4, the address lines of DRAM memories No. 1 through No. 9 in one DRAM group of the DRAM module 35 are connected in series onto the address bus 321, and the data lines are gathered into 72 bits connected in parallel onto the data bus 322, where the 64 data lines D0 through D63 form the memory data storage bandwidth
  • and the 8 data lines D64 through D71 form the ECC check bandwidth of the accessor data (i.e., DRAM No. 9, 355).
  • The address bus 321 and the data bus 322 converge at the management block and ECC check channel manager 392.
  • The data-bus width can be adjusted to the specific application, e.g., 32-bit or 128-bit.
  • For DRAM memory managed under any of the above addressing forms, at the initialization stage of SSD production (also called the low-level format stage) an offline test program is run over the DRAM memory to construct the defect point area table (as shown in FIG. 4: node tables 351, 352 and 353 are built in address order).
  • The defect points or areas 354 on all DRAM memories of one tested chip-select CS group are recorded in the record nodes of the defect point area table as first-address/tail-address pairs sorted by address size (the sorting reduces the time spent searching the defect point area table, speeding the mapping of logical addresses onto DRAM physical addresses; if the next address has no defect, the registered first address equals the tail address). Because the DRAM memories of one chip-select group combine into one data bandwidth, for retrieval efficiency a single defect point at some address in any DRAM of the group means that all data storage at that address receives a defect-point registration (only a small number of defect points are calibrated out of the large DRAM area, so the amount of good space "dragged down" in this way is also small and can be ignored).
  • After the test program completes the test and builds the defect point area table, it stores the resulting table into a specific storage area of the Flash (that area serves as the SSD management area and is not open to the SSD's user area; it also reserves enough space to store the table plus a surplus of replacement blocks).
  • Once the SSD is in service, the I/O data of the DRAM memory is managed through a hardware-implemented ECC check mode.
  • This management monitors and scans the data integrity in the DRAM memory online and promptly corrects a bit error when a check error occurs (it can perform 1-bit online error correction on DRAM memory data). If the error exceeds 1 bit it cannot be corrected, and a resend of the data must be requested.
  • The address is then made into a defect point area table node and inserted into the table for registration.
  • The insertion of a defect-point (or defect-block) node 356 newly discovered during ECC checking is shown in FIG. 4: the address index is used to locate the insertion position, and the node is inserted and registered.
  • At the same time, the logical-to-physical mapping region is registered as missing in a temporary missing table, and a new "logical + 1" address is enabled for I/O operations.
  • After shutdown, the updated defect point area table is written back to the Flash management area; when the SSD is re-enabled, the mapping of the new DRAM logical addresses to physical addresses proceeds on the basis of the last updated defect point area table.
  • This management mode better guarantees the stability of the DRAM memory.
  • The hardware system of DRAM memory management built in this way provides a reliable hardware platform for exploiting its high speed and large capacity and constructing more efficient cache strategies; a minimal sketch of the table and its maintenance follows below.
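  • The following C sketch makes the defect point area table concrete: record nodes holding first/tail addresses kept sorted ascending, a lookup that maps a logical address onto a good physical address by skipping registered defect spans, and the insertion of a node newly discovered by the ECC check (node 356 in FIG. 4). All names and sizes here are illustrative assumptions, not the patent's actual firmware.

```c
#include <stdint.h>
#include <string.h>

#define MAX_DEFECT_NODES 4096  /* illustrative capacity, not from the patent */

/* One record node of the defect point area table: a bad area registered
 * by first and tail address (first == tail for a single defect point). */
typedef struct {
    uint32_t first;
    uint32_t tail;
} defect_node;

static defect_node table[MAX_DEFECT_NODES]; /* kept sorted by first address */
static int node_count = 0;

/* Map a logical address onto a good physical address by skipping every
 * registered defect span at or below the mapped position. A linear walk
 * is shown for clarity; the address ordering is what lets real firmware
 * binary-search the table instead. */
uint32_t logical_to_physical(uint32_t logical)
{
    uint32_t phys = logical;
    for (int i = 0; i < node_count; i++) {
        if (table[i].first <= phys)
            phys += table[i].tail - table[i].first + 1; /* jump over bad span */
        else
            break; /* sorted: later nodes cannot affect this address */
    }
    return phys;
}

/* Insert a node for a defect newly discovered by the ECC check, using
 * the address index to locate the insertion position. */
int register_defect(uint32_t first, uint32_t tail)
{
    if (node_count >= MAX_DEFECT_NODES)
        return -1; /* no reserve left in the Flash management area */
    int pos = 0;
    while (pos < node_count && table[pos].first < first)
        pos++;
    memmove(&table[pos + 1], &table[pos],
            (size_t)(node_count - pos) * sizeof(defect_node));
    table[pos].first = first;
    table[pos].tail  = tail;
    node_count++;
    return 0;
}
```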
  • The super cache designed in the present invention has an extremely large capacity and uses partitioning: the operating-system memory cache area of the super cache, hereinafter "super cache mem"; the write-data cache area of the super cache, hereinafter "super cache write"; and the traditional cache area of the super cache, hereinafter "super cache trad".
  • In this embodiment, an ultra-large-capacity DRAM memory with a storage capacity of 2GB is used to illustrate the space allocation of the super cache.
  • FIG. 5 shows the default-state space allocation after initialization. Once the SSD enters actual use, the dynamic adjustment strategy makes dynamic allocation adjustments according to the actual workload, to optimize for the tendencies of the specific application.
  • The super cache is allocated by default as super cache mem 51, super cache write 52 and super cache trad 53,
  • with spaces of 1GBytes, 512MBytes and 512MBytes respectively.
  • Super cache mem 51 uses a two-level internal management mode: level 1 is direct mode 511, with a default space of 256MBytes;
  • level 2 is compressed mode 512, with a default space of 768MBytes.
  • This area mainly caches the memory cache that the operating system (OS) opens on the SSD for storing page files (Page Files).
  • OS: operating system
  • Page Files: page files
  • To use this space to the maximum without hurting response speed, the stored content in this area is scheduled according to frequency of use:
  • frequently accessed data blocks are placed in the level-1 direct mode 511 area, and infrequently accessed
  • data blocks are placed in the level-2 compressed mode 512 area.
  • Such a scheduling strategy can obtain several times the storage of the limited space (simulation with extracted page-file data shows that requests to store 1.7 to 4.5GBytes of page files can be completed within the 1GBytes space).
  • If the two-level scheme still cannot satisfy the memory cache's space demands on the SSD, the management strategy may move the data blocks lying deep in the level-2 compressed mode 512 area to the Flash storage space, raising its scheduling management from two-level to a three-level management model.
  • The conversion of stored data blocks between levels 1 and 2 and the dynamic space-allocation scheduling of the two levels are relatively easy for engineers in this industry to understand and are not described further here; a rough sketch of the frequency-based scheduling follows below.
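  • As a rough illustration of the frequency-based movement between the level-1 direct area 511 and the level-2 compressed area 512, the sketch below promotes blocks whose access counter crosses a threshold and demotes (and compresses) blocks that cool off. The codec routines, the threshold and all names are placeholders assumed for illustration; the patent does not specify a compression algorithm.

```c
#include <stdint.h>
#include <stdbool.h>

#define HOT_THRESHOLD 8 /* assumed promotion threshold; the patent gives none */

typedef struct {
    uint32_t lba;        /* which page-file block this entry caches */
    uint32_t hits;       /* access-frequency counter */
    bool     compressed; /* true while resident in the level-2 area 512 */
    uint8_t *data;       /* raw or compressed payload */
    uint32_t len;        /* current payload length */
} cached_block;

/* Placeholder codecs: any lossless scheme could stand in here; these are
 * assumed helpers, not an API named by the patent. */
extern uint32_t compress_block(uint8_t *dst, const uint8_t *src, uint32_t len);
extern uint32_t decompress_block(uint8_t *dst, const uint8_t *src, uint32_t len);

/* On every access: count the hit, and expand a block back into the
 * level-1 direct area 511 once it is hot again. */
void touch_block(cached_block *b, uint8_t *scratch)
{
    b->hits++;
    if (b->compressed && b->hits >= HOT_THRESHOLD) {
        b->len = decompress_block(scratch, b->data, b->len);
        b->data = scratch;
        b->compressed = false;
    }
}

/* Background sweep: demote blocks that have cooled into the compressed
 * level-2 area 512, and age every counter so "frequency" stays recent. */
void sweep_block(cached_block *b, uint8_t *scratch)
{
    if (!b->compressed && b->hits < HOT_THRESHOLD) {
        b->len = compress_block(scratch, b->data, b->len);
        b->data = scratch;
        b->compressed = true;
    }
    b->hits /= 2;
}
```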
  • The internal management strategy of super cache write 52 leans mainly toward temporarily storing the data to be written to the SSD;
  • the data to be written to the SSD can be managed by data classification.
  • The data falls into two classes: one already exists in the SSD and is to be replaced; the other newly requests the SSD to allocate storage space.
  • The former data form can be regarded as data being edited or modified. Assuming the modification or editing is still going on, replacement may occur at any time; to reduce the number of Flash writes,
  • this type of data is staged in the cache, where the SSD manager can participate in managing it as if it were the real storage space, and the write to Flash completes only when the area is full or on shutdown or power loss.
  • The latter class can be staged in the cache as the lowest-priority data and written to Flash when cache space is tight or on shutdown or power loss. Because the cache operates in the background, the restriction on SSD write speed is released: the write speed escapes the speed limit of the flash memory, and write operations to the flash memory are minimized. A small sketch of this two-class staging follows below.
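  • A minimal sketch of the two-class write staging just described: a write whose LBA already has a mapping on the SSD is classed as "replace" (likely to be overwritten again while editing continues), a write to an unmapped LBA is classed as "new" and carries the lowest priority, and everything reaches Flash only when the area fills or the power-down signal arrives. The helpers and the capacity are assumptions, not the patent's firmware API.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define STAGE_CAP 1024 /* assumed capacity of the write area, for illustration */

typedef enum { WRITE_REPLACE, WRITE_NEW } write_class;

typedef struct {
    uint32_t    lba;
    write_class cls;
    uint8_t     payload[512];
} staged_write;

static staged_write stage[STAGE_CAP];
static int staged = 0;

/* Assumed helpers standing in for the SSD manager's own routines. */
extern bool lookup_mapping(uint32_t lba);                    /* LBA already on SSD? */
extern void flash_program(uint32_t lba, const uint8_t *buf); /* one Flash write    */

void flush_write_area(void); /* also triggered by the power-down signal line */

/* Stage one 512-byte write, classifying it on the way in. */
void stage_write(uint32_t lba, const uint8_t *buf)
{
    staged_write *w = &stage[staged];
    w->lba = lba;
    /* existing data being edited vs. newly allocated space */
    w->cls = lookup_mapping(lba) ? WRITE_REPLACE : WRITE_NEW;
    memcpy(w->payload, buf, sizeof w->payload);
    if (++staged == STAGE_CAP)
        flush_write_area(); /* area full: now, and only now, touch Flash */
}

/* Write everything back; WRITE_NEW entries carry the lowest priority,
 * so they are drained last. */
void flush_write_area(void)
{
    for (int pass = 0; pass < 2; pass++) {
        write_class want = (pass == 0) ? WRITE_REPLACE : WRITE_NEW;
        for (int i = 0; i < staged; i++)
            if (stage[i].cls == want)
                flash_program(stage[i].lba, stage[i].payload);
    }
    staged = 0;
}
```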
  • Super cache trad 53 uses a two-level internal management mode: level 1 is group-associative mode 531; level 2 is fully-associative mode 532.
  • The default space allocation is 128MBytes and 384MBytes respectively. Details follow with reference to FIGS. 6, 7 and 5.
  • Referring to FIG. 6, the level-1 group-associative 531 mode groups the cache area and partitions the logical units of the flash media module into zones, with the space in a group equal to the space in a zone. Groups 1 through 256 map zones 1 through 256, then zones 257 through 512, and so on up to zone 1M.
  • The SB area numbers within a group and the SB area numbers within the correspondingly mapped zone need not correspond one to one; drifting is allowed.
  • The mapping form of this mode rests on the principle of temporal locality: if a storage item is accessed, that item is likely to be accessed again soon.
  • One group: 1024 blocks of 512Bytes (SB) each; the group-associative region defaults to 256 groups (128MBytes = 256 groups * 1024 blocks * 512Bytes), a cache space variable with the hardware structure and the dynamic space-adjustment strategy. One zone: 1024 blocks of 512Bytes each; with the Flash memory set at a capacity of 512GBytes (variable per application), there are 1M zones (512GBytes = 1M zones * 1024 blocks * 512Bytes).
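  • Written as formulas, the default geometry just given is a restatement of those numbers:

$$128\,\text{MBytes} = 256\ \text{groups} \times 1024\ \text{blocks} \times 512\,\text{Bytes}$$

$$512\,\text{GBytes} = 1\,\text{M zones} \times 1024\ \text{blocks} \times 512\,\text{Bytes}$$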
  • Considering algorithm efficiency and temporal locality, the replacement and intra-group lookup operations of the level-1 group-associative (531) cache use a double hashing (Hash) algorithm. Both lookup and replacement use LUN+LBA, the combination of the logical unit number (LUN) and the logical block address (LBA), as the key of the hash search operation; one plausible reading of this lookup is sketched below.
  • LUN: logical unit number
  • LBA: logical block address
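  • The following C sketch shows one plausible reading of the double-hash lookup inside a 1024-block group, keyed by the LUN+LBA combination. The hash constants, key tagging and probe limit are assumptions, since the patent only names the technique.

```c
#include <stdint.h>
#include <stddef.h>

#define GROUP_BLOCKS 1024 /* SB blocks per group, per the default geometry */

typedef struct {
    uint64_t key;  /* LUN+LBA combination; 0 marks an empty slot */
    uint8_t *data; /* the cached 512-byte block */
} sb_slot;

/* Combine the logical unit number and logical block address into one
 * search key, tagged so that a valid key is never 0. */
static uint64_t make_key(uint16_t lun, uint32_t lba)
{
    return (1ULL << 63) | ((uint64_t)lun << 32) | lba;
}

/* Double hashing: the first hash picks the starting slot, the second
 * supplies an odd step, coprime with the power-of-two group size, so
 * the probe sequence visits every slot at most once. */
sb_slot *group_lookup(sb_slot group[GROUP_BLOCKS], uint16_t lun, uint32_t lba)
{
    uint64_t key = make_key(lun, lba);
    uint32_t h1  = (uint32_t)(key % GROUP_BLOCKS);
    uint32_t h2  = ((uint32_t)(key % (GROUP_BLOCKS - 1))) | 1u;
    for (uint32_t probe = 0; probe < GROUP_BLOCKS; probe++) {
        sb_slot *s = &group[(h1 + probe * h2) % GROUP_BLOCKS];
        if (s->key == key)
            return s;    /* hit */
        if (s->key == 0)
            return NULL; /* miss: caller fetches from level 2 or Flash */
    }
    return NULL; /* group full without a hit */
}
```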
  • Referring to FIG. 7, the level-2 fully-associative 532 mode divides the cache area into large blocks, each composed of several small blocks. Specifically: LB1 through LBn (n < 24K, because the Flag area and the data-link area occupy space) large blocks, each consisting of the 128 small blocks SB1 through SB128 plus a 16-byte Flag area before the block; each small block has a capacity of 512Bytes.
  • The logical units of the flash memory area are still divided into 1M SB small-block areas, numbered SB1 through SB1M.
  • The mapping form of this mode rests on the principle of spatial locality: if a storage item is accessed, that item and adjacent items are also likely to be accessed soon. Concretely, one small block of a flash-area logical unit together with its m adjacent small blocks (m less than 128) is fetched from the memory as one large-block (LB) unit into a free large block (LB) of the level-2 fully-associative area; because adjacent blocks are fetched, this embodies prefetching. A small block that has been accessed is marked 1 in its corresponding Flag bit, so that later, when the large block is replaced out, the small blocks carrying access marks can be moved into the cache of the level-1 group-associative 531 area.
  • Lookup in the level-2 fully-associative 532 area uses a balanced binary tree: corresponding to each small block (SB) of the flash memory area's
  • logical units there is a balanced-binary-tree node unit, and the LBA in each node unit serves as the key for searching the balanced binary tree.
  • Replacement in the level-2 fully-associative 532 area is implemented with the LRU replacement algorithm: the small blocks marked 1 in the Flag of the large block being replaced out are moved into the corresponding region of the level-1 group-associative 531 area; the large block replaced in receives the corresponding access marks of a new Flag, and the LRU table is corrected.
  • Movement out of the level-2 fully-associative 532 area is one-way: blocks can only move to the level-1 group-associative 531 area. There are two kinds of movement: one is the movement, mentioned under replacement, of small blocks whose Flag is 1; the other is the system's timed movement.
  • This cooperative level-1/level-2 working mode embodies the principles of temporal and spatial locality to the greatest extent and resolves the inherent contradiction that a single cache-management stage cannot satisfy both at once; a compact sketch of the eviction path follows below.
  • The access speed of SSD data requests thus escapes the speed limit of the flash memory, swap-in/swap-out and query times are greatly reduced, and the speed of the system improves.
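  • A compact sketch of the eviction path described above: each large block carries a 16-byte Flag area (128 bits, one per small block), and on LRU eviction every flagged small block is handed, one-way, to the level-1 group-associative cache. The `promote_to_level1` hook stands in for the group-associative insert and is an assumption.

```c
#include <stdint.h>
#include <string.h>

#define SB_PER_LB 128 /* 128 small blocks per large block, per the text */
#define SB_BYTES  512

typedef struct {
    uint8_t  flags[16];  /* the 16-byte Flag area: 128 bits, one per SB */
    uint32_t base_sb;    /* first Flash SB number covered by this LB */
    uint32_t lru_stamp;  /* smaller = older; drives LRU victim choice */
    uint8_t  data[SB_PER_LB][SB_BYTES];
} large_block;

/* Assumed hook standing in for insertion into the level-1
 * group-associative 531 cache. */
extern void promote_to_level1(uint32_t sb_no, const uint8_t *sb_data);

static int flag_get(const large_block *lb, int i)
{
    return (lb->flags[i >> 3] >> (i & 7)) & 1;
}

/* Mark a small block accessed so it survives eviction of its LB. */
void touch_sb(large_block *lb, int sb_index, uint32_t now)
{
    lb->flags[sb_index >> 3] |= (uint8_t)(1u << (sb_index & 7));
    lb->lru_stamp = now;
}

/* LRU victim chosen: before the LB is reused, every flagged small block
 * is moved, one-way, into the level-1 group-associative area. */
void evict_lb(large_block *lb)
{
    for (int i = 0; i < SB_PER_LB; i++)
        if (flag_get(lb, i))
            promote_to_level1(lb->base_sb + (uint32_t)i, lb->data[i]);
    memset(lb->flags, 0, sizeof lb->flags); /* fresh Flag area for new tenant */
}
```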
  • The three space divisions of the super cache, super cache mem 51, super cache write 52 and super cache trad 53, have now been introduced.
  • The three divisions, and the caching strategies used inside each, let application devices, through the system (OS), build the speed of I/O operations on the SSD of the present invention entirely on the DRAM basis, freeing the read/write speed from the flash memory's
  • shackles. At the same time, writes to the flash memory are minimized, which reduces the dependence on algorithms such as wear leveling, further improving the reliability and service life of the SSD; going further, this will have deeper significance for lowering cost by adopting Flash made with cheap new processes (such as 3LC).
  • The internal functional division of the super cache and the application of its multiple caching strategies can be implemented with the flow shown in FIG. 8.
  • Super cache (default value) and super cache (empirical value) refer to the manner of dividing the super cache space region.
  • The default-value manner is the factory setting of the SSD solid-state drive.
  • The empirical-value manner re-divides the super cache space region using empirical values obtained, once the SSD has been deployed by a specific user in that user's own application environment, from a series of classification analyses and statistics of the data streams. The parameters of the statistical analysis, as global quantities, are influenced by the weighting values of the specific super cache (super cache mem, super cache write, super cache trad) policies; one way to read this is sketched below. The details of data exchange and flow among the several super caches are not described here.
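  • One way to read the default-versus-empirical division is as a weighted re-split of the 2GB super cache driven by the observed statistics; the sketch below is purely illustrative, and the proportional formula and weight inputs are assumptions.

```c
#include <stdint.h>

#define SUPER_CACHE_BYTES (2ULL << 30) /* the 2GB DRAM of this embodiment */

typedef struct {
    uint64_t mem_area;   /* super cache mem   (default 1GBytes)   */
    uint64_t write_area; /* super cache write (default 512MBytes) */
    uint64_t trad_area;  /* super cache trad  (default 512MBytes) */
} cache_split;

/* Re-divide the super cache in proportion to weighted demand statistics
 * gathered from the user's data streams. The proportional formula is an
 * assumption; the patent only says the statistics act as weighted global
 * quantities influencing each area's policy. */
cache_split empirical_split(uint64_t w_mem, uint64_t w_write, uint64_t w_trad)
{
    cache_split s;
    uint64_t total = w_mem + w_write + w_trad;
    if (total == 0) { /* no statistics yet: keep the factory default */
        s.mem_area   = 1ULL << 30;
        s.write_area = 512ULL << 20;
        s.trad_area  = 512ULL << 20;
        return s;
    }
    s.mem_area   = SUPER_CACHE_BYTES * w_mem   / total;
    s.write_area = SUPER_CACHE_BYTES * w_write / total;
    s.trad_area  = SUPER_CACHE_BYTES - s.mem_area - s.write_area;
    return s;
}
```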
  • The dual backup power management module adopted is a protective backup-power scheme for writing the data in the DRAM back to the flash memory on shutdown or power loss.
  • A specific application can take a more precise measurement according to its power consumption. Referring to FIG. 9, this embodiment uses 2GB of DRAM and 512GB of Flash memory to illustrate one way of matching the power consumption.
  • The designed power input is 5VDC;
  • the backup power source is a single lithium battery plus a gold capacitor bank;
  • the main and backup supplies are isolated and switched by diodes.
  • Considering the different supply requirements of the DRAM, the Flash and the main controller, the power conversion circuit generates working voltages of 3.3V, 1.8V and 1.5V respectively.
  • The power conversion circuit uses fairly common DC-DC circuits: the 1.8V rail carries a large current and is built around the SP 51, with a load capacity up to 3A; the 1.5V supply is obtained from 1.8V through the LDO regulator SP6201, with a load capacity of 0.2A; the 3.3V supply is built around the SP6641, with a load capacity of 0.5A. (If DDR3 is used as the main DRAM component, consider increasing the 1.5V load capacity and reducing the 1.8V load.)
  • The 5VDC power input reaches the VDD end of the downstream power conversion circuit through diode D4 and at the same time charges the battery through the battery-charging circuit U3 (MCP73831); the battery's output reaches VDD through diode D5. Since VDD = Vin - 0.4V, about 4.6V, which is higher than the 4.2V battery voltage at the positive end of diode D5, the battery does not discharge under normal working conditions.
  • The voltage at the VDD point simultaneously charges the gold capacitor bank through D6 and R15, bringing the voltage on it to about 4.2V.
  • The function of diode D6 is to drop a little voltage so that the gold capacitor bank is not overvolted (4.6V); resistor R15 limits the current charging the gold capacitor bank.
  • When the input supply is disconnected, VDD will be supplied by the battery through D5 and by the gold capacitor bank through D7. Since the battery's float voltage is 4.2V while its stable discharge voltage is 3.7V, the voltage of the gold capacitor bank will be higher than the battery voltage: first the gold capacitor bank supplies VDD through D7, and after the voltage drops to a certain value the battery supplies power at the same time.
  • The MOS-FET Q2 and its peripheral circuit constitute a power switch circuit.
  • When the 5VDC input is valid, Q2 is turned on through D1 and R2, so that VDD is powered. At this time the 3.3V supply of the power conversion circuit also provides
  • turn-on control to Q2 through D9 and R2. When the 5VDC supply is removed, the 3.3V supply continues to exist thanks to the operation of the backup power system described above, so Q2 stays on.
  • Power-loss detection is composed of Q1 and its peripheral circuit.
  • When the 5VDC supply is normal, Q1 conducts, and its collector outputs a low-level signal to the CPU 33: this is the normal mode.
  • When the 5VDC supply is lost, Q1 cuts off and outputs a high level,
  • triggering the CPU 33 to carry out the shutdown/power-loss mode of operation (the most important part of which is the Flash write-back of the data). When the operation ends, the CPU 33 outputs a high-level signal to shut off the 3.3V supply, so that Q2 is also turned off and the whole unit's power is off.
  • FIG. 10 expresses the principle logic and textual description of the above dual backup power management module embodiment further as a program flow diagram, to clarify the working procedure of power protection and super-cache write-back to the Flash storage area after shutdown/power loss; read as code, that sequence might look like the sketch below.
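  • A minimal C sketch of the FIG. 10 power-loss path: the detector (Q1) raises a signal, the controller writes back every super-cache entry whose write-back flag is set, saves the updated defect point area table to the Flash management area, then releases the 3.3V rail so Q2 opens. All function names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed firmware hooks, named for illustration only. */
extern bool power_fail_signal(void);                         /* Q1 output high  */
extern bool next_dirty_entry(uint32_t *lba, uint8_t **buf);  /* write-back flag */
extern void flash_program(uint32_t lba, const uint8_t *buf);
extern void save_defect_table_to_flash(void);                /* management area */
extern void release_3v3_rail(void);                          /* lets Q2 open    */

/* Polled from the controller main loop: on power loss, run the FIG. 10
 * sequence on backup power, then let the whole unit power down. */
void service_power_events(void)
{
    if (!power_fail_signal())
        return; /* normal mode: low level from Q1, nothing to do */

    uint32_t lba;
    uint8_t *buf;
    while (next_dirty_entry(&lba, &buf)) /* super cache data with the      */
        flash_program(lba, buf);         /* write-back flag set goes first */

    save_defect_table_to_flash();        /* updated defect point area table */
    release_3v3_rail();                  /* CPU drops 3.3V, Q2 turns off    */
}
```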
  • Referring to FIG. 11, an example application of the super cache of the present invention in a larger-scale RAID is described, to show what it has in common with the SSD application and where it is special, by way of contrast and comparison.
  • The inexpensive redundant disk array (RAID)-type massive high-speed storage system is realized by applying the multi-level, ultra-large-capacity Cache 91 formed by the DRAM & Flash composite to RAID management equipment, either architected between the RAID array cards 92, 93 and the disk array 94, or merged directly into the structure of the RAID array card, among other forms.
  • Its efficient working mechanism is still to architect the I/O speed between servers and network storage devices (interfaces can be SCSI/SATA2, etc.) and the disk array 94 on top of the DRAM & Flash composite high-speed cache, effectively raising the efficiency of disk storage and relieving the low-speed backup storage devices of the constraints of server and network speed.
  • The multi-level ultra-large-capacity Cache 91 formed by the DRAM & Flash composite and the SSD solid-state drive described above have speed-up in common; the differences show in the following aspects:
  • First, the applied DRAM capacities differ considerably: a few GB in the SSD application versus tens of GB in the RAID application (with position and space unconstrained, the backup power system will allow still larger application levels, e.g., 512GB becomes possible).
  • Second, the Flash application can serve as the backup primary storage body on the SSD; on RAID it can serve as the next-level backup staging cache below the DRAM-level cache (not excluding its application as the disk-array target owing to price degradation or special application requirements).
  • Third, in structural design, because the RAID application has considerable flexibility of space and position, the design will consider plug-in or stacked arrangements of the DRAM or Flash, convenient for flexible application expansion; the SSD, because of space constraints, will think more about customization and integration.
  • Fourth, in internal management strategy, the RAID application will tend to be more complex and demand more intelligent self-adaptation than the SSD application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Description

Method and apparatus for constructing a high-speed solid-state storage disk using larger-capacity DRAM to participate in flash media management TECHNICAL FIELD The present invention relates to erasable programmable read-only memories, and particularly to accessing, addressing or allocation within a memory system or architecture; it especially relates to a method and apparatus for constructing a solid-state storage hard disk by combining dynamic random access memory DRAM with Flash management. BACKGROUND With the continual development of central processing unit (CPU) speed and memory (Memory) speed in the computer field, the traditional mechanical hard disk (in English, hard disk drives, used to refer to this kind of disk, hereinafter HHD) has increasingly become the bottleneck of data input/output (I/O). Although continual improvements in hard-disk cache (Cache) technology and interface technologies (PATA, SATA, etc.) have greatly raised the speed of the HHD, they still cannot satisfy the demands of the CPU and the bus for further improvement of I/O speed.
The rapid development of flash media (Flash) technology, with capacity constantly rising and cost gradually falling, has provided hard-disk manufacturing with a new type of non-volatile solid-state data memory. Solid-state drives (SSD) as shown in FIG. 2 began to appear. In view of the limit on Flash write counts, this type of SSD usually adopts single-level cell (SLC) Flash to increase stability, but its cost is high and manufacturing such a disk is expensive. At present, high-capacity, low-cost multi-level cell (MLC) parts are entering mass production: cache (Cache) processing space is added to reduce write counts; the Advanced Dynamic Wear Leveling (ADWL) algorithm is adopted to extend Flash service life; and multiple channels, data broadband, and Concurrent Operations (CO) are adopted to raise write speed. This kind of solution runs cool and quiet and can provide personal computers and network servers with a lightweight, low-power and fast storage solution.
But limited by the I/O speed ceiling and the asymmetry of I/O speed (the write speed of Flash is much slower than the read speed), excessive manufacturing cost, and disk endurance and stability, among many other factors, such solid-state drives are difficult to push widely into the market. The main cause of this result lies in Flash technology itself: because of the block (Block) and page (Page) management model, before a Flash write the whole block's contents must be read out and backed up, the content to be changed merged in, and that content then write-programmed into a block on which the erase operation has completed. This whole-block write mode and mechanism dictate a limit on the number of writes. In particular, the MLC developed to increase capacity raises integration, and its cost per unit capacity shrinks greatly compared with SLC, but the write-programming time is correspondingly longer and the number of effective writes also drops substantially. Add to this the continual shrink of the wafer (Wafer) process (from 70nm to 56nm, then to 50nm, 43nm, even 34nm): while capacity rises, bad-point regions increase and the number of effective write-programming cycles shrinks. All of this raises the question of whether it is feasible to apply the new processes and new low-cost models effectively to SSD manufacturing.
Dynamic random access memory (DRAM) technology is a technology that is actively developing at present, under the constant demand for progress in speed and capacity in providing memory (Memory) for the CPU. Pure-DRAM SSDs in several interface forms (such as PCI-E, SATA-II and PATA) have also appeared one after another (as shown in FIG. 3). Their capacity, limited by price and other factors, lies mostly in the range of 4GB, 8GB or 16GB. Because this form must be supplied with power continuously to keep the internally stored data from being lost, it is applied only in some occasions with especially high speed requirements; to guarantee that data is not lost, on power loss or shutdown the data must be written back at the software level to a backup hard disk, or hung directly on a backup hard disk through the interface. Because of price and the inability to preserve data across power loss, it faces difficulties standing alone as a backup storage disk.
From SDRAM through DDR1, DDR2, DDR3 and some special application forms of DRAM, speed keeps rising; at the same time, as the wafer process keeps shrinking and speed and capacity keep improving, good parts and grade-B parts with bad-point regions (usually also described as storage space with defect points, or flaw points) always occur in the manufacturing process in a certain proportion. Because memory use is continuous-space I/O, which does not allow the computation and calibration involved in bad-block region management as on hard disks, the application occasions of such grade-B products are currently limited; unlike Flash, which can be effectively used at a certain proportion and thereby amortize a certain proportion of cost, DRAM manufacturers hold a huge quantity of such stock, still accumulating. If some proportion of the grade-B parts could be applied effectively, it would benefit cost amortization for the whole industry. This is the objective situation of grade-B DRAM at present. Meanwhile, in SSD applications the cache (Cache) part is also mostly implemented with DRAM products, but for cost reasons the DRAM capacity adopted is small, and the cache built serves block write-back and write-back reduction: a traditional cache using least-recently-used (LRU) and most-recently-used (MRU) caching techniques. SUMMARY OF THE INVENTION The technical problem to be solved by the present invention, in view of the deficiencies in the prior art above, is to propose a method for constructing a solid-state storage hard disk by combining dynamic random access memory DRAM with Flash management, which both exerts DRAM's high-speed and balanced I/O capability and effectively lets Flash's large-capacity storage play its role, combining the two organically, and freeing the SSD from the dependence on write counts when built with Flash alone and from the dependence on power-loss protection when built with DRAM alone.
The technical solution proposed by the present invention to solve the above technical problem is: a method for constructing a solid-state storage hard disk by combining dynamic random access memory DRAM with Flash management, used to compose the storage system of a computer or server, the method comprising the steps of:

A. providing a flash media module and an interface circuit module; B. providing a larger-capacity dynamic random access memory DRAM module, part of whose storage space is used together with the flash media module for data storage;

C. providing a hard disk controller in which the DRAM participates in managing the flash media;

D. providing a dual backup power management module, for writing the data in the DRAM module back to the flash media module on shutdown or power loss, supplying a protective backup power source;

E. building a super cache (Cache) region with the larger-capacity DRAM module, dividing the storage space in partitioned and leveled form, while building efficient algorithms of multiple adaptively adjusted caching strategies to manage the region and each partition internally;

F. in the initialization stage after production of the solid-state storage disk SSD, testing the DRAM module offline so as to construct a defect point area table;

G. mapping each memory logical address of the DRAM module, after combination with the defect point area table, onto a good physical address of the DRAM module;

H. performing online monitoring and scanning with hardware-implemented error-checking ECC correction, registering the addresses of unstable regions in the DRAM module into the defect point area table in real time so that they participate in the new mapping management.
In step E, the partitioning in "dividing the storage space in partitioned and leveled form" divides the cache (Cache) region into a memory area of the super cache, a write area of the super cache and a traditional cache area of the super cache; the super-cache operating-system memory area buffers the memory cache that the host operating system opens on this solid-state storage hard disk for storing page files (Page files); the super-cache write area is used to temporarily store data to be written to this solid-state storage hard disk.

In step E, the leveled form in "dividing the storage space in partitioned and leveled form" divides the super-cache operating-system memory area into a level-1 direct area and a level-2 compression area; the traditional cache area of the super cache is divided into a level-1 group-associative area and a level-2 fully-associative area.

In step E, "efficient algorithms of multiple adaptively adjusted caching strategies manage the region and each partition internally" means a management mode and algorithm of level-1 direct connection with level-2 compression for the super-cache operating memory area; a management mode and algorithm of level-1 group association with level-2 full association for the traditional cache area of the super cache; and a data-classification management mode for the write area of the super cache.

The division of storage space described in step E is dynamically adjusted according to empirical values from application statistics; that is, the division of the cache (Cache) space originally made by default values is adjusted to a spatial division by those empirical values.
Meanwhile, when grade-A DRAM parts without bad-point regions are used, steps F, G and H need not be carried out. By building an ultra-large-capacity multi-level cache system with still larger DRAM and Flash, and architecting the data I/O and switching of servers and network storage with disk arrays on top of this high-speed cache, an inexpensive redundant disk array RAID-type massive high-speed storage system can be built, enhancing RAID management capability and reducing cost. To solve the above technical problem, the present invention further provides a high-speed solid-state storage disk device built with larger-capacity DRAM participating in flash media management, used as the storage device of a computer or server, comprising a flash media module and an interface circuit module and, in particular, also a larger-capacity dynamic random access memory DRAM module, a hard disk controller in which the DRAM participates in managing the flash media, and the dual backup power management module required by the DRAM module.
The hard disk controller in which the DRAM participates in managing the flash media is connected to the DRAM module and the Flash module respectively through address/data buses; it is connected to the interface circuit module through the composite bus; the dual backup power management module is electrically connected to the hard disk controller in which the DRAM participates in managing the flash media.

The hard disk controller in which the DRAM participates in managing the flash media comprises a CPU-program memory, a DRAM manager with super-cache (Cache) policy management and DMA channel, m flash media channel controllers, and n DRAM management blocks with ECC check channel chip selects;

the CPU-program memory is connected to the DRAM manager / super-cache policy management / DMA channel through the composite bus, and to the flash media channel controllers with the control bus; the flash media channel controllers are connected to the flash media module through the data bus; the DRAM manager / super-cache policy management / DMA channel is connected with the address/data bus to the DRAM management blocks and ECC check channel chip selects; the DRAM management blocks and ECC check channel chip selects are connected through the address/data bus onto the DRAM module.

The working mode of the dual backup power management module includes a power supply combining capacitive energy storage and battery supply. When the solid-state storage disk device's host is in the normal working state, the dual backup power management module is in the charging state and the full-charge protection state; when the computer shuts down or loses power, the dual backup power management module supplies power to the solid-state storage disk device, and a signal line triggers the hard disk controller in which the DRAM participates in managing the flash media to finish writing the data with the write-back flag set in the super cache area of the DRAM module back to the flash media module.

In the circuit of the dual backup power management module, the two backup power sources have no primary/secondary distinction and supply power according to their floating voltages in use; when one of the two backup supplies fails, the other can independently meet the power needed for the super cache in the solid-state storage disk device to complete write-back to the maximum extent, plus the power needed for the backup-power alarm prompt. Moreover, the circuit design of the dual backup power management includes a combination of a gold capacitor bank and a lithium battery to improve safety and reliability; the battery in the circuit of the dual backup power management module is replaceable. The gold capacitors used in the dual backup power management module are super-large capacitors.
The hard disk interfaces used by the interface circuit module include SATAII, SATAIII, e-SATA, PATA, PCI, PCI-E, USB 2.0 and USB 3.0. Compared with the prior art, the beneficial effects of the present invention are: Flash write-back is reduced to the greatest extent; the goal of speed-up is reached because the I/O hit rate is high; and the read/write speed is greatly improved because it does not depend on the write speed of Flash. The system response speed of a solid-state storage hard disk SSD built according to the present invention is greatly improved; and responsive regional policy management can be made according to the differences among data I/O requests, so that management strategies for multiple kinds of data I/O requests are accommodated simultaneously inside one massive high-speed storage system. In this way, not only is the I/O hit rate in the super cache greatly raised, but data management can respond in parallel to multiple different kinds of requests, so that the massive high-speed storage system achieves faster response and greater performance with little added cost. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a structural block diagram of a solid-state storage hard disk built by combining dynamic random access memory DRAM with Flash management according to the present invention;
FIG. 2 is a structural block diagram of a prior-art Flash-SSD solid-state drive;

FIG. 3 is a structural block diagram of a prior-art DRAM-SSD solid-state drive;

FIG. 4 is one form of the DRAM hardware structure of the present invention, in which the chaining rule of the defect-point defect areas is: bad-area X first address < bad-area X tail address < bad-area Y first address < ... < bad-area R first address < bad-area Z first address;

FIG. 5 is a block diagram of the division of a composite Cache strategy with the DRAM of the present invention as super cache;

FIG. 6 is a schematic diagram of the level-1 group-associative Cache strategy of the traditional Cache area of the super cache of the present invention participating in the mapping of the logical units of the Flash storage area;

FIG. 7 is a schematic diagram of the level-2 fully-associative Cache strategy of the traditional Cache area of the super cache of the present invention participating in the mapping of the logical units of the Flash storage area;

FIG. 8 is a flow block diagram of the cooperative operation of the different Cache-area policies of the super cache of the present invention; FIG. 9 is a schematic diagram of one circuit implementation example of the dual backup power management module of the present invention; FIG. 10 is a program flow block diagram of power protection and super-cache write-back to the Flash storage area on power loss or power-off in the present invention;

FIG. 11 is a schematic diagram of one implementation of building an inexpensive redundant disk array RAID-type massive high-speed storage system with larger-capacity DRAM compositely participating in Flash management according to the present invention. DETAILED DESCRIPTION The present invention is further described below in conjunction with the preferred embodiments shown in the accompanying drawings.
Referring to FIG. 1, the method of the present invention for constructing a solid-state storage hard disk by combining dynamic random access memory DRAM with Flash management is implemented in the steps of:

A. providing a flash media module 37 and an interface circuit module 31;

B. providing a larger-capacity dynamic random access memory DRAM module 35, part of whose storage space is used together with the flash media module 37 for data storage;

C. providing a hard disk controller 39 in which the DRAM participates in managing the flash media;

D. providing a dual backup power management module 38, for writing the data in the DRAM module 35 back to the flash media module 37 on shutdown or power loss, supplying a protective backup power source;

E. building a super cache (Cache) region with the larger-capacity DRAM module 35, dividing the storage space in partitioned and leveled form, while building efficient algorithms of multiple adaptively adjusted caching strategies to manage the region and each partition internally;

F. in the initialization stage after production of the solid-state storage disk SSD, testing the DRAM module 35 offline so as to construct a defect point area table;

G. mapping each memory logical address of the DRAM module 35, after combination with the defect point area table, onto a good physical address of the DRAM module 35;

H. performing online monitoring and scanning with hardware-implemented error-checking ECC correction, registering the addresses of unstable regions in the DRAM module 35 into the defect point area table in real time so that they participate in the new mapping management. In step E, the partitioning in "dividing the storage space in partitioned and leveled form" divides the cache (Cache) region into a memory area of the super cache, a write area of the super cache and a traditional Cache area of the super cache; the super-cache operating-system memory area buffers the memory cache that the host operating system opens on this solid-state storage hard disk for storing page files (Page files); the super-cache write area is used to temporarily store data to be written to this solid-state storage hard disk.
In step E, the leveled form in "dividing the storage space in partitioned and leveled form" divides the super-cache operating-system memory area into a level-1 direct area and a level-2 compression area; the traditional Cache area of the super cache is divided into a level-1 group-associative area and a level-2 fully-associative area.

In step E, "efficient algorithms of multiple adaptively adjusted caching strategies manage the region and the partitions internally" means a management mode and algorithm of level-1 direct connection with level-2 compression for the super-cache operating memory area; a management mode and algorithm of level-1 group association with level-2 full association for the traditional Cache area of the super cache; and a data-classification management mode for the write area of the super cache. The division of storage space described in step E is dynamically adjusted according to empirical values from application statistics; that is, the division of the cache (Cache) space originally made by default values is adjusted to a spatial division by those empirical values.

When grade-A DRAM parts without bad-point regions are used, steps F, G and H need not be carried out.

By building an ultra-large-capacity multi-level Cache system with still larger DRAM and Flash, and architecting the data I/O and switching of servers and network storage with disk arrays on top of this high-speed Cache, an inexpensive redundant disk array RAID-type massive high-speed storage system can be built, enhancing RAID management capability and reducing cost. Referring to FIG. 1, the present invention at the same time further provides a high-speed solid-state storage disk device built with larger-capacity DRAM participating in flash media management, used as the storage device of a computer or server, comprising a flash media module 37 and an interface circuit module 31 and, in particular, also a larger-capacity dynamic random access memory DRAM module 35, a hard disk controller 39 in which the DRAM participates in managing the flash media, and the dual backup power management module 38 required by the DRAM module 35.
The hard disk controller 39 in which the DRAM participates in managing the flash media is connected to the DRAM module 35 and the Flash module 37 respectively through address/data buses 32 and 33; the hard disk controller 39 is connected to the interface circuit module 31 through the composite bus; the dual backup power management module 38 is electrically connected to the hard disk controller 39 in which the DRAM participates in managing the flash media.

The hard disk controller 39 comprises a CPU-program memory 391, a DRAM manager with super-cache (Cache) policy management and DMA channel 392, m flash media channel controllers 394, and n DRAM management blocks with ECC check channel chip selects 393;

the CPU-program memory 391 is connected to the DRAM manager / super-cache policy management / DMA channel 392 through the composite bus, and to the flash media channel controllers 394 with the control bus; the flash media channel controllers 394 are connected to the flash media module 37 through the data bus 33; the DRAM manager / super-cache policy management / DMA channel 392 is connected with the address/data bus to the DRAM management blocks and ECC check channel chip selects 393; the DRAM management blocks and ECC check channel chip selects 393 are connected through the address/data bus 32 onto the DRAM module 35.

The working mode of the dual backup power management module 38 includes a power supply combining capacitive energy storage and battery supply. When the solid-state storage disk device's host is in the normal working state, the dual backup power management module 38 is in the charging state and the full-charge protection state; when the computer shuts down or loses power, the dual backup power management module 38 supplies power to the solid-state storage disk device, and a signal line triggers the hard disk controller 39 in which the DRAM participates in managing the flash media to finish writing the data with the write-back flag set in the super cache area of the DRAM module 35 back to the flash media module 37.

In the circuit of the dual backup power management module 38, the two backup power sources have no primary/secondary distinction and supply power according to their floating voltages in use; when one of the two backup supplies fails, the other can independently meet the power needed for the super cache in the solid-state storage disk device to complete write-back to the maximum extent, plus the power needed for the backup-power alarm prompt. The circuit design of the dual backup power management includes a combination of a gold capacitor bank and a lithium battery to improve the safety and reliability of the power supply; the battery in the circuit is replaceable. The gold capacitors used in the circuit of the dual backup power management module are super-large capacitors.

The hard disk interfaces used by the interface circuit module 31 include SATAII, SATAIII, e-SATA, PATA, PCI, PCI-E, USB 2.0 and USB 3.0.
One form of the hardware design of the DRAM management block and ECC check channel chip select 392 is shown in FIG. 4. According to the DRAM memory addressing structure, the DRAM can be classified by block or by bit before application: the portions where the defect-point defect areas 354 are numerous are excluded from MMU addressing, while the portions whose defect-point defect areas 354 are discrete or few can enter MMU addressing directly.

Referring to FIGS. 1 and 4, the address lines of DRAM memories No. 1 through No. 9 in one DRAM group of the DRAM module 35 are connected in series onto the address bus 321, and the data lines are gathered into 72 bits connected in parallel onto the data bus 322, of which the 64 data lines D0 through D63 form the memory data storage bandwidth, and the 8 data lines D64 through D71 form the ECC check bandwidth of the accessor data (i.e., DRAM No. 9, 355). The address bus 321 and the data bus 322 converge into the management block and ECC check channel manager 392. In this scheme the data bus can be adjusted to the specific application, e.g., 32-bit or 128-bit.

For DRAM memory under any of the above addressing-management forms, at the initialization-setup stage of SSD production (also called the low-level format stage), a non-online test program must be run over the DRAM memory; this test process serves to construct the defect point area table (as shown in FIG. 4: the node tables 351, 352 and 353 are built in address order). The defect points or areas 354 on all DRAM memories of one tested chip-select CS group are recorded, as first addresses and tail addresses sorted by address size, in the record nodes of the defect point area table (the purpose of sorting is to reduce the time of searching with the defect point area table, thereby raising the efficiency of mapping logical addresses onto DRAM physical addresses; if the next address has no defect point, the registered first address equals the tail address). Because the DRAM memory data of one chip-select group forms a combined data bandwidth, to raise retrieval-management efficiency, for the different DRAM memories at one address of the same chip-select group, a single defect point means that all the data storage areas at that address are given a defect-point registration (a minority of defect points are calibrated out of the large DRAM area, and the number of good spaces thus "implicated" is also a minority and can be neglected). When the test program has completed the test and built the defect point area table, the program stores the formed table into a specific storage area of the Flash (that area serves as the SSD management area and is a non-open area with respect to the SSD's user area; at the same time, that area reserves enough space to store the table and a surplus reserve of replacement blocks).

When the SSD is put into service, the I/O data of the DRAM memory is all managed through a hardware-implemented ECC check mode; this management monitors and scans online the data integrity in the DRAM memory, and if a check error appears it corrects that bit error promptly (it can perform 1-bit online error correction on DRAM memory data); if the error there is more than a 1-bit error, it cannot be corrected, and a resend of the data must be requested. The address is made into a node of the defect point area table and inserted into the table for registration. The insertion of a defect-point (or defect-block) node 356 newly discovered in the ECC check is shown in FIG. 4: the address index is used to find the node's insertion position, and the insertion registration is done. At the same time, the logical and physical mapping region is given missing-handling registration, registered into a temporary missing table, and a new "logical + 1" address is enabled for I/O operations. After shutdown, the updated defect point area table is written back to the Flash management area; when the SSD is re-enabled, the mapping of the new DRAM logical addresses and physical addresses proceeds again on the basis of the last-updated defect point area table. The purpose of adopting such a management mode is to give the stability of the DRAM memory a better guarantee. Building this hardware system of DRAM memory management provides a reliable hardware platform for effectively bringing its high speed and large capacity into play and constructing more efficient Cache strategies.
The implementation example of the super cache built from the ultra-large-capacity DRAM memory shown in FIGS. 5, 6 and 7 is described further in detail below:

Because of its ultra-large capacity, the super cache designed in the present invention adopts partitioning: the operating-system memory Cache area of the super cache, hereinafter "super cache mem"; the write-data Cache area of the super cache, hereinafter "super cache write"; the traditional Cache area of the super cache, hereinafter "super cache trad". In this implementation an ultra-large-capacity DRAM memory with a storage capacity of 2GB is used to illustrate the space allocation of the super cache. FIG. 5 shows the default-state space allocation after initialization. Once the SSD enters the actual application process, the dynamic adjustment strategy makes dynamic allocation adjustments according to the actual working situation, in order to optimize the tendencies of the specific application.

The super cache is allocated by default as super cache mem 51, super cache write 52 and super cache trad 53, with spaces of 1GBytes, 512MBytes and 512MBytes respectively.

Super cache mem 51 internally adopts a two-level management mode: level 1 is direct mode 511, with a default space of 256MBytes; level 2 is compressed mode 512, with a default space of 768MBytes. This area mainly caches the memory Cache that the operating system (OS) opens on the SSD for storing page files (Page Files). To use this space to the maximum without affecting response speed, the stored content in this region is scheduled by frequency of use: frequently accessed data blocks are placed in the level-1 direct mode 511 area, and infrequently accessed data blocks are placed in the level-2 compressed mode 512 area. Such a scheduling strategy can obtain several times the storage of the limited space (simulation experiments with extracted page-file data show that requests to store 1.7 to 4.5GBytes of page files can be completed within the 1GBytes space). If the two-level mode still cannot satisfy the memory Cache's space demands on the SSD, the management strategy can consider moving the data blocks whose usage level lies deep in the level-2 compressed mode 512 area into the Flash storage space, its scheduling management rising from two-level to a three-level management mode. The conversion of stored data blocks between the internal levels 1 and 2 and the dynamic space-allocation scheduling of the two levels are relatively easy for engineers and technicians of this industry to understand and are not described further here.

The internal management-strategy tendency of super cache write 52 is mainly to stage the data to be written to the SSD; the data to be written to the SSD can be managed by data classification. The data can be divided into two kinds: one already exists in the SSD and is to be replaced; the other newly requests the SSD to allocate space for storage. The former data form can be regarded as data being edited or modified; supposing that the modification or editing is still continuing, replacement may occur at any moment. To reduce the number of Flash writes in the SSD, this type of data is staged in the cache, where the SSD manager can participate in managing this part of the data as if it were the real storage space; the write to Flash is completed only when that area is full or on shutdown and power loss. The latter data form can be staged in the cache as the lowest-priority data and written into Flash when the cache space is tight or on shutdown and power loss. Because of the background operability of the cache, the restriction binding the SSD write speed is released: the write speed escapes the speed limit of the Flash memory while the write operations to the Flash memory are reduced to the greatest extent.
Super cache trad 53 internally adopts a two-level management mode: level 1 is group-associative mode 531; level 2 is fully-associative mode 532. The default space allocation is 128MBytes and 384MBytes respectively. A detailed description follows with reference to FIGS. 6 and 5.

Referring to FIG. 6, the level-1 group-associative 531 mode groups the cache area and partitions the logical units of the flash media module into zones, with the space in a group equal to the space in a zone. Groups 1 through 256 map zones 1 through 256; then zones 257 through 512; and so on, through zones 1M-256 to 1M. The SB area numbers within a group and the SB area numbers within the correspondingly mapped zone need not actually correspond one to one; drifting is allowed. The mapping form of this mode rests on the principle of temporal locality: if a storage item is accessed, that access item is likely to be accessed again soon.

One group: 1024 blocks, each block 512Bytes (SB); the group-associative region defaults to 256 groups (128MBytes = 256 groups * 1024 blocks * 512Bytes) (this cache space is variable according to the hardware structure and the dynamic application-space adjustment strategy).

One zone: 1024 blocks, each block 512Bytes; with the Flash memory set at a capacity of 512GBytes (this capacity is variable according to the specific application), there are 1M zones (512GBytes = 1M zones * 1024 blocks * 512Bytes).

Considering the efficiency of the algorithm and temporal locality, the replacement and intra-group lookup operations of the level-1 group-associative (531) cache adopt a double hashing (Hash) algorithm. Both lookup and replacement operations take LUN+LBA, the combination of the logical unit number (LUN) and the logical block address (LBA), as the key of the hash search operation.
Referring to FIG. 7, the level-2 fully-associative 532 mode divides the cache area into large-block regions, each large-block region composed of several small-block regions. Specifically: LB1 through LBn (n < 24K, because the Flag area and the data-link area occupy space) large blocks, each large block consisting of the 128 small blocks SB1 through SB128 plus a 16-byte Flag area before the block, each small block having a capacity of 512Bytes. The logical units of the Flash memory area are still divided into 1M SB small-block areas, numbered SB1 through SB1M. The mapping form of this mode rests on the principle of spatial locality: if a storage item is accessed, that item and adjacent items are also likely to be accessed soon. Specifically, one small block in a logical unit of the Flash memory area and its m adjacent small blocks (m less than 128) of data are taken out of the memory as one large block (LB) of data into one free large block (LB) of the level-2 fully-associative area; since adjacent small blocks are taken out, the prefetch method is embodied. At this time, a small block that has already been accessed is marked 1 in its corresponding Flag bit, so that in the future, when the large block is replaced out, the small blocks with access marks can be moved into the cache of the level-1 group-associative 531 area.

Lookup in the level-2 fully-associative 532 area adopts a balanced binary tree: corresponding to each small block (SB) of the Flash memory area's logical units there is a balanced-binary-tree node unit, and the LBA in each node unit is taken as the key to search the balanced binary tree.

Replacement in the level-2 fully-associative 532 area is implemented with the LRU replacement algorithm: the small blocks marked 1 in the Flag of the large block that needs to be replaced out are moved into the corresponding region of the level-1 group-associative 531 area. The large-block data replaced in receives the corresponding access calibration of a new Flag, and the LRU table is corrected.

The movement of the level-2 fully-associative 532 area is one-way; it can only move toward the level-1 group-associative 531 area. There are two kinds of movement: one is the movement, mentioned in replacement, of the small blocks whose Flag mark is 1; the second is the system's timed movement.

This level-1 and level-2 cooperative working mode embodies the principles of temporal and spatial locality to the greatest extent and resolves the inherent contradiction that the two cannot both be had in a single cache-management link. It lets the access speed of SSD data requests escape the speed limit of the Flash memory, greatly reduces swap-in/swap-out and query times, and raises the speed of the system.

The three space divisions of the super cache above, super cache mem 51, super cache write 52 and super cache trad 53, have been introduced. The three divisions, and the use of each internal caching strategy, let application devices, through the system (OS), architect the speed of I/O operations on the SSD of the present invention completely on the DRAM basis, so that the read/write speed escapes the bondage of the Flash memory. At the same time writes to the Flash memory are reduced to the greatest extent, which in turn lowers the dependence on algorithms such as wear leveling, thereby further raising the reliability and service life of the SSD; further still, this will have deeper significance for lowering cost with Flash of cheap new processes (such as 3LC).
The internal functional division of the super cache and the application of the multiple caching strategies can concretely adopt the flow shown in FIG. 8: super cache (default value) and super cache (empirical value) refer to the manner of dividing the super cache space region; the default-value manner is the factory-set manner of the SSD solid-state drive. The empirical-value manner re-divides the super cache space region according to empirical values obtained from a series of classification analyses and statistics of the data streams after the SSD has been applied by a specific user in the user's own application environment. The relevant parameters of the analysis and statistics, as global quantities, are influenced by the weighting values of the specific super cache (super cache mem, super cache write, super cache trad) policies. The details of data exchange and flow among the several super caches are not narrated here. The dual backup power management module adopted is a protective backup-power mode adopted for writing the data in the DRAM back to the Flash memory on shutdown or power loss. A specific application can take more precise measurement and calculation according to the power consumption; referring to FIG. 9, this implementation example uses a 2GB DRAM and 512GB Flash memory to illustrate one way of matching the power consumption.
Calculated for 18 DDR2 1Gb (128M*8bit) chips, the consumed current is 18*0.12 = 2.16A on the 1.8V supply, a power of roughly 3.5W; adding the consumption of the Flash and the control circuitry, the budget is a required power of 5W. If 3 minutes are needed, the energy requirement is 5*3*60 = 900 W-s; taking the capacitor voltage as 4.2V, the required capacitance is C = 900/4.2^2 = 51F; considering the efficiency of the voltage conversion and the drop across the isolating diodes, the capacitor chosen is 70F.
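Written out as formulas, that sizing (using the patent's own relation C = E/V^2 at the 4.2V capacitor voltage) is:

$$I = 18 \times 0.12\,\mathrm{A} = 2.16\,\mathrm{A} \quad\text{(on the 1.8 V rail)}$$

$$E = P\,t = 5\,\mathrm{W} \times 180\,\mathrm{s} = 900\,\mathrm{W\cdot s}$$

$$C = \frac{E}{V^{2}} = \frac{900}{4.2^{2}} \approx 51\,\mathrm{F}$$

with the chosen 70F providing margin for the conversion efficiency and the diode drop.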
The designed power input is 5VDC; the backup power source is a single lithium battery plus a gold capacitor bank; the main and backup supplies are isolated and switched through diodes. Considering the different supply requirements of the DRAM, the Flash and the main controller, the power conversion circuit generates working voltages of 3.3V, 1.8V and 1.5V respectively. The power conversion circuit adopts fairly common DC-DC circuits, of which the 1.8V current is the larger, built around the SP 51 with a load capacity up to 3A; the 1.5V supply is obtained from 1.8V through the LDO regulator SP6201, with a load capacity of 0.2A; the 3.3V supply is built around the SP6641, with a load capacity of 0.5A. (If DDR3 is adopted as the main component of the DRAM, consider increasing the load capacity of 1.5V and reducing the load of 1.8V.) The 5VDC power input passes through diode D4 to the VDD end of the downstream power conversion circuit, and at the same time charges the battery through the battery-charging circuit U3 (MCP73831); the battery's output reaches VDD through diode D5. Since VDD = Vin - 0.4V, about 4.6V, higher than the 4.2V battery voltage at the positive end of diode D5, the battery does not discharge in the normal working state. The voltage at the VDD point simultaneously charges the gold capacitor bank through D6 and R15, bringing the voltage on it to about 4.2V; the function of diode D6 is to drop a little voltage so that the gold capacitor bank is not overvolted (4.6V), and resistor R15 limits the current charging the gold capacitor bank.

When the input supply is disconnected, VDD will be supplied by the battery through D5 and the gold capacitor bank through D7. Since the battery's float voltage is 4.2V while its stable discharge voltage is 3.7V, the voltage of the gold capacitor bank will be higher than the battery voltage: first the gold capacitor bank supplies VDD through D7, and after the voltage drops to a certain value the battery supplies power at the same time.

The MOS-FET Q2 and its peripheral circuit constitute a power switch circuit: when the 5VDC input is valid, Q2 is turned on through D1 and R2, so that VDD gets power; at this time the 3.3V supply of the power conversion circuit also provides turn-on control to Q2 through D9 and R2. When the 5VDC supply is removed, the 3.3V supply continues to exist owing to the operation of the backup power system above, so Q2 stays on.

Power-loss detection is composed of Q1 and its peripheral circuit: when the 5VDC supply is normal, Q1 conducts, and its collector outputs a low-level signal to the CPU 33, which is the normal mode; when the 5VDC supply is lost, Q1 cuts off and thus outputs a high level, triggering the CPU 33 to carry out the operations of the shutdown/power-loss mode (chief among them the Flash write-back of the data). When the operations end, the CPU 33 outputs a high-level signal to shut off the 3.3V supply, so that Q2 also closes and the whole unit's power is off.
FIG. 10 expresses the principle logic and textual description of the above dual backup power management module embodiment further as a program block diagram, to clarify the working program flow of power protection and super-cache write-back to the Flash storage area after shutdown/power loss. Referring to FIG. 11, an example application of the super cache of the present invention in a larger-scale RAID is described, to show the commonality with the SSD application and the places where it is special, by way of contrasting and comparative explanation. As shown in FIG. 11, the realization of the inexpensive redundant disk array (RAID)-type massive high-speed storage system applies the multi-level, ultra-large-capacity Cache 91 formed by the DRAM & Flash composite to RAID management equipment; the manners of application include architecting it between the RAID array cards 92, 93 and the disk array 94, or merging it directly into the structure of the RAID array card, among several other forms. The efficient mechanism of its work is still to architect the I/O speed between servers, network storage devices and the like (interfaces can take forms such as SCSI/SATA2) and the disk array 94 on top of the DRAM & Flash composite high-speed Cache, thereby effectively raising the efficiency of disk storage and in turn removing the constraint of the low-speed backup storage devices on server and network speed. The multi-level ultra-large-capacity Cache 91 formed by the DRAM & Flash composite and the SSD solid-state drive described earlier are both designed for speed-up; the differences show in the following aspects:

First, there is a large difference in the applied DRAM capacity: in the SSD application it is at the level of a few GB, in the RAID application at the level of tens of GB (since position and space are unconstrained, the backup power system will allow the possibility of larger application levels, e.g., 512GB becomes possible).

Second, the Flash application can serve as the backup primary storage body on the SSD; on RAID it can serve as the next-level backup staging Cache below the DRAM-level Cache (not excluding its application as the disk array's target owing to price degradation or special application requirements).

Third, in structural design, because the RAID application has considerable flexibility of space and position, the design will consider plug-in or stacked modes for the DRAM and Flash, convenient for the flexibility of application expansion; the SSD, because of the limits of space and position, will consider customization and integration more.

Fourth, in internal management strategy, the RAID application will tend to be more complex and demand more intelligent self-adaptation than the SSD application.

The process above is the preferred implementation process of the present invention; ordinary variations and substitutions made by those skilled in the art on the basis of the present invention are included within the scope of protection of the present invention.

Claims

1. A method for constructing a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management, used to compose the storage system of a computer or server; the method comprising the steps of:

A. providing a flash media module (37) and an interface circuit module (31);

characterized by further comprising the steps of:

B. providing a larger-capacity dynamic random access memory DRAM module (35), part of whose storage space is used together with the flash media module (37) for data storage;

C. providing a hard disk controller (39) in which the DRAM participates in managing the flash media;

D. providing a dual backup power management module (38), for writing the data in the DRAM module (35) back to the flash media module (37) on shutdown or power loss, supplying a protective backup power source;

E. building a super cache (Cache) region with the larger-capacity DRAM module (35), dividing the storage space in partitioned and leveled form, while building efficient algorithms of multiple adaptively adjusted caching strategies to manage the region and each partition internally;

F. in the initialization stage after production of the solid-state storage disk SSD, testing the DRAM module (35) offline so as to construct a defect point area table;

G. mapping each memory logical address of the DRAM module (35), after combination with the defect point area table, onto a good physical address of the DRAM module (35);

H. performing online monitoring and scanning with hardware-implemented error-checking ECC correction, registering the addresses of unstable regions in the DRAM module (35) into the defect point area table in real time so that they participate in the new mapping management.
2. The method for constructing a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management according to claim 1, characterized in that: the partitioning in step E's "dividing the storage space in partitioned and leveled form" divides the cache (Cache) region into a memory area of the super cache, a write area of the super cache and a traditional Cache area of the super cache; the super-cache operating-system memory area buffers the memory cache that the host operating system opens on this solid-state storage hard disk for storing page files (Page files); the super-cache write area is used to temporarily store data to be written to this solid-state storage hard disk.

3. The method for constructing a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management according to claim 1, characterized in that: the leveled form in step E's "dividing the storage space in partitioned and leveled form" divides the super-cache operating-system memory area into a level-1 direct area and a level-2 compression area; the traditional Cache area of the super cache is divided into a level-1 group-associative area and a level-2 fully-associative area.

4. The method for constructing a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management according to claim 1, characterized in that: step E's "efficient algorithms of multiple adaptively adjusted caching strategies manage the region and each partition internally" means adopting a management mode and algorithm of level-1 direct connection with level-2 compression for the super-cache operating memory area; a management mode and algorithm of level-1 group association with level-2 full association for the traditional Cache area of the super cache; and a data-classification management mode for the write area of the super cache.

5. The method for constructing a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management according to claim 1, characterized in that: the division of storage space described in step E is dynamically adjusted according to empirical values from application statistics, i.e., the division of the cache (Cache) space originally made by default values is adjusted to a spatial division by the empirical values.

6. The method for constructing a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management according to claim 1, characterized in that: when grade-A DRAM parts without bad-point regions are used, steps F, G and H need not be carried out.

7. The method for constructing a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management according to claim 1, characterized in that: by building an ultra-large-capacity multi-level Cache system with still larger DRAM and Flash, and architecting the data I/O and switching of servers and network storage with disk arrays on top of this high-speed Cache, an inexpensive redundant disk array RAID-type massive high-speed storage system can be built, thereby enhancing RAID management capability and reducing cost.
8. 一种用较大容量 DRAM参与闪存介质管理构建的高速固态存储盘装置,用作计算机或服 务器的存储装置, 包括闪存介质模块(37 )和接口电路模块 (31), 其特征在于:
还包括较大容量的动态随机存储器 DRAM模块(35 )和 DRAM参与管理闪存介质的 硬盘控制器(39 ), 以及为 DRAM模块(35 )所需的两重备电管理模块( 38 );
所述 DRAM参与管理闪存介质的硬盘控制器(39 )分别通过地址 /数据总线(32、 33 ) 与 DRAM模块(35 )、 Flash模块( 37 )联接; DRAM参与管理闪存介质的硬盘控制 器(39 )通过复合总线与接口电路模块(31 )联接; 两重备电管理模块(38 ) 电联接 到 DRAM参与管理闪存介质的硬盘控制器(39 )。
9. 按照权利要求 8所述的用较大容量 DRAM参与闪存介质管理构建的高速固态存储盘装 置, 其特征在于: 所述 DRAM参与管理闪存介质的硬盘控制器(39 )包括 CPU -程序存 储器(391), DRAM管理器 -超级 Cache策略管理 -DMA通道(392), m个闪存介质通道控制 器( 394 ), n个 DRAM管理区块及 ECC校验通道片选 DM) ; CPU -程序存储器(391)通过复合总线连接 DRAM管理器 -超级 Cache策略管理 -DMA 通道(392), 同时用控制总线联接闪存介质通道控制器 ( 394 ); 闪质介质通道控制器 ( 394 ) 与闪存介质模块( 37 )通过数据总线 ( 33 )联接; DRAM管理器 -超级 Cache策 略管理 -DMA通道(392)用地址 /数据总线 Π与 DRAM管理区块及 ECC校验通道片选(393) 联接; DRAM管理区块及 ECC校验通道片选(393)通过地址 /数据总线 ( 32 )联接到 DRAM 模块( 35 ) 上。
10. 按照权利要求 8所述的种用较大容量 DRAM参与闪存介质管理构建的高速固态存储盘装 置, 其特征在于: 所述的两重备电管理模块(38 ) 的工作方式包括电容式储电和电池 供电两重结合的供电方式; 当该态存储盘装置在其所属的主机正常工作状态时, 所述 两重备电管理模块(38 ), .处于充电状态和满电保护状态; 当计算机关机或者掉电时, 该两重备电管理模块(38 )向所述固态存储盘装置供电, 并由信号线触发 DRAM参与管 理闪存介质的硬盘控制器(39 )完成对 DRAM模块(35 )中的超级 Cache区内有回写标 志置位的数据回写到闪存介质模块(37 ) 中。
11. 按照权利要求 8或 10所述的用较大容量 DRAM参与闪存介质管理构建的高速固态存储 盘装置, 其特征在于: 所述两重备电管理模块(38 ) 的电路中, 两个备用电源无主次 之分, 根据使用时的电压浮动供电; 当所述两路备电中一路失效时, 另一路可独立满 足该高速固态存储盘装置中超级 Cache最大限度完成回写的电量需求及备电报警提示 所需电量。
12. 按照权利要求 10所述用较大容量 DRAM参与闪存介质管理构建高速固态存储盘的装 置, 其特征在于: 两重备电管理模块的电路中, 包括用金电容组和锂电池的组合方式 以提高供电的安全性和可靠性。
13. The high-speed solid-state storage disk apparatus built with larger-capacity DRAM participating in flash media management according to claim 8 or 10, characterized in that: the battery in the circuit of said dual backup-power management module (38) is replaceable.
14. The high-speed solid-state storage disk apparatus built with larger-capacity DRAM participating in flash media management according to claim 8, characterized in that: the disk interfaces usable by said interface circuit module (31) include SATA II, SATA III, e-SATA, PATA, PCI, PCI-E, USB 2.0 and USB 3.0.
PCT/CN2009/001379 2008-12-12 2009-12-07 Method and apparatus for building a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management WO2010066098A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2008102183213A CN101552032B (zh) 2008-12-12 2008-12-12 Method and apparatus for building a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management
CN200810218321.3 2008-12-12

Publications (1)

Publication Number Publication Date
WO2010066098A1 (zh)

Family

ID=41156223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/001379 WO2010066098A1 (zh) 2008-12-12 2009-12-07 Method and apparatus for building a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management

Country Status (2)

Country Link
CN (1) CN101552032B (zh)
WO (1) WO2010066098A1 (zh)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101552032B (zh) 2008-12-12 2012-01-18 深圳市晶凯电子技术有限公司 Method and apparatus for building a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management
CN102043728B (zh) 2009-10-23 2012-07-04 慧荣科技股份有限公司 Method for improving flash memory access performance and related memory device
US8495471B2 (en) * 2009-11-30 2013-07-23 International Business Machines Corporation Solid-state storage system with parallel access of multiple flash/PCM devices
CN102110034A (zh) * 2009-12-28 2011-06-29 北京安码科技有限公司 Hard-disk storage method and apparatus combining a battery and DRAM
CN102376342A (zh) * 2010-08-18 2012-03-14 宇瞻科技股份有限公司 Solid-state disk module stacking structure
CN102447604B (zh) * 2010-09-30 2016-01-27 迈普通信技术股份有限公司 Routing table information storage method and routing device
TWI451435B (zh) 2010-10-08 2014-09-01 Phison Electronics Corp Non-volatile memory storage device, memory controller and data storage method
CN102456404A (zh) * 2010-10-21 2012-05-16 群联电子股份有限公司 Non-volatile memory storage device, memory controller and data storage method
CN102541458B (zh) * 2010-12-17 2015-11-25 西安奇维科技股份有限公司 Method for increasing the data write speed of an electronic hard disk
CN102097133B (zh) * 2010-12-31 2012-11-21 中国人民解放军装备指挥技术学院 Reliability test system and test method for a mass storage system
CN102779548A (zh) * 2011-05-09 2012-11-14 深圳市晶凯电子技术有限公司 Solid-state storage apparatus using flash media as storage bodies under tiered management, and method of building the same
CN102567213B (zh) * 2011-11-30 2014-09-24 华中科技大学 Write-leveling method for phase-change memory
US9811414B2 (en) * 2012-07-25 2017-11-07 Silicon Motion Inc. Method for managing data stored in flash memory and associated memory device and controller
CN103970684A (zh) * 2013-02-04 2014-08-06 联想(北京)有限公司 Data storage method and electronic device
WO2015024532A1 (zh) * 2013-08-23 2015-02-26 上海芯豪微电子有限公司 High-performance instruction cache system and method
CN104424124B (zh) * 2013-09-10 2018-07-06 联想(北京)有限公司 Memory device, electronic apparatus and method for controlling the memory device
CN104679589A (zh) * 2013-11-27 2015-06-03 中兴通讯股份有限公司 Method and apparatus for balanced adjustment of system resources
CN103729302A (zh) * 2014-01-02 2014-04-16 厦门雅迅网络股份有限公司 Method for avoiding frequent reads and writes to a flash partition
CN105988720B (zh) * 2015-02-09 2019-07-02 ***通信集团浙江有限公司 Data storage apparatus and method
CN105607862A (zh) * 2015-08-05 2016-05-25 上海磁宇信息科技有限公司 Solid-state drive combining DRAM and MRAM with a backup power supply
US9990311B2 (en) * 2015-12-28 2018-06-05 Andes Technology Corporation Peripheral interface circuit
KR102469099B1 (ko) * 2016-03-24 2022-11-24 에스케이하이닉스 주식회사 Semiconductor system
CN108563403A (zh) * 2018-04-03 2018-09-21 北京公共交通控股(集团)有限公司 Data storage method and apparatus for public transit buses
CN109117301A (zh) * 2018-07-20 2019-01-01 江苏华存电子科技有限公司 Method for setting partitions in random-access memory with a configurable ECC-check function
CN110087011B (zh) * 2019-03-29 2020-12-01 南京航空航天大学 High-speed-camera-based vibration video acquisition and storage system for industrial equipment
CN110165847B (zh) 2019-06-11 2021-01-26 深圳市瑞达美磁业有限公司 Production method for radially anisotropic multi-pole solid magnets with waveforms of different widths
CN110780811B (zh) * 2019-09-19 2021-10-15 华为技术有限公司 Data protection method, apparatus and storage medium
CN112486852B (zh) * 2020-12-01 2024-05-14 合肥大唐存储科技有限公司 Solid-state drive and address mapping method therefor
CN117055822B (zh) * 2023-10-11 2024-02-06 苏州元脑智能科技有限公司 On-board backup power system for an NVMe SSD RAID card and control method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4049297B2 (ja) * 2001-06-11 2008-02-20 株式会社ルネサステクノロジ Semiconductor memory device
JP5138869B2 (ja) * 2002-11-28 2013-02-06 ルネサスエレクトロニクス株式会社 Memory module and memory system
CN101169971A (zh) * 2006-10-23 2008-04-30 北京锐科天智科技有限责任公司 Electronic hard disk
CN101211649B (zh) * 2006-12-27 2012-10-24 宇瞻科技股份有限公司 Dynamic random access memory module with a solid-state disk

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0619541A2 (en) * 1993-04-08 1994-10-12 Hitachi, Ltd. Flash memory control method and information processing system therewith
US20040186946A1 (en) * 2003-03-19 2004-09-23 Jinaeon Lee Flash file system
WO2008057557A2 (en) * 2006-11-06 2008-05-15 Rambus Inc. Memory system supporting nonvolatile physical memory
CN101552032A (zh) * 2008-12-12 2009-10-07 深圳市晶凯电子技术有限公司 Method and apparatus for building a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112486870A (zh) * 2020-11-16 2021-03-12 深圳宏芯宇电子股份有限公司 Computer system and computer system control method
CN113778344A (zh) * 2021-04-25 2021-12-10 联芸科技(杭州)有限公司 Solid-state drive and write operation method
US12045498B2 (en) 2021-04-25 2024-07-23 Maxio Technology (Hangzhou) Co., Ltd. Solid state drive and write operation method
CN113325321A (zh) * 2021-07-02 2021-08-31 阳光电源股份有限公司 Battery power-loss detection method for an energy storage system in float-charge state, and energy storage system
CN113325321B (zh) * 2021-07-02 2024-05-14 阳光电源股份有限公司 Battery power-loss detection method for an energy storage system in float-charge state, and energy storage system
CN115686372A (zh) * 2022-11-07 2023-02-03 武汉麓谷科技有限公司 Data management method based on the ZRWA function of a ZNS solid-state drive
CN115686372B (zh) * 2022-11-07 2023-07-25 武汉麓谷科技有限公司 Data management method based on the ZRWA function of a ZNS solid-state drive

Also Published As

Publication number Publication date
CN101552032B (zh) 2012-01-18
CN101552032A (zh) 2009-10-07

Similar Documents

Publication Publication Date Title
WO2010066098A1 (zh) Method and apparatus for building a high-speed solid-state storage disk with larger-capacity DRAM participating in flash media management
US9720616B2 (en) Data-retention controller/driver for stand-alone or hosted card reader, solid-state-drive (SSD), or super-enhanced-endurance SSD (SEED)
US8001317B2 (en) Data writing method for non-volatile memory and controller using the same
CN108121503B (zh) NAND flash address mapping and block management method
CN101425041B (zh) Optimized method for establishing a FAT file system on NAND flash memory
CN102012791B (zh) Flash-based PCIe data-storage board
US9235346B2 (en) Dynamic map pre-fetching for improved sequential reads of a solid-state media
Wu et al. GCaR: Garbage collection aware cache management with improved performance for flash-based SSDs
US20130073798A1 (en) Flash memory device and data management method
TWI405209B (zh) Data management method and flash storage system and controller using the same
US8195971B2 (en) Solid state disk and method of managing power supply thereof and terminal including the same
US20090089485A1 (en) Wear leveling method and controller using the same
US20100042774A1 (en) Block management method for flash memory, and storage system and controller using the same
CN101369451A (zh) Solid-state memory, computer system comprising the same, and method of operating it
TW201437807A (zh) Mapping information recording method, memory controller and memory storage device
US20100057979A1 (en) Data transmission method for flash memory and flash memory storage system and controller using the same
TWI498899B (zh) Data writing method, memory control circuit unit and memory storage device
KR101166803B1 (ko) Memory system including non-volatile memory and volatile memory, and processing method using the system
CN104699413A (zh) Data management method, memory storage device and memory control circuit unit
TW201945927A (zh) Data writing method, memory control circuit unit and memory storage device
WO2020029319A1 (zh) All-flash server
CN105607862A (zh) Solid-state drive combining DRAM and MRAM with a backup power supply
CN110321081B (zh) Flash read-cache method and system
US20210182192A1 (en) Storage device with enhanced time to ready performance
Liu et al. Hybrid SSD with PCM

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09831375

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09831375

Country of ref document: EP

Kind code of ref document: A1