CN110888600A - Buffer area management method for NAND flash memory - Google Patents

Buffer area management method for NAND flash memory Download PDF

Info

Publication number
CN110888600A
Authority
CN
China
Prior art keywords
data page
linked list
data
cold
page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911107839.4A
Other languages
Chinese (zh)
Other versions
CN110888600B (en)
Inventor
伍卫国
宫继伟
解超
聂世强
张驰
张晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN201911107839.4A
Publication of CN110888600A
Application granted
Publication of CN110888600B
Legal status: Active (current)
Anticipated expiration

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a buffer management method for NAND flash memory, which divides a database buffer area into a cold clean linked list, a cold dirty linked list and a mixed linked list, with data pages numbered according to the overall access order. Logically, the buffer is regarded as a single linked list organized by access order, and a time window concept is proposed to cover the data pages accessed in the most recent period. When buffer replacement occurs, the method checks, in the priority order of the cold clean linked list, the cold dirty linked list and the mixed linked list, whether the data page pointed to by the tail of the list is within the time window; if so, the next linked list is examined; if not, that page is replaced directly. The invention achieves a higher hit rate and reduces the number of flash write operations, thereby obtaining a greater cache benefit and improving the overall performance of the storage system.

Description

Buffer area management method for NAND flash memory
Technical Field
The invention belongs to the technical field of caching for computer storage devices, and particularly relates to a buffer management method for a NAND flash memory.
Background
A buffer replacement algorithm optimizes I/O operations and reduces the number of accesses to a disk, and is widely used in operating systems, databases and network servers. The cache module of a computer storage device can optimize the I/O sequence and reduce the number of accesses to the storage device; a good buffer management method can achieve a higher hit rate and improve the performance of the storage system as a whole.
Flash memory is a non-volatile storage device, and Solid State Disks (SSDs) are widely used in mobile devices and personal computers owing to their small size, light weight, shock resistance, high speed and high reliability. Unlike a conventional magnetic disk, NAND flash memory is a storage medium that cannot be updated in place and is erased in bulk: it has three basic operations, namely read, write and erase, where read and write operations are performed in units of data pages and the erase operation is performed in units of blocks, resulting in asymmetric read and write costs for NAND flash memory. Therefore, a flash-oriented buffer replacement algorithm must be designed around the read-write characteristics of flash memory, and optimal cache benefit can be obtained only by taking the asymmetry of read and write costs into account.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a buffer management method for a NAND flash memory that overcomes the above shortcomings in the prior art: it exploits the principle of locality to maintain a high hit rate while reducing dirty-page write-back operations as much as possible, thereby striking a balance between read-write cost and cache gain and obtaining the maximum cache benefit.
The invention adopts the following technical scheme:
A buffer management method for NAND flash memory divides the flash buffer into three logical storage areas in units of pages and organizes and manages each with a linked list structure; the linked lists corresponding to the three storage areas are a cold clean linked list, a cold dirty linked list and a mixed linked list, and the data pages in the three linked lists use their access time mark values as logical numbers. In the overall logic, the organization of the buffer is treated as a single LRU linked list ordered by logical number. On this basis, a time window concept is proposed, where the time window represents the span from the current system access time back to some earlier moment and covers the data pages closest to the current access time. When buffer replacement occurs, the method checks, in the priority order of the cold clean linked list, the cold dirty linked list and the mixed linked list, whether the data page pointed to by the tail of the list is within the time window, and selects a data page that is not within the time window for replacement.
Specifically, the method comprises the following steps:
S1, setting a system counter that represents the number of data page accesses (i.e., the system access time), with an initial value of 0;
S2, organizing the cold clean linked list, the cold dirty linked list and the mixed linked list of the flash buffer according to the least-recently-used principle, and setting the size of the buffer's time window, which represents the range of data pages accessed most recently;
S3, when an upper-layer application issues a read or write request for solid state disk data, first adding 1 to the system counter value, representing that the number of data page accesses (the system access time) increases by 1, and checking whether the requested data exists in the flash buffer; if it exists, adjusting the position of the data page, updating its access time, cold/hot mark and dirty mark, returning the data to the upper-layer application, and ending; if the requested data does not exist in the flash buffer, go to step S4;
S4, checking whether there is free space in the buffer to accommodate a new data page; if so, executing step S5; if the buffer has no free space, go to step S6;
S5, reading the requested data from the solid state disk, inserting the data page at the MRU end of the cold clean linked list or the MRU end of the cold dirty linked list according to the current request type, setting the corresponding dirty mark of the data page to 0 or 1, setting the cold/hot identification value of the data page to 1 to indicate a cold data page, assigning the current system counter value to the access time mark of the data page, then returning the data to the upper-layer application, and ending;
S6, judging, in the priority order of the cold clean linked list, the cold dirty linked list and the mixed linked list, whether the data page at the LRU end of each list is within the time window, and selecting a data page that is not within the time window as the replacement victim; first judging whether the cold clean linked list is empty: if the cold clean linked list is not empty and the data page at its LRU end is not within the time window, directly removing that data page and then executing step S5; otherwise, judging whether the cold dirty linked list is empty, and if it is not empty, writing the data page at the LRU end of the cold dirty linked list back to the flash memory and then executing step S5; otherwise, go to step S7;
S7, selecting the data page at the LRU end of the mixed linked list: if it is a clean page, directly removing it and executing step S5; if it is a dirty page, writing it back to the flash memory and then executing step S5.
Further, in step S1, the data pages in the buffer are logically organized into a logical LRU linked list sorted by access time mark value, and the region at the MRU end of this logical list is covered by a time window, which represents how many fresh data pages the list holds.
Further, 10% of the number of data pages the buffer can hold is used as the time window size, meaning that the most recently accessed 10% of data pages in the buffer are regarded as fresh pages.
Further, in step S2, the cold clean linked list stores data pages that have been accessed once by a read operation; the cold dirty linked list stores data pages that have been accessed once by a write operation or newly written data pages; the mixed linked list stores data pages that have been hit in the buffer, i.e., accessed two or more times. Data pages in the flash buffer carry an access time mark, a dirty mark and a cold/hot mark: a modified or newly written data page is marked as a dirty page; a data page entering the buffer for the first time is marked as a cold data page, and a data page hit in the buffer is marked as a hot data page.
Further, in step S3, a reset operation is performed on the system counter and on the access time mark values of all data pages in the buffer, with the following specific steps:
S301, comparing the access time mark values of the data pages at the LRU ends of the cold clean linked list, the cold dirty linked list and the mixed linked list, and finding the minimum access time mark value, denoted min_t;
S302, subtracting the minimum access time mark value min_t from the access time mark value of each data page in the buffer to obtain that data page's current access time mark value;
S303, taking the counter value minus the minimum access time mark value min_t as the current system counter value.
Further, step S5 is specifically:
if the requested data page is in the buffer, the data page is removed from its current position and inserted at the MRU end of the mixed linked list; at the same time, the cold/hot mark of the data page is set to 0, indicating a hot data page, and a bitwise OR is performed on the dirty mark of the data page with 0 or 1 according to the request type (0 for a read, 1 for a write), so that the dirty mark reflects the superposition of the previous state and the current state; the current system counter value is assigned to the access time mark of the data page, the data is returned to the upper-layer application, and the operation ends.
Further, in step S6, whether a data page P is within the time window is determined according to the following formula:
Count_t - P_t <= w_size
where Count_t represents the current system counter value, P_t represents the access time mark of data page P, and w_size represents the time window size.
Furthermore, when the time window formula is satisfied, data page P is within the time window; if it is not satisfied, data page P is not within the time window.
When the data page at the LRU end of a linked list is within the time window, all data pages in that list are within the time window: the data page at the MRU end is the most recently accessed and has the largest access time value, and the access time mark values of the data pages decrease monotonically from the MRU end to the LRU end.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to a buffer area management method facing NAND flash memory, which divides a buffer area into different logic storage areas and organizes and manages the areas by using a linked list structure respectively, and carries out cold and dirty marking on data pages; meanwhile, the concept of a time window is set to replace the middle data page of the non-time window. The strategy can effectively avoid the performance jitter problem caused by the replacement of the data page which is just accessed, overcome the problem of searching expense caused by the prior replacement algorithm that when the LRU linked list length threshold is used for partitioning, the clean data page is preferentially replaced, and meanwhile, the strategy using the time window can more truly reflect the life length of the data page in the buffer area than the linked list length threshold, thereby avoiding the cold data page from occupying the buffer area for a long time and ensuring the higher hit rate of the buffer area.
Furthermore, the invention uses a system counter to record the number of data page accesses and stamps each data page in the buffer with its access time; only data pages outside the time window are selected for replacement, which guarantees that "stale" data pages are the ones evicted.
Further, data pages are marked as cold/hot and clean/dirty and their states are recorded, so that the data pages with the lowest replacement cost are evicted preferentially.
Furthermore, priorities are assigned to the three linked lists of the buffer so that cold pages and clean pages are evicted first, which preserves the buffer hit rate and reduces the replacement cost.
Furthermore, the size of the buffer's time window is set as a fixed proportion; it represents the number of data pages accessed in the most recent period and protects recently accessed data pages from being replaced.
Furthermore, whether the current candidate replacement page is a "fresh" data page is judged by checking whether the difference between the current system counter value and the access time mark of the cold-area candidate page is smaller than the time window size; this measures the time distance between the page's last access and the present, reflects the "freshness" of the data page more truthfully, and thus ensures that a genuinely cold page is the one replaced.
In summary, on the premise of accounting for the asymmetric read-write cost of NAND flash memory in a cache system, the invention fully considers the temporal characteristics of data page accesses and organizes the data pages of the flash buffer with different linked lists, so that the cache replacement strategy adapts to different application scenarios and data loads, achieves a higher hit rate and reduces the number of flash write operations, thereby obtaining greater cache benefit and improving the overall performance of the storage system.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a logical diagram of buffer data organization according to the present invention;
FIG. 2 is a diagram of the actual data organization of the buffer according to the present invention;
FIG. 3 is a flowchart of an overall implementation of buffer replacement according to the present invention;
FIG. 4 is a flowchart of a detailed implementation of buffer replacement according to the present invention.
Detailed Description
Referring to fig. 1, a buffer management method for a NAND flash memory includes the following steps:
s1, setting a system counter, and taking the value of the system counter as the access time mark value of the accessed data page in the buffer area;
In the overall logic, the organization of the buffer is regarded as an LRU linked list ordered by the logical numbers of the data pages: the data page at the MRU end of the logical LRU list is the page currently being accessed by the system and has the largest access time mark value, equal to the current system counter value, and the access time mark values of the data pages decrease in sequence from the MRU end to the LRU end.
On this basis, a time window concept is proposed: a window of a given size, measured from the MRU end of the logical LRU list, identifies the data pages accessed in the most recent period. The window size can be set dynamically; the larger the value, the more data pages it covers. Based on experimental comparison, 10% of the number of data pages the buffer can hold is chosen as the time window size.
S2, referring to fig. 2, the invention divides the buffer into three logical storage areas and, in the actual implementation, organizes and manages the data pages of the three areas with three linked lists: a cold clean linked list (CCL), a cold dirty linked list (CDL) and a mixed linked list (ML), each organized according to the least-recently-used principle. Within each list the access time mark values of the data pages are therefore ordered: the data page at the MRU end of each list has the largest access time value and is the most recently accessed page in that list, while the data page at the LRU end has the smallest access time value and has resided in the list the longest. The cold clean linked list (CCL) stores data pages that have been accessed once by a read operation; the cold dirty linked list (CDL) stores data pages that have been accessed once by a write operation or newly written data pages; the mixed linked list (ML) stores data pages that have been hit in the buffer, i.e., accessed two or more times. Data pages in the flash buffer carry an access time mark, a dirty mark and a cold/hot mark: a modified or newly written data page is marked as a dirty page; a data page that has just entered the buffer is marked as a cold data page, and a data page hit in the buffer is marked as a hot data page.
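As a concrete illustration of this organization, the following Python sketch models each cached page with an access time mark, a dirty mark and a cold/hot mark, and keeps the three lists in LRU order; the names PageMeta, FlashBuffer and window_ratio are illustrative assumptions of the sketch, not terms taken from the patent.

    from collections import OrderedDict
    from dataclasses import dataclass

    @dataclass
    class PageMeta:
        timestamp: int   # system counter value at the last access (access time mark)
        dirty: int       # 0 = clean, 1 = modified or newly written
        cold: int        # 1 = cold (seen once), 0 = hot (hit in the buffer)

    class FlashBuffer:
        def __init__(self, capacity, window_ratio=0.10):
            self.capacity = capacity
            self.w_size = max(1, int(capacity * window_ratio))  # time window size
            self.counter = 0              # system access counter
            # OrderedDict keeps insertion order: the first entry is the LRU end,
            # the last entry is the MRU end of each logical list.
            self.ccl = OrderedDict()      # cold clean linked list (CCL)
            self.cdl = OrderedDict()      # cold dirty linked list (CDL)
            self.ml = OrderedDict()       # mixed linked list (ML)

        def __len__(self):
            return len(self.ccl) + len(self.cdl) + len(self.ml)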
S3, referring to fig. 3, when an upper-layer application issues a read or write request for solid state disk data, first add 1 to the system counter value, representing that the number of data page accesses (the system access time) increases by 1, and then check whether the requested data is in the flash buffer. If the requested data exists in the flash buffer, the data page is first removed from its current list position and inserted at the MRU end of the mixed linked list; its cold/hot mark is then set to 0, indicating a hot data page; at the same time, according to the request type, a bitwise OR is performed on the dirty mark of the data page with 0 (read) or 1 (write), so that the dirty mark reflects the superposition of the previous state and the current state; the current system counter value is assigned to the access time mark of the data page, the data page is returned to the upper-layer application, and the operation ends. If the requested data is not in the buffer, go to step S4;
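A minimal sketch of this hit path, continuing the FlashBuffer sketch above (on_hit is an assumed helper name): the page is moved to the MRU end of the mixed list, marked hot, its dirty mark is OR-ed with the request type, and its access time mark is refreshed.

    def on_hit(buf, page_id, is_write):
        # Remove the page from whichever list currently holds it.
        meta = (buf.ccl.pop(page_id, None)
                or buf.cdl.pop(page_id, None)
                or buf.ml.pop(page_id))
        meta.cold = 0                        # hit in the buffer -> hot page
        meta.dirty |= 1 if is_write else 0   # OR with 0 (read) or 1 (write)
        meta.timestamp = buf.counter         # counter was incremented by the caller
        buf.ml[page_id] = meta               # re-insert at the MRU end of the mixed list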
The system counter is a counting variable that records every data page access; its initial value is 0. Each time the system accesses a data page, the counter is incremented by 1 and its value is assigned to the access time mark of the accessed page. After a period of data accesses the counter value risks overflowing. Therefore, once the counter reaches a certain size, a reset operation must be performed on the system counter value and on the access time mark values of all data pages in the buffer, with the following specific steps:
S301, comparing the access time mark values of the data pages at the LRU ends of the cold clean linked list, the cold dirty linked list and the mixed linked list, and finding the minimum access time mark value, denoted min_t;
S302, subtracting the minimum access time mark value min_t from the access time mark value of each data page in the buffer to obtain that data page's current access time mark value;
S303, taking the counter value minus the minimum access time mark value min_t as the current system counter value;
This solves the counter overflow problem while preserving both the accuracy of the time window test and the relative order of the data pages' access times.
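The reset can be sketched as follows, continuing the same FlashBuffer structure (rebase_counter is an assumed helper name); subtracting min_t from every access time mark and from the counter preserves both the relative access order and the time window test.

    def rebase_counter(buf):
        # S301: the smallest access time mark is at the LRU end of one of the lists.
        lru_stamps = [next(iter(lst.values())).timestamp
                      for lst in (buf.ccl, buf.cdl, buf.ml) if lst]
        if not lru_stamps:
            return
        min_t = min(lru_stamps)
        # S302: shift every page's access time mark down by min_t.
        for lst in (buf.ccl, buf.cdl, buf.ml):
            for meta in lst.values():
                meta.timestamp -= min_t
        # S303: shift the counter itself by the same amount.
        buf.counter -= min_t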
S4, checking whether there is free space in the buffer to accommodate a new data page; if so, executing step S5; if the buffer has no free space, go to step S6;
S5, reading the requested data from the SSD and inserting the data page at the MRU end of the cold clean linked list or the MRU end of the cold dirty linked list, depending on whether the current request is a read or a write; at the same time, setting the corresponding dirty mark of the data page to 0 or 1, indicating a clean page or a dirty page; setting the cold/hot identification value of the data page to 1, indicating a cold data page; assigning the current system counter value to the access time mark of the data page; then returning the data to the upper-layer application, and ending;
S6, judging, in the priority order of the cold clean linked list, the cold dirty linked list and the mixed linked list, whether the data page at the LRU end of each list is within the time window, and selecting a data page that is not within the time window as the replacement victim. First judging whether the cold clean linked list is empty: if it is not empty and the data page at its LRU end is not within the time window, directly removing that data page and then executing step S5; otherwise, judging whether the cold dirty linked list is empty, and if it is not empty, writing the data page at its LRU end back to the flash memory and then executing step S5; otherwise, go to step S7;
the formula for determining whether a data page P is within the time window is as follows:
Count_t - P_t <= w_size
where Count_t represents the current system counter value, P_t represents the access time mark of data page P, and w_size represents the time window size.
If the formula is satisfied, data page P is within the time window; if it is not satisfied, data page P is not within the time window.
When the data page at the LRU end of a linked list is within the time window, all data pages in that list are within the time window, because each list is organized according to the least-recently-used principle, i.e., as an LRU list: the data page at the MRU end is the most recently accessed and has a larger access time value than the data page at the LRU end, and the access time mark values decrease monotonically from the MRU end to the LRU end.
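The membership test itself is a single comparison; a sketch continuing the structures above (in_window is an assumed helper name):

    def in_window(buf, meta):
        # A page is "fresh" if it was accessed within the last w_size accesses;
        # because each list is LRU-ordered, testing only its LRU-end page decides
        # whether the whole list lies inside the window.
        return buf.counter - meta.timestamp <= buf.w_size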
S7, selecting the data page at the LRU end of the mixed linked list: if it is a clean page, directly removing it and executing step S5; if it is a dirty page, meaning it has been modified or newly written, writing it back to the flash memory and then executing step S5.
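Putting steps S6 and S7 together, the following sketch selects a victim in the priority order CCL, CDL, ML, following the claim-1 reading in which the time window test is applied to each cold list in turn; write_back stands in for the surrounding flash translation layer, and the final fallback branch is a defensive assumption not spelled out in the text.

    def evict_one(buf, write_back):
        # S6: try the two cold lists first, in priority order, skipping a list
        # whose LRU-end page is still inside the time window.
        for lst in (buf.ccl, buf.cdl):
            if lst:
                page_id, meta = next(iter(lst.items()))   # peek at the LRU end
                if not in_window(buf, meta):
                    lst.pop(page_id)
                    if meta.dirty:
                        write_back(page_id)               # dirty victim: flush first
                    return page_id
        # S7: fall back to the mixed list; if it is empty, evict from whichever
        # cold list is non-empty (a defensive choice the text does not spell out).
        for lst in (buf.ml, buf.cdl, buf.ccl):
            if lst:
                page_id, meta = lst.popitem(last=False)   # pop the LRU-end entry
                if meta.dirty:
                    write_back(page_id)
                return page_id
        return None                                       # buffer was empty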
The method provided by the invention effectively balances hit rate against read-write cost, achieves a higher hit rate and better overall read-write performance under different workloads, effectively reduces the read-write response time of a flash solid state disk, and is suitable for scenarios with strict real-time requirements. During cache replacement, only the data page at the LRU end of each list needs to be tested against the replacement condition in priority order, so the replacement procedure is very fast; at the same time, the priority ordering of clean and dirty pages together with the time window constraint guarantees a high hit rate and a small number of SSD writes, delays the write-back of dirty pages in the buffer, and improves the overall read-write performance of the SSD storage device, allowing the method to adapt to a variety of data loads with good robustness.
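For completeness, a top-level request handler tying the previous sketches together (access, read_page and write_back are assumed names; only the metadata management is sketched and data payloads are omitted):

    def access(buf, page_id, is_write, read_page, write_back):
        buf.counter += 1                                  # S3: one more access
        if page_id in buf.ccl or page_id in buf.cdl or page_id in buf.ml:
            on_hit(buf, page_id, is_write)                # hit: promote to the mixed list
            return
        if len(buf) >= buf.capacity:                      # S4: no free space?
            evict_one(buf, write_back)                    # S6/S7: free one slot
        read_page(page_id)                                # S5: fetch the page from flash
        meta = PageMeta(timestamp=buf.counter,
                        dirty=1 if is_write else 0,       # write miss -> cold dirty list
                        cold=1)                           # first time in the buffer
        target = buf.cdl if is_write else buf.ccl
        target[page_id] = meta                            # insert at the MRU end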
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (9)

1. A buffer management method for NAND flash memory, characterized in that the flash buffer is divided into three logical storage areas in units of pages, each organized and managed with a linked list structure; the linked lists corresponding to the three storage areas are a cold clean linked list, a cold dirty linked list and a mixed linked list, and the data pages in the three linked lists use their access time mark values as logical numbers; in the overall logic, the organization of the buffer is treated as a single LRU linked list ordered by logical number; on this basis, a time window concept is proposed, where the time window represents the span from the current system access time back to some earlier moment and covers the data pages closest to the current access time; when buffer replacement occurs, the method checks, in the priority order of the cold clean linked list, the cold dirty linked list and the mixed linked list, whether the data page pointed to by the tail of the list is within the time window, and selects a data page that is not within the time window for replacement.
2. The NAND-flash-oriented buffer management method of claim 1, comprising the steps of:
S1, setting a system counter that represents the number of data page accesses (i.e., the system access time), with an initial value of 0;
S2, organizing the cold clean linked list, the cold dirty linked list and the mixed linked list of the flash buffer according to the least-recently-used principle, and setting the size of the buffer's time window, which represents the range of data pages accessed most recently;
S3, when an upper-layer application issues a read or write request for solid state disk data, first adding 1 to the system counter value, representing that the number of data page accesses (the system access time) increases by 1, and checking whether the requested data exists in the flash buffer; if it exists, adjusting the position of the data page, updating its access time, cold/hot mark and dirty mark, returning the data to the upper-layer application, and ending; if the requested data does not exist in the flash buffer, go to step S4;
S4, checking whether there is free space in the buffer to accommodate a new data page; if so, executing step S5; if the buffer has no free space, go to step S6;
S5, reading the requested data from the solid state disk, inserting the data page at the MRU end of the cold clean linked list or the MRU end of the cold dirty linked list according to the current request type, setting the corresponding dirty mark of the data page to 0 or 1, setting the cold/hot identification value of the data page to 1 to indicate a cold data page, assigning the current system counter value to the access time mark of the data page, then returning the data to the upper-layer application, and ending;
S6, judging, in the priority order of the cold clean linked list, the cold dirty linked list and the mixed linked list, whether the data page at the LRU end of each list is within the time window, and selecting a data page that is not within the time window as the replacement victim; first judging whether the cold clean linked list is empty: if the cold clean linked list is not empty and the data page at its LRU end is not within the time window, directly removing that data page and then executing step S5; otherwise, judging whether the cold dirty linked list is empty, and if it is not empty, writing the data page at the LRU end of the cold dirty linked list back to the flash memory and then executing step S5; otherwise, go to step S7;
S7, selecting the data page at the LRU end of the mixed linked list: if it is a clean page, directly removing it and executing step S5; if it is a dirty page, writing it back to the flash memory and then executing step S5.
3. The buffer management method for NAND flash memory according to claim 2, wherein in step S1, the data pages in the buffer are logically organized into a logical LRU linked list sorted by access time mark value, and the region at the MRU end of this logical list is covered by a time window, which represents how many fresh data pages the list holds.
4. The method of claim 3, wherein 10% of the number of data pages the buffer can hold is taken as the time window size, indicating that the most recently accessed 10% of data pages in the buffer are regarded as fresh pages.
5. The method according to claim 2, wherein in step S2, the cold clean linked list stores data pages that have been accessed once by a read operation; the cold dirty linked list stores data pages that have been accessed once by a write operation or newly written data pages; the mixed linked list stores data pages that have been hit in the buffer, i.e., accessed two or more times; data pages in the flash buffer carry an access time mark, a dirty mark and a cold/hot mark; a modified or newly written data page is marked as a dirty page; a data page entering the buffer for the first time is marked as a cold data page, and a data page hit in the buffer is marked as a hot data page.
6. The method of claim 2, wherein in step S3, a reset operation is performed on the system counter and on the access time mark values of all data pages in the buffer, with the following specific steps:
S301, comparing the access time mark values of the data pages at the LRU ends of the cold clean linked list, the cold dirty linked list and the mixed linked list, and finding the minimum access time mark value, denoted min_t;
S302, subtracting the minimum access time mark value min_t from the access time mark value of each data page in the buffer to obtain that data page's current access time mark value;
S303, taking the counter value minus the minimum access time mark value min_t as the current system counter value.
7. The buffer management method for NAND flash memory according to claim 2, wherein step S5 is specifically:
if the requested data page is in the buffer, the data page is removed from its current position and inserted at the MRU end of the mixed linked list; at the same time, the cold/hot mark of the data page is set to 0, indicating a hot data page, and a bitwise OR is performed on the dirty mark of the data page with 0 or 1 according to the request type (0 for a read, 1 for a write), so that the dirty mark reflects the superposition of the previous state and the current state; the current system counter value is assigned to the access time mark of the data page, the data is returned to the upper-layer application, and the operation ends.
8. The method of claim 2, wherein in step S6, whether a data page P is within the time window is determined according to the following formula:
Count_t - P_t <= w_size
where Count_t represents the current system counter value, P_t represents the access time mark of data page P, and w_size represents the time window size.
9. The method of claim 8, wherein data page P is within the time window when the time window formula is satisfied, and data page P is not within the time window if the formula is not satisfied;
when the data page at the LRU end of a linked list is within the time window, all data pages in that list are within the time window: the data page at the MRU end is the most recently accessed, its access time value is larger than that of the data page at the LRU end, and the access time mark values of the data pages decrease monotonically from the MRU end to the LRU end.
CN201911107839.4A 2019-11-13 2019-11-13 Buffer area management method for NAND flash memory Active CN110888600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911107839.4A CN110888600B (en) 2019-11-13 2019-11-13 Buffer area management method for NAND flash memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911107839.4A CN110888600B (en) 2019-11-13 2019-11-13 Buffer area management method for NAND flash memory

Publications (2)

Publication Number Publication Date
CN110888600A true CN110888600A (en) 2020-03-17
CN110888600B CN110888600B (en) 2021-02-12

Family

ID=69747433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911107839.4A Active CN110888600B (en) 2019-11-13 2019-11-13 Buffer area management method for NAND flash memory

Country Status (1)

Country Link
CN (1) CN110888600B (en)



Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470500A (en) * 2007-12-27 2009-07-01 株式会社东芝 Information processing apparatus and nonvolatile semiconductor storage device
KR20100115090A (en) * 2009-04-17 2010-10-27 서울대학교산학협력단 Buffer-aware garbage collection technique for nand flash memory-based storage systems
CN102156753A (en) * 2011-04-29 2011-08-17 中国人民解放军国防科学技术大学 Data page caching method for file system of solid-state hard disc
US20170110196A1 (en) * 2013-03-13 2017-04-20 Winbond Electronics Corp. Nand flash memory
CN103984736A (en) * 2014-05-21 2014-08-13 西安交通大学 Efficient buffer management method for NAND flash memory database system
CN105930282A (en) * 2016-04-14 2016-09-07 北京时代民芯科技有限公司 Data cache method used in NAND FLASH
CN107391398A (en) * 2016-05-16 2017-11-24 中国科学院微电子研究所 Management method and system for flash memory cache region
CN108694134A (en) * 2017-04-10 2018-10-23 三星电子株式会社 The technology of read-modify-write expense is reduced in mixing DRAM/NAND memories
CN107688443A (en) * 2017-09-18 2018-02-13 郑州云海信息技术有限公司 A kind of method of data storage, system and relevant apparatus
US20190278485A1 (en) * 2018-03-08 2019-09-12 Western Digital Technologies, Inc. Adaptive transaction layer packet for latency balancing
CN108845957A (en) * 2018-03-30 2018-11-20 杭州电子科技大学 An adaptive buffer management method for replacement and write-back
US10423558B1 (en) * 2018-08-08 2019-09-24 Apple Inc. Systems and methods for controlling data on a bus using latency

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BAICHUAN SHEN et al.: "APRA: Adaptive Page Replacement Algorithm for NAND Flash Memory Storages", 2009 International Forum on Computer Science-Technology and Applications *
SUNGMIN PARK et al.: "Using Non-Volatile RAM as a Write Buffer for NAND Flash Memory-based Storage Devices", 2008 IEEE International Symposium on Modeling, Analysis and Simulation of Computers and Telecommunication Systems *
何简繁: "Research on cache optimization strategies for NAND-flash-based solid state disks" (基于NAND闪存的固态硬盘缓存优化策略研究), China Masters' Theses Full-text Database, Information Science and Technology *
崔金华 et al.: "An MWM-based buffer replacement algorithm for flash memory databases" (基于MWM的闪存数据库缓冲区置换算法), Journal of Huazhong University of Science and Technology (Natural Science Edition) *
申烨婷: "Research on buffer management optimization algorithms for NAND flash memory" (基于NAND闪存的缓冲区管理优化算法研究), China Masters' Theses Full-text Database, Information Science and Technology *
聂世强 et al.: "An object distribution algorithm based on jump hash" (一种基于跳跃hash的对象分布算法), Journal of Software (软件学报) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580754A (en) * 2020-05-06 2020-08-25 西安交通大学 Write-friendly flash memory solid-state disk cache management method
CN111580754B (en) * 2020-05-06 2021-07-13 西安交通大学 Write-friendly flash memory solid-state disk cache management method
CN112148366A (en) * 2020-09-14 2020-12-29 上海华虹集成电路有限责任公司 FLASH acceleration method for reducing power consumption and improving performance of chip
US11762578B2 (en) * 2020-09-29 2023-09-19 International Business Machines Corporation Buffer pool contention optimization
CN112684981A (en) * 2020-12-23 2021-04-20 北京浪潮数据技术有限公司 Solid state disk reading operation recording method, system, device and readable storage medium
CN112684981B (en) * 2020-12-23 2023-12-22 北京浪潮数据技术有限公司 Method, system and device for recording read operation of solid state disk and readable storage medium
CN113204573A (en) * 2021-05-21 2021-08-03 珠海金山网络游戏科技有限公司 Data read-write access system and method
CN113204573B (en) * 2021-05-21 2023-07-07 珠海金山数字网络科技有限公司 Data read-write access system and method
CN115048056A (en) * 2022-06-20 2022-09-13 河北工业大学 Solid state disk buffer area management method based on page replacement cost
CN115048056B (en) * 2022-06-20 2024-07-16 河北工业大学 Solid state disk buffer area management method based on page replacement cost

Also Published As

Publication number Publication date
CN110888600B (en) 2021-02-12

Similar Documents

Publication Publication Date Title
CN110888600B (en) Buffer area management method for NAND flash memory
CN107193646B (en) High-efficiency dynamic page scheduling method based on mixed main memory architecture
US10922235B2 (en) Method and system for address table eviction management
US20170109050A1 (en) Memory system having a plurality of writing modes
CN108762664B (en) Solid state disk page-level cache region management method
US20130198439A1 (en) Non-volatile storage
CN107391398B (en) Management method and system for flash memory cache region
CN103631536B (en) A kind of method utilizing the invalid data of SSD to optimize RAID5/6 write performance
US20140115241A1 (en) Buffer management apparatus and method
CN110795363B (en) Hot page prediction method and page scheduling method of storage medium
CN110532200B (en) Memory system based on hybrid memory architecture
CN111580754B (en) Write-friendly flash memory solid-state disk cache management method
CN107247675B (en) A kind of caching selection method and system based on classification prediction
US20180210832A1 (en) Hybrid drive translation layer
KR101297442B1 (en) Nand flash memory including demand-based flash translation layer considering spatial locality
CN113590045B (en) Data hierarchical storage method, device and storage medium
CN110297787A (en) The method, device and equipment of I/O equipment access memory
CN111722797B (en) SSD and HA-SMR hybrid storage system oriented data management method, storage medium and device
CN111352593B (en) Solid state disk data writing method for distinguishing fast writing from normal writing
CN107562806B (en) Self-adaptive sensing acceleration method and system of hybrid memory file system
CN111078143B (en) Hybrid storage method and system for data layout and scheduling based on segment mapping
TWI388986B (en) Flash memory apparatus and method for operating a flash memory apparatus
CN109002400B (en) Content-aware computer cache management system and method
CN105988720B (en) Data storage device and method
Liu et al. FLAP: Flash-aware prefetching for improving SSD-based disk cache

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant