CN109739780A - Dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping - Google Patents

Dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping

Info

Publication number
CN109739780A
CN109739780A CN201811374675.7A CN201811374675A
Authority
CN
China
Prior art keywords
cache
mapping
page
address
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811374675.7A
Other languages
Chinese (zh)
Inventor
阮利
丁树勋
肖利民
苏书宾
李昂鸿
殷成涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201811374675.7A
Publication of CN109739780A
Legal status: Pending

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention proposes a dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping, which solves the problems that current page-level address mapping fails to make full use of sequential I/O locality and that I/O performance declines when random I/O dominates. The method of the invention exploits the temporal and spatial locality of sequential I/O by providing a first-level cache L1Cache and a second-level cache L2Cache with different management policies at the two levels: L1Cache caches individual address mapping entries, while L2Cache caches entire mapping pages. A spatial locality detection method is applied at the second-level cache: a certain number of recent I/O requests are examined, and if the current I/O requests are detected to have strong spatial locality, the corresponding mapping page is fetched into L2Cache. Meanwhile, L1Cache dynamically adjusts its size according to the cache hit rate, guaranteeing I/O performance under random I/O patterns.

Description

Dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping
Technical field
The invention belongs to the field of computer science and technology, and more particularly relates to a dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping.
Background technique
NAND Flash is widely used in many fields, such as embedded devices and high-performance servers, because of its low power consumption, non-volatility, and outstanding IOPS. Its I/O performance is far higher than that of traditional mechanical hard disks, which to some extent narrows the gap between CPU speed and I/O speed in computer systems, and its capacity currently grows at a multiplying rate every year. Completely unlike a mechanical hard disk, NAND Flash supports random addressing, reads and writes in units of pages, erases in units of blocks, and only supports out-of-place updates: data cannot be overwritten directly. Current operating systems therefore support NAND Flash storage devices in two common ways. One is a file system customized for NAND Flash. The other is to add a flash translation layer (FTL) inside the NAND Flash device, responsible for address translation and other management work; the flash translation layer allows NAND Flash to be used like a mechanical hard disk without any modification to existing file systems.
NAND Flash storage is internally composed of blocks and pages; in a typical NAND Flash device each page is 4KB and a block contains 64 pages. The basic unit of NAND Flash reads and writes is the page. When the upper layer requests data, the requested address is a logical address, and because NAND Flash updates data out of place, the actual physical storage address of the same logical address changes after the data is modified. An address mapping table is therefore needed to translate upper-layer logical addresses into NAND Flash physical addresses. Address mapping methods fall into three classes: page-level mapping, block-level mapping, and hybrid mapping. A major advantage of page-level mapping is its high translation efficiency: because the mapping unit matches the read/write unit, a single lookup yields the physical address of the page. However, as NAND Flash capacity keeps growing, the space occupied by the mapping table grows with it. Taking a 512GB NAND Flash device as an example, with a page size of 4KB and an address mapping entry size of 10 bytes, the address mapping table reaches 1280MB, more than 1GB, which is a very serious SRAM burden inside the NAND Flash device. The current page-level solution to this problem is caching: part of the mapping table is cached in SRAM, reducing the mapping table's SRAM footprint. But these methods do not exploit the locality of sequential I/O well, and when random I/O dominates, the cache hit rate is very unsatisfactory, causing I/O performance to decline.
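The mapping-table size quoted above can be checked with a short calculation; this is a sketch using only the figures given in the text (512GB capacity, 4KB pages, 10-byte mapping entries):

```python
# Size of a full page-level mapping table for the 512 GB device in the text.
CAPACITY = 512 * 1024**3      # 512 GB in bytes
PAGE_SIZE = 4 * 1024          # 4 KB per flash page
ENTRY_SIZE = 10               # bytes per address mapping entry

num_pages = CAPACITY // PAGE_SIZE      # one mapping entry per flash page
table_bytes = num_pages * ENTRY_SIZE
table_mb = table_bytes // 1024**2

print(num_pages)   # 134217728 mapping entries
print(table_mb)    # 1280 MB -- more than 1 GB, far too large for device SRAM
```

This confirms the 1280MB figure in the text, which motivates caching only part of the table in SRAM.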
Summary of the invention
To solve the problems set forth above, the present invention proposes a dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping. Aiming at the problems that current page-level address mapping fails to make full use of sequential I/O locality and that I/O performance declines when random I/O dominates, the method uses a two-level cache with different management policies at each level, making full use of the temporal and spatial locality of sequential I/O, while combining dynamic cache adjustment to cope with random-I/O-heavy scenarios. This both reduces SRAM occupancy and improves the cache hit rate and performance under random I/O patterns.
The technical solution of the present invention is as follows:
1. A dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping, characterized in that it exploits the temporal and spatial locality of sequential I/O by providing a first-level cache L1Cache and a second-level cache L2Cache with different management policies at the two levels: L1Cache caches individual address mapping entries, and L2Cache caches entire mapping pages. A spatial locality detection method is applied at the second-level cache, which examines a certain number of recent I/O requests and, if the current I/O requests are detected to have strong spatial locality, fetches the corresponding mapping page into L2Cache. Meanwhile, L1Cache dynamically adjusts its size according to the cache hit rate, guaranteeing I/O performance under random I/O patterns.
2. The method comprises the following steps:
Step 1: Receive an upper-layer I/O request and search L1Cache for the corresponding address mapping entry. On a hit, decide according to the current cache hit rate whether to shrink L1Cache; if shrinking is needed, pack the dirty cache entries together with the rarely used cache entries and write them back to the global mapping table GMT, then jump to step 7. On a miss, proceed to the next step.
Step 2: Compute the logical mapping page number LMP from the logical address LPN of the I/O request, query the mapping table directory GTD by LMP, and check whether the mapping page containing the requested address mapping entry is already cached in L2Cache. If it is cached, jump to step 7; otherwise proceed to the next step.
Step 3: Look up in the mapping table directory GTD the physical page address PMP of the requested address mapping entry in the global mapping table GMT, and decide according to the current cache hit rate whether to grow L1Cache; if growth is needed, enlarge L1Cache by the size of one mapping page.
Step 4: Judge by the spatial locality detection method whether the current requests have strong spatial locality. If not, proceed directly to the next step; otherwise fetch from the global mapping table GMT the mapping page containing the requested address mapping entry. If L2Cache is full at this point, find the mapping page to be replaced according to the "request distance" and then put the new mapping page into L2Cache; if L2Cache is not full, put the new mapping page into it directly.
Step 5: If L1Cache is not full, put the requested address mapping entry directly into L1Cache and jump to step 7; if it is full, find the cache entry to be evicted according to the Segmented LRU policy and replace it.
Step 6: If the entry evicted from L1Cache has not been modified, proceed directly to the next step. If it has been modified, check whether the mapping page corresponding to the entry exists in L2Cache; if so, modify the corresponding entry in the second-level cache and mark that mapping page as dirty, otherwise write the entry directly back to the global mapping table GMT.
Step 7: Perform the I/O operation on NAND Flash according to the requested address mapping entry.
3. The specific steps for dynamically shrinking L1Cache in step 1 are as follows:
Step 1.1: Collect k dirty cache entries and rarely used cache entries from the probationary segment of L1Cache, where k is the number of address mapping entries each mapping page can store;
Step 1.2: Write the k collected cache entries into the global mapping table GMT;
Step 1.3: Reclaim the space occupied by these k cache entries, then adjust the positions of the remaining cache entries in the probationary segment.
4. The specific steps for dynamically growing L1Cache in step 3 are as follows:
Step 3.1: Allocate a new buffer in the SRAM of the flash device, whose size is the original size of the probationary segment of L1Cache plus k, where k is the number of address mapping entries each mapping page can store;
Step 3.2: Copy the address mapping entries cached in the original probationary segment into the newly allocated space;
Step 3.3: Reclaim the space occupied by the original probationary segment.
5. The specific steps of the spatial locality detection method in step 4 are as follows:
Step 4.1: Set a spatial locality threshold t, representing the proportion of requests in the recent period that fall in the same mapping page;
Step 4.2: Save the mapping page numbers (⌊r1/k⌋, ⌊r2/k⌋, …, ⌊rk/k⌋) of the address mapping entries of the most recent k requests (r1, r2, r3, …, rk), where k is the number of address mapping entries one mapping page can store;
Step 4.3: Find the mapping page number with the highest frequency of occurrence among these page numbers and compute the proportion r in which it occurs; compare r with the threshold t. If r is higher than t, the requests are considered to have strong spatial locality; otherwise the current requests have no spatial locality.
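Steps 4.1-4.3 above can be sketched as follows; this is a minimal illustration, and the function name, the example LPNs, and the parameter values are ours, not from the patent:

```python
from collections import Counter

def has_spatial_locality(recent_lpns, k, t):
    """Spatial-locality detection sketch: map each recent request LPN to
    its mapping page number (LPN // k, since one mapping page holds k
    entries), find the most frequent page number, and compare the
    fraction of requests that fall on it against the threshold t."""
    pages = [lpn // k for lpn in recent_lpns]
    _, count = Counter(pages).most_common(1)[0]
    ratio = count / len(pages)
    return ratio > t

# 7 of 8 recent requests fall in mapping page 0 -> strong spatial locality
print(has_spatial_locality([1, 2, 3, 4, 5, 6, 7, 9000], k=1024, t=0.6))  # True
# scattered requests -> no spatial locality
print(has_spatial_locality([0, 5000, 12000, 90000], k=1024, t=0.6))      # False
```

When the function returns true, the method fetches the whole mapping page containing the requested entry into L2Cache.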
6. In step 4, the "request distance" is used to find the replacement page in the second-level cache; the meaning of "request distance" is as follows:
Define the logical address of the current request as Lp, the number of address mapping entries each mapping page can store as k, and the number of mapping pages the second-level cache L2Cache can store as n. If the page numbers of the mapping pages currently cached in L2Cache are (p1, p2, p3, …, pn), then the "request distance" dis_i between each mapping page in L2Cache and the current request is expressed as:

dis_i = | ⌊Lp / k⌋ − p_i |,  i = 1, 2, …, n

According to the characteristics of spatial locality, the page whose page number differs the most can be regarded as the farthest away, i.e. the least likely to be accessed in the future. Denoting the evicted page as victimpage, the eviction formula is:

victimpage = p_j,  where j = argmax_{1 ≤ i ≤ n} dis_i
The beneficial effects of the invention are:
The dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping proposed by the present invention builds on the idea of page-level address mapping and proposes the optimizations of multi-level caching and dynamic cache adjustment. To make full use of the temporal and spatial locality of sequential I/O, the present invention uses a two-level cache with a different management policy at each level and proposes a spatial locality detection algorithm at the second-level cache, while dynamically adjusting the cache size according to the cache hit rate to improve random I/O performance. Finally, we implemented the proposed flash translation layer address mapping algorithm and verified its improvement of the mapping table cache hit rate and the I/O response time.
Compared with the prior art, the main advantages are:
1. An improved cache structure: a second-level cache is designed to make full use of the temporal and spatial locality of I/O requests, and the first-level cache is designed to be dynamically adjustable, which takes the influence of random I/O on the cache hit rate into account and guarantees the hit rate when random I/O dominates.
2. An optimized cache write-back policy: traditional address mapping methods frequently modify mapping pages because of write operations, increasing the I/O response time. In the present invention, the second-level cache can temporarily hold some of the evicted dirty cache entries, and when the cache shrinks, the dirty cache entries can be packed and written back together, reducing the number of mapping page modifications and improving I/O performance.
The differences from traditional page-level address mapping methods are:
(1) Different cache structure and management policy: traditional page-level address mapping methods use a single cache, which can hardly guarantee the cache hit rate under a variety of workloads. Our dynamic two-level cache method can guarantee the cache hit rate under a variety of I/O patterns, and by using different management policies on the different caches it ensures that the locality of I/O requests is fully exploited.
(2) The optimized I/O scenarios are more complex: traditional page-level address mapping methods mainly guarantee good translation efficiency under sequential I/O, but their performance degrades sharply in random-I/O-heavy environments. Our method not only performs well under sequential I/O but also maintains good performance in random-I/O-heavy scenarios, so it can be applied in complex I/O environments.
Description of the drawings
Fig. 1 is a structural diagram of the dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping according to the present invention.
Fig. 2 is a flowchart of the dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping according to the present invention.
Fig. 3 is a schematic diagram of the process in which L2Cache temporarily holds entries evicted from L1Cache.
Fig. 4 is a schematic diagram of the process of packing dirty cache entries and rarely used cache entries for write-back.
Fig. 5 is a structural diagram of the mapping table directory GTD.
Specific embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the attached drawings and specific implementation steps, but not as a limitation of the invention.
Fig. 1 is a structural diagram of the dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping according to the present invention.
The present invention exploits the temporal and spatial locality of sequential I/O by providing a first-level cache L1Cache and a second-level cache L2Cache with different management policies at the two levels: L1Cache caches individual address mapping entries, and L2Cache caches entire mapping pages. A spatial locality detection method is applied at the second-level cache: a certain number of recent I/O requests are examined, and if the current I/O requests are detected to have strong spatial locality, the corresponding mapping page is fetched into L2Cache. Meanwhile, L1Cache dynamically adjusts its size according to the cache hit rate, guaranteeing I/O performance under random I/O patterns.
The structure mainly comprises four parts: the first-level cache L1Cache, the second-level cache L2Cache, the mapping table directory GTD, and the global mapping table GMT. L1Cache, L2Cache, and the mapping table directory GTD are stored in the SRAM region of the NAND Flash device, while the global mapping table is stored in the Flash region. The blocks of the entire NAND Flash device are divided into data blocks and mapping blocks: the mapping blocks store all mapping entries, and the data blocks store application data. The first-level mapping cache caches individual address mapping entries. To exploit the spatial locality of I/O requests, we introduce the second-level cache, which caches entire mapping pages: we observe a certain number of recent I/O requests, and if the observed requests have strong spatial locality, the corresponding mapping page is fetched into the second-level cache. The mapping table directory is used to look up where an address mapping page is stored in the NAND Flash mapping blocks; when both cache levels miss, the physical address of the page storing the mapping entry in NAND Flash must be looked up in the mapping table directory. The global mapping table, which stores all address mapping entries, resides in the mapping pages of NAND Flash.
Fig. 2 is a flowchart of the dynamic two-level cache flash translation layer (FTL) address mapping method based on page-level mapping according to the present invention.
The method comprises the following steps:
Step 1: Receive an upper-layer I/O request and search L1Cache for the corresponding address mapping entry. On a hit, decide according to the current cache hit rate whether to shrink L1Cache; if shrinking is needed, pack the dirty cache entries together with the rarely used cache entries and write them back to the global mapping table GMT, then jump to step 7. On a miss, proceed to the next step.
Step 2: Compute the logical mapping page number LMP from the logical address LPN of the I/O request, query the mapping table directory GTD by LMP, and check whether the mapping page containing the requested address mapping entry is already cached in L2Cache. If it is cached, jump to step 7; otherwise proceed to the next step.
Step 3: Look up in the mapping table directory GTD the physical page address PMP of the requested address mapping entry in the global mapping table GMT, and decide according to the current cache hit rate whether to grow L1Cache; if growth is needed, enlarge L1Cache by the size of one mapping page.
Step 4: Judge by the spatial locality detection method whether the current requests have strong spatial locality. If not, proceed directly to the next step; otherwise fetch from the global mapping table GMT the mapping page containing the requested address mapping entry. If L2Cache is full at this point, find the mapping page to be replaced according to the "request distance" and then put the new mapping page into L2Cache; if L2Cache is not full, put the new mapping page into it directly.
Step 5: If L1Cache is not full, put the requested address mapping entry directly into L1Cache and jump to step 7; if it is full, find the cache entry to be evicted according to the Segmented LRU policy and replace it.
Step 6: If the entry evicted from L1Cache has not been modified, proceed directly to the next step. If it has been modified, check whether the mapping page corresponding to the entry exists in L2Cache; if so, modify the corresponding entry in the second-level cache and mark that mapping page as dirty, otherwise write the entry directly back to the global mapping table GMT.
Step 7: Perform the I/O operation on NAND Flash according to the requested address mapping entry.
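The lookup order of steps 1-3 can be sketched as follows. This is a simplified model in which plain dictionaries stand in for the two caches, the GTD, and the flash-resident GMT; all names and sample values are ours, and the resizing, locality detection, and eviction logic of steps 4-6 are omitted:

```python
def translate(lpn, l1, l2, gtd, gmt, k):
    """Sketch of the lookup order: L1Cache holds individual LPN->PPN
    entries, L2Cache holds entire mapping pages keyed by logical mapping
    page number, and on a double miss the GTD gives the flash location
    of the mapping page inside the GMT."""
    if lpn in l1:                      # step 1: first-level cache hit
        return l1[lpn], "L1"
    lmp = lpn // k                     # step 2: logical mapping page number
    if lmp in l2:                      # second-level cache holds whole pages
        return l2[lmp][lpn % k], "L2"
    pmp = gtd[lmp]                     # step 3: directory -> physical page
    return gmt[pmp][lpn % k], "GMT"    # read mapping page from flash

k = 4                                  # entries per mapping page (tiny demo)
l1 = {5: 1005}                         # individual entry: LPN 5 -> PPN 1005
l2 = {2: [1008, 1009, 1010, 1011]}     # whole mapping page for LPNs 8-11
gtd = {0: 77}                          # mapping page 0 lives at flash page 77
gmt = {77: [1000, 1001, 1002, 1003]}   # simulated mapping-block contents

print(translate(5, l1, l2, gtd, gmt, k))   # (1005, 'L1')
print(translate(9, l1, l2, gtd, gmt, k))   # (1009, 'L2')
print(translate(2, l1, l2, gtd, gmt, k))   # (1002, 'GMT')
```

The key structural point is that L1Cache is keyed per entry while L2Cache is keyed per mapping page, so a single L2 fetch serves k logically adjacent requests.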
1. Dynamic first-level cache design
To make full use of the temporal locality of I/O requests, the replacement method in L1Cache uses the Segmented LRU policy. However, performance under random-I/O-heavy environments is then poor: random reads and writes have no locality, so requests miss frequently and the Flash is read and written continually. To address this, we dynamically adjust the cache size: when the request hit rate is low, the cache is enlarged; when the hit rate is high, the cache is shrunk appropriately. This both improves the cache hit rate when random I/O dominates, reducing the number of Flash reads, and ensures that under normal circumstances the mapping table does not occupy too much SRAM.
Because the Segmented LRU policy divides the cache into two parts, we must consider which part to scale when dynamically growing or shrinking the cache. Dynamic cache adjustment targets random-I/O-heavy environments, and random I/O has no locality: most data is accessed only once. In our cache segmentation, the protected segment stores recently re-accessed data; since most data is accessed only once and never enters the protected segment, dynamically enlarging it would have little effect. If instead the probationary segment is enlarged, more of the fetched cache entries can be held in it when random I/O dominates. We therefore scale the probationary segment dynamically.
The specific method of dynamic cache adjustment is as follows. Whether each I/O request hits is recorded, and after the number of requests reaches a certain value, the cache hit rate p is computed. A hit-rate threshold T is set: if p < T, the hit rate of recent requests is relatively low. To avoid adjusting the cache size too frequently, which would cause frequent transfers of cached data and SRAM allocations, a second threshold M is also set. Only when M consecutive measurements satisfy p < T do we conclude that the cache hit rate is poor and random I/O dominates, so the cache must grow; the probationary segment of the first-level cache is then enlarged by the number of address mapping entries one mapping page can hold. If instead p > T holds for M consecutive measurements and the cache is larger than its initial size, the current dirty cache entries are collected, the space occupied by the dirty entries and the rarely used entries is reclaimed, and the dirty entries are written back to the global mapping table, reducing the number of write-backs needed when cache entries are replaced later.
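The hit-rate-driven adjustment rule described above can be sketched as follows. This is a simplified model: the window size and the values of T and M are illustrative, and the extra condition that shrinking requires the cache to exceed its initial size is omitted:

```python
class HitRateMonitor:
    """Sketch of the dynamic L1Cache sizing rule: hit/miss outcomes are
    recorded per I/O; every `window` requests a hit rate p is computed,
    and only after M consecutive windows with p < T does the monitor
    signal "grow" (M consecutive windows with p >= T signal "shrink"),
    which damps oscillation in the cache size."""

    def __init__(self, window=100, T=0.5, M=3):
        self.window, self.T, self.M = window, T, M
        self.hits = self.total = 0
        self.low_streak = self.high_streak = 0

    def record(self, hit):
        """Record one request outcome; return 'grow', 'shrink', or None."""
        self.hits += hit
        self.total += 1
        if self.total < self.window:
            return None
        p = self.hits / self.total           # hit rate of this window
        self.hits = self.total = 0
        if p < self.T:
            self.low_streak += 1
            self.high_streak = 0
            if self.low_streak >= self.M:
                self.low_streak = 0
                return "grow"    # add one mapping page's worth of entries
        else:
            self.high_streak += 1
            self.low_streak = 0
            if self.high_streak >= self.M:
                self.high_streak = 0
                return "shrink"  # pack and write back dirty entries
        return None

mon = HitRateMonitor(window=10, T=0.5, M=2)
actions = [mon.record(hit=False) for _ in range(20)]
print(actions[-1])  # 'grow' after 2 consecutive all-miss windows
```

The two streak counters implement the anti-thrashing condition in the text: a single bad window never resizes the cache.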
2. Second-level cache design
The first-level cache mainly exploits the temporal locality of I/O requests and guarantees performance under random I/O; the second-level cache is designed chiefly to make full use of the spatial locality of I/O and further improve I/O performance under complex workloads. To exploit spatial locality, the second-level cache stores mapping entries with adjacent logical addresses, so its storage unit is the mapping page rather than the individual address mapping entry. When a request misses and a mapping entry must be fetched from the global mapping table, whether the I/O requests of the recent period show obvious spatial locality decides whether the corresponding mapping page is fetched. If the current request pattern has strong spatial locality, the entire mapping page containing the requested address mapping entry is fetched into the second-level cache: because a mapping page stores mapping entries with adjacent logical addresses, by spatial locality these adjacent entries are likely to be accessed. The specific method of fetching mapping pages into the second-level cache is as follows:
When the mapping entry of the requested address is not in the cache, the mapping table directory GTD must be consulted to fetch the corresponding cache entry from Flash. If L1Cache is not full at that point, the mapping entry is put directly into L1Cache. If L1Cache is full, an entry to evict is selected according to the Segmented LRU policy, and we check whether this entry is in L2Cache. If the evicted entry is in L2Cache and has been modified, only L2Cache is modified; later, when that page is evicted from L2Cache, whether the mapping page in Flash is updated depends on whether any address entry in the page was modified. If the evicted entry is not in L2Cache, it is written back to Flash. Without L2Cache, every eviction of a dirty cache entry would additionally cause one write of a mapping page, leading to more invalid pages and wasted space. L2Cache therefore not only exploits the spatial locality of I/O but also reduces the number of writes caused by updated address mapping entries leaving the cache.
If L2Cache is full, an existing cached page must be evicted; the page to evict is selected according to its "request distance" from the current request. The meaning of "request distance" is as follows:
Define the logical address of the current request as Lp, the number of address mapping entries each mapping page can store as k, and the number of mapping pages L2Cache can store as n. If the page numbers of the mapping pages currently cached in L2Cache are (p1, p2, p3, …, pn), then the "request distance" dis_i between each mapping page in L2Cache and the current request is expressed as:

dis_i = | ⌊Lp / k⌋ − p_i |,  i = 1, 2, …, n

According to the characteristics of spatial locality, the page whose page number differs the most can be regarded as the farthest away, i.e. the least likely to be accessed in the future. Denoting the evicted page as victimpage, the eviction formula is:

victimpage = p_j,  where j = argmax_{1 ≤ i ≤ n} dis_i
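Under the definitions above, victim selection by "request distance" can be sketched as follows (function and variable names are ours):

```python
def select_victim(cached_page_numbers, lp, k):
    """'Request distance' eviction sketch: the cached mapping page whose
    page number is farthest from the current request's mapping page
    (lp // k) is, by spatial locality, the least likely to be accessed
    next, so it is chosen as the victim page."""
    current_page = lp // k
    distances = [abs(p - current_page) for p in cached_page_numbers]
    return cached_page_numbers[distances.index(max(distances))]

# A request for LPN 5000 with k = 1024 lies in mapping page 4;
# of the cached pages [3, 4, 90, 7], page 90 is farthest away.
print(select_victim([3, 4, 90, 7], lp=5000, k=1024))  # 90
```

Note that, unlike LRU, this policy needs no access history for the cached pages: only their page numbers and the current request are consulted.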
3. Cache write-back mechanism
In NAND Flash, a write operation is time-consuming: roughly 20 times slower than a read operation. Our address mapping method is based on caching, so the influence of cache write-back on I/O requests cannot be ignored, because each write-back generates an additional write operation on NAND Flash; optimizing cache write-back is therefore also very necessary. In traditional cache-based page-level mapping methods, every eviction of a dirty cached address mapping entry writes to a mapping block of NAND Flash, so when write requests are frequent and the cache hit rate is low, I/O requests are accompanied by a large number of additional mapping page modifications, which seriously affects I/O performance. The present invention uses the second-level cache to optimize cache write-back in two ways. First, as shown in Fig. 3, a schematic diagram of the process in which L2Cache temporarily holds entries evicted from L1Cache: when a dirty cache entry is evicted from L1Cache, L2Cache is checked for the mapping page corresponding to that entry; if the page is present, the address mapping entry in L2Cache is updated directly, and only when that mapping page is later evicted from L2Cache is the entire page written back, if it is dirty, which clearly reduces the number of writes to the address mapping region. Second, as shown in Fig. 4, a schematic diagram of packing dirty cache entries and rarely used cache entries for write-back: when the first-level cache designed in the present invention shrinks dynamically, it preferentially collects the modified address mapping cache entries and the rarely used cache entries and then packs them for a single write to the mapping block, reducing the number of writes to the address mapping region to some extent. These two optimizations of cache write-back reduce the write operations on mapping pages and the impact of additional writes on I/O performance.
4. Request processing flow
To introduce the address translation process of the present invention more clearly, the specific structure of the mapping table directory is explained first; Fig. 5 is a structural diagram of the mapping table directory GTD. In Fig. 5, LMP is the index of the logical mapping page containing the address mapping entry of an I/O request, and PMP is the physical address of that logical mapping page in the NAND Flash mapping block. When the requested address mapping entry is not in the cache, the number of the logical mapping page containing it is first computed from the requested logical address LPN, and then the mapping table directory is queried by this number to obtain the physical mapping page number where the requested mapping entry is actually stored.
InRAM indicates whether the corresponding address mapping page has been fetched into the second-level cache, and Dirty indicates whether any address mapping entry in the page has been modified after being fetched into the second-level cache. If an entry was modified by a write operation, or was modified and then evicted because the first-level cache was full, the Dirty flag takes effect.
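The directory record described for Fig. 5 can be sketched as a small structure; the field names are ours, and only LMP, PMP, InRAM, and Dirty come from the text:

```python
from dataclasses import dataclass

@dataclass
class GTDEntry:
    """One mapping-table-directory record: lmp indexes the logical
    mapping page of a request; pmp is that page's physical address in
    the mapping block; in_ram marks whether the page has been fetched
    into L2Cache; dirty marks whether any entry in it was modified
    after being fetched."""
    lmp: int
    pmp: int
    in_ram: bool = False
    dirty: bool = False

e = GTDEntry(lmp=3, pmp=412)
e.in_ram = True   # page fetched into L2Cache
e.dirty = True    # an entry was modified by a write, or by a dirty L1 evict
print(e.lmp, e.pmp, e.in_ram, e.dirty)  # 3 412 True True
```

Keeping the two flags in the directory lets the miss path decide, without touching flash, whether to consult L2Cache and whether an eviction must rewrite the mapping page.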
Because the flash translation layer processes read requests and write requests differently, the two flows are explained separately below:
1) Read requests
A read request arrives at the flash translation layer, which first queries the first-level cache. On a hit, a read operation is performed on NAND Flash directly using the cached physical address, and then whether to dynamically adjust the cache is decided according to the preset conditions. On a miss, the logical mapping page number LMP is computed from the requested logical address LPN, and the InRAM flag in the mapping table directory GTD is queried to see whether the mapping page has been fetched into the second-level cache. If it is in the second-level cache, the second-level cache is queried for the physical page address of the address mapping entry and the read operation is performed; otherwise the physical page address PMP of the page containing the requested address mapping entry is obtained from the GTD and the address mapping page is read. The spatial locality detection method then checks whether the current requests have strong spatial locality. If they do, the mapping page must be fetched into the second-level cache: if the second-level cache is full, the address mapping page to evict is determined by the "request distance" and replaced; if it is not full, the page is fetched in directly. If there is no spatial locality, the mapping page need not be fetched into the second-level cache. After the second-level cache operations are completed, the requested address mapping entry must be put into the first-level cache: if the first-level cache is not full, the entry is put in directly; if it is full, the entry to evict is determined by the Segmented LRU policy, and then the requested address mapping entry is inserted. Whether to dynamically adjust the cache is then decided according to the preset conditions, and after that the data read operation is performed using the fetched address mapping entry.
2) write request
The general flow of a write request is consistent with that of a read, but because NAND Flash performs out-of-place updates, the write path is more complicated than the read path.
When a write request reaches the flash translation layer (FTL), the first-level cache is queried first. On a hit, a free page is located and the write operation is performed on NAND Flash; the physical address in the cached address mapping entry is then updated, and if the second-level cache also holds the mapping page corresponding to this write request, the corresponding address mapping entry in the second-level cache is modified as well and the page's Dirty flag is set; the method then decides, according to the preset condition, whether to dynamically adjust the first-level cache. On a first-level-cache miss, the logical mapping page number LMP is computed from the request's logical address LPN, and the InRAM flag bit of the mapping table directory GTD is queried to determine whether the mapping page corresponding to the request has been fetched into the second-level cache. If it is in the second-level cache, a free page is located and the write operation is performed, the address mapping page in the second-level cache is updated with the new write address, and the Dirty flag of that mapping page is set in the mapping table directory GTD. If the second-level cache also misses, a free page is located directly and the data are written; because the physical address corresponding to the request's logical address changes after the write, the global mapping table in NAND Flash is updated, that is, the modified mapping page is written back, and since the mapping page's own physical address also changes, the physical page address PMP of that mapping page in the mapping table directory is updated as well. The request's address mapping entry is then placed into the first-level cache: if the first-level cache is not full, the entry is inserted directly; if full, the entry to be evicted is determined by the Segmented LRU policy before inserting the request's entry. Finally, the method decides according to the preset condition whether to dynamically adjust the cache.
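The out-of-place mapping update at the heart of the write path can be sketched as follows. The names, the toy free-page counter, and the dict-based caches are assumptions for illustration; the dynamic L1 resizing and the writeback of the mapping page's own new physical address (the PMP update) are omitted for brevity.

```python
# Simplified sketch of the write-path mapping update. NAND Flash updates
# out of place, so every write goes to a fresh physical page and the
# mapping (in L1, L2, or the global table) must be redirected to it.
K = 4  # address mapping entries per mapping page (assumed)

class WritePathSketch:
    def __init__(self, gmt):
        self.gmt = gmt             # {lmp: {lpn: ppn}} global mapping table
        self.l1 = {}               # first-level cache: individual entries
        self.l2 = {}               # second-level cache: whole mapping pages
        self.dirty_pages = set()   # L2 pages holding a modified entry
        self.next_free = 1000      # toy free-page allocator (assumed)

    def translate_write(self, lpn):
        ppn = self.next_free       # out-of-place: always a new free page
        self.next_free += 1
        lmp = lpn // K
        if lmp in self.l2:                 # page cached: patch it, mark dirty
            self.l2[lmp][lpn] = ppn
            self.dirty_pages.add(lmp)
        else:                              # L2 miss: update the global table
            self.gmt[lmp][lpn] = ppn       # (modified mapping page written back)
        self.l1[lpn] = ppn                 # keep the L1 entry current
        return ppn
```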

Claims (6)

1. A dynamic two-level-cache FTL address mapping method based on page-level mapping, characterized in that: exploiting the temporal locality and spatial locality of sequential I/O, a first-level cache L1Cache and a second-level cache L2Cache are provided, and different cache management policies are used in the two cache levels; the first-level cache L1Cache caches individual address mapping entries, while the second-level cache L2Cache caches entire mapping pages; a spatial locality detection method is applied at the second-level cache to examine a certain number of current I/O requests, and if the current I/O requests are detected to have strong spatial locality, the corresponding mapping page is fetched into the second-level cache L2Cache; meanwhile, the first-level cache L1Cache dynamically adjusts its size according to the cache hit rate, guaranteeing I/O performance under random I/O patterns.
2. The method according to claim 1, characterized in that it comprises the following steps:
Step 1: receive an upper-layer I/O request and search the first-level cache L1Cache for the corresponding address mapping entry; on a hit, decide according to the current cache hit rate whether to reduce the size of L1Cache; if a reduction is needed, pack the dirty cache entries and rarely used cache entries, write them back to the global mapping table GMT, and shrink L1Cache; then jump to Step 7; on a miss, proceed to the next step;
Step 2: compute the logical mapping page number LMP from the I/O request's logical address LPN and query the mapping table directory GTD with LMP to check whether the mapping page containing the requested address mapping entry is already cached in the second-level cache L2Cache; if cached, jump to Step 7, otherwise proceed to the next step;
Step 3: look up in the mapping table directory GTD the physical page address PMP of the requested address mapping entry in the global mapping table GMT, and decide according to the current cache hit rate whether to enlarge L1Cache; if an enlargement is needed, increase L1Cache by the size of one mapping page;
Step 4: judge by the spatial locality detection method whether the current requests have strong spatial locality; if not, proceed directly to the next step, otherwise fetch from the global mapping table GMT the mapping page containing the requested address mapping entry; at this point, if the second-level cache L2Cache is full, find the mapping page to be replaced according to the "request distance" and then place the new mapping page into L2Cache; if not full, place the new mapping page into L2Cache directly;
Step 5: if the first-level cache L1Cache is not full, place the request's address mapping entry directly into L1Cache and jump to Step 7; if full, find the cache entry to be evicted according to the Segmented LRU algorithm and replace it;
Step 6: if the entry replaced out of L1Cache has not been modified, proceed directly to the next step; if it has been modified, check whether the mapping page containing that entry exists in the second-level cache L2Cache; if so, modify the corresponding entry in L2Cache and mark the page as dirty, otherwise write the entry back to the global mapping table GMT directly;
Step 7: perform the I/O operation on NAND Flash according to the request's address mapping entry.
3. The method according to claim 2, characterized in that in Step 1 the steps for dynamically shrinking the first-level cache L1Cache are as follows:
Step 1.1: collect, from a segment of L1Cache, k dirty cache entries and least-used cache entries, where k is the number of address mapping entries each mapping page can store;
Step 1.2: write the k collected cache entries back to the global mapping table GMT;
Step 1.3: reclaim the space occupied by these k cache entries, then adjust the positions of the remaining cache entries in the segment.
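Steps 1.1–1.3 can be sketched as below. This is an assumed illustration: an `OrderedDict` stands in for the L1 segment, its oldest entries stand in for the "dirty and rarely used" entries collected, and `gmt` stands in for the global mapping table.

```python
# Sketch of steps 1.1-1.3: evict k entries from the L1 segment, writing
# dirty ones back to the global mapping table before reclaiming the space.
from collections import OrderedDict

K = 4  # address mapping entries per mapping page (assumed)

def shrink_l1(segment, dirty_lpns, gmt, k=K):
    """`segment` is an OrderedDict (oldest entry first); the k oldest
    entries here play the role of the collected dirty/rarely-used ones."""
    for _ in range(min(k, len(segment))):
        lpn, ppn = segment.popitem(last=False)   # drop the oldest entry
        if lpn in dirty_lpns:                    # pack modified entries back
            gmt[lpn // k][lpn] = ppn
            dirty_lpns.discard(lpn)
    return segment                               # shrunken segment
```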
4. The method according to claim 2, characterized in that in Step 3 the steps for dynamically enlarging the first-level cache L1Cache are as follows:
Step 3.1: allocate a new buffer in the flash memory's SRAM whose size is that of the original segment of L1Cache plus k entries, where k is the number of address mapping entries each mapping page can store;
Step 3.2: copy the address mapping entries cached in the original segment into the newly allocated space;
Step 3.3: reclaim the space occupied by the original segment.
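The allocate/copy/free sequence of steps 3.1–3.3 can be mirrored in a short sketch. Python lists stand in for SRAM buffers here, and the reclamation of the old buffer (step 3.3) is left to garbage collection, so this only illustrates the shape of the operation.

```python
# Sketch of steps 3.1-3.3: grow the L1 segment by one mapping page's
# worth of entries (k slots).
K = 4  # address mapping entries per mapping page (assumed)

def grow_segment(old_buf, k=K):
    new_buf = [None] * (len(old_buf) + k)   # step 3.1: allocate larger buffer
    new_buf[:len(old_buf)] = old_buf        # step 3.2: copy cached entries
    return new_buf                          # step 3.3: old_buf is reclaimed
```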
5. The method according to claim 2, characterized in that the specific steps of the spatial locality detection method in Step 4 are as follows:
Step 4.1: set a spatial locality threshold t, representing the fraction of recent requests that fall in the same mapping page;
Step 4.2: save the mapping page numbers (⌊r1/k⌋, ⌊r2/k⌋, …, ⌊rk/k⌋) of the address mapping entries of the most recent k requests (r1, r2, r3, …, rk), where k is the number of address mapping entries a mapping page can store;
Step 4.3: find the most frequently occurring mapping page number among these, compute the fraction r with which this page number occurs, and compare it with the threshold t; if r exceeds t, the current requests are considered to have strong spatial locality, otherwise they do not.
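Steps 4.1–4.3 amount to a modal-page-frequency test, which can be sketched directly; the function name and the particular values of k and t used below are assumptions for the example.

```python
# Sketch of the spatial locality test (steps 4.1-4.3): map the last k
# requested logical addresses to mapping page numbers and check whether
# one page accounts for more than fraction t of them.
from collections import Counter

def has_spatial_locality(recent_lpns, k, t):
    pages = [lpn // k for lpn in recent_lpns]        # step 4.2: page numbers
    _, top_count = Counter(pages).most_common(1)[0]  # step 4.3: modal page
    return top_count / len(pages) > t                # compare ratio with t
```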
6. The method according to claim 2, characterized in that in Step 4 the "request distance" is used to find the replacement page in the second-level cache; the meaning of "request distance" is as follows:
Let the logical address of the current request be Lp, let the number of address mapping entries each mapping page can store be k, and let the number of mapping pages the second-level cache L2Cache can store be n; if the page numbers of the mapping pages currently cached in L2Cache are (p1, p2, p3, …, pn), then the "request distance" dis_i between each mapping page in L2Cache and the current request is expressed as:
dis_i = | p_i − ⌊Lp / k⌋ |, i = 1, 2, …, n
According to the characteristics of spatial locality, the page whose number differs most from the current request's mapping page number can be regarded as the farthest, i.e., the least likely to be accessed next; denoting the page to be evicted as victimpage, the eviction formula is:
victimpage = p_j, where j = argmax_{1 ≤ i ≤ n} dis_i
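Victim selection by "request distance" reduces to a maximum over page-number distances, as the following sketch shows; the function name and arguments are assumptions for illustration.

```python
# Sketch of victim selection by "request distance": the victim is the
# cached mapping page whose page number p_i is farthest from the current
# request's mapping page number Lp // k.
def pick_victim(cached_page_numbers, lp, k):
    lmp = lp // k   # mapping page number of the current request
    return max(cached_page_numbers, key=lambda p: abs(p - lmp))
```

For example, with pages 1, 2, and 8 cached and a request at logical address 5 (k = 4, so the request falls in page 1), page 8 is the farthest and is evicted.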
CN201811374675.7A 2018-11-20 2018-11-20 Dynamic secondary based on the mapping of page grade caches flash translation layer (FTL) address mapping method Pending CN109739780A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811374675.7A CN109739780A (en) 2018-11-20 2018-11-20 Dynamic secondary based on the mapping of page grade caches flash translation layer (FTL) address mapping method

Publications (1)

Publication Number Publication Date
CN109739780A true CN109739780A (en) 2019-05-10

Family

ID=66355726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811374675.7A Pending CN109739780A (en) 2018-11-20 2018-11-20 Dynamic secondary based on the mapping of page grade caches flash translation layer (FTL) address mapping method

Country Status (1)

Country Link
CN (1) CN109739780A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1499382A (en) * 2002-11-05 2004-05-26 华为技术有限公司 Method for implementing cache in high efficiency in redundancy array of inexpensive discs
CN1820257A (en) * 2002-11-26 2006-08-16 先进微装置公司 Microprocessor including a first level cache and a second level cache having different cache line sizes
US7366829B1 (en) * 2004-06-30 2008-04-29 Sun Microsystems, Inc. TLB tag parity checking without CAM read
US20140223118A1 (en) * 2013-02-01 2014-08-07 Brian Ignomirello Bit Markers and Frequency Converters
CN104268094A (en) * 2014-09-23 2015-01-07 浪潮电子信息产业股份有限公司 Optimized flash memory address mapping method
CN104809076A (en) * 2014-01-23 2015-07-29 华为技术有限公司 Management method and device of cache

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIUQIAO LI ET AL.: "HCCache: A hybrid client-side cache management scheme for I/O-intensive", 《2012 13TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED COMPUTING, APPLICATIONS AND TECHNOLOGIES》 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110413228A (en) * 2019-07-09 2019-11-05 江苏芯盛智能科技有限公司 A kind of mapping table management method, system and electronic equipment and storage medium
CN110413228B (en) * 2019-07-09 2022-10-14 江苏芯盛智能科技有限公司 Mapping table management method and system, electronic equipment and storage medium
CN110413537A (en) * 2019-07-25 2019-11-05 杭州电子科技大学 A kind of flash translation layer (FTL) and conversion method towards hybrid solid-state hard disk
CN111258924A (en) * 2020-01-17 2020-06-09 中国科学院国家空间科学中心 Mapping method based on satellite-borne solid-state storage system self-adaptive flash translation layer
CN111506517A (en) * 2020-03-05 2020-08-07 杭州电子科技大学 Flash memory page level address mapping method and system based on access locality
CN111813709A (en) * 2020-07-21 2020-10-23 北京计算机技术及应用研究所 High-speed parallel storage method based on FPGA (field programmable Gate array) storage and calculation integrated framework
CN111813709B (en) * 2020-07-21 2023-08-08 北京计算机技术及应用研究所 High-speed parallel storage method based on FPGA (field programmable Gate array) memory and calculation integrated architecture
CN112685337B (en) * 2021-01-15 2022-05-31 浪潮云信息技术股份公司 Method for hierarchically caching read and write data in storage cluster
CN112685337A (en) * 2021-01-15 2021-04-20 浪潮云信息技术股份公司 Method for hierarchically caching read and write data in storage cluster
CN113377690A (en) * 2021-06-28 2021-09-10 福建师范大学 Solid state disk processing method suitable for user requests of different sizes
CN113377690B (en) * 2021-06-28 2023-06-27 福建师范大学 Solid state disk processing method suitable for user requests of different sizes
CN113419976A (en) * 2021-06-29 2021-09-21 华中科技大学 Self-adaptive segmented caching method and system based on classification prediction
CN113419976B (en) * 2021-06-29 2024-04-26 华中科技大学 Self-adaptive segmented caching method and system based on classification prediction
CN117708000A (en) * 2024-02-05 2024-03-15 成都佰维存储科技有限公司 Random writing method and device of data, electronic equipment and storage medium
CN117708000B (en) * 2024-02-05 2024-05-07 成都佰维存储科技有限公司 Random writing method and device of data, electronic equipment and storage medium
CN118170327A (en) * 2024-05-14 2024-06-11 苏州元脑智能科技有限公司 Solid state disk address mapping method, device and product

Similar Documents

Publication Publication Date Title
CN109739780A (en) Dynamic secondary based on the mapping of page grade caches flash translation layer (FTL) address mapping method
CN103885728B (en) A kind of disk buffering system based on solid-state disk
CN104268094B (en) Optimized flash memory address mapping method
US7143240B2 (en) System and method for providing a cost-adaptive cache
US10740251B2 (en) Hybrid drive translation layer
US9582282B2 (en) Prefetching using a prefetch lookup table identifying previously accessed cache lines
CN107066393A (en) The method for improving map information density in address mapping table
EP3486786B1 (en) System and methods for efficient virtually-tagged cache implementation
CN105389135B (en) A kind of solid-state disk inner buffer management method
CN104102591A (en) Computer subsystem and method for implementing flash translation layer in computer subsystem
CN104166634A (en) Management method of mapping table caches in solid-state disk system
US9176856B2 (en) Data store and method of allocating data to the data store
CN109446117B (en) Design method for page-level flash translation layer of solid state disk
US10564871B2 (en) Memory system having multiple different type memories with various data granularities
US10423534B2 (en) Cache memory
CN110262982A (en) A kind of method of solid state hard disk address of cache
CN110888600A (en) Buffer area management method for NAND flash memory
US7356650B1 (en) Cache apparatus and method for accesses lacking locality
CN108845957A (en) It is a kind of to replace and the adaptive buffer management method of write-back
CN111580754B (en) Write-friendly flash memory solid-state disk cache management method
CN106909323B (en) Page caching method suitable for DRAM/PRAM mixed main memory architecture and mixed main memory architecture system
US7472226B1 (en) Methods involving memory caches
CN109478164A (en) For storing the system and method for being used for the requested information of cache entries transmission
CN109815168A (en) System and method for marking buffer to realize less
CN111506517B (en) Flash memory page level address mapping method and system based on access locality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190510