CN103383666B - Method and system for improving the locality of cache-prefetched data, and cache access method - Google Patents

Method and system for improving the locality of cache-prefetched data, and cache access method

Info

Publication number
CN103383666B
CN103383666B CN201310298246.7A
Authority
CN
China
Prior art keywords
data record
hit
data
prefetch
accessed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310298246.7A
Other languages
Chinese (zh)
Other versions
CN103383666A (en)
Inventor
严得辰
刘立坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201310298246.7A priority Critical patent/CN103383666B/en
Publication of CN103383666A publication Critical patent/CN103383666A/en
Application granted granted Critical
Publication of CN103383666B publication Critical patent/CN103383666B/en


Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a method and a system for improving the locality of cache-prefetched data. The method counts the prefetch hits of each prefetched data-record set in the cache; for a set whose prefetch hit count is below a configured hit threshold, the data records of the set that were actually accessed are written to a new storage area when the set is evicted from the cache, where together with other data in that area they form a new prefetched data-record set. The method effectively reduces the number of prefetches and improves the cache hit rate.

Description

Method and system for improving the locality of cache-prefetched data, and cache access method
Technical field
The present invention relates to caching technology, and in particular to a method of organizing prefetched data so as to improve its locality and thereby the cache hit rate.
Background technology
A cache is a very important component of a multilevel storage system, and cache prefetching is an important technique for improving cache efficiency. When a data record pij is accessed, its access position is first looked up (by an index lookup, a metadata lookup, or the like). When the lookup misses in the cache, the prefetcher uses a single access to the lower storage level holding pij to bring the whole prefetched data-record set Pi: {pi1, ..., pin} into the cache, and changes the access positions of pi1 ... pin to the corresponding positions in the cache, in the expectation that more accesses to pi1 ~ pin will follow. Pi is called the prefetch entry of pi1, ..., pin, and pij is the first record of this prefetch. The accessed data records may be fixed-length or variable-length. The spatial locality of the prefetched data determines how effective the prefetch mechanism is: prefetched data with good spatial locality lets a single prefetch bring more cache hits and reduces accesses to the lower storage level, whereas prefetched data with poor spatial locality gives the prefetch mechanism no benefit. To let the prefetch mechanism play a larger role, the spatial locality of the prefetched data needs to be improved.
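To make this baseline concrete, the following is a minimal Python sketch of the prefetch-on-miss behaviour just described; it is illustrative only, and names such as Cache, lower_store and read_set are assumptions rather than anything defined in the patent.

```python
class Cache:
    """Illustrative cache that prefetches a whole record set P_i on a lookup miss."""

    def __init__(self, location):
        self.entries = {}          # set_id -> {record_id: data} currently cached
        self.location = location   # record_id -> ("cache" or "lower", set_id)

    def access(self, record_id, lower_store):
        where, set_id = self.location[record_id]
        if where == "cache":                       # lookup hit: serve from the cache
            return self.entries[set_id][record_id]
        # lookup miss: one access to the lower level brings in P_i = {p_i1, ..., p_in}
        records = lower_store.read_set(set_id)
        self.entries[set_id] = records
        for rid in records:                        # redirect access positions into the cache
            self.location[rid] = ("cache", set_id)
        return records[record_id]
```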
Summary of the invention
Therefore, it is an object of the invention to overcome the above defect of the prior art and to provide a method for improving the locality of cache-prefetched data.
This object of the invention is achieved through the following technical solutions:
In one aspect, the invention provides a method for improving the locality of cache-prefetched data, the method comprising:
counting the prefetch hits of each prefetched data-record set in the cache, where the prefetch hit count of a set is the total number of data records in the set that have been accessed;
for a prefetched data-record set whose prefetch hit count is below a configured hit threshold, when the set is evicted from the cache, writing the data records of the set that have been accessed to a new storage area, where together with other data in that area they form a new prefetched data-record set.
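As a rough illustration of these two steps, the following Python sketch (with assumed names such as PrefetchedSet and new_region, which do not come from the patent) counts the accessed records of each prefetched set and, at eviction time, copies them into the new storage area when the count is below the hit threshold.

```python
class PrefetchedSet:
    def __init__(self, set_id, records):
        self.set_id = set_id
        self.records = records          # record_id -> data
        self.hit_records = set()        # ids of records accessed while cached

    @property
    def hit_count(self):
        # "prefetch hit count" = number of accessed records in the set
        return len(self.hit_records)

def on_access(pset, record_id):
    pset.hit_records.add(record_id)     # each distinct accessed record counts once

def on_evict(pset, hit_threshold, new_region):
    # Only poorly used sets are reorganized; well used sets are evicted as usual.
    if pset.hit_count < hit_threshold:
        for rid in pset.hit_records:
            new_region.append((rid, pset.records[rid]))   # forms the new prefetched set
```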
The method may further comprise:
for each prefetched data-record set in the cache:
marking the data record of the set that is accessed first as a special record;
computing the access interval between the currently accessed data record of the set and the previously accessed data record, and, if this interval exceeds a configured interval threshold, marking the currently accessed data record as a special record;
for a prefetched data-record set whose prefetch hit count is below the hit threshold, when the set is evicted from the cache, changing the prefetch entries of the data records marked as special records to the new prefetched data-record set.
In the method, the access interval may be a time interval, an access-count interval, a user-defined logical interval, or a combination of these.
The method may further comprise: for a prefetched data-record set whose prefetch hit count is below the hit threshold, when the set is evicted from the cache, changing the prefetch entries of all accessed data records of the set to the new prefetched data-record set.
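A possible shape of this optional marking step is sketched below in Python; it extends the previous sketch with assumed special and last_access fields and a prefetch_entry map, none of which are defined by the patent.

```python
def record_access(pset, record_id, now, interval_threshold):
    first = not pset.hit_records                  # nothing in this set accessed yet
    gap = 0 if pset.last_access is None else now - pset.last_access
    pset.last_access = now
    pset.hit_records.add(record_id)
    if first or gap > interval_threshold:
        pset.special.add(record_id)               # mark as a special record

def retarget_on_evict(pset, hit_threshold, new_set_id, prefetch_entry):
    # Only under-used sets are touched, and only their special records are retargeted.
    if pset.hit_count < hit_threshold:
        for rid in pset.special:
            prefetch_entry[rid] = new_set_id      # future misses will prefetch the new set
```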
In another aspect, the invention also provides a system for improving the locality of cache-prefetched data, the system comprising:
a device for counting the prefetch hits of each prefetched data-record set in the cache, where the prefetch hit count of a set is the total number of data records in the set that have been accessed;
a device for, for a prefetched data-record set whose prefetch hit count is below a configured hit threshold, when the set is evicted from the cache, writing the data records of the set that have been accessed to a new storage area, where together with other data in that area they form a new prefetched data-record set.
The system may further comprise a marking device and a modification device. The marking device may be used, for each prefetched data-record set in the cache, to:
mark the data record of the set that is accessed first as a special record;
compute the access interval between the currently accessed data record of the set and the previously accessed data record and, if this interval exceeds a configured interval threshold, mark the currently accessed data record as a special record.
The modification device may be used, for a prefetched data-record set whose prefetch hit count is below the hit threshold, when the set is evicted from the cache, to change the prefetch entries of the data records marked as special records to the new prefetched data-record set.
In yet another aspect, the invention also provides a cache access method, the method comprising:
for a data record to be accessed, if the cache hits, incrementing by 1 the prefetch hit count of the prefetched data-record set in the cache that contains this data record;
if the cache misses and there is a free cache entry, prefetching the prefetched data-record set containing this data record into the free cache entry, and incrementing by 1 the prefetch hit count of this prefetched data-record set;
if the cache misses and there is no free cache entry, performing the following:
judging whether the prefetch hit count of the prefetched data-record set in a selected cache entry is below a configured hit threshold, and if so, writing the accessed data records of that set to a new storage area, where together with other data in that area they form a new prefetched data-record set; and
prefetching the prefetched data-record set containing the data record to be accessed into the selected cache entry, and incrementing by 1 the prefetch hit count of this prefetched data-record set.
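The three branches of this access method could be driven by a routine like the following sketch; the helper calls (find_set_containing, free_entry, select_victim_entry, set_in, install) are assumptions introduced for illustration only.

```python
def cache_access(cache, record_id, hit_threshold, new_region):
    pset = cache.find_set_containing(record_id)        # prefetched set holding the record
    if cache.is_cached(pset):                           # case 1: cache hit
        pset.hit_records.add(record_id)
        return
    entry = cache.free_entry() or cache.select_victim_entry()
    victim = cache.set_in(entry)
    if victim is not None and victim.hit_count < hit_threshold:
        # case 3: reorganize the poorly used victim before replacing it
        for rid in victim.hit_records:
            new_region.append((rid, victim.records[rid]))
        victim.hit_records.clear()
    cache.install(entry, pset)                          # cases 2 and 3: prefetch the set
    pset.hit_records.add(record_id)
```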
The above cache access method may further comprise:
for each prefetched data-record set in the cache:
marking the data record of the set that is accessed first as a special record;
computing the access interval between the currently accessed data record of the set and the previously accessed data record, and, if this interval exceeds a configured interval threshold, marking the currently accessed data record as a special record;
when a prefetched data-record set whose prefetch hit count is below the hit threshold is evicted from the cache, changing the prefetch entries of the data records marked as special records to the new prefetched data-record set.
The above cache access method may further comprise: when a prefetched data-record set whose prefetch hit count is below the hit threshold is evicted from the cache, changing the prefetch entries of all accessed data records of the set to the new prefetched data-record set.
The above cache access method may further comprise:
when the number of data records in the new prefetched data-record set reaches a configured threshold, for each prefetched data-record set in the cache whose prefetch hit count is below the configured hit threshold, writing the accessed data records of that set into this new prefetched data-record set; and
stopping writes to this new prefetched data-record set and obtaining free cache space for storing another new prefetched data-record set.
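A minimal sketch of this capacity handling, under the same assumed structures, might look as follows; allocate_region is a hypothetical helper that returns a fresh, empty storage area.

```python
def seal_if_full(new_region, capacity, cached_sets, hit_threshold, allocate_region):
    if len(new_region) < capacity:
        return new_region                          # still room; keep writing here
    for pset in cached_sets:                       # final sweep over all cache entries
        if pset.hit_count < hit_threshold:
            for rid in pset.hit_records:
                new_region.append((rid, pset.records[rid]))
            pset.hit_records.clear()               # counts restart for these sets
    return allocate_region()                       # further writes target a new set
```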
Compared with the prior art, the method provided by the invention for improving the locality of cache-prefetched data effectively reduces the number of prefetches and improves the cache hit rate.
Brief description of the drawings
Embodiments of the present invention are further described below with reference to the drawings, in which:
Fig. 1 is a schematic flowchart of the method for improving the locality of cache-prefetched data according to an embodiment of the invention;
Fig. 2 is a schematic flowchart of the cache access method according to an embodiment of the invention.
Detailed description of the invention
In order to make the purpose, technical solutions and advantages of the present invention clearer, the invention is described in more detail below through specific embodiments with reference to the drawings. It should be understood that the specific embodiments described here are only intended to explain the invention and are not intended to limit it.
Improving the locality of cache-prefetched data requires solving two problems: determining which prefetched data have poor locality and need to be improved, and how to improve the locality of those prefetched data. From the time a prefetched data-record set (for example, Pi) is brought into the cache until it is evicted, a number of its data records will be accessed. The set of these accessed data records (for example, Hi: {pij1, ..., pijm}) is called the prefetch hit data of this prefetch, and the number of accessed data records (for example, m) is called the prefetch hit count of this prefetch. For example, after the prefetched data-record set Pi has been brought into the cache, we would like its prefetch hit count to quickly exceed a threshold; if this requirement is not met, the prefetch of Pi played no role or only a small one. This situation is said not to reach the prefetch effect; it indicates that the spatial locality within the prefetched data is poor, and that the prefetched data should be suitably rearranged to improve their locality.
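In other words, each prefetch of a set P_i carries a small amount of bookkeeping: the set H_i of records that were actually accessed, and its size m. A minimal sketch of that bookkeeping, with assumed names, is:

```python
from dataclasses import dataclass, field

@dataclass
class PrefetchBookkeeping:
    set_id: str                                   # identifies the prefetched set P_i
    hit_data: set = field(default_factory=set)    # H_i: ids of accessed records

    @property
    def hit_count(self) -> int:                   # m = |H_i|
        return len(self.hit_data)

def reached_prefetch_effect(bk: PrefetchBookkeeping, threshold: float) -> bool:
    # Below the threshold means the prefetch "did not reach the prefetch effect"
    # and the set's internal spatial locality should be improved.
    return bk.hit_count >= threshold
```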
Fig. 1 shows a schematic flowchart of the method for improving the locality of cache-prefetched data according to an embodiment of the invention. The method counts the prefetch hits of each prefetched data-record set in the cache; for a prefetched data-record set whose prefetch hit count is below the configured hit threshold (for example Pi), when the set is evicted from the cache, the accessed data records of the set (i.e. the prefetch hit data, for example Hi) are redundantly written to a new storage area, where together with other data in that area they form a new prefetched data-record set (for example P).
The hit threshold can be configured according to the actual system environment or user requirements; it can be a static threshold or a dynamic threshold. For example, the hit threshold can be set to a predetermined integer value, or to a percentage of the number of elements in the prefetched data-record set, such as 10% × |Pi|, 20% × |Pi| or 30% × |Pi|. The prefetch hit count is the total number of data records in the set that have been accessed; in other embodiments it may instead be the number of accesses made to the prefetched data-record set. The prefetch hit data that are redundantly written into the new prefetched data-record set by the above method form, together with the other data records in that set, a batch of new prefetched data with better spatial locality. The other data records in that set may be redundant data written by the same method, or newly generated data records being written; the set may contain duplicate data records, or the duplicates may be removed in some way. In other embodiments, one or more such new prefetched data-record sets may exist at the same time; when there are several, the accessed data records of the prefetched data-record sets identified as needing locality improvement are written into one of them according to some classification. Moreover, the storage medium holding the newly formed prefetched data-record set is not limited: it may be in the write cache, in a storage medium of another level, or migrated from the write cache to a storage medium of another level. When a data record with multiple copies is accessed, the copy in the cache is accessed preferentially; if none is in the cache, the record is accessed after being brought into the cache by prefetching.
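As a small illustration of the two threshold styles mentioned (a fixed integer, or a percentage of the set size), assuming a hypothetical helper:

```python
def hit_threshold(set_size, fixed=None, fraction=0.10):
    # Either a static integer threshold, or a fraction of |P_i| such as 10% * |P_i|.
    return fixed if fixed is not None else fraction * set_size

# e.g. hit_threshold(22) == 2.2, matching T_1 = 10% * |P_1| in the worked example below.
```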
After the prefetch hit data (for example Hi) have been redundantly written, the same data record may exist in several prefetched data-record sets. There are then several choices for the prefetch entry of each data record in the redundantly written prefetch hit data: it may be left unchanged (for example, it remains Pi), it may be pointed to the new prefetch location (for example P), or the choice may be made according to the access pattern; different prefetch-entry policies suit different environments. In a preferred embodiment of the invention, the method further comprises the following steps for each prefetched data-record set in the cache: marking the data record of the set that is accessed first as a special record; and computing the access interval between the currently accessed data record of the set and the previously accessed data record and, if this interval exceeds a configured interval threshold, marking the currently accessed data record as a special record. Then, for a prefetched data-record set whose prefetch hit count is below the hit threshold, when the set is evicted from the cache its accessed data records are redundantly written to a new storage area, where together with other data in that area they form a new prefetched data-record set; at the same time, the prefetch entries of the data records of the set that are marked as special records can be changed to the new prefetched data-record set. The access interval may be one of a time interval, an access-count interval and some user-defined logical interval, or a combination of several interval types (a record whose interval is too long under any one of them can be marked as a special record). The interval threshold can be set according to the type of access interval. In other embodiments, the prefetch entries of all data records that are redundantly written as above may be changed to the new prefetched data-record set, or they may be left unmodified.
As can be seen from the above, this method of improving the locality of cache-prefetched data can run together with cache access and cache replacement. In another embodiment of the invention, a cache access method is provided that incorporates the above method of improving the locality of cache-prefetched data. The cache access method proceeds as follows when a data record (say pij) is accessed, distinguishing three cases. (1) When the access misses in the cache (the data record pij is not in the cache): if there is an empty cache entry, say Ck, the prefetched data-record set Pi: {pi1, ..., pin} containing pij is prefetched into cache entry Ck; after the prefetch completes, pij is added to Hk, hk is incremented, and pij is marked as a special record. Here Hk is the set recording the prefetch hit data corresponding to cache entry Ck (its data structure may be, for example, a queue or a bitmap), and hk records the prefetch hit count. If there is no empty cache entry, a cache replacement step is performed (introduced below), replacing the existing content of a selected cache entry Ck with Pi. (2) When the access hits in the cache (for example, Pi has been prefetched into cache entry Ck) and hk < Ti: if pij is not yet in Hk, it is added to Hk and hk is incremented; the interval I between the previous access to an element of Pi and the current access is computed, and if I > TI, pij is marked as a special record. Here Ti is the hit threshold and TI is the interval threshold, which can be chosen according to the type of access interval; both may be static or dynamic thresholds. (3) When the access hits in the cache and the prefetch hit count hk ≥ Ti, hk is simply incremented. Adding pij to Hk means, for example, saving the logical or physical pointer of pij within the cache in the data structure recording Hk (before adding, it is checked whether pij already exists in Hk, and it is added only if it does not).
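The three cases above could be organized as in the following sketch; entry_holding, empty_entry, replace_entry_content, load and interval_since_last_access are assumed helper names, not the patent's terminology.

```python
def handle_access(cache, i, j, T_i, T_I):
    """Access record p_{i,j}, which belongs to prefetched set P_i."""
    k = cache.entry_holding(i)                       # None if P_i is not cached
    if k is None:                                    # case 1: miss
        k = cache.empty_entry()
        if k is None:
            k = cache.replace_entry_content(new_set=i)   # replacement step (next sketch)
        else:
            cache.load(k, i)                         # prefetch P_i into C_k
        cache.H[k].add(j); cache.h[k] += 1
        cache.special[k].add(j)                      # first accessed record is special
    elif cache.h[k] < T_i:                           # case 2: hit, below threshold
        if j not in cache.H[k]:
            cache.H[k].add(j); cache.h[k] += 1
        I = cache.interval_since_last_access(k)      # time / count / logical interval
        if I > T_I:
            cache.special[k].add(j)
    else:                                            # case 3: hit, threshold reached
        cache.h[k] += 1
```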
When the content of cache entry Ck is to be replaced, i.e. Pi is to be replaced by Pl (for example, the data record to be accessed is in Pl, so Pl must be prefetched into cache entry Ck), two cases are handled: (1) if hk < Ti, the data records in Hk are written into the redundant data-record set Pw, the prefetch entries of the records in Hk that are marked as special records are changed to Pw, and then Hk is cleared and hk is set to 0; (2) if hk ≥ Ti, Hk is cleared and hk is set to 0. The storage location of Pw may be in the cache (for example, the write cache), in a storage medium of another level, or migrated from the write cache to a storage medium of another level. Besides the data records redundantly written into Pw by the above method, newly generated data records may also be written into Pw. A redundantly written data record may then have two copies (one in Pi and one in Pw); when a data record with multiple copies is accessed, the copy in the cache is accessed preferentially, and if none is in the cache the record is accessed after being brought into the cache by prefetching. When the storage space of the redundant data-record set Pw is exhausted or the number of data records in it reaches the upper limit, all cache entries C1 ~ Cn are scanned; for each scanned cache entry Ck (whose cached set is Pi), two cases are handled: (1) if hk < Ti, the data records in Hk are written into Pw, the prefetch entries of the records in Hk that are marked as special records are changed to Pw, and finally Hk is cleared and hk is set to 0; (2) if hk ≥ Ti, no action is taken. Then writing of new data records into Pw stops, and free cache space continues to be obtained to become a new redundant data-record set Pw′.
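A corresponding sketch of the replacement step, again with assumed names (set_in_entry, read, the prefetch_entry map), is shown below; it is a sketch under those assumptions, not a definitive implementation.

```python
def replace_entry_content(cache, k, new_set, T, P_w, prefetch_entry):
    old = cache.set_in_entry(k)
    if cache.h[k] < T[old]:                          # under-used prefetch: reorganize it
        for j in cache.H[k]:
            P_w.append(cache.read(k, j))             # redundant copy into P_w
            if j in cache.special[k]:
                prefetch_entry[(old, j)] = "Pw"      # retarget special records to P_w
    cache.H[k].clear(); cache.special[k].clear(); cache.h[k] = 0
    cache.load(k, new_set)                           # P_l replaces P_i in C_k
    return k
```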
For a better understanding of the above cache access method, its execution is illustrated with reference to Fig. 2. In this example it is assumed that the number of cache entries is 4, the cache replacement algorithm used is least recently used (LRU), j is used as the logical pointer of pij within cache entry Ck (so that recording the number j realizes adding pij to the set Hk), the hit threshold is Ti = 10% × |Pi|, the interval threshold TI is set to 4, the upper limit on the number of records that can be written into the new prefetched record set is 10, and the storage space of the new prefetched record set is located in the write cache. In the initial state the cache is entirely empty, the prefetch hit data sets are all empty, and the prefetch hit counts are 0. Assume that the current system already contains the 6 prefetched data-record sets P1-P6 shown in Table 1. Table 2 gives the access sequences 1-20 to be performed in this example, in which sequences 1-7 and 12-20 are read accesses and sequences 8-11 are write accesses. Access sequence 1 means reading the data record p1,2, access sequence 8 means writing the data record p7,1, and so on.
Table 1
P1: p1,1 ~ p1,22    P2: p2,1 ~ p2,18    P3: p3,1 ~ p3,15
P4: p4,1 ~ p4,12    P5: p5,1 ~ p5,25    P6: p6,1 ~ p6,21
Table 2
Seq:    1      2      3      4      5      6      7      8      9      10
Type:   Read   Read   Read   Read   Read   Read   Read   Write  Write  Write
Record: p1,2   p2,5   p3,1   p4,9   p2,6   p2,7   p1,3   p7,1   p7,2   p7,3
Seq:    11     12     13     14     15     16     17     18     19     20
Type:   Write  Read   Read   Read   Read   Read   Read   Read   Read   Read
Record: p7,4   p5,10  p5,11  p5,10  p1,2   p6,3   p1,4   p3,2   p3,11  p4,12
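For reference, the parameters of this worked example can be written down directly; the variable names below are illustrative only and do not come from the patent.

```python
set_sizes = {"P1": 22, "P2": 18, "P3": 15, "P4": 12, "P5": 25, "P6": 21}
hit_thresholds = {name: 0.10 * size for name, size in set_sizes.items()}  # T_i = 10% * |P_i|
INTERVAL_THRESHOLD = 4        # T_I: access-count interval
NEW_SET_CAPACITY = 10         # upper limit of records written into the new set P_7
CACHE_ENTRIES = 4             # LRU replacement among C_1 .. C_4

assert abs(hit_thresholds["P1"] - 2.2) < 1e-9   # used when evicting P_1 below
assert abs(hit_thresholds["P2"] - 1.8) < 1e-9   # used when evicting P_2 below
```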
With continued reference to Fig. 2, the detailed processing of access sequences 1-20 is as follows:
1) Perform step 101: judge whether the accessed data record p1,2 is in the cache.
For example, the caching system can determine through a lookup operation whether the accessed data record is in the cache. Since p1,2 is not in the cache, the prefetched data-record set P1 containing p1,2 must be prefetched into the cache. Assume the system chooses to prefetch P1 into cache entry C1; then go to step 104 and continue.
2) Perform step 104: judge whether cache entry C1 is empty.
That is, before prefetching P1 into cache entry C1, the caching system must judge whether C1 already holds data. Since cache entry C1 is empty, go to step 110 and continue.
3) Perform step 110: prefetch P1 into cache entry C1.
4) Perform step 111: add p1,2 to the prefetch hit set H1 of C1; then return to step 101 and continue with the next access sequence.
Specifically, the logical pointer 2 of p1,2 is added to the prefetch hit data set H1 corresponding to C1, p1,2 is marked as a special record, and the prefetch hit count h1 is incremented.
Access sequences 2-4 are processed in the same manner as steps 1)-4). After they have been handled, the prefetched data-record set stored in each cache entry, the corresponding prefetch hit data set, and the corresponding prefetch hit count are as shown in Table 3 below (underlining denotes a special-record mark):
Table 3
Cache entry   Prefetched data-record set   Prefetch hit data set   Prefetch hit count
C1            P1                           {2}                     1
C2            P2                           {5}                     1
C3            P3                           {1}                     1
C4            P4                           {9}                     1
With reference to Fig. 2 and the access sequences shown in Table 2, access sequence 5 is processed next:
5) Perform step 101: judge whether p2,6 is in the cache. Here p2,6 is already in the cache, so go to step 102 and continue.
6) Perform step 102: judge whether the prefetch hit count h2 is below the hit threshold. If it is below, go to step 103 and continue; otherwise return to step 101 and process the next access sequence.
At this point h2 is 1, which is below the threshold T2 = 10% × |P2| = 10% × 18 = 1.8.
7) Perform step 103: add p2,6 to the prefetch hit set H2 of C2; then return to step 101 and continue with the next access sequence.
In step 103, the access interval I described above must also be computed. Here the interval I to the previous access of C2 is 3, which is less than the interval threshold TI, so the logical pointer 6 of p2,6 is recorded in the prefetch hit data set H2 corresponding to C2 and the prefetch hit count h2 is incremented.
Access sequences 6 and 7 are processed in the same manner as steps 5)-7). The differences are: for access sequence 6, when step 102 is performed h2 already exceeds the hit threshold T2, so step 103 is not performed; for access sequence 7, when step 103 is performed the interval I to the previous access of C1 is 6, which exceeds the interval threshold TI, so besides adding the logical pointer 3 of p1,3 to H1 and incrementing h1, p1,3 is also marked as a special record.
Next, access sequences 8-11 are processed. For write sequences 8-11 the caching system generates a new prefetched data-record set P7 and writes p7,1 ~ p7,4 into P7 (in this example it is assumed that P7 is stored in the write cache Pw). At this point the prefetched data-record set stored in each cache entry, the corresponding prefetch hit data set, and the corresponding prefetch hit count are as shown in Table 4 below (underlining denotes a special-record mark; a hyphen denotes that the information is empty):
Table 4
Access sequence 12, i.e. reading data record p5,10, is then processed:
8) Perform step 101: judge whether p5,10 is in the cache. Here p5,10 is not in the cache; assume the system chooses to prefetch the prefetched data-record set P5 containing p5,10 into cache entry C1, then go to step 104 and continue.
9) Perform step 104: judge whether cache entry C1 is empty. If the cache entry were empty, we would go to step 110; since it is not empty, the content of cache entry C1 must be evicted from the cache, so go to step 105 and continue.
10) Perform step 105: judge whether the prefetch hit count h1 is less than or equal to the hit threshold. If it is, perform step 106; if it is greater than the hit threshold, go to step 110 and continue.
Specifically, h1 is now 2, which is less than the hit threshold T1 = 10% × |P1| = 10% × 22 = 2.2. This shows that the locality of the data in the prefetched data-record set P1 currently held in C1 is poor and needs to be improved. If instead the count exceeded the hit threshold, the locality of this data would not need improving, i.e. no extra processing of the data currently in cache entry C1 would be required and P5 could be swapped directly into C1.
11) Perform step 106: write the data records in H1 into the set P7.
Specifically, the logical pointers 2 and 3 are read in order from the prefetch hit data set H1 corresponding to cache entry C1, the 2nd and 3rd records p1,2 and p1,3 are accordingly read from C1 and written into P7 (becoming p7,5 and p7,6 in P7), and the prefetch entries of p1,2 and p1,3 are changed to P7 (in this embodiment the redundantly written data records and the newly generated records are written into the same prefetched data-record set). Now p1,2, p1,3 and p7,5, p7,6 are copies of each other; since the latter are in the write cache Pw while P1 has been evicted from the cache, p7,5 and p7,6 are accessed preferentially.
12) Perform step 107: clear the prefetch hit set H1 and set h1 to 0.
13) Perform step 108: judge whether the write set P7 is full. If it is not full, go to step 110 and continue; if it is full, go to step 109 and continue.
Here the number of data records in P7 is 4, which is less than 10, so the upper limit has not been reached.
14) Perform step 110: swap P5 into cache entry C1.
15) Perform step 111: add p5,10 to the prefetch hit set H1 of C1.
Access sequences 13 and 14 are processed in the same manner as steps 5)-7). The difference is that for access sequence 14, when step 103 tries to add the logical pointer 10 of p5,10 to H1, it finds that this number has already been recorded (the same data record as in sequence 12 is accessed), so the remaining operations of step 103 are skipped. After processing, the prefetched data-record set stored in each cache entry, the corresponding prefetch hit data set, and the corresponding prefetch hit count are as shown in Table 5 below (underlining denotes a special-record mark; a hyphen denotes that the information is empty):
Table 5
For access sequence 15, since p1,2 and p1,3 have copies p7,5 and p7,6 in P7 (the copies are found by cache lookup), the access can be served by reading the data records p7,5, p7,6 from P7, and no other operations are performed.
Access sequences 16-18 are processed in the same manner as steps 8)-15). During processing, P3, P4 and P2 are evicted from the cache in turn according to the LRU cache replacement algorithm; in the process p3,1 and p4,9 are written into P7, and since p3,1 and p4,9 are marked as special records their prefetch entries are changed to P7. When P2 is evicted, its prefetch hit count exceeds the threshold T2, so its prefetch hit data are not written into P7.
Access sequence 19 is processed in the same manner as steps 5)-7). During processing, p3,11 hits in cache entry C2; the logical pointer 11 of p3,11 is added to the prefetch hit data set H2 corresponding to C2, and the prefetch hit count h2 is incremented.
Access sequence 20 is processed in the same manner as steps 8)-15). During processing, P5 is evicted from the cache according to the LRU cache replacement algorithm; in the process p5,10 and p5,11 are written into P7, and since p5,10 is marked as a special record its prefetch entry is changed to P7. The difference is that when step 108 is performed, the number of data records in P7 is 10, which has reached the upper limit, so step 109 must be performed. Specifically, cache entries C1 ~ C4 are scanned; h1, h3 and h4 are found not to meet the threshold requirement, so p6,3, p1,4 and p4,12 are written into P7 (h2 meets the threshold requirement, so p3,2 and p3,11 are not written). Since p6,3, p1,4 and p4,12 are marked as special records, their prefetch entries are changed to P7. Finally the prefetch hit data sets H1, H3 and H4 corresponding to C1, C3 and C4 are cleared, and h1, h3 and h4 are set to 0. After step 109 completes, the new prefetched data-record set P7: {p7,1, ..., p7,13} contains 13 data records, of which four are newly written records and nine are copy records. Writing to P7 now stops; when new data are subsequently produced, or part of the data records need to be redundantly written as described above, a new storage area is allocated from newly freed cache space to store a new redundant data-record set P8, and so on.
After all the access sequences given in Table 2 have been processed, the prefetched data-record set stored in each cache entry, the corresponding prefetch hit data set, and the corresponding prefetch hit count are as shown in Table 6 below (underlining denotes a special-record mark; a hyphen denotes that the information is empty; P7 is persisted to disk before being evicted from the write cache):
Table 6
The inventors also tested the above method, in the directory system of a content-lookup storage system, using backup workloads from a real environment. The test results show that the method reduces the number of index prefetches by 17.8% to 56%, improves read bandwidth by 8% to 24%, and improves write bandwidth by 2% to 6%. In the same directory system, test results with a two-week data synchronization workload show that the number of prefetches dropped by 96%.
Although the present invention has been described by means of preferred embodiments, the invention is not limited to the embodiments described here and also encompasses various changes and modifications made to them.

Claims (8)

1. A method for improving the locality of cache-prefetched data, the method comprising:
counting the prefetch hits of each prefetched data-record set in the cache, wherein the prefetch hit count of a set is the total number of data records in the set that have been accessed;
for a prefetched data-record set whose prefetch hit count is below a configured hit threshold, when the set is evicted from the cache, writing the data records of the set that have been accessed to a new storage area, where together with other data in that area they form a new prefetched data-record set;
wherein, for each prefetched data-record set in the cache:
the data record of the set that is accessed first is marked as a special record;
the access interval between the currently accessed data record of the set and the previously accessed data record is computed, and if this interval exceeds a configured interval threshold, the currently accessed data record is marked as a special record;
and wherein, for a prefetched data-record set whose prefetch hit count is below the hit threshold, when the set is evicted from the cache, the prefetch entries of the data records marked as special records are changed to the new prefetched data-record set.
2. The method according to claim 1, wherein the access interval is a time interval, an access-count interval, a user-defined logical interval, or a combination thereof.
3. The method according to claim 1, further comprising: for a prefetched data-record set whose prefetch hit count is below the hit threshold, when the set is evicted from the cache, changing the prefetch entries of all accessed data records of the set to the new prefetched data-record set.
4. A system for improving the locality of cache-prefetched data, the system comprising:
a device for counting the prefetch hits of each prefetched data-record set in the cache, wherein the prefetch hit count of a set is the total number of data records in the set that have been accessed;
a device for, for a prefetched data-record set whose prefetch hit count is below a configured hit threshold, when the set is evicted from the cache, writing the data records of the set that have been accessed to a new storage area, where together with other data in that area they form a new prefetched data-record set;
wherein the system further comprises a marking device and a modification device,
the marking device being configured, for each prefetched data-record set in the cache, to:
mark the data record of the set that is accessed first as a special record;
compute the access interval between the currently accessed data record of the set and the previously accessed data record and, if this interval exceeds a configured interval threshold, mark the currently accessed data record as a special record;
and the modification device being configured, for a prefetched data-record set whose prefetch hit count is below the hit threshold, when the set is evicted from the cache, to change the prefetch entries of the data records marked as special records to the new prefetched data-record set.
5. The system according to claim 4, wherein the modification device is further configured, for a prefetched data-record set whose prefetch hit count is below the hit threshold, when the set is evicted from the cache, to change the prefetch entries of all accessed data records of the set to the new prefetched data-record set.
6. A cache access method, the method comprising:
for a data record to be accessed, if the cache hits, incrementing by 1 the prefetch hit count of the prefetched data-record set in the cache that contains this data record;
if the cache misses and there is a free cache entry, prefetching the prefetched data-record set containing this data record into the free cache entry, and incrementing by 1 the prefetch hit count of this prefetched data-record set;
if the cache misses and there is no free cache entry, performing the following:
judging whether the prefetch hit count of the prefetched data-record set in a selected cache entry is below a configured hit threshold, and if so, writing the accessed data records of that set to a new storage area, where together with other data in that area they form a new prefetched data-record set; and
prefetching the prefetched data-record set containing the data record to be accessed into the selected cache entry, and incrementing by 1 the prefetch hit count of this prefetched data-record set;
wherein, for each prefetched data-record set in the cache:
the data record of the set that is accessed first is marked as a special record;
the access interval between the currently accessed data record of the set and the previously accessed data record is computed, and if this interval exceeds a configured interval threshold, the currently accessed data record is marked as a special record;
and wherein, when a prefetched data-record set whose prefetch hit count is below the hit threshold is evicted from the cache, the prefetch entries of the data records marked as special records are changed to the new prefetched data-record set.
7. The method according to claim 6, further comprising: when a prefetched data-record set whose prefetch hit count is below the hit threshold is evicted from the cache, changing the prefetch entries of all accessed data records of the set to the new prefetched data-record set.
8. The method according to claim 6 or 7, further comprising:
when the number of data records in the new prefetched data-record set reaches a configured threshold, for each prefetched data-record set in the cache whose prefetch hit count is below the configured hit threshold, writing the accessed data records of that set into this new prefetched data-record set; and
stopping writes to this new prefetched data-record set and obtaining free cache space for storing another new prefetched data-record set.
CN201310298246.7A 2013-07-16 2013-07-16 Method and system for improving the locality of cache-prefetched data, and cache access method Active CN103383666B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310298246.7A CN103383666B (en) 2013-07-16 2013-07-16 Method and system for improving the locality of cache-prefetched data, and cache access method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310298246.7A CN103383666B (en) 2013-07-16 2013-07-16 Method and system for improving the locality of cache-prefetched data, and cache access method

Publications (2)

Publication Number Publication Date
CN103383666A CN103383666A (en) 2013-11-06
CN103383666B true CN103383666B (en) 2016-12-28

Family

ID=49491463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310298246.7A Active CN103383666B (en) 2013-07-16 2013-07-16 Method and system for improving the locality of cache-prefetched data, and cache access method

Country Status (1)

Country Link
CN (1) CN103383666B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103858112A (en) * 2013-12-31 2014-06-11 华为技术有限公司 Data-caching method, device and system
CN104063330B (en) * 2014-06-25 2017-04-26 华为技术有限公司 Data prefetching method and device
CN107463509B (en) * 2016-06-05 2020-12-15 华为技术有限公司 Cache management method, cache controller and computer system
CN107168648B (en) * 2017-05-04 2021-03-02 Oppo广东移动通信有限公司 File storage method and device and terminal
CN108287795B (en) * 2018-01-16 2022-06-21 安徽蔻享数字科技有限公司 Processor cache replacement method
CN109491873B (en) * 2018-11-05 2022-08-02 阿里巴巴(中国)有限公司 Cache monitoring method, medium, device and computing equipment
CN116107926B (en) * 2023-02-03 2024-01-23 摩尔线程智能科技(北京)有限责任公司 Cache replacement policy management method, device, equipment, medium and program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1499382A (en) * 2002-11-05 2004-05-26 华为技术有限公司 Method for implementing cache in high efficiency in redundancy array of inexpensive discs
CN102110073A (en) * 2011-02-01 2011-06-29 中国科学院计算技术研究所 Replacement device and method for chip shared cache and corresponding processor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7546422B2 (en) * 2002-08-28 2009-06-09 Intel Corporation Method and apparatus for the synchronization of distributed caches

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1499382A (en) * 2002-11-05 2004-05-26 华为技术有限公司 Method for implementing cache in high efficiency in redundancy array of inexpensive discs
CN102110073A (en) * 2011-02-01 2011-06-29 中国科学院计算技术研究所 Replacement device and method for chip shared cache and corresponding processor

Also Published As

Publication number Publication date
CN103383666A (en) 2013-11-06

Similar Documents

Publication Publication Date Title
CN103383666B (en) Method and system for improving the locality of cache-prefetched data, and cache access method
CN103885728B (en) Disk caching system based on solid-state disk
US8745334B2 (en) Sectored cache replacement algorithm for reducing memory writebacks
CN104115134B (en) For managing the method and system to be conducted interviews to complex data storage device
CN102760101B (en) SSD-based (Solid State Disk) cache management method and system
CN105094686B (en) Data cache method, caching and computer system
US6578111B1 (en) Cache memory system and method for managing streaming-data
US20080052488A1 (en) Method for a Hash Table Lookup and Processor Cache
CN107368436B (en) Flash memory cold and hot data separated storage method combined with address mapping table
CN104063330B (en) Data prefetching method and device
US20130205089A1 (en) Cache Device and Methods Thereof
CN110795363B (en) Hot page prediction method and page scheduling method of storage medium
JPS61156346A (en) Apparatus for forestalling memory hierarchy
KR102453192B1 (en) Cache entry replacement based on availability of entries in other caches
US20050235115A1 (en) System, method and storage medium for memory management
CN111488125B (en) Cache Tier Cache optimization method based on Ceph cluster
US6240489B1 (en) Method for implementing a pseudo least recent used (LRU) mechanism in a four-way cache memory within a data processing system
CN110532200B (en) Memory system based on hybrid memory architecture
US9477416B2 (en) Device and method of controlling disk cache by identifying cached data using metadata
US8924652B2 (en) Simultaneous eviction and cleaning operations in a cache
US6643743B1 (en) Stream-down prefetching cache
CN104424132B (en) High performance instruction cache system and method
US6598124B1 (en) System and method for identifying streaming-data
CN104375955A (en) Cache device and control method thereof
CN103514107B (en) High-performance data caching system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant