CN102214146A - Step size adaptive Cache pre-fetching method and system - Google Patents


Info

Publication number
CN102214146A
Authority
CN
China
Prior art keywords
prefetch
cache
address
index value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011102133606A
Other languages
Chinese (zh)
Other versions
CN102214146B (en)
Inventor
郭阳
靳强
鲁建壮
陈书明
胡春媚
刘蓬侠
李勇
余再祥
许邦建
吴虎成
罗恒
唐涛
刘祥远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN2011102133606A priority Critical patent/CN102214146B/en
Publication of CN102214146A publication Critical patent/CN102214146A/en
Application granted granted Critical
Publication of CN102214146B publication Critical patent/CN102214146B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3824Operand accessing
    • G06F9/383Operand prefetching

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a stride-adaptive Cache prefetching method and system. The method comprises the following steps: a prefetch table is set up; an index value is computed from the miss address; two predicted addresses are computed and compared with the index value; if either predicted address equals the index value, the prefetch table is considered hit, otherwise a new entry is allocated for the miss address; if the prefetch table is hit and prefetched data is present in it, the prefetched data is returned to the Cache; the prefetch table is then updated; finally, if the prefetch table is hit, it is determined whether the hit entry satisfies the prefetch condition, and if so, a prefetch operation is triggered. The system comprises a prefetch table, an address-translation component for producing the index value, an adder for computing the two predicted addresses, a comparator for comparing the two predicted addresses with the index value, and an update-control logic component for deciding whether to prefetch and for updating the prefetch table. The disclosed method and system offer strong portability, high prefetch accuracy, and related advantages.

Description

Stride-adaptive Cache prefetching method and system
Technical field
The present invention relates to the field of microprocessors, and more particularly to a Cache prefetching method and system.
Background technology
The Cache (cache memory) is widely used and has largely overcome the "memory wall" that limits microprocessor performance. Advances in very-large-scale integration also make large on-chip Caches practical, which greatly reduces the Cache miss rate. When a Cache miss does occur, however, a large Cache means that more data must be read from external memory, which increases the miss penalty. A prefetching Cache predicts the address at which the next Cache miss will occur and reads the data into the Cache ahead of time, during idle memory-access cycles, so that the next access to that data hits the Cache, hiding the miss penalty.
Most existing Cache prefetch mechanisms adopt a prefetch-table structure similar to that shown in Fig. 1, where Ins addr is the memory-access instruction address, Prefetch addr is the prefetch address, stride is the prefetch stride, and valid indicates whether the entry is in use. This prefetch-table structure has the following shortcomings. First, it prefetches only one type of access-address sequence: it cannot flexibly adjust the predicted-address calculation to match the miss-address sequence type, so it lacks flexibility and is limited in function. Second, it uses the instruction address as the index for querying the prefetch table, which requires adding a dedicated instruction-address datapath from the CPU to the Cache and complicates the design.
Summary of the invention
The technical problem to be solved by this invention is the following: in view of the problems of the prior art, the invention provides a stride-adaptive Cache prefetching method and system that use the Cache miss address to compute the index for querying the prefetch table, can dynamically and flexibly adjust the predicted-address calculation, and offer strong portability and high prefetch accuracy.
To solve the above technical problems, the present invention adopts the following technical scheme:
A stride-adaptive Cache prefetching method comprises the following steps:
(1) Set up the prefetch table: create a prefetch table in which the miss address is stored, recorded in the PA field.
(2) Query the prefetch table: when a Cache miss occurs, first query the prefetch table, using the high-order bits of the miss address as the index value; compute two predicted addresses by addition from the contents of the stored entries, and compare each predicted address with the index value. If either predicted address equals the index value, the prefetch table is hit; go to step (4). Otherwise the prefetch table is missed; go to step (3).
(3) Allocate a new entry: if the prefetch table is missed, allocate a new entry in the prefetch table for the miss address; go to step (5).
(4) Return data: if the prefetch table is hit and prefetched data is present in it, return the prefetched data to the Cache; otherwise return no data. If no prefetched data is returned, continue the normal memory access to fetch the required data; go to step (5).
(5) Update the prefetch table: update the information in the prefetch table.
(6) Prefetch: if the prefetch table is hit, determine whether the hit entry satisfies the prefetch condition; if so, trigger a prefetch operation.
As further improvements of the prefetching method of the present invention:
The index value is specifically the memory-access address of the current Cache miss with its low n bits masked off, where n = log2(cache line size).
The prefetch table also records: an S field, the stride of the access-address sequence, defined as the difference between the current miss address and the previous miss address; and a DS field, the linear change in the stride of the access-address sequence.
The two predicted addresses in step (2) are specifically: the predicted address PDA_c for the constant-stride access type and the predicted address PDA_l for the linearly-varying-stride access type, where PDA_c = PA + S and PDA_l = PA + S + DS.
The prefetch table also stores the predicted access-address sequence type, recorded in the PT field, where PT = 0 indicates that the entry's predicted type is the constant-stride access type and PT = 1 indicates the linearly-varying-stride access type.
A prefetch-table hit is either a true hit or a false hit: a true hit is one in which the hit entry's predicted type is consistent with the current access-address sequence; a false hit is one in which the predicted type is inconsistent with the current access-address sequence type.
The prefetch table also records: a C field, a saturating counter that holds the entry's confidence value and is used to decide when to prefetch;
a V field, which indicates whether the entry is valid: V = 1 means the entry is valid, otherwise invalid;
a PR field, which indicates whether prefetched data is present in the entry's D field: PR = 1 means the data in the entry's D field has been prefetched and is valid, otherwise invalid.
The prefetch table is also governed by a confidence scheme <I, D, T, ST> (whose parameters are preferably tuned by simulation before hardware implementation): on a prefetch-table hit, C = C + I; on a prefetch-table miss, C = C - D; T is the confidence threshold that triggers prefetching; and ST is the saturation value of the saturating counter C.
The update of the prefetch table in step (5) specifically comprises the following two cases:
5.1) Updates that do not change the predicted type:
a. On a true hit, increase the C of the hit entry by I and update it in order: DS = index value - PA - S, S = index value - PA, PA = index value; at the same time decrease the C of all other entries by D.
b. On a prefetch-table miss, allocate a new entry for the current miss address, decrease the C of all entries by D, and update each entry whose C is below the confidence threshold T in order: DS = index value - PA - S, S = index value - PA, PA = index value.
c. Set the PT and PR of those entries to 0, and clear the V of any missed entry whose C = 0.
5.2) Updates that change the predicted type:
d. On a false hit, invert the PT of the hit entry, set C to I, set PR to 0, and update in order: DS = index value - PA - S, S = index value - PA, PA = index value.
The present invention also provides a stride-adaptive Cache prefetching system for implementing the above method, the prefetching system comprising:
a prefetch table, for storing the information required by Cache prefetch operations, the information including the Cache miss address;
an address-translation component, for converting the Cache miss address into the index value used to query the prefetch table;
an adder, for computing the two predicted addresses from the information in the prefetch table;
a comparator, for comparing the two predicted addresses with the index value and sending the comparison result to the update-control logic component;
an update-control logic component, for deciding, according to the comparator's result, whether to perform a prefetch operation, and for updating the prefetch table.
As a further improvement of the prefetching system of the present invention:
the prefetching system further comprises an AND unit for determining whether prefetched data should be returned, the AND unit being connected to the comparator and to the prefetch table.
Compared with the prior art, the advantages of the invention are:
1. The stride-adaptive Cache prefetching method of the invention uses the miss address to index the prefetch table, avoiding the addition of an extra datapath, so the prefetch structure can very easily be added on top of an existing Cache design to obtain a prefetching Cache; portability is therefore strong.
2. The stride-adaptive Cache prefetching method of the invention can dynamically recognize the access-address sequence type and adaptively adjust the predicted-address calculation, so that a wider variety of access-address sequences can be prefetched. At the same time, by introducing the confidence scheme and choosing suitable structural parameters, the invention triggers prefetch operations at the right time, achieving a higher prefetch success rate and hiding more of the miss penalty.
3. The stride-adaptive Cache prefetching system of the invention implements the above method with a small number of additional components. It is simple in structure, low in hardware cost and strongly portable, and can effectively improve the performance of the Cache system.
Description of the drawings
Fig. 1 is a schematic diagram of the prefetch-table structure of a typical existing prefetch mechanism;
Fig. 2 is a schematic flow diagram of the prefetching method of the present invention;
Fig. 3 is a schematic diagram of the basic structure of the prefetching system of the present invention;
Fig. 4 is a schematic flow diagram of querying the prefetch table in the specific embodiment, where Fig. 4(a) shows the true-hit flow and Fig. 4(b) shows the false-hit flow;
Fig. 5 is a schematic flow diagram of updating the prefetch table in the specific embodiment.
Reference numerals: 1, address-translation component; 2, adder; 3, comparator; 4, update-control logic component; 5, AND unit.
Detailed description
The present invention is described in further detail below with reference to the drawings and a specific embodiment.
As shown in Fig. 3, the stride-adaptive Cache prefetching system of the present invention comprises:
a prefetch table, for storing the information required by Cache prefetch operations;
an address-translation component 1, for converting the Cache miss address into the index value (IDX) used to query the prefetch table;
an adder 2, for computing the two predicted addresses from the information in the prefetch table;
a comparator 3, for comparing the two predicted addresses with the index value and sending the comparison result to the update-control logic component 4;
an update-control logic component 4, for deciding, according to the comparator 3's result, whether to perform a prefetch operation, and for updating the prefetch table.
In the present embodiment, the prefetching system further comprises an AND unit 5 for determining whether prefetched data should be returned; the AND unit 5 is connected to the comparator 3 and to the prefetch table.
The prefetch table contains the following information:
a PA field, the memory-access address of the most recent Cache miss, i.e. the miss address;
an S field, the stride of the access-address sequence, defined as the difference between the current miss address and the previous miss address;
a DS field, the linear change in the stride of the access-address sequence;
a PT field, storing the predicted access-address sequence type, where PT = 0 indicates that the entry's predicted type is the constant-stride access type and PT = 1 indicates the linearly-varying-stride access type;
a C field, a saturating counter that holds the entry's confidence value and is used to decide when to prefetch;
a V field, indicating whether the entry is valid: V = 1 means the entry is valid, otherwise invalid;
a PR field, indicating whether prefetched data is present in the entry's D field: PR = 1 means the data in the D field has been prefetched and is valid, otherwise invalid.
The prefetch table is also governed by a confidence scheme <I, D, T, ST>, whose parameters are preferably tuned by simulation before hardware implementation. In this scheme, C = C + I on a prefetch-table hit and C = C - D on a prefetch-table miss; T is the confidence threshold that triggers prefetching, and ST is the saturation value of the saturating counter C.
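The confidence scheme <I, D, T, ST> described above can be sketched as a small saturating counter. The class below is an illustrative model only (it is not part of the patent); the names I, D, T, ST and C follow the fields of the description.

```python
class ConfidenceCounter:
    """Saturating confidence counter C driven by prefetch-table hits and misses."""

    def __init__(self, inc, dec, threshold, saturation):
        self.I, self.D, self.T, self.ST = inc, dec, threshold, saturation
        self.C = 0

    def on_hit(self):
        # C = C + I on a prefetch-table hit, saturated at ST
        self.C = min(self.C + self.I, self.ST)

    def on_miss(self):
        # C = C - D on a prefetch-table miss, floored at 0
        self.C = max(self.C - self.D, 0)

    def should_prefetch(self):
        # a prefetch is triggered once the confidence reaches the threshold T
        return self.C >= self.T
```

With the parameters <1, 2, 2, 3> used in the application examples below, two consecutive hits are enough to reach the threshold, and a single miss costs two hits' worth of confidence.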
The stride-adaptive Cache prefetching method of the present invention can be realized by the above prefetching system; as shown in Fig. 2, the method proceeds as follows:
1. Set up the prefetch table:
Create a prefetch table with the fields PA, S, DS, PT, C, V and PR, used to store the information described above.
2. Query the prefetch table:
When a Cache miss occurs, first query the prefetch table. The address-translation component 1 forms the index value (IDX) from the high-order bits of the miss address, i.e. it masks off the low n bits of the miss address, where n = log2(cache line size) and the cache line size is that of the Cache to which the prefetching system is attached.
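As a sketch, the masking rule performed by the address-translation component 1 can be modeled as follows; the function name and the Python representation are assumptions made for illustration.

```python
import math

def index_value(miss_address, cache_line_size):
    """Form the index value IDX by masking off the low n bits of the miss
    address, where n = log2(cache line size)."""
    n = int(math.log2(cache_line_size))
    return (miss_address >> n) << n
```

For example, with 64-byte cache lines, any miss address inside the same line maps to the same index value.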
As shown in Fig. 4, the adder 2 computes two predicted addresses: the predicted address PDA_c for the constant-stride access type and the predicted address PDA_l for the linearly-varying-stride access type, where PDA_c = PA + S and PDA_l = PA + S + DS. The comparator 3 compares PDA_c and PDA_l with IDX; if either equals IDX, the prefetch table is hit and the method goes to step 4; otherwise the prefetch table is missed and the method goes to step 3.
A prefetch-table hit is either a true hit or a false hit. When IDX equals PDA_c and PT is 0, or IDX equals PDA_l and PT is 1, the hit is a true hit (Fig. 4(a)): the entry's predicted type is consistent with the current access-address sequence. A false hit (Fig. 4(b)) is one in which the predicted type is inconsistent with the current access-address sequence type, but merely changing the predicted type (i.e. inverting PT) during the subsequent table update makes it consistent with the current access-address sequence type.
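The hit classification just described (true hit, false hit, or miss) can be sketched as a pure function over the entry fields PA, S, DS and PT; this model is illustrative only.

```python
def classify_lookup(idx, pa, s, ds, pt):
    """Classify a prefetch-table lookup as 'true_hit', 'false_hit', or 'miss'."""
    pda_c = pa + s        # constant-stride prediction, PDA_c = PA + S
    pda_l = pa + s + ds   # linearly-varying-stride prediction, PDA_l = PA + S + DS
    if (idx == pda_c and pt == 0) or (idx == pda_l and pt == 1):
        return 'true_hit'   # the matching prediction agrees with the entry's type
    if idx == pda_c or idx == pda_l:
        return 'false_hit'  # a prediction matched, but under the other type
    return 'miss'
```

Note that when DS = 0 the two predicted addresses coincide, so a matching lookup is always a true hit regardless of PT; the first branch handles this case.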
3. Allocate a new entry:
When the miss address misses the prefetch table, allocate a new entry in the prefetch table for the miss address. The policy is to prefer an empty entry, i.e. one whose V is 0; if there is no empty entry, select the entry with the smallest C, or simply the first entry. Once an entry is selected, set its PA to the current miss address, set V to 1 to mark the entry valid, and initialize S, DS, PT, C and PR all to 0; go to step 5.
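The allocation policy of step 3 (prefer an empty entry, else the entry with the smallest C, else the first) can be sketched as follows; the dict-based entry representation is an assumption made for illustration.

```python
def allocate_entry(table, miss_address):
    """Select and initialize a prefetch-table entry for a missing address."""
    invalid = [e for e in table if e['V'] == 0]
    # min() returns the first entry with the smallest C, which also covers
    # the "simply select the first" fallback when confidences are equal
    victim = invalid[0] if invalid else min(table, key=lambda e: e['C'])
    victim.update(PA=miss_address, V=1, S=0, DS=0, PT=0, C=0, PR=0)
    return victim
```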
4. Return data:
If the miss address scores a true hit in the prefetch table and PR is 1, return the prefetched data to the Cache; otherwise return no data. If no prefetched data is returned, i.e. the lookup was not a true hit or the hit entry holds no prefetched data, continue the normal memory access to fetch the required data; go to step 5.
5. Update the prefetch table:
As shown in Fig. 5, the information in the prefetch table is updated regardless of whether the query hit. The update comprises the following two cases:
5.1) Updates that do not change the predicted type:
a. On a true hit, increase the C of the hit entry by I and update it in order: DS = IDX - PA - S, S = IDX - PA, PA = IDX; at the same time decrease the C of all other entries by D.
b. On a prefetch-table miss, allocate a new entry for the current miss address, decrease the C of all entries by D, and update each entry whose C is below the confidence threshold T in order: DS = IDX - PA - S, S = IDX - PA, PA = IDX.
c. Set the PT and PR of those entries to 0, and clear the V (mark invalid) of any missed entry whose C = 0.
5.2) Updates that change the predicted type:
d. On a false hit, invert the PT of the hit entry, set C to I, set PR to 0, and update in order: DS = IDX - PA - S, S = IDX - PA, PA = IDX.
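A hedged sketch of the step-5 update rules follows. Entries are modeled as dicts, and the miss case is simplified: allocation of the new entry is assumed to happen separately (as in step 3), and the "set PT and PR to 0" clause is applied to the retargeted entries, which is one reading of the description.

```python
def update_table(table, idx, hit_entry, result, I, D, T, ST):
    """Apply the step-5 update for a lookup classified as
    'true_hit', 'false_hit', or 'miss' (illustrative sketch)."""

    def retarget(e):
        # order matters: DS uses the old S and PA, S uses the old PA
        e['DS'] = idx - e['PA'] - e['S']
        e['S'] = idx - e['PA']
        e['PA'] = idx

    if result == 'true_hit':
        hit_entry['C'] = min(hit_entry['C'] + I, ST)   # C += I, saturated
        retarget(hit_entry)
        for e in table:
            if e is not hit_entry:
                e['C'] = max(e['C'] - D, 0)            # decay the others
    elif result == 'false_hit':
        hit_entry['PT'] ^= 1                           # flip the predicted type
        hit_entry['C'] = I
        hit_entry['PR'] = 0
        retarget(hit_entry)
    else:  # miss: the new entry for the miss address is allocated elsewhere
        for e in table:
            e['C'] = max(e['C'] - D, 0)
            if e['C'] < T:
                retarget(e)
                e['PT'] = e['PR'] = 0
            if e['C'] == 0:
                e['V'] = 0                             # invalidate decayed entries
```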
6. Prefetch:
When the miss address scores a true hit in the prefetch table, determine whether the hit entry satisfies the prefetch condition; if it does (the C of the hit entry is greater than or equal to the threshold T), trigger a prefetch operation, and set PR to 1 after the prefetched data returns.
The prefetch address is computed differently depending on the value of PT: when PT is 0 the prefetch address is (PA + S), and when PT is 1 it is (PA + S + DS), where PA, S and DS are the values after the update.
Prefetch operations must be performed during idle memory-access cycles, i.e. only when all normal memory accesses have completed; otherwise they wait in the prefetch queue.
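The PT-dependent prefetch-address rule above can be sketched in one line; the dict-based entry representation is an assumption made for illustration.

```python
def prefetch_address(entry):
    """Prefetch address: PA + S when PT = 0, PA + S + DS when PT = 1."""
    return entry['PA'] + entry['S'] + (entry['DS'] if entry['PT'] else 0)
```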
Application example 1:
In the above prefetching method, the prefetch table is as shown in Table 1, the access pattern has a constant stride, and the confidence scheme is <1, 2, 2, 3>. The process is as follows:
1) The prefetch table allocates a new entry for the first miss address 0x80: PA is written with 0x80, V is set to 1, and S, DS, PT, C and PR are all initialized to 0.
2) The next miss address is 0x100. Because PT = 0 and 0x100 ≠ PDA_c (0x80 + 0), the prefetch table is missed; the table is updated without increasing the counter C: PA <= 0x100, S <= (0x100 - 0x80), DS <= (0x100 - 0x80 - 0).
3) The next miss address is 0x180. Because PT = 0 and 0x180 = PDA_c (0x100 + 0x80), the prefetch table is hit; the counter C is increased by 1 and the table is updated: PA <= 0x180, S <= (0x180 - 0x100), DS <= (0x180 - 0x100 - 0x80). Since C < T, no prefetch is triggered.
4) The next miss address is 0x200. Because PT = 0 and 0x200 = PDA_c (0x180 + 0x80), the prefetch table is hit; C is increased by 1 and the table is updated: PA <= 0x200, S <= (0x200 - 0x180), DS <= (0x200 - 0x180 - 0x80). Now C = T, so a prefetch is triggered at address 0x280 (PA + S), and PR is set to 1 after the prefetched data returns.
5) The next miss address is 0x280. Because PT = 0 and 0x280 = PDA_c (0x200 + 0x80), the prefetch table is hit; C is increased by 1 and the table is updated: PA <= 0x280, S <= (0x280 - 0x200), DS <= (0x280 - 0x200 - 0x80). Since PR = 1, the prefetched data is returned to the Cache; C > T triggers a prefetch at address 0x300 (PA + S), and PR is set to 1 after the prefetched data returns.
6) The next miss address is 0x300. Because PT = 0 and 0x300 = PDA_c (0x280 + 0x80), the prefetch table is hit; C is already 3, the saturation value, so it is not increased further. The table is updated: PA <= 0x300, S <= (0x300 - 0x280), DS <= (0x300 - 0x280 - 0x80). Since PR = 1, the prefetched data is returned to the Cache; C > T triggers a prefetch at address 0x380 (PA + S), and PR is set to 1 after the prefetched data returns.
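Application example 1 can be reproduced with a minimal single-entry simulation. This sketch is illustrative only; the confidence scheme <1, 2, 2, 3> and the constant-stride miss sequence are taken from the example, while the function and its single-entry simplification are assumptions.

```python
def run_trace(addresses, I=1, D=2, T=2, ST=3):
    """Single-entry, constant-stride (PT = 0) model of the prefetch table.

    Returns the list of prefetch addresses triggered by the miss sequence."""
    pa = s = ds = c = 0
    prefetches = []
    first = True
    for idx in addresses:
        if first:
            pa, first = idx, False          # allocate an entry for the first miss
            continue
        if idx == pa + s:                   # constant-stride prediction hit
            c = min(c + I, ST)
        else:                               # miss: decay confidence
            c = max(c - D, 0)
        # update in order: DS from old S and PA, S from old PA, then PA
        ds, s, pa = idx - pa - s, idx - pa, idx
        if c >= T:
            prefetches.append(pa + s)       # prefetch at PA + S (PT = 0)
    return prefetches
```

Running the example's miss sequence 0x80, 0x100, 0x180, 0x200, 0x280, 0x300 yields prefetches for 0x280, 0x300 and 0x380, matching steps 4) to 6).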
Table 1
(Table 1 appears as an image in the original publication.)
Application example 2:
In the above prefetching method, the prefetch table is as shown in Table 2, the access pattern has a linearly varying stride, and the confidence scheme is <1, 2, 2, 3>. The process is as follows:
1) The prefetch table allocates a new entry for the first miss address 0x80: PA is written with 0x80, V is set to 1, and S, DS, PT, C and PR are all initialized to 0.
2) The next miss address is 0x100. Because PT = 0 and 0x100 ≠ PDA_c (0x80 + 0), the prefetch table is missed; the table is updated without increasing the counter C: PA <= 0x100, S <= (0x100 - 0x80), DS <= (0x100 - 0x80 - 0).
3) The next miss address is 0x200. Because PT = 0 and 0x200 ≠ PDA_c (0x100 + 0x80) but 0x200 = PDA_l (0x100 + 0x80 + 0x80), the predicted type is changed (PT <= 1) and the lookup counts as a hit of the prefetch table; C is set to 1 and the table is updated: PA <= 0x200, S <= (0x200 - 0x100), DS <= (0x200 - 0x100 - 0x80). Since C < T, no prefetch is triggered.
4) The next miss address is 0x380. Because PT = 1 and 0x380 = PDA_l (0x200 + 0x100 + 0x80), the prefetch table is hit; C is increased by 1 and the table is updated: PA <= 0x380, S <= (0x380 - 0x200), DS <= (0x380 - 0x200 - 0x100). Now C = T, so a prefetch is triggered at address 0x580 (PA + S + DS), and PR is set to 1 after the prefetched data returns.
5) The next miss address is 0x580. Because PT = 1 and 0x580 = PDA_l (0x380 + 0x180 + 0x80), the prefetch table is hit; C is increased by 1 and the table is updated: PA <= 0x580, S <= (0x580 - 0x380), DS <= (0x580 - 0x380 - 0x180). Since PR = 1, the prefetched data is returned to the Cache; C > T triggers a prefetch at address 0x800 (PA + S + DS), and PR is set to 1 after the prefetched data returns.
6) The next miss address is 0x800. Because PT = 1 and 0x800 = PDA_l (0x580 + 0x200 + 0x80), the prefetch table is hit; C is already 3, the saturation value, so it is not increased further. The table is updated: PA <= 0x800, S <= (0x800 - 0x580), DS <= (0x800 - 0x580 - 0x200). Since PR = 1, the prefetched data is returned to the Cache; C > T triggers a prefetch at address 0xB00 (PA + S + DS), and PR is set to 1 after the prefetched data returns.
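Application example 2 can likewise be reproduced with a single-entry simulation that includes both predicted addresses and the PT flip on a false hit. This sketch is illustrative only; the confidence scheme <1, 2, 2, 3> and the miss sequence are taken from the example.

```python
def run_linear_trace(addresses, I=1, D=2, T=2, ST=3):
    """Single-entry model with type switching between constant stride (PT = 0)
    and linearly varying stride (PT = 1).

    Returns the list of prefetch addresses triggered by the miss sequence."""
    pa = s = ds = pt = c = 0
    prefetches = []
    first = True
    for idx in addresses:
        if first:
            pa, first = idx, False
            continue
        pda_c, pda_l = pa + s, pa + s + ds
        if (idx == pda_c and pt == 0) or (idx == pda_l and pt == 1):
            c = min(c + I, ST)                  # true hit
        elif idx == pda_c or idx == pda_l:
            pt ^= 1                             # false hit: flip predicted type
            c = I
        else:
            c = max(c - D, 0)                   # miss
        ds, s, pa = idx - pa - s, idx - pa, idx # update in the required order
        if c >= T:
            prefetches.append(pa + s + (ds if pt else 0))
    return prefetches
```

Running the example's miss sequence 0x80, 0x100, 0x200, 0x380, 0x580, 0x800 yields prefetches for 0x580, 0x800 and 0xB00, matching steps 4) to 6); the false hit at 0x200 switches PT to 1.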
Table 2
In summary, the prefetching method of the present invention uses the miss address to compute the index for querying the prefetch table, adaptively adjusts the predicted-address calculation according to the access-address sequence type, and triggers prefetches at the right time when the confidence value meets the threshold requirement. The invention can dynamically adjust the predicted-address calculation, is strongly portable, achieves high prefetch accuracy, and is well suited to improving Cache performance.
The above is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited to the above embodiment; all technical schemes falling within the idea of the present invention belong to its scope of protection. It should be pointed out that, for those skilled in the art, improvements and modifications that do not depart from the principle of the present invention shall also be regarded as falling within the scope of protection of the present invention.

Claims (8)

1. the Cache forecasting method of an adaptive step is characterized in that may further comprise the steps:
(1) table of looking ahead is set: the table of looking ahead is set, stores the fail address in showing, be recorded in the PA territory described looking ahead;
(2) inquire about the table of looking ahead: when taking place at first to inquire about the table of looking ahead when Cache lost efficacy, a high position of at first using the fail address is as index value; Content according to the clauses and subclauses of storage in the table of looking ahead obtains two predicted address by additional calculation, and described two predicted address are compared with index value respectively, if any predicted address is identical with index value, then judges and hits the table of looking ahead, and changes step (4); Otherwise judge the miss table of looking ahead, change step (3);
(3) distribute new list item: if miss looking ahead when showing then is that a new list item is distributed in this fail address in the table of looking ahead; Change step (5);
(4) return data:, return the data of having looked ahead and give Cache if hit in the look ahead table and the table of looking ahead during prefetch data; Otherwise return data not; If there is not prefetch data to return, then proceeds accessing operation and fetch required data; Change step (5);
(5) upgrade the table of looking ahead: the information to the table of looking ahead is upgraded;
(6) look ahead:, judge then whether hitting in the table of looking ahead satisfies the condition of looking ahead, and as satisfied, then triggers prefetch operation, looks ahead if hit the table of looking ahead.
2. the Cache forecasting method of adaptive step according to claim 1 is characterized in that, described index value is meant that specifically the memory access address conductively-closed when the described last Cache of generation lost efficacy falls on the ground the low n position of location, wherein n=log 2(Cache line size).
3. The step-size-adaptive Cache prefetching method according to claim 1 or 2, wherein the prefetch table further records:
an S field, the stride of the memory access address sequence, the stride being the difference between the current miss address and the previous miss address;
and a DS field, the linear change in the stride of the memory access address sequence;
the two predicted addresses in step (2) being: the predicted address PDAc of the fixed-stride access type and the predicted address PDAl of the linearly-varying-stride access type, where PDAc = PA + S and PDAl = PA + S + DS.
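The two predictions and the hit test of step (2) can be sketched as follows (an illustrative sketch, not part of the patent; `pa`, `s`, `ds` stand for the PA, S, and DS fields of one table entry):

```python
def predicted_addresses(pa: int, s: int, ds: int):
    """Two candidate addresses per entry, per PDAc = PA + S (fixed stride)
    and PDAl = PA + S + DS (linearly varying stride)."""
    pda_c = pa + s        # fixed-stride prediction
    pda_l = pa + s + ds   # linearly-varying-stride prediction
    return pda_c, pda_l

def is_hit(index_value: int, pa: int, s: int, ds: int) -> bool:
    """Prefetch-table hit: either predicted address equals the index value."""
    return index_value in predicted_addresses(pa, s, ds)
```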
4. The step-size-adaptive Cache prefetching method according to claim 3, wherein the prefetch table further stores the predicted type of the memory access address sequence, recorded in a PT field, where PT = 0 indicates that the predicted type of the current entry is the fixed-stride access type, and PT = 1 indicates that the predicted type of the current entry is the linearly-varying-stride access type;
a prefetch-table hit comprising a true hit and a false hit, a true hit being a hit whose predicted type is consistent with the current memory access address sequence, and a false hit being one whose current predicted type is inconsistent with the type of the memory access address sequence.
5. The step-size-adaptive Cache prefetching method according to claim 4, wherein the prefetch table further records:
a C field, a saturating counter recording the confidence value of the current entry, used to judge when to perform prefetching;
a V field, identifying whether the current entry is valid, V = 1 indicating that the current entry is valid, otherwise invalid;
a PR field, identifying whether the D field of the current entry holds prefetched data, PR = 1 indicating that the prefetched data in the D field of the corresponding entry is valid, otherwise invalid;
the prefetch table further being provided with a confidence system <I, D, T, ST>, in which C = C + I on a prefetch-table hit and C = C - D on a prefetch-table miss, T being the confidence threshold for triggering a prefetch, and ST being the saturation value of the saturating counter C;
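The confidence system <I, D, T, ST> of claim 5 amounts to a saturating counter; a minimal sketch follows (illustrative only; the claim does not state a lower bound, so clamping at 0 is our assumption for a saturating counter):

```python
class ConfidenceCounter:
    """Saturating counter C under a confidence system <I, D, T, ST>.
    Field names follow the claim; the class itself is illustrative."""
    def __init__(self, I: int, D: int, T: int, ST: int):
        self.I, self.D, self.T, self.ST = I, D, T, ST
        self.c = 0

    def on_hit(self):
        # Prefetch-table hit: C = C + I, saturating at ST.
        self.c = min(self.c + self.I, self.ST)

    def on_miss(self):
        # Prefetch-table miss: C = C - D, floored at 0 (assumed lower bound).
        self.c = max(self.c - self.D, 0)

    def should_prefetch(self) -> bool:
        # A prefetch is triggered once confidence reaches the threshold T.
        return self.c >= self.T
```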
6. The step-size-adaptive Cache prefetching method according to claim 5, wherein updating the prefetch table in step (5) comprises the following two cases:
5.1) updates that do not change the predicted type:
A. on a true hit in the prefetch table, increase the C of the hit entry by I and update, in order: DS = index value - PA - S, S = index value - PA, and PA = index value, while decreasing the C of the other entries by D;
B. on a prefetch-table miss, allocate a new entry for the current miss address, decrease the C of all entries by D, and for each entry whose C is below the confidence threshold T, update in order: DS = index value - PA - S, S = index value - PA, and PA = index value;
C. set PT and PR to 0, and clear the V of any missed entry whose C = 0;
5.2) updates that change the predicted type:
D. on a false hit in the prefetch table, invert the PT of the hit entry, set C to I, set PR to 0, and update, in order: DS = index value - PA - S, S = index value - PA, and PA = index value.
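Reading cases 5.1) and 5.2) together, the per-entry update on a hit can be sketched as follows (an illustrative sketch covering only the hit paths A and D; the dict layout of the PA, S, DS, PT, PR, C fields is our hypothetical representation):

```python
def update_on_hit(entry: dict, index_value: int, I: int, true_hit: bool):
    """Update a hit prefetch-table entry: case A (true hit) keeps the
    predicted type; case D (false hit) flips PT and resets confidence."""
    if true_hit:                # case A
        entry["C"] += I
    else:                       # case D
        entry["PT"] ^= 1        # invert the predicted type
        entry["C"] = I          # restart confidence from I
        entry["PR"] = 0         # discard the stale prefetched data
    # Common history update, strictly in this order (DS before S before PA):
    entry["DS"] = index_value - entry["PA"] - entry["S"]
    entry["S"] = index_value - entry["PA"]
    entry["PA"] = index_value
```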
7. A step-size-adaptive Cache prefetching system for implementing the method of any one of claims 1 to 6, wherein the prefetching system comprises:
a prefetch table, for storing the information required by Cache prefetch operations, the information including the Cache miss address;
an address translation component, for converting the Cache miss address into the index value used to query the prefetch table;
an adder, for computing the two predicted addresses from the information in the prefetch table;
a comparator, for comparing the two predicted addresses with the index value and sending the comparison result to the update control logic unit;
and an update control logic unit, for judging, according to the comparison result of the comparator, whether to perform a prefetch operation, and for updating the prefetch table.
8. The step-size-adaptive Cache prefetching system according to claim 7, wherein the prefetching system further comprises an AND logic unit for judging whether the prefetched data is returned, the AND logic unit being connected to the comparator and to the prefetch table respectively.
CN2011102133606A 2011-07-28 2011-07-28 Step size adaptive Cache pre-fetching method and system Active CN102214146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011102133606A CN102214146B (en) 2011-07-28 2011-07-28 Step size adaptive Cache pre-fetching method and system


Publications (2)

Publication Number Publication Date
CN102214146A true CN102214146A (en) 2011-10-12
CN102214146B CN102214146B (en) 2013-04-10

Family

ID=44745464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011102133606A Active CN102214146B (en) 2011-07-28 2011-07-28 Step size adaptive Cache pre-fetching method and system

Country Status (1)

Country Link
CN (1) CN102214146B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6216219B1 (en) * 1996-12-31 2001-04-10 Texas Instruments Incorporated Microprocessor circuits, systems, and methods implementing a load target buffer with entries relating to prefetch desirability
JP2006164218A (en) * 2004-11-11 2006-06-22 Nec Corp Storage system and its cache control method
CN1955947A (en) * 2005-10-28 2007-05-02 中国科学院计算技术研究所 Memory data processing method of cache failure processor


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521149A (en) * 2011-11-28 2012-06-27 曙光信息产业(北京)有限公司 Optimizing polling system and optimizing polling method for collecting data from plurality of buffer zones
CN102521149B (en) * 2011-11-28 2014-08-27 曙光信息产业(北京)有限公司 Optimizing polling system and optimizing polling method for collecting data from plurality of buffer zones
CN102521158A (en) * 2011-12-13 2012-06-27 北京北大众志微***科技有限责任公司 Method and device for realizing data pre-fetching
CN102521158B (en) * 2011-12-13 2014-09-24 北京北大众志微***科技有限责任公司 Method and device for realizing data pre-fetching
CN102662862A (en) * 2012-03-22 2012-09-12 北京北大众志微***科技有限责任公司 Method and device for implementing hybrid prefetch
CN102662862B (en) * 2012-03-22 2015-01-21 北京北大众志微***科技有限责任公司 Method and device for implementing hybrid prefetch
CN104461758B (en) * 2014-11-10 2017-08-25 中国航天科技集团公司第九研究院第七七一研究所 A kind of quick abnormality eliminating method and its processing structure for emptying streamline of tolerance cache missings
CN104461758A (en) * 2014-11-10 2015-03-25 中国航天科技集团公司第九研究院第七七一研究所 Exception handling method and structure tolerant of missing cache and capable of emptying assembly line quickly
WO2016097794A1 (en) * 2014-12-14 2016-06-23 Via Alliance Semiconductor Co., Ltd. Prefetching with level of aggressiveness based on effectiveness by memory access type
US9817764B2 (en) 2014-12-14 2017-11-14 Via Alliance Semiconductor Co., Ltd Multiple data prefetchers that defer to one another based on prefetch effectiveness by memory access type
US10387318B2 (en) 2014-12-14 2019-08-20 Via Alliance Semiconductor Co., Ltd Prefetching with level of aggressiveness based on effectiveness by memory access type
WO2020020175A1 (en) * 2018-07-27 2020-01-30 华为技术有限公司 Data prefetching method and terminal device
US11586544B2 (en) 2018-07-27 2023-02-21 Huawei Technologies Co., Ltd. Data prefetching method and terminal device
CN111639042A (en) * 2020-06-04 2020-09-08 中科芯集成电路有限公司 Method and device for processing consistency of prefetched buffer data
CN111639042B (en) * 2020-06-04 2023-06-02 中科芯集成电路有限公司 Processing method and device for prefetching buffer data consistency
CN113656332A (en) * 2021-08-20 2021-11-16 中国科学院上海高等研究院 CPU cache data prefetching method based on merged address difference sequence
CN113656332B (en) * 2021-08-20 2023-05-26 中国科学院上海高等研究院 CPU cache data prefetching method based on merging address difference value sequence
CN116166575A (en) * 2023-02-03 2023-05-26 摩尔线程智能科技(北京)有限责任公司 Method, device, equipment, medium and program product for configuring access segment length
CN116166575B (en) * 2023-02-03 2024-01-23 摩尔线程智能科技(北京)有限责任公司 Method, device, equipment, medium and program product for configuring access segment length

Also Published As

Publication number Publication date
CN102214146B (en) 2013-04-10

Similar Documents

Publication Publication Date Title
CN102214146B (en) Step size adaptive Cache pre-fetching method and system
US9098418B2 (en) Coordinated prefetching based on training in hierarchically cached processors
US10394563B2 (en) Hardware accelerated conversion system using pattern matching
US10042643B2 (en) Guest instruction to native instruction range based mapping using a conversion look aside buffer of a processor
US9921842B2 (en) Guest instruction block with near branching and far branching sequence construction to native instruction block
CN108874694B (en) Apparatus and method for spatial memory streaming prefetch engine, manufacturing and testing method
US10241795B2 (en) Guest to native block address mappings and management of native code storage
US7284112B2 (en) Multiple page size address translation incorporating page size prediction
US10185567B2 (en) Multilevel conversion table cache for translating guest instructions to native instructions
EP2946286B1 (en) Methods and apparatus for cancelling data prefetch requests for a loop
US9753856B2 (en) Variable caching structure for managing physical storage
US9047198B2 (en) Prefetching across page boundaries in hierarchically cached processors
US9886385B1 (en) Content-directed prefetch circuit with quality filtering
KR20170100003A (en) Cache accessed using virtual addresses
US10353680B2 (en) System converter that implements a run ahead run time guest instruction conversion/decoding process and a prefetching process where guest code is pre-fetched from the target of guest branches in an instruction sequence
US10642618B1 (en) Callgraph signature prefetch
US9600418B2 (en) Systems and methods for improving processor efficiency through caching
CN117389630B (en) Data caching method and device, electronic equipment and readable storage medium
CN102163144A (en) Hardware data pre-fetching method of embedded processor
Garside et al. Prefetching across a shared memory tree within a network-on-chip architecture
US20210303468A1 (en) Apparatuses, methods, and systems for a duplication resistant on-die irregular data prefetcher
EP3283966B1 (en) Virtualization-aware prefetching
JP2024511768A (en) Method and apparatus for DRAM cache tag prefetcher
CN117971723A (en) Data prefetching method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant