CN102354301A - Cache partitioning method - Google Patents

Cache partitioning method

Info

Publication number
CN102354301A
CN102354301A
Authority
CN
China
Prior art keywords
data block
caching data
record
subregion
cache
Prior art date
Legal status
Granted
Application number
CN2011102864226A
Other languages
Chinese (zh)
Other versions
CN102354301B (en)
Inventor
陈天洲
虞保忠
马建良
胡一帆
叶敏娇
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201110286422.6A
Publication of CN102354301A
Application granted
Publication of CN102354301B
Current legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a cache partitioning method, which comprises the following steps: partitioning, namely logically dividing the last-level cache into a first partition and a second partition of equal size; adding cache data block information bits, namely adding access-count bits, wherein the number of times a cache data block has been accessed is represented with two bits; and adding a history access record table, namely adding a table that records cache data blocks which have been accessed, wherein each record holds the tag bits and a valid bit of a cache data block. By partitioning the last-level cache, the method improves the utilization efficiency of the last-level cache; in a high-capacity cache, more frequently accessed cache data blocks are kept in the cache while less frequently accessed ones are moved to main memory, thereby increasing the cache hit rate and improving system performance.

Description

Cache partitioning method
Technical field
The invention belongs to the technical field of memory systems, and relates to a cache partitioning method for large-capacity caches under multi-core architectures.
Background technology
At present, the performance of most computer systems is determined to a great extent by the average memory access latency; raising the cache hit rate reduces the number of main-memory accesses and thereby improves system performance. Modern processors all employ a caching mechanism, whose main role is to bridge the mismatch in speed between the fast processor and the slow main memory. Caches are organized in levels; most current processors adopt three levels of cache (L1, L2, L3), with the last-level cache (L3) closest to main memory. As the capacity of the last-level cache keeps growing, the corresponding management policies must be updated as well, so as to improve cache utilization and reduce the number of main-memory accesses.
Cache management policies comprise an insertion algorithm and a replacement algorithm. The insertion algorithm decides where in the cache a data block read from main memory should be placed. The replacement algorithm is needed because cache capacity is finite: when a new block is brought in, some cached block must be moved from the cache back to main memory to make room for it. The management policy used by most current processors is the least recently used (LRU) algorithm. LRU treats a group of cache blocks as a linked list: when a new cache data block is inserted, the block at the tail of the list is moved to main memory, the other blocks each move one position toward the tail, and the new block is placed at the head. During cache accesses, if a cache data block is hit, the LRU algorithm moves that block to the head position. LRU is effective for managing small caches, but it becomes inefficient for larger ones; since today's last-level caches are large, how to manage such high-capacity caches is a problem many researchers are studying. It is therefore necessary to provide a method that solves the problems currently present.
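For concreteness, the list bookkeeping described above can be sketched in C as follows (an illustrative fragment, not part of the patent; a set is modeled as an array ordered from head, the most recently used position, to tail, the least recently used position, and all names are invented for the example):

```c
#include <stdio.h>

#define WAYS 4  /* a 4-way set, for brevity */

/* One cache set as an ordered list: index 0 is the head (MRU),
   index WAYS-1 is the tail (LRU). */
typedef struct {
    int tags[WAYS];
} lru_set_t;

/* On a hit, move the hit block to the head; the blocks in front of it
   each move one position toward the tail. */
static void lru_touch(lru_set_t *s, int way) {
    int tag = s->tags[way];
    for (int i = way; i > 0; i--)
        s->tags[i] = s->tags[i - 1];
    s->tags[0] = tag;
}

/* On an insertion, the tail block is evicted to main memory, the rest
   shift toward the tail, and the new block is placed at the head. */
static int lru_insert(lru_set_t *s, int tag) {
    int victim = s->tags[WAYS - 1];
    for (int i = WAYS - 1; i > 0; i--)
        s->tags[i] = s->tags[i - 1];
    s->tags[0] = tag;
    return victim;
}

int main(void) {
    lru_set_t s = { .tags = { 3, 2, 1, 0 } };
    lru_touch(&s, 2);                  /* hit on tag 1: it becomes MRU */
    int victim = lru_insert(&s, 9);    /* miss: tail tag 0 is evicted  */
    printf("evicted tag = %d\n", victim);
    return 0;
}
```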
Summary of the invention
The purpose of the embodiments of the invention is to provide a cache partitioning method that increases the cache hit rate and improves system performance.
The embodiments of the invention are realized as a cache partitioning method comprising the following steps:
Partitioning: logically dividing the last-level cache into two regions of equal size, namely partition one and partition two;
Adding cache data block information bits: adding access-count bits, using 2 bits to represent the number of times a cache data block has been accessed;
Adding a history access record table: adding a history access record table that records cache data blocks which have been accessed, where each record holds the tag bits and a valid bit of a cache data block.
Further, partition one and partition two have identical cache configurations; partition one stores cache data blocks that have not been accessed before, while partition two stores cache data blocks that were accessed before but had been moved to main memory.
Further, the added cache data block information bits are the access-count bits, represented with 2 bits.
Further, each cache data block has a number of information bits, mainly comprising tag bits, a valid bit, LRU bits, a read-write bit and the access-count bits.
Further, the history access record table stores the access records of cache data blocks that have been replaced; each record holds the tag bits and a valid bit of a cache data block.
Further, the number of records the history access record table can hold equals the number of cache data blocks a partition can hold.
Further, the history access record table stores the tags of cache data blocks previously replaced to main memory; when a cache data block is about to be moved to main memory, its tag is stored in this table.
Further, when a cache data block is read from main memory into the cache, a table lookup is performed; if the block's tag is found in the record table, the block is stored in partition two and the valid bit of the matching record is set to 0; otherwise the block is stored in partition one.
Further, the record table adopts a first-in-first-out replacement method: when a cache data block's tag is to be stored in the table, the table is first searched for a record whose valid bit is 0; if such a record exists, the tag is stored in it and its valid bit is set to 1; otherwise the tag of the last record in the table is overwritten with the tag to be stored.
The invention partitions the last-level cache and thereby improves its utilization efficiency. For a high-capacity cache, frequently accessed cache data blocks are kept in the cache while rarely accessed ones are moved to main memory, thus increasing the cache hit rate and improving system performance.
Description of drawings
Fig. 1 is a flow diagram of the present invention.
Embodiment
To make the purpose, technical scheme and advantages of the invention clearer, the invention is further elaborated below in conjunction with the accompanying drawing and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention, not to limit it.
The cache partitioning method of the invention divides a large-capacity cache into two partitions of equal size; it is a cache management strategy based on the least-recently-used policy. The two partitions store data blocks of different natures: rarely accessed blocks are moved out of the cache, leaving more room for frequently accessed blocks and thereby improving cache utilization. A history access record table records the cache data blocks that have been accessed. When a cache data block needs to be read from main memory into the cache, this history access record table is searched; if the block's tag is found in the table, the block is placed in partition two, otherwise it is placed in partition one.
Referring to Fig. 1, the cache partitioning method of the invention comprises the following steps:
Partitioning: logically dividing the last-level cache into two regions of equal size, namely partition one and partition two;
Adding cache data block information bits: adding access-count bits, using 2 bits to represent the number of times a cache data block has been accessed;
Adding a history access record table: adding a history access record table that records cache data blocks which have been accessed, where each record holds the tag bits and a valid bit of a cache data block.
Here, partition one and partition two have identical cache configurations: they are the same in set associativity and access latency, and both use the least-recently-used (LRU) management policy. Partition one stores cache data blocks that have not been accessed before, while partition two stores cache data blocks that were accessed before but had been moved to main memory.
The advantage of dividing the cache into two smaller partitions is that a small cache has a lower access latency, since cache access latency grows with capacity: the larger the capacity, the higher the access latency, and the smaller the capacity, the lower the access latency. Partition one stores blocks that have not been accessed before, and these blocks are considered rarely accessed. Partition two stores blocks that were accessed before but had been moved to main memory, and these blocks are considered frequently accessed. Keeping frequently and rarely accessed blocks in separate partitions in this way reduces the access latency of the cache.
The added cache data block information bits are the access-count bits, represented with 2 bits. Each cache data block already carries a number of information bits, mainly comprising tag bits, a valid bit, LRU bits and a read-write bit; different cache replacement algorithms require different information bits. For example, in a 16-way set-associative cache using the least-recently-used (LRU) replacement algorithm, each cache data block needs 4 bits to store its LRU information. On this existing basis the invention adds an access-count field, represented with 2 bits, to record the number of times a cache data block has been accessed; this field is abbreviated UC.
When a new cache data block is read from main memory into the cache (partition one or partition two), its UC value is set to 0, while the other information bits are set according to the original method. During cache accesses, if a cache data block is hit and its UC value is less than 3, its UC value is incremented by 1; since the UC value is represented with 2 bits, its range is 0 to 3.
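A minimal C sketch of these information bits and the UC update rules (the bit-field layout and function names are illustrative assumptions, not a hardware implementation from the patent):

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed per-block metadata for a 16-way set-associative cache:
   tag bits, a valid bit, a 4-bit LRU value, a read-write bit, and
   the 2-bit access counter (UC) added by the method. */
typedef struct {
    uint64_t tag;
    unsigned valid : 1;
    unsigned lru   : 4;   /* 0..15 in a 16-way set */
    unsigned rw    : 1;
    unsigned uc    : 2;   /* access count, range 0..3 */
} block_info_t;

/* On a fill from main memory, the UC value starts at 0. */
static void on_fill(block_info_t *b, uint64_t tag) {
    b->tag   = tag;
    b->valid = 1;
    b->uc    = 0;
}

/* On a hit, UC is incremented only while it is below 3, so the
   2-bit counter saturates instead of wrapping back to 0. */
static void on_hit(block_info_t *b) {
    if (b->uc < 3)
        b->uc++;
}

int main(void) {
    block_info_t b = { 0 };
    on_fill(&b, 0xABCD);
    for (int i = 0; i < 5; i++)
        on_hit(&b);
    printf("uc = %u\n", b.uc);   /* prints 3: saturated after 5 hits */
    return 0;
}
```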
Because cache capacity is limited, a replacement operation is sometimes required. Suppose, as an example, that a new cache data block is to be stored in partition one. The LRU policy first selects the block in partition one with the largest LRU value; this block is called the victim. The victim's UC value is then examined. If the UC value is less than 2, the victim's tag is stored in the history access record table and the victim is moved to main memory. Otherwise, its UC value is reset to 0 and the victim is moved into partition two; at that point the block with the largest LRU value in partition two must in turn be replaced, and this block becomes the new victim. Again its UC value is checked: if it is less than 2, the block is moved directly to main memory and its tag stored in the history access record table; otherwise its UC value is cleared and it is moved into partition one. This repeats until a cache data block is found whose LRU value is largest and whose UC value is less than 2; that block is moved to main memory and its tag is stored in the history access record table.
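The replacement walk just described can be sketched as follows (illustrative simulator-style C: the demoted block's actual insertion into the other partition is elided, only the chain of candidate victims is followed, and all names are invented for the example):

```c
#include <stdint.h>
#include <stdio.h>

#define WAYS 16

typedef struct {
    uint64_t tag;
    unsigned valid : 1, lru : 4, uc : 2;
} blk_t;

/* Stand-ins for the rest of a simulator. */
static void writeback_to_memory(uint64_t tag)  { printf("to memory: 0x%llx\n", (unsigned long long)tag); }
static void history_table_record(uint64_t tag) { printf("log tag:   0x%llx\n", (unsigned long long)tag); }

/* Index of the LRU block (largest LRU value) in one set of a partition. */
static int lru_way(const blk_t set[WAYS]) {
    for (int w = 0; w < WAYS; w++)
        if (set[w].lru == WAYS - 1)
            return w;
    return WAYS - 1;
}

/* Make room in partition p (0 or 1) of the addressed set. A victim with
   UC >= 2 has its UC cleared and is demoted to the other partition,
   whose own LRU block becomes the next candidate; a victim with UC < 2
   is evicted to main memory and its tag logged in the history table. */
static void make_room(blk_t part[2][WAYS], int p) {
    for (;;) {
        blk_t *victim = &part[p][lru_way(part[p])];
        if (victim->uc < 2) {
            writeback_to_memory(victim->tag);
            history_table_record(victim->tag);
            victim->valid = 0;   /* the freed way receives the new block */
            return;
        }
        victim->uc = 0;          /* frequently used: keep it cached...    */
        p = 1 - p;               /* ...in the other partition (the block
                                    move itself is elided in this sketch) */
    }
}

int main(void) {
    blk_t part[2][WAYS] = { 0 };
    part[0][3] = (blk_t){ .tag = 0x10, .valid = 1, .lru = 15, .uc = 3 };
    part[1][7] = (blk_t){ .tag = 0x20, .valid = 1, .lru = 15, .uc = 1 };
    make_room(part, 0);   /* 0x10 is demoted; 0x20 is evicted and logged */
    return 0;
}
```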
The step of adding a history access record table adds a table that records cache data blocks which have been accessed. This table stores the access records of cache data blocks that have been replaced; each record holds the tag bits and a valid bit of a cache data block. The number of records the table can hold equals the number of cache data blocks a partition can hold. For example, a 2MB cache is divided into two 1MB partitions; with a cache block size of 64B, each partition can hold 16K cache data blocks, so the capacity of the table is 16K records. Here K denotes 2^10, M denotes 2^20 and B denotes bytes. The set associativity of the record table is the same as that of the partitions: with 16-way set-associative partitions, the table is also 16-way set-associative.
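A quick check of this sizing arithmetic (illustrative only, using the units defined above):

```c
#include <stdio.h>

int main(void) {
    long partition_bytes = 1L << 20;   /* 1MB per partition (M = 2^20) */
    long block_bytes     = 64;         /* 64B cache data blocks        */
    long records = partition_bytes / block_bytes;
    /* 2^20 / 2^6 = 2^14 = 16384 = 16K blocks per partition, hence a
       16K-record history access record table (K = 2^10). */
    printf("records = %ld\n", records);
    return 0;
}
```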
The history access record table stores the tags of cache data blocks previously replaced to main memory. When a cache data block is about to be moved to main memory, its tag is stored in this table. When a cache data block is read from main memory into the cache, a table lookup is performed: if the block's tag is found in the table, the block is stored in partition two and the valid bit of the matching record is set to 0; otherwise the block is stored in partition one.
The record table adopts a first-in-first-out replacement policy. When a cache data block's tag is to be stored in the table, the table is first searched for a record whose valid bit is 0; if such a record exists, the tag is stored in it and its valid bit is set to 1; otherwise the tag of the last record in the table is overwritten with the tag to be stored.
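Both record-table operations, the fill-path lookup and the first-in-first-out insertion on eviction, can be sketched together (illustrative C; for brevity the table is a single flat array with a FIFO cursor rather than the 16-way set-associative organization described above, and all names are invented for the example):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define ENTRIES 16384   /* one record per block a 1MB partition can hold */

/* One record: a block's tag bits plus a valid bit. */
typedef struct {
    uint64_t tag;
    bool valid;
} record_t;

static record_t table_[ENTRIES];
static int fifo_next;   /* position overwritten when no invalid record exists */

/* Fill path: a tag found in the table sends the block to partition two
   and clears the matching record's valid bit; otherwise partition one. */
static int partition_for_fill(uint64_t tag) {
    for (int i = 0; i < ENTRIES; i++) {
        if (table_[i].valid && table_[i].tag == tag) {
            table_[i].valid = false;
            return 2;
        }
    }
    return 1;
}

/* Eviction path: reuse a record whose valid bit is 0 if one exists;
   otherwise overwrite records in first-in-first-out order. */
static void record_eviction(uint64_t tag) {
    for (int i = 0; i < ENTRIES; i++) {
        if (!table_[i].valid) {
            table_[i] = (record_t){ .tag = tag, .valid = true };
            return;
        }
    }
    table_[fifo_next] = (record_t){ .tag = tag, .valid = true };
    fifo_next = (fifo_next + 1) % ENTRIES;
}

int main(void) {
    record_eviction(0x42);                       /* block 0x42 leaves the cache */
    printf("%d\n", partition_for_fill(0x42));    /* 2: its tag was recorded     */
    printf("%d\n", partition_for_fill(0x43));    /* 1: never recorded           */
    return 0;
}
```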
Take the last-level cache (L3) of a processor as an example, configured as follows: capacity 2MB, 16-way set-associative, 4 bits for the LRU value, cache block size 64B, using the LRU management policy. It is divided into two partitions A and B, each configured as: capacity 1MB, 16-way set-associative, 4 bits for the LRU value, cache block size 64B, using the LRU management policy. The history table holds 16K records, is 16-way set-associative, and uses a first-in-first-out management policy (K denotes 2^10, M denotes 2^20, B denotes bytes). When a cache data block D is read from main memory into the cache, D's tag is first looked up in the record table. If it is found, the valid bit of the matching record is set to 0 and D is stored in partition B; otherwise D is stored in partition A. Whichever partition it is stored in, its UC value is set to 0. Suppose, as an example, that D is stored in partition A: a cache data block victim whose LRU value is 15 is first selected in A, and the victim's UC value is then tested. If it is less than 2, the victim's tag is stored in the history table. Otherwise its UC value is set to 0 and the victim is moved into partition B; at that point a block with LRU value 15 must be selected in partition B as the new victim, and its UC value is tested in turn. This repeats until a cache data block victim is found whose LRU value is 15 and whose UC value is less than 2; that victim's tag is stored in the history access record table and the victim is moved to main memory.
The invention is aimed mainly at large-capacity caches: by improving the cache management policy, frequently accessed cache data blocks are kept in the cache and rarely accessed ones are moved to main memory, thereby increasing the cache hit rate and improving system performance.
The above are only preferred embodiments of the invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (9)

1. A cache partitioning method, characterized by comprising the following steps:
Partitioning: logically dividing the last-level cache into two regions of equal size, namely partition one and partition two;
Adding cache data block information bits: adding access-count bits, using 2 bits to represent the number of times a cache data block has been accessed;
Adding a history access record table: adding a history access record table that records cache data blocks which have been accessed, where each record holds the tag bits and a valid bit of a cache data block.
2. The cache partitioning method according to claim 1, characterized in that: partition one and partition two have identical cache configurations; partition one stores cache data blocks that have not been accessed before, while partition two stores cache data blocks that were accessed before but had been moved to main memory.
3. The cache partitioning method according to claim 1 or 2, characterized in that: the added cache data block information bits are the access-count bits, represented with 2 bits.
4. The cache partitioning method according to claim 3, characterized in that: each cache data block has a number of information bits, mainly comprising tag bits, a valid bit, LRU bits, a read-write bit and the access-count bits.
5. The cache partitioning method according to claim 4, characterized in that: the history access record table stores the access records of cache data blocks that have been replaced; each record holds the tag bits and a valid bit of a cache data block.
6. The cache partitioning method according to claim 5, characterized in that: the number of records the history access record table can hold equals the number of cache data blocks a partition can hold.
7. The cache partitioning method according to claim 6, characterized in that: the history access record table is used to store the tags of cache data blocks previously replaced to main memory; when a cache data block is about to be moved to main memory, its tag is stored in this table.
8. The cache partitioning method according to claim 7, characterized in that: when a cache data block is read from main memory into the cache, a table lookup is performed; if the block's tag is found in the record table, the block is stored in partition two and the valid bit of the matching record is set to 0; otherwise the block is stored in partition one.
9. The cache partitioning method according to claim 8, characterized in that: the record table adopts a first-in-first-out replacement method; when a cache data block's tag is to be stored in the table, the table is first searched for a record whose valid bit is 0; if such a record exists, the tag is stored in it and its valid bit is set to 1; otherwise the tag of the last record in the table is overwritten with the tag to be stored.
CN201110286422.6A 2011-09-23 2011-09-23 Cache partitioning method Expired - Fee Related CN102354301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110286422.6A CN102354301B (en) 2011-09-23 2011-09-23 Cache partitioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110286422.6A CN102354301B (en) 2011-09-23 2011-09-23 Cache partitioning method

Publications (2)

Publication Number Publication Date
CN102354301A true CN102354301A (en) 2012-02-15
CN102354301B CN102354301B (en) 2014-03-19

Family

ID=45577867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110286422.6A Expired - Fee Related CN102354301B (en) 2011-09-23 2011-09-23 Cache partitioning method

Country Status (1)

Country Link
CN (1) CN102354301B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103634231A (en) * 2013-12-02 2014-03-12 江苏大学 Content popularity-based CCN cache partition and substitution method
CN104239233A (en) * 2014-09-19 2014-12-24 华为技术有限公司 Cache managing method, cache managing device and cache managing equipment
CN105743975A (en) * 2016-01-28 2016-07-06 深圳先进技术研究院 Cache placing method and system based on data access distribution
CN109032970A (en) * 2018-06-16 2018-12-18 温州职业技术学院 A kind of method for dynamically caching based on lru algorithm
CN110059482A (en) * 2019-04-26 2019-07-26 海光信息技术有限公司 The exclusive update method and relevant apparatus of exclusive spatial cache unit
CN116560585A (en) * 2023-07-05 2023-08-08 支付宝(杭州)信息技术有限公司 Data hierarchical storage method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1543605A (en) * 2001-06-29 2004-11-03 ض� Partitioning cache metadata state
US20050216693A1 (en) * 2004-03-23 2005-09-29 International Business Machines Corporation System for balancing multiple memory buffer sizes and method therefor
CN101320353A (en) * 2008-07-18 2008-12-10 四川长虹电器股份有限公司 Design method of embedded type browser caching

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1543605A (en) * 2001-06-29 2004-11-03 ض� Partitioning cache metadata state
US20050216693A1 (en) * 2004-03-23 2005-09-29 International Business Machines Corporation System for balancing multiple memory buffer sizes and method therefor
CN101320353A (en) * 2008-07-18 2008-12-10 四川长虹电器股份有限公司 Design method of embedded type browser caching

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIANG LINGXIANG et al.: "Less reused filter: improving L2 cache performance via filtering less reused lines", Proceedings of the 23rd International Conference on Supercomputing, ACM, 12 June 2009, pages 1-12 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103634231A (en) * 2013-12-02 2014-03-12 江苏大学 Content popularity-based CCN cache partition and substitution method
CN104239233A (en) * 2014-09-19 2014-12-24 华为技术有限公司 Cache managing method, cache managing device and cache managing equipment
CN104239233B (en) * 2014-09-19 2017-11-24 华为技术有限公司 Buffer memory management method, cache management device and caching management equipment
CN105743975A (en) * 2016-01-28 2016-07-06 深圳先进技术研究院 Cache placing method and system based on data access distribution
CN105743975B (en) * 2016-01-28 2019-03-05 深圳先进技术研究院 Caching laying method and system based on data access distribution
CN109032970A (en) * 2018-06-16 2018-12-18 温州职业技术学院 A kind of method for dynamically caching based on lru algorithm
CN110059482A (en) * 2019-04-26 2019-07-26 海光信息技术有限公司 The exclusive update method and relevant apparatus of exclusive spatial cache unit
CN116560585A (en) * 2023-07-05 2023-08-08 支付宝(杭州)信息技术有限公司 Data hierarchical storage method and system
CN116560585B (en) * 2023-07-05 2024-04-09 支付宝(杭州)信息技术有限公司 Data hierarchical storage method and system

Also Published As

Publication number Publication date
CN102354301B (en) 2014-03-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140319

Termination date: 20180923