CN109032969A - A caching method for the LRU-K algorithm based on dynamic K-value monitoring - Google Patents

A caching method for the LRU-K algorithm based on dynamic K-value monitoring Download PDF

Info

Publication number
CN109032969A
CN109032969A (application CN201810684735.9A)
Authority
CN
China
Prior art keywords
data block
access
linkseqk
list
access list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810684735.9A
Other languages
Chinese (zh)
Inventor
项道东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou Polytechnic
Original Assignee
Wenzhou Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou Polytechnic filed Critical Wenzhou Polytechnic
Priority to CN201810684735.9A priority Critical patent/CN109032969A/en
Publication of CN109032969A publication Critical patent/CN109032969A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12 Replacement control
    • G06F12/121 Replacement control using replacement algorithms
    • G06F12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a caching method for the LRU-K algorithm based on dynamic monitoring of the K value, and relates to the field of data caching technology. The method includes: judging whether the K-access data block frequency is below a lower threshold Ldown, and if so, setting up a K-1 access list LinkSeqK- for data blocks; and judging whether the K-access data block frequency exceeds an upper threshold Lup, and if so, setting up a K+1 access list LinkSeqK+ for data blocks. By monitoring the K-access data block frequency in the LRU-K algorithm, judging whether it falls below the lower threshold Ldown or exceeds the upper threshold Lup, and accordingly switching to the K-1 access list LinkSeqK- or the K+1 access list LinkSeqK+, the invention improves the performance of the existing LRU-K algorithm, keeps the system efficient when processing different data, and improves space utilization and data storage efficiency.

Description

A caching method for the LRU-K algorithm based on dynamic K-value monitoring
Technical field
The invention belongs to the field of data caching technology, and more particularly relates to a caching method for the LRU-K algorithm based on dynamic K-value monitoring.
Background technique
Virtual memory management is the commonly used approach today: with limited physical memory, part of external storage is extended to serve as virtual memory, and real memory holds only the information needed by the currently running task. This greatly expands the effective capacity of memory and significantly improves the concurrency of the computer. Paged virtual memory management divides the space required by a process into multiple pages, keeps only the currently needed pages in memory, and places the remaining pages in external storage.
The performance of the LRU-K page replacement algorithm depends heavily on the choice of K. In different data-processing workloads, the number of times the same data block is accessed differs, and so does the resulting system efficiency: an LRU-K page replacement algorithm with a fixed K may be highly efficient for one job, yet far from optimal when processing another.
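The sensitivity to the choice of K claimed above can be illustrated with a small simulation. The sketch below is not taken from the patent: it implements a simplified LRU-K evictor (evict the block whose K-th most recent access is oldest; blocks with fewer than K recorded accesses are evicted first) and compares K=1 (plain LRU) with K=2 on one assumed workload, a small hot set periodically polluted by one-time scans. The workload shape, cache capacity, and tie-breaking rule are illustrative assumptions.

```python
from collections import defaultdict, deque

def lru_k_hit_rate(trace, k, capacity):
    """Hit rate of a simplified LRU-K cache over an integer access trace."""
    history = defaultdict(deque)  # last k access times of each block
    cache, hits = set(), 0
    for t, block in enumerate(trace):
        if block in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                # Evict the block whose k-th most recent access is oldest;
                # blocks with fewer than k recorded accesses go first.
                victim = min(cache, key=lambda b: (
                    history[b][0] if len(history[b]) >= k else float("-inf"), b))
                cache.discard(victim)
            cache.add(block)
        history[block].append(t)
        if len(history[block]) > k:
            history[block].popleft()
    return hits / len(trace)

# Hot set of 6 blocks, periodically polluted by a scan of 10 one-time blocks.
trace, cold = [], 1000
for _ in range(20):
    trace += [i % 6 for i in range(20)]    # repeated hot accesses
    trace += list(range(cold, cold + 10))  # one-time cold scan
    cold += 10

lru = lru_k_hit_rate(trace, k=1, capacity=8)   # plain LRU: scans flush the hot set
lru2 = lru_k_hit_rate(trace, k=2, capacity=8)  # LRU-2: hot blocks survive scans
```

On this scan-polluted workload K=2 retains the hot set and achieves the higher hit rate, while on other workloads (for example, a purely cyclic scan larger than the cache) a larger K brings no benefit, which is why no fixed K can be right for every job.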
The present invention is devoted to a caching method for the LRU-K algorithm based on dynamic K-value monitoring, to solve the problem that the efficiency of an LRU-K page replacement system is uncontrollable across different jobs.
Summary of the invention
The purpose of the present invention is to provide a caching method for the LRU-K algorithm based on dynamic K-value monitoring. By monitoring the K-access data block frequency in the LRU-K algorithm, judging whether it falls below the lower threshold Ldown or exceeds the upper threshold Lup, and accordingly using the K-1 access list LinkSeqK- or the K+1 access list LinkSeqK+, the K value of the LRU-K algorithm is adjusted dynamically according to actual system processing results, solving the problem that the efficiency of the existing LRU-K page replacement system is uncontrollable across different jobs.
In order to solve the above technical problems, the present invention is achieved by the following technical solutions:
The present invention is a caching method for the LRU-K algorithm based on dynamic K-value monitoring, comprising the following steps:
S000: set up the data block access list LinkSeq;
S001: set up the K-access list LinkSeqK for data blocks;
S002: judge whether a data block has been accessed K times; if so, execute S003; if not, execute S006;
S003: judge whether the access list LinkSeqK is full; if so, execute S005; if not, execute S004;
S004: judge whether the K-access data block frequency is below the lower threshold Ldown; if so, set up the K-1 access list LinkSeqK- for data blocks; if not, execute S006;
S005: judge whether the K-access data block frequency exceeds the upper threshold Lup; if so, set up the K+1 access list LinkSeqK+ for data blocks; if not, execute S006;
S006: enter the data block from S002 into the list LinkSeqK and execute S002;
wherein K-1 is greater than or equal to 2 (that is, K ≥ 3).
Preferably, the data block access list LinkSeq orders data blocks according to the LRU algorithm.
Preferably, data blocks stored in the K-access list LinkSeqK have been accessed by the system at least K times; data blocks stored in the K-1 access list LinkSeqK- have been accessed at least K-1 times; and data blocks stored in the K+1 access list LinkSeqK+ have been accessed at least K+1 times.
Preferably, the lower threshold Ldown is a preset minimum for the ratio of accesses to data blocks that have reached K accesses to the total number of accesses, and the upper threshold Lup is a preset maximum for that same ratio.
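As a concrete reading of these definitions, the monitored quantity is the fraction of all accesses made to blocks that have already reached K accesses, and Ldown and Lup bound that fraction from below and above. The numeric threshold values and the trace below are our own illustrative assumptions.

```python
L_DOWN, L_UP = 0.2, 0.8  # assumed values for Ldown and Lup

def k_access_ratio(trace, k):
    """Fraction of accesses that hit a block already accessed k or more times."""
    counts, reached_k = {}, 0
    for block in trace:
        counts[block] = counts.get(block, 0) + 1
        if counts[block] >= k:
            reached_k += 1
    return reached_k / len(trace)

ratio = k_access_ratio(list("aaabbbc"), k=3)  # "a" and "b" each reach 3 accesses
# ratio = 2/7: between Ldown and Lup, so the current K would be kept
```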
Preferably, setting up the K-1 access list LinkSeqK- for data blocks in S004 comprises the following steps:
A000: judge whether the K-1 access list LinkSeqK- already exists; if so, execute A002; if not, execute A001;
A001: set up the K-1 access list LinkSeqK- for data blocks and execute A002;
A002: judge whether the K-access list LinkSeqK contains data blocks; if so, execute A003; if not, execute A004;
A003: move the data blocks in the K-access list LinkSeqK, in order of enqueue time, into the K-1 access list LinkSeqK-;
A004: enter the data block from S002 into the list LinkSeqK- and execute S002.
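Steps A000-A004 can be sketched as follows. The deque container and the function shape are our assumptions; the behaviour (create LinkSeqK- if it does not exist yet, then move the queued blocks across in enqueue-time order) follows the steps above.

```python
from collections import deque

def demote_to_k_minus_1(link_seq_k, link_seq_k_minus=None):
    """A000-A004: move blocks from LinkSeqK into LinkSeqK-, creating it if needed."""
    if link_seq_k_minus is None:        # A000 -> A001: LinkSeqK- does not exist yet
        link_seq_k_minus = deque()
    while link_seq_k:                   # A002 -> A003: move in enqueue-time order
        link_seq_k_minus.append(link_seq_k.popleft())
    return link_seq_k_minus             # A004: later blocks now enter LinkSeqK-

link_seq_k = deque(["a", "b", "c"])     # blocks in their original enqueue order
link_seq_k_minus = demote_to_k_minus_1(link_seq_k)
# link_seq_k_minus now holds a, b, c in order; link_seq_k is empty
```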
The invention has the following advantages:
By monitoring the K-access data block frequency in the LRU-K algorithm, judging whether it falls below the lower threshold Ldown or exceeds the upper threshold Lup, and accordingly using the K-1 access list LinkSeqK- or the K+1 access list LinkSeqK+, the invention dynamically adjusts the K value of the LRU-K algorithm according to actual system processing results. It thereby improves the performance of the existing LRU-K algorithm, keeps the system efficient when processing different data, and improves space utilization and data storage efficiency.
Of course, a product implementing the present invention does not necessarily need to achieve all of the above advantages at the same time.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a caching method for the LRU-K algorithm based on dynamic K-value monitoring according to the present invention;
Fig. 2 is a flow chart of setting up the K-1 access list LinkSeqK- for data blocks in S004.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, the present invention is a caching method for the LRU-K algorithm based on dynamic K-value monitoring, comprising the following steps:
S000: set up the data block access list LinkSeq;
S001: set up the K-access list LinkSeqK for data blocks;
S002: judge whether a data block has been accessed K times; if so, execute S003; if not, execute S006;
S003: judge whether the access list LinkSeqK is full; if so, execute S005; if not, execute S004;
S004: judge whether the K-access data block frequency is below the lower threshold Ldown; if so, set up the K-1 access list LinkSeqK- for data blocks; if not, execute S006;
S005: judge whether the K-access data block frequency exceeds the upper threshold Lup; if so, set up the K+1 access list LinkSeqK+ for data blocks; if not, execute S006;
S006: enter the data block from S002 into the list LinkSeqK and execute S002;
wherein K-1 is greater than or equal to 2 (that is, K ≥ 3).
Here, the data block access list LinkSeq orders data blocks according to the LRU algorithm.
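A minimal sketch of an access list kept in least-recently-used order, as stated above: an ordered map keeps the most recently used block at the back and the least recently used at the front. The class name, capacity, and eviction shown here are illustrative assumptions, not part of the patent.

```python
from collections import OrderedDict

class AccessList:
    """Access list maintained in LRU order (least recently used at the front)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)       # refresh recency on a hit
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[block] = True

seq = AccessList(capacity=2)
for b in ["a", "b", "a", "c"]:
    seq.access(b)
# "b" was least recently used when "c" arrived, so only "a" and "c" remain
```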
Here, data blocks stored in the K-access list LinkSeqK have been accessed by the system at least K times; data blocks stored in the K-1 access list LinkSeqK- have been accessed at least K-1 times; and data blocks stored in the K+1 access list LinkSeqK+ have been accessed at least K+1 times.
Here, the lower threshold Ldown is a preset minimum for the ratio of accesses to data blocks that have reached K accesses to the total number of accesses, and the upper threshold Lup is a preset maximum for that same ratio.
Referring to Fig. 2, setting up the K-1 access list LinkSeqK- for data blocks in S004 comprises the following steps:
A000: judge whether the K-1 access list LinkSeqK- already exists; if so, execute A002; if not, execute A001;
A001: set up the K-1 access list LinkSeqK- for data blocks and execute A002;
A002: judge whether the K-access list LinkSeqK contains data blocks; if so, execute A003; if not, execute A004;
A003: move the data blocks in the K-access list LinkSeqK, in order of enqueue time, into the K-1 access list LinkSeqK-;
A004: enter the data block from S002 into the list LinkSeqK- and execute S002.
It is worth noting that the units included in the above system embodiments are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
In addition, those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware.
The preferred embodiments of the present invention disclosed above are intended only to help illustrate the present invention. The preferred embodiments neither describe all details in full nor limit the invention to the specific embodiments described. Obviously, many modifications and variations can be made according to the content of this specification. These embodiments were chosen and specifically described in order to better explain the principles and practical applications of the present invention, so that those skilled in the art can better understand and utilize the present invention. The present invention is limited only by the claims and their full scope and equivalents.

Claims (5)

1. A caching method for the LRU-K algorithm based on dynamic K-value monitoring, characterized by comprising the following steps:
S000: set up the data block access list LinkSeq;
S001: set up the K-access list LinkSeqK for data blocks;
S002: judge whether a data block has been accessed K times; if so, execute S003; if not, execute S006;
S003: judge whether the access list LinkSeqK is full; if so, execute S005; if not, execute S004;
S004: judge whether the K-access data block frequency is below the lower threshold Ldown; if so, set up the K-1 access list LinkSeqK- for data blocks; if not, execute S006;
S005: judge whether the K-access data block frequency exceeds the upper threshold Lup; if so, set up the K+1 access list LinkSeqK+ for data blocks; if not, execute S006;
S006: enter the data block from S002 into the list LinkSeqK and execute S002;
wherein K-1 is greater than or equal to 2.
2. The caching method for the LRU-K algorithm based on dynamic K-value monitoring according to claim 1, characterized in that the data block access list LinkSeq orders data blocks according to the LRU algorithm.
3. The caching method for the LRU-K algorithm based on dynamic K-value monitoring according to claim 1, characterized in that:
data blocks stored in the K-access list LinkSeqK have been accessed by the system at least K times;
data blocks stored in the K-1 access list LinkSeqK- have been accessed by the system at least K-1 times;
data blocks stored in the K+1 access list LinkSeqK+ have been accessed by the system at least K+1 times.
4. The caching method for the LRU-K algorithm based on dynamic K-value monitoring according to claim 1, characterized in that:
the lower threshold Ldown is a preset minimum for the ratio of accesses to data blocks that have reached K accesses to the total number of accesses;
the upper threshold Lup is a preset maximum for that same ratio.
5. The caching method for the LRU-K algorithm based on dynamic K-value monitoring according to claim 1, characterized in that setting up the K-1 access list LinkSeqK- for data blocks in S004 comprises the following steps:
A000: judge whether the K-1 access list LinkSeqK- already exists; if so, execute A002; if not, execute A001;
A001: set up the K-1 access list LinkSeqK- for data blocks and execute A002;
A002: judge whether the K-access list LinkSeqK contains data blocks; if so, execute A003; if not, execute A004;
A003: move the data blocks in the K-access list LinkSeqK, in order of enqueue time, into the K-1 access list LinkSeqK-;
A004: enter the data block from S002 into the list LinkSeqK- and execute S002.
CN201810684735.9A 2018-06-16 2018-06-16 A caching method for the LRU-K algorithm based on dynamic K-value monitoring Pending CN109032969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810684735.9A CN109032969A (en) 2018-06-16 2018-06-16 A caching method for the LRU-K algorithm based on dynamic K-value monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810684735.9A CN109032969A (en) 2018-06-16 2018-06-16 A caching method for the LRU-K algorithm based on dynamic K-value monitoring

Publications (1)

Publication Number Publication Date
CN109032969A (en) 2018-12-18

Family

ID=65521873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810684735.9A Pending CN109032969A (en) 2018-06-16 2018-06-16 A caching method for the LRU-K algorithm based on dynamic K-value monitoring

Country Status (1)

Country Link
CN (1) CN109032969A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060080510A1 (en) * 2004-10-12 2006-04-13 Benhase Michael T Apparatus and method to manage a data cache
CN1961296A (en) * 2004-06-29 2007-05-09 王德元 Buffer apparatus and method
US20120072670A1 (en) * 2010-09-21 2012-03-22 Lsi Corporation Method for coupling sub-lun load measuring metadata size to storage tier utilization in dynamic storage tiering
CN102760101A (en) * 2012-05-22 2012-10-31 中国科学院计算技术研究所 SSD-based (Solid State Disk) cache management method and system
CN103257935A (en) * 2013-04-19 2013-08-21 华中科技大学 Cache management method and application thereof
CN104090852A (en) * 2014-07-03 2014-10-08 华为技术有限公司 Method and equipment for managing hybrid cache
CN106557431A (zh) * 2016-11-25 2017-04-05 郑州云海信息技术有限公司 Read-ahead method and device for multi-channel sequential streams
CN107220188A (zh) * 2017-05-31 2017-09-29 莫倩 An adaptive buffer block replacement method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李军锋 (Li Junfeng) et al., "Adaptive Storage Buffering Technology on Proxy Servers", Computer Engineering and Applications (《计算机工程与应用》) *

Similar Documents

Publication Publication Date Title
EP3367251A1 (en) Storage system and solid state hard disk
CN103019962B Data cache processing method, device and system
CN104636414B Method for providing access to an updated file, and computer executing the method
US20140052946A1 (en) Techniques for opportunistic data storage
EP3633515B1 (en) Memory allocation method, apparatus, electronic device, and computer storage medium
CN104899153A (en) Background application cleaning method and system
CN107665095B (en) Apparatus, method and readable storage medium for memory space management
CN108984130A Cache read method and device for distributed storage
CN106201652B Data processing method and virtual machine
CN109358873A Application program update method, storage medium and terminal device
CN109032970A A dynamic caching method based on the LRU algorithm
CN110471769B (en) Resource management method and device for virtual machine
JP2005196793A5 (en)
CN109032969A (en) A kind of caching method of the LRU-K algorithm based on K value dynamic monitoring
CN111510479B (en) Resource allocation method and device for heterogeneous cache system
CN106856441A VIM selection method and device in NFVO
CN104050189B Page sharing processing method and device
CN109063210A Resource object query method, device, equipment and storage medium for a storage system
CN103257892B Multi-task scheduling method and system based on macro combination
CN105183375B Method and device for controlling the quality of service of hotspot data
CN110083314A Logical volume deletion method, system and related apparatus
CN102999728B Data storage method and device based on a secure desktop
CN110347614A Storage space mapping algorithm, cache state machine, storage device, and storage medium
CN104572655B Data processing method, apparatus and system
CN102445978B Method and apparatus for managing a data center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181218