CN109558337B - Dynamic access method, device and storage medium for cache - Google Patents

Dynamic access method, device and storage medium for cache

Info

Publication number
CN109558337B
CN109558337B (application CN201811457878.2A)
Authority
CN
China
Prior art keywords
access frequency
block
frequency block
access
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811457878.2A
Other languages
Chinese (zh)
Other versions
CN109558337A (en)
Inventor
王道邦
方敏
于召鑫
杨恒
段舒文
李艳国
仇悦
周泽湘
Current Assignee
Beijing Toyou Feiji Electronics Co ltd
Original Assignee
Beijing Toyou Feiji Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Toyou Feiji Electronics Co ltd filed Critical Beijing Toyou Feiji Electronics Co ltd
Priority to CN201811457878.2A
Publication of CN109558337A
Application granted
Publication of CN109558337B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0864Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using pseudo-associative means, e.g. set-associative or hashing


Abstract

The invention relates to a dynamic access method, device and storage medium for a cache. The method comprises the following steps: dividing the blocks of a cache into high access frequency blocks and low access frequency blocks according to statistics of the blocks' access frequencies; migrating the data stored in the high access frequency blocks to the low access frequency blocks; and accessing the cache according to the data storage positions after the migration is completed. The method, device and storage medium provided by the invention keep the application's block accesses to the cache storage medium uniform over the whole service life, thereby fully exploiting the performance and lifetime of the high-speed storage medium and reducing the customer's cost of use.

Description

Dynamic access method, device and storage medium for cache
Technical Field
The present invention relates to the field of storage management technologies, and in particular, to a method and apparatus for dynamic access to a cache, and a storage medium.
Background
Existing high-speed-medium caches replace and hit hot data with a FIFO or LRU algorithm. Such designs consider only the high IOPS of the high-speed storage medium, using it to bridge the hot-spot access-performance gap between the CPU and the low-speed storage medium; the other characteristics of the high-speed medium are not further exploited.
In typical applications, the blocks over which hot data is distributed in the medium are relatively fixed. For example, a currently popular video is accessed through the file system at fixed positions in the medium, and the position mapping between the high-speed and low-speed storage media is fixed when the second-level cache is created. Hot data therefore moves repeatedly over certain fixed blocks of both media, and the frequent writing of hot data quickly drives the flash cells corresponding to those blocks of the high-speed medium toward their endurance limit. Although spare space is usually reserved in the high-speed storage medium at design time, and a bad-block remapping mechanism is implemented inside the medium, these measures only slow the wear of the medium as a whole; the conventional remedy is simply to replace the high-speed storage medium.
It is well known that a typical high-speed storage medium must be erased before it is written, and that after a certain number of erase cycles the performance of the affected blocks drops sharply, so that the second-level cache can no longer deliver its original performance.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a dynamic cache access method and device that keep the application's block accesses to the cache storage medium uniform over the whole service life, thereby fully exploiting the performance and lifetime of the high-speed storage medium and reducing the customer's cost of use.
In order to solve the technical problem, the present invention provides a method for dynamically accessing a cache, which includes: dividing different blocks into a high access frequency block and a low access frequency block according to statistics of access frequencies of the different blocks in a cache; migrating data stored in the high access frequency block to the low access frequency block; and accessing the cache according to the data storage position after the migration is completed.
Further, before dividing the different blocks into the high access frequency block and the low access frequency block, the method further comprises: dividing a cache into different blocks; hashes are established for the different blocks.
Further, according to statistics of access frequencies of different blocks in the cache, the different blocks are divided into a high access frequency block and a low access frequency block, including: counting the access frequency of different blocks according to the established hash; setting a block with access frequency higher than a preset high access frequency threshold as a high access frequency block; and setting the block with the access frequency lower than the preset low access frequency threshold as a low access frequency block.
Further, according to statistics of access frequencies of different blocks in the cache, the different blocks are divided into a high access frequency block and a low access frequency block, and the method further comprises: according to the established hash, the average access time of different blocks is counted; and setting the block with the average access time higher than the average access time threshold and the access frequency lower than the preset high access frequency threshold as the high access frequency block.
Further, according to statistics of access frequencies of different blocks in the cache, the different blocks are divided into a high access frequency block and a low access frequency block, including: counting the access frequency of different blocks according to the established hash; according to the established hash, the average access time of different blocks is counted; performing weighted average on the access frequency and the average access time of different blocks; setting a block with a weighted average result higher than a preset high result threshold as a high access frequency block; and setting the block with the weighted average result lower than the preset low result threshold as a low access frequency block.
Further, according to statistics of access frequencies of different blocks in the cache, the different blocks are divided into a high access frequency block and a low access frequency block, and the method further comprises: determining whether to perform weight value adjustment and adjustment amplitude of the weight value adjustment according to whether the average access time of each block obtained through statistics exceeds a dangerous threshold of the average access time; when the weight value is determined to be adjusted, the weight value is adjusted by the set adjustment amplitude.
Further, migrating the data stored in the high access frequency block to the low access frequency block includes: opening up a data exchange space in a system memory; and completing data exchange between the high access frequency block and the low access frequency block by utilizing the data exchange space.
Further, after completing data exchange between the high access frequency block and the low access frequency block by using the data exchange space, migrating data stored in the high access frequency block to the low access frequency block, further comprising: and updating the hash according to the data storage block after the data exchange is completed.
In addition, the invention also provides a dynamic access device of the cache, which comprises: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of dynamic access to a cache as described hereinbefore.
Furthermore, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed, implements a method of dynamic access to a cache as described above.
With such a design, the invention has at least the following advantages:
the application's block accesses to the cache storage medium remain uniform over the whole service life, so that the performance and lifetime of the high-speed storage medium are fully exploited and the customer's cost of use is reduced.
Drawings
The foregoing is merely an overview of the present invention, and the present invention is further described in detail below with reference to the accompanying drawings and detailed description.
FIG. 1 is a flow chart of a method of dynamic access of a cache of the present invention;
FIG. 2 is a flow chart of access frequency statistics in a dynamic access method of a cache in one embodiment of the invention;
FIG. 3 is a flow chart of access frequency statistics in a dynamic access method of a cache in accordance with another embodiment of the present invention;
FIG. 4 is a block diagram of a dynamic access device of the cache of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
FIG. 1 is a flow chart of a method of dynamic access of a cache of the present invention. The dynamic access method of the cache comprises the following steps:
s11, dividing different blocks into a high access frequency block and a low access frequency block according to statistics of access frequencies of the different blocks in the cache.
To prevent excessive access to certain fixed blocks in the cache, the embodiments provided by the invention collect statistics on the frequency of the cache's normal access operations, namely read and write operations.
The access frequency is counted in units of blocks: it indicates how many times a block is accessed per unit time, where the count includes both the read operations and the write operations performed on the block.
In order to be able to perform statistics on the access frequency, the method of dynamic access of the cache preferably further comprises, prior to the performance of the statistical operation: dividing a cache into different blocks; hashes are established for the different blocks.
The block division described above may use a fixed capacity, in which case every divided block has the same size. Alternatively, variable-capacity division may be performed on the cache according to the content of the data stored in the different blocks; for example, data belonging to the same file may be placed in the same variable-capacity block.
The purpose of establishing hashes for different blocks is to improve the efficiency of data access. After the hashes are established, access to the data stored in the cache may be more efficiently performed based on a lookup of the corresponding hashes.
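The preparation steps above — dividing the cache into blocks and establishing a hash over them — can be sketched as follows. This is a minimal illustration in Python; the block size, field names, and hash-table layout are assumptions, not fixed by the patent:

```python
# Hypothetical sketch: divide a cache address space into fixed-capacity
# blocks and index per-block statistics with a hash map for O(1) lookup.
BLOCK_SIZE = 4096  # bytes per block (illustrative fixed capacity)

def block_index(offset: int) -> int:
    """Map a byte offset within the cache to its block number."""
    return offset // BLOCK_SIZE

def build_block_table(cache_size: int) -> dict:
    """Create one statistics record per block, keyed by block number."""
    n_blocks = cache_size // BLOCK_SIZE
    return {b: {"accesses": 0, "total_access_time": 0.0} for b in range(n_blocks)}

table = build_block_table(64 * 1024)        # 16 blocks of 4 KiB
table[block_index(8192)]["accesses"] += 1   # record one access to block 2
```

Here the Python dict plays the role of the hash established over the blocks; a real implementation would keep these records alongside the cache metadata so that both data access and frequency statistics go through the same lookup.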
After the statistics of the access frequency of the different blocks is completed, the different blocks in the cache are identified as a high access frequency block and a low access frequency block according to the statistics result of the access frequency.
In addition, in the identification process of the high access frequency block and the low access frequency block, it is more preferable that not only the statistics of the access frequency be considered, but also the average access time of different blocks be considered.
In a preferred embodiment, a block should be identified as a high access frequency block if its average access time has exceeded a predetermined threshold even though its access frequency has not reached a level at which it is identified as a high access frequency block.
In another preferred embodiment, which blocks should be divided into high access frequency blocks and which into low access frequency blocks is determined by a weighted average of each block's access frequency statistic and average access time.
And S12, migrating the data stored in the high access frequency block to the low access frequency block.
The data stored in the identified high access frequency blocks is migrated to the low access frequency blocks, so that the access frequencies of the different blocks converge toward one another and the bad blocks caused by frequent access to one or a few blocks are avoided.
In the embodiment of the invention, the data in a high access frequency block is preferably migrated to a low access frequency block by opening up a data exchange space. Specifically, a section of free storage space in the system memory can be set aside as the data exchange space; the data exchange between the high access frequency block and the low access frequency block is then completed through this space.
More specifically, the data in the high access frequency block may be copied to the data exchange space first, then the data in the low access frequency block may be copied to the high access frequency block, and finally the data in the data exchange space may be copied to the low access frequency block. In this way, data migration through the data exchange space can be completed.
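The three-copy exchange just described can be sketched as follows, with `bytearray`s standing in for cache blocks and the exchange space; all names and sizes are illustrative:

```python
def swap_blocks(cache: dict, hot: int, cold: int, scratch: bytearray) -> None:
    """Exchange the contents of a hot and a cold block through a scratch buffer."""
    scratch[:] = cache[hot]       # 1. hot block  -> data exchange space
    cache[hot][:] = cache[cold]   # 2. cold block -> hot block
    cache[cold][:] = scratch      # 3. exchange space -> cold block

# Demo: block 0 is the high access frequency block, block 5 the low one.
cache = {0: bytearray(b"hot!"), 5: bytearray(b"cold")}
scratch = bytearray(4)  # exchange space opened in system memory
swap_blocks(cache, 0, 5, scratch)
```

In-place slice assignment keeps each block object in place, mirroring the fact that the physical blocks stay put while only their contents are exchanged.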
In addition, when data access is performed through the hash, the hash must be updated after the migration operation so that it reflects the actual storage locations of the migrated data.
S13, accessing the cache according to the data storage position after migration is completed.
Preferably, the continued access to the cache is accomplished based on the updated hash.
It should be noted that the dynamic cache access method provided in this embodiment is not run only once. In a practical application scenario, the method is typically re-executed on the cache at a set frequency, so as to keep the access frequencies of the cache's blocks balanced.
FIG. 2 is a flow chart illustrating the operation of access frequency statistics in a dynamic method of caching in accordance with the present invention, under a preferred embodiment. Referring to fig. 2, according to statistics of access frequencies of different blocks in a cache, the different blocks are divided into a high access frequency block and a low access frequency block, including the following steps:
s21, according to the established hashes, the access frequencies of different blocks are counted.
As described above, completing data access to the cache through the established hash greatly improves access efficiency; likewise, completing the block access-frequency statistics through the established hash greatly improves the efficiency of the statistics.
S22, setting the block with the access frequency higher than the preset high access frequency threshold as the high access frequency block.
In this embodiment, a corresponding threshold is set for the identification of the high access frequency block, which is referred to as a high access frequency threshold. When the access frequency of a block is higher than the high access frequency threshold, the block can be identified as a high access frequency block.
S23, setting the block with the access frequency lower than the preset low access frequency threshold as the low access frequency block.
Similar to the identification of the high access frequency block, a corresponding threshold is set for the identification of the low access frequency block. This threshold is called the low access frequency threshold. When the access frequency statistics of a block in a unit time is lower than the low access frequency threshold, the block can be identified as a low access frequency block.
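The two-threshold classification of S21 to S23 reduces to a small decision function; the threshold values below are illustrative, since the patent only says they are preset:

```python
# Illustrative thresholds (accesses per unit time); the patent leaves
# the actual values to be preset by the implementation.
HIGH_FREQ_THRESHOLD = 100
LOW_FREQ_THRESHOLD = 10

def classify(freq: int) -> str:
    """Classify a block by its access-frequency statistic (S22/S23)."""
    if freq > HIGH_FREQ_THRESHOLD:
        return "high"
    if freq < LOW_FREQ_THRESHOLD:
        return "low"
    return "neutral"  # blocks between the thresholds are left as they are
```

Blocks that fall between the two thresholds belong to neither class and simply keep their data in place.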
S24, according to the established hashes, the average access time of different blocks is counted.
In this embodiment, to prevent bad blocks from occurring, the identification of a high access frequency block considers not only the access frequency statistic but also the average access time, i.e. the time taken to perform an access operation on the block. This metric, too, can be obtained by measuring each access operation on the block.
S25, setting the block with the average access time higher than the average access time threshold and the access frequency lower than the preset high access frequency threshold as the high access frequency block.
If the access frequency of a block is below the high access frequency threshold, but the average access time of the block is already above the preset average access time threshold, the block is identified as a high access frequency block.
The average access time is taken into account when identifying high access frequency blocks mainly to avoid continuing high-frequency access to blocks whose read-write performance has already degraded significantly; such access would otherwise encourage the appearance of bad blocks in the cache.
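As a sketch, the override of S25 can be folded into the classification as one extra condition. The threshold values are again assumptions, not specified by the patent:

```python
def classify_with_latency(freq: int, avg_time: float,
                          high_freq: int = 100,
                          time_threshold: float = 5.0) -> str:
    """A block whose average access time is already degraded counts as a
    high access frequency block even if its raw frequency is below the
    high-frequency threshold (S25)."""
    if freq > high_freq or avg_time > time_threshold:
        return "high"
    return "other"
```

The `or` captures the idea that degraded latency alone is enough to schedule a block's data for migration away from it.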
It should be noted that the step of counting the average access time and further determining the high access frequency block according to the average access time is an optional step. That is, even if only the operations S21 to S23 are performed, the identification of the high access frequency block and the low access frequency block can be completed.
FIG. 3 is a flow chart illustrating the operation of access frequency statistics in the dynamic approach of the cache of the present invention, under another preferred embodiment. Referring to fig. 3, according to statistics of access frequencies of different blocks in a cache, the different blocks are divided into a high access frequency block and a low access frequency block, including the following steps:
s31, according to the established hashes, the access frequencies of different blocks are counted.
S32, according to the established hashes, the average access time of different blocks is counted.
S33, carrying out weighted average on the access frequency and the average access time of different blocks.
Specifically, a weighted value is respectively assigned to the access frequency and the average access time obtained by statistics, and then the two are weighted and averaged according to the assigned weighted value.
In this embodiment, to facilitate dynamic adjustment of the weighting values, each block stores its own weighting values. Each block can then adjust its weighting values independently according to its actual operating conditions, something a single uniform set of weighting values would not allow.
S34, setting the block with the weighted average result higher than the preset high result threshold as the high access frequency block.
If the result of the weighted average is higher than a preset high result threshold, the block is identified as a high access frequency block.
S35, setting the block with the weighted average result lower than the preset low result threshold as the low access frequency block.
If the result of the weighted average is lower than a predetermined low result threshold, the block is identified as a low access frequency block.
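Steps S33 to S35 can be sketched as a weighted score plus two result thresholds. The weights 0.7/0.3 and the thresholds are illustrative assumptions (a real implementation would also normalize frequency and time to comparable scales before averaging):

```python
def weighted_score(freq: float, avg_time: float,
                   w_freq: float = 0.7, w_time: float = 0.3) -> float:
    """Weighted average of access frequency and average access time (S33)."""
    return w_freq * freq + w_time * avg_time

def classify_weighted(score: float,
                      high: float = 80.0, low: float = 20.0) -> str:
    """Classify a block by its weighted-average result (S34/S35)."""
    if score > high:
        return "high"
    if score < low:
        return "low"
    return "neutral"
```

Because each block may store its own weights (see above), `w_freq` and `w_time` would in practice be read per block rather than passed as defaults.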
S36, determining whether to perform weight adjustment and adjusting amplitude of the weight adjustment according to whether the average access time of each block obtained through statistics exceeds a dangerous threshold of the average access time.
To prevent bad blocks more effectively, a danger threshold is set on the average access time of each block. Once a block's average access time exceeds this threshold, an adjustment of the weighting value is triggered, and the magnitude of the adjustment is determined by how far the threshold is exceeded.
Preferably, levels are defined for the degree by which the danger threshold is exceeded: the higher the level, the larger the increase applied to the weighting value of the average access time.
S37, when the weight value is determined to be adjusted, the weight value is adjusted by the set adjustment amplitude.
When the degree of exceeding the dangerous threshold is higher, the weighted average operation is performed by using a larger weighted value corresponding to the average access time.
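A sketch of the level-based adjustment of S36/S37 follows. The danger threshold, level width, and step size are all illustrative, as the patent specifies only that larger exceedances raise the average-access-time weight more:

```python
DANGER_THRESHOLD = 10.0  # average access time beyond which wear is suspected

def adjust_time_weight(avg_time: float, w_time: float,
                       step: float = 0.1) -> float:
    """Raise the weight of average access time in proportion to how far
    the danger threshold is exceeded: one step per level (S36/S37).
    Level width of 5 time units is an assumed value."""
    if avg_time <= DANGER_THRESHOLD:
        return w_time  # no adjustment needed
    level = int((avg_time - DANGER_THRESHOLD) // 5) + 1
    return min(1.0, w_time + step * level)  # cap the weight at 1.0
```

Raising `w_time` makes the weighted score of a wearing block cross the high-result threshold sooner, so its data is migrated away before the block degrades into a bad block.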
It should be noted that the step of adjusting the weighting value is an optional step in the present embodiment. That is, even if only the operations of S31 to S35 are performed, the identification of the high access frequency block and the low access frequency block can be completed.
FIG. 4 is a block diagram of a dynamic access device of the cache of the present invention. Referring to fig. 4, a dynamic access device of a cache includes: a Central Processing Unit (CPU) 401, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the system operation are also stored. The CPU 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output portion 407 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 408 including a hard disk or the like; and a communication section 409 including a network interface card such as a LAN card, a modem, or the like. The communication section 409 performs communication processing via a network such as the internet. The drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 410 as needed, so that a computer program read therefrom is installed into the storage section 408 as needed.
In particular, according to embodiments of the present invention, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present invention include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 409 and/or installed from the removable medium 411. The above-described functions defined in the method of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 401. The computer readable medium of the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In the present invention, however, the computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present invention may be implemented in software or in hardware.
The above description is only of the preferred embodiments of the present invention, and is not intended to limit the invention in any way, and some simple modifications, equivalent variations or modifications can be made by those skilled in the art using the teachings disclosed herein, which fall within the scope of the present invention.

Claims (7)

1. A method for dynamic access to a cache, comprising:
dividing different blocks into a high access frequency block and a low access frequency block according to statistics of access frequencies of the different blocks in a cache; wherein the divided blocks are blocks of a fixed size, or data belonging to the same file is divided into the same variable-capacity block;
migrating data stored in the high access frequency block to the low access frequency block;
accessing the cache according to the data storage position after the migration is completed;
before dividing the different blocks into the high access frequency block and the low access frequency block, the method further comprises:
dividing a cache into different blocks;
establishing hashes for different blocks;
dividing different blocks into a high access frequency block and a low access frequency block according to statistics of access frequencies of the different blocks in a cache, comprising:
counting the access frequency of different blocks according to the established hash;
according to the established hash, the average access time of different blocks is counted;
performing weighted average on the access frequency and the average access time of different blocks;
setting a block with a weighted average result higher than a preset high result threshold as a high access frequency block;
setting a block with a weighted average result lower than a preset low result threshold as a low access frequency block;
according to statistics of access frequencies of different blocks in the cache, the different blocks are divided into a high access frequency block and a low access frequency block, and the method further comprises the following steps:
determining whether to perform weight value adjustment and adjustment amplitude of the weight value adjustment according to whether the average access time of each block obtained through statistics exceeds a dangerous threshold of the average access time;
when the weight value is determined to be adjusted, the weight value is adjusted by the set adjustment amplitude.
2. The method for dynamically accessing a cache memory according to claim 1, wherein the dividing the different blocks into the high access frequency block and the low access frequency block according to statistics of access frequencies of the different blocks in the cache memory comprises:
counting the access frequency of different blocks according to the established hash;
setting a block with access frequency higher than a preset high access frequency threshold as a high access frequency block;
and setting the block with the access frequency lower than the preset low access frequency threshold as a low access frequency block.
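The simpler variant of claim 2, which uses raw access frequency alone, might be sketched as below; the threshold values and names are illustrative assumptions only:

```python
def classify_by_frequency(freq_by_block, high_freq=100, low_freq=10):
    """Partition blocks by raw access frequency alone.

    freq_by_block: dict mapping block id -> access frequency.
    Blocks above high_freq become high access frequency blocks;
    blocks below low_freq become low access frequency blocks.
    """
    high = [b for b, f in freq_by_block.items() if f > high_freq]
    low = [b for b, f in freq_by_block.items() if f < low_freq]
    return high, low
```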
3. The method for dynamically accessing a cache memory according to claim 2, wherein dividing the different blocks into a high access frequency block and a low access frequency block according to statistics of access frequencies of the different blocks in the cache memory further comprises:
counting the average access time of the different blocks according to the established hash;
and setting the block with the average access time higher than the average access time threshold and the access frequency lower than the preset high access frequency threshold as the high access frequency block.
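The refinement of claim 3 promotes a block whose average access time is high even though its raw frequency stayed below the high-frequency threshold. A minimal sketch, with purely hypothetical threshold values:

```python
def promote_slow_blocks(stats, avg_time_threshold=5.0, high_freq=100):
    """Return blocks to additionally treat as high access frequency
    blocks: their average access time exceeds the threshold while
    their raw access frequency is below the high-frequency threshold.

    stats: dict mapping block id -> (access_frequency, avg_access_time)
    """
    return [b for b, (freq, avg_t) in stats.items()
            if avg_t > avg_time_threshold and freq < high_freq]
```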
4. The method of claim 1, wherein migrating data stored in the high access frequency block to the low access frequency block comprises:
opening up a data exchange space in a system memory;
and completing data exchange between the high access frequency block and the low access frequency block by utilizing the data exchange space.
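The exchange of claim 4 can be pictured as swapping two equal-sized blocks through a temporary buffer allocated in system memory. This is a sketch under that assumption, modeling the cache as a dict of in-memory byte buffers; it is not the claimed implementation:

```python
def exchange_blocks(cache, hot_block, cold_block):
    """Swap the data of a high access frequency block and a low access
    frequency block through a temporary data exchange space.

    cache: dict mapping block id -> bytearray of that block's data,
    where both blocks hold the same number of bytes.
    """
    # "Open up" a data exchange space in memory: save the hot data aside.
    swap_space = bytes(cache[hot_block])
    # Move the cold block's data into the hot block's slot...
    cache[hot_block][:] = cache[cold_block]
    # ...then the saved hot data into the cold block's slot.
    cache[cold_block][:] = swap_space
```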
5. The method according to claim 4, wherein migrating the data stored in the high access frequency block to the low access frequency block further comprises, after completing the data exchange between the high access frequency block and the low access frequency block by using the data exchange space:
and updating the hash according to the data storage block after the data exchange is completed.
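After the exchange, the hash that maps logical data to physical block locations must be updated so that later lookups resolve to the new storage positions. A minimal sketch, assuming the hash is a dict from logical id to block location (names hypothetical):

```python
def update_hash(block_index, logical_id_a, logical_id_b):
    """Swap the two entries in the location hash after their
    underlying blocks have exchanged data, keeping lookups correct."""
    block_index[logical_id_a], block_index[logical_id_b] = (
        block_index[logical_id_b], block_index[logical_id_a])
    return block_index
```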
6. A cache dynamic access device, comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of dynamic access of a cache according to any of claims 1 to 5.
7. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when executed, implements a method for dynamic access of a cache according to any of claims 1 to 5.
CN201811457878.2A 2018-11-30 2018-11-30 Dynamic access method, device and storage medium for cache Active CN109558337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811457878.2A CN109558337B (en) 2018-11-30 2018-11-30 Dynamic access method, device and storage medium for cache

Publications (2)

Publication Number Publication Date
CN109558337A CN109558337A (en) 2019-04-02
CN109558337B true CN109558337B (en) 2023-09-19

Family

ID=65868365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811457878.2A Active CN109558337B (en) 2018-11-30 2018-11-30 Dynamic access method, device and storage medium for cache

Country Status (1)

Country Link
CN (1) CN109558337B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11720500B2 (en) * 2021-09-03 2023-08-08 International Business Machines Corporation Providing availability status on tracks for a host to access from a storage controller cache
US11726913B2 (en) 2021-09-03 2023-08-15 International Business Machines Corporation Using track status information on active or inactive status of track to determine whether to process a host request on a fast access channel

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101788995A (en) * 2009-12-31 2010-07-28 成都市华为赛门铁克科技有限公司 Hotspot data identification method and device
CN102117248A (en) * 2011-03-09 2011-07-06 浪潮(北京)电子信息产业有限公司 Caching system and method for caching data in caching system
CN103392207A (en) * 2011-10-05 2013-11-13 Lsi公司 Self-journaling and hierarchical consistency for non-volatile storage
CN103500072A (en) * 2013-09-27 2014-01-08 华为技术有限公司 Data migration method and data migration device
CN105205014A (en) * 2015-09-28 2015-12-30 北京百度网讯科技有限公司 Data storage method and device
CN105808443A (en) * 2014-12-29 2016-07-27 华为技术有限公司 Data migration method, apparatus and system
CN106371762A (en) * 2016-08-19 2017-02-01 浪潮(北京)电子信息产业有限公司 Optimization method and system of storage data
CN107491272A (en) * 2017-09-29 2017-12-19 郑州云海信息技术有限公司 A kind of method, apparatus of Data Migration, equipment and storage medium
CN108228110A (en) * 2018-01-31 2018-06-29 网宿科技股份有限公司 A kind of method and apparatus for migrating resource data
CN108519862A (en) * 2018-03-30 2018-09-11 百度在线网络技术(北京)有限公司 Storage method, device, system and the storage medium of block catenary system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9760310B2 (en) * 2015-08-06 2017-09-12 International Business Machines Corporation High performance data storage management using dynamic compression


Similar Documents

Publication Publication Date Title
US10430338B2 (en) Selectively reading data from cache and primary storage based on whether cache is overloaded
US8972690B2 (en) Methods and apparatuses for usage based allocation block size tuning
CN108334284B (en) Tail delay perception foreground garbage collection algorithm
US9591096B2 (en) Computer system, cache control method, and server
US8667247B2 (en) Volume record data set optimization apparatus and method
EP4020153A1 (en) Cache space management method and device
CN110737399B (en) Method, apparatus and computer program product for managing a storage system
CN109558337B (en) Dynamic access method, device and storage medium for cache
CN107544926B (en) Processing system and memory access method thereof
CN106649145A (en) Self-adaptive cache strategy updating method and system
CN106170757A (en) A kind of date storage method and device
US20170090755A1 (en) Data Storage Method, Data Storage Apparatus and Solid State Disk
US11593268B2 (en) Method, electronic device and computer program product for managing cache
US20200210219A1 (en) Storage control method and storage controller for user individual service environment
CN113127382A (en) Data reading method, device, equipment and medium for additional writing
CN113094392A (en) Data caching method and device
US11341055B2 (en) Method, electronic device, and computer program product for storage management
WO2023165543A1 (en) Shared cache management method and apparatus, and storage medium
US10387330B1 (en) Less recently and frequently used (LRAFU) cache replacement policy
EP4044039A1 (en) Data access method and apparatus, and storage medium
CN110658999B (en) Information updating method, device, equipment and computer readable storage medium
US12050539B2 (en) Data access method and apparatus and storage medium
US11435956B2 (en) Method, electronic device, and computer program product for data compression
US20240211154A1 (en) Method, device, and computer program product for de-duplicating data
CN115639949A (en) Method, apparatus and computer program product for managing a storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant