CN112631521A - Method, system, equipment and medium for controlling water level of cache pool

Info

Publication number
CN112631521A
Authority
CN
China
Prior art keywords: data, cache pool, speed, pool, water level
Legal status: Granted
Application number
CN202011565584.9A
Other languages
Chinese (zh)
Other versions
CN112631521B
Inventor
李吉龙
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202011565584.9A
Publication of CN112631521A
Application granted
Publication of CN112631521B
Current legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0644 Management of space entities, e.g. partitions, extents, pools
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method, a system, equipment, and a storage medium for controlling the water level of a cache pool. The method comprises the following steps: deploying a distributed storage cluster and creating a high-speed cache pool and a low-speed storage pool in the cluster; in response to receiving data, determining whether the number of read/write operations on the data per unit time exceeds a first threshold; in response to the number of read/write operations per unit time not exceeding the first threshold, writing the data to the corresponding location of the cache pool according to its dirty data range and determining whether the water level of the cache pool exceeds a second threshold; and in response to the water level of the cache pool exceeding the second threshold, determining a corresponding flush speed from the current water level of the cache pool and flushing data in the cache pool to the low-speed storage pool at that speed, in descending order of dirty data range. The invention flushes non-hot data according to its dirty data range, so that hot data remains in the cache pool while the cache pool is kept at a normal water level.

Description

Method, system, equipment and medium for controlling water level of cache pool
Technical Field
The present invention relates to the field of distributed clusters, and more particularly, to a method, a system, a computer device, and a readable medium for controlling a water level of a cache pool.
Background
When a storage system must be safe and reliable while its cost is limited, it is extremely important to obtain the maximum storage capacity from limited resources and to fully exploit the performance of the underlying hardware. Tiered (hierarchical) data storage is exactly the technique that achieves this.
Tiered data storage generally adopts a strategy of tiered storage pools: hot data that users access at high frequency is placed in a cache pool, while non-hot data accessed at low frequency is stored in an ordinary data pool with lower performance but larger capacity. The cache pool typically uses high-performance, small-capacity, but relatively expensive NVMe, SSD, or NVDIMM devices as storage media, whereas the ordinary data pool typically uses slower, high-capacity, but relatively cheap HDDs as storage media.
Because of the product cost and capacity of high-speed storage media, the capacity of the cache pool cannot be very large; for storage performance, however, the cache pool needs to hold as much hot data as possible and evict non-hot data. The prior art lacks a method that keeps high-frequency data in the cache pool while evicting non-hot data to the slow storage pool as far as possible, so the cache pool in the prior art often runs at a high water level and cannot deliver its normal performance.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a method, a system, a computer device, and a computer-readable storage medium for controlling the water level of a cache pool. Hot data and non-hot data are distinguished by the number of read/write operations per unit time, the non-hot data are grouped by their dirty data range, and a flush is performed when the cache pool water level is too high, so that hot data stays in the cache pool, the water level is kept in a healthy state, and the performance of the distributed cluster is improved.
Based on the above object, an aspect of the embodiments of the present invention provides a method for controlling the water level of a cache pool, comprising the following steps: deploying a distributed storage cluster and creating a high-speed cache pool and a low-speed storage pool in the distributed cluster; in response to receiving data, determining whether the number of read/write operations on the data per unit time exceeds a first threshold; in response to the number of read/write operations per unit time not exceeding the first threshold, writing the data to the corresponding location of the cache pool according to its dirty data range and determining whether the water level of the cache pool exceeds a second threshold; and in response to the water level of the cache pool exceeding the second threshold, determining a corresponding flush speed according to the current water level of the cache pool and flushing data in the cache pool to the low-speed storage pool at that speed, in descending order of dirty data range.
In some embodiments, the writing the data to the corresponding location of the cache pool according to the range of dirty data of the data includes: the range of dirty data for the data is recorded by object metadata and the object metadata is inserted into the LRU linked list of the cache pool.
In some embodiments, said inserting said object metadata into an LRU linked list of said cache pool comprises: in response to a range of dirty data of the data being greater than or equal to a third threshold, writing the data to a first sub-region of a non-hot region of the LRU linked list; in response to the range of dirty data of the data being less than a third threshold and greater than or equal to a fourth threshold, writing the data to a second sub-region of a non-hot-spot region of the LRU linked list; and in response to the range of dirty data for the data being less than a fourth threshold, writing the data to a third sub-region of the non-hot-spot region of the LRU linked list.
In some embodiments, the method further comprises: and in response to the number of times of reading and writing the data in the unit time exceeding a first threshold, writing the data into a hot spot area of the LRU linked list.
In some embodiments, flushing data in the cache pool to the low-speed storage pool at the flush speed in descending order of dirty data range comprises: flushing the data of the first sub-area to the low-speed storage pool; and, in response to the data of the first sub-area having been flushed, flushing the data of the second sub-area to the low-speed storage pool while detecting in real time whether new data has been written to the first sub-area.
In some embodiments, determining a corresponding flush speed according to the current water level of the cache pool comprises: in response to the water level of the cache pool exceeding the second threshold but not exceeding a fifth threshold, flushing at a first flush speed with a first flush interval; and, in response to the water level of the cache pool exceeding the fifth threshold, increasing the first flush speed and decreasing the first flush interval.
In some embodiments, determining a corresponding flush speed according to the current water level of the cache pool comprises: in response to the performance data of the distributed storage cluster not being within a preset range, increasing the first flush speed and decreasing the first flush interval.
In another aspect of the embodiments of the present invention, a system for controlling the water level of a cache pool is also provided, comprising: a creating module configured to deploy a distributed storage cluster and create a high-speed cache pool and a low-speed storage pool in the distributed cluster; a judging module configured to determine, in response to receiving data, whether the number of read/write operations on the data per unit time exceeds a first threshold; a writing module configured to, in response to the number of read/write operations per unit time not exceeding the first threshold, write the data to the corresponding location of the cache pool according to its dirty data range and determine whether the water level of the cache pool exceeds a second threshold; and a flush module configured to, in response to the water level of the cache pool exceeding the second threshold, determine a corresponding flush speed according to the current water level of the cache pool and flush data in the cache pool to the low-speed storage pool at that speed, in descending order of dirty data range.
In another aspect of the embodiments of the present invention, there is also provided a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the method as above.
In a further aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, which stores a computer program that, when executed by a processor, implements the above method steps.
The invention has the following beneficial technical effects: hot data and non-hot data are distinguished by the number of read/write operations per unit time, the non-hot data are grouped by their dirty data range, and the non-hot data are flushed when the water level of the cache pool is too high, so that hot data stays in the cache pool, the water level of the cache pool is kept in a healthy state, and the performance of the distributed cluster is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other embodiments from them without creative effort.
FIG. 1 is a diagram illustrating an embodiment of a method for controlling a water level of a cache pool according to the present invention;
FIG. 2 is a schematic diagram of a hardware structure of an embodiment of a computer device for controlling a cache pool level according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name but are not the same. "First" and "second" are used merely for convenience of description and should not be construed as limiting the embodiments of the present invention, and this will not be repeated in the following embodiments.
In view of the above, according to a first aspect of the embodiments of the present invention, an embodiment of a method for controlling a cache pool level is provided. Fig. 1 is a schematic diagram illustrating an embodiment of a method for controlling a cache pool level according to the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
s1, deploying the distributed storage cluster, and respectively creating a high-speed cache pool and a low-speed storage pool in the distributed storage cluster;
s2, responding to the received data, and judging whether the read-write times of the data in unit time exceed a first threshold value;
s3, in response to the fact that the number of times of reading and writing data in unit time does not exceed a first threshold, writing the data into a corresponding position of a cache pool according to the range of dirty data of the data, and judging whether the water level of the cache pool exceeds a second threshold; and
and S4, responding to the fact that the water level of the cache pool exceeds a second threshold value, determining a corresponding brushing-down speed according to the current water level of the cache pool, and brushing down the data in the cache pool to the low-speed storage pool from large to small according to the range of the dirty data according to the brushing-down speed.
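The four steps above can be made concrete with a short Python sketch. This is an illustrative sketch only and not the patent's implementation: the CachePool class, the threshold values, and the simplified water-level model (fraction of capacity occupied) are assumptions introduced here, and the flush speed and interval control of step S4 is omitted (a separate sketch of it is given further below).

# Illustrative sketch of steps S1-S4; names, thresholds and the water-level
# model are assumptions made for clarity, not taken from the patent text.

FIRST_THRESHOLD = 10      # reads/writes per unit time above which data counts as hot (example value)
SECOND_THRESHOLD = 0.60   # low water mark of the cache pool (60% in the described embodiment)


class CachePool:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.hot, self.non_hot = {}, {}   # object id -> dirty data range (bytes)

    def water_level(self):
        return self.used / self.capacity

    def write(self, obj_id, dirty_range, rw_per_unit_time):
        """S2/S3: route the object by hotness, i.e. by read/write count per unit time."""
        target = self.hot if rw_per_unit_time > FIRST_THRESHOLD else self.non_hot
        target[obj_id] = dirty_range
        self.used += dirty_range

    def flush(self, slow_pool):
        """S4: flush non-hot objects, largest dirty data range first, until the level is healthy."""
        for obj_id in sorted(self.non_hot, key=self.non_hot.get, reverse=True):
            if self.water_level() <= SECOND_THRESHOLD:
                break
            dirty_range = self.non_hot.pop(obj_id)
            slow_pool[obj_id] = dirty_range   # write back to the low-speed storage pool
            self.used -= dirty_range


slow_pool = {}
cache = CachePool(capacity_bytes=100)
cache.write("obj-a", dirty_range=50, rw_per_unit_time=3)    # non-hot data
cache.write("obj-b", dirty_range=30, rw_per_unit_time=25)   # hot data, never flushed
if cache.water_level() > SECOND_THRESHOLD:                  # 0.8 > 0.6, so flush
    cache.flush(slow_pool)

In this toy model only whole objects are written back and the water level is simply the occupied fraction of the cache capacity; the real embodiment additionally throttles the write-back through the flush speed and flush interval described later.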
The embodiment of the present invention adopts two tiers of storage pools: a high-speed Cache Pool is deployed on high-performance storage media such as NVMe disks and stores the user's hot data, while a low-speed storage pool (Slow Pool) is deployed on large-capacity storage media such as HDDs. In this embodiment, the Cache Pool may by default use a replica mode and the low-speed storage pool may use an erasure-coded pool; the Cache Pool is invisible to users while the low-speed storage pool is visible to users; the metadata of a single object is managed by an Onode, and all metadata are cached in the Cache Pool by default.
A distributed storage cluster is deployed, and the cache pool and the low-speed storage pool are created in the distributed cluster: two storage pools are created, where the storage medium of the Cache Pool is a high-speed medium such as NVMe and the storage medium of the Slow Pool may be a low-speed storage medium such as an HDD.
In response to receiving data, it is determined whether the number of read/write operations on the data per unit time exceeds the first threshold. In response to the number of read/write operations per unit time not exceeding the first threshold, the data is written to the corresponding location of the cache pool according to its dirty data range, and it is determined whether the water level of the cache pool exceeds the second threshold.
In some embodiments, the writing the data to the corresponding location of the cache pool according to the range of dirty data of the data includes: the range of dirty data for the data is recorded by object metadata and the object metadata is inserted into the LRU linked list of the cache pool.
In some embodiments, said inserting said object metadata into an LRU linked list of said cache pool comprises: in response to a range of dirty data of the data being greater than or equal to a third threshold, writing the data to a first sub-region of a non-hot region of the LRU linked list; in response to the range of dirty data of the data being less than a third threshold and greater than or equal to a fourth threshold, writing the data to a second sub-region of a non-hot-spot region of the LRU linked list; and in response to the range of dirty data for the data being less than a fourth threshold, writing the data to a third sub-region of the non-hot-spot region of the LRU linked list.
In some embodiments, the method further comprises: and in response to the number of times of reading and writing the data in the unit time exceeding a first threshold, writing the data into a hot spot area of the LRU linked list.
In order to accurately control data caching, the LRU linked list is designed as follows (a sketch follows the list below):
m_hot_lru: the hot-spot area, used for storing hot data; object data added to this area is not flushed;
m_dirty_big_lru: the first sub-area of the non-hot-spot area, used for storing data whose dirty data amount exceeds dirty_big_io; this data is processed first during flushing;
m_dirty_mid_lru: the second sub-area of the non-hot-spot area, used for storing data whose dirty data amount is less than dirty_big_io but exceeds dirty_mid_io;
m_dirty_small_lru: the third sub-area of the non-hot-spot area, used for storing data whose dirty data amount is less than dirty_mid_io.
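A minimal sketch of this LRU layout follows. It is an assumption-laden illustration: the dirty_big_io and dirty_mid_io values are invented example numbers (the patent does not fix them), and a Python OrderedDict stands in for each linked list.

from collections import OrderedDict

DIRTY_BIG_IO = 4 * 1024 * 1024   # assumed example value for the "big" boundary (third threshold), in bytes
DIRTY_MID_IO = 512 * 1024        # assumed example value for the "mid" boundary (fourth threshold), in bytes

# One recency-ordered map per region; keys are object ids, values are dirty data ranges.
m_hot_lru = OrderedDict()
m_dirty_big_lru = OrderedDict()
m_dirty_mid_lru = OrderedDict()
m_dirty_small_lru = OrderedDict()


def insert_object_metadata(obj_id, dirty_range, is_hot):
    """Insert an object's metadata into the LRU region matching its dirty data range."""
    if is_hot:
        region = m_hot_lru            # hot-spot area: never flushed
    elif dirty_range >= DIRTY_BIG_IO:
        region = m_dirty_big_lru      # first sub-area: flushed first
    elif dirty_range >= DIRTY_MID_IO:
        region = m_dirty_mid_lru      # second sub-area
    else:
        region = m_dirty_small_lru    # third sub-area
    region[obj_id] = dirty_range
    region.move_to_end(obj_id)        # keep the most recently used entry at the tail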
In response to the water level of the cache pool exceeding the second threshold, a corresponding flush speed is determined according to the current water level of the cache pool, and data in the cache pool is flushed to the low-speed storage pool at that speed, in descending order of dirty data range. To control the water level of the cache pool, three water level levels may be defined for it: high water level (80%), medium water level (70%), and low water level (60%). The second threshold may be 60%; when the water level of the cache pool exceeds the second threshold, the data in the cache pool needs to be flushed.
In some embodiments, flushing data in the cache pool to the low-speed storage pool at the flush speed in descending order of dirty data range comprises: flushing the data of the first sub-area to the low-speed storage pool; and, in response to the data of the first sub-area having been flushed, flushing the data of the second sub-area to the low-speed storage pool while detecting in real time whether new data has been written to the first sub-area. Each flush obtains a certain number of Onodes from the LRU: by default they are taken from m_dirty_big_lru first, and when m_dirty_big_lru does not hold enough, they are taken from m_dirty_mid_lru or m_dirty_small_lru. If data belonging to the first sub-area is written while the data of the second sub-area is being flushed, that data is flushed first.
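The flush ordering described above can be sketched as follows. The batch size and the OrderedDict representation continue the assumptions of the previous sketch and are not taken from the patent.

from collections import OrderedDict


def pick_flush_batch(big_lru, mid_lru, small_lru, batch_size=8):
    """Select up to batch_size objects to flush, taking the first sub-area first."""
    batch = []
    for region in (big_lru, mid_lru, small_lru):
        while region and len(batch) < batch_size:
            obj_id, dirty_range = region.popitem(last=False)   # oldest entry of the region first
            batch.append((obj_id, dirty_range))
        if len(batch) == batch_size:
            break
    return batch


# Example: the first sub-area is drained before the second one is touched.
big = OrderedDict(a=8_000_000, b=6_000_000)
mid = OrderedDict(c=1_000_000)
small = OrderedDict(d=4_096)
print(pick_flush_batch(big, mid, small, batch_size=3))
# -> [('a', 8000000), ('b', 6000000), ('c', 1000000)]

Because every batch is taken starting from m_dirty_big_lru again, data that lands in the first sub-area while the second sub-area is being flushed is naturally picked up first on the next round, which is the priority behaviour described above.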
In some embodiments, determining a corresponding flush speed according to the current water level of the cache pool comprises: in response to the water level of the cache pool exceeding the second threshold but not exceeding a fifth threshold, flushing at a first flush speed with a first flush interval; and, in response to the water level of the cache pool exceeding the fifth threshold, increasing the first flush speed and decreasing the first flush interval. The fifth threshold may be 70%; when the water level exceeds the fifth threshold, the flush speed must be increased and the flush interval decreased to bring the water level down as quickly as possible.
In some embodiments, determining a corresponding flush speed according to the current water level of the cache pool comprises: in response to the performance data of the distributed storage cluster not being within a preset range, increasing the first flush speed and decreasing the first flush interval. If the performance data of the distributed cluster is not within the preset range, the water level of the cache pool is already affecting the performance of the distributed cluster, so the water level needs to be lowered as soon as possible; the flush speed can therefore be further increased and the first flush interval further decreased.
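The water-level-driven rate control of the last two paragraphs can be sketched as below. Only the 60%, 70%, and 80% watermarks come from the described embodiment; the base speed, the base interval, and the doubling and halving factors are assumptions chosen purely for illustration.

LOW_WATER = 0.60    # second threshold: start flushing above this level
MID_WATER = 0.70    # fifth threshold: flush more aggressively above this level
HIGH_WATER = 0.80   # high water mark


def pick_flush_rate(water_level, perf_within_range,
                    base_speed_mb_s=200.0, base_interval_s=1.0):
    """Return (flush speed in MB/s, flush interval in seconds) for the current water level."""
    if water_level <= LOW_WATER:
        return 0.0, None                            # healthy water level: no flushing needed
    speed, interval = base_speed_mb_s, base_interval_s
    if water_level > MID_WATER:                     # above the fifth threshold
        speed, interval = speed * 2, interval / 2
    if water_level > HIGH_WATER or not perf_within_range:
        # high water mark reached, or cluster performance already outside the preset range
        speed, interval = speed * 2, interval / 2
    return speed, interval


print(pick_flush_rate(0.65, perf_within_range=True))    # (200.0, 1.0)
print(pick_flush_rate(0.75, perf_within_range=True))    # (400.0, 0.5)
print(pick_flush_rate(0.85, perf_within_range=False))   # (800.0, 0.25)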
It should be particularly noted that the steps in the above embodiments of the method for controlling the cache pool water level can be interchanged, replaced, added, or deleted; therefore, such methods of controlling the cache pool water level obtained by reasonable permutation and combination also fall within the protection scope of the present invention, and the protection scope should not be limited to the described embodiments.
In view of the above object, according to a second aspect of the embodiments of the present invention, a system for controlling the water level of a cache pool is provided, comprising: a creating module configured to deploy a distributed storage cluster and create a high-speed cache pool and a low-speed storage pool in the distributed cluster; a judging module configured to determine, in response to receiving data, whether the number of read/write operations on the data per unit time exceeds a first threshold; a writing module configured to, in response to the number of read/write operations per unit time not exceeding the first threshold, write the data to the corresponding location of the cache pool according to its dirty data range and determine whether the water level of the cache pool exceeds a second threshold; and a flush module configured to, in response to the water level of the cache pool exceeding the second threshold, determine a corresponding flush speed according to the current water level of the cache pool and flush data in the cache pool to the low-speed storage pool at that speed, in descending order of dirty data range.
In some embodiments, the write module is configured to: the range of dirty data for the data is recorded by object metadata and the object metadata is inserted into the LRU linked list of the cache pool.
In some embodiments, the write module is configured to: in response to a range of dirty data of the data being greater than or equal to a third threshold, writing the data to a first sub-region of a non-hot region of the LRU linked list; in response to the range of dirty data of the data being less than a third threshold and greater than or equal to a fourth threshold, writing the data to a second sub-region of a non-hot-spot region of the LRU linked list; and in response to the range of dirty data for the data being less than a fourth threshold, writing the data to a third sub-region of the non-hot-spot region of the LRU linked list.
In some embodiments, the system further comprises: and the second writing module is configured to write the data into the hot spot area of the LRU linked list in response to the number of times of reading and writing the data in the unit time exceeding a first threshold value.
In some embodiments, the flush module is configured to: flush the data of the first sub-area to the low-speed storage pool; and, in response to the data of the first sub-area having been flushed, flush the data of the second sub-area to the low-speed storage pool while detecting in real time whether new data has been written to the first sub-area.
In some embodiments, the flush module is configured to: in response to the water level of the cache pool exceeding the second threshold but not exceeding a fifth threshold, flush at a first flush speed with a first flush interval; and, in response to the water level of the cache pool exceeding the fifth threshold, increase the first flush speed and decrease the first flush interval.
In some embodiments, the flush module is configured to: in response to the performance data of the distributed storage cluster not being within a preset range, increase the first flush speed and decrease the first flush interval.
In view of the above object, a third aspect of the embodiments of the present invention provides a computer device, comprising: at least one processor; and a memory storing computer instructions executable on the processor, the instructions being executable by the processor to perform the following steps: S1, deploying a distributed storage cluster and creating a high-speed cache pool and a low-speed storage pool in the distributed storage cluster; S2, in response to receiving data, determining whether the number of read/write operations on the data per unit time exceeds a first threshold; S3, in response to the number of read/write operations per unit time not exceeding the first threshold, writing the data to the corresponding location of the cache pool according to its dirty data range, and determining whether the water level of the cache pool exceeds a second threshold; and S4, in response to the water level of the cache pool exceeding the second threshold, determining a corresponding flush speed according to the current water level of the cache pool, and flushing data in the cache pool to the low-speed storage pool at that speed, in descending order of dirty data range.
In some embodiments, the writing the data to the corresponding location of the cache pool according to the range of dirty data of the data includes: the range of dirty data for the data is recorded by object metadata and the object metadata is inserted into the LRU linked list of the cache pool.
In some embodiments, said inserting said object metadata into an LRU linked list of said cache pool comprises: in response to a range of dirty data of the data being greater than or equal to a third threshold, writing the data to a first sub-region of a non-hot region of the LRU linked list; in response to the range of dirty data of the data being less than a third threshold and greater than or equal to a fourth threshold, writing the data to a second sub-region of a non-hot-spot region of the LRU linked list; and in response to the range of dirty data for the data being less than a fourth threshold, writing the data to a third sub-region of the non-hot-spot region of the LRU linked list.
In some embodiments, the steps further comprise: and in response to the number of times of reading and writing the data in the unit time exceeding a first threshold, writing the data into a hot spot area of the LRU linked list.
In some embodiments, flushing data in the cache pool to the low-speed storage pool at the flush speed in descending order of dirty data range comprises: flushing the data of the first sub-area to the low-speed storage pool; and, in response to the data of the first sub-area having been flushed, flushing the data of the second sub-area to the low-speed storage pool while detecting in real time whether new data has been written to the first sub-area.
In some embodiments, determining a corresponding flush speed according to the current water level of the cache pool comprises: in response to the water level of the cache pool exceeding the second threshold but not exceeding a fifth threshold, flushing at a first flush speed with a first flush interval; and, in response to the water level of the cache pool exceeding the fifth threshold, increasing the first flush speed and decreasing the first flush interval.
In some embodiments, determining a corresponding flush speed according to the current water level of the cache pool comprises: in response to the performance data of the distributed storage cluster not being within a preset range, increasing the first flush speed and decreasing the first flush interval.
Fig. 2 is a schematic diagram of a hardware structure of an embodiment of the computer apparatus for controlling a cache pool level according to the present invention.
Taking the apparatus shown in fig. 2 as an example, the apparatus includes a processor 301 and a memory 302, and may further include: an input device 303 and an output device 304.
The processor 301, the memory 302, the input device 303 and the output device 304 may be connected by a bus or other means, and fig. 2 illustrates the connection by a bus as an example.
The memory 302 is a non-volatile computer-readable storage medium, and can be used for storing non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the method for controlling the cache pool level in the embodiment of the present application. The processor 301 executes various functional applications of the server and data processing by running nonvolatile software programs, instructions and modules stored in the memory 302, that is, implements the method of controlling the cache pool level of the above-described method embodiment.
The memory 302 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the method of controlling the level of the cache pool, and the like. Further, the memory 302 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 302 optionally includes memory located remotely from processor 301, which may be connected to a local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 303 may receive information such as a user name and a password that are input. The output means 304 may comprise a display device such as a display screen.
Program instructions/modules corresponding to one or more methods of controlling cache pool level are stored in the memory 302 and, when executed by the processor 301, perform the method of controlling cache pool level in any of the above-described method embodiments.
Any embodiment of the computer apparatus for performing the method for controlling a cache pool level as described above may achieve the same or similar effects as any of the preceding method embodiments corresponding thereto.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the method as above.
Finally, it should be noted that, as those of ordinary skill in the art will appreciate, all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing related hardware. The program of the method for controlling the cache pool water level can be stored in a computer-readable storage medium, and when executed may include the processes of the above method embodiments. The storage medium of the program may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like. The embodiments of the computer program may achieve the same or similar effects as any of the above-described method embodiments.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples. Within the concept of the embodiments of the invention, technical features of the above embodiment or of different embodiments may also be combined, and there are many other variations of the different aspects of the embodiments described above that are not detailed for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like made within the spirit and principles of the embodiments of the present invention are intended to be included within the protection scope of the embodiments of the present invention.

Claims (10)

1. A method of controlling a cache pool water level, comprising the steps of:
deploying a distributed storage cluster, and respectively creating a high-speed cache pool and a low-speed storage pool in the distributed cluster;
in response to receiving data, determining whether the number of read/write operations on the data per unit time exceeds a first threshold;
in response to the number of read/write operations on the data per unit time not exceeding the first threshold, writing the data to the corresponding location of the cache pool according to the dirty data range of the data, and determining whether the water level of the cache pool exceeds a second threshold; and
in response to the water level of the cache pool exceeding the second threshold, determining a corresponding flush speed according to the current water level of the cache pool, and flushing the data in the cache pool to the low-speed storage pool at the flush speed, in descending order of dirty data range.
2. The method of claim 1, wherein writing the data to the corresponding location of the cache pool according to the range of dirty data of the data comprises:
the range of dirty data for the data is recorded by object metadata and the object metadata is inserted into the LRU linked list of the cache pool.
3. The method of claim 2, wherein the inserting the object metadata into the LRU linked list of the cache pool comprises:
in response to a range of dirty data of the data being greater than or equal to a third threshold, writing the data to a first sub-region of a non-hot region of the LRU linked list;
in response to the range of dirty data of the data being less than a third threshold and greater than or equal to a fourth threshold, writing the data to a second sub-region of a non-hot-spot region of the LRU linked list; and
in response to the range of dirty data for the data being less than a fourth threshold, writing the data to a third sub-area of the non-hot-spot area of the LRU linked list.
4. The method of claim 2, further comprising:
and in response to the number of times of reading and writing the data in the unit time exceeding a first threshold, writing the data into a hot spot area of the LRU linked list.
5. The method of claim 3, wherein flushing the data in the cache pool to the low-speed storage pool at the flush speed in descending order of dirty data range comprises:
flushing the data of the first sub-area to the low-speed storage pool; and
in response to the data of the first sub-area having been flushed, flushing the data of the second sub-area to the low-speed storage pool and detecting in real time whether new data has been written to the first sub-area.
6. The method of claim 1, wherein determining a corresponding flush speed according to the current water level of the cache pool comprises:
in response to the water level of the cache pool exceeding the second threshold and not exceeding a fifth threshold, flushing at a first flush speed with a first flush interval; and
in response to the water level of the cache pool exceeding the fifth threshold, increasing the first flush speed and decreasing the first flush interval.
7. The method of claim 6, wherein determining a corresponding flush speed according to the current water level of the cache pool comprises:
in response to the performance data of the distributed storage cluster not being within a preset range, increasing the first flush speed and decreasing the first flush interval.
8. A system for controlling a cache pool water level, comprising:
a creating module configured to deploy a distributed storage cluster and respectively create a high-speed cache pool and a low-speed storage pool in the distributed cluster;
a judging module configured to determine, in response to receiving data, whether the number of read/write operations on the data per unit time exceeds a first threshold;
a writing module configured to, in response to the number of read/write operations on the data per unit time not exceeding the first threshold, write the data to the corresponding location of the cache pool according to the dirty data range of the data, and determine whether the water level of the cache pool exceeds a second threshold; and
a flush module configured to, in response to the water level of the cache pool exceeding the second threshold, determine a corresponding flush speed according to the current water level of the cache pool, and flush the data in the cache pool to the low-speed storage pool at the flush speed, in descending order of dirty data range.
9. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011565584.9A, priority date 2020-12-25, filing date 2020-12-25: Method, system, equipment and medium for controlling water level of cache pool. Status: Active. Granted publication: CN112631521B.

Priority Applications (1)

CN202011565584.9A, priority date 2020-12-25, filing date 2020-12-25: Method, system, equipment and medium for controlling water level of cache pool


Publications (2)

CN112631521A, published 2021-04-09
CN112631521B, published 2023-01-06

Family

ID=75325332

Family Applications (1)

CN202011565584.9A (Active, granted as CN112631521B), priority date 2020-12-25, filing date 2020-12-25: Method, system, equipment and medium for controlling water level of cache pool

Country Status (1)

CN: CN112631521B


Also Published As

CN112631521B, 2023-01-06


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant