CN110209343A - Data storage method, device, server and storage medium - Google Patents

Data storage method, device, server and storage medium

Info

Publication number
CN110209343A
Authority
CN
China
Prior art keywords
data
cache unit
queue
storage
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810814260.0A
Other languages
Chinese (zh)
Other versions
CN110209343B (en)
Inventor
胡健鹰
陈光明
周可
王桦
程彬
肖志立
吉永光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Huazhong University of Science and Technology
Original Assignee
Tencent Technology Shenzhen Co Ltd
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co Ltd and Huazhong University of Science and Technology
Priority claimed from CN201810814260.0A
Publication of CN110209343A
Application granted
Publication of CN110209343B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 - Improving I/O performance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 - Configuration or reconfiguration of storage systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a data storage method, device, server and storage medium, belonging to the field of Internet technology. The method includes: when a write request for first data is received, querying the storage location of second data; when the second data is found to be stored in a first cache unit, storing the first data in a second cache unit; and returning a write result for the first data. The present invention stores data based on a first cache unit and a second cache unit. Compared with a data storage scheme that uses the second cache unit alone, this saves cost. For data already stored in the first cache unit, when that data is stored again it is written to the second cache unit, which not only increases data read/write speed but also reduces the number of writes to the first cache unit, avoiding damage to the first cache unit and improving data security. This approach balances the requirements of cost, data read/write speed and data security.

Description

Data storage method, device, server and storage medium
Technical field
The present invention relates to the field of Internet technology, and in particular to a data storage method, device, server and storage medium.
Background art
In the field of Internet technology, the data read/write speed of an SSD (Solid State Drive) is faster than that of an HDD (Hard Disk Drive). To meet the needs of users with higher requirements on read/write latency, public cloud vendors at home and abroad have tried to use SSDs, which have higher read/write capability, for caching, in the hope of improving the random read/write capability of public cloud storage.
Fig. 1 shows the architecture of a cloud storage system that stores data based on SSDs. Referring to Fig. 1, the cloud storage system includes a logical layer, a cache layer and a storage layer. The logical layer is responsible for handling and redirecting data write requests and read requests; the storage medium of the cache layer is SSD, which caches a small portion of the data in the public cloud storage; the storage medium of the storage layer is HDD, which stores most of the data in the public cloud storage. The data storage process using the cloud storage system shown in Fig. 1 is: when a write request for data is received, the data is written to the SSD.
Since an SSD supports only a limited number of data writes, and in the public cloud storage scenario both the amount of data stored in the public cloud storage and the volume of data access are very large, the SSDs will soon be worn out as SSD-based public cloud storage is used and will need to be replaced in large numbers later on. During SSD replacement, the data stored on the SSDs may be lost or leaked. Therefore, the existing data storage method has poor security.
Summary of the invention
In order to solve the problems in the prior art, embodiments of the present invention provide a data storage method, device, server and storage medium. The technical solutions are as follows:
In one aspect, a data storage method is provided, the method comprising:
when a write request for first data is received, querying a storage location of second data, the second data being stored data that has the same data identifier as the first data;
when the second data is found to be stored in a first cache unit, storing the first data in a second cache unit, the first cache unit supporting only a limited number of data writes;
returning a write result for the first data.
In another aspect, a data storage device is provided, the device comprising:
a query module, configured to query a storage location of second data when a write request for first data is received, the second data being stored data that has the same data identifier as the first data;
a storage module, configured to store the first data in a second cache unit when the second data is found to be stored in a first cache unit, the first cache unit supporting only a limited number of data writes;
a return module, configured to return a write result for the first data.
In another aspect, a server is provided. The server includes a processor and a memory, and the memory stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement the data storage method.
In another aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the data storage method.
The technical solutions provided by the embodiments of the present invention have the following beneficial effect:
data is stored based on a first cache unit and a second cache unit. Compared with a data storage scheme that uses the second cache unit alone, this saves cost. For data already stored in the first cache unit, when that data is stored again it is written to the second cache unit, which not only increases data read/write speed but also reduces the number of writes to the first cache unit, avoiding damage to the first cache unit and improving data security. This approach balances the requirements of cost, data read/write speed and data security.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is an architecture diagram of a cloud storage system that stores data based on SSDs;
Fig. 2 is an architecture diagram of a cloud storage system provided by an embodiment of the present invention;
Fig. 3 is a flowchart of a data storage method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a data write process provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a data read process provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a data storage device provided by an embodiment of the present invention;
Fig. 7 is a server for data storage according to an exemplary embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Before describing the present invention, the terms used herein are first explained.
An HDD is a common type of storage device, for example the C drive, D drive and other partitions of a computer disk.
An SSD, or solid-state drive, is a hard disk made of an array of solid-state electronic storage chips, consisting of a control unit and storage components (FLASH chips, DRAM chips).
RAM, or random access memory, is internal storage that exchanges data directly with the CPU (Central Processing Unit); it is also the main memory. RAM can be read and written at any time, is very fast, and can serve as a temporary data storage medium for the operating system or other running programs.
Cloud storage refers to a system that, through functions such as cluster applications, network technology or distributed file systems, aggregates a large number of storage devices of different types in a network so that they work together through application software, and that jointly provides data storage and service access to the outside. Cloud storage includes public cloud storage, private cloud storage and so on.
Architecture of the cloud storage system in the embodiments of the present invention
In the public cloud storage scenario, in order to balance the requirements of cost, data read/write speed and the security of stored data, an embodiment of the present invention provides a data storage method. Fig. 2 is an architecture diagram of the cloud storage system involved in this method. The cloud storage system exploits the advantages that HDD devices are inexpensive, SSD devices have fast read/write speed, and RAM devices have no limit on the number of data writes. Referring to Fig. 2, the cloud storage system includes a logical layer, a cache layer and a storage layer. The storage media of the cache layer are RAM and SSD, and the storage medium of the storage layer is still HDD. By analyzing the data access patterns of public cloud storage, the embodiment of the present invention sets the usage ratio of RAM, SSD and HDD to 1:50:1000, thereby striking a balance between enterprise cost and data read/write speed.
In the data storage process, the embodiment of the present invention can, by redirecting on the I/O path, filter out different types of data and store each type in a different storage medium. The data types include hot data, warm data and cold data. Hot data refers to data with a large read/write volume whose access time is close to the current time; warm data refers to data with a smaller read/write volume whose access time is farther from the current time; cold data refers to data that is rarely read or written and whose access time is far from the current time. Taking advantage of RAM's fast read/write speed and unlimited number of writes, the embodiment of the present invention stores hot data in RAM, which absorbs most of the write traffic, reduces the number of data writes to the SSD and extends the SSD's service life. Taking advantage of the SSD's faster read/write speed, warm data is stored in the SSD, further improving the response speed of the system. Taking advantage of the HDD's low price, cold data is stored in the HDD, reducing enterprise cost. A minimal sketch of this three-tier layout is given below.
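To make the tiering concrete, the following is a minimal Python sketch (not taken from the patent; the class and attribute names are illustrative assumptions) of the three storage media described above: an LRU-ordered RAM tier, a FIFO-ordered SSD tier and an HDD backing store, with capacities following the 1:50:1000 ratio.

```python
from collections import OrderedDict

class TieredStore:
    """Illustrative three-tier layout: RAM (hot, LRU), SSD (warm, FIFO), HDD (cold)."""

    def __init__(self, ram_slots=1, ssd_slots=50, hdd_slots=1000):
        # First data queue: hot data in RAM, ordered by recency of access (LRU).
        self.ram = OrderedDict()
        # Second data queue: warm data on the SSD, ordered by time of insertion (FIFO).
        self.ssd = OrderedDict()
        # Storage unit: cold data on the HDD.
        self.hdd = {}
        self.ram_slots, self.ssd_slots, self.hdd_slots = ram_slots, ssd_slots, hdd_slots

    def locate(self, key):
        """Cache-table lookup: report where a data identifier currently lives, if anywhere."""
        if key in self.ram:
            return "RAM"
        if key in self.ssd:
            return "SSD"
        if key in self.hdd:
            return "HDD"
        return None
```

In this sketch an OrderedDict plays the role of each queue: moving a key to the front models "storing at the head of the queue", and popping from the end models evicting the tail, which keeps the later sketches short.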
The data storage system provided by the embodiment of the present invention has the following advantages:
First, the caching approach that combines RAM and SSD solves the problem of the poor read/write capability of HDD-based cloud storage, giving HDD-based cloud storage SSD-level response speed and greatly improving user experience.
Second, the caching approach that combines RAM and SSD keeps the cost of cloud storage close to that of HDD, which is very inexpensive, while also saving enterprise operation and maintenance costs.
Third, based on the caching approach that combines RAM and SSD, hot data is stored in RAM and warm data in the SSD, which greatly reduces the data write traffic to the SSD, extends the service life of the SSD, further reduces enterprise operation and maintenance costs, and improves the maintainability of the system.
Data storage process based on a write request
An embodiment of the present invention provides a data storage method, which is applied to the cloud storage system shown in Fig. 2. The cloud storage system includes a first cache unit and a second cache unit. The first cache unit supports only a limited number of data writes, the cost of the first cache unit is lower than that of the second cache unit, and the data read/write speed of the first cache unit is slower than that of the second cache unit. Referring to Fig. 3, the method provided by the embodiment of the present invention includes:
301. When a write request for first data is received, the server queries the storage location of second data. If the second data is found to be stored in the first cache unit, step 302 is performed; if the second data is found to be stored in the second cache unit, step 303 is performed; if the second data is found in neither the first cache unit nor the second cache unit, step 304 is performed.
In the cloud storage scenario, when a user wants to store first data on the server for business reasons, the user may send a write request for the first data to the server through a terminal. The write request includes a data identifier, a user identifier and so on. The data identifier may be a data name and is used to distinguish different data. The user identifier may be the login account of the cloud storage system and is used to distinguish data processing requests from different users. When the write request is received, the server queries the storage location of the second data based on the data identifier and then stores the first data according to the storage location of the second data. The second data is stored data that has the same data identifier as the first data; specifically, the second data may be the same data as the first data, or data of a different version that has the same data identifier.
To facilitate querying stored data, the server in the embodiment of the present invention maintains a cache table, which stores the correspondence between data identifiers and storage locations in the cache units, where the cache units include the first cache unit and the second cache unit. The first cache unit and the second cache unit have the following characteristics:
First, the cost of the first cache unit is lower than the cost of the second cache unit;
Second, the data read/write speed of the first cache unit is slower than that of the second cache unit;
Third, the first cache unit supports only a limited number of data writes, while the second cache unit has no limit on the number of data writes.
In view of the above characteristics of the first cache unit and the second cache unit, the embodiment of the present invention can, when storing data, save enterprise cost and improve data read/write speed while extending the service life of the first cache unit and improving the security of the data stored on it. In practical applications, the first cache unit may be an SSD, and the second cache unit may be RAM, 3D XPoint or the like.
In the cloud storage scenario, when data is stored using the cloud storage system, the server preferentially stores data in the cache units. Based on this mechanism, when the write request for the first data is received, the server, when querying the storage location of the second data, may first check whether the data identifier is stored in the cache table. If the data identifier is in the cache table, the server obtains the storage location of the second data by looking up the correspondence between storage locations in the cache units and data identifiers.
The first data may be data written for the first time, or data that has been written before. When the first data is written for the first time, no second data with the same data identifier is stored on the server, and no storage location of second data can be found. When the first data is not written for the first time, the server can find the storage location of the second data by querying the cache table, and there are two possibilities: the second data is stored in the first cache unit, or the second data is stored in the second cache unit. For these three query results, the subsequent operations of the embodiment of the present invention differ: if the second data is found in the first cache unit, step 302 is performed; if the second data is found in the second cache unit, step 303 is performed; if the second data is found in neither the first cache unit nor the second cache unit, step 304 is performed.
302. The server stores the first data in the second cache unit.
In one business scenario, the second data is data written for the first time and is stored in the first cache unit. In this scenario, when the write request for the first data is received, the server finds that the second data is stored in the first cache unit, and therefore stores the first data in the second cache unit.
In another business scenario, the second data is not data written for the first time; during business processing, because its access time is far from the current time, the server has transferred it from the second cache unit to the first cache unit. In this scenario, when the write request for the first data is received, the server finds that the second data is stored in the first cache unit, and therefore stores the first data in the second cache unit.
Specifically, when the second data is stored in the first cache unit, the server stores the first data in the second cache unit through the following steps:
3021. The server detects whether there is a remaining storage location in the second cache unit.
Since the storage locations of the second cache unit are limited, in order to prevent the second cache unit from becoming unable to store data because it is full, the server needs to detect the remaining storage locations of the second cache unit before storing the first data in it.
3022. When there is a remaining storage location in the second cache unit, the server stores the first data at the head of the first data queue.
The first data queue is used to store the data in the second cache unit and follows the LRU (Least Recently Used) storage principle. The LRU principle sorts data by most recent access time: the data whose access time is closest to the current time is stored at the head of the queue, the data whose access time is farthest from the current time is stored at the tail of the queue, and when the queue is full, the data whose access time is farthest from the current time is evicted first. Among the data in the first data queue, the access time of the first data currently being written is closest to the current time; therefore, based on the LRU principle, the server stores the first data at the head of the first data queue.
3023. When there is no remaining storage location in the second cache unit, the server transfers the tail data of the first data queue to the head of the second data queue.
To ensure that hot data is concentrated in the second cache unit and thereby improve data read/write speed, when it is determined that there is no remaining storage location in the second cache unit, the server transfers the tail data of the first data queue to the head of the second data queue, and then stores the first data at the head of the first data queue. The second data queue is used to store the data in the first cache unit.
Since the storage locations of the first cache unit are also limited, in order to prevent the first cache unit from becoming unable to store data because it is full, the server may use the following steps 30231 to 30233 when transferring the tail data of the first data queue to the head of the second data queue:
30231. The server detects whether there is a remaining storage location in the first cache unit.
30232. When there is a remaining storage location in the first cache unit, the server transfers the tail data of the first data queue to the head of the second data queue.
It should be noted that "transfer" in all embodiments of the present invention means copying data from a source storage unit to a destination storage unit and then deleting the data from the source storage unit. Specifically, the server may store the tail data of the first data queue at the head of the second data queue and delete that tail data from the first data queue.
In the embodiment of the present invention, the second data queue follows the FIFO (First In First Out) storage principle. The FIFO principle sorts data by storage time: the data whose storage time is closest to the current time is stored at the head of the queue, the data whose storage time is farthest from the current time is stored at the tail of the queue, and when the queue is full, the data whose storage time is farthest from the current time is evicted first. Among the data in the second data queue, the storage time of the tail data of the first data queue that is about to be written is closest to the current time; therefore, the server stores the tail data of the first data queue at the head of the second data queue.
30233. When there is no remaining storage location in the first cache unit, the server transfers the tail data of the second data queue to the storage unit, and transfers the tail data of the first data queue to the head of the second data queue.
The data read/write speed of the storage unit is slower than that of the first cache unit, and the data read/write speed of the first cache unit is slower than that of the second cache unit. Among the data in the second data queue, the storage time of the tail data of the second data queue is farthest from the current time; therefore, when there is no remaining storage location in the first cache unit, the server first transfers the tail data of the second data queue to the storage unit. The storage time of the tail data of the first data queue that is about to be written is closest to the current time, so the server stores it at the head of the second data queue.
By storing the first data at the head of the first data queue, this step concentrates hot data in the second cache unit and makes full use of its fast read/write speed: the next time the first data is written or read, it can be obtained quickly from the head of the first data queue, shortening the average waiting time for the first data and improving read/write speed. A sketch of this insertion and demotion logic is given below.
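Steps 3021 to 3023 and 30231 to 30233 can be sketched as follows. This is an illustrative Python sketch under the same assumptions as above (RAM and SSD modeled as OrderedDicts, HDD as a plain dict); function names such as put_hot are not from the patent.

```python
from collections import OrderedDict

RAM_SLOTS, SSD_SLOTS = 4, 8           # illustrative capacities
ram = OrderedDict()                   # first data queue (LRU): head = most recently accessed
ssd = OrderedDict()                   # second data queue (FIFO): head = most recently inserted
hdd = {}                              # storage unit (cold data)

def put_hot(key, value):
    """Step 302: store the first data at the head of the first data queue in RAM."""
    if key in ssd:
        del ssd[key]                  # invalidate the stale second data cached on the SSD
    if key not in ram and len(ram) >= RAM_SLOTS:
        demote_ram_tail()             # step 3023: RAM is full, demote its tail first
    ram[key] = value
    ram.move_to_end(key, last=False)  # step 3022: head of the LRU queue

def demote_ram_tail():
    """Steps 30231-30233: move the tail of the first data queue to the head of the
    second data queue, flushing the SSD's tail to the HDD if the SSD is also full."""
    if len(ssd) >= SSD_SLOTS:
        cold_key, cold_value = ssd.popitem(last=True)   # tail of the second data queue
        hdd[cold_key] = cold_value                      # transfer to the storage unit
    warm_key, warm_value = ram.popitem(last=True)       # tail of the first data queue
    ssd[warm_key] = warm_value
    ssd.move_to_end(warm_key, last=False)               # head of the second data queue
```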
Taking the case where the second cache unit is RAM and the first cache unit is an SSD as an example, the data storage process described in step 302 is illustrated below with several specific examples.
For example, in the cloud storage scenario, when a write request for second data with data identifier a is received for the first time, the server stores the second data in the SSD and records the storage location of the second data with identifier a in the cache table. During business processing, when a write request for first data with data identifier a is received, the server finds, by querying the cache table, that the second data is stored in the SSD. The server then stores the first data, which has the same data identifier as the second data, in RAM and deletes the second data from the SSD. When storing the first data in RAM, the server may detect whether there is a remaining storage location in the first data queue of the RAM. If there is, the first data is stored directly at the head of the first data queue. If there is not, the server detects whether there is a remaining storage location in the second data queue of the SSD: if there is, the tail data of the first data queue is transferred to the head of the second data queue and the first data is then stored at the head of the first data queue; if there is not, the tail data of the second data queue is transferred to the storage unit, the tail data of the first data queue is transferred to the head of the second data queue, and the first data is stored at the head of the first data queue.
As another example, in the cloud storage scenario, the server has, for business reasons, transferred the second data with data identifier a from RAM to the SSD. In this scenario, when a write request for first data with data identifier a is received, the server finds, by querying the cache table, that the second data is stored in the SSD. The server stores the first data, which has the same data identifier as the second data, in RAM and deletes the second data from the SSD. The way the server stores the first data in RAM is the same as in the above example and is not repeated here.
303. The server transfers the first data to the head of the first data queue.
Among the data in the first data queue, the access time of the first data currently being written is closest to the current time. Therefore, when the second data is found to be stored in the second cache unit, the server transfers the first data to the head of the first data queue. Specifically, when transferring the first data to the head of the first data queue, the server may store the first data at the storage location of the second data in the first data queue and then adjust the storage locations of the other data in the first data queue so that the first data ends up at the head of the first data queue; alternatively, the server may adjust the storage locations of the data in the first data queue so that the second data is at the head of the first data queue and then update the second data with the first data. Of course, other approaches may also be used, which are not specifically limited in the embodiment of the present invention.
Taking the case where the first cache unit is an SSD and the second cache unit is RAM as an example, the data storage process described in step 303 is illustrated below with a specific example.
For example, in the cloud storage scenario, second data with data identifier a is stored in the SSD when it is first written. During subsequent business processing, when a read request for the second data is received, the server transfers the second data to RAM and records the storage location of the second data with identifier a in the cache table. In this scenario, when a write request for first data with data identifier a is received, the server finds, by querying the cache table, that the second data is stored in RAM, and the server stores the first data at the head of the first data queue, as sketched below.
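Under the same assumptions, the adjustment of step 303 reduces to a single move-to-head operation (illustrative sketch, not the patent's code):

```python
from collections import OrderedDict

ram = OrderedDict()                    # first data queue (LRU), head = most recently accessed

def touch_hot(key, value):
    """Step 303: the second data already lives in RAM; overwrite it with the first
    data and move it to the head of the first data queue."""
    ram[key] = value
    ram.move_to_end(key, last=False)   # promote to the head of the LRU queue
```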
304. The server stores the first data in the first cache unit.
When the second data has been stored in the first cache unit for a long time and has been transferred by the server to the storage unit responsible for cold data, or when the first data is written for the first time and no second data with the same data identifier is stored on the server, the server, by querying the cache table, finds the second data in neither the first cache unit nor the second cache unit. In this case, the server stores the first data in the first cache unit.
Specifically, the server stores the first data in the first cache unit through the following steps:
3041. The server detects whether there is a remaining storage location in the first cache unit.
3042. When there is a remaining storage location in the first cache unit, the server stores the first data at the head of the second data queue.
3043. When there is no remaining storage location in the first cache unit, the server transfers the tail data of the second data queue to the storage unit and stores the first data at the head of the second data queue.
This step ensures that dirty data in the first cache unit (data to be flushed out of the first cache unit) is updated to the storage unit in time, thereby improving data security.
Taking the case where the first cache unit is an SSD as an example, the data storage process described in step 304 is illustrated below with a specific example.
For example, the first data is written for the first time, and no stored data with the same data identifier as the first data exists on the server. In this scenario, the server stores the first data at the head of the second data queue of the SSD, as sketched below.
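Step 304 can be sketched in the same style (illustrative Python; the SSD holds the FIFO-ordered second data queue and the HDD is the storage unit; the names are assumptions, not the patent's code):

```python
from collections import OrderedDict

SSD_SLOTS = 8                         # illustrative capacity
ssd = OrderedDict()                   # second data queue (FIFO), head = most recently inserted
hdd = {}                              # storage unit (cold data)

def put_warm(key, value):
    """Step 304: first-written data goes to the head of the second data queue on the
    SSD; if the SSD is full, its tail is flushed down to the HDD first (step 3043)."""
    if key not in ssd and len(ssd) >= SSD_SLOTS:
        cold_key, cold_value = ssd.popitem(last=True)  # tail of the second data queue
        hdd[cold_key] = cold_value                     # dirty data updated to the storage unit
    ssd[key] = value
    ssd.move_to_end(key, last=False)                   # head of the second data queue
```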
305. The server returns a write result for the first data.
After writing the first data to the corresponding storage location, the server also sets the data state of the second data to invalid, to avoid keeping two different copies of the data in the first cache unit and the second cache unit. So that the user learns the result of this data write request in time, the server returns a write result for the first data to the user. At this point, the data storage process based on the write request ends.
After the data storage is completed, the embodiment of the present invention also updates the cache table according to the current data storage state, so that subsequent data queries are performed based on the updated cache table, improving the accuracy of data queries.
Fig. 4 shows the data storage process based on a write request provided by the embodiment of the present invention. Taking the case where the first cache unit is an SSD, the second cache unit is RAM and the storage unit is an HDD as an example, the data storage process is as follows:
1. When a write request for first data is received, the server queries the cache table. If second data with the same data identifier as the first data is stored in RAM, step 2 is performed; if second data with the same data identifier as the first data is stored in the SSD, step 3 is performed; if no second data with the same data identifier as the first data is stored in the cache (RAM or SSD), step 6 is performed. The purpose of this step is to use the cache table to improve the performance of the cloud storage system.
2. The server writes the first data to RAM and moves it to the head of the first data queue in RAM. The purpose of this step is to adjust the storage order of the data in RAM based on the LRU principle so that the most recently accessed data is at the head of the first data queue, concentrating hot data in RAM and making full use of RAM's fast read/write speed to improve data read/write speed. Step 9 is then performed.
3. The server detects whether there is a remaining storage location in RAM. If there is, step 2 is performed; if there is not, step 4 is performed. The purpose of this step is to determine whether the tail data of the first data queue in RAM should be transferred to the SSD.
4. The server takes the tail data of the first data queue in RAM, stores it at the head of the second data queue of the SSD, and deletes that tail data from the first data queue in RAM. The purpose of this step is to demote the least-hot stored data to warm data when RAM is full.
5. The server writes the first data to the head of the first data queue in RAM and sets the data state of the copy cached in the SSD to invalid. The purpose of this step is to filter hot data out of the SSD and store it in RAM to improve data read/write speed, while avoiding keeping two different copies of the data in RAM and the SSD.
6. The server checks whether there is a remaining storage location in the SSD. If there is not, step 7 is performed; if there is, step 8 is performed. The purpose of this step is to determine whether the tail data of the second data queue in the SSD should be transferred to the HDD.
7. The server flushes the tail data of the second data queue in the SSD down to the HDD and deletes that tail data from the second data queue. The purpose of this step is to ensure that dirty data in the SSD is updated to the HDD, improving data security.
8. The server writes the tail data of the first data queue in RAM to the head of the second data queue of the SSD. The purpose of this step is to demote hot data in RAM to warm data when RAM is full, reserving storage locations for hot data.
9. The server returns a write result for the first data to the upper-layer user, and the process ends. The write path as a whole is sketched below.
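The per-step sketches above can be combined into one write path following the flow of Fig. 4. The following is a self-contained, illustrative Python sketch of how such a dispatcher could look; it is an interpretation of the figure rather than code from the patent, and its cache-miss branch stores the first data at the head of the SSD's queue as in step 304.

```python
from collections import OrderedDict

RAM_SLOTS, SSD_SLOTS = 4, 8           # illustrative capacities (the patent suggests 1:50:1000)
ram = OrderedDict()                   # first data queue (LRU), head = most recently accessed
ssd = OrderedDict()                   # second data queue (FIFO), head = most recently inserted
hdd = {}                              # storage unit (cold data)

def write(key, value):
    """Write path of Fig. 4: dispatch on where data with this identifier already lives."""
    if key in ram:                                      # step 1 -> step 2: hot hit
        ram[key] = value
        ram.move_to_end(key, last=False)
    elif key in ssd:                                    # step 1 -> steps 3-5: warm hit
        del ssd[key]                                    # step 5: invalidate the SSD copy
        if len(ram) >= RAM_SLOTS:                       # steps 3-4: make room in RAM
            _demote_ram_tail()
        ram[key] = value                                # step 5: keep the hot copy in RAM
        ram.move_to_end(key, last=False)
    else:                                               # step 1 -> steps 6-7: cache miss
        if len(ssd) >= SSD_SLOTS:
            cold_key, cold_value = ssd.popitem(last=True)
            hdd[cold_key] = cold_value                  # step 7: flush the dirty SSD tail
        ssd[key] = value                                # new data becomes warm data
        ssd.move_to_end(key, last=False)
    return "ok"                                         # step 9: report the write result

def _demote_ram_tail():
    """Steps 4 and 8: RAM tail goes to the SSD head, flushing the SSD tail to HDD if needed."""
    if len(ssd) >= SSD_SLOTS:
        cold_key, cold_value = ssd.popitem(last=True)
        hdd[cold_key] = cold_value
    warm_key, warm_value = ram.popitem(last=True)
    ssd[warm_key] = warm_value
    ssd.move_to_end(warm_key, last=False)

# Illustrative use: the second write of "a" finds it on the SSD and promotes it to RAM.
write("a", b"v1")    # first write  -> SSD (warm)
write("a", b"v2")    # second write -> RAM (hot), SSD copy invalidated
```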
Steps 301 to 305 above describe the data storage process based on a write request. An embodiment of the present invention also provides a data storage process based on a read request, which is as follows:
In the first step, when a read request for third data is received, the server obtains the third data.
The third data is any data stored on the server. When the read request for the third data is received, the server queries the storage location of the third data based on the cache table and then obtains the third data according to that storage location.
In the second step, the server adjusts the storage location of the third data and returns the third data.
After the third data is read, its most recent access time changes and its data type may change. Therefore, in order to improve the read/write efficiency of subsequent reads and writes of the third data, in this step the server also adjusts the storage location of the third data and, after the adjustment is completed, returns the third data to the upper-layer user through the network.
Specifically, when the server adjusts the storage location of the third data, the cases include but are not limited to the following:
In the first case, when the third data is stored in the storage unit, the server transfers the third data to the head of the second data queue.
In the embodiment of the present invention, the storage unit stores cold data. After the third data is read, it is upgraded from cold data to warm data, so the server needs to transfer it to the first cache unit. Considering that the storage space of the first cache unit is limited, the server transfers the third data to the head of the second data queue as follows:
A. The server detects whether there is a remaining storage location in the first cache unit.
B. When there is a remaining storage location in the first cache unit, the server transfers the third data to the head of the second data queue.
C. When there is no remaining storage location in the first cache unit, the server transfers the tail data of the second data queue to the storage unit, and transfers the third data to the head of the second data queue.
Taking the case where the first cache unit is an SSD and the storage unit is an HDD as an example, the data storage process in this case is as follows:
In the cloud storage scenario, the third data is stored in the HDD. When a read request for the third data is received, the server obtains the third data from the HDD, transfers it to the SSD, and returns it to the upper-layer user. When transferring the third data to the SSD, the server may detect whether there is a remaining storage location in the second data queue of the SSD. If there is, the third data is stored directly at the head of the second data queue; if there is not, the tail data of the second data queue is transferred to the HDD and the third data is then stored at the head of the second data queue.
In the second case, when the third data is stored in the first cache unit, the server transfers the third data to the head of the first data queue.
In the embodiment of the present invention, the first cache unit stores warm data. After the third data is read, it is upgraded from warm data to hot data, so the server needs to transfer it to the second cache unit. Considering that the storage space of the second cache unit is limited, the server transfers the third data to the head of the first data queue as follows:
A. The server detects whether there is a remaining storage location in the second cache unit.
B. When there is a remaining storage location in the second cache unit, the server transfers the third data to the head of the first data queue.
C. When there is no remaining storage location in the second cache unit, the server transfers the tail data of the first data queue to the head of the second data queue, and transfers the third data to the head of the first data queue.
When the server transfers the tail data of the first data queue to the head of the second data queue, the steps are as follows:
C1. The server detects whether there is a remaining storage location in the first cache unit.
C2. When there is a remaining storage location in the first cache unit, the server transfers the tail data of the first data queue to the head of the second data queue.
C3. When there is no remaining storage location in the first cache unit, the server transfers the tail data of the second data queue to the storage unit, and transfers the tail data of the first data queue to the head of the second data queue.
Taking the case where the first cache unit is an SSD, the second cache unit is RAM and the storage unit is an HDD as an example, the data storage process in this case is as follows:
In the cloud storage scenario, the third data is stored in the SSD. When a read request for the third data is received, the server obtains the third data from the SSD, transfers it to RAM, and returns it to the upper-layer user. When transferring the third data to RAM, the server may detect whether there is a remaining storage location in the first data queue of the RAM. If there is, the third data is stored directly at the head of the first data queue; if there is not, the tail data of the first data queue is transferred to the SSD and the third data is then stored at the head of the first data queue. When transferring the tail data of the first data queue to the SSD, the server may detect whether there is a remaining storage location in the second data queue of the SSD. If there is, the tail data of the first data queue is stored directly at the head of the second data queue; if there is not, the tail data of the second data queue is transferred to the HDD and the tail data of the first data queue is then stored at the head of the second data queue.
In the third case, when the third data is stored in the second cache unit, the server transfers the third data to the head of the first data queue.
After this read of the third data, its most recent access time changes and becomes the time closest to the current time. Since the order of the data in the first data queue reflects the most recent access times of the data, the position of the third data in the first data queue needs to be adjusted: it is moved to the head of the first data queue.
Taking the case where the second cache unit is RAM as an example, the data storage process in this case is as follows:
In the cloud storage scenario, the third data is stored in RAM. When a read request for the third data is received, the server obtains the third data from RAM, moves it to the head of the first data queue of the RAM, and returns it to the upper-layer user. A sketch covering the three cases above is given after this paragraph.
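The three cases above can be sketched as a single read path. This is an illustrative, self-contained Python sketch under the same assumptions as the earlier sketches; the names are not from the patent, and "transfer" is modeled as copy-then-delete, as defined in the embodiments.

```python
from collections import OrderedDict

RAM_SLOTS, SSD_SLOTS = 4, 8           # illustrative capacities
ram = OrderedDict()                   # first data queue (LRU), head = most recently accessed
ssd = OrderedDict()                   # second data queue (FIFO), head = most recently inserted
hdd = {}                              # storage unit (cold data)

def read(key):
    """Read path: return the third data and promote it according to where it was found."""
    if key in ram:                                       # third case: already hot
        ram.move_to_end(key, last=False)
        return ram[key]
    if key in ssd:                                       # second case: warm -> hot
        value = ssd.pop(key)
        if len(ram) >= RAM_SLOTS:                        # make room in RAM if needed
            _demote_ram_tail()
        ram[key] = value
        ram.move_to_end(key, last=False)
        return value
    if key in hdd:                                       # first case: cold -> warm
        value = hdd.pop(key)
        if len(ssd) >= SSD_SLOTS:
            cold_key, cold_value = ssd.popitem(last=True)
            hdd[cold_key] = cold_value                   # flush the SSD tail back to the HDD
        ssd[key] = value
        ssd.move_to_end(key, last=False)
        return value
    raise KeyError(key)                                  # data not stored on the server

def _demote_ram_tail():
    """Move the RAM tail to the SSD head, flushing the SSD tail to the HDD if the SSD is full."""
    if len(ssd) >= SSD_SLOTS:
        cold_key, cold_value = ssd.popitem(last=True)
        hdd[cold_key] = cold_value
    warm_key, warm_value = ram.popitem(last=True)
    ssd[warm_key] = warm_value
    ssd.move_to_end(warm_key, last=False)
```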
After adjusting the storage location of the third data, the embodiment of the present invention also updates the cache table according to the current data storage state, so that subsequent data queries are performed based on the updated cache table, improving the accuracy of data queries.
Fig. 5 shows the data storage process based on a read request provided by the embodiment of the present invention. Taking the case where the first cache unit is an SSD, the second cache unit is RAM and the storage unit is an HDD as an example, the data storage process is as follows:
1. When a read request for third data is received, the server queries the cache table. If the third data is stored in RAM, it is obtained from RAM and step 2 is performed; if the third data is stored in the SSD, it is obtained from the SSD and step 3 is performed; if the third data is not stored in the cache (RAM or SSD), step 9 is performed. The purpose of this step is to use the cache table to improve the performance of the cloud storage system.
2. The server moves the third data to the head of the first data queue in RAM. The purpose of this step is to adjust the storage order of the data in RAM based on the LRU principle so that the most recently accessed data is at the head of the first data queue, concentrating hot data in RAM and making full use of RAM's fast read/write speed to improve data read/write speed. Step 10 is then performed.
3. The server detects whether there is a remaining storage location in RAM. If there is, step 2 is performed; if there is not, step 4 is performed. The purpose of this step is to determine whether the tail data of the first data queue in RAM should be transferred to the SSD.
4. The server takes the tail data of the first data queue in RAM, stores it at the head of the second data queue of the SSD, and deletes that tail data from the first data queue in RAM. The purpose of this step is to demote the least-hot stored data to warm data when RAM is full.
5. The server transfers the third data to the head of the first data queue in RAM. The purpose of this step is to filter hot data out of the SSD and store it in RAM, improving data read/write speed.
6. The server detects whether there is a remaining storage location in the SSD. If there is not, step 7 is performed; if there is, step 8 is performed. The purpose of this step is to determine whether the tail data of the second data queue in the SSD should be transferred to the HDD.
7. The server flushes the tail data of the second data queue in the SSD down to the HDD and deletes that tail data from the second data queue. The purpose of this step is to ensure that dirty data in the SSD is updated to the HDD, improving data security.
8. The server transfers the tail data of the first data queue in RAM to the head of the second data queue of the SSD. The purpose of this step is to demote hot data in RAM to warm data when RAM is full, reserving storage locations for hot data. Step 10 is then performed.
9. The server reads the third data from the HDD. The purpose of this step is to obtain the data from the lower layer when the third data is stored in neither the first cache unit nor the second cache unit, so that the business can proceed smoothly and the user can obtain the required data. After reading the third data from the HDD, the server also detects whether there is a remaining storage location in the SSD. If there is, the third data is transferred to the head of the second data queue in the SSD; if there is not, the tail data of the second data queue in the SSD is flushed down to the HDD and the third data is then transferred to the head of the second data queue in the SSD.
10. The server returns the third data to the upper-layer user, and the process ends.
The method provided by the embodiment of the present invention stores data based on a first cache unit and a second cache unit. Compared with a data storage scheme that uses the second cache unit alone, this saves cost. For data already stored in the first cache unit, when that data is stored again it is written to the second cache unit, which not only increases data read/write speed but also reduces the number of writes to the first cache unit, avoiding damage to the first cache unit and improving data security. This approach balances the requirements of cost, data read/write speed and data security.
Referring to Fig. 6, an embodiment of the present invention provides a data storage device, the device comprising:
a query module 601, configured to query the storage location of second data when a write request for first data is received, the second data being stored data that has the same data identifier as the first data;
a storage module 602, configured to store the first data in a second cache unit when the second data is found to be stored in a first cache unit, the first cache unit supporting only a limited number of data writes;
a return module 603, configured to return a write result for the first data.
In another embodiment of the present invention, the storage module 602 is configured to detect whether there is a remaining storage location in the second cache unit; when there is a remaining storage location in the second cache unit, store the first data at the head of a first data queue, the first data queue being used to store the data in the second cache unit; and when there is no remaining storage location in the second cache unit, transfer the tail data of the first data queue to the head of a second data queue and store the first data at the head of the first data queue, the second data queue being used to store the data in the first cache unit.
In another embodiment of the present invention, the storage module 602 is configured to detect whether there is a remaining storage location in the first cache unit; when there is a remaining storage location in the first cache unit, transfer the tail data of the first data queue to the head of the second data queue; and when there is no remaining storage location in the first cache unit, transfer the tail data of the second data queue to a storage unit and transfer the tail data of the first data queue to the head of the second data queue;
wherein the data read/write speed of the storage unit is slower than that of the first cache unit, and the data read/write speed of the first cache unit is slower than that of the second cache unit.
In another embodiment of the present invention, the storage module 602 is configured to, when the second data is found to be stored in the second cache unit, transfer the first data to the head of the first data queue, the first data queue being used to store the data in the second cache unit.
In another embodiment of the present invention, the storage module 602 is configured to, when the second data is found in neither the first cache unit nor the second cache unit, store the first data in the first cache unit.
In another embodiment of the present invention, the storage module 602 is configured to detect whether there is a remaining storage location in the first cache unit; when there is a remaining storage location in the first cache unit, store the first data at the head of the second data queue, the second data queue being used to store the data in the first cache unit; and when there is no remaining storage location in the first cache unit, transfer the tail data of the second data queue to the storage unit and store the first data at the head of the second data queue;
wherein the data read/write speed of the storage unit is slower than that of the first cache unit, and the data read/write speed of the first cache unit is slower than that of the second cache unit.
In another embodiment of the present invention, the storage module 602 is configured to obtain third data when a read request for the third data is received, adjust the storage location of the third data, and return the third data.
In another embodiment of the present invention, the storage module 602 is configured to transfer the third data to the head of the second data queue when the third data is stored in the storage unit; transfer the third data to the head of the first data queue when the third data is stored in the first cache unit; and transfer the third data to the head of the first data queue when the third data is stored in the second cache unit.
In another embodiment of the present invention, the storage module 602 is configured to detect whether there is a remaining storage location in the first cache unit; when there is a remaining storage location in the first cache unit, transfer the third data to the head of the second data queue; and when there is no remaining storage location in the first cache unit, transfer the tail data of the second data queue to the storage unit and transfer the third data to the head of the second data queue.
In another embodiment of the present invention, the storage module 602 is configured to detect whether there is a remaining storage location in the second cache unit; when there is a remaining storage location in the second cache unit, transfer the third data to the head of the first data queue; and when there is no remaining storage location in the second cache unit, transfer the tail data of the first data queue to the head of the second data queue and transfer the third data to the head of the first data queue.
In another embodiment of the present invention, the storage module 602 is configured to detect whether there is a remaining storage location in the first cache unit; when there is a remaining storage location in the first cache unit, transfer the tail data of the first data queue to the head of the second data queue; and when there is no remaining storage location in the first cache unit, transfer the tail data of the second data queue to the storage unit and transfer the tail data of the first data queue to the head of the second data queue.
In summary, the device provided by the embodiment of the present invention stores data based on a first cache unit and a second cache unit. Compared with a data storage scheme that uses the second cache unit alone, this saves cost. For data already stored in the first cache unit, when that data is stored again it is written to the second cache unit, which not only increases data read/write speed but also reduces the number of writes to the first cache unit, avoiding damage to the first cache unit and improving data security. This approach balances the requirements of cost, data read/write speed and data security.
Fig. 7 is a kind of server for data storage shown according to an exemplary embodiment.Referring to Fig. 7, server 700 include processing component 722, further comprises one or more processors, and the memory as representated by memory 732 Resource, can be by the instruction of the execution of processing component 722, such as application program for storing.The application journey stored in memory 732 Sequence may include it is one or more each correspond to one group of instruction module.In addition, processing component 722 is configured as It executes instruction, to execute function performed by server in above-mentioned date storage method.
The server 700 may further include a power supply component 726 configured to perform power management of the server 700, a wired or wireless network interface 750 configured to connect the server 700 to a network, and an input/output (I/O) interface 758. The server 700 may operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The server provided by the embodiments of the present invention stores data based on the first cache unit and the second cache unit, which saves cost compared with a data storage approach that uses the second cache unit alone. For data already stored in the first cache unit, when that data is stored again it is written into the second cache unit, which not only increases the read/write speed of the data but also reduces the number of writes to the first cache unit, thereby avoiding damage to the first cache unit and improving data security. This approach balances the requirements of cost, data read/write speed, and security.
An embodiment of the present invention provides a computer-readable storage medium having stored therein at least one instruction, at least one program segment, a code set, or an instruction set, where the at least one instruction, the at least one program segment, the code set, or the instruction set is loaded and executed by a processor to implement the data storage method shown in Fig. 3.
It should be noted that when the data storage device provided by the above embodiments stores data, the division into the above functional modules is merely used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the data storage device may be divided into different functional modules to complete all or part of the functions described above. In addition, the data storage device provided by the above embodiments and the embodiments of the data storage method belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (14)

1. A data storage method, characterized in that the method comprises:
when a write request for first data is received, querying a storage location of second data, the second data being stored data having a same data identifier as the first data;
when the second data is queried as being stored in a first cache unit, storing the first data into a second cache unit, wherein the first cache unit has a limited number of data writes; and
returning a write result for the first data.
2. The method according to claim 1, wherein the storing the first data into the second cache unit when the second data is queried as being stored in the first cache unit comprises:
detecting whether a remaining storage location exists in the second cache unit;
when a remaining storage location exists in the second cache unit, storing the first data to a head of a first data queue, the first data queue being used to store data in the second cache unit; and
when no remaining storage location exists in the second cache unit, transferring tail data of the first data queue to a head of a second data queue, and storing the first data to the head of the first data queue, the second data queue being used to store data in the first cache unit.
3. The method according to claim 2, wherein the transferring the tail data of the first data queue to the head of the second data queue when no remaining storage location exists in the second cache unit comprises:
detecting whether a remaining storage location exists in the first cache unit;
when a remaining storage location exists in the first cache unit, transferring the tail data of the first data queue to the head of the second data queue; and
when no remaining storage location exists in the first cache unit, transferring tail data of the second data queue to a storage unit, and transferring the tail data of the first data queue to the head of the second data queue;
wherein a data read/write speed of the storage unit is slower than a data read/write speed of the first cache unit, and the data read/write speed of the first cache unit is slower than a data read/write speed of the second cache unit.
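The write path of claims 1 to 3 can be pictured with a short sketch, offered only as an illustration under the same assumptions as the read-path sketch above (OrderedDict heads are the most recently used ends, the first cache unit is the write-limited middle tier). The function name write_hit_in_mid, the dictionary-backed tiers, and the removal of the stale copy from the first cache unit are assumptions added for illustration, not steps recited in the claims.

    from collections import OrderedDict

    def write_hit_in_mid(fast, mid, storage, key, first_data,
                         fast_capacity, mid_capacity):
        """Claims 1-3, sketched: fast models the first data queue / second
        cache unit, mid models the second data queue / first cache unit,
        storage models the storage unit; MRU ends are the queue heads."""
        mid.pop(key, None)  # drop the stale copy of the second data (assumed step, not recited)
        if len(fast) >= fast_capacity:
            # No remaining storage location in the second cache unit: demote
            # the tail of the first data queue to the second data queue.
            tail_key, tail_data = fast.popitem(last=False)
            if len(mid) >= mid_capacity:
                # First cache unit also full: first spill the tail of the
                # second data queue to the storage unit (claim 3).
                old_key, old_data = mid.popitem(last=False)
                storage[old_key] = old_data
            mid[tail_key] = tail_data
        fast[key] = first_data          # stored at the head of the first data queue (claim 2)
        return True                     # the write result returned for the first data

In this sketch, writing the new version into the second cache unit rather than back into the first cache unit is what reduces the number of writes to the write-limited first cache unit.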
4. The method according to claim 1, wherein the method further comprises:
when the second data is queried as being stored in the second cache unit, transferring the first data to a head of a first data queue, the first data queue being used to store data in the second cache unit.
5. The method according to claim 1, wherein the method further comprises:
when the second data is queried as being stored in neither the first cache unit nor the second cache unit, storing the first data into the first cache unit.
6. The method according to claim 5, wherein the storing the first data into the first cache unit comprises:
detecting whether a remaining storage location exists in the first cache unit;
when a remaining storage location exists in the first cache unit, storing the first data to a head of a second data queue, the second data queue being used to store data in the first cache unit; and
when no remaining storage location exists in the first cache unit, transferring tail data of the second data queue to a storage unit, and storing the first data to the head of the second data queue;
wherein a data read/write speed of the storage unit is slower than a data read/write speed of the first cache unit, and the data read/write speed of the first cache unit is slower than a data read/write speed of the second cache unit.
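For completeness, a matching sketch of claims 5 and 6, the case where neither cache unit holds a copy: the first data is stored into the first cache unit at the head of the second data queue, and the queue tail spills to the storage unit when no remaining storage location exists. The function name write_miss and the small capacity used in the usage lines are illustrative assumptions only.

    from collections import OrderedDict

    def write_miss(mid, storage, key, first_data, mid_capacity):
        """Claims 5-6, sketched: mid models the second data queue / first
        cache unit, storage models the storage unit; the MRU end of mid is
        the queue head."""
        if len(mid) >= mid_capacity:
            # No remaining storage location in the first cache unit: move the
            # tail of the second data queue into the storage unit.
            tail_key, tail_data = mid.popitem(last=False)
            storage[tail_key] = tail_data
        mid[key] = first_data           # stored at the head of the second data queue
        return True                     # write result for the first data

    # Hypothetical usage: with two middle-tier slots, the third write spills
    # the least recently written key to the storage unit.
    mid, storage = OrderedDict(), {}
    for k in ("a", "b", "c"):
        write_miss(mid, storage, k, k.upper(), mid_capacity=2)
    assert list(mid) == ["b", "c"] and storage == {"a": "A"}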
7. The method according to any one of claims 1 to 6, wherein the method further comprises:
when a read request for third data is received, obtaining the third data; and
adjusting a storage location of the third data, and returning the third data.
8. The method according to claim 7, wherein the adjusting the storage location of the third data comprises:
when the third data is stored in a storage unit, transferring the third data to the head of the second data queue;
when the third data is stored in the first cache unit, transferring the third data to the head of the first data queue; and
when the third data is stored in the second cache unit, transferring the third data to the head of the first data queue.
9. The method according to claim 8, wherein the transferring the third data to the head of the second data queue when the third data is stored in the storage unit comprises:
detecting whether a remaining storage location exists in the first cache unit;
when a remaining storage location exists in the first cache unit, transferring the third data to the head of the second data queue; and
when no remaining storage location exists in the first cache unit, transferring tail data of the second data queue to the storage unit, and transferring the third data to the head of the second data queue.
10. The method according to claim 8, wherein the transferring the third data to the head of the first data queue when the third data is stored in the first cache unit comprises:
detecting whether a remaining storage location exists in the second cache unit;
when a remaining storage location exists in the second cache unit, transferring the third data to the head of the first data queue; and
when no remaining storage location exists in the second cache unit, transferring tail data of the first data queue to the head of the second data queue, and transferring the third data to the head of the first data queue.
11. The method according to claim 10, wherein the transferring the tail data of the first data queue to the head of the second data queue when no remaining storage location exists in the second cache unit comprises:
detecting whether a remaining storage location exists in the first cache unit;
when a remaining storage location exists in the first cache unit, transferring the tail data of the first data queue to the head of the second data queue; and
when no remaining storage location exists in the first cache unit, transferring tail data of the second data queue to the storage unit, and transferring the tail data of the first data queue to the head of the second data queue.
12. A data storage device, characterized in that the device comprises:
an enquiry module, configured to query a storage location of second data when a write request for first data is received, the second data being stored data having a same data identifier as the first data;
a memory module, configured to store the first data into a second cache unit when the second data is queried as being stored in a first cache unit, wherein the first cache unit has a limited number of data writes; and
a return module, configured to return a write result for the first data.
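As a rough illustration of the module split in claim 12 (mirroring the memory module 602 described above), the sketch below separates the enquiry, storage, and return responsibilities into three small classes. The class names, the string tags for locations, and the dictionary-backed cache units are assumptions made for illustration and are not part of the claimed device; capacity handling is omitted here because it is covered by the earlier sketches.

    class EnquiryModule:
        """Finds where the stored copy with the same data identifier lives."""
        def __init__(self, fast, mid):
            self.fast, self.mid = fast, mid

        def locate(self, key):
            if key in self.fast:
                return "second_cache_unit"
            if key in self.mid:
                return "first_cache_unit"
            return None

    class MemoryModule:
        """Stores the first data; when the existing copy sits in the
        write-limited first cache unit, the new data goes to the second
        cache unit instead."""
        def __init__(self, fast, mid):
            self.fast, self.mid = fast, mid

        def store(self, key, data, location):
            if location == "first_cache_unit":
                self.fast[key] = data   # avoid another write to the first cache unit
            else:
                self.mid[key] = data    # simplified fallback for this sketch

    class ReturnModule:
        """Returns the write result for the first data."""
        def write_result(self, ok=True):
            return {"written": ok}

    # Hypothetical wiring of the three modules for one write request.
    fast, mid = {}, {"photo-1": b"old"}
    location = EnquiryModule(fast, mid).locate("photo-1")
    MemoryModule(fast, mid).store("photo-1", b"new", location)
    assert fast["photo-1"] == b"new" and ReturnModule().write_result()["written"]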
13. A server, characterized in that the server comprises a processor and a memory, the memory storing at least one instruction, at least one program segment, a code set, or an instruction set, where the at least one instruction, the at least one program segment, the code set, or the instruction set is loaded and executed by the processor to implement the data storage method according to any one of claims 1 to 11.
14. A computer-readable storage medium, characterized in that the storage medium stores at least one instruction, at least one program segment, a code set, or an instruction set, where the at least one instruction, the at least one program segment, the code set, or the instruction set is loaded and executed by a processor to implement the data storage method according to any one of claims 1 to 11.
CN201810814260.0A 2018-07-23 2018-07-23 Data storage method, device, server and storage medium Active CN110209343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810814260.0A CN110209343B (en) 2018-07-23 2018-07-23 Data storage method, device, server and storage medium


Publications (2)

Publication Number Publication Date
CN110209343A (en) 2019-09-06
CN110209343B (en) 2021-12-14

Family

ID=67779875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810814260.0A Active CN110209343B (en) 2018-07-23 2018-07-23 Data storage method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN110209343B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103403667A (en) * 2012-12-19 2013-11-20 华为技术有限公司 Data processing method and device
CN104142894A (en) * 2013-05-06 2014-11-12 华为技术有限公司 Data reading-writing method, storage controller and computer
CN103455283A (en) * 2013-08-19 2013-12-18 华中科技大学 Hybrid storage system
US10409502B2 (en) * 2015-09-08 2019-09-10 Huawei Technologies Co., Ltd. Method and apparatus for writing metadata into cache

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115242727A (en) * 2022-07-15 2022-10-25 深圳市腾讯计算机***有限公司 User request processing method, device, equipment and medium
CN115242727B (en) * 2022-07-15 2023-08-08 深圳市腾讯计算机***有限公司 User request processing method, device, equipment and medium

Also Published As

Publication number Publication date
CN110209343B (en) 2021-12-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant