CN111625198A - Metadata caching method and metadata caching device - Google Patents

Metadata caching method and metadata caching device

Info

Publication number
CN111625198A
Authority
CN
China
Prior art keywords
data
radix tree
metadata
tree
radix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010466534.9A
Other languages
Chinese (zh)
Inventor
胡伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Biwin Storage Technology Co Ltd
Original Assignee
Biwin Storage Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Biwin Storage Technology Co Ltd filed Critical Biwin Storage Technology Co Ltd
Priority to CN202010466534.9A
Publication of CN111625198A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0623 Securing storage systems in relation to content
    • G06F3/0652 Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • G06F3/0656 Data buffering arrangements
    • G06F3/0688 Non-volatile semiconductor memory arrays

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a metadata caching method and a metadata caching device. The metadata caching method comprises: constructing a radix tree; receiving a metadata caching request, wherein the metadata caching request comprises first data and second data corresponding to the first data; and correspondingly storing the first data and the second data by using the radix tree. According to the embodiments of the invention, a radix tree is constructed in the metadata write cache, and the mapping information between the first data and the second data contained in the metadata caching request is stored using the radix tree's data structure. This radix-tree-based metadata caching mechanism ensures efficient metadata storage, and because the radix tree has a stable layer height, the metadata operation performance remains stable as the amount of data written by the user increases.

Description

Metadata caching method and metadata caching device
Technical Field
The present invention relates to the field of metadata write caching, and in particular, to a metadata caching method and a metadata caching apparatus.
Background
Currently, in the field of solid state drives (SSD, Solid State Disk), a common way of processing a write request is shown in Fig. 1:
First, the write-request data is written to the NAND flash memory through the Flash Translation Layer (FTL); the mapping information to be updated is then stored in the metadata write cache region; finally, when the number of mapping entries accumulated in the metadata write cache region reaches a certain threshold, the mapping information in that region is flushed to the mapping table in batch.
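For illustration only, the following C sketch outlines this conventional write path; all names (ftl_write_to_nand, mapping_table_update_batch, the flush threshold) are assumptions introduced here and not part of any actual firmware interface:

#include <stdint.h>

#define FLUSH_THRESHOLD 4096                    /* assumed flush threshold */

typedef struct {
    uint32_t laa;                               /* logical allocation address */
    uint32_t paa;                               /* physical allocation address */
} mapping_entry_t;

static mapping_entry_t write_cache[FLUSH_THRESHOLD];
static uint32_t cache_count = 0;

/* assumed firmware primitives, declared only for illustration */
extern uint32_t ftl_write_to_nand(uint32_t laa, const void *data);          /* returns the new PAA */
extern void mapping_table_update_batch(const mapping_entry_t *e, uint32_t n);

void handle_write_request(uint32_t laa, const void *data)
{
    /* 1. store the write-request data to NAND through the FTL */
    uint32_t paa = ftl_write_to_nand(laa, data);

    /* 2. store the mapping information to be updated in the metadata write cache */
    write_cache[cache_count].laa = laa;
    write_cache[cache_count].paa = paa;
    cache_count++;

    /* 3. flush the accumulated mappings to the mapping table in batch */
    if (cache_count >= FLUSH_THRESHOLD) {
        mapping_table_update_batch(write_cache, cache_count);
        cache_count = 0;
    }
}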
Using a metadata write cache has several benefits: 1. It avoids having to update the mapping table after every completed write, and thus the system burden such updates cause (once the mapping table is not hit in the cache, the mapping data has to be read from NAND); storing the mapping information to be updated in the metadata write cache region shortens the write-operation time and lets the firmware respond quickly to new host requests, thereby improving read and write performance. 2. The write cache also improves the efficiency of updating the mapping table, because multiple pieces of mapping information may belong to the same portion of the mapping table, which saves time spent querying and updating the mapping table.
A metadata write cache generally requires a data structure with good query, insertion, and deletion performance that also uses space efficiently, since the cache available in firmware is very limited. Arrays are slow to insert and linked lists are slow to query, so neither is normally considered. A hash table is efficient for sparse data, but once it has to handle 10k to 100k entries it either consumes a large amount of bucket space or, if space is to be saved, suffers a large number of key collisions. The mainstream practice is therefore to choose a binary tree or a multi-way tree. However, as the amount of written data increases, a red-black tree or a balanced binary tree (AVL tree) grows in layer height; for example, storing 1k entries gives the AVL tree a layer height of about 10, i.e. log2(1024) = 10, so performance also degrades as the amount of written data grows.
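The contrast that motivates the present invention can be written out as follows (a restatement of the figures above, where N is the number of stored entries; the radix tree numbers assume the 4-bit branching over a 16-bit in-tree key used later in this description):

h_{\mathrm{AVL}}(N) \approx \lceil \log_2 N \rceil,\qquad h_{\mathrm{AVL}}(1024) = \log_2 1024 = 10
h_{\mathrm{radix}} = 16\ \text{key bits} \,/\, 4\ \text{bits per level} = 4,\ \text{independent of } N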
Disclosure of Invention
The technical problem to be solved by the invention is as follows: provided are a metadata caching method and a metadata caching device, which improve the stability of metadata operation performance.
In order to solve the technical problems, the invention adopts a technical scheme that:
a metadata caching method comprises the following steps:
constructing a radix tree;
receiving a metadata caching request, wherein the metadata caching request comprises first data and second data corresponding to the first data;
and correspondingly storing the first data and the second data by adopting the radix tree.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a metadata caching apparatus, comprising:
the building module is used for building a radix tree;
the device comprises a receiving module, a caching module and a caching module, wherein the receiving module is used for receiving a metadata caching request which comprises first data and second data corresponding to the first data;
and the storage module is used for correspondingly storing the first data and the second data by adopting the radix tree.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned metadata caching method.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
an electronic device comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor implements the steps of the metadata caching method when executing the computer program.
The invention has the beneficial effects that: a radix tree is constructed in the metadata write cache, and the mapping information between the first data and the second data contained in the metadata caching request is stored using the radix tree's data structure; this radix-tree-based metadata write caching mechanism ensures efficient metadata storage, and because the radix tree has a stable layer height, the stability of the metadata operation performance is ensured as the amount of data written by the user increases.
Drawings
FIG. 1 is a flowchart of the conventional prior-art processing of a solid-state drive write request;
FIG. 2 is a flowchart illustrating steps of a metadata caching method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a metadata caching apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating insertion of data into a radix tree according to an embodiment of the present invention;
FIG. 6 is a flow chart illustrating query data in a radix tree according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating the partition of the cache space in the metadata write cache according to an embodiment of the present invention;
description of reference numerals:
1. a metadata caching apparatus;
12. building a module; 13. a receiving module; 14. a storage module;
2. an electronic device;
21. a memory; 22. a processor.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 2, an embodiment of the present invention provides a metadata caching method, including:
constructing a radix tree;
receiving a metadata caching request, wherein the metadata caching request comprises first data and second data corresponding to the first data;
and correspondingly storing the first data and the second data by adopting the radix tree.
From the above description, the beneficial effects of the present invention are: a radix tree is constructed in the metadata write cache, and the mapping information between the first data and the second data contained in the metadata caching request is stored using the radix tree's data structure; this radix-tree-based metadata write caching mechanism ensures efficient metadata storage, and because the radix tree has a stable layer height, the stability of the metadata operation performance is ensured as the amount of data written by the user increases.
Further, the correspondingly storing the first data and the second data by using the radix tree includes:
determining a corresponding radix tree according to data to be operated, searching on the corresponding radix tree, and determining a corresponding leaf node, wherein the data to be operated is the first data;
storing the second data on the leaf node.
The method further comprises receiving a metadata query request, wherein the metadata query request comprises third data;
determining a corresponding radix tree according to data to be operated, searching on the corresponding radix tree, and determining a corresponding leaf node, wherein the data to be operated is the third data;
acquiring data stored by the leaf node as fourth data corresponding to the third data;
and transmitting the fourth data.
Further, receiving a metadata deletion request, wherein the metadata deletion request comprises fifth data;
determining a corresponding radix tree according to data to be operated, searching on the corresponding radix tree, and determining a corresponding leaf node, wherein the data to be operated is the fifth data;
and deleting the sixth data stored by the leaf node.
As can be seen from the above description, the corresponding radix tree is determined according to the first data, a search is performed in that radix tree to determine the corresponding leaf node, and the second data corresponding to the first data is stored in the leaf node. The mapping information between the first data and the second data can thus be stored conveniently and quickly by means of the radix tree, giving high storage efficiency, and fast query and deletion operations are likewise possible, so the radix tree ensures favorable performance for metadata insertion, query, and deletion.
Further, the radix tree includes a unique identifier;
the determining a corresponding radix tree according to the data to be operated and searching on the corresponding radix tree, wherein the determining a corresponding leaf node comprises:
splitting the data to be operated into a first data block and a second data block according to a preset data splitting rule;
determining a corresponding unique identifier according to the first data block;
searching a corresponding radix tree according to the unique identifier corresponding to the first data block;
searching on the searched radix tree according to the second data block, and determining a corresponding leaf node.
According to the above description, the data to be operated is split into a first data block and a second data block, the corresponding radix tree is determined according to the first data block, that radix tree is then searched according to the second data block, and the corresponding leaf node is determined, so that the leaf node can be located conveniently, quickly, and accurately from the data to be operated.
Further, the radix tree includes a tree height;
the searching on the found radix tree according to the second data block and determining the corresponding leaf node comprises:
splitting the second data block into a subdata sequence corresponding to the tree height according to the tree height;
selecting subdata from the subdata sequence in sequence according to a preset sequence, searching on a corresponding node level on the radix tree, and determining nodes corresponding to the subdata until the subdata sequence is traversed;
and determining a node corresponding to the last subdata of the subdata sequence as the leaf node.
As can be seen from the above description, the second data block is split into a subdata sequence corresponding to the tree height, subdata is selected from the sequence in a preset order, and the corresponding node is determined at the corresponding node level of the radix tree until the sequence has been traversed; updating, deleting, and inserting therefore each require only as many comparisons as the tree height, so the time complexity is low and the efficiency is high.
Further, when a metadata caching request is received, if no node corresponding to the subdata can be found at the corresponding node level of the radix tree, a node is applied for from the radix tree's node cache region and inserted to complete the path.
As can be seen from the above description, if a node corresponding to the subdata is missing on the search path, a node can be applied for directly from the radix tree's node cache region and inserted, so nodes are allocated and used on demand, which improves flexibility.
Further, the method also comprises the following steps:
receiving a new radix tree request, wherein the new radix tree request comprises a unique identifier and a tree height corresponding to a new radix tree;
and constructing a corresponding radix tree according to the unique identifier and the tree height corresponding to the newly-built radix tree.
From the above description, radix trees are created dynamically as the amount of data actually processed increases, and the newly created radix trees share the pressure brought by the growth of the user's written data, so the stability of the metadata operation performance is ensured.
Further, the first data is a logical allocation address, and the second data is a physical allocation address index.
As can be seen from the above description, the above radix tree-based metadata write caching mechanism is applied to a specific scenario of metadata write caching, and is used for storage, query, and deletion operations of LAA (logical allocation address) -PAA (physical allocation address) key value pairs in a metadata write caching area, so that stable metadata operation performance can be ensured.
Referring to fig. 3, another embodiment of the present invention provides a metadata caching apparatus, including:
the building module is used for building a radix tree;
the receiving module is used for receiving a metadata caching request, wherein the metadata caching request comprises first data and second data corresponding to the first data;
and the storage module is used for correspondingly storing the first data and the second data by adopting the radix tree.
Another embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the above-mentioned metadata caching method.
Referring to fig. 4, another embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor implements the steps of the metadata caching method when executing the computer program.
The metadata caching method, metadata caching apparatus, computer-readable storage medium, and electronic device provided by the invention can be applied to any scenario in which metadata needs to be cached; the following description uses the concrete scenario of the firmware metadata write cache:
example one
Referring to fig. 2, a metadata caching method includes the steps of:
s1, constructing a radix tree;
the metadata write cache region stores a large number of LAA-PAA (logical Au Address-Physical AuAddress) key-value pairs, wherein the LAA is calculated according to the LBA (logical Block Address) requested by the host, and the specific formula is as follows: LAA ═ LBA/sectors _ per _ au (typically 8), and if calculated in 256G space, the number of LAAs is 0x400,0000; the PAA indicates a specific physical address, and the size of the PAA is generally 4 Bytes;
Different capacities require different numbers of radix trees. Two cache spaces can be allocated within the metadata write cache according to its size and the size of the LAA-PAA key-value pairs to be stored, used respectively for storing radix tree nodes and PAAs (physical allocation addresses); the number of radix trees to be constructed is determined according to the size of the cache space allocated for storing radix tree nodes;
taking the 0x4000000 LAAs of a 256 GB space as an example, this embodiment constructs 0x400 radix trees, that is, radix-tree-root[0x400];
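A minimal sketch of the resulting layout follows; the pool sizes and names are purely illustrative assumptions, and the insertion and query sketches later in this description refer to this same layout:

#include <stdint.h>

#define NUM_RADIX_TREES 0x400                  /* one tree per high-order part of the LAA, e.g. 0x352 */
#define NODE_POOL_SIZE  4096                   /* nodes available in the radix tree node cache (assumed) */
#define PAA_POOL_SIZE   4096                   /* PAA slots available in the PAA cache (assumed) */

typedef struct Radix_Tree_Node_t {
    uint16_t sub_radix_tree_node_index[16];
} Radix_Tree_Node;

Radix_Tree_Node node_pool[NODE_POOL_SIZE];     /* cache space 1: radix tree nodes */
uint32_t        paa_pool[PAA_POOL_SIZE];       /* cache space 2: 4-byte PAAs */
uint16_t        radix_tree_root[NUM_RADIX_TREES]; /* index of each tree's root node in node_pool */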
during specific establishment, a radix tree establishment request is received, wherein the radix tree establishment request comprises the number of radix trees, the tree height of the radix trees and a unique identifier corresponding to each radix tree;
creating a corresponding radix tree according to the number of the radix trees, the tree height of the radix trees and the unique identifier corresponding to each radix tree;
in an alternative embodiment, the tree height may be set to 4;
s2, receiving a metadata cache request, wherein the metadata cache request comprises first data and second data corresponding to the first data;
in the metadata write cache application scenario, the first data is an LAA (logical allocation address) and the second data is a PAA (physical allocation address) index;
for example, at FTL write time the real PAA (4 Bytes) has already been allocated. After the write completes, the pair LAA = 0x3520221, PAA = 0x34528976 needs to be saved in the metadata write cache; a 4-Byte space is therefore allocated from the PAA cache region to hold 0x34528976. Suppose the index (i.e. the subscript) of this space is 25, meaning it is the 25th PAA in the PAA cache region; the metadata caching request then contains the logical allocation address LAA 0x3520221 and the physical allocation address PAA index 25;
s3, correspondingly storing the first data and the second data by adopting the radix tree;
the radix tree is a mechanism for associating pointers with integer key values, and is efficient in storage, and capable of being queried quickly, and is used for mapping pointers with integer values, in this embodiment, a mapping relationship between first data and second data is stored through the radix tree, each radix tree is composed of a plurality of radix tree nodes, each node has a fixed pointer with a size of 2^ n pointing to a next-level node, the size of n can be set according to actual needs, for example, the value of n is set to 4, each node can hold information of 16 next-level nodes, and the information corresponds to a 4-bit combined branch [0000, 0001, 0010,0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1111], where the radix tree node can be defined as follows:
typedef struct Radix_Tree_Node_t {
    uint16_t sub_radix_tree_node_index[16];   /* index of each of the 16 children (on a leaf node, of the PAA slot) */
} Radix_Tree_Node;
when the mapping information of the first data and the second data is maintained, specifically, the first data is split into a first data block and a second data block according to a preset data splitting rule;
determining a corresponding unique identifier according to the first data block;
searching a corresponding radix tree according to the unique identifier corresponding to the first data block;
splitting the second data block into a subdata sequence corresponding to the tree height according to the tree height;
selecting subdata from the subdata sequence in sequence according to a preset sequence, searching on a corresponding node level on the radix tree, and determining nodes corresponding to the subdata until the subdata sequence is traversed;
and determining a node corresponding to the last subdata of the subdata sequence as the leaf node.
Storing the second data on the leaf node;
the first data logically allocated address LAA is 0x3520221, and the second data physically allocated address PAA index is 25:
referring to fig. 5, first, 0x3520221 is split into two parts, 0x352 and 0x0221, and a root node of a radix tree corresponding to 0x352 is found;
then, according to the binary value 0000/0010/0010/0001 of 0x0221 and the tree height (assumed to be 4), the value is split into the subdata sequence {0000, 0010, 0010, 0001} and traversed 4 bits at a time starting from the low-order bits, as shown by the dotted line in fig. 5: at the root node, 0001 selects intermediate node 1; at intermediate node 1, 0010 selects intermediate node 2; at intermediate node 2, 0010 selects the leaf node; and finally, at the leaf node, 0000 selects the slot (the corresponding pointer) in which PAA index 25 is stored;
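A minimal C sketch of this insertion walk follows; the node pool, root array, and allocation helper are assumptions consistent with the Radix_Tree_Node definition and the layout sketched above, not the firmware's actual interface:

#include <stdint.h>
#include <string.h>

#define TREE_HEIGHT   4
#define INVALID_INDEX 0xFFFFu

typedef struct Radix_Tree_Node_t {             /* as defined above */
    uint16_t sub_radix_tree_node_index[16];
} Radix_Tree_Node;

extern Radix_Tree_Node node_pool[];            /* radix tree node cache region (assumed) */
extern uint16_t radix_tree_root[];             /* root node index of each radix tree (assumed) */
extern uint16_t alloc_node_from_pool(void);    /* applies for a new node, returns its index (assumed) */

void radix_tree_insert(uint32_t laa, uint16_t paa_index)
{
    uint16_t tree_id = (uint16_t)(laa >> 16);      /* e.g. 0x352 selects the radix tree */
    uint16_t key     = (uint16_t)(laa & 0xFFFF);   /* e.g. 0x0221 is searched inside it */
    uint16_t node    = radix_tree_root[tree_id];

    /* walk the first three 4-bit groups, starting from the low-order bits */
    for (int level = 0; level < TREE_HEIGHT - 1; level++) {
        uint16_t nibble = (key >> (4 * level)) & 0xF;
        uint16_t next   = node_pool[node].sub_radix_tree_node_index[nibble];
        if (next == INVALID_INDEX) {               /* no node on the path: apply for one and insert it */
            next = alloc_node_from_pool();
            memset(node_pool[next].sub_radix_tree_node_index, 0xFF,
                   sizeof(node_pool[next].sub_radix_tree_node_index));
            node_pool[node].sub_radix_tree_node_index[nibble] = next;
        }
        node = next;
    }

    /* the last 4-bit group selects the leaf-node slot that stores the PAA index */
    uint16_t slot = (key >> (4 * (TREE_HEIGHT - 1))) & 0xF;
    node_pool[node].sub_radix_tree_node_index[slot] = paa_index;
}

Under these assumptions, calling radix_tree_insert(0x3520221, 25) reproduces the walk of fig. 5: tree 0x352 is selected, the nibbles 0001, 0010, 0010 lead to the leaf node, and slot 0000 receives PAA index 25.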
the traversal order of the subdata sequence may be set as required; for example, it may start from the low-order bits or from the high-order bits, or any agreed order may be used, with traversal then performed in that agreed order;
in an optional implementation, if no node corresponding to the subdata can be found at the corresponding node level of the radix tree, a node is applied for from the radix tree's node cache region and inserted to complete the path;
that is, if a radix tree node does not exist on the path, a node is applied for from the radix tree node cache region and inserted;
in another optional embodiment, the method further comprises receiving a metadata query request, the metadata query request including third data;
determining a corresponding radix tree according to data to be operated, searching on the corresponding radix tree, and determining a corresponding leaf node, wherein the data to be operated is the third data;
acquiring data stored by the leaf node as fourth data corresponding to the third data;
transmitting the fourth data;
taking the third data LAA = 0x234F232 as an example, the high-order part 0x234 is the unique identifier (radix tree id) of the radix tree corresponding to this LAA, i.e. it indicates which radix tree to use, and the remaining part 0xF232 of the LAA is used to search within that radix tree:
the binary value corresponding to 0xF232 is 1111/0010/0011/0010, which is split according to the tree height 4 into the subdata sequence {1111, 0010, 0011, 0010}; the search then proceeds 4 bits at a time from the low-order end, as shown by the dotted line in fig. 6, visiting the corresponding node levels in turn:
0010->0011->0010->1111;
finally, the PAA index stored on the corresponding radix tree node is found;
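A matching lookup sketch, under the same assumed pool layout as the insertion sketch above:

#include <stdint.h>
#include <stdbool.h>

#define TREE_HEIGHT   4
#define INVALID_INDEX 0xFFFFu

typedef struct Radix_Tree_Node_t {
    uint16_t sub_radix_tree_node_index[16];
} Radix_Tree_Node;

extern Radix_Tree_Node node_pool[];            /* assumed, as in the insertion sketch */
extern uint16_t radix_tree_root[];

/* returns true and writes the PAA index if the LAA is present in the write cache */
bool radix_tree_lookup(uint32_t laa, uint16_t *paa_index)
{
    uint16_t tree_id = (uint16_t)(laa >> 16);      /* e.g. 0x234 */
    uint16_t key     = (uint16_t)(laa & 0xFFFF);   /* e.g. 0xF232 */
    uint16_t node    = radix_tree_root[tree_id];

    for (int level = 0; level < TREE_HEIGHT - 1; level++) {
        uint16_t nibble = (key >> (4 * level)) & 0xF;   /* 0010 -> 0011 -> 0010 */
        node = node_pool[node].sub_radix_tree_node_index[nibble];
        if (node == INVALID_INDEX)
            return false;                          /* no node on the path: not cached */
    }

    uint16_t slot  = (key >> (4 * (TREE_HEIGHT - 1))) & 0xF;   /* 1111 */
    uint16_t value = node_pool[node].sub_radix_tree_node_index[slot];
    if (value == INVALID_INDEX)
        return false;
    *paa_index = value;
    return true;
}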
in another optional embodiment, the method further comprises receiving a metadata deletion request, the metadata deletion request including fifth data;
determining a corresponding radix tree according to data to be operated, searching on the corresponding radix tree, and determining a corresponding leaf node, wherein the data to be operated is the fifth data;
deleting the sixth data stored by the leaf node;
again taking 0x234F232 as an example for the fifth data, and referring to fig. 6, after the PAA index stored on the corresponding radix tree node has been found, it is deleted;
it can be seen that if the radix tree height is set to 4 and each node can reference 16 child nodes, each radix tree can cover 16 × 16 × 16 × 16 = 64K entries, and insertion, query, and deletion each need only 4 comparisons, which shows its high efficiency;
FIG. 7 illustrates the allocation of two cache spaces within the firmware's limited metadata write cache region, used respectively for storing radix tree nodes and PAAs (physical allocation addresses);
according to the Radix_Tree_Node format defined above, the size of each radix tree node is 2 Bytes × 16 = 32 Bytes; the sub_radix_tree_node_index of an intermediate node points to the index of a Radix_Tree_Node in the node cache, and the sub_radix_tree_node_index of a leaf node points to the index of a PAA in the PAA cache;
as a preliminary estimate of the memory consumption of the two: if 4K Radix_Tree_Nodes are used, they consume 4K × 32 Bytes = 128 KB (without counting the first and second layers of the radix trees), and the number of PAAs covered lies in the range [4K, 4K × 16]; the memory that 4K PAAs consume is 4K × 4 Bytes = 16 KB;
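Written out, the estimate above amounts to (an illustrative restatement only):

4\mathrm{K} \times 32\,\mathrm{Bytes} = 128\,\mathrm{KB}\ (\text{node cache}),\qquad 4\mathrm{K} \times 4\,\mathrm{Bytes} = 16\,\mathrm{KB}\ (\text{PAA cache})
\text{PAAs covered:}\ [\,4\mathrm{K},\ 4\mathrm{K} \times 16 = 64\mathrm{K}\,]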
in another optional embodiment, the method further comprises the steps of:
receiving a new radix tree request, wherein the new radix tree request comprises a unique identifier and a tree height corresponding to a new radix tree;
constructing a corresponding radix tree according to the unique identifier and the tree height corresponding to the newly-built radix tree;
that is, as the amount of user-written data increases, stress can be shared by dynamically adding radix trees.
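As a hedged sketch of such dynamic addition (the registration table and helper names are illustrative assumptions, reusing the node pool assumed earlier):

#include <stdint.h>

#define MAX_RADIX_TREES 0x400

typedef struct {
    uint16_t root_node_index;                  /* index of this tree's root node in the node pool */
    uint8_t  tree_height;                      /* e.g. 4 */
    uint8_t  in_use;
} radix_tree_desc_t;

static radix_tree_desc_t tree_table[MAX_RADIX_TREES];

/* assumed helper from the node pool sketch: returns a freshly initialised node */
extern uint16_t alloc_node_from_pool(void);

/* handle a "new radix tree" request carrying the tree's unique identifier and tree height */
int radix_tree_create(uint16_t unique_id, uint8_t tree_height)
{
    if (unique_id >= MAX_RADIX_TREES || tree_table[unique_id].in_use)
        return -1;                             /* invalid identifier or tree already exists */

    tree_table[unique_id].root_node_index = alloc_node_from_pool();
    tree_table[unique_id].tree_height     = tree_height;
    tree_table[unique_id].in_use          = 1;
    return 0;
}

In this way, additional radix trees can be registered on demand as the amount of user-written data grows, which is the pressure-sharing behaviour described above.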
Example two
Referring to fig. 3, a metadata caching apparatus 1 includes:
a building module 11, configured to build a radix tree;
a receiving module 12, configured to receive a metadata cache request, where the metadata cache request includes first data and second data corresponding to the first data;
a storage module 13, configured to use the radix tree to correspondingly store the first data and the second data;
in addition, the above corresponding modules are used for executing the corresponding steps in the first embodiment.
EXAMPLE III
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the metadata caching method according to an embodiment.
Example four
Referring to fig. 4, an electronic device 2 includes a memory 21, a processor 22, and a computer program stored on the memory 21 and executable on the processor 22, where the processor 22 executes the computer program to implement the steps of the metadata caching method according to the first embodiment.
In summary, in the metadata caching method, device, computer-readable storage medium, and electronic device provided by the present invention, a radix tree is constructed in the metadata write cache, and the mapping information between the first data and the second data contained in a metadata caching request is stored using the radix tree's data structure. Because the layer height of the radix tree is fixed, and by setting a small fixed layer height, query, deletion, and insertion each need only as many comparisons as the layer height, the computational complexity is low and the efficiency is high; and as the amount of user-written data increases, the pressure can be shared by dynamically adding radix trees, ensuring the stability of the metadata operation performance.
In the above embodiments provided in the present application, it should be understood that the disclosed method, apparatus, computer-readable storage medium, and system may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of components or modules may be combined or integrated into another apparatus, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or components or modules, and may be in an electrical, mechanical or other form.
The components described as separate parts may or may not be physically separate, and parts displayed as components may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the components can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each component may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no acts or modules are necessarily required of the invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (10)

1. A metadata caching method, comprising the steps of:
constructing a radix tree;
receiving a metadata caching request, wherein the metadata caching request comprises first data and second data corresponding to the first data;
and correspondingly storing the first data and the second data by adopting the radix tree.
2. The method of claim 1, wherein the using the radix tree to store the first data and the second data correspondingly comprises:
determining a corresponding radix tree according to data to be operated, searching on the corresponding radix tree, and determining a corresponding leaf node, wherein the data to be operated is the first data;
storing the second data on the leaf node.
3. The method of claim 2, further comprising receiving a metadata query request, wherein the metadata query request includes third data;
determining a corresponding radix tree according to data to be operated, searching on the corresponding radix tree, and determining a corresponding leaf node, wherein the data to be operated is the third data;
acquiring data stored by the leaf node as fourth data corresponding to the third data;
and transmitting the fourth data.
4. The method of claim 2, further comprising receiving a metadata deletion request, wherein the metadata deletion request includes fifth data;
determining a corresponding radix tree according to data to be operated, searching on the corresponding radix tree, and determining a corresponding leaf node, wherein the data to be operated is the fifth data;
and deleting the sixth data stored by the leaf node.
5. A metadata caching method according to any one of claims 2 to 4, wherein said radix tree comprises a unique identifier;
the determining a corresponding radix tree according to the data to be operated and searching on the corresponding radix tree, wherein the determining a corresponding leaf node comprises:
splitting the data to be operated into a first data block and a second data block according to a preset data splitting rule;
determining a corresponding unique identifier according to the first data block;
searching a corresponding radix tree according to the unique identifier corresponding to the first data block;
searching on the searched radix tree according to the second data block, and determining a corresponding leaf node.
6. The metadata caching method of claim 5, wherein the radix tree comprises a tree height;
the searching on the found radix tree according to the second data block and determining the corresponding leaf node comprises:
splitting the second data block into a subdata sequence corresponding to the tree height according to the tree height;
selecting subdata from the subdata sequence in sequence according to a preset sequence, searching on a corresponding node level on the radix tree, and determining nodes corresponding to the subdata until the subdata sequence is traversed;
and determining a node corresponding to the last subdata of the subdata sequence as the leaf node.
7. The method of claim 6, wherein when a metadata cache request is received, if no node corresponding to the child data is searched at the node level corresponding to the radix tree, applying for a node from a node cache area of the radix tree and performing complementary insertion.
8. The metadata caching method according to any one of claims 1 to 4 and 6 to 7, further comprising the steps of:
receiving a new radix tree request, wherein the new radix tree request comprises a unique identifier and a tree height corresponding to a new radix tree;
and constructing a corresponding radix tree according to the unique identifier and the tree height corresponding to the newly-built radix tree.
9. A metadata cache method according to any one of claims 1 to 4 and 6 to 7, wherein said first data is a logical assigned address and said second data is a physical assigned address index.
10. A metadata caching apparatus, comprising:
the building module is used for building a radix tree;
the receiving module is used for receiving a metadata caching request, wherein the metadata caching request comprises first data and second data corresponding to the first data;
and the storage module is used for correspondingly storing the first data and the second data by adopting the radix tree.
CN202010466534.9A 2020-05-28 2020-05-28 Metadata caching method and metadata caching device Pending CN111625198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010466534.9A CN111625198A (en) 2020-05-28 2020-05-28 Metadata caching method and metadata caching device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010466534.9A CN111625198A (en) 2020-05-28 2020-05-28 Metadata caching method and metadata caching device

Publications (1)

Publication Number Publication Date
CN111625198A true CN111625198A (en) 2020-09-04

Family

ID=72270037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010466534.9A Pending CN111625198A (en) 2020-05-28 2020-05-28 Metadata caching method and metadata caching device

Country Status (1)

Country Link
CN (1) CN111625198A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116821058A (en) * 2023-08-28 2023-09-29 腾讯科技(深圳)有限公司 Metadata access method, device, equipment and storage medium
CN116893786A (en) * 2023-09-05 2023-10-17 苏州浪潮智能科技有限公司 Data processing method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101526965A (en) * 2009-04-29 2009-09-09 成都市华为赛门铁克科技有限公司 Locating method of index nodes of disk file and device thereof
CN103098034A (en) * 2010-07-28 2013-05-08 Fusion-Io股份有限公司 Apparatus, system, and method for conditional and atomic storage operations
CN108897698A (en) * 2018-06-29 2018-11-27 郑州云海信息技术有限公司 A kind of file data blocks addressing method, system and equipment and storage medium
CN111125447A (en) * 2019-12-22 2020-05-08 北京浪潮数据技术有限公司 Metadata access method, device and equipment and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101526965A (en) * 2009-04-29 2009-09-09 成都市华为赛门铁克科技有限公司 Locating method of index nodes of disk file and device thereof
CN103098034A (en) * 2010-07-28 2013-05-08 Fusion-Io股份有限公司 Apparatus, system, and method for conditional and atomic storage operations
CN108897698A (en) * 2018-06-29 2018-11-27 郑州云海信息技术有限公司 A kind of file data blocks addressing method, system and equipment and storage medium
CN111125447A (en) * 2019-12-22 2020-05-08 北京浪潮数据技术有限公司 Metadata access method, device and equipment and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TING WU et al.: "Multigranularity Space Management Scheme for Accelerating the Write Performance of In-Memory File Systems", IEEE, pages 1-12 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116821058A (en) * 2023-08-28 2023-09-29 腾讯科技(深圳)有限公司 Metadata access method, device, equipment and storage medium
CN116821058B (en) * 2023-08-28 2023-11-14 腾讯科技(深圳)有限公司 Metadata access method, device, equipment and storage medium
CN116893786A (en) * 2023-09-05 2023-10-17 苏州浪潮智能科技有限公司 Data processing method and device, electronic equipment and storage medium
CN116893786B (en) * 2023-09-05 2024-01-09 苏州浪潮智能科技有限公司 Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 518000 floors 1-3 and 4 of buildings 4 and 8, zone 2, Zhongguan honghualing Industrial South Zone, No. 1213 Liuxian Avenue, Pingshan community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong

Applicant after: BIWIN STORAGE TECHNOLOGY Co.,Ltd.

Address before: 518000 1st, 2nd, 4th and 6th floors of No.4 factory building of tongfuyu industrial city, Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: BIWIN STORAGE TECHNOLOGY Co.,Ltd.