WO2017050014A1 - A data storage processing method and apparatus - Google Patents

A data storage processing method and apparatus

Info

Publication number
WO2017050014A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
level
data
unit
query
Prior art date
Application number
PCT/CN2016/092414
Other languages
English (en)
French (fr)
Inventor
郭军
Original Assignee
北京奇虎科技有限公司
奇智软件(北京)有限公司
Priority date
Filing date
Publication date
Application filed by 北京奇虎科技有限公司, 奇智软件(北京)有限公司
Publication of WO2017050014A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers

Definitions

  • the present invention relates to the field of network communication technologies, and in particular, to a data storage processing method and apparatus.
  • the present invention has been made in order to provide a data storage processing method and apparatus that overcomes the above problems or at least partially solves the above problems.
  • a data storage processing method including:
  • the data is stored in the database, the first-level cache, and the second-level cache respectively; when a data query request is received, the query is first performed in the first-level cache; if the requested data exists in the first-level cache, the queried data is returned to the requester; if no query result can be obtained from the first-level cache, the query is performed in the second-level cache; if the requested data exists in the second-level cache, the queried data is returned to the requester; if no query result can be obtained from the second-level cache, the query is performed in the database; if the requested data exists in the database, the queried data is returned to the requester, and if the requested data does not exist in the database, a query-failure result is returned to the requester.
  • a data storage processing apparatus, comprising: a database unit, a level 1 cache unit, a level 2 cache unit, a write processing unit, and a read processing unit; the write processing unit is adapted to store data in the database unit, the level 1 cache unit, and the level 2 cache unit respectively; the read processing unit is adapted to first query the level 1 cache unit when a data query request is received; if the requested data exists in the level 1 cache unit, the queried data is returned to the requester; if no query result can be obtained from the level 1 cache unit, the query is performed in the level 2 cache unit; if the requested data exists in the level 2 cache unit, the queried data is returned to the requester; if no query result can be obtained from the level 2 cache unit, the query is performed in the database unit; if the requested data exists in the database unit, the queried data is returned to the requester, and if the requested data does not exist in the database unit, a query-failure result is returned to the requester.
  • a computer program comprising computer readable code, which when executed on a server causes the server to perform a data storage processing method as described above.
  • a computer readable medium wherein a computer program as described above is stored.
  • in the technical solution provided by the present invention, the storage mode of the data is set such that data is stored in the database, the first-level cache, and the second-level cache.
  • when a data query request is received, the query is first performed in the first-level cache. If the requested data exists in the level 1 cache, the queried data is returned to the requester; if no query result can be obtained from the level 1 cache, the level 1 cache may have crashed or be in an unavailable state such as downtime, and the query must then be performed in the secondary cache.
  • if the requested data exists in the secondary cache, the queried data is returned to the requester, which indicates that the primary cache is likely in a crashed or otherwise unavailable state such as downtime. If no query result can be obtained from the secondary cache either, the secondary cache may also have crashed or be unavailable; this situation rarely occurs, and in this case the database must be queried. If the requested data exists in the database, the queried data is returned to the requester, and if not, a query-failure result is returned. In this way, the first-level cache absorbs most of the data query pressure on the database.
  • even when the first-level cache cannot keep up with too many data query requests, the second-level cache can handle the queries for which no result could be obtained from the first-level cache, so that essentially all data query requests can be processed. Only the data query requests that obtain no result in either the L1 cache or the L2 cache are queried in the database; such requests are extremely few and well within the processing capacity of the database. This greatly reduces the access pressure on the database and makes it possible to handle a large number of data query requests in a short period of time, with the beneficial effects of reducing equipment wear and personnel maintenance costs.
  • FIG. 1 shows a flow chart of a data storage processing method in accordance with one embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a data storage processing apparatus according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram showing a correspondence relationship of a cache node
  • FIG. 4 is a schematic diagram showing a correspondence relationship of another cache node
  • FIG. 5 schematically shows a block diagram of a server for performing the method according to the invention;
  • FIG. 6 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
  • FIG. 1 is a flowchart of a data storage processing method according to an embodiment of the present invention. As shown in FIG. 1, the method includes:
  • Step S110: the data is stored in a database, a level 1 cache, and a level 2 cache, respectively.
  • Step S120: when a data query request is received, the query is first performed in the level 1 cache.
  • Step S130: if the requested data exists in the level 1 cache, the queried data is returned to the requester; if no query result can be obtained from the level 1 cache, the query is performed in the level 2 cache.
  • Step S140: if the requested data exists in the level 2 cache, the queried data is returned to the requester; if no query result can be obtained from the level 2 cache, the query is performed in the database.
  • Step S150: if the requested data exists in the database, the queried data is returned to the requester, and if the requested data does not exist in the database, a query-failure result is returned to the requester.
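The flow of steps S110 to S150 can be sketched in Python. The in-memory dicts standing in for the database and the two caches, and the `QUERY_FAILED` sentinel, are assumptions of this illustration, not part of the patent:

```python
# Illustrative sketch of steps S110-S150 using dicts as stand-ins for the
# database, the level 1 cache, and the level 2 cache.

database = {}
l1_cache = {}
l2_cache = {}

QUERY_FAILED = None  # returned to the requester when the key is nowhere


def store(key, value):
    """Step S110: store the data in the database, L1 cache, and L2 cache."""
    database[key] = value
    l1_cache[key] = value
    l2_cache[key] = value


def query(key):
    """Steps S120-S150: try L1, fall back to L2, then to the database."""
    if key in l1_cache:        # S130: hit in the level 1 cache
        return l1_cache[key]
    if key in l2_cache:        # S140: L1 missed; try the level 2 cache
        return l2_cache[key]
    if key in database:        # S150: both caches missed; try the database
        return database[key]
    return QUERY_FAILED        # S150: not in the database either


store("news:1", "first report")
print(query("news:1"))         # served from the L1 cache
del l1_cache["news:1"]
print(query("news:1"))         # L1 unavailable for this key; served from L2
```

Note that the database is only touched when both cache lookups miss, which is the core of the pressure-relief argument above.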
  • the data is stored in the database, the first-level cache, and the second-level cache respectively.
  • when a data query request is received, the query is first performed in the first-level cache.
  • if the requested data exists in the level 1 cache, the queried data is returned to the requester. If no query result can be obtained from the level 1 cache, the level 1 cache may have crashed or be in an unavailable state such as downtime, and the query must then be performed in the level 2 cache. If the requested data exists in the level 2 cache, the queried data is returned to the requester, which indicates that the level 1 cache is indeed unavailable due to a crash, downtime, or a similar state. If no query result can be obtained from the level 2 cache either, the level 2 cache may also have crashed or be unavailable; this situation rarely occurs, and in this case the database must be queried.
  • if the level 2 cache also has no query result, the query is performed directly in the database. If the requested data exists in the database, the queried data is returned to the requester, and if not, a query-failure result is returned.
  • this utilizes two levels of caching to absorb most of the data query pressure on the database. Even when the first-level cache cannot keep up with too many data query requests, the second-level cache can handle the query requests for which no result could be obtained from the first-level cache, so that essentially all data query requests can be processed. Only the data query requests that obtain no result in either the primary cache or the secondary cache are queried in the database; such requests are extremely few and well within the processing capacity of the database. This greatly reduces the access pressure on the database and makes it possible to handle a large number of data query requests in a relatively short period of time, with the beneficial effects of reducing equipment wear and personnel maintenance costs.
  • storing the data in the database, the first-level cache, and the second-level cache respectively includes: for a piece of data, first writing the piece of data into the database, and then writing it into the first-level cache and the second-level cache; when the write to either the primary cache or the secondary cache fails, the piece of data is deleted from both the primary cache and the secondary cache.
  • data query requests are processed mainly by the first-level cache.
  • generally, the data is first written into the first-level cache, and then into the second-level cache.
  • if the write to the level 1 cache fails, the piece of data is deleted from both the level 1 cache and the level 2 cache; likewise, if the write to the level 1 cache succeeds but the write to the level 2 cache fails, the piece of data is also deleted from both the level 1 cache and the level 2 cache.
  • a network server generally uses a relational database, in which a key is generated when data is stored, and processing a data query request means obtaining the data corresponding to a key by querying that key. If a data write fails and the key of that data is not deleted, data query requests will be processed incorrectly. For example, if a piece of data fails to be written to the level 1 cache, a key corresponding to that piece of data may be left in the level 1 cache with an empty value; then even if the piece of data is successfully written to the level 2 cache, the query for a data query request is first performed in the level 1 cache, the data actually returned is the empty value, and the request cannot be answered correctly. To prevent this, in this embodiment, when the write to either the primary cache or the secondary cache fails, the data is deleted from both caches; specifically, the key of the data and the corresponding value are deleted.
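The write rule just described (write the database first, then both caches, and delete the key from both caches if either cache write fails) can be sketched as follows. The `FlakyCache` class is a hypothetical stand-in used only to simulate a failing cache write:

```python
# Sketch of write-through with delete-on-failure, so no half-written key
# (with an empty value) is ever left behind in either cache.

class FlakyCache(dict):
    """A dict-backed cache whose writes can be made to fail on demand."""

    def __init__(self):
        super().__init__()
        self.fail_writes = False

    def put(self, key, value):
        if self.fail_writes:
            raise IOError("cache write failed")
        self[key] = value


def write_record(db, l1, l2, key, value):
    db[key] = value                  # the database is always written first
    try:
        l1.put(key, value)
        l2.put(key, value)
    except IOError:
        # either cache write failed: remove key and value from BOTH caches
        l1.pop(key, None)
        l2.pop(key, None)


db, l1, l2 = {}, FlakyCache(), FlakyCache()
l2.fail_writes = True                # simulate the L2 write failing
write_record(db, l1, l2, "news:1", "first report")
print("news:1" in db, "news:1" in l1, "news:1" in l2)  # True False False
```

The key stays in the database, so a later query still succeeds after both cache lookups miss.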
  • in one embodiment, a first-level cache and a second-level cache are provided as follows: the first-level cache is composed of N cache nodes and the second-level cache is composed of N cache nodes, and the N cache nodes of the first-level cache store the same data in one-to-one correspondence with the N cache nodes of the second-level cache; N is a natural number.
  • such a level 1 cache and level 2 cache in effect form a master-slave mode.
  • the first-level cache serves as the master layer
  • the second-level cache serves as the slave layer.
  • the two caches have the same number of cache nodes and store identical data, so they can be regarded as two cache layers that mirror each other. This ensures the stability of the structured data query process and the consistency of checking and confirming the stored data.
  • strictly speaking, the primary cache and the secondary cache do not have a true master-slave relationship; they differ only in the order in which they are accessed during a query.
  • in another embodiment, the primary cache is composed of M groups of caches, each group composed of N cache nodes, and the secondary cache is composed of N cache nodes; the N cache nodes in each group of caches in the level 1 cache store the same data in one-to-one correspondence with the N cache nodes of the level 2 cache.
  • the level 1 cache is used as a cluster.
  • each group of caches and any other group of caches are mirror images of each other, with exactly the same number of nodes and the same stored data. This handles the situation in which, within a batch of data query requests, the number of query requests for one piece of data or some pieces of data is exceptionally large. If data query requests were simply distributed evenly by data, the query workload of the cache nodes would become uneven, and a cache node might even collapse. For example, suppose each cache node stores 100 pieces of data and the query rates of the data are roughly consistent at about 300 per minute; if a piece of data in some cache node suddenly becomes hotspot data, its query volume increases rapidly and that node can be overwhelmed.
  • by setting multiple groups of caches in the L1 cache, when a piece of data becomes hotspot data and its query volume surges, the data query requests can be distributed among the groups of the L1 cache according to certain rules.
  • since each cache group has a corresponding cache node storing the hotspot data, the queries can all be served successfully, which solves the problem of an exceptionally large query volume for one piece of data or some pieces of data.
  • in this embodiment, storing data separately into the database, the primary cache, and the secondary cache includes: storing the data in the database; storing the data in each group of caches of the primary cache; and storing the data in the secondary cache.
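The fan-out storage rule above (database, every mirrored group of the L1 cache, and the L2 cache) can be sketched with plain dicts standing in for the cache nodes; the names and the choice of M = 3 are assumptions of this illustration:

```python
# Sketch of the multi-group storage rule: a write goes to the database, to
# EVERY group of the level 1 cache, and to the level 2 cache, so that any
# group can later serve a query for the key.

M = 3                                # number of mirrored groups in the L1 cache
database = {}
l1_groups = [{} for _ in range(M)]   # each group mirrors the others
l2_cache = {}


def store_record(key, value):
    database[key] = value
    for group in l1_groups:          # fan out to every L1 group
        group[key] = value
    l2_cache[key] = value


store_record("news:1", "hot story")
print(all(group["news:1"] == "hot story" for group in l1_groups))  # True
```

Because every group holds the hotspot key, a surge of queries for it can be spread across all M groups instead of landing on a single node.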
  • querying in the level 1 cache includes:
  • directing the query request to one group of caches in the level 1 cache by a consistent hash algorithm and querying in that group; or directing the query request to one group of caches in the level 1 cache according to the load capacity and/or availability status of each group of caches, and querying in that group.
  • this embodiment further illustrates how to allocate query requests when the first-level cache contains multiple groups of caches. A good solution is to adopt a consistent hash algorithm, which can be evaluated from the following aspects:
  • Balance means that the hash results can be distributed across all the caches as evenly as possible, so that all cache nodes are utilized.
  • Monotonicity means that if some content has already been dispatched to its corresponding cache and a new cache is then added to the system, the hash results should be such that previously allocated content can be mapped either to its original cache or to the new cache, but not to other cache groups of the old level 1 cache.
  • Dispersion: in a distributed environment, a terminal may not see all of the caches but only a part of them. When a terminal maps content to a cache through hashing, different terminals may see different cache ranges, leading to inconsistent hash results, with the final outcome that the same content is mapped into different cache groups by different terminals. This situation should clearly be avoided, because it causes the same content to be stored in different caches and reduces the efficiency of system storage. Dispersion is defined as the severity of this situation; a good hash algorithm should avoid such inconsistency as much as possible, that is, minimize dispersion.
  • Load: the load problem looks at dispersion from another angle. Since different terminals may map the same content to different cache groups, a particular cache group may have different content mapped to it by different users. Like dispersion, this situation should be avoided, so a good hash algorithm should minimize the load on the caches.
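A minimal consistent-hash sketch for directing a query request to one of the mirrored groups of the level 1 cache follows. The ring-with-virtual-replicas construction is the standard one, not taken from the patent text, and `hashlib.md5` is an arbitrary choice of hash function:

```python
# Consistent hashing: place virtual replicas of each cache group on a ring;
# a key is served by the first replica clockwise from hash(key). Adding or
# removing a group then remaps only the keys adjacent to its replicas,
# which is the "monotonicity" property described above.

import bisect
import hashlib


def _h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class HashRing:
    def __init__(self, groups, replicas=64):
        self._ring = sorted(
            (_h(f"{g}#{i}"), g) for g in groups for i in range(replicas)
        )
        self._points = [p for p, _ in self._ring]

    def group_for(self, key):
        """Walk clockwise from hash(key) to the next virtual node's group."""
        i = bisect.bisect(self._points, _h(key)) % len(self._ring)
        return self._ring[i][1]


ring = HashRing(["group-0", "group-1", "group-2"])
# the same key is always directed to the same cache group
print(ring.group_for("news:1") == ring.group_for("news:1"))  # True
```

With enough virtual replicas per group, keys spread roughly evenly across the groups, which addresses the balance property.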
  • alternatively, the query request can be directed to one group of caches of the level 1 cache according to the load capacity and/or availability status of each group of caches in the level 1 cache.
  • the method further comprises:
  • after the level 1 cache and the level 2 cache are full, when new data needs to be stored, the data whose access volume is lower than a preset value is deleted from the level 1 cache and the level 2 cache, and the new data is written into the level 1 cache and the level 2 cache.
  • the LRU algorithm rests on the observation that pages used frequently in the last few instructions are likely to be used frequently in the following instructions, and conversely, pages that have not been used for a long time are likely to remain unused for a long time in the future. So at each swap, it suffices to find the least recently used page to swap out of memory.
  • in this way, data that has not been accessed for a long time can be deleted when the primary cache and the secondary cache are full, which can be implemented by setting a threshold; new data can be written once storage space is freed.
  • this method suits short-term situations. If the problem persists over a long period, maintenance personnel can add new caches according to the storage situation and other factors.
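An LRU eviction policy like the one just described can be sketched with `collections.OrderedDict`, which keeps entries in insertion order and lets us move an entry to the end on access; the capacity of 3 is an arbitrary choice for the illustration:

```python
# Sketch of LRU eviction: when the cache is full and new data must be
# stored, the least recently accessed entry is evicted first.

from collections import OrderedDict


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)         # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        elif len(self._data) >= self.capacity:
            self._data.popitem(last=False)  # evict the least recently used
        self._data[key] = value


cache = LRUCache(capacity=3)
for k in ("a", "b", "c"):
    cache.put(k, k.upper())
cache.get("a")               # "a" is now the most recently used
cache.put("d", "D")          # cache full: "b" (least recent) is evicted
print(cache.get("b"))        # None
```

The "access volume lower than a preset value" rule in the text could equally be modeled with per-key access counters; LRU is the simplest common realization.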
  • the method further comprises: when modifying data in the database, making the same modification to the corresponding data in the first-level cache and the second-level cache.
  • this embodiment is likewise intended to protect data consistency.
  • the value of the key of a piece of data may need to be updated continually, for example with follow-up reports of a news item.
  • the data in the database must be modified first.
  • after the modification succeeds, the data in the level 1 cache and the level 2 cache also needs to be modified.
  • following the data storage rule mentioned in the foregoing embodiment, when the modification of the data fails in the L1 cache or the L2 cache, the data must be deleted from both the L1 cache and the L2 cache at the same time; the reason is the same as above and is not repeated here.
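The modification rule above can be sketched as follows: update the database first, then apply the same change to both caches, and if a cache update fails, delete the key from both caches, mirroring the write rule. The function and variable names are assumptions of this sketch:

```python
# Sketch of the update rule: database first, then both caches; on a cache
# failure, drop the key from both caches so a stale or half-written value
# can never be served.

def modify_record(db, l1, l2, key, new_value):
    if key not in db:
        return False
    db[key] = new_value          # the database must be modified first
    try:
        l1[key] = new_value      # then the level 1 cache ...
        l2[key] = new_value      # ... and the level 2 cache
    except Exception:
        # a cache update failed: delete the key and value from both caches
        l1.pop(key, None)
        l2.pop(key, None)
    return True


db = {"news:1": "first report"}
l1 = {"news:1": "first report"}
l2 = {"news:1": "first report"}
modify_record(db, l1, l2, "news:1", "follow-up report")
print(l1["news:1"])              # follow-up report
```

With plain dicts the failure branch never fires; in a real deployment the cache writes would go over the network and could raise, which is exactly the case the delete-on-failure branch covers.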
  • FIG. 2 is a schematic structural diagram of a data storage processing apparatus according to an embodiment of the present invention.
  • the data storage processing apparatus 200 includes: a database unit 210, a level 1 cache unit 220, The second level cache unit 230, the write processing unit 240, and the read processing unit 250.
  • the write processing unit 240 is adapted to store data in the database unit 210, the first level cache unit 220, and the second level cache unit 230, respectively.
  • the read processing unit 250 is adapted to first query the level 1 cache unit 220 when a data query request is received; if the requested data exists in the level 1 cache unit 220, the queried data is returned to the requester; if no query result can be obtained from the level 1 cache unit 220, the query is performed in the level 2 cache unit 230; if the requested data exists in the level 2 cache unit 230, the queried data is returned to the requester; if no query result can be obtained from the level 2 cache unit 230, the query is performed in the database unit 210; if the requested data exists in the database unit 210, the queried data is returned to the requester, and if the requested data does not exist in the database unit 210, a query-failure result is returned to the requester.
  • the data storage processing apparatus 200 shown in FIG. 2 sets the storage mode of the data such that data is stored in the database unit 210, the level 1 cache unit 220, and the level 2 cache unit 230.
  • when a data query request is received, the query is first performed in the level 1 cache unit 220.
  • if the requested data exists in the level 1 cache unit 220, the queried data is returned to the requester. If no query result can be obtained from the level 1 cache unit 220, the level 1 cache unit 220 may have crashed or be in an unavailable state such as downtime, and the query must then be performed in the level 2 cache unit 230. If the requested data exists in the level 2 cache unit 230, the queried data is returned to the requester, which indicates that the level 1 cache unit 220 is indeed unavailable due to a crash, downtime, or a similar state. If no query result can be obtained from the level 2 cache unit 230 either, the level 2 cache unit 230 may also have crashed or be in an unavailable state such as downtime.
  • if the level 2 cache unit 230 also has no query result, the database unit 210 is queried directly. If the requested data exists in the database unit 210, the queried data is returned to the requester, and if the requested data does not exist in the database unit 210, a query-failure result is returned to the requester.
  • this utilizes the two-level cache to absorb most of the data query pressure on the database.
  • even when the first-level cache cannot keep up with too many data query requests, the second-level cache can handle the queries for which no result could be obtained from the first-level cache, so that essentially all data query requests can be processed. Only the data query requests that obtain no result in either the primary cache or the secondary cache are queried in the database; such requests are extremely few and well within the processing capacity of the database. This greatly reduces the access pressure on the database and makes it possible to handle a large number of data query requests in a relatively short period of time, with the beneficial effects of reducing equipment wear and personnel maintenance costs.
  • the write processing unit 240 is adapted to, for a piece of data, first write the piece of data into the database unit 210 and then write it into the level 1 cache unit 220 and the level 2 cache unit 230; when the write to either the level 1 cache unit 220 or the level 2 cache unit 230 fails, the piece of data is deleted from both the level 1 cache unit 220 and the level 2 cache unit 230.
  • data query requests are processed mainly by the level 1 cache unit 220. Generally, the data is first written into the level 1 cache unit 220 and then into the level 2 cache unit 230.
  • if the write to the level 1 cache unit 220 fails, the piece of data is deleted from both the level 1 cache unit 220 and the level 2 cache unit 230; likewise, if the data is written to the level 1 cache unit 220 successfully but the write to the level 2 cache unit 230 fails, the piece of data is deleted from both the level 1 cache unit 220 and the level 2 cache unit 230.
  • a network server generally uses a relational database, in which a key is generated when data is stored, and processing a data query request means obtaining the data corresponding to a key by querying that key. If a data write fails and the key of that data is not deleted, data query requests will be processed incorrectly.
  • for example, if a piece of data fails to be written to the level 1 cache unit 220, a key corresponding to that piece of data may remain in the level 1 cache unit 220 with an empty value; then even if the piece of data is successfully written to the level 2 cache unit 230, a query is first performed in the level 1 cache unit 220, the data actually returned is the empty value, and the data query request cannot be answered correctly.
  • to prevent this, the data is deleted from both the primary cache unit 220 and the secondary cache unit 230; specifically, the key of the data and the corresponding value are deleted.
  • in one embodiment, the level 1 cache unit 220 is composed of N cache nodes and the level 2 cache unit 230 is composed of N cache nodes, and the N cache nodes of the level 1 cache unit 220 store the same data in one-to-one correspondence with the N cache nodes of the level 2 cache unit 230; N is a natural number.
  • FIG. 3 shows a schematic diagram of a correspondence relationship of cache nodes.
  • such a level 1 cache unit and level 2 cache unit in effect form a master-slave mode.
  • the level 1 cache unit serves as the master layer
  • the level 2 cache unit serves as the slave layer.
  • they can be seen as two cache layers that mirror each other, which ensures the stability of the structured data query process and the consistency of checking and confirming the stored data.
  • strictly speaking, the primary cache and the secondary cache do not have a true master-slave relationship; they differ only in the order in which they are accessed during a query.
  • in another embodiment, the level 1 cache unit 220 is composed of M groups of caches, each group composed of N cache nodes, and the level 2 cache unit 230 is composed of N cache nodes.
  • the N cache nodes in each set of caches in the level 1 cache unit 220 store the same data in one-to-one correspondence with the N cache nodes of the level 2 cache unit 230.
  • FIG. 4 shows a schematic diagram of another correspondence relationship of cache nodes.
  • the first-level cache unit is used as a cluster.
  • each group of caches and any other group of caches are mirror images of each other, with exactly the same number of nodes and the same stored data. This handles the situation in which, within a batch of data query requests, the number of query requests for one piece of data or some pieces of data is exceptionally large. If data query requests were simply distributed evenly by data, the query workload of the cache nodes would become uneven, and a cache node might even collapse. For example, suppose each cache node stores 100 pieces of data and the query rates of the data are roughly consistent at about 300 per minute; if a piece of data suddenly becomes hotspot data, its query volume surges,
  • and the resulting flood of data query requests falling through to the database could cause the database to crash.
  • by setting multiple groups of caches in the first-level cache unit, when a piece of data becomes hotspot data and its query volume increases, the data query requests can be distributed among the groups of the first-level cache according to certain rules.
  • since each group has a corresponding cache node storing the hotspot data, the queries can all be served successfully, which solves the problem of an exceptionally large query volume for one piece of data or some pieces of data.
  • in one embodiment of the present invention, in the above apparatus, the write processing unit 240 is adapted to store the data in the database unit 210, store the data in each group of caches of the level 1 cache unit 220, and store the data in the level 2 cache unit 230.
  • the read processing unit 250 is adapted to, when querying in the level 1 cache unit 220, direct the query request to one group of caches of the level 1 cache unit 220 by a consistent hash algorithm and query in that group; or to direct the query request to one group of caches of the level 1 cache unit 220 according to the load capacity and/or availability status of each group of caches in the level 1 cache unit 220, and query in that group.
  • this embodiment further illustrates how to allocate query requests when the level 1 cache unit 220 contains multiple groups of caches.
  • a better solution to the problem is to adopt a consistent hash algorithm.
  • it can be considered from the following aspects:
  • Balance means that the hash results can be distributed across all the caches as evenly as possible, so that all cache nodes are utilized.
  • Monotonicity means that if some content has already been dispatched to its corresponding cache and a new cache is then added to the system, the hash results should be such that previously allocated content can be mapped either to its original cache or to the new cache, but not to other cache groups of the old level 1 cache.
  • Dispersion: in a distributed environment, a terminal may not see all of the caches but only a part of them. When a terminal maps content to a cache through hashing, different terminals may see different cache ranges, leading to inconsistent hash results, with the final outcome that the same content is mapped into different cache groups by different terminals. This situation should clearly be avoided, because it causes the same content to be stored in different caches and reduces the efficiency of system storage. Dispersion is defined as the severity of this situation; a good hash algorithm should avoid such inconsistency as much as possible, that is, minimize dispersion.
  • Load: the load problem looks at dispersion from another angle. Since different terminals may map the same content to different cache groups, a particular cache group may have different content mapped to it by different users. Like dispersion, this situation should be avoided, so a good hash algorithm should minimize the load on the caches.
  • alternatively, the query request may be directed to one group of caches of the level 1 cache unit 220 according to the load capacity and/or availability status of each group of caches in the level 1 cache unit 220.
  • the write processing unit 240 is adapted to, after the level 1 cache unit 220 and the level 2 cache unit 230 are full and new data needs to be stored, delete the data whose access volume is lower than a preset value from the level 1 cache unit 220 and the level 2 cache unit 230, and write the new data into the level 1 cache unit 220 and the level 2 cache unit 230.
  • the LRU algorithm rests on the observation that pages used frequently in the last few instructions are likely to be used frequently in the following instructions, and conversely, pages that have not been used for a long time are likely to remain unused for a long time in the future. So at each swap, it suffices to find the least recently used page to swap out of memory.
  • in this way, data that has not been accessed for a long time can be deleted when the level 1 cache unit 220 and the level 2 cache unit 230 are full, which can be implemented by setting a threshold; new data can be written once storage space is freed.
  • this method suits short-term situations. If the problem persists over a long period, maintenance personnel can add new caches according to the storage situation and other factors.
  • the write processing unit 240 is further adapted to: when modifying data in the database unit 210, the same data in the first level cache unit 220 and the second level cache unit 230 Make the same changes.
  • This embodiment is also intended to protect data consistency.
  • the value of the key of a piece of data may need to be updated continuously, such as subsequent reports of the news.
  • the data in the database unit 210 must be modified first; after the modification succeeds, the data in the level 1 cache unit 220 and the level 2 cache unit 230 also needs to be modified.
  • following the data storage rule mentioned in the foregoing embodiment, when the modification of the data fails in the L1 cache unit 220 or the L2 cache unit 230, the data must be deleted from both the L1 cache unit 220 and the L2 cache unit 230 at the same time; the reason is the same as above and is not repeated here.
  • the technical solution of the present invention alleviates the access pressure on the database by providing a level-1 cache and a level-2 cache to handle data query requests; according to the various situations that may be encountered in practice, the cache nodes in the level-1 and level-2 caches are allocated accordingly, and the methods for writing and modifying data in different situations, as well as the way data query requests are processed, are described.
  • as a complete technical solution, the technical solution of the invention effectively relieves the database's access pressure when it faces a large number of data query requests, and provides an orderly, reliable and complete solution, with the beneficial effects of reducing equipment wear and lowering personnel maintenance costs.
  • the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment.
  • the modules, units or components of the embodiments may be combined into one module, unit or component, and they may furthermore be divided into a plurality of sub-modules, sub-units or sub-components.
  • except where at least some of such features and/or processes or units are mutually exclusive, all of the features disclosed in this specification (including the accompanying claims, the abstract and the drawings), and all of the processes or units of any method or device so disclosed, may be combined in any combination.
  • Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, the abstract and the drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
  • Various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination of them.
  • a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components of the data storage processing device in accordance with embodiments of the present invention.
  • the invention can also be implemented as a device or apparatus program (e.g., a computer program or a computer program product) for performing part or all of the methods described herein.
  • such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
  • Figure 5 shows a block diagram of a server for performing the method according to the invention.
  • the server conventionally includes a processor 510 and a computer program product or computer readable medium in the form of a memory 520.
  • the memory 520 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • Memory 520 has a memory space 530 for program code 531 for performing any of the method steps described above.
  • storage space 530 for program code may include various program code 531 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such computer program products are typically portable or fixed storage units as described with reference to Figure 6.
  • the storage unit may have storage segments, storage space and the like arranged similarly to the memory 520 in the server of Figure 5.
  • the program code can be compressed, for example, in an appropriate form.
  • the storage unit includes computer-readable code 531', i.e. code that can be read by a processor such as 510, which, when run by a server, causes the server to perform the various steps of the methods described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A data storage processing method and apparatus, the method comprising: storing data into a database, a level-1 cache and a level-2 cache respectively (S110); when a data query request is received, first querying the level-1 cache (S120); if the requested data exists in the level-1 cache, returning the queried data to the requester, and if no query result can be obtained from the level-1 cache, querying the level-2 cache (S130); if the requested data exists in the level-2 cache, returning the queried data to the requester, and if no query result can be obtained from the level-2 cache, querying the database (S140); if the requested data exists in the database, returning the queried data to the requester, and if the requested data does not exist in the database, returning a query-failure result to the requester (S150). The method relieves the access pressure on the database, with the beneficial effects of reducing equipment wear and lowering personnel maintenance costs.

Description

A Data Storage Processing Method and Apparatus
Technical Field
The present invention relates to the field of network communication technologies, and in particular to a data storage processing method and apparatus.
Background
With the development of network communication technology and the quickening pace of life, people obtain information in more and more ways, and ever more quickly; for example, many events circulate widely on the network almost as soon as they happen, and the service providers supplying such hot news often need to cope with an extremely large volume of data queries from many users within a short time. The service provider usually stores the acquired data in a database, with the stored data written to a hard disk; if data were read directly from the hard disk whenever a user wants to query it, the access pressure on the hard disk would be excessive. The usual practice is therefore to write the data in the database into a cache, and to look the data up in the cache when a data query request is received. However, when a large number of data query requests arrive within a short time, even looking data up in the faster-reading cache can sometimes overload the cache; and once the faster-reading cache crashes, the large volume of data query requests reads the database's hard disk directly, and since the hard disk reads more slowly than the cache, it can be expected that the hard disk will crash as well.
Summary of the Invention
In view of the above problems, the present invention is proposed in order to provide a data storage processing method and apparatus that overcome the above problems or at least partially solve them.
According to one aspect of the present invention, a data storage processing method is provided, including:
storing data into the database, a level-1 cache and a level-2 cache respectively; when a data query request is received, first querying the level-1 cache; if the requested data exists in the level-1 cache, returning the queried data to the requester, and if no query result can be obtained from the level-1 cache, querying the level-2 cache; if the requested data exists in the level-2 cache, returning the queried data to the requester, and if no query result can be obtained from the level-2 cache, querying the database; if the requested data exists in the database, returning the queried data to the requester, and if the requested data does not exist in the database, returning a query-failure result to the requester.
According to another aspect of the present invention, a data storage processing apparatus is provided, including: a database unit, a level-1 cache unit, a level-2 cache unit, a write processing unit and a read processing unit; the write processing unit is adapted to store data into the database unit, the level-1 cache unit and the level-2 cache unit respectively; the read processing unit is adapted, upon receiving a data query request, to first query the level-1 cache unit; if the requested data exists in the level-1 cache unit, to return the queried data to the requester, and if no query result can be obtained from the level-1 cache unit, to query the level-2 cache unit; if the requested data exists in the level-2 cache unit, to return the queried data to the requester, and if no query result can be obtained from the level-2 cache unit, to query the database unit; if the requested data exists in the database unit, to return the queried data to the requester, and if the requested data does not exist in the database unit, to return a query-failure result to the requester.
According to one aspect of the present invention, a computer program is provided, comprising computer-readable code which, when run on a server, causes the server to perform the data storage processing method described above.
According to another aspect of the present invention, a computer-readable medium is provided, in which the above computer program is stored.
It can be seen from the above that, in the technical solution provided by the present invention, the data storage scheme stores data into the database, the level-1 cache and the level-2 cache respectively. When a data query request is received, the level-1 cache is queried first; if the requested data exists in the level-1 cache, the queried data is returned to the requester. If no query result can be obtained from the level-1 cache, the level-1 cache may have crashed or be down or otherwise unavailable, and the level-2 cache must then be queried; if the requested data exists in the level-2 cache, the queried data is returned to the requester, indicating that the level-1 cache may indeed have crashed or be down or otherwise unavailable. If no query result can be obtained from the level-2 cache either, the level-2 cache may likewise have crashed or be down or otherwise unavailable, though this situation rarely occurs; the database must then be queried. If the requested data exists in the database, the queried data is returned to the requester; if the requested data does not exist in the database, a query-failure result is returned to the requester. The level-1 cache thus relieves the database of most of its data query pressure, and even when the level-1 cache cannot keep up with an excess of data query requests, the level-2 cache can handle the data query requests for which the level-1 cache yielded no result. Essentially all data query requests can thus be handled; only the data query requests that obtained no result from either the level-1 or the level-2 cache reach the database, and such requests are very few and within the database's processing capacity. The access pressure on the database is thereby greatly relieved, situations requiring a large number of data query requests to be processed within a short time are handled better, and equipment wear and personnel maintenance costs are reduced.
The above description is merely an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more apparent and comprehensible, specific embodiments of the present invention are set forth below.
Brief Description of the Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are intended only to illustrate the preferred embodiments and are not to be considered limiting of the present invention. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
Figure 1 shows a flowchart of a data storage processing method according to an embodiment of the present invention;
Figure 2 shows a schematic structural diagram of a data storage processing apparatus according to an embodiment of the present invention;
Figure 3 shows a schematic diagram of a correspondence between cache nodes;
Figure 4 shows a schematic diagram of another correspondence between cache nodes;
Figure 5 schematically shows a block diagram of a server for performing the method according to the present invention; and
Figure 6 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly, and so that its scope can be conveyed completely to those skilled in the art.
Figure 1 shows a flowchart of a data storage processing method according to an embodiment of the present invention. As shown in Figure 1, the method includes:
Step S110: storing data into the database, a level-1 cache and a level-2 cache respectively.
Step S120: when a data query request is received, first querying the level-1 cache.
Step S130: if the requested data exists in the level-1 cache, returning the queried data to the requester; if no query result can be obtained from the level-1 cache, querying the level-2 cache.
Step S140: if the requested data exists in the level-2 cache, returning the queried data to the requester; if no query result can be obtained from the level-2 cache, querying the database.
Step S150: if the requested data exists in the database, returning the queried data to the requester; if the requested data does not exist in the database, returning a query-failure result to the requester.
It can be seen that the method shown in Figure 1, by setting up the data storage scheme, stores data into the database, the level-1 cache and the level-2 cache respectively. When a data query request is received, the level-1 cache is queried first; if the requested data exists in the level-1 cache, the queried data is returned to the requester. If no query result can be obtained from the level-1 cache, the level-1 cache may have crashed or be down or otherwise unavailable, and the level-2 cache must then be queried; if the requested data exists in the level-2 cache, the queried data is returned to the requester, indicating that the level-1 cache was indeed unavailable because it had crashed or was down. If no query result can be obtained from the level-2 cache either, the level-2 cache may likewise have crashed or be down or otherwise unavailable, though this situation rarely occurs; the database must then be queried. In addition, there is also the case in which the level-1 cache never stored the data sought by the query request (and, since the data in the level-1 and level-2 caches is kept consistent, in this case the level-2 cache does not hold the queried data either); in that case the database must be queried directly. If the requested data exists in the database, the queried data is returned to the requester; if the requested data does not exist in the database, a query-failure result is returned to the requester. The two cache levels thus relieve the database of most of its data query pressure: even when the level-1 cache cannot keep up with an excess of data query requests, the level-2 cache can handle the data query requests for which the level-1 cache yielded no result, so essentially all data query requests can be handled. Only the data query requests that obtained no result from either the level-1 or the level-2 cache reach the database, and such requests are very few and within the database's processing capacity. The access pressure on the database is thereby greatly relieved, situations requiring a large number of data query requests to be processed within a short time are handled better, and equipment wear and personnel maintenance costs are reduced.
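The three-tier query flow described above (level-1 cache, then level-2 cache, then database) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the `TieredStore` class and its plain-dict layers are hypothetical stand-ins for the cache nodes and the hard-disk database.

```python
class TieredStore:
    """Minimal sketch of the query flow: level-1 cache -> level-2 cache -> database."""

    def __init__(self):
        self.db = {}  # stands in for the hard-disk database (authoritative store)
        self.l1 = {}  # level-1 cache, queried first
        self.l2 = {}  # level-2 cache, a mirror of the level-1 cache

    def put(self, key, value):
        # Per the storage rule above: write the database first, then both cache levels.
        self.db[key] = value
        self.l1[key] = value
        self.l2[key] = value

    def get(self, key):
        # Query L1 first; if no result, fall back to L2, then to the database.
        for layer in (self.l1, self.l2, self.db):
            if key in layer:
                return layer[key]
        return None  # the "query failed" result returned to the requester
```

If the level-1 cache becomes unavailable (here simulated by simply emptying it), the same `get` call is still served by the level-2 cache, and only keys missing from every layer yield the query-failure result.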
Specifically, storing the data into the database, the level-1 cache and the level-2 cache respectively includes: for a piece of data, first writing the piece of data into the database, and then writing it into the level-1 cache and the level-2 cache; when writing to either the level-1 cache or the level-2 cache fails, deleting the piece of data from the level-1 cache and the level-2 cache.
This guarantees the consistency of the data in the level-1 cache, the level-2 cache and the database. Since the hard disk of the database is the primary storage for data, a piece of data must first be written to the database's hard disk, and then written into the level-1 and level-2 caches. The level-1 cache bears the main burden of handling data query requests, so data is generally written to the level-1 cache first and then to the level-2 cache. When writing the data to the level-1 cache fails, the piece of data is deleted from the level-1 cache and the level-2 cache; likewise, if writing to the level-1 cache succeeds but writing to the level-2 cache fails, the piece of data is deleted from the level-1 cache and the level-2 cache. This is because network servers generally use relational databases: a key is generated when data is stored, and a data query request is handled by looking up the key to obtain the data corresponding to that key. Even if a piece of data fails to be written, not deleting its key would lead to data query requests being handled incorrectly. For example, if a piece of data fails to be written into the level-1 cache yet leaves behind its corresponding key with an empty value, then even if the piece of data is written into the level-2 cache successfully, the query, which checks the level-1 cache first when handling data query requests, would actually return the empty value rather than correctly answering the data query request. To avoid this, in this embodiment, when writing to either the level-1 cache or the level-2 cache fails, the piece of data is deleted from both the level-1 cache and the level-2 cache; specifically, the key of the piece of data and the corresponding value can be deleted.
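The delete-on-failure rule in the preceding paragraph can be sketched as below. The `CacheWriteError` exception and the `SimpleCache` class are invented for the illustration; the point is only that a failed write to either cache level removes the key (and its value) from both levels, so no empty-value key in the level-1 cache can shadow good data in the level-2 cache.

```python
class CacheWriteError(Exception):
    """Raised when a cache node fails to accept a write (hypothetical failure mode)."""


class SimpleCache:
    def __init__(self, fail_writes=False):
        self.data = {}
        self.fail_writes = fail_writes  # lets the sketch simulate a write failure

    def put(self, key, value):
        if self.fail_writes:
            raise CacheWriteError(key)
        self.data[key] = value

    def delete(self, key):
        self.data.pop(key, None)


def write_record(db, l1, l2, key, value):
    # Write the database (hard disk) first: it is the primary store.
    db[key] = value
    try:
        l1.put(key, value)  # level-1 cache first, since it serves most queries
        l2.put(key, value)
    except CacheWriteError:
        # A leftover key with an empty value in L1 would be returned before L2
        # is ever consulted, so remove the key and value from BOTH cache levels.
        l1.delete(key)
        l2.delete(key)
```

On success the record exists in all three layers; on a partial failure the caches end up holding neither the key nor the value, while the database keeps the authoritative copy.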
In an embodiment of the present invention, one composition of the level-1 cache and the level-2 cache is provided: the level-1 cache consists of N cache nodes, and the level-2 cache consists of N cache nodes; the N cache nodes of the level-1 cache store the same data in one-to-one correspondence with the N cache nodes of the level-2 cache; N is a natural number.
Such level-1 and level-2 caches in effect form a master-slave pattern. The level-1 cache serves as the master layer and the level-2 cache as the slave layer; the two store exactly the same data and comprise exactly the same number of cache nodes, and can be regarded as two mirror-image cache layers. This ensures the stability of the structured data query flow, and makes it easy to check and confirm the consistency of the stored data. Note that in the above master-slave pattern there is no actual master/slave relationship between the level-1 and level-2 caches; they differ only in the order in which they are accessed during a query.
In an embodiment of the present invention, another composition of the level-1 cache and the level-2 cache is provided: the level-1 cache consists of M groups of caches, each group of caches consisting of N cache nodes, and the level-2 cache consists of N cache nodes; the N cache nodes in each group of caches of the level-1 cache store the same data in one-to-one correspondence with the N cache nodes of the level-2 cache.
Here the level-1 cache acts as a cluster. By analogy with the preceding embodiment, in this embodiment every group of caches mirrors every other group, containing exactly the same number of nodes and exactly the same data. This handles the following situation: within a batch of data query requests, the volume of query requests for one or a few pieces of data is exceptionally large. If the data query requests were distributed evenly by data, the query workload of the cache nodes would be uneven, and a cache node might even crash. For instance, suppose each cache node stores 100 pieces of data and the query volume per piece is fairly uniform, around 300 requests per minute. If one piece of data on some cache node suddenly becomes hot data and its query volume surges, the other cache nodes are unaffected, but the node storing the hot data cannot cope with the sudden flood of data query requests and crashes. Since that data is not stored on the other cache nodes, a large number of data query requests then cannot obtain a result from the level-1 cache and must query the level-2 cache. If the level-2 cache were configured identically to the level-1 cache, it would likewise be unable to handle so many data query requests, the cache node storing the hot data would crash, and the mass of data query requests would hit the database and crash it. To solve this, multiple groups of caches are provided in the level-1 cache, so that when a piece of data becomes hot and its query volume grows, the data query requests can be distributed among the groups of caches of the level-1 cache according to certain rules; since every group of caches has a corresponding cache node holding the hot data, the data queries succeed, neatly solving the problem of a heavy query load on one or a few pieces of data.
Specifically, storing the data into the database, the level-1 cache and the level-2 cache respectively includes: storing the data into the database; storing the data into each group of caches of the level-1 cache; and storing the data into the level-2 cache.
Likewise, the data storage rules of the foregoing embodiments can be referred to in order to guarantee the consistency of the data in the caches and the database; details are not repeated here.
In an embodiment of the present invention, querying the level-1 cache includes:
directing the query request to one group of caches of the level-1 cache by a consistent hashing algorithm, and querying that group of caches; or, directing the query request to one group of caches of the level-1 cache according to the load capacity and/or availability of each group of caches in the level-1 cache, and querying that group of caches.
This embodiment further explains how query requests are distributed when the level-1 cache contains multiple groups of caches. Since the technical solution of the present invention applies well to distributed systems, a good way to solve this problem is to use a consistent hashing algorithm. In a concrete implementation, the following aspects may be considered:
1. Balance: balance means that the hash results can be spread across all of the caches as evenly as possible, so that all cache nodes are utilized.
2. Monotonicity: monotonicity means that if some content has already been assigned to the corresponding caches by hashing, and new caches are then added to the system, the hash results should guarantee that the previously assigned content can be mapped to the original or the new caches, and will not be mapped to other cache groups of the old level-1 cache.
3. Spread: in a distributed environment, a terminal may not see all of the caches but only a part of them. When a terminal wishes to map content onto the caches through the hash process, the range of caches seen by different terminals may differ, so the hash results are inconsistent, and the final outcome is that the same content is mapped by different terminals into different cache groups. This situation should clearly be avoided, because it causes the same content to be stored in different caches and lowers the efficiency of system storage. Spread is defined as the severity of this occurrence. A good hash algorithm should avoid inconsistency as far as possible, that is, minimize spread.
4. Load: the load problem is in fact the spread problem viewed from another angle. Since different terminals may map the same content into different cache groups, a particular cache group may also be mapped to different content by different users. Like spread, this situation should be avoided, so a good hash algorithm should minimize the load on the caches. In this case, the query request can be directed to a group of caches of the level-1 cache according to the load capacity and/or availability of each group of caches in the level-1 cache.
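A consistent-hash mapping from query keys to the M cache groups can be sketched as follows. The ring construction with virtual nodes is a standard technique and an assumption here; the patent names consistent hashing but does not prescribe an implementation, and the group names and vnode count below are illustrative.

```python
import bisect
import hashlib


class ConsistentHashRing:
    """Map keys to cache groups; adding or removing a group remaps few keys (monotonicity)."""

    def __init__(self, groups, vnodes=100):
        # Place each group at many points on the ring: virtual nodes improve balance.
        self.ring = sorted(
            (self._hash(f"{group}#{i}"), group)
            for group in groups
            for i in range(vnodes)
        )

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

    def group_for(self, key):
        # A key is served by the first group point at or after its hash (wrapping around).
        h = self._hash(key)
        idx = bisect.bisect_left(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]
```

Every terminal that builds the ring from the same group list maps a given key to the same group (low spread), and hot keys can still be absorbed because, per this embodiment, every group holds a full mirror of the data.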
In an embodiment of the present invention, the method further includes:
after the level-1 cache and the level-2 cache are full, when new data needs to be stored, deleting from the level-1 cache and the level-2 cache the data whose access volume is below a preset value, and writing the new data into the level-1 cache and the level-2 cache.
Specifically, this can be implemented with the LRU algorithm. The LRU algorithm rests on the assumption that pages used frequently by the last few instructions are likely to be used frequently by the next few, and conversely that pages long unused are likely to remain unused for a long time to come. On each swap it therefore suffices to find the least recently used page and evict it from memory. Applied to this embodiment, when the level-1 cache and the level-2 cache are full, the data that has not been accessed for a long time can be deleted, which can be implemented by setting a threshold; new data can be written once storage space has been freed. Of course, this method generally suits short-term situations; if the problem persists over a long period, maintenance personnel can add new caches according to the storage situation and the like.
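The LRU eviction described above can be sketched with Python's `OrderedDict`. This is an illustrative sketch under stated assumptions: the fixed capacity stands in for a full cache level, and evicting the least recently used entry stands in for deleting long-unaccessed data (the embodiment itself speaks of a preset access-volume threshold).

```python
from collections import OrderedDict


class LRUCache:
    """When full, evict the entry that has gone longest without being accessed."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # ordered from least to most recently used

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # accessing a key marks it most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # drop the least recently used entry
```

Writing a new entry into a full cache frees space by discarding whichever key was accessed longest ago, which is exactly the swap rule stated above.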
In an embodiment of the present invention, the method further includes:
when the data in the database is modified, making the same modification to the identical data in the level-1 cache and the level-2 cache.
This embodiment likewise protects data consistency. The value corresponding to the key of a piece of data may need to be updated continually, for example for follow-up reports of a news item. When modifying, the database's hard disk is likewise modified first; after the modification succeeds, the data in the level-1 cache and the level-2 cache must also be modified. Still following the data storage rule mentioned in the foregoing embodiments, when modification of the data fails in the level-1 cache or the level-2 cache, the data in the level-1 cache and the level-2 cache must be deleted at the same time; the reasons are not repeated here.
Figure 2 shows a schematic structural diagram of a data storage processing apparatus according to an embodiment of the present invention. As shown in Figure 2, the data storage processing apparatus 200 includes: a database unit 210, a level-1 cache unit 220, a level-2 cache unit 230, a write processing unit 240 and a read processing unit 250.
The write processing unit 240 is adapted to store data into the database unit 210, the level-1 cache unit 220 and the level-2 cache unit 230 respectively.
The read processing unit 250 is adapted, upon receiving a data query request, to first query the level-1 cache unit 220; if the requested data exists in the level-1 cache unit 220, to return the queried data to the requester; if no query result can be obtained from the level-1 cache unit 220, to query the level-2 cache unit 230; if the requested data exists in the level-2 cache unit 230, to return the queried data to the requester; if no query result can be obtained from the level-2 cache unit 230, to query the database unit 210; if the requested data exists in the database unit 210, to return the queried data to the requester, and if the requested data does not exist in the database unit 210, to return a query-failure result to the requester.
It can be seen that the data storage processing apparatus 200 shown in Figure 2, by virtue of its data storage scheme, stores data into the database unit 210, the level-1 cache unit 220 and the level-2 cache unit 230 respectively. When a data query request is received, the level-1 cache unit 220 is queried first; if the requested data exists in the level-1 cache unit 220, the queried data is returned to the requester. If no query result can be obtained from the level-1 cache unit 220, the level-1 cache unit 220 may have crashed or be down or otherwise unavailable, and the level-2 cache unit 230 must then be queried; if the requested data exists in the level-2 cache unit 230, the queried data is returned to the requester, indicating that the level-1 cache unit 220 was indeed unavailable because it had crashed or was down. If no query result can be obtained from the level-2 cache unit 230 either, the level-2 cache unit 230 may likewise have crashed or be down or otherwise unavailable, though this situation rarely occurs; the database unit 210 must then be queried. In addition, there is also the case in which the level-1 cache unit 220 never stored the data sought by the query request (and, since the data in the level-1 and level-2 caches is kept consistent, in this case the level-2 cache unit 230 does not hold the queried data either); in that case the database unit 210 must be queried directly. If the requested data exists in the database unit 210, the queried data is returned to the requester; if the requested data does not exist in the database unit 210, a query-failure result is returned to the requester. The two cache levels thus relieve the database of most of its data query pressure: even when the level-1 cache cannot keep up with an excess of data query requests, the level-2 cache can handle the data query requests for which the level-1 cache yielded no result, so essentially all data query requests can be handled. Only the data query requests that obtained no result from either the level-1 or the level-2 cache reach the database, and such requests are very few and within the database's processing capacity. The access pressure on the database is thereby greatly relieved, situations requiring a large number of data query requests to be processed within a short time are handled better, and equipment wear and personnel maintenance costs are reduced.
Specifically, the write processing unit 240 is adapted, for a piece of data, to first write the piece of data into the database unit 210, and then to write it into the level-1 cache unit 220 and the level-2 cache unit 230; when writing to either the level-1 cache unit 220 or the level-2 cache unit 230 fails, to delete the piece of data from the level-1 cache unit 220 and the level-2 cache unit 230.
This guarantees the consistency of the data in the level-1 cache unit 220, the level-2 cache unit 230 and the database unit 210. Since the hard disk of the database is the primary storage for data, a piece of data must first be written to the database's hard disk, and then written into the level-1 and level-2 caches. In this embodiment the level-1 cache unit 220 bears the main burden of handling data query requests, so data is generally written to the level-1 cache unit 220 first and then to the level-2 cache unit 230. When writing the data to the level-1 cache unit 220 fails, the piece of data is deleted from the level-1 cache unit 220 and the level-2 cache unit 230; likewise, if writing to the level-1 cache unit 220 succeeds but writing to the level-2 cache unit 230 fails, the piece of data is deleted from the level-1 cache unit 220 and the level-2 cache unit 230. This is because network servers generally use relational databases: a key is generated when data is stored, and a data query request is handled by looking up the key to obtain the data corresponding to that key. Even if a piece of data fails to be written, not deleting its key would lead to data query requests being handled incorrectly. For example, if a piece of data fails to be written into the level-1 cache unit 220 yet leaves behind its corresponding key with an empty value, then even if the piece of data is written into the level-2 cache unit 230 successfully, the query, which checks the level-1 cache unit 220 first when handling data query requests, would actually return the empty value rather than correctly answering the data query request. To avoid this, in this embodiment, when writing to either the level-1 cache unit 220 or the level-2 cache unit 230 fails, the piece of data is deleted from both the level-1 cache unit 220 and the level-2 cache unit 230; specifically, the key of the piece of data and the corresponding value can be deleted.
In an embodiment of the present invention, in the apparatus shown in Figure 2, the level-1 cache unit 220 consists of N cache nodes, and the level-2 cache unit 230 consists of N cache nodes; the N cache nodes of the level-1 cache unit 220 store the same data in one-to-one correspondence with the N cache nodes of the level-2 cache unit 230; N is a natural number. Figure 3 shows a schematic diagram of a correspondence between cache nodes.
As shown in Figure 3, such level-1 and level-2 cache units in effect form a master-slave pattern. The level-1 cache unit serves as the master layer and the level-2 cache unit as the slave layer; the two store exactly the same data and comprise exactly the same number of cache nodes, and can be regarded as two mirror-image cache layers. This ensures the stability of the structured data query flow, and makes it easy to check and confirm the consistency of the stored data. Note that in the above master-slave pattern there is no actual master/slave relationship between the level-1 and level-2 caches; they differ only in the order in which they are accessed during a query.
In an embodiment of the present invention, in the apparatus shown in Figure 2, the level-1 cache unit 220 consists of M groups of caches, each group of caches consisting of N cache nodes, and the level-2 cache unit 230 consists of N cache nodes; the N cache nodes in each group of caches of the level-1 cache unit 220 store the same data in one-to-one correspondence with the N cache nodes of the level-2 cache unit 230. Figure 4 shows a schematic diagram of another correspondence between cache nodes.
As shown in Figure 4, the level-1 cache unit here acts as a cluster. By analogy with the preceding embodiment, in this embodiment every group of caches mirrors every other group, containing exactly the same number of nodes and exactly the same data. This handles the following situation: within a batch of data query requests, the volume of query requests for one or a few pieces of data is exceptionally large. If the data query requests were distributed evenly by data, the query workload of the cache nodes would be uneven, and a cache node might even crash. For instance, suppose each cache node stores 100 pieces of data and the query volume per piece is fairly uniform, around 300 requests per minute. If one piece of data on some cache node suddenly becomes hot data and its query volume surges, the other cache nodes are unaffected, but the node storing the hot data cannot cope with the sudden flood of data query requests and crashes. Since that data is not stored on the other cache nodes, a large number of data query requests then cannot obtain a result from the level-1 cache unit and must query the level-2 cache unit. If the level-2 cache unit were configured identically to the level-1 cache unit, it would likewise be unable to handle so many data query requests, the cache node storing the hot data would crash, and the mass of data query requests would hit the database and crash it. To solve this, multiple groups of caches are provided in the level-1 cache unit, so that when a piece of data becomes hot and its query volume grows, the data query requests can be distributed among the groups of caches of the level-1 cache according to certain rules; since every group of caches has a corresponding cache node holding the hot data, the data queries succeed, neatly solving the problem of a heavy query load on one or a few pieces of data.
In an embodiment of the present invention, in the above apparatus, the write processing unit 240 is adapted to store data into the database unit 210, to store the data into each group of caches of the level-1 cache unit 220, and to store the data into the level-2 cache unit 230.
Likewise, the data storage rules of the foregoing embodiments can be referred to in order to guarantee the consistency of the data in the cache units and the database unit 210; details are not repeated here.
In an embodiment of the present invention, in the above apparatus, the read processing unit 250 is adapted, when querying the level-1 cache unit 220, to direct the query request to one group of caches of the level-1 cache unit 220 by a consistent hashing algorithm and to query that group of caches; or is adapted to direct the query request to one group of caches of the level-1 cache unit 220 according to the load capacity and/or availability of each group of caches in the level-1 cache unit 220, and to query that group of caches.
This embodiment further explains how query requests are distributed when the level-1 cache unit 220 contains multiple groups of caches. Since the technical solution of the present invention applies well to distributed systems, a good way to solve this problem is to use a consistent hashing algorithm. In a concrete implementation, the following aspects may be considered:
1. Balance: balance means that the hash results can be spread across all of the caches as evenly as possible, so that all cache nodes are utilized.
2. Monotonicity: monotonicity means that if some content has already been assigned to the corresponding caches by hashing, and new caches are then added to the system, the hash results should guarantee that the previously assigned content can be mapped to the original or the new caches, and will not be mapped to other cache groups of the old level-1 cache.
3. Spread: in a distributed environment, a terminal may not see all of the caches but only a part of them. When a terminal wishes to map content onto the caches through the hash process, the range of caches seen by different terminals may differ, so the hash results are inconsistent, and the final outcome is that the same content is mapped by different terminals into different cache groups. This situation should clearly be avoided, because it causes the same content to be stored in different caches and lowers the efficiency of system storage. Spread is defined as the severity of this occurrence. A good hash algorithm should avoid inconsistency as far as possible, that is, minimize spread.
4. Load: the load problem is in fact the spread problem viewed from another angle. Since different terminals may map the same content into different cache groups, a particular cache group may also be mapped to different content by different users. Like spread, this situation should be avoided, so a good hash algorithm should minimize the load on the caches. In this case, the query request can be directed to a group of caches of the level-1 cache unit 220 according to the load capacity and/or availability of each group of caches in the level-1 cache unit 220.
In an embodiment of the present invention, in the above apparatus, the write processing unit 240 is adapted, after the level-1 cache unit 220 and the level-2 cache unit 230 are full, when new data needs to be stored, to delete from the level-1 cache unit 220 and the level-2 cache unit 230 the data whose access volume is below a preset value, and to write the new data into the level-1 cache unit 220 and the level-2 cache unit 230.
Specifically, this can be implemented with the LRU algorithm. The LRU algorithm rests on the assumption that pages used frequently by the last few instructions are likely to be used frequently by the next few, and conversely that pages long unused are likely to remain unused for a long time to come. On each swap it therefore suffices to find the least recently used page and evict it from memory. Applied to this embodiment, when the level-1 cache unit 220 and the level-2 cache unit 230 are full, the data that has not been accessed for a long time can be deleted, which can be implemented by setting a threshold; new data can be written once storage space has been freed. Of course, this method generally suits short-term situations; if the problem persists over a long period, maintenance personnel can add new caches according to the storage situation and the like.
In an embodiment of the present invention, in the above apparatus, the write processing unit 240 is further adapted, when the data in the database unit 210 is modified, to make the same modification to the identical data in the level-1 cache unit 220 and the level-2 cache unit 230.
This embodiment likewise protects data consistency. The value corresponding to the key of a piece of data may need to be updated continually, for example for follow-up reports of a news item. When modifying, the database unit 210 is likewise modified first; after the modification succeeds, the data in the level-1 cache unit 220 and the level-2 cache unit 230 must also be modified. Still following the data storage rule mentioned in the foregoing embodiments, when modification of the data fails in the level-1 cache unit 220 or the level-2 cache unit 230, the data in the level-1 cache unit 220 and the level-2 cache unit 230 must be deleted at the same time; the reasons are not repeated here.
In summary, the technical solution of the present invention alleviates the access pressure on the database by providing a level-1 cache and a level-2 cache to handle data query requests; according to the various situations that may be encountered in practice, the cache nodes in the level-1 and level-2 caches are allocated accordingly, and the methods for writing and modifying data in different situations, as well as the specific way data query requests are processed, are described. As a complete technical solution, the technical solution of the present invention effectively relieves the database's access pressure when it faces a large number of data query requests, and provides an orderly, reliable and complete solution, with the beneficial effects of reducing equipment wear and lowering personnel maintenance costs.
The above are merely preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention are included within the scope of protection of the present invention.
It should be noted that:
The algorithms and displays provided here are not inherently related to any particular computer, virtual device or other equipment. Various general-purpose devices may also be used with the teachings herein. From the description above, the structure required to construct such devices is apparent. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described here may be implemented using a variety of programming languages, and that the descriptions given above for specific languages are intended to disclose the best mode of the invention.
In the specification provided here, a large number of specific details are described. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail, so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the present disclosure and to aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention the various features of the invention are sometimes grouped together into a single embodiment, figure or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of the embodiments may be combined into one module, unit or component, and they may furthermore be divided into a plurality of sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all of the features disclosed in this specification (including the accompanying claims, the abstract and the drawings), and all of the processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, the abstract and the drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will understand that although some embodiments described here include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to fall within the scope of the present invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination of them. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the data storage processing apparatus according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program or a computer program product) for performing part or all of the methods described here. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
For example, Figure 5 shows a block diagram of a server for performing the method according to the present invention. The server conventionally includes a processor 510 and a computer program product or computer-readable medium in the form of a memory 520. The memory 520 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, a hard disk or a ROM. The memory 520 has a storage space 530 for program code 531 for performing any of the method steps described above. For example, the storage space 530 for program code may include individual program codes 531 for implementing the various steps of the above methods respectively. These program codes may be read from, or written into, one or more computer program products. These computer program products comprise program code carriers such as hard disks, compact discs (CDs), memory cards or floppy disks. Such computer program products are usually portable or fixed storage units as described with reference to Figure 6. The storage unit may have storage segments, storage space and the like arranged similarly to the memory 520 in the server of Figure 5. The program code may, for example, be compressed in an appropriate form. Usually, the storage unit comprises computer-readable code 531', i.e. code readable by a processor such as 510, which, when run by the server, causes the server to perform the various steps of the methods described above.
References here to "one embodiment", "an embodiment" or "one or more embodiments" mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Note also that the instances of the phrase "in one embodiment" here do not necessarily all refer to the same embodiment.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any order; these words may be interpreted as names.
In addition, it should also be noted that the language used in this specification has been chosen mainly for purposes of readability and instruction, and not to delineate or limit the subject matter of the invention. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is illustrative rather than restrictive with respect to its scope, which is defined by the appended claims.

Claims (18)

  1. A data storage processing method, wherein the method comprises:
    storing data into the database, a level-1 cache and a level-2 cache respectively;
    when a data query request is received, first querying the level-1 cache;
    if the requested data exists in the level-1 cache, returning the queried data to the requester, and if no query result can be obtained from the level-1 cache, querying the level-2 cache;
    if the requested data exists in the level-2 cache, returning the queried data to the requester, and if no query result can be obtained from the level-2 cache, querying the database;
    if the requested data exists in the database, returning the queried data to the requester, and if the requested data does not exist in the database, returning a query-failure result to the requester.
  2. The method according to claim 1, wherein storing the data into the database, the level-1 cache and the level-2 cache respectively comprises:
    for a piece of data, first writing the piece of data into the database, and then writing it into the level-1 cache and the level-2 cache;
    when writing to either the level-1 cache or the level-2 cache fails, deleting the piece of data from the level-1 cache and the level-2 cache.
  3. The method according to claim 1, wherein
    the level-1 cache consists of N cache nodes, and the level-2 cache consists of N cache nodes;
    the N cache nodes of the level-1 cache and the N cache nodes of the level-2 cache store the same data in one-to-one correspondence;
    N is a natural number.
  4. The method according to claim 1, wherein
    the level-1 cache consists of M groups of caches, each group of caches consists of N cache nodes, and the level-2 cache consists of N cache nodes;
    the N cache nodes in each group of caches of the level-1 cache store the same data in one-to-one correspondence with the N cache nodes of the level-2 cache.
  5. The method according to claim 4, wherein storing the data into the database, the level-1 cache and the level-2 cache respectively comprises:
    storing the data into the database;
    storing the data into each group of caches of the level-1 cache, and storing the data into the level-2 cache.
  6. The method according to claim 4, wherein querying the level-1 cache comprises:
    directing the query request to one group of caches of the level-1 cache by a consistent hashing algorithm, and querying that group of caches;
    or, directing the query request to one group of caches of the level-1 cache according to the load capacity and/or availability of each group of caches in the level-1 cache, and querying that group of caches.
  7. The method according to any one of claims 1-6, wherein the method further comprises:
    after the level-1 cache and the level-2 cache are full, when new data needs to be stored, deleting from the level-1 cache and the level-2 cache the data whose access volume is below a preset value, and writing said new data into the level-1 cache and the level-2 cache.
  8. The method according to any one of claims 1-6, wherein the method further comprises:
    when the data in the database is modified, making the same modification to the identical data in the level-1 cache and the level-2 cache.
  9. A data storage processing apparatus, wherein the apparatus comprises: a database unit, a level-1 cache unit, a level-2 cache unit, a write processing unit and a read processing unit;
    the write processing unit is adapted to store data into the database unit, the level-1 cache unit and the level-2 cache unit respectively;
    the read processing unit is adapted, upon receiving a data query request, to first query the level-1 cache unit; if the requested data exists in the level-1 cache unit, to return the queried data to the requester; if no query result can be obtained from the level-1 cache unit, to query the level-2 cache unit; if the requested data exists in the level-2 cache unit, to return the queried data to the requester; if no query result can be obtained from the level-2 cache unit, to query the database unit; if the requested data exists in the database unit, to return the queried data to the requester, and if the requested data does not exist in the database unit, to return a query-failure result to the requester.
  10. The apparatus according to claim 9, wherein
    the write processing unit is adapted, for a piece of data, to first write the piece of data into the database unit, and then to write it into the level-1 cache unit and the level-2 cache unit; when writing to either the level-1 cache unit or the level-2 cache unit fails, to delete the piece of data from the level-1 cache unit and the level-2 cache unit.
  11. The apparatus according to claim 9, wherein
    the level-1 cache unit consists of N cache nodes, and the level-2 cache unit consists of N cache nodes;
    the N cache nodes of the level-1 cache unit store the same data in one-to-one correspondence with the N cache nodes of the level-2 cache unit;
    N is a natural number.
  12. The apparatus according to claim 9, wherein
    the level-1 cache unit consists of M groups of caches, each group of caches consists of N cache nodes, and the level-2 cache unit consists of N cache nodes;
    the N cache nodes in each group of caches of the level-1 cache unit store the same data in one-to-one correspondence with the N cache nodes of the level-2 cache unit.
  13. The apparatus according to claim 12, wherein
    the write processing unit is adapted to store data into the database unit, to store the data into each group of caches of the level-1 cache unit, and to store the data into the level-2 cache unit.
  14. The apparatus according to claim 12, wherein
    the read processing unit is adapted, when querying the level-1 cache unit, to direct the query request to one group of caches of the level-1 cache unit by a consistent hashing algorithm and to query that group of caches; or is adapted to direct the query request to one group of caches of the level-1 cache unit according to the load capacity and/or availability of each group of caches in the level-1 cache unit, and to query that group of caches.
  15. The apparatus according to any one of claims 9-14, wherein
    the write processing unit is adapted, after the level-1 cache unit and the level-2 cache unit are full, when new data needs to be stored, to delete from the level-1 cache unit and the level-2 cache unit the data whose access volume is below a preset value, and to write said new data into the level-1 cache unit and the level-2 cache unit.
  16. The apparatus according to any one of claims 9-14, wherein
    the write processing unit is further adapted, when the data in the database unit is modified, to make the same modification to the identical data in the level-1 cache unit and the level-2 cache unit.
  17. A computer program, comprising computer-readable code which, when run on a server, causes the server to perform the data storage processing method according to any one of claims 1-8.
  18. A computer-readable medium, in which the computer program according to claim 17 is stored.
PCT/CN2016/092414 2015-09-21 2016-07-29 一种数据存储处理方法和装置 WO2017050014A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510604139.1A CN105183394B (zh) 2015-09-21 2015-09-21 一种数据存储处理方法和装置
CN201510604139.1 2015-09-21

Publications (1)

Publication Number Publication Date
WO2017050014A1 true WO2017050014A1 (zh) 2017-03-30

Family

ID=54905503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/092414 WO2017050014A1 (zh) 2015-09-21 2016-07-29 一种数据存储处理方法和装置

Country Status (2)

Country Link
CN (1) CN105183394B (zh)
WO (1) WO2017050014A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021674A (zh) * 2017-12-06 2018-05-11 浙江远算云计算有限公司 一种同步云端仿真数据的多级缓存传输加速***
CN110837521A (zh) * 2019-11-15 2020-02-25 北京金山云网络技术有限公司 数据查询方法、装置和服务器
CN110941619A (zh) * 2019-12-02 2020-03-31 浪潮软件股份有限公司 针对多种使用场景的图数据存储模型和结构的定义方法
CN112597354A (zh) * 2020-12-22 2021-04-02 贝壳技术有限公司 一种提供配置参数的方法、装置、***及存储介质
CN112783926A (zh) * 2021-01-20 2021-05-11 银盛支付服务股份有限公司 一种减少调用服务耗时的方法
CN113114642A (zh) * 2021-03-30 2021-07-13 广州宸祺出行科技有限公司 一种接口整合的驾驶员身份认证方法及装置
CN113360528A (zh) * 2020-03-06 2021-09-07 北京沃东天骏信息技术有限公司 基于多级缓存的数据查询方法和装置
CN114035748A (zh) * 2021-11-10 2022-02-11 罗普特科技集团股份有限公司 一种数据文件的存取方法与***
CN115185860A (zh) * 2022-09-14 2022-10-14 沐曦集成电路(上海)有限公司 一种缓存访问***
CN115905323A (zh) * 2023-01-09 2023-04-04 北京创新乐知网络技术有限公司 适用于多种搜索策略的搜索方法、装置、设备及介质

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105183394B (zh) * 2015-09-21 2018-09-04 北京奇虎科技有限公司 一种数据存储处理方法和装置
CN107231395A (zh) * 2016-03-25 2017-10-03 阿里巴巴集团控股有限公司 数据存储方法、装置和***
CN106777085A (zh) * 2016-12-13 2017-05-31 东软集团股份有限公司 一种数据处理方法、装置及数据查询***
CN106934044B (zh) * 2017-03-16 2020-02-14 北京深思数盾科技股份有限公司 一种数据处理方法及装置
CN107562829B (zh) * 2017-08-22 2020-09-29 上海幻电信息科技有限公司 数据访问方法及设备
CN108196795B (zh) * 2017-12-30 2020-09-04 惠龙易通国际物流股份有限公司 一种数据存储方法、设备及计算机存储介质
CN108446356B (zh) * 2018-03-12 2023-08-29 上海哔哩哔哩科技有限公司 数据缓存方法、服务器及数据缓存***
CN109446222A (zh) * 2018-08-28 2019-03-08 厦门快商通信息技术有限公司 一种双缓存的数据存储方法、装置及存储介质
CN110909025A (zh) * 2018-09-17 2020-03-24 深圳市优必选科技有限公司 数据库的查询方法、查询装置及终端
CN109376175A (zh) * 2018-10-24 2019-02-22 上海中商网络股份有限公司 一种数据管理方法、装置、设备及存储介质
CN109710639A (zh) * 2018-11-26 2019-05-03 厦门市美亚柏科信息股份有限公司 一种基于双缓存机制的检索方法、装置及存储介质
CN111372277B (zh) * 2018-12-26 2023-07-14 南京中兴新软件有限责任公司 数据分发方法、装置及存储介质
CN111694865A (zh) * 2020-06-02 2020-09-22 中国工商银行股份有限公司 基于分布式***的四层结构数据获取方法和装置
CN113596177B (zh) * 2021-08-13 2023-06-27 四川虹美智能科技有限公司 智能家居设备的ip地址的解析方法和装置
CN113946591A (zh) * 2021-12-20 2022-01-18 北京力控元通科技有限公司 一种热点数据缓存方法、***及电子设备
CN115134134A (zh) * 2022-06-23 2022-09-30 中国民航信息网络股份有限公司 一种信息处理方法、装置及设备
CN114817341B (zh) * 2022-06-30 2022-09-06 北京奥星贝斯科技有限公司 访问数据库的方法和装置
CN115934583B (zh) * 2022-11-16 2024-07-12 智慧星光(安徽)科技有限公司 分级缓存方法、装置及***

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692229A (zh) * 2009-07-28 2010-04-07 武汉大学 基于数据内容的三维空间数据自适应多级缓存***
CN103607312A (zh) * 2013-11-29 2014-02-26 广州华多网络科技有限公司 用于服务器***的数据请求处理方法及***
CN104090934A (zh) * 2014-06-26 2014-10-08 山东金质信息技术有限公司 一种标准服务平台分布式并行计算数据库及其检索方法
CN104866434A (zh) * 2015-06-01 2015-08-26 北京圆通慧达管理软件开发有限公司 面向多应用的数据存储***和数据存储、调用方法
CN105183394A (zh) * 2015-09-21 2015-12-23 北京奇虎科技有限公司 一种数据存储处理方法和装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404649B (zh) * 2008-11-11 2012-01-11 阿里巴巴集团控股有限公司 一种基于cache的数据处理***及其方法
KR101516245B1 (ko) * 2012-02-24 2015-05-04 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 제스처 기반 게임 시스템을 위한 안전 방안
WO2015100653A1 (zh) * 2013-12-31 2015-07-09 华为技术有限公司 一种数据缓存方法、装置及***
CN103701957A (zh) * 2014-01-14 2014-04-02 互联网域名***北京市工程研究中心有限公司 Dns递归方法及其***
CN104683329B (zh) * 2015-02-06 2018-11-13 成都品果科技有限公司 一种移动设备客户端的数据缓存方法及***

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692229A (zh) * 2009-07-28 2010-04-07 武汉大学 基于数据内容的三维空间数据自适应多级缓存***
CN103607312A (zh) * 2013-11-29 2014-02-26 广州华多网络科技有限公司 用于服务器***的数据请求处理方法及***
CN104090934A (zh) * 2014-06-26 2014-10-08 山东金质信息技术有限公司 一种标准服务平台分布式并行计算数据库及其检索方法
CN104866434A (zh) * 2015-06-01 2015-08-26 北京圆通慧达管理软件开发有限公司 面向多应用的数据存储***和数据存储、调用方法
CN105183394A (zh) * 2015-09-21 2015-12-23 北京奇虎科技有限公司 一种数据存储处理方法和装置

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021674A (zh) * 2017-12-06 2018-05-11 浙江远算云计算有限公司 一种同步云端仿真数据的多级缓存传输加速***
CN110837521A (zh) * 2019-11-15 2020-02-25 北京金山云网络技术有限公司 数据查询方法、装置和服务器
CN110941619A (zh) * 2019-12-02 2020-03-31 浪潮软件股份有限公司 针对多种使用场景的图数据存储模型和结构的定义方法
CN110941619B (zh) * 2019-12-02 2023-05-16 浪潮软件股份有限公司 针对多种使用场景的图数据存储模型和结构的定义方法
CN113360528A (zh) * 2020-03-06 2021-09-07 北京沃东天骏信息技术有限公司 基于多级缓存的数据查询方法和装置
CN112597354A (zh) * 2020-12-22 2021-04-02 贝壳技术有限公司 一种提供配置参数的方法、装置、***及存储介质
CN112783926A (zh) * 2021-01-20 2021-05-11 银盛支付服务股份有限公司 一种减少调用服务耗时的方法
CN113114642A (zh) * 2021-03-30 2021-07-13 广州宸祺出行科技有限公司 一种接口整合的驾驶员身份认证方法及装置
CN114035748A (zh) * 2021-11-10 2022-02-11 罗普特科技集团股份有限公司 一种数据文件的存取方法与***
CN115185860A (zh) * 2022-09-14 2022-10-14 沐曦集成电路(上海)有限公司 一种缓存访问***
CN115185860B (zh) * 2022-09-14 2022-12-02 沐曦集成电路(上海)有限公司 一种缓存访问***
CN115905323A (zh) * 2023-01-09 2023-04-04 北京创新乐知网络技术有限公司 适用于多种搜索策略的搜索方法、装置、设备及介质
CN115905323B (zh) * 2023-01-09 2023-08-18 北京创新乐知网络技术有限公司 适用于多种搜索策略的搜索方法、装置、设备及介质

Also Published As

Publication number Publication date
CN105183394B (zh) 2018-09-04
CN105183394A (zh) 2015-12-23

Similar Documents

Publication Publication Date Title
WO2017050014A1 (zh) 一种数据存储处理方法和装置
US10229004B2 (en) Data transfer priority levels
US8332367B2 (en) Parallel data redundancy removal
US8433681B2 (en) System and method for managing replication in an object storage system
US11245774B2 (en) Cache storage for streaming data
WO2017117919A1 (zh) 数据存储方法和装置
US20050172076A1 (en) System for managing distributed cache resources on a computing grid
JP5817558B2 (ja) 情報処理装置、分散処理システム、キャッシュ管理プログラムおよび分散処理方法
WO2016131175A1 (zh) 多核***中数据访问者目录的访问方法及设备
US11429311B1 (en) Method and system for managing requests in a distributed system
US10146833B1 (en) Write-back techniques at datastore accelerators
US10642745B2 (en) Key invalidation in cache systems
CN111475279B (zh) 用于备份的智能数据负载平衡的***和方法
CN117407159A (zh) 内存空间的管理方法及装置、设备、存储介质
US11436193B2 (en) System and method for managing data using an enumerator
CN117009389A (zh) 数据缓存方法、装置、电子设备和可读存储介质
US20180157593A1 (en) Value cache in a computing system
CN112130747A (zh) 分布式对象存储***及数据读写方法
US20230359556A1 (en) Performing Operations for Handling Data using Processor in Memory Circuitry in a High Bandwidth Memory
US20150106884A1 (en) Memcached multi-tenancy offload
US20150088826A1 (en) Enhanced Performance for Data Duplication
CN110022348B (zh) 用于动态备份会话的***和方法
CN108694209B (zh) 基于对象的分布式索引方法和客户端
US7051158B2 (en) Single computer distributed memory computing environment and implementation thereof
US11816088B2 (en) Method and system for managing cross data source data access requests

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16847896

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16847896

Country of ref document: EP

Kind code of ref document: A1