CN110598138A - Cache-based processing method and device - Google Patents


Info

Publication number
CN110598138A
CN110598138A (application CN201810600524.2A)
Authority
CN
China
Prior art keywords
target data
cache pool
level cache
request
pool
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN201810600524.2A
Other languages
Chinese (zh)
Inventor
Chen Ran (陈然)
Current Assignee (the listed assignees may be inaccurate)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201810600524.2A
Publication of CN110598138A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/547: Remote procedure calls [RPC]; Web services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/54: Indexing scheme relating to G06F 9/54
    • G06F 2209/541: Client-server

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a cache-based processing method and device, relating to the field of computer technology. The method comprises the following steps: querying a local cache according to a data query request to acquire target data; when the target data does not exist in the local cache, acquiring the target data from a server, counting the number of requests for the target data, and writing the target data acquired from the server into the local cache when the number of requests exceeds an activation threshold. Through these steps, the access pressure that high-frequency network requests place on the server can be effectively relieved, request response efficiency can be improved, and the cache mechanism can be triggered automatically according to request statistics, improving the flexibility of the cache mechanism.

Description

Cache-based processing method and device
Technical Field
The invention relates to the field of computer technology, and in particular to a cache-based processing method and device.
Background
In existing RPC (remote procedure call) technology, the server provides a set of function interfaces to the client, and all caching policies reside on the server side. Moreover, in using the cache, the server controls starting and stopping it through switch settings in a configuration file or a database configuration table.
In the course of implementing the invention, the inventor found at least the following problems in the prior art. First, because all caching policies are handled on the server side, network congestion easily occurs under high concurrency, seriously delaying the return of RPC response data. Second, starting and stopping the cache relies on manual configuration (modifying configuration files, database configuration tables, and so on), so the working mechanism is inflexible. Third, the prior art manages cached data mainly through a timeout mechanism, a single means of management. In addition, the prior art does not manage cached data hierarchically.
Disclosure of Invention
In view of this, the present invention provides a cache-based processing method and apparatus, which can not only effectively relieve the access pressure that high-frequency network requests place on the server and improve request response efficiency, but also automatically trigger the cache mechanism according to request statistics, thereby improving the flexibility of the cache mechanism.
To achieve the above object, according to one aspect of the present invention, a cache-based processing method is provided.
The cache-based processing method of the present invention comprises the following steps: querying a local cache according to a data query request to acquire target data; when the target data does not exist in the local cache, acquiring the target data from a server, counting the number of requests for the target data, and writing the target data acquired from the server into the local cache when the number of requests exceeds an activation threshold.
Optionally, the local cache includes a first-level cache pool and a second-level cache pool, and the step of querying the local cache according to the data query request to obtain the target data comprises: querying the first-level cache pool according to the data query request; when the target data exists in the first-level cache pool, acquiring the target data from the first-level cache pool; when the target data does not exist in the first-level cache pool, querying the second-level cache pool according to the data query request; and when the target data exists in the second-level cache pool, acquiring the target data from the second-level cache pool.
Optionally, the method further comprises: before the step of acquiring the target data from the first-level cache pool is executed, confirming that the target data in the first-level cache pool is within a first effective duration and that its number of requests within the first effective duration is not greater than a first wear threshold.
Optionally, the method further comprises: when the target data in the first-level cache pool exceeds the first effective duration, or its number of requests within the first effective duration is greater than the first wear threshold, acquiring the target data from the server; when the target data in the first-level cache pool exceeds the first effective duration, deleting the target data from the first-level cache pool; and when the number of requests for the target data within the first effective duration is greater than the first wear threshold, updating the target data in the first-level cache pool.
Optionally, the method further comprises: before the step of acquiring the target data from the second-level cache pool is executed, confirming that the target data in the second-level cache pool is within a second effective duration and that its number of requests within the second effective duration is not greater than a second wear threshold.
Optionally, the method further comprises: when the target data in the second-level cache pool exceeds the second effective duration, or its number of requests within the second effective duration is greater than the second wear threshold, acquiring the target data from the server; when the target data in the second-level cache pool exceeds the second effective duration, deleting the target data from the second-level cache pool; and when the number of requests for the target data within the second effective duration is greater than the second wear threshold, updating the target data in the second-level cache pool.
Optionally, the method further comprises: after the step of confirming that the target data in the second-level cache pool is within the second effective duration is executed, judging whether the number of requests for the target data within a transition statistical period is greater than a transition threshold; if so, writing the target data acquired from the server into the first-level cache pool and deleting the target data from the second-level cache pool; wherein the transition statistical period is shorter than the second effective duration.
Optionally, the local cache further includes a third-level cache pool, and the step of counting the number of requests for the target data comprises: querying the third-level cache pool according to the data query request to obtain a corresponding statistical record; when the corresponding statistical record is within a third effective duration, incrementing the number of requests in the record by one; and when the corresponding statistical record exceeds the third effective duration, resetting the number of requests in the record to one.
Optionally, the first level cache pool, the second level cache pool, and/or the third level cache pool employ an LRU storage mechanism.
To achieve the above object, according to another aspect of the present invention, there is provided a cache-based processing apparatus.
The cache-based processing device of the present invention comprises: the acquisition module is used for inquiring the local cache according to the data inquiry request so as to acquire target data; the communication module is used for acquiring the target data from a server side when the target data does not exist in the local cache; the cache starting module is used for counting the request times of the target data when the target data does not exist in the local cache; and under the condition that the request times of the target data exceed an activation threshold, the cache starting module is further used for writing the target data acquired from the server into the local cache.
To achieve the above object, according to still another aspect of the present invention, there is provided an electronic apparatus.
The electronic device of the present invention includes: one or more processors; and storage means for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the cache-based processing method of the present invention.
To achieve the above object, according to still another aspect of the present invention, there is provided a computer-readable medium.
The computer-readable medium of the invention has stored thereon a computer program which, when executed by a processor, implements the cache-based processing method of the invention.
One embodiment of the above invention has the following advantages or benefits: the local cache is arranged at the client, the target data is obtained from the local cache according to the data query request, and when the target data does not exist in the local cache, the target data is obtained from the server, so that a large number of data query requests can be directly processed locally, the number of requests transmitted to the server through a network is obviously reduced, the access pressure of a high-frequency network request to the server is effectively relieved, and the response efficiency of the data query request is improved. In addition, the request times of the target data are counted when the target data do not exist in the local cache, and the target data acquired from the server side are written into the local cache when the request times exceed the activation threshold, so that the local cache mechanism of the client side can be automatically triggered according to the counting condition of the request times, and the flexibility of the cache mechanism is improved.
Further effects of the above optional implementations will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a cache-based processing method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the main steps of a cache-based processing method according to another embodiment of the present invention;
FIG. 3 is a diagram illustrating a portion of steps of a cache-based processing method according to yet another embodiment of the present invention;
FIG. 4 is a diagram illustrating a portion of steps of a cache-based processing method according to yet another embodiment of the present invention;
FIG. 5 is a schematic diagram of the main blocks of a cache-based processing apparatus according to one embodiment of the present invention;
FIG. 6 is a diagram illustrating the components of a local cache, according to an embodiment of the invention;
FIG. 7 is a schematic diagram of the main blocks of a cache-based processing apparatus according to another embodiment of the present invention;
FIG. 8 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
FIG. 9 is a block diagram of a computer system suitable for use with the electronic device to implement an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of main steps of a cache-based processing method according to an embodiment of the present invention. The method of the embodiment of the invention can be executed by a client. As shown in fig. 1, the cache-based processing method according to the embodiment of the present invention includes:
and S101, inquiring a local cache according to the data inquiry request to acquire target data.
The data query request may be in a request format such as an RPC (remote procedure call) request or an Http request. In a specific example, the caller may make an RPC request through parameters defined in a function interface provided by the server. After receiving the RPC request of the caller, the client can query the local cache according to the parameters in the RPC request. And if the target data exists in the local cache, the client directly acquires the target data from the local cache. The target data may be understood as "result data of a query". For example, the request parameter is an article identifier, and the target data is detail information of the article. For example, the request parameter is a name of a merchant, and the target data is transaction information of the merchant.
The local cache may be set in the memory of the client, and may cache data in key-value form. Further, the local cache may be a multi-level cache including a first-level cache pool and a second-level cache pool. Both pools are mainly used for caching target data with high request frequency, and the request frequency of cached data in the first-level cache pool is higher than that in the second-level cache pool.
Step S102, when the target data does not exist in the local cache, acquiring the target data from a server.
Specifically, when target data does not exist in the local cache, the client may send a data query request to the server, and then receive result data, that is, the target data, corresponding to the data query request returned by the server.
Step S103, when the target data does not exist in the local cache, counting the request times of the target data, and writing the target data acquired from the server into the local cache when the request times of the target data exceed an activation threshold.
In step S103, the number of requests for the target data may be understood as "the number of calls for the request parameter". For example, if the request parameter in the data query request is a product identifier, the client counts the number of calls of the received request parameter "product identifier".
In the embodiment of the invention, the local cache is arranged at the client, the target data is obtained from the local cache according to the data query request, and the target data is obtained from the server when the target data does not exist in the local cache, so that a large number of data query requests can be directly processed locally, the number of requests transmitted to the server through a network is obviously reduced, the access pressure of a high-frequency network request to the server is effectively relieved, and the response efficiency of the data query request is improved. In addition, the request times of the target data are counted when the target data do not exist in the local cache, and the target data acquired from the server side are written into the local cache when the request times exceed the activation threshold, so that the local cache mechanism of the client side can be automatically triggered according to the counting condition of the request times, and the flexibility of the cache mechanism is improved.
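The basic flow of steps S101 to S103 can be sketched minimally as follows. This is an illustrative Python sketch; the class, method, and parameter names are assumptions, as the patent defines no concrete API:

```python
class CacheClient:
    """Sketch of steps S101-S103: query the local cache first, fall
    back to the server on a miss, count requests per key, and start
    caching a key once its request count exceeds the activation
    threshold."""

    def __init__(self, fetch_from_server, activation_threshold=100):
        self.fetch_from_server = fetch_from_server  # callable(key) -> value
        self.activation_threshold = activation_threshold
        self.local_cache = {}      # key -> target data
        self.request_counts = {}   # key -> number of requests seen

    def query(self, key):
        # S101: query the local cache according to the data query request
        if key in self.local_cache:
            return self.local_cache[key]
        # S102: target data absent locally, so acquire it from the server
        value = self.fetch_from_server(key)
        # S103: count the request; activate caching past the threshold
        self.request_counts[key] = self.request_counts.get(key, 0) + 1
        if self.request_counts[key] > self.activation_threshold:
            self.local_cache[key] = value
        return value
```

Later embodiments refine `local_cache` into first-level, second-level, and third-level pools with effective durations and wear thresholds; this sketch shows only the activation mechanism.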
Fig. 2 is a schematic diagram of main steps of a cache-based processing method according to another embodiment of the present invention. In the embodiment shown in fig. 2, the local cache includes a first-level cache pool, a second-level cache pool, and a third-level cache pool. The first-level and second-level cache pools are mainly used for caching target data with high request frequency, and the third-level cache pool is mainly used for caching statistical records of data query requests. The request frequency of cached data in the first-level cache pool is higher than that in the second-level cache pool. As shown in fig. 2, the cache-based processing method according to the embodiment of the present invention includes:
step S201, inquiring the primary cache pool according to the data inquiry request.
The data query request may be in a request format such as an RPC request or an HTTP request. In specific implementation, the caller can make an RPC request through parameters defined in a function interface provided by the server. After receiving the RPC request, the client may first query the first-level cache pool in the local cache according to the parameters in the RPC request. The first-level cache pool may cache data in the form of key-value pairs. Further, the first-level cache pool employs an LRU (least recently used) storage mechanism to ensure that the most recently requested target data is correctly recorded and target data requested earlier is deleted from the first-level cache pool.
Step S202, when the target data exists in the first-level cache pool, the target data is obtained from the first-level cache pool. Further, after the target data is fetched from the primary cache pool, the fetched target data may be returned to the caller.
Step S203, when the target data does not exist in the first-level cache pool, querying the second-level cache pool according to the data query request.
The second-level cache pool may cache data in the form of key-value pairs. Further, the second-level cache pool employs an LRU (least recently used) storage mechanism to ensure that the most recently requested target data is correctly recorded and target data requested earlier is deleted from the second-level cache pool. By adopting the LRU storage mechanism in the first-level and second-level cache pools, cached data can be prevented from occupying a large amount of the client's memory resources. Alternatively, the first-level and second-level cache pools may adopt a FIFO (first-in, first-out queue) storage mechanism.
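The LRU storage mechanism described above can be sketched minimally as follows. This is an illustrative Python sketch only; the class name, capacity handling, and method names are assumptions, as the patent does not specify an implementation:

```python
from collections import OrderedDict

class LRUPool:
    """Sketch of an LRU cache pool: the most recently requested entries
    are kept, and the least recently used entry is evicted once the
    pool's capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```

A FIFO variant, as mentioned above, would simply omit the `move_to_end` call on reads, so eviction order depends only on insertion order.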
Step S204, when the target data exists in the second-level cache pool, acquiring the target data from the second-level cache pool. Further, after the target data is retrieved from the second-level cache pool, the retrieved target data may be returned to the caller.
Step S205, when the target data does not exist in the second-level cache pool, acquiring the target data from the server.
Specifically, when the first-level cache pool and the second-level cache pool do not have the requested target data, the client may send the data query request to the server, and then receive result data, that is, the target data, corresponding to the data query request returned by the server. Further, after the target data is obtained from the server, the obtained target data may be returned to the caller.
Step S206, querying the third-level cache pool according to the data query request to obtain a corresponding statistical record.
The third-level cache pool may cache statistical records in the form of key-value pairs. Further, the third-level cache pool adopts an LRU (least recently used) storage mechanism to ensure that the statistical records of the most recent data query requests are correctly stored and earlier statistical records are deleted from the third-level cache pool. Specifically, when neither the first-level nor the second-level cache pool holds the requested target data, the client may query the third-level cache pool according to the request parameter in the data query request to obtain the statistical record corresponding to that parameter. The statistical record may include: the request parameter, a third effective duration (also known as the "statistical cycle duration"), the number of requests within the third effective duration, and the activation threshold. In specific implementation, the third effective duration and the activation threshold may be flexibly set as required. For example, the third effective duration may be set to 5 minutes and the activation threshold to 100 requests.
Step S207, judging whether the corresponding statistical record is within the third effective duration. If yes, go to step S208; if not, go to step S212.
In an alternative embodiment, the statistical record may further include an end-of-cycle time. In this alternative embodiment, step S207 further comprises: and comparing the cycle ending time in the obtained statistical record with the current time. If the period ending time is later than the current time, judging that the obtained statistical record is in a third effective duration; otherwise, judging that the obtained statistical record exceeds the third effective duration. In another alternative embodiment, the statistical record may further include a status identifier. In this alternative embodiment, step S207 further comprises: when the value of the state identifier is "true", it indicates that the obtained statistical record is in a third effective duration; and when the value of the state identifier is 'false', the obtained statistical record exceeds a third effective time.
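The first alternative above, comparing the cycle ending time in the record with the current time, can be sketched as follows. The field name `cycle_end_time` is an assumption for illustration:

```python
import time

def within_validity(record, now=None):
    """Sketch of the cycle-end-time check: a statistical record is
    within its third effective duration exactly when its cycle ending
    time is later than the current time."""
    now = time.time() if now is None else now
    return record["cycle_end_time"] > now
```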
And step S208, adding one to the request times in the corresponding statistical records. After step S208, step S209 is performed.
Step S209, determining whether the number of requests in the corresponding statistical record exceeds an activation threshold. If yes, go to step S210; if not, go to step S211.
In the embodiment of the present invention, when the number of requests for target data exceeds the activation threshold, it indicates that the data query request is more active, and a cache policy needs to be opened for the data query request, that is, the requested target data is placed in the local cache through step S210.
Step S210, writing the target data acquired from the server into the second-level cache pool.
Step S211, no operation. It is to be noted that the "no operation" in step S211 means that the operation in step S210 is not performed.
Step S212, setting the number of requests in the corresponding statistical record to one. Further, after step S212, the method of the embodiment of the present invention further includes the following steps: and updating the cycle ending time or the value of the state identifier in the statistical record.
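The statistical-record handling of steps S206 to S212 (look up the record, increment the count within the cycle, and reset the count to one when the cycle has expired) can be sketched as follows. The record layout and function name are assumptions for illustration, not the patent's API:

```python
import time

def count_request(stats_pool, key, validity_seconds, now=None):
    """Sketch of steps S206-S212: look up the statistical record for a
    request parameter in the third-level cache pool, increment the
    request count if the record is still within its third effective
    duration, otherwise start a new statistical cycle with a count of
    one and a fresh cycle ending time."""
    now = time.time() if now is None else now
    record = stats_pool.get(key)
    if record is not None and record["cycle_end_time"] > now:
        record["count"] += 1          # S208: within validity, increment
    else:
        record = {                    # S212: expired or absent, reset
            "count": 1,
            "cycle_end_time": now + validity_seconds,
        }
        stats_pool[key] = record
    return record["count"]
```

The caller would then compare the returned count against the activation threshold (step S209) to decide whether to write the server's result into the second-level cache pool (step S210).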
In the embodiment of the invention, the local cache comprising the first-level cache pool, the second-level cache pool and the third-level cache pool is arranged at the client, so that a large number of data query requests can be directly processed locally, the number of the requests transmitted to the server through a network is obviously reduced, the access pressure of high-frequency network requests on the server is effectively relieved, and the response efficiency of the data query requests is improved. In addition, through the steps, the local cache mechanism of the client can be automatically triggered in real time according to the statistical condition of the request times, so that manual configuration is not required, and the flexibility of the cache mechanism is improved. In addition, by adopting the LRU storage mechanism in the first-level cache pool, the second-level cache pool and the third-level cache pool, the cache data can be ensured not to occupy a large amount of memory resources of the client.
Building on the embodiment shown in fig. 2, the present invention further improves the processing flow, so as to provide yet another cache-based processing method. This further embodiment mainly improves the processing flow of steps S201 to S204. The improved steps are described in detail below with reference to fig. 3 and fig. 4; steps that are not improved are not described again.
FIG. 3 is a diagram illustrating a portion of steps of a cache-based processing method according to yet another embodiment of the invention. As shown in fig. 3, the method of an embodiment of the present invention includes the steps of:
step S301, judging that target data exists in the first-level cache pool.
Step S302, judging whether the target data is in a first effective duration. If yes, go to step S303; if not, go to step S306.
In this embodiment of the present invention, in addition to the request parameter and its corresponding target data, the first-level cache pool also stores: a first effective duration, the number of requests for the target data within the first effective duration, and a first wear threshold. In specific implementation, the first effective duration and the first wear threshold can be flexibly set as required. For example, the first effective duration may be set to 5 minutes, and the first wear threshold to 10000 requests.
In an optional embodiment, the first level cache pool further comprises: the effective deadline of the target data. In this alternative embodiment, step S302 further includes: and comparing the effective deadline of the target data acquired by the query with the current time. If the effective deadline is later than the current time, judging that the target data is in a first effective duration; otherwise, judging that the target data exceeds the first effective duration. In another optional embodiment, the first level cache pool may further include: status identification of the target data. In this alternative embodiment, step S302 further includes: when the value of the state identifier of the target data is "true", indicating that the target data is in the first effective duration; when the value of the state flag of the target data is "false", it indicates that the target data exceeds the first validity period.
Further, after it is determined that the target data is within the first effective duration and before step S303 is executed, the method of the embodiment of the present invention further includes the following step: updating the number of requests for the target data in the first-level cache pool. Specifically, the update operation may be to add 1 to the number of requests.
Step S303, judging whether the request times of the target data are not more than a first abrasion threshold value. If yes, go to step S304; if not, go to step S305.
Step S304, acquiring the target data from the first-level cache pool.
Step S305, the target data is obtained from the server side, and the target data in the first-level cache pool is updated.
Specifically, when the number of requests of the target data in the first effective duration is greater than a first wear threshold, the client may send a data query request to the server, and then receive the target data returned by the server. Then, the client can return the target data acquired from the server to the caller, and update the cache data of the primary cache pool according to the target data returned by the server.
Step S306, acquiring the target data from the server and deleting the target data from the first-level cache pool.
Specifically, when the target data exceeds the first effective duration, the client may send a data query request to the server and then receive the target data returned by the server. Next, the client may return the target data obtained from the server to the caller, delete the target data corresponding to the data query request from the first-level cache pool, and write the statistical record of the request into the third-level cache pool.
In the embodiment of the present invention, through steps S301 to S306, the cache data in the primary cache pool can be managed from multiple dimensions of the first effective duration and the first wear threshold, and the cache data can be updated and deleted in time.
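The first-level pool flow of steps S301 to S306 can be sketched as follows. The entry layout, function name, and the choice to reset the count after a refresh are assumptions for illustration; the patent only specifies that worn data is updated and expired data is deleted:

```python
import time

def query_level1(pool, key, fetch_from_server, now=None):
    """Sketch of steps S301-S306: an entry is served from the
    first-level cache pool only while it is within its first effective
    duration AND its request count has not passed the first wear
    threshold; otherwise the data is re-fetched from the server."""
    now = time.time() if now is None else now
    entry = pool.get(key)
    if entry is None:
        return None  # not in the first-level pool; caller falls through
    if entry["valid_until"] <= now:
        # S306: expired, so fetch from the server and delete the entry
        del pool[key]
        return fetch_from_server(key)
    entry["requests"] += 1  # update the request count (per the text)
    if entry["requests"] > entry["wear_threshold"]:
        # S305: worn out, so refresh the cached value from the server
        entry["value"] = fetch_from_server(key)
        entry["requests"] = 0  # assumption: restart the count on refresh
    # S304: serve from the first-level cache pool
    return entry["value"]
```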
FIG. 4 is a diagram illustrating a portion of steps of a cache-based processing method according to yet another embodiment of the invention. As shown in fig. 4, the method of the embodiment of the present invention includes the steps of:
and S401, target data exists in the second-level cache pool. After step S401, step S402 is executed.
And step S402, judging whether the target data is in a second effective duration. If yes, go to step S403; if not, go to step S407.
In the embodiment of the present invention, in addition to the request parameter and its corresponding target data, the second-level cache pool also stores: a second effective duration, the number of requests for the target data within the second effective duration, a second wear threshold, a transition statistical period, and a transition threshold. The transition statistical period is shorter than the second effective duration. In specific implementation, the second effective duration, the transition statistical period, the second wear threshold, and the transition threshold can be flexibly set as required. For example, the second effective duration may be set to 2 minutes, the transition statistical period to 30 seconds, the second wear threshold to 1000 requests, and the transition threshold to 900 requests.
In an optional embodiment, the second-level cache pool further includes: the effective deadline of the target data. In this alternative embodiment, step S402 further includes: comparing the effective deadline of the queried target data with the current time. If the effective deadline is later than the current time, it is determined that the target data is within the second effective duration; otherwise, it is determined that the target data exceeds the second effective duration. In another optional embodiment, the second-level cache pool may further include: a state identifier of the target data. In this alternative embodiment, step S402 further includes: when the value of the state identifier of the target data is "true", the target data is within the second effective duration; when the value is "false", the target data exceeds the second effective duration.
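As an illustration only (not part of the claimed method), the two validity-determination embodiments above can be sketched as follows; the field names `deadline` and `valid` are hypothetical:

```python
import time

def within_valid_duration(entry, now=None):
    """Return True if a cache entry is still within its effective duration.

    `entry` is a dict holding either an absolute effective deadline
    (epoch seconds) or a boolean state identifier, matching the two
    optional embodiments described above.
    """
    now = time.time() if now is None else now
    if "deadline" in entry:
        # First optional embodiment: compare the effective deadline
        # with the current time.
        return entry["deadline"] > now
    # Second optional embodiment: consult the state identifier directly.
    return bool(entry.get("valid", False))
```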
Step S403, update the total number of requests for the target data and the number of requests within the transition statistical period. After step S403, step S404 may be executed.
Specifically, the update operation in step S403 may be: incrementing the total number of requests for the target data by 1, and incrementing the number of requests within the transition statistical period by 1.
Step S404, determine whether the number of requests within the transition statistical period is not greater than the transition threshold. If yes, go to step S405; if not, go to step S408.
Step S405, determine whether the total number of requests is not greater than the second wear threshold. If yes, go to step S406; if not, go to step S409.
Step S406, acquire the target data from the secondary cache pool.
Step S407, acquire the target data from the server, and delete the target data from the secondary cache pool.
Specifically, when the target data exceeds the second effective duration, the client may send a data query request to the server, and then receive the target data returned by the server. Next, the client may return the target data obtained from the server to the caller, delete the target data corresponding to the data query request in the second-level cache pool, and write the statistical record of the data query request into the third-level cache pool.
Step S408, writing the target data acquired from the server into the primary cache pool, and deleting the target data from the secondary cache pool.
Specifically, when the number of requests of the target data in the transition statistical period is greater than the transition threshold, the client may send a data query request to the server, and then receive the target data returned by the server. Next, the client may return the target data obtained from the server to the caller, write the target data returned by the server into the primary cache pool, and delete the original target data in the secondary cache pool.
Step S409, acquire the target data from the server and update the target data in the secondary cache pool.
Specifically, when the number of requests of the target data in the second effective duration is greater than the second wear threshold, the client may send a data query request to the server, and then receive the target data returned by the server. And then, the client can return the target data acquired from the server to the caller, and update the cache data of the secondary cache pool according to the target data returned by the server.
In the embodiment of the invention, through the above steps, the cache data in the second-level cache pool can be managed along the dimensions of the second effective duration, the second wear threshold and the transition threshold, the cache data can be tiered, and the cache data can be updated and deleted in time.
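For illustration, steps S402 to S409 can be condensed into a single sketch. All names are hypothetical (`pool` stands for the secondary cache pool as a dict of entries, and `fetch_from_server` stands in for the client-to-server round trip described above):

```python
def query_secondary_pool(key, pool, primary_pool, fetch_from_server, now):
    """Decision flow of steps S402-S409 for one lookup in the secondary pool."""
    entry = pool[key]
    # S402: validity check against the second effective duration.
    if now > entry["deadline"]:
        # S407: refresh from the server and evict the stale entry.
        data = fetch_from_server(key)
        del pool[key]
        return data
    # S403: update both request counters.
    entry["total_requests"] += 1
    entry["period_requests"] += 1
    # S404: transition check -- promote hot keys to the primary pool (S408).
    if entry["period_requests"] > entry["transition_threshold"]:
        data = fetch_from_server(key)
        primary_pool[key] = data
        del pool[key]
        return data
    # S405: wear check -- refresh data that has been served too often (S409).
    if entry["total_requests"] > entry["wear_threshold"]:
        entry["data"] = fetch_from_server(key)
        return entry["data"]
    # S406: serve the cached value directly.
    return entry["data"]
```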
Fig. 5 is a schematic diagram of main blocks of a cache-based processing apparatus according to an embodiment of the present invention. As shown in fig. 5, a cache-based processing apparatus 500 according to an embodiment of the present invention includes: the system comprises an acquisition module 501, a communication module 502 and a cache starting module 503.
The obtaining module 501 is configured to query the local cache according to the data query request to obtain the target data.
The data query request may be in a request format such as an RPC (remote procedure call) request or an HTTP request. In a specific example, the caller may make an RPC request through parameters defined in a function interface provided by the server. After receiving the RPC request from the caller, the obtaining module 501 may query the local cache according to the parameters in the RPC request. If the target data exists in the local cache, the obtaining module 501 obtains the target data directly from the local cache. The target data may be understood as the "result data of a query". For example, if the request parameter is a product identifier, the target data is the detail information of that product; if the request parameter is the name of a merchant, the target data is the transaction information of that merchant.
The local cache may be set in a memory of the client, and may cache data in a Key-Value (Key-Value) form. Further, the local cache may be a multi-level cache, including: a first level cache pool and a second level cache pool. The first-level cache pool and the second-level cache pool are mainly used for caching target data with high request frequency. And the request frequency of the cache data in the first-level cache pool is higher than that of the cache data in the second-level cache pool.
A communication module 502, configured to obtain the target data from a server when the target data does not exist in the local cache.
Specifically, when the target data does not exist in the local cache, the communication module 502 may send a data query request to the server, and then receive result data corresponding to the data query request, that is, the target data, returned by the server.
A cache starting module 503, configured to count the number of times of requests for the target data when the target data does not exist in the local cache; and under the condition that the request times of the target data exceed an activation threshold, the cache starting module is further used for writing the target data acquired from the server into the local cache.
The number of requests for the target data may be understood as "number of calls for request parameter". For example, if the request parameter in the data query request is a product identifier, the client counts the number of calls of the received request parameter "product identifier".
In the embodiment of the invention, the local cache is arranged at the client, the target data is obtained from the local cache through the obtaining module 501, and the target data is obtained from the server through the communication module 502 when the target data does not exist in the local cache, so that a large number of data query requests can be directly processed locally, the number of requests transmitted to the server through a network is obviously reduced, the access pressure of a high-frequency network request to the server is effectively relieved, and the response efficiency of the data query request is improved. In addition, the cache starting module 503 writes the target data acquired from the server into the local cache when the number of requests exceeds the activation threshold, and can automatically trigger the local cache mechanism of the client according to the statistical condition of the number of requests, thereby improving the flexibility of the cache mechanism.
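The cooperation of the three modules can be sketched end to end as follows (an illustrative sketch only; `local_cache`, `counts` and `fetch_from_server` are hypothetical names, and the per-parameter counting is simplified to a plain dictionary):

```python
def handle_query(key, local_cache, counts, fetch_from_server, activation_threshold):
    """End-to-end flow: serve from the local cache when possible; otherwise
    fetch from the server, count the miss, and start caching hot keys."""
    if key in local_cache:                 # obtaining module: local hit
        return local_cache[key]
    data = fetch_from_server(key)          # communication module: remote fetch
    counts[key] = counts.get(key, 0) + 1   # cache starting module: count misses
    if counts[key] > activation_threshold:
        local_cache[key] = data            # activate local caching for this key
    return data
```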
Fig. 6 is a schematic diagram of the composition of a local cache according to an embodiment of the present invention. As shown in fig. 6, the local cache 600 according to the embodiment of the present invention includes: a first level cache pool 601, a second level cache pool 602, and a third level cache pool 603.
The first-level cache pool 601 is mainly used for caching target data with high request frequency, and may also be referred to as a "pressure sample pool". In addition, the first-level cache pool 601 includes, in addition to the request parameter and the target data corresponding to the request parameter: a first validity period, a number of requests for target data within the first validity period, a first wear threshold. In specific implementation, the first effective duration and the first wear threshold can be flexibly set as required. For example, the first effective period may be set to 5 minutes, and the first wear threshold may be set to 10000 times.
The second-level cache pool 602 is mainly used for caching target data with a high request frequency, and may also be referred to as an "active sample pool". In addition to the request parameter and the target data corresponding to the request parameter, the second-level cache pool further includes: the second effective duration, the number of requests for the target data within the second effective duration, a second wear threshold, a transition statistical period, and a transition threshold. The transition statistical period is less than the second effective duration. In specific implementation, the second effective duration, the transition statistical period, the second wear threshold and the transition threshold can be flexibly set as required. For example, the second effective duration may be set to 2 minutes, the transition statistical period to 30 seconds, the second wear threshold to 1000 requests, and the transition threshold to 900 requests.
The third-level cache pool 603 is mainly used for caching the statistical records of data query requests, and may also be referred to as a "sample screening pool". It differs from the first-level and second-level cache pools mainly in that the third-level cache pool 603 does not store the requested target data, whereas the first-level and second-level cache pools do. The statistical record may include the request parameter, a third effective duration (or "statistical period duration"), the number of requests within the third effective duration, and an activation threshold. In specific implementation, the third effective duration and the activation threshold may be flexibly set as required. For example, the third effective duration may be set to 5 minutes, and the activation threshold to 100 requests.
In the embodiment of the present invention, the first-level cache pool 601, the second-level cache pool 602, and the third-level cache pool 603 may cache data in the form of Key-Value pairs. Further, the three pools may adopt an LRU (least recently used) storage mechanism, which ensures that the most recently requested target data or statistical records are retained while the least recently requested ones are evicted. Alternatively, the three pools may adopt a FIFO (first-in, first-out queue) storage mechanism. In addition, in specific implementation, the storage capacities of the first-level cache pool 601, the second-level cache pool 602, and the third-level cache pool 603 may be defined during initialization. In a preferred embodiment, the storage capacities of the three pools follow the proportional relationship 1:4:16. For example, the storage capacity of the first-level cache pool is 128KB, that of the second-level cache pool is 512KB, and that of the third-level cache pool is 2048KB.
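As an illustration of the LRU storage mechanism and the 1:4:16 capacity ratio, the following sketch uses the Python standard library; for simplicity the capacity is counted in entries rather than kilobytes, and all names are illustrative:

```python
from collections import OrderedDict

class LRUPool:
    """A fixed-capacity key-value pool that evicts the least recently used entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)         # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used

# The three pools initialized with the 1:4:16 proportional relationship.
primary, secondary, tertiary = LRUPool(128), LRUPool(512), LRUPool(2048)
```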
Fig. 7 is a schematic diagram of main blocks of a cache-based processing apparatus according to another embodiment of the present invention. In the embodiment of the present invention, the local cache adopts the structure shown in fig. 6. As shown in fig. 7, a cache-based processing apparatus 700 according to an embodiment of the present invention includes: the system comprises an acquisition module 701, a first confirmation module 702, a cache management module 703, a communication module 704, a second confirmation module 705 and a cache starting module 706.
The obtaining module 701 is configured to query the local cache according to the data query request to obtain the target data, and specifically: 1) the obtaining module 701 queries the primary cache pool according to the data query request; when the target data exists in the primary cache pool, and the first confirmation module 702 confirms that the target data is within the first effective duration and that the number of requests for the target data within the first effective duration is not greater than the first wear threshold, the obtaining module 701 obtains the target data from the primary cache pool. 2) When the target data does not exist in the primary cache pool, the obtaining module 701 queries the secondary cache pool according to the data query request; when the target data exists in the secondary cache pool and the second confirmation module 705 confirms that the target data is within the second effective duration and that the number of requests for the target data within the second effective duration is not greater than the second wear threshold, the obtaining module 701 obtains the target data from the secondary cache pool.
The first confirmation module 702 is configured to determine whether the target data in the primary cache pool is within the first valid duration, and to determine whether the number of requests for the target data within the first valid duration is not greater than the first wear threshold. In a preferred embodiment, the first confirmation module 702 may first execute the determination logic of "whether the target data in the first-level cache pool is within the first valid duration", and then, after confirming this, execute the determination logic of "whether the number of requests within the first valid duration is not greater than the first wear threshold".
The cache management module 703 is configured to delete the target data in the primary cache pool and write the statistical record of the data query request in the third cache pool when the first confirmation module 702 determines that the target data in the primary cache pool exceeds the first valid duration. The cache management module 703 is further configured to update the target data in the first-level cache pool when the first determining module 702 determines that the number of times of requests of the target data in the first-level cache pool in the first effective duration is greater than the first wear threshold.
The second confirmation module 705 is configured to determine whether the target data in the secondary cache pool is within the second valid duration, and to determine whether the number of requests for the target data within the second valid duration is not greater than the second wear threshold. In a preferred embodiment, the second confirmation module 705 may first execute the determination logic of "whether the target data in the second-level cache pool is within the second valid duration", and then, after confirming that the target data is within the second valid duration, execute the determination logic of "whether the number of requests within the second valid duration is not greater than the second wear threshold".
The cache management module 703 is further configured to delete the target data in the secondary cache pool and write the statistical record of the data query request in the third cache pool when the second confirmation module 705 determines that the target data in the secondary cache pool exceeds the second valid duration. The cache management module 703 is further configured to update the target data in the second-level cache pool when the second confirmation module 705 determines that the number of times of requests of the target data in the second effective duration is greater than the second wear threshold.
In a preferred embodiment, the second confirming module 705 is further configured to determine whether the number of requests of the target data in the second level cache pool within the transition statistical period is greater than the transition threshold after confirming that the target data in the second level cache pool is in the second valid duration. In this preferred embodiment, the cache management module 703 is further configured to, when the number of requests of the target data in the transition statistical period is greater than the transition threshold, write the target data acquired from the server into the primary cache pool, and delete the target data in the secondary cache pool.
A communication module 704, configured to obtain the target data from a server when the target data does not exist in the local cache. The communication module 704 is further configured to obtain the target data from the server when the first determining module 702 determines that the target data in the primary cache pool exceeds the first effective duration or the number of requests of the target data in the first effective duration is greater than a first wear threshold. The communication module 704 is further configured to obtain the target data from the server when the second confirmation module 705 determines that the target data in the secondary cache pool exceeds the second effective duration or the number of requests of the target data in the second effective duration is greater than the second wear threshold.
A cache starting module 706, configured to count the number of times of requests for the target data when the target data does not exist in the local cache. And, in the case that the number of requests for the target data exceeds the activation threshold, the cache starting module 706 is further configured to write the target data acquired from the server into the local cache.
Illustratively, counting the number of requests for the target data by the cache initiation module 706 includes: the cache starting module 706 queries the third-level cache pool according to the data query request to obtain a corresponding statistical record; under the condition that the corresponding statistical record is within the third effective duration, the cache starting module 706 adds one to the request times in the corresponding statistical record; when the corresponding statistical record exceeds the third validity duration, the cache initiation module 706 sets the number of requests in the corresponding statistical record to one.
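The counting rule of the cache starting module 706 can be sketched as follows (illustrative only; `record` stands for one statistical record in the third-level cache pool, and resetting the deadline when a record expires is an assumption not spelled out in the text):

```python
def count_request(record, now):
    """Update a statistical record in the third-level pool and report
    whether the activation threshold has been crossed."""
    if now <= record["deadline"]:
        # Within the third effective duration: increment the counter by one.
        record["count"] += 1
    else:
        # Past the duration: set the count to one and (assumption)
        # start a fresh statistical period.
        record["count"] = 1
        record["deadline"] = now + record["duration"]
    # Caching is activated once the count exceeds the activation threshold.
    return record["count"] > record["activation_threshold"]
```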
In the embodiment of the invention, the following technical effects can be achieved through the above device: 1) the local cache comprising the first-level cache pool, the second-level cache pool and the third-level cache pool is arranged at the client, and the local cache is queried by the obtaining module according to the data query request, so that a large number of data query requests can be processed directly at the client; this significantly reduces the number of requests transmitted to the server over the network, effectively relieves the access pressure that high-frequency network requests place on the server, and improves the response efficiency of data query requests. 2) With the cache starting module, the local cache mechanism of the client can be automatically triggered in real time according to the statistics of the request count, so that no manual configuration is required and the flexibility of the cache mechanism is improved. 3) With the cache management module, the data in the first-level and second-level cache pools can be managed hierarchically from multiple dimensions, and can be updated and deleted in time. 4) By adopting the LRU storage mechanism in the first-level, second-level and third-level cache pools, the cache data is prevented from occupying a large amount of the client's memory resources.
Fig. 8 illustrates an exemplary system architecture 800 of a cache-based processing method or a cache-based processing apparatus to which embodiments of the invention may be applied.
As shown in fig. 8, the system architecture 800 may include terminal devices 801, 802, 803, a network 804, and a server 805. The network 804 serves to provide a medium for communication links between the terminal devices 801, 802, 803 and the server 805. Network 804 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 801, 802, 803 to interact with a server 805 over a network 804 to receive or send messages (such as data query requests) or the like. Various client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc., may be installed on the terminal devices 801, 802, 803.
The terminal devices 801, 802, 803 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 805 may be a server that provides various services, such as a background management server that supports shopping websites browsed by users using the terminal devices 801, 802, 803. The background management server may analyze and perform other processing on the received data such as the data query request, and feed back a processing result (for example, response data of the data query request) to the terminal device.
It should be noted that the cache-based processing method provided by the embodiment of the present invention is generally executed by the terminal device, and accordingly, the cache-based processing apparatus is generally disposed in the terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 8 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
FIG. 9 illustrates a schematic block diagram of a computer system 900 suitable for use in implementing an electronic device of an embodiment of the invention. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU)901 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the system 900 are also stored. The CPU 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
The following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output section 907 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the internet. The drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 910 as necessary, so that a computer program read out therefrom is mounted into the storage section 908 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909, and/or installed from the removable medium 911. The above-described functions defined in the system of the present invention are executed when the computer program is executed by a Central Processing Unit (CPU) 901.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes an acquisition module, a communication module, and a cache initiation module. The names of these modules do not form a limitation on the module itself in some cases, for example, the obtaining module may also be described as a "module for querying a local cache according to a data query request to obtain target data".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform the following: inquiring a local cache according to the data inquiry request to acquire target data; when the target data does not exist in the local cache, acquiring the target data from a server, counting the request times of the target data, and writing the target data acquired from the server into the local cache when the request times of the target data exceed an activation threshold.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A method of cache-based processing, the method comprising:
inquiring a local cache according to the data inquiry request to acquire target data;
obtaining the target data from a server when the target data does not exist in the local cache, and,
counting the request times of the target data, and writing the target data acquired from a server into the local cache under the condition that the request times of the target data exceed an activation threshold.
2. The method of claim 1, wherein the local caching comprises: a first-level cache pool and a second-level cache pool;
the step of querying the local cache according to the data query request to obtain the target data comprises: inquiring a primary cache pool according to the data inquiry request; when target data exist in a first-level cache pool, acquiring the target data from the first-level cache pool; when target data do not exist in the first-level cache pool, querying the second-level cache pool according to the data query request; and when the target data exists in the second-level cache pool, acquiring the target data from the second-level cache pool.
3. The method of claim 2, further comprising:
before the step of obtaining the target data from the first-level cache pool is executed, confirming that the target data in the first-level cache pool is within a first effective duration, and that the number of requests for the target data within the first effective duration is not greater than a first wear threshold.
4. The method of claim 3, further comprising:
when the target data in the first-level cache pool exceeds the first effective duration or the number of requests for the target data within the first effective duration is greater than the first wear threshold, acquiring the target data from the server; and,
deleting the target data in the first-level cache pool when the target data in the first-level cache pool exceeds the first effective duration; and when the number of requests for the target data in the first-level cache pool within the first effective duration is greater than the first wear threshold, updating the target data in the first-level cache pool.
5. The method of claim 2, further comprising:
before the step of acquiring the target data from the second-level cache pool is executed, confirming that the target data in the second-level cache pool is within a second effective duration and that the number of requests for the target data within the second effective duration is not greater than a second wear threshold.
6. The method of claim 5, further comprising:
when the target data in the second-level cache pool exceeds the second effective duration, or the number of requests for the target data within the second effective duration is greater than the second wear threshold, acquiring the target data from the server; and
deleting the target data in the second-level cache pool when the target data in the second-level cache pool exceeds the second effective duration; and updating the target data in the second-level cache pool when the number of requests for the target data in the second-level cache pool within the second effective duration is greater than the second wear threshold.
7. The method of claim 5, further comprising:
after the step of confirming that the target data in the second-level cache pool is within the second effective duration is executed, judging whether the number of requests for the target data within a jump statistics period is greater than a jump threshold; if so, writing the target data acquired from the server into the first-level cache pool, and deleting the target data in the second-level cache pool; wherein the jump statistics period is less than the second effective duration.
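The promotion rule of claim 7 — a burst of requests within a short jump statistics period moves an entry from the second-level pool up to the first-level pool — can be sketched as a single decision function. Names are illustrative, not from the claim text:

```python
# Sketch of claim 7's promotion: if requests within the jump statistics
# period exceed the jump threshold, write the entry into the first-level
# pool and delete it from the second-level pool. Names are illustrative.

def maybe_promote(key, value, l1, l2, requests_in_period, jump_threshold):
    if requests_in_period > jump_threshold:
        l1[key] = value     # hot entry jumps up to the first-level pool
        l2.pop(key, None)   # and is removed from the second-level pool
        return True
    return False
```

Because the claim requires the jump statistics period to be shorter than the second effective duration, promotion reacts to short bursts of popularity rather than to lifetime totals.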
8. The method of any of claims 2 to 7, wherein the local cache further comprises a third-level cache pool;
the step of counting the number of requests for the target data comprises: querying the third-level cache pool according to the data query request to obtain a corresponding statistical record; incrementing the number of requests in the corresponding statistical record by one when the corresponding statistical record is within a third effective duration; and resetting the number of requests in the corresponding statistical record to one when the corresponding statistical record exceeds the third effective duration.
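Claim 8's counting step amounts to a per-key sliding window: inside the third effective duration the count increments, and once the window has lapsed the count restarts at one. A sketch under that reading, with illustrative names (`count_request`, `third_ttl`) and an injectable clock for testability:

```python
import time

# Sketch of claim 8's request counting in the third-level pool: each key maps
# to a statistical record {started, count}; within the third effective
# duration the count is incremented, after it the count is reset to one.

def count_request(records, key, third_ttl, now=None):
    now = time.monotonic() if now is None else now
    rec = records.get(key)
    if rec is None or now - rec["started"] > third_ttl:
        records[key] = {"started": now, "count": 1}  # new or expired window
    else:
        rec["count"] += 1                            # still inside the window
    return records[key]["count"]
```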
9. The method of claim 8, wherein the first-level cache pool, the second-level cache pool and/or the third-level cache pool employ a least-recently-used (LRU) storage mechanism.
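The LRU storage mechanism claim 9 recites for the pools has a well-known minimal form built on Python's `collections.OrderedDict`; this sketch is a generic LRU, not the patent's specific implementation, and the capacity value is illustrative:

```python
from collections import OrderedDict

# Minimal LRU pool of the kind claim 9 recites: reads move an entry to the
# most-recently-used end, and inserting beyond capacity evicts the
# least-recently-used entry.

class LRUPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```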
10. A cache-based processing apparatus, the apparatus comprising:
an acquisition module, configured to query the local cache according to a data query request to obtain target data;
a communication module, configured to acquire the target data from a server when the target data does not exist in the local cache; and
a cache activation module, configured to count the number of requests for the target data when the target data does not exist in the local cache, and further configured to write the target data acquired from the server into the local cache when the number of requests for the target data exceeds an activation threshold.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
12. A computer-readable medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 9.
CN201810600524.2A 2018-06-12 2018-06-12 Cache-based processing method and device Pending CN110598138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810600524.2A CN110598138A (en) 2018-06-12 2018-06-12 Cache-based processing method and device

Publications (1)

Publication Number Publication Date
CN110598138A true CN110598138A (en) 2019-12-20

Family

ID=68848959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810600524.2A Pending CN110598138A (en) 2018-06-12 2018-06-12 Cache-based processing method and device

Country Status (1)

Country Link
CN (1) CN110598138A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521252A (en) * 2011-11-17 2012-06-27 四川长虹电器股份有限公司 Access method of remote data
CN104217019A (en) * 2014-09-25 2014-12-17 中国人民解放军信息工程大学 Content inquiry method and device based on multiple stages of cache modules
US20150149721A1 (en) * 2013-11-25 2015-05-28 Apple Inc. Selective victimization in a multi-level cache hierarchy
CN106446097A (en) * 2016-09-13 2017-02-22 郑州云海信息技术有限公司 File reading method and system
CN106815287A (en) * 2016-12-06 2017-06-09 ***股份有限公司 A kind of buffer memory management method and device
CN107122410A (en) * 2017-03-29 2017-09-01 武汉斗鱼网络科技有限公司 A kind of buffering updating method and device
CN107301215A (en) * 2017-06-09 2017-10-27 北京奇艺世纪科技有限公司 A kind of search result caching method and device, searching method and device
CN107623702A (en) * 2016-07-13 2018-01-23 阿里巴巴集团控股有限公司 A kind of data cache method, apparatus and system

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111245822A (en) * 2020-01-08 2020-06-05 北京松果电子有限公司 Remote procedure call processing method and device and computer storage medium
CN111414383A (en) * 2020-02-21 2020-07-14 车智互联(北京)科技有限公司 Data request method, data processing system and computing device
CN111414383B (en) * 2020-02-21 2024-03-15 车智互联(北京)科技有限公司 Data request method, data processing system and computing device
CN111522836A (en) * 2020-04-22 2020-08-11 杭州海康威视***技术有限公司 Data query method and device, electronic equipment and storage medium
CN111522836B (en) * 2020-04-22 2023-10-10 杭州海康威视***技术有限公司 Data query method and device, electronic equipment and storage medium
CN111782391A (en) * 2020-06-29 2020-10-16 北京达佳互联信息技术有限公司 Resource allocation method, device, electronic equipment and storage medium
CN112131260A (en) * 2020-09-30 2020-12-25 中国民航信息网络股份有限公司 Data query method and device
CN112398849B (en) * 2020-11-12 2022-12-20 北京天融信网络安全技术有限公司 Method and device for updating embedded threat information data set
CN112398852B (en) * 2020-11-12 2022-11-15 北京天融信网络安全技术有限公司 Message detection method, device, storage medium and electronic equipment
CN112398852A (en) * 2020-11-12 2021-02-23 北京天融信网络安全技术有限公司 Message detection method, device, storage medium and electronic equipment
CN112398849A (en) * 2020-11-12 2021-02-23 北京天融信网络安全技术有限公司 Method and device for updating embedded threat information data set
CN112506973B (en) * 2020-12-14 2023-12-15 ***股份有限公司 Method and device for managing storage data
CN112506973A (en) * 2020-12-14 2021-03-16 ***股份有限公司 Method and device for managing stored data
CN113760982A (en) * 2021-01-18 2021-12-07 西安京迅递供应链科技有限公司 Data processing method and device
CN113760982B (en) * 2021-01-18 2024-05-17 西安京迅递供应链科技有限公司 Data processing method and device
CN112685454A (en) * 2021-03-10 2021-04-20 江苏金恒信息科技股份有限公司 Industrial data hierarchical storage system and method and industrial data hierarchical query method
CN115174471A (en) * 2021-04-07 2022-10-11 中国科学院声学研究所 Cache management method for storage unit of ICN (integrated circuit network) router
CN115174471B (en) * 2021-04-07 2024-03-26 中国科学院声学研究所 Cache management method for storage unit of ICN router
CN114143376A (en) * 2021-11-18 2022-03-04 青岛聚看云科技有限公司 Server for loading cache, display equipment and resource playing method
CN113900830B (en) * 2021-12-10 2022-04-01 北京达佳互联信息技术有限公司 Resource processing method and device, electronic equipment and storage medium
CN113900830A (en) * 2021-12-10 2022-01-07 北京达佳互联信息技术有限公司 Resource processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110598138A (en) Cache-based processing method and device
CN109684358B (en) Data query method and device
CN109947668B (en) Method and device for storing data
CN108629029B (en) Data processing method and device applied to data warehouse
CN109542361B (en) Distributed storage system file reading method, system and related device
CN113010818B (en) Access current limiting method, device, electronic equipment and storage medium
CN108804447B (en) Method and system for responding to data request by using cache
CN107547548B (en) Data processing method and system
CN109918191B (en) Method and device for preventing frequency of service request
CN112118352B (en) Method and device for processing notification trigger message, electronic equipment and computer readable medium
CN109842621A (en) A kind of method and terminal reducing token storage quantity
CN109213824B (en) Data capture system, method and device
CN114091704A (en) Alarm suppression method and device
CN113760982B (en) Data processing method and device
CN112445988A (en) Data loading method and device
CN112631504A (en) Method and device for realizing local cache by using off-heap memory
CN113742131B (en) Method, electronic device and computer program product for storage management
CN113364887A (en) File downloading method based on FTP, proxy server and system
CN112884181A (en) Quota information processing method and device
CN114374657A (en) Data processing method and device
CN109087097B (en) Method and device for updating same identifier of chain code
CN109213815B (en) Method, device, server terminal and readable medium for controlling execution times
CN110019671B (en) Method and system for processing real-time message
CN112699116A (en) Data processing method and system
CN113722193A (en) Method and device for detecting page abnormity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination