WO2020181820A1 - 数据缓存方法、装置、计算机设备和存储介质 - Google Patents

数据缓存方法、装置、计算机设备和存储介质 Download PDF

Info

Publication number
WO2020181820A1
Authority
WO
WIPO (PCT)
Prior art keywords
request
time
cache
stored
preset
Prior art date
Application number
PCT/CN2019/118426
Other languages
English (en)
French (fr)
Inventor
李桃
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020181820A1 publication Critical patent/WO2020181820A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the technical field of data processing, in particular to a data caching method, device, computer equipment and storage medium.
  • The cache is usually the backup memory of a device system. Because memory resources are limited, a fixed memory size is usually set, and when the cache grows beyond that fixed value it is cleaned up according to algorithms such as LRU (Least Recently Used), LFU (Least Frequently Used) or FIFO (First In First Out).
  • LRU: Least Recently Used
  • LFU: Least Frequently Used
  • FIFO: First In First Out
  • For example, in a rule engine, rule engine instances are cached to process the data requested by devices, so that a new object does not have to be created for every request; when the cache reaches the set fixed size, it is cleaned up according to the caching algorithm. However, in Internet of Things scenarios the frequency at which devices report or request data is usually fixed, and cleaning the cache with LRU, LFU or FIFO in such scenarios is not well suited: the cache hit rate is low and cache resources are not fully used, so a caching method aimed at scenarios where the frequency of cache calls is basically fixed is needed.
  • the main purpose of this application is to provide a data caching method, device, computer equipment and storage medium, aiming to solve the technical problem of insufficient utilization of cache resources in a scenario where the request frequency is basically fixed.
  • To this end, this application proposes a data caching method for caching data with a fixed request frequency, including: receiving the current request sent by a device; judging whether the requested object of the current request is an object in the cache; if the requested object is an object in the cache, calling the requested object from the cache, otherwise obtaining the requested object from a preset database and judging whether the requested object needs to be stored in the cache;
  • if it is determined that the requested object needs to be stored in the cache, the calling frequency and calling time of each object in the cache are obtained, where the calling time is the time at which the object was last called, relative to the current moment;
  • the times at which each object is called within a first preset time are then calculated from the calling frequency and the calling time, the first preset time being a time period of specified length starting from the current moment; the called times of the objects are compared to obtain the latest target time, and the object corresponding to the target time is recorded as the target object;
  • the target object is deleted, and the requested object is stored in the cache.
  • This application also provides a data caching device for caching data with a fixed request frequency, including:
  • the receiving request unit is used to receive the current request sent by the device
  • the judging object unit is used to judge whether the requested object of the current request is an object in the cache
  • the calling object unit is used to call the requested object from the cache if the requested object is an object in the cache; otherwise, to obtain the requested object from a preset database and determine whether the requested object needs to be stored in the cache;
  • the obtaining time unit is used to obtain the calling frequency and calling time of each object in the cache if it is determined that the requested object needs to be stored in the cache, where the calling time is the time at which the object was last called, relative to the current moment;
  • the calculation time unit is configured to calculate, from the calling frequency and the calling time, the time of each call of each object within a first preset time, where the first preset time is a time period of specified length starting from the current moment;
  • the comparison time unit is used to compare the called time corresponding to each of the objects to obtain the latest target time, and record the object corresponding to the target time as the target object;
  • the object deletion unit is used to delete the target object and store the requested object in the cache.
  • the present application also provides a computer device, including a memory and a processor, the memory stores computer-readable instructions, and the processor implements the steps of the foregoing method when the computer-readable instructions are executed.
  • the present application also provides a computer-readable storage medium on which computer-readable instructions are stored, and when the computer-readable instructions are executed by a processor, the steps of the foregoing method are implemented.
  • The method first judges whether the requested object of a request is an object in the cache; if it is not, it further judges whether the requested object needs to be stored in the cache, and if so, it uses a preset rule to find the object whose call time within the first preset time is the latest, deletes that object, and stores the requested object in the cache. This improves the cache hit rate in scenarios where the request frequency is basically fixed, makes full use of cache resources, and consumes less time than the existing LRU, LFU and FIFO caching algorithms.
  • FIG. 1 is a schematic diagram of the steps of a data caching method in an embodiment of this application
  • FIG. 2 is a schematic block diagram of the structure of a data caching device in an embodiment of the application
  • FIG. 3 is a schematic block diagram of the structure of a computer device according to an embodiment of the application.
  • the data caching method in this embodiment includes:
  • Step S1 Receive the current request sent by the device
  • Step S2 Determine whether the requested object of the current request is an object in the cache
  • Step S3 If the requested object is an object in the cache, call the requested object from the cache; otherwise, obtain the requested object from a preset database, and determine whether the requested object needs to be stored in the cache;
  • Step S4 If it is determined that the requested object needs to be stored in the cache, obtain the calling frequency and calling time of each object in the cache, where the calling time is the time at which the object was last called, relative to the current moment;
  • Step S5 Calculate, from the calling frequency and the calling time, the time of each call of each object within a first preset time, where the first preset time is a time period of specified length starting from the current moment;
  • Step S6 Compare the called time corresponding to each of the objects to obtain the latest target time, and record the object corresponding to the target time as the target object;
  • Step S7 Delete the target object, and store the requested object in the cache.
  • the foregoing data caching method is used to cache data with a fixed request frequency, and is mainly applied to a scenario with a fixed request frequency, such as an Internet of Things scenario.
  • the above-mentioned current request includes at least a request header and a request body.
  • the request header includes information such as the request method, version, and protocol used.
  • the request body includes server parameters (such as device ID), request content and other information, and the request object is obtained according to the request content.
  • the request object is the data or tool used to process the request. For example, in a scenario of processing device status, the device needs to report status data, which includes temperature, humidity, battery status, etc.
  • the server receives the request and initializes a rule engine.
  • the rule engine is responsible for analyzing and processing the above data.
  • the rule engine may be an object in the cache.
  • In this embodiment, for ease of description, the request that the system currently receives from the device is recorded as the current request.
  • In step S2, it is determined whether the requested object of the current request is an object in the cache. After the current request is received it has to be responded to; if the requested object (the data used for the response) is already stored in the cache, there is no need to search the preset database and retrieve it from there, and it can be called directly from the cache, which improves efficiency. There are, however, cases where the requested object is not in the cache: for example, the requested object is a, but the cache holds the three objects b, c and d, so the cache cannot be used. Therefore, after the request is obtained, it is judged whether the corresponding requested object is an object in the cache, for example by searching the cache for the requested object; if it is found, it can be determined that the requested object is in the cache and it is called directly.
  • If the requested object is not found in the cache, it can be determined that the cache does not hold it, and the requested object has to be obtained from the system database, a preset database that stores the requested object (response data) corresponding to each request. To improve the efficiency of the system afterwards, it can be further determined whether the requested object needs to be stored in the cache, for example according to a preset rule or algorithm that takes the lower overall cost as the benchmark. It should be noted that the cache discussed above is a cache whose capacity is already full.
  • The fixed frequency mentioned above means that the frequency of the requests is fixed, so the frequency at which the cache is called is also fixed; for example, request a arrives once every hour and request b arrives once every 15 minutes.
  • When it is determined that the requested object needs to be stored in the cache, it means that during subsequent system operation, when requests are responded to, calling the requested object from the cache is more efficient and costs less. Since the capacity of the cache is fixed, if the requested object is to be stored in the cache, one of the objects already in the cache has to be deleted. This is done by deleting a target object, namely the object whose next call time within the first preset time is the latest among all objects in the cache. After the target object is deleted, the requested object is stored in the cache, ready for the next call.
  • As described in steps S4-S7 above, the request frequency is the frequency at which the requested object is called. An Internet of Things scenario generally has a basically fixed calling frequency, so the calling frequency of each object in the cache, together with the time of its last call, can be obtained directly from the system; the last call time here is the time at which the object was previously called, relative to the current moment.
  • From the calling frequency and the last call time of each object, the times of its calls within the first preset time are calculated, where the first preset time is a time period of specified length starting from the current moment. Finally the calculated times are compared to obtain the latest target time; the object corresponding to the target time is the target object. For this reason the caching method above can be called the Lastly Arrived (LA) caching algorithm; the LA algorithm is an algorithm defined by this application.
  • In this embodiment, the following formula is used to calculate the times at which each object is called within the first preset time: T_i(n) = t_i + n / f_i, n = 1, 2, 3, …, where i is any object in the cache, T_i(n) is the time of the nth subsequent call of object i, f_i is the calling frequency of object i, and t_i is the time of the previous call of object i.
  • For example, object A in the cache has a calling frequency of 0.05 calls per second and was last called at 12:10:15. By the above formula, its next call time is 12:10:35 and the call after that is at 12:10:55; in this way all calling times of object A within the first preset time are calculated. By analogy, all calling times of every object in the cache within the first preset time are calculated, these calling times are compared, and the latest target time among them is obtained; the object corresponding to that target time is the object that should be deleted, and for ease of description it is recorded as the target object.
  • In the caching method provided by this application, in a scenario where the calling frequency is basically fixed, whether the requested object needs to be stored in the cache is further judged on the basis of the lower cost, and when it does need to be stored, the replaced target object is the one whose next call time within the first preset time is the latest among all cached objects, so the cache hit rate is higher, the cost is lower, and the efficiency is higher, as illustrated by the sketch below.
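  • As an illustration only, the following Python sketch shows how the eviction choice described above might be computed. The names (CachedObject, next_call_time, la_evict) and the per-second units are assumptions for the sketch, not part of the application; it uses the next predicted call time of each object, which is the quantity the target object is chosen by.

```python
from dataclasses import dataclass

@dataclass
class CachedObject:
    name: str
    call_freq: float   # calls per second (assumed unit)
    last_call: float   # time of the previous call, in seconds

def next_call_time(obj: CachedObject, now: float) -> float:
    """Smallest predicted call time strictly after `now`, following T_i(n) = t_i + n / f_i."""
    n = 1
    while obj.last_call + n / obj.call_freq <= now:
        n += 1
    return obj.last_call + n / obj.call_freq

def la_evict(cache: list[CachedObject], now: float) -> CachedObject:
    """LA target object: the cached object whose next predicted call is the latest."""
    return max(cache, key=lambda o: next_call_time(o, now))

# Worked example from the text: object A, 0.05 calls/s, last called at 12:10:15 (expressed in seconds of day).
a = CachedObject("A", 0.05, 12 * 3600 + 10 * 60 + 15)
print(next_call_time(a, now=a.last_call))       # next call: 12:10:35 (43835.0 s)
print(next_call_time(a, now=a.last_call + 20))  # the call after that: 12:10:55 (43855.0 s)
```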
  • The following explains, in terms of cache hit rate and cost, why it is the above target object that is deleted from the cache. First define the cost c as the extra time overhead of one cache miss, that is, for a given request, the difference between the response time when the requested object has to be fetched from the database and the response time when it is already in the cache and can be called directly. The total cost S is the sum of all such costs within a certain period of time.
  • Within a time t, if the number of cache hits is h and the total number of cache calls is m, the hit rate can be expressed as h / m, and the total cost can be expressed as S = (m − h) × c.
  • Suppose the cached objects are 1, 2, 3, 4…k, their calling frequencies are f1, f2, f3,…fk, and their last call times are t1, t2, t3…tk respectively. At time t, the times at which object i is subsequently called are T_i(n) = t_i + n / f_i, n = 1, 2, 3, …, so the latest time within this period can be expressed as T_LA = max{t_i + n / f_i}, i ∈ (1, 2, 3…k); the object i corresponding to T_LA is the object that needs to be cleared out of the cache.
  • The existing caching algorithms include the LRU, LFU and FIFO algorithms, for which the object to be cleared out of the cache can be expressed as T_LRU = max{t − t_i}, i ∈ (1, 2, 3…k), the object with the smallest calling frequency f_i, and T_FIFO = max{t_i}, i ∈ (1, 2, 3…k), respectively.
  • Comparing these algorithms with the LA algorithm on the example where the cache holds objects a, b, c, d with calling frequencies fa = 1/5, fb = 1/4, fc = 1/3, fd = 1/2, the cache can store only three objects, the cost c is 1, the initial cached objects are a, b, c and object d is ready to enter the cache at t = 2, the hit count h and total cost S at each time point show that the LA algorithm provided by this application has the highest number of cache hits at each time point and incurs the least total cost. A rough simulation of such a comparison is sketched below.
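  • The following Python sketch shows how such a per-policy comparison could be run. It is a rough simulation under assumed conventions (requests arrive exactly on their period, a hit costs 0 and a miss costs c = 1, ties broken arbitrarily); it is not the application's own test program and does not reproduce the table in the specification.

```python
def simulate(policy: str, horizon: float = 20.0):
    """Count cache hits and total miss cost for one eviction policy on the example workload."""
    freq = {"a": 1/5, "b": 1/4, "c": 1/3, "d": 1/2}   # calls per time unit, from the example
    capacity, miss_cost = 3, 1
    cache = {"a", "b", "c"}                            # initial cache from the example
    last = {o: 0.0 for o in freq}                      # last call time of each object
    inserted = {"a": 0, "b": 1, "c": 2}                # insertion order, used by FIFO
    tick, hits, cost = 3, 0, 0

    def next_call(o: str, now: float) -> float:
        n = 1
        while last[o] + n / freq[o] <= now:
            n += 1
        return last[o] + n / freq[o]

    arrivals = sorted((n / f, o) for o, f in freq.items()
                      for n in range(1, int(horizon * f) + 1))
    for t, obj in arrivals:
        if obj in cache:
            hits += 1
        else:
            cost += miss_cost
            if len(cache) >= capacity:
                if policy == "LRU":
                    victim = min(cache, key=lambda o: last[o])
                elif policy == "LFU":
                    victim = min(cache, key=lambda o: freq[o])
                elif policy == "FIFO":
                    victim = min(cache, key=lambda o: inserted[o])
                else:  # "LA": evict the object whose next predicted call is the latest
                    victim = max(cache, key=lambda o: next_call(o, t))
                cache.remove(victim)
            cache.add(obj)
            inserted[obj] = tick
            tick += 1
        last[obj] = t
    return hits, cost

for policy in ("LRU", "LFU", "FIFO", "LA"):
    print(policy, simulate(policy))
```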
  • step S3 includes:
  • Step S31 According to the preset request frequency of all the requests, the order of the requests sorted according to the request time within the second preset time is calculated;
  • Step S32 Calculate the time required for traversing each request in the arrangement sequence when the request object is not stored in the cache, and record it as the first time-consuming;
  • Step S33 Calculate the time required to traverse each request in the sort order when the requested object is stored in the cache, and record it as the second time-consuming;
  • Step S34 Compare the first time-consuming with the second time-consuming;
  • Step S35 If the first time-consuming is longer than the second time-consuming, it is determined that the requested object needs to be stored in the cache;
  • Step S36 If the first time-consuming is shorter than the second time-consuming, it is determined that there is no need to store the requested object in the cache.
  • In this embodiment, judging whether the requested object needs to be stored in the cache is implemented through the above steps S31-S36. As described in step S31, since the request frequencies of all the preset requests are fixed, the request frequency of each request can be obtained, and the order in which the requests are sorted by request time within the second preset time is then calculated from these request frequencies.
  • step S31 includes:
  • Step S310 Calculate all request moments of each request within the second preset time according to the preset request frequency of all requests;
  • Step S310 Sort the requests corresponding to each request moment in chronological order to obtain the arrangement order
  • Specifically, all request times of request j are calculated by the following formula: H_j(n) = t_0 + n / f_j, n = 1, 2, 3, …, where j is any one of the preset requests, H_j(n) is the request time of the nth request of request j, f_j is the request frequency of request j, and t_0 is the initial moment within the second preset time.
  • In this embodiment, since the request frequencies of all requests within the second preset time are known, the request times of all requests in this period can be calculated from the request frequencies, and the requests corresponding to each request time are then sorted in chronological order to obtain the above arrangement order. For example, suppose the initial time is t_0, the frequency of request a is f_a, the frequency of request b is f_b and the frequency of request c is f_c; then the arrival time of the next request a is t_0 + 1/f_a, the arrival time of the next request b is t_0 + 1/f_b, and the arrival time of the next request c is t_0 + 1/f_c.
  • By analogy, the arrival times of all requests a can be expressed as H_a(n) = t_0 + n/f_a (n being a natural number), the arrival times of all requests b as H_b(n) = t_0 + n/f_b, and the arrival times of all requests c as H_c(n) = t_0 + n/f_c. The arrangement order of the corresponding requests is then obtained by sorting these times from smallest to largest; for instance, sorting all the times in H_a, H_b, H_c from smallest to largest and recording which requests arrive at each time gives a sequence Q such as Q = {(2,a),(3,b),(5,c),(6,ab),(8,a),(9,b)}, where (2,a) means that at time 2 the arriving request is a, (3,b) means that at time 3 the arriving request is b, and (6,ab) means that at time 6 the arriving requests are a and b. A small sketch of this construction follows.
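  • A minimal Python sketch of building that ordered sequence, assuming each request j fires exactly at H_j(n) = t_0 + n / f_j (expressed here through the period 1 / f_j); the periods in the example call are hypothetical and merely produce a sequence of the same shape as the illustrative Q above.

```python
from collections import defaultdict

def request_sequence(periods: dict[str, float], t0: float, horizon: float):
    """Sequence Q of (time, [requests arriving then]) pairs; H_j(n) = t0 + n / f_j, with period_j = 1 / f_j."""
    by_time: dict[float, set[str]] = defaultdict(set)
    for name, period in periods.items():
        n = 1
        while t0 + n * period <= t0 + horizon:
            by_time[t0 + n * period].add(name)
            n += 1
    return [(t, sorted(names)) for t, names in sorted(by_time.items())]

# Hypothetical periods (1/f) of 2, 3 and 5 time units for requests a, b and c.
print(request_sequence({"a": 2, "b": 3, "c": 5}, t0=0, horizon=9))
# [(2, ['a']), (3, ['b']), (4, ['a']), (5, ['c']), (6, ['a', 'b']), (8, ['a']), (9, ['b'])]
```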
  • As described in steps S32-S33 above, the corresponding requests are traversed in the above arrangement order both without the requested object being stored in the cache and with it stored, and the resulting time consumptions are recorded as the first time-consuming and the second time-consuming respectively, so that whether the requested object needs to be stored in the cache can be determined from the first time-consuming and the second time-consuming.
  • As in the example above, suppose that within the second preset time, say 9 minutes, the order of the requests is a-b-c-ab-a-b, the current requested object is a, and the cache can hold only b and c. If object a is not stored in the cache, the sum of the time the system takes to traverse each request in the above order is the first time-consuming. It is worth noting that obtaining an object from the database takes longer than obtaining it from the cache, and the time taken to respond to each request, whether fetched from the database or from the cache, can be obtained through code statistics, so the first time-consuming can be obtained by statistics.
  • Similarly, when object a is stored in the cache, object b, the last object in the above order, is deleted from the cache; the requested object is then a, the cache holds a and c, the order of the requests within the 9 minutes is still a-b-c-ab-a-b, and the sum of the time the system takes to traverse each request in this order is the second time-consuming.
  • As described in steps S34-S36, after the first time-consuming and the second time-consuming have been obtained, the two are compared. When the first time-consuming is longer than the second time-consuming, it means that if the requested object is not stored in the cache the system takes more time, is slower and incurs a higher cost, so it is determined that the requested object needs to be stored in the cache.
  • Conversely, when the first time-consuming is shorter than the second time-consuming, it means that if the requested object is not stored in the cache the system takes less time and the cost is lower, so it is determined that there is no need to store the requested object in the cache. A small sketch of this comparison follows.
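  • A minimal Python sketch of the first/second time-consuming comparison, assuming a fixed per-object service time for a cache hit and for a database fetch; the victim removed here is simply the object chosen for replacement (for example the LA target), and the concrete times are placeholders.

```python
def traversal_time(order: list[set[str]], cache: set[str], t_cache: float, t_db: float) -> float:
    """Total time to serve the requests in `order`: each object costs t_cache on a hit, t_db on a miss."""
    return sum(t_cache if obj in cache else t_db
               for requests in order for obj in requests)

# Example from the text: order a-b-c-ab-a-b within 9 minutes, current requested object a, cache holding b and c.
order = [{"a"}, {"b"}, {"c"}, {"a", "b"}, {"a"}, {"b"}]
first = traversal_time(order, {"b", "c"}, t_cache=0.001, t_db=0.05)    # a not stored
second = traversal_time(order, {"a", "c"}, t_cache=0.001, t_db=0.05)   # a stored, b evicted
print(first, second, "store a" if first > second else "do not store a")
# With these illustrative numbers a and b are each missed three times, so the two traversals happen to tie;
# with measured per-request times the comparison would normally favour one side.
```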
  • In another embodiment, the time consumption of the requested object at each request moment, both when it is stored in the cache and when it is not, can first be calculated; following the principle of choosing the shorter time consumption, a preferred strategy of whether to store the requested object in the cache at each request moment is obtained, and when responding to requests this strategy is followed to decide whether the requested object of each request needs to be stored in the cache. The detailed process is as follows.
  • Step 1 The above arrangement order of the requests can be recorded as Q, i.e. the sequence Q = {(t1,u1),(t2,u2),(t3,u3),…,(tn,un)}, where t denotes a moment and u denotes the set of requests arriving at that moment; in the example above, at t1 = 2, u1 = {a}, and at t4 = 6, u4 = {ab}. A binary tree is built (when a request arrives, the requested object is divided into two cases, one where it is stored in the cache and one where it is not), whose node S(i) consists of five parts: the parent node index, the left child node index, the right child node index, the total time consumption c(i) at moment t(i), and the current cache set m(i). The left child node Sl(i+1) of S(i) represents the case where the object set A ∈ u(i) with A ⊄ m(i) does not use the cache, and the right child node Sr(i+1) represents the case where the set A does use the cache. In general, the time consumed by a cache hit differs greatly from that of a miss, so the time consumption of a cache hit is set to 0 and the time consumption of a cache miss is set to 1.
  • Step 2 Let the root node of the binary tree be S(0), representing the initial state of the cache; at this point c(0) = 0, m(0) is the initial set of cached objects, and the parent node index and child node indexes are all null. Take the elements of the sequence Q out one by one. First, at moment t1, judge whether u1 ⊆ m(0) holds. If it holds, then in node Sr(1), c(1) = 0 and m(1) = m(0), the right child node index of S(0) points to Sr(1), and the parent node index of Sr(1) points to S(0). If it does not hold, i.e. u1 ⊄ m(0), there are two cases, the set u1 using the cache and not using the cache. When the cache is used, the right child node index of S(0) points to Sr(1), the parent node index of Sr(1) points to S(0), c(1) is equal to c(0) plus the number k of elements in the set u1\m(0), and m(1) = p ∪ (u1\m(0)), where the set p is the set of cache instances remaining after k objects have been eliminated with the above LA algorithm. When the cache is not used, the left child node index of S(0) points to Sl(1), the parent node index of Sl(1) points to S(0), c(1) is again equal to c(0) plus the number k of elements in the set u1\m(0), and m(1) = m(0).
  • Step 3 Repeat the above steps until all the elements in Q have been taken out, finally obtaining a binary tree. Within the second preset time, find the node with the smallest c(n) among all leaf nodes of the binary tree in the corresponding time period, and then search upwards according to its parent node index until the root node S(0) is reached; this path is the path with the shortest time consumption. Then, according to the rule that a left child node does not use the cache and a right child node does, determine at which moments on this path the requested object needs to be stored in the cache, thereby obtaining the optimal strategy of whether to store the requested object in the cache at each request moment within this time period. A compact sketch of this search appears below.
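  • A compact Python sketch of that search, under the simplified costs above (hit 0, miss 1). Instead of materializing the tree with explicit parent and child indices, it enumerates the same cache / no-cache branches recursively and keeps the cheapest sequence of decisions; the LA eviction step reuses the next-call idea from the earlier sketch, with the next-call times passed in as a plain dictionary. All names are illustrative.

```python
def la_victims(cache: set, k: int, next_call: dict) -> set:
    """Pick k objects to evict with the LA rule: those whose next predicted call is the latest."""
    return set(sorted(cache, key=lambda o: next_call[o], reverse=True)[:k])

def best_strategy(Q, initial_cache: set, capacity: int, next_call: dict):
    """Minimum total miss cost over the sequence Q and the per-moment cache/skip decisions achieving it."""
    best_cost, best_path = float("inf"), []

    def walk(i: int, cache: set, cost: int, path: list):
        nonlocal best_cost, best_path
        if i == len(Q):
            if cost < best_cost:
                best_cost, best_path = cost, path
            return
        _, u = Q[i]
        missing = u - cache
        if not missing:                       # every requested object already cached: single branch, no cost
            walk(i + 1, cache, cost, path + ["hit"])
            return
        k = len(missing)
        overflow = max(0, len(cache) + k - capacity)
        cached = (cache - la_victims(cache, overflow, next_call)) | missing
        walk(i + 1, cached, cost + k, path + ["cache"])   # right branch: store the missing objects
        walk(i + 1, cache, cost + k, path + ["skip"])     # left branch: leave the cache unchanged

    walk(0, set(initial_cache), 0, [])
    return best_cost, best_path

# Illustrative call on the Q from the text; the next-call times are made up for the sketch.
Q = [(2, {"a"}), (3, {"b"}), (5, {"c"}), (6, {"a", "b"}), (8, {"a"}), (9, {"b"})]
print(best_strategy(Q, {"b", "c"}, capacity=2, next_call={"a": 10, "b": 11, "c": 12}))
```
  • The enumeration is exponential in the number of miss moments, which mirrors the binary tree in the description; it is only meant for the short, per-window evaluation discussed here.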
  • step S3 includes:
  • Step S31' Obtain the calling frequency of each object in the cache
  • Step S32' Calculate the next requested object according to the calling frequency, and record it as the next object
  • Step S33' Determine whether the next object is the requested object of the current request
  • Step S34' If yes, it is determined that the requested object needs to be stored in the cache; otherwise, it is determined that the requested object does not need to be stored in the cache.
  • In addition to steps S31-S36, whether the requested object needs to be stored in the cache can also be judged from whether the object to be called next is the current requested object. As described in steps S31'-S34', the calling frequency of each object in the cache is obtained from the system, and the object of the next request is calculated from the calling frequency of each object and the time of its last call, that is, the object of the request that comes immediately after the current request; the calculation is as described in step S31 above and is not repeated here.
  • Once the next object is obtained, it is further judged whether the next object is the requested object of the current request. If it is, it is determined that the requested object needs to be stored in the cache, ready for the next call; if it is not, it is determined that the requested object does not need to be stored in the cache. A small sketch follows.
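  • A minimal Python sketch of that check, assuming the next request is predicted as the object whose next arrival time (last call plus a whole number of periods 1/frequency) is the earliest; the function and parameter names are illustrative.

```python
def predict_next_object(call_freq: dict[str, float], last_call: dict[str, float], now: float) -> str:
    """Predicted object of the next request: the one whose next predicted call time is the earliest."""
    def next_time(o: str) -> float:
        n = 1
        while last_call[o] + n / call_freq[o] <= now:
            n += 1
        return last_call[o] + n / call_freq[o]
    return min(call_freq, key=next_time)

def store_if_next(current_obj: str, call_freq: dict[str, float], last_call: dict[str, float], now: float) -> bool:
    """Steps S31'-S34': store the current requested object only if it is also the predicted next request."""
    return predict_next_object(call_freq, last_call, now) == current_obj

print(store_if_next("a", {"a": 1/2, "b": 1/3}, {"a": 0.0, "b": 1.5}, now=2.5))  # True: a is predicted at t=4.0, b later
```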
  • step S2 includes:
  • Step S21 Identify the request object according to the current request
  • Step S22 Compare the requested object with each object in a cache list, where the cache list is a list of all objects stored in the cache;
  • Step S23 If the requested object is consistent with the object in the cache list, it is determined that the requested object is an object in the cache; otherwise, it is determined that the requested object is not an object in the cache.
  • As noted, the requested object is the data or tool used to process the request. Since each request includes a request header and a request body, and the request body contains the server-side parameters (such as the device ID) and the request content, the information needed to process the content can be obtained from the request content; the requested object of the current request is identified from this information and then compared with the objects in the cache list.
  • the cache list is a list of all objects stored in the cache. When an object consistent with the requested object is found in the cache list, it can be determined that the requested object is an object in the cache. When an object consistent with the requested object is not found in the cache list, it indicates that the requested object is not an object in the cache.
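  • A minimal Python sketch of steps S21-S23, assuming the request body is JSON and that the requested object is named in a content field; the field names and identifiers are assumptions for the sketch.

```python
import json

def requested_object_id(request_body: bytes) -> str:
    """Pull the requested object's identifier out of the request body (assumed JSON layout)."""
    body = json.loads(request_body)
    return body["content"]["object"]

def is_in_cache(obj_id: str, cache_list: set[str]) -> bool:
    """Steps S21-S23: the requested object is an object in the cache exactly when it appears in the cache list."""
    return obj_id in cache_list

body = b'{"device_id": "dev-42", "content": {"object": "temperature_rule_engine"}}'
print(is_in_cache(requested_object_id(body), {"temperature_rule_engine", "humidity_rule_engine"}))  # True
```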
  • the data caching device in this embodiment includes:
  • the receiving request unit 100 is configured to receive the current request sent by the device
  • the judging object unit 200 is used to judge whether the requested object of the current request is an object in the cache
  • the calling object unit 300 is configured to call the requested object from the cache if the requested object is an object in the cache, and otherwise to obtain the requested object from a preset database and determine whether the requested object needs to be stored in the cache;
  • the obtaining time unit 400 is configured to obtain, if it is determined that the requested object needs to be stored in the cache, the calling frequency and calling time of each object in the cache, where the calling time is the time at which the object was last called, relative to the current moment;
  • the calculation time unit 500 is configured to calculate, from the calling frequency and the calling time, the time of each call of each object within a first preset time, where the first preset time is a time period of specified length starting from the current moment;
  • the comparison time unit 600 is configured to compare the called time corresponding to each of the objects to obtain the latest target time, and record the object corresponding to the target time as the target object;
  • the object deletion unit 700 is configured to delete the target object and store the requested object in the cache.
  • In this embodiment, the following formula is used to calculate the times at which each object is called within the first preset time: T_i(n) = t_i + n / f_i, n = 1, 2, 3, …, where i is any object in the cache, T_i(n) is the time of the nth subsequent call of object i, f_i is the calling frequency of object i, and t_i is the time of the previous call of object i.
  • the aforementioned calling object unit 300 includes:
  • the calculation sequence subunit is used to calculate, from the preset request frequencies of all the requests, the order in which the requests are sorted by request time within a second preset time;
  • the first time-consuming sub-unit is used to calculate the time required to traverse the requests in the arrangement sequence when the request object is not stored in the cache, and record it as the first time-consuming;
  • the second time-consuming subunit is configured to calculate the time required to traverse the requests in the sort order when the requested object is stored in the cache, and record it as the second time-consuming;
  • the comparison time-consuming subunit is used to compare the first time-consuming with the second time-consuming
  • the first determination subunit is used to determine that the requested object needs to be stored in the cache when the first time-consuming is longer than the second time-consuming;
  • the second determination subunit is used to determine that the requested object does not need to be stored in the cache when the first time-consuming is shorter than the second time-consuming.
  • the foregoing calculation sequence sub-unit includes:
  • the calculation time module is configured to calculate all the request times of each request within the second preset time according to the preset request frequency of all requests;
  • a sorting request module configured to sort the requests corresponding to each of the request moments in chronological order to obtain the sorting order
  • where all request times of request j are calculated by the formula H_j(n) = t_0 + n / f_j, n = 1, 2, 3, …, in which j is any one of the preset requests, H_j(n) is the request time of the nth request of request j, f_j is the request frequency of request j, and t_0 is the initial moment within the second preset time.
  • the aforementioned calling object unit 300 includes:
  • the calling frequency subunit is used to obtain the calling frequency of each object in the cache
  • the calculating object subunit is used to calculate the next requested object according to the calling frequency, and record it as the next object;
  • the judging object subunit is used to judge whether the next object is the requested object of the current request
  • the judgment storage subunit is used to determine, when the next object is the requested object of the current request, that the requested object needs to be stored in the cache, and otherwise to determine that the requested object does not need to be stored in the cache.
  • the aforementioned judgment object unit 200 includes:
  • the identifying object subunit is used to identify the requested object according to the current request
  • the comparison object subunit is used to compare the requested object with each object in a cache list, where the cache list is a list of all objects stored in the cache;
  • the determining cache subunit is configured to determine that the requested object is an object in the cache if the requested object is consistent with an object in the cache list; otherwise, determine that the requested object is not an object in the cache .
  • an embodiment of the present application also provides a computer device.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 3.
  • The computer device includes a processor, a memory, a network interface and a database connected through a system bus, where the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store all the data required for calling the cached object.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer-readable instructions are executed by the processor to realize a data caching method.
  • The above processor executes the steps of the above data caching method: receiving the current request sent by the device; judging whether the requested object of the current request is an object in the cache; if the requested object is an object in the cache, calling the requested object from the cache, otherwise obtaining the requested object from a preset database and judging whether the requested object needs to be stored in the cache; if it is determined that the requested object needs to be stored in the cache, obtaining the calling frequency and calling time of each object in the cache, the calling time being the time at which the object was last called, relative to the current moment; calculating, from the calling frequency and the calling time, the time of each call of each object within a first preset time, the first preset time being a time period of specified length starting from the current moment; comparing the called times corresponding to the objects to obtain the latest target time and recording the object corresponding to the target time as the target object; and deleting the target object and storing the requested object in the cache.
  • In one embodiment, the step of judging whether the requested object needs to be stored in the cache includes: calculating, from the preset request frequencies of all requests, the order in which the requests are sorted by request time within a second preset time; calculating the time required to traverse each request in that order when the requested object is not stored in the cache, and recording it as the first time-consuming; calculating the time required to traverse each request in that order when the requested object is stored in the cache, and recording it as the second time-consuming; comparing the first time-consuming with the second time-consuming; if the first time-consuming is longer than the second time-consuming, determining that the requested object needs to be stored in the cache; and if the first time-consuming is shorter than the second time-consuming, determining that the requested object does not need to be stored in the cache.
  • In one embodiment, the above step of calculating, from the preset request frequencies of all requests, the order in which the requests are sorted by time within the second preset time includes: calculating, from the preset request frequencies of all requests, all the request times of each request within the second preset time; and sorting the requests corresponding to each request time in chronological order to obtain the arrangement order; where all request times of request j are calculated by the formula H_j(n) = t_0 + n / f_j, n = 1, 2, 3, …, in which j is any one of the preset requests, H_j(n) is the request time of the nth request of request j, f_j is the request frequency of request j, and t_0 is the initial moment within the second preset time.
  • In one embodiment, the above step of judging whether the requested object needs to be stored in the cache includes: obtaining the calling frequency of each object in the cache; calculating the object of the next request from the calling frequency and recording it as the next object; judging whether the next object is the requested object of the current request; and if so, determining that the requested object needs to be stored in the cache, otherwise determining that the requested object does not need to be stored in the cache.
  • In one embodiment, the step of judging whether the requested object of the current request is an object in the cache includes: identifying the requested object from the current request; comparing the requested object with each object in a cache list, the cache list being a list of all objects stored in the cache; and if the requested object is consistent with an object in the cache list, determining that the requested object is an object in the cache, otherwise determining that the requested object is not an object in the cache.
  • FIG. 3 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • The computer-readable storage medium is, for example, a non-volatile computer-readable storage medium or a volatile computer-readable storage medium, on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, a data caching method is implemented, which specifically includes: receiving the current request sent by the device; judging whether the requested object of the current request is an object in the cache; if the requested object is an object in the cache, calling the requested object from the cache, otherwise obtaining the requested object from a preset database and judging whether the requested object needs to be stored in the cache; if it is determined that the requested object needs to be stored in the cache, obtaining the calling frequency and calling time of each object in the cache, the calling time being the time at which the object was last called, relative to the current moment; calculating, from the calling frequency and the calling time, the time of each call of each object within a first preset time, the first preset time being a time period of specified length starting from the current moment; comparing the called times corresponding to the objects to obtain the latest target time and recording the object corresponding to the target time as the target object; and deleting the target object and storing the requested object in the cache.
  • In one embodiment, the step of judging whether the requested object needs to be stored in the cache includes: calculating, from the preset request frequencies of all requests, the order in which the requests are sorted by request time within a second preset time; calculating the time required to traverse each request in that order when the requested object is not stored in the cache, and recording it as the first time-consuming; calculating the time required to traverse each request in that order when the requested object is stored in the cache, and recording it as the second time-consuming; comparing the first time-consuming with the second time-consuming; if the first time-consuming is longer than the second time-consuming, determining that the requested object needs to be stored in the cache; and if the first time-consuming is shorter than the second time-consuming, determining that the requested object does not need to be stored in the cache.
  • In one embodiment, the above step of calculating, from the preset request frequencies of all requests, the order in which the requests are sorted by time within the second preset time includes: calculating, from the preset request frequencies of all requests, all the request times of each request within the second preset time; and sorting the requests corresponding to each request time in chronological order to obtain the arrangement order; where all request times of request j are calculated by the formula H_j(n) = t_0 + n / f_j, n = 1, 2, 3, …, in which j is any one of the preset requests, H_j(n) is the request time of the nth request of request j, f_j is the request frequency of request j, and t_0 is the initial moment within the second preset time.
  • In one embodiment, the step of judging whether the requested object of the current request is an object in the cache includes: identifying the requested object from the current request; comparing the requested object with each object in a cache list, the cache list being a list of all objects stored in the cache; and if the requested object is consistent with an object in the cache list, determining that the requested object is an object in the cache, otherwise determining that the requested object is not an object in the cache.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The data caching method, device, computer equipment and storage medium proposed by this application are used to cache data with a fixed request frequency. The method includes: receiving the current request sent by a device; judging whether the requested object of the current request is an object in the cache; if the requested object is an object in the cache, calling the requested object from the cache; otherwise obtaining the requested object from a preset database and judging whether the requested object needs to be stored in the cache; and if it is determined that the requested object needs to be stored in the cache, deleting the target object in the cache and storing the requested object in the cache, the target object being the object whose call time within a first preset time is the latest among all cached objects. The method can improve the cache hit rate in scenarios where the frequency of cache calls is basically fixed, make full use of cache resources, and consume less time than existing caching algorithms such as LRU, LFU and FIFO.

Description

数据缓存方法、装置、计算机设备和存储介质
本申请要求于2019年3月8日提交中国专利局、申请号为201910175754.3,申请名称为“数据缓存方法、装置、计算机设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及到数据处理的技术领域,特别是涉及到一种数据缓存方法、装置、计算机设备和存储介质。
背景技术
缓存通常为各设备***的备份内存,由于内存资源有限,通常会设定一个固定内存值,当缓存大小超过固定值后,再按照LRU(Least Rencently Used)、LFU(Least Frequently Used)或者FIFO(First In First Out)等算法对缓存进行清理。如规则引擎中,会缓存规则引擎实例,用于对设备请求的数据进行处理,而不需要每次请求都重新创建对象,当缓存达到设定的固定值时,会根据缓存算法进行清理。
但是,在物联网场景中,设备上报数据或请求数据的频率往往是固定的,而这种场景中若依据上述LRU、LFU或者FIFO等算法进行清理缓存并不适合,缓存命中率并不高,且缓存资源利用也不够充分,鉴于上述各种缓存算法在物联网场景中的不足之处,亟需一种针对调用缓存频率基本固定的场景的缓存方法。
技术问题
本申请的主要目的为提供一种数据缓存方法、装置、计算机设备和存储介质,旨在解决在请求频率基本固定的场景中,缓存资源利用不充分的技术问题。
技术解决方案
基于上述发明目的,本申请提出一种数据缓存方法,用于缓存请求频率固定的数据,包括:
接收设备发来的当前请求;
判断所述当前请求的请求对象是否为缓存中的对象;
若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象;否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;
若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;
依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;
将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;
删除所述目标对象,并将所述请求对象存入所述缓存中。
本申请还提供一种数据缓存装置,用于缓存请求频率固定的数据,包括:
接收请求单元,用于接收设备发来的当前请求;
判断对象单元,用于判断所述当前请求的请求对象是否为缓存中的对象;
调用对象单元,用于若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象;否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;
获取时刻单元,用于若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;
计算时刻单元,用于依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;
对比时刻单元,用于将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;
删除对象单元,用于删除所述目标对象,并将所述请求对象存入所述缓存中。
本申请还提供一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,所述处理器执行所述计算机可读指令时实现上述方法的步骤。
本申请还提供了一种计算机可读存储介质,其上存储有计算机可读指令,所述计算机可读指令被处理器执行时实现上述方法的步骤。
有益效果
先判断请求的请求对象是否为缓存中的对象,若不是,则进一步判断需不需要将请求对象存入缓存中,若需要,则通过预设规则得到在第一预设时间内各对象被调用的时刻当中最晚的对象,然后删除该对象并将请求对象存入缓存中,该方法能够提高请求频率基本固定场景的缓存命中率,充分利用缓存资源,与现有LRU、LFU以及FIFO等缓存算法相比,耗费的时间代价更少。
附图说明
图1为本申请一实施例中数据缓存方法的步骤示意图;
图2为本申请一实施例中数据缓存装置的结构示意框图;
图3为本申请一实施例的计算机设备的结构示意框图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
本发明的最佳实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
参照图1,本实施例中的数据缓存方法,包括:
步骤S1:接收设备发来的当前请求;
步骤S2:判断所述当前请求的请求对象是否为缓存中的对象;
步骤S3:若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象,否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;
步骤S4:若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;
步骤S5:依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;
步骤S6:将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;
步骤S7:删除所述目标对象,并将所述请求对象存入所述缓存中。
本实施例中,上述数据缓存方法用于缓存请求频率固定的数据,主要应用于请求频率固定的场景,例如物联网场景。上述当前请求至少包括请求头和请求体,其中请求头包括请求方法、版本、使用的协议等信息,请求体包括服务端的参数(如设备ID)、请求内容等信息,依据请求内容得到请求对象,请求对象即为用于处理该请求的数据或者工具。例如,在一个处理设备状态的场景中,设备需要上报状态数据,这些数据包括温度、湿度、电池状态之类的等等,设备发出请求后,服务端接收到该请求并初始化一个规则引擎,这个规则引擎负责对上述数据进行分析和处理,这个例子中规则引擎可以为存在缓存中对象,本实施例中,为了便于表述,将当前***接收到设备发送的请求记为当前请求。
如步骤S2所述,判断当前请求的请求对象是否为缓存中的对象,已知的是,由于接收到当前请求之后,需要对请求进行响应,若请求的对象(用于响应的数据)存于缓存当中,那么就无需从预设的数据库查找再获取,而是直接从缓存中调用即可,这样效率会提高,但是存在请求对象不在缓存中的情况,例如,请求对象为a,但是缓存中有bcd三个,那么这时便无法调用缓存,故而获取到请求之后,判断对应的请求对象是否为缓存中的对象,如在缓存中查找请求对象,若找到即可判定缓存中具有请求对象,这时直接调用该请求对象,如果在缓存中没有查找到请求对象,那么可判定缓存中不具有请求对象,这时需要从***的数据库中获取请求对象,该数据库为预设的、存有对应各个请求的请求对象(响应 数据)的数据库,为了后续***能够提高效率,可进一步判断是否需要将该请求对象存入到缓存中,如根据预设的规则或算法,以耗费更少代价的基准来进一步判断是否需要将该请求对象存入缓存,需知,上述缓存为容量已经存储满的缓存,上述频率固定是指请求的频率是固定的,所以调用缓存的频率也是固定的,如请求a每隔一小时请求一次,请求b每隔15分钟请求一次。
本实施例中,当判定需要将请求对象存入到缓存中,即表明在后续***运行,响应请求的过程中,从缓存中调用请求对象效率更好,耗费的代价更少。由于缓存的容量固定,若需要将请求对象存入缓存,那么则需要将缓存中原来的其中一个对象删除,具体可通过删除的目标对象来实现,该目标对象为在第一预设时间内,各对象下次被调用的时刻当中最晚的对象,删除这个目标对象之后,将上述请求对象存入缓存中,以便下次请求调用。
如上述步骤S4-S7所述,上述请求频率即为调用请求对象的频率,而在物联网场景中,一般都为调用频率基本固定的场景,故而可直接从***中获取到缓存中各对象的调用频率,以及各对象上一次的调用时刻,此处的上一次调用时刻为基于当前时刻,对象前一次被调用的时刻。然后依据每个对象的调用频率以及上一次的调用时刻,计算出在第一预设时间内各次被调用的时刻,此处的第一预设时间为从当前时刻起指定时间内的时间段,最后将计算出来的时刻进行对比得到其中最晚的目标时刻,该目标时刻对应的对象即为上述目标对象,由此可将上述缓存方法称为最晚到达(Lastly Arrived,简称LA)的缓存算法,该LA算法为本申请自定义的算法。
本实施例中利用如下公式,计算出各对象在第一预设时间内每次被调用的时刻:
T_i(n) = t_i + n / f_i，n = 1, 2, 3, …
其中,i为上述缓存中的任一个对象,T i为对象i的调用时刻,f i为对象i的调用频率,t i为前一次调用对象i的时刻。
例如,缓存中的对象A,对应的调用频率为0.05次/秒,对应上一次调用的时刻为12:10:15,通过上述公式下次调用的时刻为12:10:35,再下次调用的时刻为12:10:55,如此计算出在第一预设时间内对象A的所有调用时刻,以此类推,计算出缓存中每个对象在第一预设时间内的所有调用时刻,然后将这些调用时刻进行对比,得到当中最晚的那个目标时刻,对应该目标时刻的对象即为应该删除的对象,为了便于描述将其记为目标对象。
本申请提供的缓存方法,在调用频率基本固定的场景中,由于以耗费更少代价为基准来进一步判断需要将该请求对象存入缓存,而当需要将请求对象存入缓存时,替换的目标对象为第一预设时间内各对象下次被调用的时刻当中最晚的对象,使得缓存的命中率更高,且耗费代价更少,效率更高。
以下从缓存命中率以及耗费代价等方面来说明为何从缓存中删除的是上述目标对象: 首先定义代价c为一次缓存未命中所付出的额外时间开销(即对于某一请求,缓存中具有请求对象且进行调用的响应耗费时间,与缓存中没有请求对象需从数据库获取进而响应的耗费时间的差值)。总代价S则为一定时间内的所有开销之和。
那么在t时间内,缓存命中次数为h,缓存调用总次数为m,因此,命中率可以表示为:
命中率 = h / m
总代价可以表示为:S=(m-h)*c。假设被缓存的对象有1,2,3,4…k,对应被调用的频率依次为f1,f2,f3,…fk,上一次被调用时刻分别记为t1,t2,t3…tk。那么当时刻为t时,i对象后续被调用的时刻为:
T_i(n) = t_i + n / f_i，n = 1, 2, 3, …
因此该段时间内最晚时刻可以表示为:
T_LA = max{t_i + n / f_i}，i∈(1,2,3…k)
即最后T LA对应的i对象即为需要被清理出缓存的对象。
现有的缓存算法中包括LRU,LFU,FIFO算法,分别可用公式表示需要被清理出缓存的对象:T LRU=max{t-t i},i∈(1,2,3…k),
T_LFU = min{f_i}，i∈(1,2,3…k)，
T FIFO=max{t i},i∈(1,2,3…k)。将上述现有算法与本方案提供的LA算法进行比较,如下:
举例地,缓存有对象a,b,c,d,对应的调用频率分别为fa=1/5,fb=1/4,fc=1/3,fd=1/2,缓存中只能存放3个对象,上述代价c为1,缓存中已有的对象集合用U表示。假如缓存中初始对象有a,b,c,对象d在t=2时准备加入缓存,那么在t时间内,每个时间点对应的命中次数h和总代价S如下表:
（原文此处为表格图片，列出各时间点上 LRU、LFU、FIFO 与 LA 算法各自的命中次数 h 与总代价 S，图片内容未能收录。）
由表格可看出,与现有中的LRU,LFU,FIFO算法对比,本申请提供的LA算法在每个时间点缓存的命中次数最高,且耗费的总代价最少。
在一个实施例中,上述步骤S3,包括:
步骤S31:依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序;
步骤S32:计算在所述请求对象没有存入所述缓存的情况下按所述排列顺序遍历各请求所需的时间,并记为第一耗时;
步骤S33:计算在所述请求对象存入所述缓存的情况下按所述排序顺序遍历各请求所需的时间,并记为第二耗时;
步骤S34:将所述第一耗时与所述第二耗时进行对比;
步骤S35:若所述第一耗时比所述第二耗时长,则判定需要将所述请求对象存入所述缓存;
步骤S36:若所述第一耗时比所述第二耗时短,则判定不需要将所述请求对象存入所述缓存。
本实施例中,判断是否需要将请求对象存入缓存通过上述步骤S31-36实现,如上述步骤S31所述,由于,预设的所有请求的请求频率固定,故而可获取每个请求的请求频率,再依据请求频率计算出在第二预设时间内各请求按请求时间的先后顺序进行排序的排列顺序。
在一个实施例中,上述步骤S31,包括:
步骤S310:依据预设的所有请求的请求频率计算得到在所述第二预设时间内各所述请求的所有请求时刻;
步骤S310:将对应每个所述请求时刻的请求按时间顺序进行排序得到所述排列顺序;
其中,通过以下公式计算得到请求j的所有请求时刻:
H_j(n) = t_0 + n / f_j，n = 1, 2, 3, …
其中,j为所述预设的所有请求中的任一个请求,H j(n)为请求j的第n次请求的请求时刻,f j为请求j的请求频率,t 0为所述第二预设时间内的初始时刻。
本实施例中,由于在第二预设时间内所有请求的请求频率已知,故可通过请求频率计算出该段时间内所有请求的请求时刻,然后将对应每个请求时刻的请求按时间顺序排序得到上述排列顺序,例如,设初始时间t 0,请求a的频率为f a,请求b的频率f b,请求c的频率为f c,则下一次请求a到达时刻为
t_0 + 1/f_a；
下一次请求b到达时刻为:
t_0 + 1/f_b；
下一次请求c到达时刻为:
t_0 + 1/f_c。
以此类推,所有请求a到达时刻可以表示为:
H_a(n) = t_0 + n/f_a，
n为自然整数;所有请求b到达时刻可以表示为:
H_b(n) = t_0 + n/f_b；
所有请求c到达时刻可以表示为:
H_c(n) = t_0 + n/f_c。
这时可依据这些时刻的从小到大的排列顺序得到对应请求的排列顺序,如将H a,H b,H c中所有时刻从小到大排序,并记录每个时刻到达的请求,得到序列Q,例如Q={(2,a),(3,b),(5,c),(6,ab),(8,a),(9,b)},其中(2,a)表示在时刻2时,到达的请求有a;(3,b)表示在时刻3时,到达的请求有b;(6,ab)表示时刻6时,到达的请求有a,b。
如上述步骤S32-S33所述,在不将请求对象存入缓存以及将请求对象存入缓存的情况下,分别按上述排列顺序遍历对应的请求得到对应耗时,并分别记为第一耗时,以及第二耗时,从而可依据第一耗时以及第二耗时来判断是否需要将请求对象存入缓存。如上述例子中,在第二预设时间内,如在9分钟中内各请求的排列顺序为a-b-c-ab-a-b,其中当前的请求对象为a,缓存中只能放置bc,若没有将对象a存入缓存,***按上述排列顺序对遍历每个请求的耗时之和即为上述第一耗时,值得注意的是,从数据库获取对象比从缓存获取对象的耗时长,且响应每个请求的耗时、从资料库或者从缓存取出的耗时可通过代码统计得到,故而可统计得到上述第一耗时。同理,当将对象a存入缓存中,则从缓存中删除排在上述顺序最后的对象b,那么上述请求对象为a,缓存中有ac,9分钟内各请求的顺序为a-b-c-ab-a-b,***按该排列顺序对遍历每个请求的耗时之和为的第二耗时。
如上述步骤S34-S36所述,获得上述第一耗时以及第二耗时后,将两者进行比较,当第一耗时比第二耗时长,即表明在没有将上述请求对象存入缓存的情况下,***耗费的时间更多,效率更慢,耗费的代价更大,故而这时判定需要将请求对象存入缓存。反之,当第一耗时比第二耗时短,说明没有将上述请求对象存入缓存的情况下,***耗费的时间更少,代价更小,故而判定不需要将请求对象存入缓存。
在另一实施例中,可以首先计算出每个请求时刻对应的请求对象在存入缓存以及不存入缓存的情况下的耗时,按照选择耗时短原则,得到每次选择将请求对象存入或者不存入缓存的优选策略,然后在响应请求时,按照该优选策略去执行,用以判断每一次请求的请求对象是否需要存入缓存。
其详细过程如下:第一步:上述请求的排列顺序可记为Q,则序列Q={(t1,u1),(t2,u2),(t3,u3),…,(tn,un)},其中t表示时刻,u表示该时刻到达的请求集合,如上述例子中,t1=2时,u1={a};t4=6时,u4={ab}。对于任意二叉树(请求到达的时将请求对象分成两种情况,一种是存入缓存,另一种是不存入缓存)的节点S(i)由五部分组成:父节点索引,左子节点索引,右子节点索引,t(i)时刻的总耗时c(i)以及当前缓存集合m(i)。对于S(i)的左子节点Sl(i+1)表示对象集合A∈u(i)且
A⊄m(i)
时,集合A不使用缓存的情况,右子节点为Sr(i+1)则是集合A都使用缓存的情况。一般情况下,1个对象命中缓存和未命中的耗时相差较大,所以将命中缓存的耗时设为0,未命中缓存的耗时设为1。
第二步:令二叉树的根节点为S0,表示缓存的初始状态,此时c(0)=0,
m(0)为缓存的初始对象集合，
父节点索引,子节点索引均等于null;依次取出序列Q中的元素,先判断t1时刻,u1∈m(0)是否为真,如果为真,则Sr(1)节点中c(1)=0,m(1)=m(0),S(0)右子节点索引指向Sr(1),Sr(1)父节点索引指向S(0)。如果为假,即
u1⊄m(0)，
分为两种情况:集合u1使用缓存和不使用缓存,当使用缓存时,S(0)右子节点索引指向Sr(1),Sr(1)父节点索引指向S(0),c(1)等于c(0)与
u1\m(0)
集合中元素个数k相加,
m(1) = p∪(u1\m(0))，
其中集合p是使用上述LA算法淘汰掉k个对象后剩下的缓存实例集合。当不使用缓存时,S(0)右子节点索引指向Sr(1),Sr(1)父节点索引指向S(0),c(1)等于c(0)与
u1\m(0)
集合中元素个数k相加。
第三步:重复上述步骤,直到Q中元素取完,最后得到一颗二叉树,在上述第二预设时间内,找出对应时间段内该二叉树中所有叶节点中c(n)最小的节点,然后根据其父节点索引,向上检索,直到根节点S(0),此路径即为最短耗时路径。再根据左子节点是不使用缓存,右子节点使用缓存的规则,判断该路径上哪些时刻请求对象需要存入缓存,从而得到该时间段每个请求时刻的是否将请求对象存入缓存的最优策略。
在另一实施例中,上述步骤S3,包括:
步骤S31’:获取所述缓存中各对象的调用频率;
步骤S32’:依据所述调用频率计算出下一个请求的对象,并记为下次对象;
步骤S33’:判断所述下次对象是否为所述当前请求的请求对象;
步骤S34’:若是,则判定需要将所述请求对象存入所述缓存中,否则判定不需要将所述请求对象存入所述缓存中。
除了通过上述步骤S31-S36来实现判断是否需要将请求对象存入缓存,还可通过下次调用的对象是否为当前请求对象来判定。如上述步骤S31’-S34’所述,从***中获取缓存中每个对象的调用频率,根据各对象的调用频率以及上一次调用时刻,计算出下一次请求的 请求对象,即计算出排序在当前请求的下一次请求的对象,计算方法参照上述步骤S31所述,此处不再赘述。当得到下次对象时,进一步判断下次对象是否为当前请求的请求对象,若是,则判定需要将请求对象存入缓存,以便下次调用,若否,则判定不需要将请求对象存入缓存中。
在一个实施例中,上述步骤S2,包括:
步骤S21:依据所述当前请求识别出所述请求对象;
步骤S22:将所述请求对象与缓存列表中各对象进行对比,所述缓存列表为存储在所述缓存中所有对象的列表;
步骤S23:若所述请求对象与所述缓存列表中的对象一致,则判定所述请求对象为所述缓存中的对象;否则判定所述请求对象不为所述缓存中的对象。
本实施例中,已知请求对象即为用于处理该请求的数据或者工具,由于每个请求均包括请求头、请求体,而请求体中包含有服务端的参数(如设备ID)、请求内容等信息,可从请求内容获得处理该内容的信息,根据这些信息识别出当前请求的请求对象,然后将请求对象与缓存列表中的对象进行对比,缓存列表为存储在缓存中所有对象的列表,当在缓存列表中找到与请求对象一致的对象,即可判定请求对象为缓存中的对象,当在缓存列表中找不到与请求对象一致的对象,则表明请求对象不为缓存中的对象。
参照图2,本实施例中数据缓存装置,包括:
接收请求单元100,用于接收设备发来的当前请求;
判断对象单元200,用于判断所述当前请求的请求对象是否为缓存中的对象;
调用对象单元300,用于若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象,否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;
获取时刻单元400,用于若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;
计算时刻单元500,用于依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;
对比时刻单元600,用于将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;
删除对象单元700,用于删除所述目标对象,并将所述请求对象存入所述缓存中。
本实施例中利用如下公式,计算出各对象在第一预设时间内下次被调用的时刻:
T_i(n) = t_i + n / f_i，n = 1, 2, 3, …
其中,i为上述缓存中的任一个对象,T i为对象i的调用时刻,f i为对象i的调用频率,t i为前一次调用对象i的时刻。
在一个实施例中,上述调用对象单元300,包括:
计算顺序子单元,用于依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序;
第一耗时子单元,用于计算在所述请求对象没有存入所述缓存的情况下按所述排列顺序遍历各请求所需的时间,并记为第一耗时;
第二耗时子单元,用于计算在所述请求对象存入所述缓存的情况下按所述排序顺序遍历各请求所需的时间,并记为第二耗时;
对比耗时子单元,用于将所述第一耗时与所述第二耗时进行对比;
第一判定子单元,用于所述第一耗时比所述第二耗时长时,则判定需要将所述请求对象存入所述缓存;
第二判定子单元,用于所述第一耗时比所述第二耗时短时,则判定不需要将所述请求对象存入所述缓存。
在一个实施例中,上述计算顺序子单元,包括:
计算时刻模块,用于依据预设的所有请求的请求频率计算得到在所述第二预设时间内各所述请求的所有请求时刻;
排序请求模块,用于将对应每个所述请求时刻的请求按时间顺序进行排序得到所述排列顺序;
其中,通过以下公式计算得到请求j的所有请求时刻:
H_j(n) = t_0 + n / f_j，n = 1, 2, 3, …
其中,j为所述预设的所有请求中的任一个请求,H j(n)为请求j的第n次请求的请求时刻,f j为请求j的请求频率,t 0为所述第二预设时间内的初始时刻。
在另一实施例中,上述调用对象单元300,包括:
调用频率子单元,用于获取所述缓存中各对象的调用频率;
计算对象子单元,用于依据所述调用频率计算出下一个请求的对象,并记为下次对象;
判断对象子单元,用于判断所述下次对象是否为所述当前请求的请求对象;
判定存入子单元,用于判断所述下次对象为所述当前请求的请求对象,则判定需要将所述请求对象存入所述缓存中,否则判定不需要将所述请求对象存入所述缓存中。
在一个实施例中,上述判断对象单元200,包括:
识别对象子单元,用于依据所述当前请求识别出所述请求对象;
对比对象子单元,用于将所述请求对象与缓存列表中各对象进行对比,所述缓存列表为存储在所述缓存中所有对象的列表;
判定缓存子单元,用于若所述请求对象与所述缓存列表中的对象一致,则判定所述请求对象为所述缓存中的对象;否则判定所述请求对象不为所述缓存中的对象。
参照图3,本申请实施例中还提供一种计算机设备,该计算机设备可以是服务器,其内部结构可以如图3所示。该计算机设备包括通过***总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设计的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作***、计算机可读指令和数据库。该内存器为非易失性存储介质中的操作***和计算机可读指令的运行提供环境。该计算机设备的数据库用于存储上述调用缓存对象所需的所有数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种数据缓存方法。
上述处理器执行上述数据缓存方法的步骤:接收设备发来的当前请求;判断所述当前请求的请求对象是否为缓存中的对象;若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象,否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;删除所述目标对象,并将所述请求对象存入所述缓存中。
在一个实施例中,上述判断是否需要将所述请求对象存入所述缓存的步骤,包括:依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序;计算在所述请求对象没有存入所述缓存的情况下按所述排列顺序遍历各请求所需的时间,并记为第一耗时;计算在所述请求对象存入所述缓存的情况下按所述排序顺序遍历各请求所需的时间,并记为第二耗时;将所述第一耗时与所述第二耗时进行对比;若所述第一耗时比所述第二耗时长,则判定需要将所述请求对象存入所述缓存;若所述第一耗时比所述第二耗时短,则判定不需要将所述请求对象存入所述缓存。
在一个实施例中,上述依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按时间顺序排序的排列顺序的步骤,包括:依据预设的所有请求的请求频率计算得到在所述第二预设时间内各所述请求的所有请求时刻;将对应每个所述请求时刻的请求按时 间顺序进行排序得到所述排列顺序;其中,通过以下公式计算得到请求j的所有请求时刻:
H_j(n) = t_0 + n / f_j，n = 1, 2, 3, …
其中,j为所述预设的所有请求中的任一个请求,H j(n)为请求j的第n次请求的请求时刻,f j为请求j的请求频率,t 0为所述第二预设时间内的初始时刻。
在一个实施例中,上述判断是否需要将所述请求对象存入所述缓存的步骤,包括:获取所述缓存中各对象的调用频率;依据所述调用频率计算出下一个请求的对象,并记为下次对象;判断所述下次对象是否为所述当前请求的请求对象;若是,则判定需要将所述请求对象存入所述缓存中,否则判定不需要将所述请求对象存入所述缓存中。
在一个实施例中,上述判断所述当前请求的请求对象是否为缓存中的对象的步骤,包括:依据所述当前请求识别出所述请求对象;将所述请求对象与缓存列表中各对象进行对比,所述缓存列表为存储在所述缓存中所有对象的列表;若所述请求对象与所述缓存列表中的对象一致,则判定所述请求对象为所述缓存中的对象;否则判定所述请求对象不为所述缓存中的对象。
本领域技术人员可以理解,图3中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定。
本申请一实施例还提供一种计算机可读存储介质,所述计算机可读存储介质,例如为非易失性的计算机可读存储介质,或者为易失性的计算机可读存储介质,其上存储有计算机可读指令,计算机可读指令被处理器执行时实现一种数据缓存方法,具体为:接收设备发来的当前请求;判断所述当前请求的请求对象是否为缓存中的对象;若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象,否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;删除所述目标对象,并将所述请求对象存入所述缓存中。
在一个实施例中,上述判断是否需要将所述请求对象存入所述缓存的步骤,包括:依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序;计算在所述请求对象没有存入所述缓存的情况下按所述排列顺序遍历各请求所需的时间,并记为第一耗时;计算在所述请求对象存入所述缓存的情况下按所述排序顺序遍历各 请求所需的时间,并记为第二耗时;将所述第一耗时与所述第二耗时进行对比;若所述第一耗时比所述第二耗时长,则判定需要将所述请求对象存入所述缓存;若所述第一耗时比所述第二耗时短,则判定不需要将所述请求对象存入所述缓存。
在一个实施例中,上述依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按时间顺序排序的排列顺序的步骤,包括:依据预设的所有请求的请求频率计算得到在所述第二预设时间内各所述请求的所有请求时刻;将对应每个所述请求时刻的请求按时间顺序进行排序得到所述排列顺序;其中,通过以下公式计算得到请求j的所有请求时刻:
H_j(n) = t_0 + n / f_j，n = 1, 2, 3, …
其中,j为所述预设的所有请求中的任一个请求,H j(n)为请求j的第n次请求的请求时刻,f j为请求j的请求频率,t 0为所述第二预设时间内的初始时刻。
在一个实施例中,上述判断所述当前请求的请求对象是否为缓存中的对象的步骤,包括:依据所述当前请求识别出所述请求对象;将所述请求对象与缓存列表中各对象进行对比,所述缓存列表为存储在所述缓存中所有对象的列表;若所述请求对象与所述缓存列表中的对象一致,则判定所述请求对象为所述缓存中的对象;否则判定所述请求对象不为所述缓存中的对象。
以上所述仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种数据缓存方法,用于缓存请求频率固定的数据,其特征在于,包括:
    接收设备发来的当前请求;
    判断所述当前请求的请求对象是否为缓存中的对象;
    若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象,否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;
    若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;
    依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;
    将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;
    删除所述目标对象,并将所述请求对象存入所述缓存中。
  2. 根据权利要求1所述的数据缓存方法,其特征在于,所述依据所述调用频率以及所述调用时刻计算出各所述对象在所述第一预设时间内每次被调用的时刻的步骤,包括:
    利用以下公式计算各所述对象在所述第一预设时间内每次被调用的时刻:
    T_i(n) = t_i + n / f_i，n = 1, 2, 3, …
    其中,i为所述缓存中的任一个对象,T i为对象i的调用时刻,f i为对象i的调用频率,t i为前一次调用对象i的时刻。
  3. 根据权利要求1所述的数据缓存方法,其特征在于,所述判断是否需要将所述请求对象存入所述缓存的步骤,包括:
    依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序;
    计算在所述请求对象没有存入所述缓存的情况下按所述排列顺序遍历各请求所需的时间,并记为第一耗时;
    计算在所述请求对象存入所述缓存的情况下按所述排序顺序遍历各请求所需的时间,并记为第二耗时;
    将所述第一耗时与所述第二耗时进行对比;
    若所述第一耗时比所述第二耗时长,则判定需要将所述请求对象存入所述缓存;
    若所述第一耗时比所述第二耗时短,则判定不需要将所述请求对象存入所述缓存。
  4. 根据权利要求3所述的数据缓存方法,其特征在于,所述依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序的步骤,包括:
    依据预设的所有请求的请求频率计算得到在所述第二预设时间内各所述请求的所有请求时刻;
    将对应每个所述请求时刻的请求按时间顺序进行排序得到所述排列顺序;
    其中,通过以下公式计算得到请求j的所有请求时刻:
    H_j(n) = t_0 + n / f_j，n = 1, 2, 3, …
    其中,j为所述预设的所有请求中的任一个请求,H j(n)为请求j的第n次请求的请求时刻,f j为请求j的请求频率,t 0为所述第二预设时间内的初始时刻。
  5. 根据权利要求1所述的数据缓存方法,其特征在于,所述判断是否需要将所述请求对象存入所述缓存的步骤,包括:
    获取所述缓存中各对象的调用频率;
    依据所述调用频率计算出下一个请求的对象,并记为下次对象;
    判断所述下次对象是否为所述当前请求的请求对象;
    若是,则判定需要将所述请求对象存入所述缓存中,否则判定不需要将所述请求对象存入所述缓存中。
  6. 根据权利要求1所述的数据缓存方法,其特征在于,所述判断所述当前请求的请求对象是否为缓存中的对象的步骤,包括:
    依据所述当前请求识别出所述请求对象;
    将所述请求对象与缓存列表中各对象进行对比,所述缓存列表为存储在所述缓存中所有对象的列表;
    若所述请求对象与所述缓存列表中的对象一致,则判定所述请求对象为所述缓存中的对象;否则判定所述请求对象不为所述缓存中的对象。
  7. 一种数据缓存装置,用于缓存请求频率固定的数据,其特征在于,包括:
    接收请求单元,用于接收设备发来的当前请求;
    判断对象单元,用于判断所述当前请求的请求对象是否为缓存中的对象;
    调用对象单元,用于若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象,否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;
    获取时刻单元,用于若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的 时刻;
    计算时刻单元,用于依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;
    对比时刻单元,用于将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;
    删除对象单元,用于删除所述目标对象,并将所述请求对象存入所述缓存中。
  8. 根据权利要求7所述的数据缓存装置,其特征在于,所述调用对象单元包括:
    计算顺序子单元,用于依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按时间顺序排序的排列顺序;
    第一耗时子单元,用于计算在所述请求对象没有存入所述缓存的情况下按所述排列顺序遍历各请求所需的时间,并记为第一耗时;
    第二耗时子单元,用于计算在所述请求对象存入所述缓存的情况下按所述排序顺序遍历各请求所需的时间,并记为第二耗时;
    对比耗时子单元,用于将所述第一耗时与所述第二耗时进行对比;
    第一判定子单元,用于所述第一耗时比所述第二耗时长时,则判定需要将所述请求对象存入所述缓存;
    第二判定子单元,用于所述第一耗时比所述第二耗时短时,则判定不需要将所述请求对象存入所述缓存。
  9. 根据权利要求8所述的数据缓存装置,其特征在于,所述计算顺序子单元,包括:
    计算时刻模块,用于依据预设的所有请求的请求频率计算得到在所述第二预设时间内各所述请求的所有请求时刻;
    排序请求模块,用于将对应每个所述请求时刻的请求按时间顺序进行排序得到所述排列顺序;
    其中,通过以下公式计算得到请求j的所有请求时刻:
    H_j(n) = t_0 + n / f_j，n = 1, 2, 3, …
    其中,j为所述预设的所有请求中的任一个请求,H j(n)为请求j的第n次请求的请求时刻,f j为请求j的请求频率,t 0为所述第二预设时间内的初始时刻。
  10. 根据权利要求7所述的数据缓存装置,其特征在于,所述调用对象单元,包括:
    调用频率子单元,用于获取所述缓存中各对象的调用频率;
    计算对象子单元,用于依据所述调用频率计算出下一个请求的对象,并记为下次对象;
    判断对象子单元,用于判断所述下次对象是否为所述当前请求的请求对象;
    判定存入子单元,用于判断所述下次对象为所述当前请求的请求对象,则判定需要将所述请求对象存入所述缓存中,否则判定不需要将所述请求对象存入所述缓存中。
  11. 根据权利要求7所述的数据缓存装置,其特征在于,所述判断对象单元,包括:
    识别对象子单元,用于依据所述当前请求识别出所述请求对象;
    对比对象子单元,用于将所述请求对象与缓存列表中各对象进行对比,所述缓存列表为存储在所述缓存中所有对象的列表;
    判定缓存子单元,用于若所述请求对象与所述缓存列表中的对象一致,则判定所述请求对象为所述缓存中的对象;否则判定所述请求对象不为所述缓存中的对象。
  12. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,其特征在于,所述处理器执行所述计算机可读指令时实现数据缓存方法,该数据缓存方法包括:
    接收设备发来的当前请求;
    判断所述当前请求的请求对象是否为缓存中的对象;
    若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象,否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;
    若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;
    依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;
    将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;
    删除所述目标对象,并将所述请求对象存入所述缓存中。
  13. 根据权利要求12所述的计算机设备,其特征在于,所述判断是否需要将所述请求对象存入所述缓存的步骤,包括:
    依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序;
    计算在所述请求对象没有存入所述缓存的情况下按所述排列顺序遍历各请求所需的时间,并记为第一耗时;
    计算在所述请求对象存入所述缓存的情况下按所述排序顺序遍历各请求所需的时间,并记为第二耗时;
    将所述第一耗时与所述第二耗时进行对比;
    若所述第一耗时比所述第二耗时长,则判定需要将所述请求对象存入所述缓存;
    若所述第一耗时比所述第二耗时短,则判定不需要将所述请求对象存入所述缓存。
  14. 根据权利要求13所述的计算机设备,其特征在于,所述依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序的步骤,包括:
    依据预设的所有请求的请求频率计算得到在所述第二预设时间内各所述请求的所有请求时刻;
    将对应每个所述请求时刻的请求按时间顺序进行排序得到所述排列顺序;
    其中,通过以下公式计算得到请求j的所有请求时刻:
    H_j(n) = t_0 + n / f_j，n = 1, 2, 3, …
    其中,j为所述预设的所有请求中的任一个请求,H j(n)为请求j的第n次请求的请求时刻,f j为请求j的请求频率,t 0为所述第二预设时间内的初始时刻。
  15. 根据权利要求12所述的计算机设备,其特征在于,所述判断是否需要将所述请求对象存入所述缓存的步骤,包括:
    获取所述缓存中各对象的调用频率;
    依据所述调用频率计算出下一个请求的对象,并记为下次对象;
    判断所述下次对象是否为所述当前请求的请求对象;
    若是,则判定需要将所述请求对象存入所述缓存中,否则判定不需要将所述请求对象存入所述缓存中。
  16. 根据权利要求12所述的计算机设备,其特征在于,所述判断所述当前请求的请求对象是否为缓存中的对象的步骤,包括:
    依据所述当前请求识别出所述请求对象;
    将所述请求对象与缓存列表中各对象进行对比,所述缓存列表为存储在所述缓存中所有对象的列表;
    若所述请求对象与所述缓存列表中的对象一致,则判定所述请求对象为所述缓存中的对象;否则判定所述请求对象不为所述缓存中的对象。
  17. 一种计算机可读存储介质,其上存储有计算机可读指令,其特征在于,所述计算机可读指令被处理器执行时实现数据缓存方法,该数据缓存方法包括:
    接收设备发来的当前请求;
    判断所述当前请求的请求对象是否为缓存中的对象;
    若所述请求对象为缓存中的对象,则从所述缓存中调用所述请求对象,否则从预设数据库中获取所述请求对象,并判断是否需要将所述请求对象存入所述缓存;
    若判定需要将所述请求对象存入所述缓存中,则获取所述缓存中各对象的调用频率以及调用时刻,所述调用时刻为基于当前时刻所述对象前一次被调用的时刻;
    依据所述调用频率以及所述调用时刻计算出各所述对象在第一预设时间内每次被调用的时刻,所述第一预设时间为从当前时刻起指定时间内的时间段;
    将各所述对象对应的被调用的时刻进行对比得到时刻最晚的目标时刻,将对应所述目标时刻的对象记为所述目标对象;
    删除所述目标对象,并将所述请求对象存入所述缓存中。
  18. 根据权利要求17所述的计算机可读存储介质,其特征在于,所述判断是否需要将所述请求对象存入所述缓存的步骤,包括:
    依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序;
    计算在所述请求对象没有存入所述缓存的情况下按所述排列顺序遍历各请求所需的时间,并记为第一耗时;
    计算在所述请求对象存入所述缓存的情况下按所述排序顺序遍历各请求所需的时间,并记为第二耗时;
    将所述第一耗时与所述第二耗时进行对比;
    若所述第一耗时比所述第二耗时长,则判定需要将所述请求对象存入所述缓存;
    若所述第一耗时比所述第二耗时短,则判定不需要将所述请求对象存入所述缓存。
  19. 根据权利要求18所述的计算机可读存储介质,其特征在于,所述依据预设的所有请求的请求频率计算得到在第二预设时间内各请求按请求时间排序的排列顺序的步骤,包括:
    依据预设的所有请求的请求频率计算得到在所述第二预设时间内各所述请求的所有请求时刻;
    将对应每个所述请求时刻的请求按时间顺序进行排序得到所述排列顺序;
    其中,通过以下公式计算得到请求j的所有请求时刻:
    H_j(n) = t_0 + n / f_j，n = 1, 2, 3, …
    其中,j为所述预设的所有请求中的任一个请求,H j(n)为请求j的第n次请求的请求时刻,f j为请求j的请求频率,t 0为所述第二预设时间内的初始时刻。
  20. 根据权利要求17所述的计算机可读存储介质,其特征在于,所述判断是否需要将所述请求对象存入所述缓存的步骤,包括:
    获取所述缓存中各对象的调用频率;
    依据所述调用频率计算出下一个请求的对象,并记为下次对象;
    判断所述下次对象是否为所述当前请求的请求对象;
    若是,则判定需要将所述请求对象存入所述缓存中,否则判定不需要将所述请求对象存入所述缓存中。
PCT/CN2019/118426 2019-03-08 2019-11-14 数据缓存方法、装置、计算机设备和存储介质 WO2020181820A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910175754.3A CN110018969B (zh) 2019-03-08 2019-03-08 数据缓存方法、装置、计算机设备和存储介质
CN201910175754.3 2019-03-08

Publications (1)

Publication Number Publication Date
WO2020181820A1 true WO2020181820A1 (zh) 2020-09-17

Family

ID=67189375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118426 WO2020181820A1 (zh) 2019-03-08 2019-11-14 数据缓存方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN110018969B (zh)
WO (1) WO2020181820A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113329051A (zh) * 2021-04-20 2021-08-31 海南视联大健康智慧医疗科技有限公司 数据获取方法、装置及可读存储介质
CN113806249A (zh) * 2021-09-13 2021-12-17 济南浪潮数据技术有限公司 一种对象存储有序列举方法、装置、终端及存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110018969B (zh) * 2019-03-08 2023-06-02 平安科技(深圳)有限公司 数据缓存方法、装置、计算机设备和存储介质
CN112364016B (zh) * 2020-10-27 2021-08-31 中国地震局地质研究所 一种异频数据对象的时间嵌套缓存模型的构建方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102223681A (zh) * 2010-04-19 2011-10-19 中兴通讯股份有限公司 一种物联网***及其中缓存的控制方法
CN103544119A (zh) * 2013-09-26 2014-01-29 广东电网公司电力科学研究院 缓存调度方法与***及其介质
CN106888262A (zh) * 2017-02-28 2017-06-23 北京邮电大学 一种缓存替换方法及装置
US20170185645A1 (en) * 2015-12-23 2017-06-29 Sybase, Inc. Database caching in a database system
CN110018969A (zh) * 2019-03-08 2019-07-16 平安科技(深圳)有限公司 数据缓存方法、装置、计算机设备和存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933849A (en) * 1997-04-10 1999-08-03 At&T Corp Scalable distributed caching system and method
JPH11112541A (ja) * 1997-09-12 1999-04-23 Internatl Business Mach Corp <Ibm> メッセージ中継方法及びメッセージ処理方法、ルータ装置、ネットワークシステム、ルータ装置を制御するプログラムを格納した記憶媒体
KR100476781B1 (ko) * 2001-12-28 2005-03-16 삼성전자주식회사 캐싱기법을 이용한 mpeg-4 시스템 단말의 제어방법
JP5385874B2 (ja) * 2010-08-23 2014-01-08 日本電信電話株式会社 キャッシュ管理装置、キャッシュ管理プログラム及び記録媒体
WO2012116369A2 (en) * 2011-02-25 2012-08-30 Fusion-Io, Inc. Apparatus, system, and method for managing contents of a cache
CN106899558B (zh) * 2015-12-21 2020-05-08 腾讯科技(深圳)有限公司 访问请求的处理方法、装置和存储介质
CN108241583A (zh) * 2017-11-17 2018-07-03 平安科技(深圳)有限公司 薪资计算的数据处理方法、应用服务器及计算机可读存储介质
CN109240613A (zh) * 2018-08-29 2019-01-18 平安科技(深圳)有限公司 数据缓存方法、装置、计算机设备及存储介质
CN109388550B (zh) * 2018-11-08 2022-03-22 浪潮电子信息产业股份有限公司 一种缓存命中率确定方法、装置、设备及可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102223681A (zh) * 2010-04-19 2011-10-19 中兴通讯股份有限公司 一种物联网***及其中缓存的控制方法
CN103544119A (zh) * 2013-09-26 2014-01-29 广东电网公司电力科学研究院 缓存调度方法与***及其介质
US20170185645A1 (en) * 2015-12-23 2017-06-29 Sybase, Inc. Database caching in a database system
CN106888262A (zh) * 2017-02-28 2017-06-23 北京邮电大学 一种缓存替换方法及装置
CN110018969A (zh) * 2019-03-08 2019-07-16 平安科技(深圳)有限公司 数据缓存方法、装置、计算机设备和存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113329051A (zh) * 2021-04-20 2021-08-31 海南视联大健康智慧医疗科技有限公司 数据获取方法、装置及可读存储介质
CN113806249A (zh) * 2021-09-13 2021-12-17 济南浪潮数据技术有限公司 一种对象存储有序列举方法、装置、终端及存储介质
CN113806249B (zh) * 2021-09-13 2023-12-22 济南浪潮数据技术有限公司 一种对象存储有序列举方法、装置、终端及存储介质

Also Published As

Publication number Publication date
CN110018969A (zh) 2019-07-16
CN110018969B (zh) 2023-06-02

Similar Documents

Publication Publication Date Title
WO2020181820A1 (zh) 数据缓存方法、装置、计算机设备和存储介质
KR102290835B1 (ko) 유지관리 동작들을 위한 병합 트리 수정들
KR102266756B1 (ko) Kvs 트리
TWI702506B (zh) 用於合併樹廢棄項目指標之系統、機器可讀媒體及機器實施之方法
US10552287B2 (en) Performance metrics for diagnosing causes of poor performing virtual machines
KR102307957B1 (ko) 다중-스트림 저장 장치를 위한 스트림 선택
US6751627B2 (en) Method and apparatus to facilitate accessing data in network management protocol tables
US20170116136A1 (en) Reducing data i/o using in-memory data structures
US6993031B2 (en) Cache table management device for router and program recording medium thereof
CN109117275B (zh) 基于数据分片的对账方法、装置、计算机设备及存储介质
JP5155001B2 (ja) 文書検索装置
KR20140067881A (ko) 컨텐츠 중심 네트워크에서 컨텐츠 소유자 및 노드의 패킷 전송 방법
CN105159845A (zh) 存储器读取方法
CN108614837A (zh) 文件存储和检索的方法及装置
CN110245129A (zh) 一种分布式全局数据去重方法和装置
US20060195482A1 (en) Temporal knowledgebase
CN110088745B (zh) 数据处理***以及数据处理方法
CN102594787B (zh) 数据抓取方法、***和路由服务器
US8566521B2 (en) Implementing cache offloading
JP2018511131A (ja) オンライン媒体のための階層的なコストベースのキャッシング
CN108566335B (zh) 一种基于NetFlow的网络拓扑生成方法
CN110716941B (zh) 一种handle标识解析***及数据查询方法
CN109062694B (zh) 一种将应用程序迁移到云平台的方法
CN113810298A (zh) 一种支持网络流量抖动的OpenFlow虚拟流表弹性加速查找方法
Baskaran et al. Study of combined Web prefetching with Web caching based on machine learning technique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19919311

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19919311

Country of ref document: EP

Kind code of ref document: A1