CN117951044A - Cache identification and updating method and system - Google Patents

Cache identification and updating method and system

Info

Publication number
CN117951044A
Authority
CN
China
Prior art keywords
cache
time
short
long
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410354317.9A
Other languages
Chinese (zh)
Other versions
CN117951044B (en)
Inventor
唐红娟
高晴
丁川
叶凯
樊海东
鲁冰青
曾忠安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Mandala Software Co ltd
Original Assignee
Jiangxi Mandala Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Mandala Software Co ltd
Priority to CN202410354317.9A
Publication of CN117951044A
Application granted
Publication of CN117951044B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cache identification and updating method and system in the technical field of data storage. The method comprises: dividing the cache data in a server by type to obtain long-time cache files and short-time cache files; loading the long-time and short-time cache files according to different matched cache policies; allocating the loaded long-time and short-time cache files to different cache pools, the cache pools comprising a long-time cache pool and a short-time cache pool; formulating a cache update policy according to the long-time cache pool and the short-time cache pool, and updating the long-time and short-time cache files; and, when a client calls a long-time or short-time cache file in the server, generating a check code according to the type, the cache policy and the update policy, and acquiring the long-time or short-time cache file from the cache pool. The invention solves the prior-art problems of untimely updating and synchronization of cache data and slow access speed.

Description

Cache identification and updating method and system
Technical Field
The present invention relates to the field of data storage technologies, and in particular, to a method and system for identifying and updating a cache.
Background
With the development of Internet applications, the traffic of publicly accessible sites has grown sharply and the number of requests per unit time keeps climbing, which puts the back-end technology of data-interaction sites under severe strain. Database reads and writes in particular become the biggest bottleneck: the uncontrolled growth of data volume and the concentration of accesses increase the load on the database, its responses degrade, and ultimately the website is displayed with delay.
To address these problems, medical systems under development usually cache part of the commonly used data in memory in order to speed up system access and reduce concurrent access pressure on the database; when the data is needed it is queried directly in memory, which accelerates queries and avoids hitting the database repeatedly and adding to its load. However, because cached data changes over time, the cache and the database can fall out of sync, or be synchronized late, incompletely, or at inconsistent frequencies, so the cached data is no longer the latest data; this affects the use of the data and defeats the purpose of caching.
Disclosure of Invention
Accordingly, the present invention is directed to a method and system for identifying and updating a cache, which aims to solve the problems of untimely cache data updating and slow access speed in the prior art.
A first aspect of the present invention provides a method for identifying and updating a cache, the method comprising:
dividing the cache data in the server into types to obtain a long-time cache file and a short-time cache file;
Loading the long-time cache file and the short-time cache file by matching different cache strategies;
The loaded long-time cache files and the loaded short-time cache files are distributed to different cache pools, wherein each cache pool comprises a long-time cache pool and a short-time cache pool, the long-time cache files are distributed to the long-time cache pool, and the short-time cache files are distributed to the short-time cache pool;
According to the long-time cache pool and the short-time cache pool, a cache update strategy is formulated, and the long-time cache file and the short-time cache file are updated, which comprises the following steps:
The update policies include a first update policy that triggers updating the long-time cache file every first update interval and a second update policy that triggers updating the short-time cache file every second update interval, wherein the first update interval is greater than the second update interval,
The second update interval includes a first update sub-interval and a second update sub-interval, the second update sub-interval being longer than the first update sub-interval,
When the short-time cache pool has not been invoked after the second update sub-interval, the short-time cache file is reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the second update sub-interval, the short-time cache file does not need to be reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the first update sub-interval but does not exceed the second update sub-interval, the loading time is obtained and compared with the data time of the requested query to judge whether the short-time cache file needs to be reloaded and updated;
When the client calls the long-time cache file or the short-time cache file in the server, a check code is generated according to the type, the cache policy and the update policy, and the long-time cache file or the short-time cache file of the cache pool is acquired.
Compared with the prior art, the invention has the following beneficial effects. The cache identification and updating method provided by the invention can effectively improve the update-synchronization rate of cache data and the access rate. Specifically, the cache data in the server is divided by type into long-time cache files and short-time cache files, and the cache data is updated on long-time and short-time schedules, which improves the synchronism of the cache data, reduces the pressure on the database and improves the efficiency of obtaining cached data. The long-time and short-time cache files are loaded according to different matched cache policies, so the different loading modes reduce the data-loading pressure. The loaded long-time and short-time cache files are allocated to different cache pools, where the cache pools comprise a long-time cache pool and a short-time cache pool, the long-time cache files being allocated to the long-time cache pool and the short-time cache files to the short-time cache pool. A cache update policy is formulated according to the long-time cache pool and the short-time cache pool and the long-time and short-time cache files are updated; setting different update intervals ensures timely updating of the cache data and its integrity, which reduces the pressure on the database and improves the efficiency of obtaining cached data. When a client calls a long-time or short-time cache file in the server, a check code is generated according to the type, the cache policy and the update policy, and the long-time or short-time cache file is obtained from the cache pool; the check code improves the security of the database and realizes database synchronization, while its verification improves the integrity and accuracy of data update synchronization. The technical problems of untimely cache-data update synchronization and slow access speed are thereby solved.
According to an aspect of the above technical solution, the step of loading the long-time cache file and the short-time cache file by matching with different cache policies specifically includes:
matching the long-time cache file with an initial cache strategy, and loading the long-time cache file when the service of the system is released;
And matching the short-time cache file with an initial cache policy, and loading the short-time cache file when the system is used.
According to one aspect of the above technical solution, the step of obtaining the loading time and comparing the loading time with the data time of the request query to determine whether the short-time cache file needs to be reloaded and updated specifically includes:
The loading time is required to be obtained and compared with the data time required to be queried, and whether the loading time is consistent with the data time is judged;
if yes, the short-time cache file does not need to be reloaded and updated;
If not, reloading and updating the short-time cache file.
According to an aspect of the foregoing technical solution, when the client invokes the long-time cache file or the short-time cache file in the server, a check code is generated according to the type, the cache policy, and the update policy, and the step of obtaining the long-time cache file or the short-time cache file of the cache pool specifically includes:
when the client calls the long-time cache file or the short-time cache file in the server, a check code is generated to access the cache pool according to the type, the cache policy and the update policy;
and the cache pool sends a feedback code to the client, and when the client verifies that the verification code is consistent with the feedback code, the long-time cache file or the short-time cache file of the cache pool is obtained by unlocking.
According to one aspect of the above technical solution, when the client calls the long-time cache file or the short-time cache file in the server, a step of generating a check code to access the cache pool according to the type, the cache policy and the update policy specifically includes:
When the client calls the long-time cache file or the short-time cache file in the server, a character string is generated according to the type, the cache policy and the update policy;
And applying Caesar cipher encryption to the character string to obtain a first ciphertext, adding a timestamp and performing a Base64 operation to obtain a second ciphertext, and performing MD5 encryption on the second ciphertext to obtain the check code.
A second aspect of the present invention provides a system for cache identification and update, for performing the method for cache identification and update described in any one of the foregoing, the system comprising:
the type dividing module is used for dividing the cache data in the server into types to obtain a long-time cache file and a short-time cache file;
the cache policy matching module is used for loading the long-time cache file and the short-time cache file by matching different cache policies;
The cache pool allocation module is used for allocating the loaded long-time cache files and the loaded short-time cache files to different cache pools, wherein the cache pools comprise a long-time cache pool and a short-time cache pool, the long-time cache files are allocated to the long-time cache pool, and the short-time cache files are allocated to the short-time cache pool;
The updating policy making module is configured to make a cache updating policy according to the long-time cache pool and the short-time cache pool, and update the long-time cache file and the short-time cache file, and includes:
The update policies include a first update policy that triggers updating the long-time cache file every first update interval and a second update policy that triggers updating the short-time cache file every second update interval, wherein the first update interval is greater than the second update interval,
The second update interval includes a first update sub-interval and a second update sub-interval, the second update sub-interval being longer than the first update sub-interval,
When the short-time cache pool has not been invoked after the second update sub-interval, the short-time cache file is reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the second update sub-interval, the short-time cache file does not need to be reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the first update sub-interval but does not exceed the second update sub-interval, the loading time is obtained and compared with the data time of the requested query to judge whether the short-time cache file needs to be reloaded and updated;
And the cache calling module is used for generating a check code according to the type, the cache policy and the update policy when the client calls the long-time cache file or the short-time cache file in the server, and acquiring the long-time cache file or the short-time cache file of the cache pool.
A third aspect of the present invention provides a readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the steps of any one of the methods described above.
A fourth aspect of the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any one of the methods described above when the program is executed.
Drawings
FIG. 1 is a flowchart of a method for cache identification and update according to a first embodiment of the present invention;
FIG. 2 is a block diagram illustrating a system for cache identification and update in accordance with a second embodiment of the present invention;
Description of reference numerals in the drawings:
The type dividing module 100, the cache policy matching module 200, the cache pool allocation module 300, the update policy making module 400 and the cache calling module 500;
the invention will be further described in the following detailed description in conjunction with the above-described figures.
Detailed Description
In order that the invention may be readily understood, a more complete description of the invention will be rendered by reference to the appended drawings. Several embodiments of the invention are presented in the figures. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "mounted" on another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like are used herein for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed types.
The cache identification and updating method and system will be described in detail below with reference to specific embodiments and the accompanying drawings.
Example 1
Referring to fig. 1, a method for identifying and updating a cache in a first embodiment of the invention is shown, and the method includes steps S10 to S14.
Step S10, dividing the cache data in the server into types to obtain a long-time cache file and a short-time cache file;
In this step, a long-time cache file and a short-time cache file are defined in the server according to the requirements of the client, and the cache data is updated on long-time and short-time schedules, which improves the synchronism of the cache data, reduces the pressure on the database and improves the efficiency of obtaining cached data. In addition, the long-time and short-time cache files are cached in the server and the client fetches them on demand, which reduces the data-caching pressure on the client.
In some embodiments, the data of the long-time cache file is dictionary data, such as gender and address; the data of the short-time cache file is business data, such as certificate type, drug catalog and medical record type.
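As a purely illustrative sketch (not part of the claimed method), the type division of step S10 could be implemented as a simple lookup in Python; the dataset names, the default branch and all identifiers below are assumptions.

```python
from enum import Enum

class CacheType(Enum):
    LONG = "long"    # long-time cache file: rarely changing dictionary data
    SHORT = "short"  # short-time cache file: frequently changing business data

# Illustrative assignment of datasets to cache types (assumed names).
LONG_TIME_DATA = {"gender", "address"}
SHORT_TIME_DATA = {"certificate_type", "drug_catalog", "medical_record_type"}

def classify(dataset_name: str) -> CacheType:
    """Divide server-side cache data into long-time and short-time cache files."""
    if dataset_name in LONG_TIME_DATA:
        return CacheType.LONG
    # Unknown datasets are treated as short-time here so they are refreshed more often
    # (an assumption; the text does not specify a default).
    return CacheType.SHORT
```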
Step S11, loading the long-time cache file and the short-time cache file by matching different cache strategies;
Specifically, the long-time cache file is matched with an initial cache strategy, and is loaded when the service of the system is released;
And matching the short-time cache file with an initial cache policy, and loading the short-time cache file when the system is used.
It should be noted that loading the long-time cache file when the system service is released and the short-time cache file when the system is used reduces the pressure on the database; it also avoids the situation in which preloaded data is modified afterwards and the latest data has to be reloaded when the system is actually used, which would increase the pressure on the database.
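Continuing the illustrative sketch above, the two loading modes could be expressed as an eager load at service release for the long-time cache pool and a lazy load on first use for the short-time cache pool; the CachePool class, the loader callback and the reuse of LONG_TIME_DATA from the previous sketch are assumptions.

```python
import time
from typing import Any, Callable, Dict

class CachePool:
    """A cache pool that records each file's loading time for later update checks."""
    def __init__(self) -> None:
        self.files: Dict[str, Any] = {}
        self.loaded_at: Dict[str, float] = {}

    def load(self, name: str, loader: Callable[[str], Any]) -> Any:
        self.files[name] = loader(name)      # fetch the data via the supplied loader
        self.loaded_at[name] = time.time()   # remember the loading time
        return self.files[name]

long_pool = CachePool()   # long-time cache pool
short_pool = CachePool()  # short-time cache pool

def on_service_release(loader: Callable[[str], Any]) -> None:
    """Initial cache policy for long-time files: load when the system service is released."""
    for name in LONG_TIME_DATA:
        long_pool.load(name, loader)

def get_short_time(name: str, loader: Callable[[str], Any]) -> Any:
    """Initial cache policy for short-time files: load only when the system is used."""
    if name not in short_pool.files:
        short_pool.load(name, loader)
    return short_pool.files[name]
```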
Step S12, the loaded long-time cache files and the loaded short-time cache files are distributed to different cache pools, wherein each cache pool comprises a long-time cache pool and a short-time cache pool, the long-time cache files are distributed to the long-time cache pool, and the short-time cache files are distributed to the short-time cache pool;
Step S13, according to the long-time cache pool and the short-time cache pool, making a cache update policy, and updating the long-time cache file and the short-time cache file, including:
The update policies include a first update policy that triggers updating the long-time cache file every first update interval and a second update policy that triggers updating the short-time cache file every second update interval, wherein the first update interval is greater than the second update interval,
The second update interval includes a first update sub-interval and a second update sub-interval, the second update sub-interval being longer than the first update sub-interval,
When the short-time cache pool has not been invoked after the second update sub-interval, the short-time cache file is reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the second update sub-interval, the short-time cache file does not need to be reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the first update sub-interval but does not exceed the second update sub-interval, the loading time is obtained and compared with the data time of the requested query to judge whether the short-time cache file needs to be reloaded and updated;
In some embodiments, the first update interval may be, for example, 20 h, 22 h, 24 h, 26 h or 28 h.
In addition, the loading time is required to be obtained and compared with the data time required to be queried to judge whether the short-time cache file needs to be reloaded and updated, which specifically comprises the following steps:
The loading time is required to be obtained and compared with the data time required to be queried, and whether the loading time is consistent with the data time is judged;
if yes, the short-time cache file does not need to be reloaded and updated;
If not, reloading and updating the short-time cache file.
It should be noted that, through setting up of different update intervals, timely update of the cache data and integrity of the cache data will be ensured, pressure of the database is reduced, and obtaining efficiency of the cache data is improved.
In some embodiments, the first update sub-interval may be, for example, 4 min, 5 min or 6 min, and the second update sub-interval may be, for example, 9 min, 10 min or 11 min.
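The three-way decision described above could be sketched as follows, using 5 min and 10 min as illustrative values for the first and second sub-intervals; the function name, its parameters and the final fallback branch are assumptions, not part of the disclosure.

```python
import time
from typing import Optional

FIRST_SUB_INTERVAL = 5 * 60    # first sub-interval of the second update interval, e.g. 5 min
SECOND_SUB_INTERVAL = 10 * 60  # second (longer) sub-interval, e.g. 10 min

def short_cache_needs_reload(cycle_start: float,
                             called_and_loaded_at: Optional[float],
                             loading_time: float,
                             requested_data_time: float) -> bool:
    """Decide whether the short-time cache file must be reloaded and updated."""
    now = time.time()
    if called_and_loaded_at is None:
        # Short-time cache pool not invoked after the second sub-interval: reload and update.
        return now - cycle_start > SECOND_SUB_INTERVAL
    elapsed = called_and_loaded_at - cycle_start
    if elapsed > SECOND_SUB_INTERVAL:
        # Called and loaded after the second sub-interval: no reload needed.
        return False
    if FIRST_SUB_INTERVAL < elapsed <= SECOND_SUB_INTERVAL:
        # Compare the loading time with the data time of the requested query:
        # consistent -> keep the cached file, inconsistent -> reload and update.
        return loading_time != requested_data_time
    # Case not spelled out in the text (call before the first sub-interval): assume still fresh.
    return False
```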
Step S14, when the client calls the long-time cache file or the short-time cache file in the server, a check code is generated according to the type, the cache policy and the update policy, and the long-time cache file or the short-time cache file of the cache pool is obtained.
Before the step of calling the long-time cache file or the short-time cache file in the server by the client, the method further comprises the following steps:
judging whether the client calls the cache data or not;
if yes, the check code is valid;
If not, the check code is invalid, and subsequent steps are not required to be executed.
Specifically, when the client calls the long-time cache file or the short-time cache file in the server, a check code is generated according to the type, the cache policy and the update policy to access the cache pool;
and the cache pool sends a feedback code to the client, and when the client verifies that the verification code is consistent with the feedback code, the long-time cache file or the short-time cache file of the cache pool is obtained by unlocking.
In some embodiments, the client checks the feedback code against the check code for consistency. If the two codes are consistent, the long-time or short-time cache file obtained from the cache pool is available data and the client can use it directly; if they are inconsistent, the obtained long-time or short-time cache file is not usable, and the client needs to execute the calling operation again to re-obtain the long-time or short-time cache file.
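A possible client-side sketch of this handshake is shown below; pool.request(...) is a hypothetical server call that returns the cached file together with the feedback code, and the retry cap is an assumption.

```python
def fetch_from_cache_pool(pool, file_name: str, check_code: str, max_retries: int = 3):
    """Client-side sketch of the check-code / feedback-code verification described above."""
    for _ in range(max_retries):
        data, feedback_code = pool.request(file_name, check_code)  # hypothetical server API
        if feedback_code == check_code:
            # Codes are consistent: the long-time or short-time cache file is available data.
            return data
        # Codes are inconsistent: execute the calling operation again to re-obtain the file.
    raise RuntimeError("feedback code never matched the check code")
```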
In addition, when the client calls the long-time cache file or the short-time cache file in the server, a check code is generated to access the cache pool according to the type, the cache policy and the update policy, which specifically includes:
When the client calls the long-time cache file or the short-time cache file in the server, a character string is generated according to the type, the cache policy and the update policy;
it should be noted that, the client may also choose not to update the cached data, directly use the original long-time cached file and short-time cached file, and improve the user experience.
In some embodiments, the types (long-time cache file and short-time cache file) may be assigned the characters 01 and 02 respectively, and the cache update policy may likewise be encoded, for example: 01 indicates that no cache data is updated; 02 indicates that the short-time cache pool was not invoked after the second update sub-interval, so the short-time cache file is reloaded and updated; 03 indicates that the time at which the short-time cache pool was called and loaded exceeds the second update sub-interval, so the short-time cache file does not need to be reloaded and updated; 04 indicates that the time at which the short-time cache pool was called and loaded exceeds the first update sub-interval but does not exceed the second update sub-interval; and so on. These codes are arranged and combined to form the character string.
And applying Caesar cipher encryption to the character string to obtain a first ciphertext, adding a timestamp and performing a Base64 operation to obtain a second ciphertext, and performing MD5 encryption on the second ciphertext to obtain the check code.
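The check-code chain could be sketched as follows; only the sequence of operations (Caesar cipher, timestamp, Base64, MD5) follows the text, while the shift value, the code layout and the function names are assumptions.

```python
import base64
import hashlib
import time

def caesar_encrypt(text: str, shift: int = 3) -> str:
    """Caesar-cipher the letters of the string; other characters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def make_check_code(type_code: str, cache_policy_code: str, update_policy_code: str) -> str:
    plain = type_code + cache_policy_code + update_policy_code       # character string, e.g. "01" + ...
    first_ciphertext = caesar_encrypt(plain)                         # Caesar encryption -> first ciphertext
    stamped = first_ciphertext + str(int(time.time()))               # add a timestamp
    second_ciphertext = base64.b64encode(stamped.encode()).decode()  # Base64 operation -> second ciphertext
    return hashlib.md5(second_ciphertext.encode()).hexdigest()       # MD5 -> check code
```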
The setting of the check code can improve the safety of the database, realize the synchronization of the database, and meanwhile, the verification of the check code can improve the integrity and the accuracy of the update synchronization of the data.
In summary, the cache identification and updating method in the above embodiment of the present invention can effectively improve the update-synchronization rate of cache data and the access rate. Specifically, the cache data in the server is divided by type into long-time cache files and short-time cache files, and the cache data is updated on long-time and short-time schedules, which improves the synchronism of the cache data, reduces the pressure on the database and improves the efficiency of obtaining cached data. The long-time and short-time cache files are loaded according to different matched cache policies, so the different loading modes reduce the data-loading pressure; the loaded long-time and short-time cache files are allocated to different cache pools, where the cache pools comprise a long-time cache pool and a short-time cache pool, the long-time cache files being allocated to the long-time cache pool and the short-time cache files to the short-time cache pool. A cache update policy is formulated according to the long-time cache pool and the short-time cache pool and the long-time and short-time cache files are updated; setting different update intervals ensures timely updating of the cache data and its integrity, reduces the pressure on the database and improves the efficiency of obtaining cached data. When the client calls a long-time or short-time cache file in the server, a check code is generated according to the type, the cache policy and the update policy, and the long-time or short-time cache file is obtained from the cache pool; the check code improves the security of the database and realizes database synchronization, while its verification improves the integrity and accuracy of data update synchronization, thereby solving the technical problems of untimely cache-data update synchronization and slow access speed.
Example two
Referring to fig. 2, a system for identifying and updating a cache according to a second embodiment of the present invention is shown, the system includes:
the type dividing module 100 is configured to divide the cache data in the server into types to obtain a long-time cache file and a short-time cache file;
The cache policy matching module 200 is configured to match the long-time cache file and the short-time cache file with different cache policies for loading;
The cache pool allocation module 300 is configured to allocate the loaded long-time cache files and short-time cache files to different cache pools, where the cache pools include a long-time cache pool and a short-time cache pool, the long-time cache files are allocated to the long-time cache pool, and the short-time cache files are allocated to the short-time cache pool;
An update policy making module 400, configured to make a cache update policy according to the long-time cache pool and the short-time cache pool, and update the long-time cache file and the short-time cache file, including:
The update policies include a first update policy that triggers updating the long-time cache file every first update interval and a second update policy that triggers updating the short-time cache file every second update interval, wherein the first update interval is greater than the second update interval,
The second update interval includes a first update sub-interval and a second update sub-interval, the second update sub-interval being longer than the first update sub-interval,
When the short-time cache pool has not been invoked after the second update sub-interval, the short-time cache file is reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the second update sub-interval, the short-time cache file does not need to be reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the first update sub-interval but does not exceed the second update sub-interval, the loading time is obtained and compared with the data time of the requested query to judge whether the short-time cache file needs to be reloaded and updated;
And the cache calling module 500 is configured to generate a check code according to the type, the cache policy and the update policy when the client calls the long-time cache file or the short-time cache file in the server, and obtain the long-time cache file or the short-time cache file of the cache pool.
In summary, the cache identification and updating system in the above embodiment of the present invention can effectively improve the update-synchronization rate of cache data and the access rate. Specifically, the type dividing module updates the cache data on long-time and short-time schedules, which improves the synchronism of the cache data, reduces the pressure on the database and improves the efficiency of obtaining cached data; the cache policy matching module uses different loading modes to reduce the data-loading pressure, and the update policy making module uses different update-interval settings to ensure timely updating of the cache data and its integrity, reduce the pressure on the database and improve the efficiency of obtaining cached data; the cache calling module improves the security of the database through the check code and realizes database synchronization, while verification of the check code improves the integrity and accuracy of data update synchronization. The technical problems of untimely cache-data updating and synchronization and slow access speed are thereby solved.
Example III
Another aspect of the present invention also provides a readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method described in the first embodiment above.
Example IV
In another aspect, the present invention also provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the steps of the method described in the first embodiment.
The technical features of the above embodiments may be arbitrarily combined, and for brevity, all of the possible combinations of the technical features of the above embodiments are not described, however, they should be considered as the scope of the description of the present specification as long as there is no contradiction between the combinations of the technical features.
Those of skill in the art will appreciate that the logic and/or steps represented in the flow diagrams or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (8)

1. A method for cache identification and updating, the method comprising:
dividing the cache data in the server into types to obtain a long-time cache file and a short-time cache file;
Loading the long-time cache file and the short-time cache file by matching different cache strategies;
The loaded long-time cache files and the loaded short-time cache files are distributed to different cache pools, wherein each cache pool comprises a long-time cache pool and a short-time cache pool, the long-time cache files are distributed to the long-time cache pool, and the short-time cache files are distributed to the short-time cache pool;
According to the long-time cache pool and the short-time cache pool, a cache update strategy is formulated, and the long-time cache file and the short-time cache file are updated, which comprises the following steps:
The update policies include a first update policy that triggers updating the long-time cache file every first update interval and a second update policy that triggers updating the short-time cache file every second update interval, wherein the first update interval is greater than the second update interval,
The second update interval includes a first update sub-interval and a second update sub-interval, the second update sub-interval being longer than the first update sub-interval,
When the short-time cache pool has not been invoked after the second update sub-interval, the short-time cache file is reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the second update sub-interval, the short-time cache file does not need to be reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the first update sub-interval but does not exceed the second update sub-interval, the loading time is obtained and compared with the data time of the requested query to judge whether the short-time cache file needs to be reloaded and updated;
When the client calls the long-time cache file or the short-time cache file in the server, a check code is generated according to the type, the cache policy and the update policy, and the long-time cache file or the short-time cache file of the cache pool is acquired.
2. The method for identifying and updating a cache according to claim 1, wherein the step of loading the long-time cache file and the short-time cache file with different cache policies comprises:
matching the long-time cache file with an initial cache strategy, and loading the long-time cache file when the service of the system is released;
And matching the short-time cache file with an initial cache policy, and loading the short-time cache file when the system is used.
3. The method for identifying and updating cache as recited in claim 1, wherein the step of determining whether the short-time cache file needs to be reloaded with updates by obtaining a loading time and comparing the loading time with a data time of the request query comprises:
The loading time is required to be obtained and compared with the data time required to be queried, and whether the loading time is consistent with the data time is judged;
if yes, the short-time cache file does not need to be reloaded and updated;
If not, reloading and updating the short-time cache file.
4. The method for identifying and updating a cache as claimed in claim 1, wherein when the client calls the long-time cache file or the short-time cache file in the server, a check code is generated according to the type, the cache policy and the update policy, and the step of obtaining the long-time cache file or the short-time cache file of the cache pool specifically includes:
when the client calls the long-time cache file or the short-time cache file in the server, a check code is generated to access the cache pool according to the type, the cache policy and the update policy;
and the cache pool sends a feedback code to the client, and when the client verifies that the verification code is consistent with the feedback code, the long-time cache file or the short-time cache file of the cache pool is obtained by unlocking.
5. The method for identifying and updating a cache as recited in claim 4, wherein when a client invokes the long-time cache file or the short-time cache file in the server, the step of generating a check code to access the cache pool according to the type, the cache policy, and the update policy comprises:
When the client calls the long-time cache file or the short-time cache file in the server, a character string is generated according to the type, the cache policy and the update policy;
And applying Caesar cipher encryption to the character string to obtain a first ciphertext, adding a timestamp and performing a Base64 operation to obtain a second ciphertext, and performing MD5 encryption on the second ciphertext to obtain the check code.
6. A system for cache identification and updating, characterized in that it is adapted to perform a method for cache identification and updating according to any of claims 1 to 5, said system comprising:
the type dividing module is used for dividing the cache data in the server into types to obtain a long-time cache file and a short-time cache file;
the cache policy matching module is used for loading the long-time cache file and the short-time cache file by matching different cache policies;
The cache pool allocation module is used for allocating the loaded long-time cache files and the loaded short-time cache files to different cache pools, wherein the cache pools comprise a long-time cache pool and a short-time cache pool, the long-time cache files are allocated to the long-time cache pool, and the short-time cache files are allocated to the short-time cache pool;
The updating policy making module is configured to make a cache updating policy according to the long-time cache pool and the short-time cache pool, and update the long-time cache file and the short-time cache file, and includes:
The update policies include a first update policy that triggers updating the long-time cache file every first update interval and a second update policy that triggers updating the short-time cache file every second update interval, wherein the first update interval is greater than the second update interval,
The second update interval includes a first update sub-interval and a second update sub-interval, the second update sub-interval being longer than the first update sub-interval,
When the short-time cache pool has not been invoked after the second update sub-interval, the short-time cache file is reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the second update sub-interval, the short-time cache file does not need to be reloaded and updated,
When the time at which the short-time cache pool is called and loaded exceeds the first update sub-interval but does not exceed the second update sub-interval, the loading time is obtained and compared with the data time of the requested query to judge whether the short-time cache file needs to be reloaded and updated;
And the cache calling module is used for generating a check code according to the type, the cache policy and the update policy when the client calls the long-time cache file or the short-time cache file in the server, and acquiring the long-time cache file or the short-time cache file of the cache pool.
7. A readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 5 when the program is executed.
CN202410354317.9A 2024-03-27 2024-03-27 Cache identification and updating method and system Active CN117951044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410354317.9A CN117951044B (en) 2024-03-27 2024-03-27 Cache identification and updating method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410354317.9A CN117951044B (en) 2024-03-27 2024-03-27 Cache identification and updating method and system

Publications (2)

Publication Number Publication Date
CN117951044A true CN117951044A (en) 2024-04-30
CN117951044B CN117951044B (en) 2024-05-31

Family

ID=90798357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410354317.9A Active CN117951044B (en) 2024-03-27 2024-03-27 Cache identification and updating method and system

Country Status (1)

Country Link
CN (1) CN117951044B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040054777A1 (en) * 2002-09-16 2004-03-18 Emmanuel Ackaouy Apparatus and method for a proxy cache
KR20190068859A (en) * 2017-12-11 2019-06-19 엔에이치엔 주식회사 Method and system for synchronizing file update of cache server
CN109614404A (en) * 2018-11-01 2019-04-12 阿里巴巴集团控股有限公司 A kind of data buffering system and method
CN110008190A (en) * 2019-03-21 2019-07-12 武汉理工大学 A kind of periodic small documents caching replacement method
CN113515530A (en) * 2021-03-30 2021-10-19 贵州白山云科技股份有限公司 Cache object updating method, device, equipment and storage medium
CN114968845A (en) * 2022-05-29 2022-08-30 苏州浪潮智能科技有限公司 Cache processing method, system, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ji Meili; Wang Xinhua; Xu Liancheng: "An effective caching strategy in super-node P2P networks", Microcomputer & Its Applications, no. 17, 10 September 2010 (2010-09-10) *
Wang Xin: "Research on the application of caching technology in the Web", Journal of Weifang University, no. 04, 15 August 2011 (2011-08-15) *

Also Published As

Publication number Publication date
CN117951044B (en) 2024-05-31

Similar Documents

Publication Publication Date Title
WO2021073452A1 (en) Blockchain network-based data processing method and device, electronic device and storage medium
US7769992B2 (en) File manipulation during early boot time
US11347933B1 (en) Distributed collaborative storage with operational transformation
CN108289098B (en) Authority management method and device of distributed file system, server and medium
CN110572450A (en) Data synchronization method and device, computer readable storage medium and computer equipment
CN110597541B (en) Interface updating processing method, device, equipment and storage medium based on block chain
CN110555041A (en) Data processing method, data processing device, computer equipment and storage medium
CN111309785A (en) Spring framework-based database access method and device, computer equipment and medium
CN110837648A (en) Document processing method, device and equipment
CN111475519B (en) Data caching method and device
DE102021127237A1 (en) MEASURING CONTAINER
CN115964389A (en) Data processing method, device and equipment based on block chain and readable storage medium
CN117951044B (en) Cache identification and updating method and system
CN114003432A (en) Parameter checking method and device, computer equipment and storage medium
CN113282626A (en) Redis-based data caching method and device, computer equipment and storage medium
DE112021000408T5 (en) PREDICTIVE DELIVERY OF REMOTELY STORED FILES
CN113849119A (en) Storage method, storage device, and computer-readable storage medium
CN110460685B (en) User unique identifier processing method and device, computer equipment and storage medium
CN111966701A (en) Metadata updating method, device, equipment and storage medium
CN115277678B (en) File downloading method, device, computer equipment and storage medium
CN111273962A (en) Configuration management method, device, computer readable storage medium and computer equipment
CN114201370B (en) Webpage file monitoring method and system
CN111078257B (en) H5 application package loading method and related device
CN112637192B (en) Authorization method and system for accessing micro-service
CN115145674A (en) Page jump method, device, equipment and medium based on dynamic anchor point

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant