Disclosure of Invention
Embodiments of the invention provide a data management method and a data management device that ensure the data a user reads from a cache is consistent with the data on a disk.
In a first aspect, an embodiment of the present invention provides a data management method, including:
acquiring a first read request from a user, wherein the first read request is used for reading first execution data with a high real-time requirement;
judging whether the first execution data is stored in a cache, wherein the first execution data stored in the cache is synchronized from the first execution data stored in a disk;
if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user;
if the first execution data is not stored in the cache, further judging whether the first execution data is stored in the disk, wherein the first execution data stored in the disk is stored by a data storage end;
if the first execution data is stored in the disk, returning the first execution data stored in the disk to the user;
and if the first execution data is not stored in the disk, sending alarm information to the user, wherein the alarm information is used for indicating that the first execution data failed to be updated.
Preferably, the method further comprises:
S1: obtaining a write request from the data storage end, wherein the write request is used for indicating that second execution data is to be written into the disk;
S2: judging whether the second execution data has a high real-time requirement;
S3: if the second execution data has a high real-time requirement, further judging whether historical execution data matching the second execution data is stored in the cache; if so, executing S4, otherwise executing S5;
S4: deleting the historical execution data stored in the cache, and executing S5;
S5: storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
S6: synchronizing the second execution data stored in the disk to the cache.
Preferably, after S2, the method further comprises:
if the second execution data does not have a high real-time requirement, storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
synchronizing the second execution data stored in the disk to the cache, and deleting the historical execution data stored in the cache.
Preferably, the method further comprises:
acquiring a second read request from the user, wherein the second read request is used for reading second execution data without a high real-time requirement;
judging whether the second execution data is stored in the cache, wherein the second execution data stored in the cache is synchronized from the second execution data stored in the disk;
if the second execution data is stored in the cache, returning the second execution data stored in the cache to the user;
and if the second execution data is not stored in the cache, returning the historical execution data stored in the cache to the user.
Preferably, before S1, the method further comprises:
setting at least two cache queues;
before the judging, if the second execution data has a high real-time requirement, whether historical execution data matching the second execution data is stored in the cache, the method further comprises:
determining, from the at least two cache queues, a target cache queue for caching the second execution data;
caching the second execution data into the target cache queue;
before S5, the method further comprises:
establishing a connection associated with the target cache queue when the second execution data in the target cache queue is in a readable state;
S5 and S6 are performed through the connection.
Preferably, after the setting of the at least two cache queues and before the obtaining of the write request from the data storage end, the method further comprises:
setting a queue identifier for each cache queue respectively;
setting at least two hash values;
determining the association relation between each queue identifier and at least one hash value;
after the obtaining of the write request from the data storage end and before the establishing of the connection associated with the target cache queue, the method further comprises:
determining a unique identification of the second execution data;
carrying out hash calculation on the unique identifier to obtain a hash value of the second execution data;
the determining a target cache queue for storing the second execution data from the at least two cache queues includes:
determining a target queue identification associated with the hash value of the second execution data according to the association relation;
and taking the cache queue indicated by the target queue identification as a target cache queue.
Preferably,
the steps of judging whether the first execution data is stored in the cache, returning the first execution data stored in the cache to the user if so, and further judging whether the first execution data is stored in the disk if not, comprise:
determining whether the first execution data is stored in the cache within a first duration;
if the first execution data is stored in the cache within the first duration, returning the first execution data stored in the cache to the user;
if the first execution data is not stored in the cache within the first duration, further judging whether the first execution data is stored in the disk;
the steps of judging whether the first execution data is stored in the disk, returning the first execution data stored in the disk to the user if so, and sending alarm information to the user if not, comprise:
determining whether the first execution data is stored in the disk within a second duration;
if the first execution data is stored in the disk within the second duration, returning the first execution data stored in the disk to the user;
if the first execution data is not stored in the disk within the second duration, sending alarm information to the user;
wherein the sum of the first duration and the second duration is not greater than the response duration of the first read request.
In a second aspect, an embodiment of the present invention provides a data management apparatus, including:
a request management module, configured to acquire a first read request from a user, wherein the first read request is used for reading first execution data with a high real-time requirement;
the cache management module is used for judging whether the first execution data to be read by the first read request acquired by the request management module is stored in a cache or not, wherein the first execution data stored in the cache is synchronized from the first execution data stored in a disk; if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user; if the first execution data is not stored in the cache, triggering a disk management module;
the disk management module is configured to determine, when triggered by the cache management module under the condition that the first execution data is not stored in the cache, whether the first execution data is stored in a disk, wherein the first execution data stored in the disk is stored by a data storage end; if the first execution data is stored in the disk, return the first execution data stored in the disk to the user; and if the first execution data is not stored in the disk, send alarm information to the user, wherein the alarm information is used for indicating that the first execution data failed to be updated.
Preferably, the apparatus further comprises: a data attribute management module;
the request management module is further configured to obtain a write request from the data storage end, where the write request is used to instruct writing second execution data into the disk;
the data attribute management module is used for judging whether the second execution data has a high real-time requirement; if the second execution data has a high real-time requirement, further judging whether historical execution data matching the second execution data is stored in the cache; if so, triggering the cache management module to execute S4, otherwise triggering the disk management module to execute S5;
the cache management module is further configured to execute S4: deleting the historical execution data stored in the cache and triggering the disk management module to execute S5;
the disk management module is further configured, when triggered, to execute S5: storing the second execution data into the disk, and deleting the historical execution data stored in the disk; and S6: synchronizing the second execution data stored in the disk to the cache.
Preferably,
the cache management module is configured to determine, within a first duration, whether the first execution data is stored in the cache; if the first execution data is stored in the cache within the first duration, return the first execution data stored in the cache to the user; and if the first execution data is not stored in the cache within the first duration, trigger the disk management module;
the disk management module is configured to determine, within a second duration, whether the first execution data is stored in a disk when triggered by the cache management module under the condition that the first execution data is not stored in the cache within the first duration; if the first execution data is stored in the disk within the second duration, return the first execution data stored in the disk to the user; and if the first execution data is not stored in the disk within the second duration, send alarm information to the user; wherein the sum of the first duration and the second duration is not greater than the response duration of the first read request.
Embodiments of the invention provide a data management method and a data management device. When first execution data with a high real-time requirement is requested, the data is not read directly from the disk; instead, it is first determined whether the first execution data synchronized from the disk is stored in the cache, so that a cached copy can be returned to the user quickly and the number of disk accesses is minimized. If the first execution data is not stored in the cache, the first execution data stored in the disk is returned to the user. If the first execution data is not stored in the disk either, it can be determined that updating the first execution data to the disk failed, and alarm information is sent to prompt the user that the update failed. Because the data stored in the cache is synchronized from the data stored in the disk, the data the user reads from the cache is guaranteed to be consistent with the data in the disk.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, embodiments of the present invention; all other embodiments obtained by those skilled in the art without inventive effort based on these embodiments fall within the scope of protection of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a data management method, including:
step 101: acquiring a first read request from a user, wherein the first read request is used for reading first execution data with a high real-time requirement;
step 102: judging whether the first execution data is stored in a cache or not, wherein the first execution data stored in the cache is synchronized from the first execution data stored in a disk;
step 103: if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user;
step 104: if the first execution data is not stored in the cache, further judging whether the first execution data is stored in a disk, wherein the first execution data stored in the disk is stored by a data storage end;
step 105: if the first execution data is stored in the disk, returning the first execution data stored in the disk to the user;
step 106: and if the first execution data is not stored in the disk, sending alarm information to the user, wherein the alarm information is used for indicating that the first execution data failed to be updated.
In the embodiment of the invention, when first execution data with a high real-time requirement is requested, the data is not read directly from the disk; instead, it is first determined whether the first execution data synchronized from the disk is stored in the cache, so that a cached copy can be returned to the user quickly and the number of disk accesses is minimized. If the first execution data is not stored in the cache, the first execution data stored in the disk is returned to the user. If the first execution data is not stored in the disk either, it can be determined that updating the first execution data to the disk failed, and alarm information is sent to prompt the user that the update failed. Because the data stored in the cache is synchronized from the data stored in the disk, the data the user reads from the cache is guaranteed to be consistent with the data in the disk.
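The read path described above (cache first, then disk, else alarm) can be sketched as follows. This is a minimal illustration only: the dicts standing in for the cache and the disk, and the names `read_first_execution_data` and `ALARM`, are assumptions for the sketch, not part of the invention's text.

```python
# Illustrative sketch of steps 101-106: dict-backed stand-ins for the
# cache and the disk; all names here are hypothetical.
cache = {}
disk = {}
ALARM = "alarm: update of the requested data failed"

def read_first_execution_data(key):
    """Read high-real-time data: cache first, then disk, else alarm."""
    if key in cache:          # steps 102/103: hit in the cache
        return cache[key]
    if key in disk:           # steps 104/105: fall back to the disk
        return disk[key]
    return ALARM              # step 106: the data never reached the disk
```

A cache hit avoids the disk entirely, which is the source of the reduced disk-access count the embodiment claims.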
In one embodiment of the present invention, further comprising:
S1: obtaining a write request from the data storage end, wherein the write request is used for indicating that second execution data is to be written into the disk;
S2: judging whether the second execution data has a high real-time requirement;
S3: if the second execution data has a high real-time requirement, further judging whether historical execution data matching the second execution data is stored in the cache; if so, executing S4, otherwise executing S5;
S4: deleting the historical execution data stored in the cache, and executing S5;
S5: storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
S6: synchronizing the second execution data stored in the disk to the cache.
In the embodiment of the invention, for second execution data with a high real-time requirement, before the second execution data is stored in the disk and the cache, any historical execution data in the cache that matches the second execution data is deleted first. The second execution data is then stored in the disk, after which the matching historical execution data in the disk is deleted; this ordering avoids the situation where the historical disk data is deleted before the second execution data has been stored successfully, which would interrupt the user's data reading service. Finally, the second execution data stored in the disk is synchronized to the cache, ensuring that the data in the cache is consistent with the data in the disk.
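Steps S3 through S6 for high-real-time data can be sketched as below. The sketch collapses "store new data, delete historical data" into a single overwrite because a dict stands in for the disk; `write_high_realtime` and the dict names are assumed, not taken from the source.

```python
# Hedged sketch of S3-S6 for data with a high real-time requirement.
cache = {}
disk = {}

def write_high_realtime(key, value):
    # S3/S4: delete matching historical data from the cache first,
    # so a reader cannot see a stale cached copy while the disk updates
    cache.pop(key, None)
    # S5: store the new data to the disk; overwriting the dict entry
    # replaces the historical copy (store-then-delete in one step here)
    disk[key] = value
    # S6: synchronize the disk copy back into the cache
    cache[key] = disk[key]
```

The key property is the ordering: the cache entry is gone before the disk write, and is only repopulated from the disk afterwards, so the cache never holds data the disk does not.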
Specifically, if the number of failed attempts to store first execution data with a high real-time requirement to the disk reaches a threshold, the historical execution data stored in the disk that matches the first execution data can be synchronized into the cache after a preset cache update period (for example, 2 hours) has elapsed, to ensure that the data in the cache is consistent with the data in the disk.
In an embodiment of the present invention, after S2, further comprising:
if the second execution data does not have a high real-time requirement, storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
synchronizing the second execution data stored in the disk to the cache, and deleting the historical execution data stored in the cache.
In the embodiment of the invention, when the second execution data to be written does not have a high real-time requirement, it is read infrequently. The second execution data can therefore be stored in the disk first, and the matching historical execution data in the disk deleted, so that only one valid copy remains on the disk. The second execution data stored in the disk is then synchronized into the cache, and only afterwards is the historical execution data in the cache deleted. This avoids deleting the cached historical data first: if the disk write then failed, the historical execution data stored in the disk would have to be synchronized into the cache again. The ordering thus simplifies the data update operation.
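The non-real-time write order (disk first, then sync to cache, history replaced last) can be sketched as follows. The `log` list only records operation order for illustration; all names are assumptions.

```python
# Sketch of the low-real-time write ordering; names are hypothetical.
cache = {}
disk = {}
log = []

def write_low_realtime(key, value):
    disk[key] = value          # store to the disk, replacing history there
    log.append("disk_write")
    cache[key] = disk[key]     # sync the disk copy into the cache; this
    log.append("cache_sync")   # replaces the cached historical copy last
    # because the cached copy is only replaced after the disk write
    # succeeded, a failed disk write never leaves the cache emptied
```

Contrast with the high-real-time path, where the cache entry is deleted *before* the disk write; here the stale cached copy is deliberately kept until the new data is safely on disk.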
In one embodiment of the present invention, further comprising:
acquiring a second read request from the user, wherein the second read request is used for reading second execution data without a high real-time requirement;
judging whether the second execution data is stored in the cache or not, wherein the second execution data stored in the cache is synchronized from the second execution data stored in the disk;
if the second execution data is stored in the cache, returning the second execution data stored in the cache to the user;
and if the second execution data is not stored in the cache, returning the historical execution data stored in the cache to the user.
In the embodiment of the invention, the user may need to read second execution data without a high real-time requirement, for example the explanation of a dictionary entry or a company's workflow. Because such data changes little between updates, the impact on the user of reading a slightly stale copy is small, so the second execution data can be read preferentially from the cache and returned to the user. When the second execution data is not stored in the cache, the copy stored in the disk has not yet been synchronized into the cache; in that case, the historical execution data in the cache that matches the second execution data can be returned to the user, so that the user is answered as soon as possible.
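This second read path (cache hit, else return the cached historical copy, never the disk) can be sketched as below. Keeping history in a separate dict is an assumption of the sketch; the source only says historical execution data remains available in the cache.

```python
# Sketch of the second read path for data without a high real-time
# requirement; `history` and `read_low_realtime` are assumed names.
cache = {}
history = {}   # cached historical execution data, keyed like `cache`

def read_low_realtime(key):
    if key in cache:
        return cache[key]        # current synchronized copy
    return history.get(key)     # fall back to the historical copy
```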
In an embodiment of the present invention, before the step S1, the method further includes:
setting at least two cache queues;
before said determining whether historical execution data matching said second execution data is stored in said cache if said second execution data has a high real-time requirement, further comprising:
determining a target cache queue for caching second execution data from the at least two cache queues;
caching the second execution data into the target cache queue;
before S5, further comprising:
establishing a connection associated with the target cache queue when the second execution data in the target cache queue is in a readable state;
S5 and S6 are performed through the connection.
In the embodiment of the invention, at least two cache queues are set. When the data storage end issues a write request, the second execution data indicated by the write request is matched to a corresponding target cache queue, so that data to be written is distributed across the different cache queues; a connection associated with the target cache queue is then established and used to complete the write service for the second execution data. The data to be written can thus be distributed evenly, and multiple cache queues can work simultaneously, so that the data writing service is completed, and the data returned to the user, as soon as possible. This also avoids the situation where multiple connections must be established, and memory occupied, for a single write service on the same data.
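A minimal sketch of the "cache queue + per-queue connection" idea follows. The connection is simulated by recording which queue it was opened for; in a real system it would be a disk or database connection, and all names here are illustrative assumptions.

```python
# Writes are spread across several queues; a connection is only
# "opened" once a queue actually holds readable data.
from collections import deque

queues = [deque(), deque()]      # at least two cache queues
connections_opened = []          # records one connection per drained queue

def enqueue(queue_index, data):
    queues[queue_index].append(data)

def drain(queue_index):
    """Open one connection for the queue and write everything it holds."""
    q = queues[queue_index]
    if not q:
        return []                # nothing readable: no connection needed
    connections_opened.append(queue_index)
    written = []
    while q:
        written.append(q.popleft())  # S5/S6 would run over this connection
    return written
```

Note that draining a queue with several pending items uses a single connection, which is the memory saving the paragraph above describes.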
In an embodiment of the present invention, after the setting of the at least two cache queues and before the obtaining of the write request from the data storage end, the method further includes:
setting a queue identifier of each cache queue respectively;
setting at least two hash values;
determining the association relation between each queue identifier and at least one hash value;
after the obtaining of the write request from the data storage end and before the establishing of the connection associated with the target cache queue, the method further comprises:
determining a unique identification of the second execution data;
carrying out hash calculation on the unique identifier to obtain a hash value of the second execution data;
the determining a target cache queue for storing the second execution data from the at least two cache queues includes:
determining a target queue identification associated with the hash value of the second execution data according to the association relation;
and taking the cache queue indicated by the target queue identification as a target cache queue.
In the embodiment of the invention, when the second execution data to be written has a high real-time requirement, its unique identifier is determined and hashed to obtain the hash value of the second execution data. The target queue identifier associated with that hash value can then be determined from the queue identifiers of the different cache queues and the association relation between each queue identifier and at least one hash value, and hence the target cache queue. Data to be written is thereby distributed evenly to different cache queues, and the data writing service is completed as soon as possible through the simultaneous work of multiple queues.
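The hash-to-queue mapping can be sketched as below. Using MD5 over the unique identifier and taking the result modulo the queue count is one concrete choice, assumed for the sketch; the source only requires some association between hash values and queue identifiers.

```python
# Sketch: a record's unique identifier selects its target queue
# identifier via a stable hash. All names are hypothetical.
import hashlib

NUM_QUEUES = 4                   # at least two cache queues

def target_queue(unique_id: str) -> int:
    """Map a record's unique identifier to a queue identifier."""
    digest = hashlib.md5(unique_id.encode("utf-8")).hexdigest()
    hash_value = int(digest, 16)          # hash value of the data
    return hash_value % NUM_QUEUES        # association: hash -> queue id
```

A stable hash (rather than Python's built-in, seed-randomized `hash`) ensures the same record always lands in the same queue across runs.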
In an embodiment of the present invention, the steps of determining whether the first execution data is stored in the cache, returning the first execution data stored in the cache to the user if so, and further determining whether the first execution data is stored in the disk if not, include:
determining whether the first execution data is stored in the cache within a first duration;
if the first execution data is stored in the cache within the first duration, returning the first execution data stored in the cache to the user;
if the first execution data is not stored in the cache within the first duration, further judging whether the first execution data is stored in the disk;
the steps of judging whether the first execution data is stored in the disk, returning the first execution data stored in the disk to the user if so, and sending alarm information to the user if not, include:
determining whether the first execution data is stored in the disk within a second duration;
if the first execution data is stored in the disk within the second duration, returning the first execution data stored in the disk to the user;
if the first execution data is not stored in the disk within the second duration, sending alarm information to the user;
wherein the sum of the first duration and the second duration is not greater than the response duration of the first read request.
In the embodiment of the invention, when it is determined within the first duration (for example, 3 s) that the first execution data synchronized from the disk is stored in the cache, the first execution data in the cache may be returned to the user. Otherwise, it is determined within the second duration (for example, 4 s) whether the first execution data stored by the data storage end is present in the disk. If it is, the first execution data in the underlying disk is returned to the user to serve the read request; if not, alarm information indicating that the first execution data failed to be updated in the disk is sent to the user, so that the user can confirm that the update did not succeed. Because the first read request sent by the user has a corresponding response duration, either the read first execution data or the alarm information must be returned to the user within that response duration.
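The budgeted lookup can be sketched as follows: the cache check gets the first duration, the disk check the second, and their sum must not exceed the response duration. The concrete budgets and all names are illustrative assumptions; the deadline logic here only demonstrates the shape, since a real lookup would poll or await I/O until its deadline.

```python
# Sketch of the first/second-duration read path; names are hypothetical.
import time

cache, disk = {}, {}
FIRST_DURATION = 3.0             # e.g. 3 s for the cache check
SECOND_DURATION = 4.0            # e.g. 4 s for the disk check
RESPONSE_DURATION = 8.0          # the request's overall response budget
assert FIRST_DURATION + SECOND_DURATION <= RESPONSE_DURATION

def timed_lookup(store, key, budget):
    deadline = time.monotonic() + budget
    # a real lookup would poll/await I/O until the deadline expires
    if time.monotonic() < deadline and key in store:
        return store[key]
    return None

def read_with_deadlines(key):
    value = timed_lookup(cache, key, FIRST_DURATION)
    if value is not None:
        return value
    value = timed_lookup(disk, key, SECOND_DURATION)
    if value is not None:
        return value
    return "alarm: update of the requested data failed"
```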
As shown in fig. 2, to illustrate the technical solution and advantages of the present invention more clearly, a data management method provided by the present invention is described in detail below, and may specifically include the following steps:
step 201: a first read request from a user for reading first execution data of a high real-time requirement is obtained.
Step 202: whether the first execution data is stored in the cache is judged, if yes, step 203 is executed, otherwise, step 204 is executed.
Step 203: and returning the first execution data stored in the cache to the user, and ending the current flow.
Step 204: it is determined whether the disk stores the first execution data stored by the data storage end; if so, step 205 is executed, and if not, step 206 is executed.
Step 205: and returning the first execution data stored in the disk to the user, and ending the current flow.
Step 206: and sending alarm information to the user, wherein the alarm information is used for indicating that the first execution data update fails.
Specifically, when the user requests to read first execution data with a high real-time requirement (for example, data with high real-time demands such as a shopping platform's inventory or monetary amounts), the first execution data is preferentially read from the cache and returned to the user, reducing the number of disk reads. When the cache does not hold the first execution data synchronized from the disk, the first execution data is read from the underlying disk and returned to the user, meeting the user's reading requirement. When the first execution data does not exist in the disk either, an exception occurred when the first execution data was written to the disk, so alarm information needs to be sent to prompt the user that the first execution data was not updated successfully in the disk.
Specifically, a cache update period (for example, 2 h) can be preset for the cache. When synchronizing the data in the disk to the cache fails, the disk data can be synchronized into the cache again once the cache update period expires, ensuring consistency between the data in the disk and the data in the cache.
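The delayed re-synchronization can be sketched as below: a failed sync is recorded, and the disk copy is pushed into the cache again once the preset period has expired. The period is shrunk from hours to milliseconds purely for illustration, and all names are assumptions.

```python
# Sketch of retry-after-expiry for failed disk-to-cache syncs.
import time

CACHE_UPDATE_PERIOD = 0.01       # stands in for e.g. 2 hours
cache, disk = {}, {}
failed_at = {}                   # key -> time the sync failed

def sync_failed(key):
    """Record that syncing this key from disk to cache failed."""
    failed_at[key] = time.monotonic()

def retry_expired_syncs():
    """Re-sync disk entries whose failure timestamp has expired."""
    now = time.monotonic()
    for key, t in list(failed_at.items()):
        if now - t >= CACHE_UPDATE_PERIOD:
            cache[key] = disk[key]   # synchronize the disk copy again
            del failed_at[key]
```

In practice `retry_expired_syncs` would run on a timer or background worker; here it is called explicitly for clarity.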
Specifically, the determination of whether the first execution data with the high real-time requirement is stored in the cache needs to be completed within the first duration (for example, 3 s), and the determination of whether the first execution data stored by the data storage end is stored in the disk needs to be completed within the second duration. Because the first read request sent by the user has a corresponding response duration, the sum of the first duration and the second duration must not be greater than that response duration; that is, either the read first execution data or the alarm information indicating that the first execution data is not stored in the disk is returned to the user within the response duration.
Step 207: when a write request from the data storage end for indicating writing the second execution data to the disk is obtained, it is determined whether the second execution data has a high real-time requirement, if so, step 208 is executed, otherwise, step 214 is executed.
Specifically, the data storage end may send a data write request after the user requests to read the data, or may send a further write request after data has been written; in this embodiment, the data storage end sends a write request after the user fails to read the data.
Step 208: determining a target cache queue for caching the second execution data from the preset at least two cache queues, and caching the second execution data into the target cache queue, and executing step 209.
Specifically, when the data storage end sends a data write request and the second execution data indicated by the write request has a high real-time requirement, hash calculation is first performed on the unique identifier of the second execution data to obtain its hash value. Based on the preset association relation between each of the at least two queue identifiers and at least one hash value, the target queue identifier associated with the hash value of the second execution data is determined; the cache queue indicated by the target queue identifier is then taken as the target cache queue matching the second execution data, and the second execution data is cached in the target cache queue to await writing.
Specifically, the association relation between each queue identifier and a hash value may be, for example, an equality relation or a multiple relation, but is not limited thereto.
Step 209: it is determined whether the cache stores historical execution data matched with the second execution data, if so, step 210 is executed, otherwise step 211 is executed.
Step 210: the history execution data stored in the cache is deleted, and step 211 is executed.
Step 211: when the second execution data in the target cache queue is in a readable state, a connection associated with the target cache queue is established, and step 212 is performed.
Step 212: the second execution data is stored in the disk through the connection, and the history execution data stored in the disk is deleted, and step 213 is performed.
Step 213: and synchronizing the second execution data stored in the disk into the cache through connection, and ending the current flow.
Step 214: the second execution data is stored in the disk, and the history execution data stored in the disk is deleted, and step 215 is performed.
Step 215: and synchronizing the second execution data stored in the disk to the cache, and deleting the historical execution data stored in the cache.
Specifically, after the second execution data with the high real-time requirement is cached in the target cache queue, the historical execution data in the cache that matches it is deleted. This avoids the situation where the user reads data from the cache while new data is being written to the disk, which would make the data read from the cache inconsistent with the data stored in the disk. When the second execution data in the target cache queue is in a readable state and the write service can proceed, a connection associated with the target cache queue is established; through the connection, the second execution data is stored in the disk and the matching historical execution data in the disk is deleted, so that only one valid copy remains on the disk. Finally, the second execution data in the disk is synchronized into the cache, achieving consistency between the data in the cache and the data in the disk.
Step 216: when a second read request from the user for reading second execution data without a high real-time requirement is obtained, it is determined whether the second execution data synchronized from the disk is stored in the cache; if so, step 217 is executed, otherwise step 218 is executed.
Specifically, the user may read second execution data that has no high real-time requirement under any condition. In this embodiment, after the second execution data with a high real-time requirement has been written to the disk and synchronized to the cache, the second read request from the user for data without a high real-time requirement is obtained.
Step 217: and returning the second execution data stored in the cache to the user, and ending the current flow.
Step 218: and returning the historical execution data stored in the cache to the user, and ending the current flow.
Specifically, when the second execution data read by the user has no high real-time requirement, each change to such data differs little from the previous version, the data changes infrequently, and the user reads it infrequently. Therefore, when the second execution data synchronized from the disk is not stored in the cache, the historical execution data in the cache that matches the second execution data is returned to the user, so as to satisfy the user's read requirement.
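The fallback of steps 216 to 218 can be sketched as follows; a minimal illustration assuming a dictionary stands in for the cache and hypothetical version keys distinguish the synchronized copy from the historical entry:

```python
def read_low_realtime(cache, key, history_key):
    # steps 216/217: prefer the copy synchronized from the disk if present
    if key in cache:
        return cache[key]
    # step 218: otherwise return the matched historical entry
    return cache.get(history_key)

# only the historical entry has been synchronized into the cache so far
cache = {"terms:v1": "history-value"}
```

Serving the historical entry keeps the response fast for data whose successive versions differ little, at the cost of a slightly stale answer.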
In summary, the scheme can update data with a lower real-time requirement into the cache in a short time or in near real time, using a cooperative three-stage strategy of "cache queue + data update service + cache update notification". For data with a higher real-time requirement, the strategy of updating the cache and the disk simultaneously is abandoned in favor of "deleting the cache entry, running the data update service in parallel, and delaying reads of the data being updated", which ensures that the data update is not interrupted. Even if an abnormal condition causes a user operation to fail, the cache and the disk will not become inconsistent; the user only needs to retry the operation, or the system can provide an automatic retry function.
As shown in fig. 3, an embodiment of the present invention provides a data management apparatus, including:
a request management module 301, configured to obtain a first read request from a user, where the first read request is used to read first execution data with high real-time requirements;
a cache management module 302, configured to determine whether the first execution data to be read by the first read request acquired by the request management module 301 is stored in a cache, where the first execution data stored in the cache is synchronized from the first execution data stored in a disk; if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user; if the first execution data is not stored in the cache, triggering a disk management module 303;
the disk management module 303 is configured to determine, when triggered by the cache management module 302 because the first execution data is not stored in the cache, whether the first execution data is stored in a disk, where the first execution data stored in the disk is stored by a data storage end; if the first execution data is stored in the disk, return the first execution data stored in the disk to the user; and if the first execution data is not stored in the disk, send alarm information to the user, where the alarm information is used for indicating that updating the first execution data failed.
In the embodiment of the invention, if the request management module acquires a first read request from the user for first execution data with a high real-time requirement, the disk management module does not directly obtain the first execution data from the disk. Instead, the cache management module first determines whether the first execution data synchronized from the disk is stored in the cache, so that the first execution data found in the cache can be returned to the user quickly and the number of disk accesses is reduced to the greatest extent. If the first execution data is not stored in the cache, the disk management module is triggered to return the first execution data stored in the disk to the user. When it is determined that the first execution data is not stored in the disk either, it can be concluded that updating the first execution data to the disk failed, and alarm information is sent to prompt the user that the update of the first execution data in the disk failed. Because the data stored in the cache is synchronized from the data stored in the disk, the data the user reads from the cache is guaranteed to be consistent with the data in the disk.
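The read path implemented by the three modules can be sketched as follows; a minimal illustration in which dictionaries stand in for the cache and the disk, and the function and key names are hypothetical:

```python
cache = {}                       # first execution data not yet synchronized
disk = {"metric_cpu": "97%"}     # the data storage end has written it to disk

def read_high_realtime(key):
    if key in cache:             # cache management module: fast path, cache hit
        return cache[key]
    if key in disk:              # disk management module: fall back to the disk copy
        return disk[key]
    # neither copy exists: the update must have failed, so alarm the user
    return "ALARM: update of '%s' failed" % key
```

Checking the cache before the disk both shortens the common-case response and reduces disk accesses, while the final alarm branch surfaces a failed update instead of silently returning nothing.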
As shown in fig. 4, in an embodiment of the present invention, the data management apparatus further includes: a data attribute management module 401;
the request management module 301 is further configured to obtain a write request from the data storage side, where the write request is used to instruct writing second execution data to the disk;
the data attribute management module 401 is configured to determine whether the second execution data has a high real-time requirement; if the second execution data has a high real-time requirement, further determine whether historical execution data matched with the second execution data is stored in the cache; if so, trigger the cache management module 302 to execute S4, otherwise trigger the disk management module 303 to execute S5;
the cache management module 302 is further configured to execute S4: deleting the historical execution data stored in the cache and triggering the disk management module 303 to execute S5;
the disk management module is further configured to execute S5 when triggered: storing the second execution data into the disk, and deleting the historical execution data stored in the disk; s6: and synchronizing the second execution data stored in the disk to the cache.
In an embodiment of the present invention, the cache management module is configured to determine whether the first execution data is stored in the cache in a first duration; if the first execution data is stored in the cache in the first duration, returning the first execution data stored in the cache to the user; triggering the disk management module if the first execution data is not stored in the cache within the first duration;
the disk management module is used for determining whether the first execution data is stored in a disk in a second duration when triggered by the cache management module under the condition that the first execution data is not stored in the cache in the first duration; if the first execution data is stored in the disk in the second time period, returning the first execution data stored in the disk to the user; if the first execution data is not stored in the disk in the second time period, sending alarm information to the user; wherein a sum of the first duration and the second duration is not greater than a response duration of the first read request.
The embodiments of the invention have at least the following beneficial effects:
1. In an embodiment of the present invention, if a read request for first execution data with a high real-time requirement is obtained, the first execution data is not obtained directly from the disk; it is first determined whether the first execution data synchronized from the disk is stored in the cache, so that the first execution data found in the cache can be returned to the user quickly and the number of disk accesses is reduced to the greatest extent. If the first execution data is not stored in the cache, the first execution data stored in the disk is returned to the user; and when the first execution data is not stored in the disk either, it can be determined that updating the first execution data to the disk failed, in which case alarm information is sent to prompt the user that the update of the first execution data in the disk failed. Because the data stored in the cache is synchronized from the data stored in the disk, the data the user reads from the cache is guaranteed to be consistent with the data in the disk.
2. In an embodiment of the present invention, for second execution data with a high real-time requirement, before the second execution data is stored in the disk and the cache, if historical execution data matched with the second execution data exists in the cache, that historical execution data is deleted from the cache first; the second execution data is then stored in the disk, and only afterwards is the historical execution data in the disk deleted. This ordering avoids the situation in which the historical execution data in the disk is deleted even though storing the second execution data failed, which would interrupt the user's data reading service. Finally, the second execution data stored in the disk is synchronized to the cache, ensuring that the data in the cache is consistent with the data in the disk.
3. In an embodiment of the present invention, when the second execution data to be written has no high real-time requirement, it is read infrequently, so the second execution data may be stored in the disk first, after which the historical execution data in the disk matched with the second execution data is deleted so that only one valid copy is kept in the disk; the second execution data in the disk is then synchronized into the cache, and finally the historical execution data in the cache is deleted. Deleting the cache's historical execution data last avoids the case in which the cache entry is deleted first, the disk write fails, and the historical execution data on the disk must be synchronized into the cache again, thereby simplifying the data update operation.
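The disk-first ordering for data without a high real-time requirement can be sketched as follows; a minimal illustration with dictionaries standing in for the cache and the disk, and a hypothetical `disk_ok` flag simulating a failed disk write:

```python
cache = {"workflow": "history-value"}
disk = {"workflow": "history-value"}

def write_low_realtime(key, value, disk_ok=True):
    # disk first: a failed disk write leaves the readable history entries
    # untouched, so nothing has to be synchronized back into the cache
    if not disk_ok:          # hypothetical flag simulating a disk failure
        return False
    disk[key] = value        # store, replacing the history copy on the disk
    cache[key] = disk[key]   # synchronize into the cache, replacing its history
    return True
```

Compared with the cache-first ordering used for high-real-time data, this keeps the historical entry readable throughout and needs no re-synchronization on failure.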
4. In an embodiment of the present invention, when a user needs to read second execution data without a high real-time requirement, for example data such as the interpretation of certain terms or the workflow of a company, the data differs little before and after an update and the impact on the user is therefore small, so the second execution data can preferentially be read from the cache and returned to the user. When the second execution data is not stored in the cache, that is, the second execution data stored in the disk has not yet been synchronized into the cache, the historical execution data in the cache matched with the second execution data can be returned to the user, so as to respond to the user as soon as possible.
5. In an embodiment of the present invention, by setting at least two cache queues, when the data storage end issues a write request, the second execution data indicated by the write request is matched with a corresponding target cache queue among the cache queues, so that data to be written is distributed across the different cache queues; a connection associated with the target cache queue is then established to complete the write service of the second execution data. The data to be written can thus be distributed in a balanced way, and multiple cache queues can work simultaneously, so that the write service is completed as soon as possible and the data can be returned to the user as soon as possible. In addition, this avoids the situation in which multiple connections, each occupying memory, are established for a single write service of the same data.
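The distribution of writes across cache queues can be sketched as follows; a minimal illustration that hashes the key of the data so that repeated writes of the same data always share one target queue (the queue count and all names are assumptions, not the patent's implementation):

```python
from queue import Queue

NUM_QUEUES = 4                                 # queue count chosen for illustration
queues = [Queue() for _ in range(NUM_QUEUES)]  # the "at least two cache queues"

def dispatch_write(key, value):
    # hash the key so every write of the same data lands in one target queue,
    # sharing a single connection instead of opening one per write
    index = hash(key) % NUM_QUEUES
    queues[index].put((key, value))
    return index

first_index = dispatch_write("item_7", "v1")
second_index = dispatch_write("item_7", "v2")
```

Keying the queue choice on the data itself is what lets one connection per queue serve all writes of the same data while still balancing distinct keys across queues.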
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Finally, it should be noted that: the foregoing description is only illustrative of the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.