CN110764697B - Data management method and device

Data management method and device

Info

Publication number
CN110764697B
Authority
CN
China
Prior art keywords
execution data
stored
cache
disk
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910930718.3A
Other languages
Chinese (zh)
Other versions
CN110764697A (en)
Inventor
朱志强 (Zhu Zhiqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wanghai Kangxin Beijing Technology Co ltd
Original Assignee
Wanghai Kangxin Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wanghai Kangxin Beijing Technology Co ltd
Priority to CN201910930718.3A
Publication of CN110764697A
Application granted
Publication of CN110764697B
Legal status: Active

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems, specifically adapted to achieve a particular effect
    • G06F 3/0614: Improving the reliability of storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656: Data buffering arrangements
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0673: Single storage device
    • G06F 3/0674: Disk device
    • G06F 3/0676: Magnetic disk device
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, DB Structures and FS Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a data management method and apparatus, comprising the following steps: acquiring, from a user, a first read request for reading first execution data with a high real-time requirement; judging whether the first execution data, synchronized from a disk, is stored in a cache; if the first execution data is stored in the cache, returning the cached first execution data to the user; if it is not, further judging whether the first execution data is stored in the disk, where the first execution data in the disk was stored by a data storage end; if the first execution data is stored in the disk, returning it to the user; and if it is not stored in the disk either, sending alarm information to the user indicating that the update of the first execution data failed. This scheme ensures that the data a user reads from the cache is consistent with the data in the disk.

Description

Data management method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data management method and apparatus.
Background
Data caching refers to high-speed memory associated with a disk, in which some data is temporarily staged, like a buffer, so that it can be read and re-read quickly.
For data with high real-time requirements, the industry currently updates the disk and the cache in parallel, so that the data in the disk stays consistent with the data in the cache.
However, when the write service is abnormal, the data in the cache may be updated successfully while the data in the disk is not, leaving the cache inconsistent with the disk. For example, the updated cache may indicate that the stock of item X is 0 while the disk update failed and the disk still records a stock of 3; a user who then reads from the cache receives data that does not match the disk.
Disclosure of Invention
Embodiments of the present invention provide a data management method and a data management apparatus, which can ensure that the data a user reads from the cache is consistent with the data in the disk.
In a first aspect, an embodiment of the present invention provides a data management method, including:
acquiring a first read request from a user, the first read request being for reading first execution data with a high real-time requirement;
judging whether the first execution data is stored in a cache, the first execution data stored in the cache being synchronized from the first execution data stored in a disk;
if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user;
if the first execution data is not stored in the cache, further judging whether the first execution data is stored in the disk, the first execution data stored in the disk being stored by a data storage end;
if the first execution data is stored in the disk, returning the first execution data stored in the disk to the user;
and if the first execution data is not stored in the disk, sending alarm information to the user, the alarm information indicating that the update of the first execution data failed.
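The read path above can be sketched as follows. This is a minimal illustration only, with `cache` and `disk` as hypothetical dict stand-ins for the real stores; the names are not from the patent:

```python
def read_realtime(key, cache, disk):
    """Serve a read of high-real-time data: cache first, then disk, else alarm.

    `cache` and `disk` are hypothetical dict stand-ins for the real stores.
    Returns (value, alarm_message).
    """
    if key in cache:               # cached copy was synchronized from the disk
        return cache[key], None
    if key in disk:                # fall back to the copy the data storage end wrote
        return disk[key], None
    # neither store holds the data, so the disk update must have failed
    return None, "update of the first execution data failed"
```

For example, `read_realtime("b", {"a": 1}, {"a": 1, "b": 2})` is served from the disk and returns `(2, None)`.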
Preferably, the method further comprises:
S1: obtaining a write request from the data storage end, the write request instructing that second execution data be written into the disk;
S2: judging whether the second execution data has a high real-time requirement;
S3: if the second execution data has a high real-time requirement, further judging whether historical execution data matching the second execution data is stored in the cache; if so, executing S4, otherwise executing S5;
S4: deleting the historical execution data stored in the cache, and executing S5;
S5: storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
S6: synchronizing the second execution data stored in the disk to the cache.
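Steps S3 to S6 for high-real-time data can be sketched as follows, again with hypothetical dict stand-ins for the cache and disk (not the patent's implementation):

```python
def write_realtime(key, value, cache, disk):
    """S3-S6 for high-real-time data: drop the cached history first, then write
    the disk, then re-synchronize the cache from the disk copy."""
    if key in cache:        # S3/S4: matching historical data in the cache is
        del cache[key]      # deleted before the disk is touched
    disk[key] = value       # S5: the new data replaces the disk's historical copy
    cache[key] = disk[key]  # S6: the cache is synchronized from the disk
```

Deleting the cache entry first means that even if the disk write in S5 fails, a reader never finds a cached value newer than the disk.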
Preferably, after S2 the method further comprises:
if the second execution data does not have a high real-time requirement, storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
synchronizing the second execution data stored in the disk to the cache, and deleting the historical execution data stored in the cache.
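For data without a high real-time requirement the ordering is reversed: the disk is written first and the cache's historical entry is replaced last. A minimal sketch under the same hypothetical dict stand-ins:

```python
def write_non_realtime(key, value, cache, disk):
    """Disk first, then cache: if the disk write fails and raises, the cache is
    left untouched and keeps serving the historical execution data."""
    disk[key] = value       # store into the disk, replacing the historical copy
    cache[key] = disk[key]  # synchronize to the cache; overwriting the key also
                            # removes the cache's historical execution data
```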
Preferably, the method further comprises:
acquiring a second read request from the user, the second read request being for reading second execution data that does not have a high real-time requirement;
judging whether the second execution data is stored in the cache, the second execution data stored in the cache being synchronized from the second execution data stored in the disk;
if the second execution data is stored in the cache, returning the second execution data stored in the cache to the user;
and if the second execution data is not stored in the cache, returning the historical execution data stored in the cache to the user.
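That fallback read can be sketched as follows; here `cache` holds the current synchronized values and `history` the matching historical execution data, both hypothetical stand-ins for illustration:

```python
def read_non_realtime(key, cache, history):
    """Serve a non-real-time read from the cache, falling back to the cached
    historical execution data when the fresh value is not yet synchronized."""
    if key in cache:
        return cache[key]
    return history.get(key)  # stale but acceptable for non-real-time data
```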
Preferably, before S1 the method further comprises:
setting at least two cache queues;
before the judging, when the second execution data has a high real-time requirement, whether historical execution data matching the second execution data is stored in the cache, the method further comprises:
determining a target cache queue for caching second execution data from the at least two cache queues;
caching the second execution data into the target cache queue;
before S5, further comprising:
establishing a connection associated with the target cache queue when the second execution data in the target cache queue is in a readable state;
S5 and S6 are performed through the connection.
Preferably,
after the setting of the at least two cache queues and before the obtaining of the write request from the data storage end, the method further comprises:
setting a queue identifier for each cache queue;
setting at least two hash values;
determining an association relation between each queue identifier and at least one hash value;
after the obtaining of the write request from the data storage end and before the establishing of the connection associated with the target cache queue, the method further comprises:
determining a unique identification of the second execution data;
carrying out hash calculation on the unique identifier to obtain a hash value of the second execution data;
the determining a target cache queue for storing the second execution data from the at least two cache queues includes:
determining a target queue identification associated with the hash value of the second execution data according to the association relation;
and taking the cache queue indicated by the target queue identification as a target cache queue.
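The hash-based selection of a target cache queue can be sketched as follows. This is an illustrative sketch: the choice of MD5 and the slot count are assumptions, not specified by the patent:

```python
import hashlib

def build_association(queue_ids, num_hash_values):
    """Associate each of the hash values 0..num_hash_values-1 with a queue
    identifier, so every queue identifier maps to at least one hash value."""
    return {h: queue_ids[h % len(queue_ids)] for h in range(num_hash_values)}

def target_queue(unique_id, association, num_hash_values):
    """Hash the record's unique identifier and look up the target queue."""
    digest = hashlib.md5(unique_id.encode("utf-8")).hexdigest()
    h = int(digest, 16) % num_hash_values
    return association[h]
```

The same unique identifier always hashes to the same queue, so repeated writes of one record land on a single queue and can share a single connection.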
Preferably,
the judging whether the first execution data is stored in the cache, returning the first execution data stored in the cache to the user if so, and further judging whether the first execution data is stored in a disk if not, comprises:
determining, within a first duration, whether the first execution data is stored in the cache;
if the first execution data is stored in the cache within the first duration, returning the first execution data stored in the cache to the user;
if the first execution data is not stored in the cache within the first duration, further judging whether the first execution data is stored in the disk;
and the judging whether the first execution data is stored in the disk, returning the first execution data stored in the disk to the user if so, and sending alarm information to the user if not, comprises:
determining, within a second duration, whether the first execution data is stored in the disk;
if the first execution data is stored in the disk within the second duration, returning the first execution data stored in the disk to the user;
if the first execution data is not stored in the disk within the second duration, sending alarm information to the user;
wherein the sum of the first duration and the second duration is not greater than the response duration of the first read request.
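The two time budgets can be sketched as follows (a sketch only; the polling helper, its interval, and the example durations are assumptions):

```python
import time

def timed_lookup(lookup, duration):
    """Poll `lookup` until it returns a value or `duration` seconds elapse."""
    deadline = time.monotonic() + duration
    while True:
        value = lookup()
        if value is not None:
            return value
        if time.monotonic() >= deadline:
            return None
        time.sleep(0.01)

def read_with_budget(key, cache, disk, first_duration, second_duration):
    """Spend at most `first_duration` on the cache and `second_duration` on the
    disk; their sum must not exceed the read request's response duration."""
    value = timed_lookup(lambda: cache.get(key), first_duration)
    if value is not None:
        return value, None
    value = timed_lookup(lambda: disk.get(key), second_duration)
    if value is not None:
        return value, None
    return None, "update of the first execution data failed"
```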
In a second aspect, an embodiment of the present invention provides a data management apparatus, including:
a request management module, configured to acquire a first read request from a user, the first read request being for reading first execution data with a high real-time requirement;
a cache management module, configured to judge whether the first execution data to be read by the first read request acquired by the request management module is stored in a cache, the first execution data stored in the cache being synchronized from the first execution data stored in a disk; return the first execution data stored in the cache to the user if it is stored there; and trigger a disk management module if it is not;
the disk management module, configured to judge, when triggered by the cache management module because the first execution data is not stored in the cache, whether the first execution data is stored in the disk, the first execution data stored in the disk being stored by a data storage end; return the first execution data stored in the disk to the user if it is stored there; and send alarm information to the user if it is not, the alarm information indicating that the update of the first execution data failed.
Preferably, the apparatus further comprises a data attribute management module;
the request management module is further configured to obtain a write request from the data storage end, where the write request is used to instruct writing second execution data into the disk;
the data attribute management module is used for judging whether the second execution data has high real-time requirements; if the second execution data has high real-time requirement, further judging whether historical execution data matched with the second execution data is stored in the cache, if so, triggering the cache management module to execute S4, otherwise, triggering the disk management module to execute S5;
the cache management module is further configured to perform S4: deleting the historical execution data stored in the cache, and triggering the disk management module to execute S5;
the disk management module is further configured to execute S5 when triggered: storing the second execution data into the disk, and deleting the historical execution data stored in the disk; s6: and synchronizing the second execution data stored in the disk to the cache.
Preferably,
the cache management module is configured to determine, within a first duration, whether the first execution data is stored in the cache; return the first execution data stored in the cache to the user if it is stored there within the first duration; and trigger the disk management module if it is not;
the disk management module is configured to determine, within a second duration and when triggered by the cache management module because the first execution data was not stored in the cache within the first duration, whether the first execution data is stored in the disk; return the first execution data stored in the disk to the user if it is stored there within the second duration; and send alarm information to the user if it is not; wherein the sum of the first duration and the second duration is not greater than the response duration of the first read request.
Embodiments of the present invention provide a data management method and apparatus. When first execution data with a high real-time requirement is requested, the data is not fetched directly from the disk; instead, it is first determined whether the first execution data synchronized from the disk is stored in the cache, so that a cache hit can be returned to the user quickly and the number of disk accesses is reduced as far as possible. If the first execution data is not stored in the cache, the first execution data stored in the disk is returned to the user. If the first execution data is not stored in the disk either, it can be determined that updating the first execution data to the disk failed, and alarm information is sent to prompt the user that the update failed. Because the data stored in the cache is synchronized from the data stored in the disk, the data a user reads from the cache is guaranteed to be consistent with the data in the disk.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a data management method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for data management according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a data management device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another data management apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a data management method, including:
step 101: acquiring a first reading request from a user, wherein the first reading request is used for reading first execution data with high real-time requirements;
step 102: judging whether the first execution data is stored in a cache or not, wherein the first execution data stored in the cache is synchronized from the first execution data stored in a disk;
step 103: if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user;
step 104: if the first execution data is not stored in the cache, further judging whether the first execution data is stored in a disk, wherein the first execution data stored in the disk is stored by a data storage end;
step 105: if the first execution data is stored in the disk, returning the first execution data stored in the disk to the user;
step 106: if the first execution data is not stored in the disk, sending alarm information to the user, the alarm information indicating that the update of the first execution data failed.
In this embodiment of the invention, when first execution data with a high real-time requirement is requested, the data is not fetched directly from the disk; instead, it is first determined whether the first execution data synchronized from the disk is stored in the cache, so that a cache hit can be returned to the user quickly and the number of disk accesses is reduced as far as possible. If the first execution data is not stored in the cache, the first execution data stored in the disk is returned to the user. If the first execution data is not stored in the disk either, it can be determined that updating the first execution data to the disk failed, and alarm information is sent to prompt the user that the update failed. Because the data stored in the cache is synchronized from the data stored in the disk, the data a user reads from the cache is guaranteed to be consistent with the data in the disk.
In one embodiment of the present invention, further comprising:
S1: obtaining a write request from the data storage end, the write request instructing that second execution data be written into the disk;
S2: judging whether the second execution data has a high real-time requirement;
S3: if the second execution data has a high real-time requirement, further judging whether historical execution data matching the second execution data is stored in the cache; if so, executing S4, otherwise executing S5;
S4: deleting the historical execution data stored in the cache, and executing S5;
S5: storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
S6: synchronizing the second execution data stored in the disk to the cache.
In this embodiment, for second execution data with a high real-time requirement, before the second execution data is stored in the disk and the cache, any historical execution data in the cache that matches it is deleted first; the second execution data is then stored in the disk and the historical execution data in the disk is deleted. This avoids the situation in which the historical execution data in the disk is deleted before the second execution data has been successfully stored there, which would disrupt the user's data reading service. Finally, the second execution data stored in the disk is synchronized to the cache, so that the data in the cache is consistent with the data in the disk.
Specifically, if the number of failures to store first execution data with a high real-time requirement into the disk reaches a threshold, the historical execution data stored in the disk that matches the first execution data can be synchronized into the cache after a preset cache update time, for example 2 hours, has elapsed, so as to ensure that the data in the cache is consistent with the data in the disk.
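That recovery rule can be sketched as follows. This is an illustrative sketch: the failure counter, threshold, and elapsed-time bookkeeping are assumptions about how the preset cache update time might be applied:

```python
def maybe_resync(failure_count, threshold, elapsed_seconds, update_interval,
                 disk, cache, key):
    """After `threshold` failed disk writes for `key`, copy the disk's
    historical execution data back into the cache once the preset cache
    update time (`update_interval` seconds) has elapsed."""
    if failure_count >= threshold and elapsed_seconds >= update_interval:
        if key in disk:
            cache[key] = disk[key]  # cache and disk are consistent again
            return True
    return False
```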
In an embodiment of the present invention, after S2, further comprising:
if the second execution data has no high real-time requirement, storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
synchronizing the second execution data stored in the disk to the cache, and deleting the historical execution data stored in the cache.
In this embodiment, when the second execution data to be written does not have a high real-time requirement, it will not be read frequently. The second execution data can therefore be stored in the disk first, after which the historical execution data in the disk that matches it is deleted, so that only one valid copy remains in the disk. The second execution data stored in the disk is then synchronized into the cache, and the historical execution data stored in the cache is deleted last. This avoids deleting the cache's historical execution data first: if it were deleted first and the disk write then failed, the historical execution data stored in the disk would have to be synchronized into the cache again. The ordering thus simplifies the data update operation.
In one embodiment of the present invention, further comprising:
acquiring a second read request from the user, wherein the second read request is used for reading the second execution data without high real-time requirements;
judging whether the second execution data is stored in the cache or not, wherein the second execution data stored in the cache is synchronized from the second execution data stored in the disk;
if the second execution data is stored in the cache, returning the second execution data stored in the cache to the user;
and if the second execution data is not stored in the cache, returning the historical execution data stored in the cache to the user.
In this embodiment, the user may need to read second execution data that does not have a high real-time requirement, for example the explanation of certain dictionary entries or a company's workflow. Because such data differs little from the user's point of view before and after an update, the second execution data can be read preferentially from the cache and returned to the user. When the second execution data is not stored in the cache, the second execution data stored in the disk has not yet been synchronized into the cache, so the historical execution data in the cache that matches the second execution data can be returned instead, responding to the user as soon as possible.
In an embodiment of the present invention, before the step S1, the method further includes:
setting at least two cache queues;
before said determining whether historical execution data matching said second execution data is stored in said cache if said second execution data has a high real-time requirement, further comprising:
determining a target cache queue for caching second execution data from the at least two cache queues;
caching the second execution data into the target cache queue;
before S5, further comprising:
establishing a connection associated with the target cache queue when the second execution data in the target cache queue is in a readable state;
s5 and S6 are performed through the connection.
In this embodiment, by setting at least two cache queues, the second execution data indicated by each write request from the data storage end is matched to a corresponding target cache queue, so that data to be written is distributed across different cache queues; a connection is then established based on the target cache queue to complete the write service for the second execution data. The data to be written can thus be distributed evenly, and several cache queues can work at the same time, so that the write service completes, and the data is returned to the user, as soon as possible. It also avoids the situation in which several connections must be established, occupying memory, for a single write of the same data.
In an embodiment of the present invention, after the setting at least two cache queues, before the obtaining the write request from the data storage side, the method further includes:
setting a queue identifier of each cache queue respectively;
setting at least two hash values;
determining the association relation between each queue identifier and at least one hash value;
after the obtaining the write request from the data storage side, before the establishing the connection associated with the target cache queue, further comprising:
determining a unique identification of the second execution data;
carrying out hash calculation on the unique identifier to obtain a hash value of the second execution data;
the determining a target cache queue for storing the second execution data from the at least two cache queues includes:
determining a target queue identification associated with the hash value of the second execution data according to the association relation;
and taking the cache queue indicated by the target queue identification as a target cache queue.
In this embodiment, when the second execution data to be written has a high real-time requirement, the unique identifier of the second execution data is determined and its hash value is calculated. The target queue identifier associated with that hash value can then be determined from the queue identifiers of the different cache queues and the association relation between each queue identifier and at least one hash value, which in turn determines the target cache queue. Data to be written is thereby distributed uniformly across the different cache queues, and the write service is completed as soon as possible through several queues working at the same time.
In an embodiment of the present invention, the judging whether the first execution data is stored in the cache, returning the first execution data stored in the cache to the user if so, and further judging whether the first execution data is stored in the disk if not, includes:
determining, within a first duration, whether the first execution data is stored in the cache;
if the first execution data is stored in the cache within the first duration, returning the first execution data stored in the cache to the user;
if the first execution data is not stored in the cache within the first duration, further judging whether the first execution data is stored in the disk;
the judging whether the first execution data is stored in the disk, returning the first execution data stored in the disk to the user if so, and sending alarm information to the user if not, includes:
determining, within a second duration, whether the first execution data is stored in the disk;
if the first execution data is stored in the disk within the second duration, returning the first execution data stored in the disk to the user;
if the first execution data is not stored in the disk within the second duration, sending alarm information to the user;
wherein the sum of the first duration and the second duration is not greater than the response duration of the first read request.
In this embodiment of the invention, if it is determined within the first duration (for example, 3 s) that the first execution data synchronized from the disk is stored in the cache, the first execution data in the cache is returned to the user. Otherwise, it is determined within a second duration (for example, 4 s) whether the first execution data stored by the data storage end is present in the disk; if the first execution data is stored in the disk within the second duration, the first execution data from the underlying disk is returned to the user to serve the user's data reading service, and if not, alarm information indicating that the first execution data failed to update in the disk is sent to the user, so that the user can confirm that the update did not succeed. Because the first read request sent by the user has a corresponding response duration, either the first execution data that was read or the alarm information that it is not stored in the disk must be returned to the user within that response duration, so that the user is answered in time.
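The two-stage, time-budgeted lookup can be sketched as follows. The dict-backed stores, the polling loop, and the helper names are assumptions for illustration; a real system would query an actual cache and disk store.

```python
import time

# Hedged sketch of the two-stage timed lookup: probe the cache for at most
# FIRST_DURATION, then the disk for at most SECOND_DURATION, where their
# sum must not exceed the response duration of the first read request.
FIRST_DURATION = 3.0     # cache lookup budget, e.g. 3 s
SECOND_DURATION = 4.0    # disk lookup budget, e.g. 4 s
RESPONSE_DURATION = 7.0  # response duration of the first read request
assert FIRST_DURATION + SECOND_DURATION <= RESPONSE_DURATION

def timed_lookup(store, key, budget):
    """Poll the store for the key until found or the time budget runs out."""
    deadline = time.monotonic() + budget
    while True:
        value = store.get(key)
        if value is not None:
            return value
        if time.monotonic() >= deadline:
            return None
        time.sleep(0.01)  # brief back-off before retrying

def read_first_execution_data(key, cache, disk,
                              first=FIRST_DURATION, second=SECOND_DURATION):
    value = timed_lookup(cache, key, first)
    if value is not None:
        return value                       # found in cache within first duration
    value = timed_lookup(disk, key, second)
    if value is not None:
        return value                       # found in disk within second duration
    return "ALARM: first execution data failed to update in the disk"
```

Keeping the sum of the two budgets under the response duration guarantees the user always receives either the data or the alarm in time.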
As shown in fig. 2, to illustrate the technical solution and advantages of the present invention more clearly, a data management method provided by the present invention is described in detail below and may specifically include the following steps:
step 201: a first read request from a user for reading first execution data of a high real-time requirement is obtained.
Step 202: whether the first execution data is stored in the cache is judged, if yes, step 203 is executed, otherwise, step 204 is executed.
Step 203: and returning the first execution data stored in the cache to the user, and ending the current flow.
Step 204: it is determined whether the disk stores the first execution data stored by the data storage end; if so, step 205 is executed, and if not, step 206 is executed.
Step 205: and returning the first execution data stored in the disk to the user, and ending the current flow.
Step 206: and sending alarm information to the user, wherein the alarm information is used for indicating that the first execution data update fails.
Specifically, when the user requests to read first execution data with a high real-time requirement (for example, highly real-time data such as inventory or amounts on a shopping platform), the first execution data is preferentially read from the cache and returned to the user, which reduces the number of disk reads. When the cache does not hold the first execution data synchronized from the disk, the first execution data is read from the underlying disk and returned to the user, meeting the user's data reading requirement. When the first execution data is not present in the disk either, an exception occurred while writing it to the disk, so warning information is sent to the user to indicate that the first execution data was not updated successfully in the disk.
Specifically, a cache expiry duration (for example, 2 h) can be preset for the cache. If synchronizing data from the disk to the cache fails, the data in the disk can be synchronized into the cache again once the cached copy expires, thereby ensuring consistency between the data in the disk and the data in the cache.
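A minimal sketch of this expiry-driven re-synchronization, assuming a timestamped dict as the cache and a plain dict as the disk (both are stand-ins, as is the `ExpiringCache` API):

```python
import time

CACHE_TTL = 2 * 60 * 60  # preset cache expiry duration, e.g. 2 h

class ExpiringCache:
    """Assumed cache API: each entry expires TTL seconds after it was written."""
    def __init__(self, ttl=CACHE_TTL):
        self.ttl = ttl
        self._store = {}  # key -> (value, written_at)

    def put(self, key, value, now=None):
        self._store[key] = (value, now if now is not None else time.time())

    def get(self, key, now=None):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, written_at = entry
        if (now if now is not None else time.time()) - written_at >= self.ttl:
            del self._store[key]  # expired: the next read forces a re-sync
            return None
        return value

def read_with_resync(cache, disk, key):
    """On a cache miss (including expiry), re-synchronize from the disk."""
    value = cache.get(key)
    if value is None and key in disk:
        value = disk[key]
        cache.put(key, value)  # cache and disk are consistent again
    return value
```

Even if a disk-to-cache sync fails, the stale cached copy ages out after `CACHE_TTL`, and the next read repopulates the cache from the disk.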
Specifically, determining whether the first execution data with the high real-time requirement is stored in the cache must be completed within a first duration (for example, 3 s), and determining whether the first execution data stored by the data storage end is stored in the disk must be completed within a second duration (for example, 4 s). Because the first read request sent by the user has a corresponding response duration, the sum of the first duration and the second duration is not greater than that response duration; that is, either the first execution data that was read or the alarm information indicating that it is not stored in the disk is returned to the user within the response duration, so that the user is answered in time.
Step 207: when a write request from the data storage end for indicating writing the second execution data to the disk is obtained, it is determined whether the second execution data has a high real-time requirement, if so, step 208 is executed, otherwise, step 214 is executed.
Specifically, the data storage end may send a data write request after the user requests to read the data, or may send a further write request after earlier data has been written; in this embodiment, the data storage end sends the write request after the user's read fails.
Step 208: determining a target cache queue for caching the second execution data from the preset at least two cache queues, and caching the second execution data into the target cache queue, and executing step 209.
Specifically, when the data storage end sends a data write request and the second execution data indicated by the write request has a high real-time requirement, a hash calculation is first performed on the unique identifier of the second execution data to obtain its hash value. Then, based on the preset association relation between each of the at least two queue identifiers of the cache queues and a preset hash value, the target queue identifier associated with the hash value of the second execution data is determined. The cache queue indicated by the target queue identifier is thereby determined to be the target cache queue matching the second execution data, and the second execution data is cached in the target cache queue to await writing.
Specifically, the association relation between each queue identifier and a hash value may be, for example, an equality relation or a multiple relation, but is not limited thereto.
Step 209: it is determined whether the cache stores historical execution data matched with the second execution data, if so, step 210 is executed, otherwise step 211 is executed.
Step 210: the history execution data stored in the cache is deleted, and step 211 is executed.
Step 211: when the second execution data in the target cache queue is in a readable state, a connection associated with the target cache queue is established, and step 212 is performed.
Step 212: the second execution data is stored in the disk through the connection, and the history execution data stored in the disk is deleted, and step 213 is performed.
Step 213: and synchronizing the second execution data stored in the disk into the cache through connection, and ending the current flow.
Step 214: the second execution data is stored in the disk, and the history execution data stored in the disk is deleted, and step 215 is performed.
Step 215: and synchronizing the second execution data stored in the disk to the cache, and deleting the historical execution data stored in the cache.
Specifically, after the second execution data with the high real-time requirement is cached in the target cache queue, the historical execution data in the cache that matches the second execution data is deleted. This avoids the situation in which the user reads data from the cache while new data is being written to the disk, leaving the data read from the cache inconsistent with the data stored in the disk. When the second execution data in the target cache queue is in a readable state and the write service can proceed, a connection associated with the target cache queue is established, the second execution data is stored to the disk through that connection, and the historical execution data in the disk that matches the second execution data is deleted so that only one valid copy remains in the disk. Finally, the second execution data in the disk is synchronized into the cache, so that the data in the cache is consistent with the data in the disk.
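Steps 208 to 213 can be sketched end to end as below. Everything here is an illustrative assumption: plain dicts stand in for the cache and the disk, and draining the queue stands in for "establishing the connection associated with the target cache queue".

```python
from collections import deque

def write_high_realtime(key, value, cache, disk, queue: deque):
    """Hedged sketch of the write path for data with a high real-time requirement."""
    # Steps 209/210: delete matching historical execution data from the cache
    # first, so a concurrent reader cannot see data about to be superseded.
    cache.pop(key, None)
    # Step 208: cache the second execution data in its target cache queue.
    queue.append((key, value))
    # Step 211: the queued item is readable; "establish the connection"
    # (modeled here as draining the queue) and perform the write service.
    while queue:
        k, v = queue.popleft()
        # Step 212: store to disk; overwriting the entry also deletes the
        # disk's historical copy, keeping one valid copy in the disk.
        disk[k] = v
        # Step 213: synchronize the disk value back into the cache.
        cache[k] = disk[k]

cache = {"stock:1001": 40}  # stale historical execution data
disk = {"stock:1001": 40}
write_high_realtime("stock:1001", 39, cache, disk, deque())
# cache and disk now both hold the new value
```

The ordering (cache delete, disk write, disk-to-cache sync) is what guarantees the cache never serves a value newer than the disk's.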
Step 216: when a second read request for reading the second execution data without high real-time requirement is obtained from the user, it is determined whether the second execution data synchronized from the disk is stored in the cache, if yes, step 217 is executed, otherwise step 218 is executed.
Specifically, the user may read second execution data without a high real-time requirement at any time; in this embodiment, the second read request for data without a high real-time requirement is obtained after the second execution data with the high real-time requirement has been written to the disk and synchronized to the cache.
Step 217: and returning the second execution data stored in the cache to the user, and ending the current flow.
Step 218: and returning the historical execution data stored in the cache to the user, and ending the current flow.
Specifically, for second execution data without a high real-time requirement, each change to the data is small, the change frequency is low, and the user reads it infrequently. When the cache does not hold the second execution data synchronized from the disk, the historical execution data in the cache that matches the second execution data is returned to the user, so as to meet the user's reading requirement.
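Steps 216 to 218 reduce to a simple fallback, sketched below under the assumption that the synchronized copy and the historical copy live in two dict-backed stores (a real implementation might keep both in one cache under different versions):

```python
def read_non_realtime(key, cache, history):
    """Hedged sketch of steps 216-218 for data without a high real-time requirement."""
    value = cache.get(key)
    if value is not None:
        return value          # step 217: synchronized copy found in the cache
    return history.get(key)   # step 218: fall back to historical execution data

cache = {}                              # new value not yet synchronized
history = {"terms:refund": "v1 text"}   # matching historical execution data
assert read_non_realtime("terms:refund", cache, history) == "v1 text"
```

Because such data changes little between versions, serving the slightly stale historical copy answers the user immediately instead of blocking on the ongoing disk-to-cache sync.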
In summary, for data with lower real-time requirements, the scheme updates the cache within a short time, or in near real time, using the three-stage cooperative strategy of "cache queue + data update service + cache update notification". For data with higher real-time requirements, the strategy of updating the cache and the disk simultaneously is abandoned in favor of "delete from cache, run the data update service in parallel, and delay reads of data being updated", which ensures that data updates are not interrupted. Even if an abnormal condition causes a user operation to fail, the cache and the disk cannot become inconsistent; the user only needs to retry the operation, or the system can provide an automatic retry function.
As shown in fig. 3, an embodiment of the present invention provides a data management apparatus, including:
a request management module 301, configured to obtain a first read request from a user, where the first read request is used to read first execution data with high real-time requirements;
a cache management module 302, configured to determine whether the first execution data to be read by the first read request acquired by the request management module 301 is stored in a cache, where the first execution data stored in the cache is synchronized from the first execution data stored in a disk; if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user; if the first execution data is not stored in the cache, triggering a disk management module 303;
The disk management module 303 is configured to determine whether the first execution data is stored in a disk when triggered by the cache management module under a condition that the first execution data is not stored in the cache, where the first execution data stored in the disk is stored by a data storage end; if the first execution data is stored in the disk, returning the first execution data stored in the disk to the user; and if the first execution data is not stored in the magnetic disk, sending alarm information to the user, wherein the alarm information is used for indicating that the first execution data is failed to update.
In this embodiment of the invention, when the request management module obtains a request from the user to read first execution data with a high real-time requirement, the disk management module does not fetch the first execution data directly from the disk. Instead, the cache management module first determines whether the first execution data synchronized from the disk is stored in the cache, so that a cached copy can be returned to the user quickly and disk accesses are reduced as far as possible. If the first execution data is not stored in the cache, the disk management module is triggered to return the first execution data stored in the disk to the user; when it is determined that the first execution data is not stored in the disk either, it can be concluded that updating the first execution data to the disk failed, and alarm information must be sent to prompt the user that the update failed. Because the data stored in the cache is synchronized from the data stored in the disk, the data the user reads from the cache is guaranteed to be consistent with the data in the disk.
As shown in fig. 4, in an embodiment of the present invention, the data management apparatus further includes: a data attribute management module 401;
the request management module 301 is further configured to obtain a write request from the data storage side, where the write request is used to instruct writing second execution data to the disk;
the data attribute management module 401 is configured to determine whether the second execution data has a high real-time requirement; if the second execution data has a high real-time requirement, further determine whether historical execution data matched with the second execution data is stored in the cache; if so, trigger the cache management module 302 to execute S4, and otherwise trigger the disk management module 303 to execute S5;
the cache management module 302 is further configured to execute S4: deleting the historical execution data stored in the cache, and triggering the disk management module 303 to execute S5;
the disk management module is further configured to execute S5 when triggered: storing the second execution data into the disk, and deleting the historical execution data stored in the disk; s6: and synchronizing the second execution data stored in the disk to the cache.
In an embodiment of the present invention, the cache management module is configured to determine whether the first execution data is stored in the cache in a first duration; if the first execution data is stored in the cache in the first duration, returning the first execution data stored in the cache to the user; triggering the disk management module if the first execution data is not stored in the cache within the first duration;
the disk management module is used for determining whether the first execution data is stored in a disk in a second duration when triggered by the cache management module under the condition that the first execution data is not stored in the cache in the first duration; if the first execution data is stored in the disk in the second time period, returning the first execution data stored in the disk to the user; if the first execution data is not stored in the disk in the second time period, sending alarm information to the user; wherein a sum of the first duration and the second duration is not greater than a response duration of the first read request.
The embodiments of the invention have at least the following beneficial effects:
1. In an embodiment of the present invention, when first execution data with a high real-time requirement is requested, it is not fetched directly from the disk; instead, it is first determined whether the first execution data synchronized from the disk is stored in the cache, so that a cached copy can be returned to the user quickly and disk accesses are reduced as far as possible. If the first execution data is not stored in the cache, the copy stored in the disk is returned to the user; when the first execution data is not stored in the disk either, it can be determined that updating it to the disk failed, and warning information is sent to prompt the user of the failure. Because the data stored in the cache is synchronized from the data stored in the disk, the data the user reads from the cache is guaranteed to be consistent with the data in the disk.
2. In an embodiment of the present invention, for second execution data with a high real-time requirement, before the second execution data is stored in the disk and the cache, any historical execution data in the cache that matches it is deleted; the second execution data is then stored in the disk, and only afterwards is the matching historical execution data in the disk deleted. This avoids the problem of deleting the disk's historical execution data first and then failing to store the second execution data, which would affect the user's data reading service. Finally, the second execution data stored in the disk is synchronized to the cache, ensuring that the data in the cache is consistent with the data in the disk.
3. In an embodiment of the present invention, when the second execution data to be written has no high real-time requirement, it is read infrequently, so the second execution data may be stored in the disk first, after which the matching historical execution data stored in the disk is deleted so that only one valid copy remains in the disk. The second execution data stored in the disk is then synchronized into the cache, and only at the end is the historical execution data stored in the cache deleted. This avoids deleting the cache's historical data first and, should storing the second execution data to the disk then fail, having to synchronize the disk's historical data back into the cache again, thereby simplifying the data update operation.
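The update order in point 3 can be sketched as follows. The `FlakyDisk` class is an assumed stand-in used only to simulate a failed disk write; dicts stand in for the real stores.

```python
class FlakyDisk(dict):
    """Assumed disk stand-in whose writes can be made to fail on demand."""
    def __init__(self, fail=False):
        super().__init__()
        self.fail = fail

    def store(self, key, value):
        if self.fail:
            raise IOError("disk write failed")
        self[key] = value  # overwriting also deletes the disk's historical copy

def write_non_realtime(key, value, cache, disk):
    """Disk first, then sync to cache, then drop the cached history last."""
    try:
        disk.store(key, value)  # 1) store to disk, deleting the disk history
    except IOError:
        return False            # failure: the cached historical copy survives
    cache[key] = disk[key]      # 2) synchronize disk -> cache
    return True                 # 3) cached history was overwritten only now

cache = {"faq:1": "old answer"}
ok = write_non_realtime("faq:1", "new answer", cache, FlakyDisk(fail=True))
# ok is False and the cache still serves "old answer"
```

Deleting the cached history last is the point: if the disk write fails, readers keep getting the old value and no disk-to-cache re-sync is needed.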
4. In an embodiment of the present invention, the user may need to read second execution data without a high real-time requirement, for example the interpretation of certain terms or a company's workflows. Because such data changes little between updates and the impact on the user is small, the second execution data can preferentially be read from the cache and returned to the user. When the second execution data is not stored in the cache, the copy stored in the disk has not yet been synchronized into the cache, so the historical execution data in the cache that matches the second execution data can be returned to the user to respond as soon as possible.
5. In an embodiment of the present invention, by setting at least two cache queues, when the data storage end issues a data write request, the second execution data indicated by the write request is matched to a corresponding target cache queue among the cache queues, so that data to be written is distributed across the different queues; a connection for writing the second execution data is then established based on the target cache queue to complete the write service. The data to be written can thus be distributed evenly, and multiple cache queues can work simultaneously, so the data writing service completes as quickly as possible and the user receives a response as quickly as possible. This also avoids having to establish multiple connections, which would occupy memory, when the same data undergoes a single write service.
It is noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the statement "comprises one" does not exclude that an additional identical element is present in a process, method, article or apparatus that comprises the element.
Finally, it should be noted that: the foregoing description is only illustrative of the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (8)

1. A method of data management, comprising:
acquiring a first reading request from a user, wherein the first reading request is used for reading first execution data with high real-time requirements;
judging whether the first execution data is stored in a cache or not, wherein the first execution data stored in the cache is synchronized from the first execution data stored in a disk;
if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user;
if the first execution data is not stored in the cache, further judging whether the first execution data is stored in a disk, wherein the first execution data stored in the disk is stored by a data storage end;
if the first execution data is stored in the disk, returning the first execution data stored in the disk to the user;
If the first execution data is not stored in the magnetic disk, sending alarm information to the user, wherein the alarm information is used for indicating that the first execution data is failed to update;
further comprises:
s1: obtaining a write request from the data storage end, wherein the write request is used for indicating to write second execution data into the disk;
s2: judging whether the second execution data has high real-time requirements or not;
s3: if the second execution data has high real-time requirement, further judging whether historical execution data matched with the second execution data is stored in the cache, if so, executing S4, otherwise executing S5;
s4: deleting the historical execution data stored in the cache, and executing S5;
s5: storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
s6: and synchronizing the second execution data stored in the disk to the cache.
2. The data management method according to claim 1, further comprising, after S2:
if the second execution data has no high real-time requirement, storing the second execution data into the disk, and deleting the historical execution data stored in the disk;
Synchronizing the second execution data stored in the disk to the cache, and deleting the historical execution data stored in the cache.
3. The data management method according to claim 2, further comprising:
acquiring a second read request from the user, wherein the second read request is used for reading the second execution data without high real-time requirements;
judging whether the second execution data is stored in the cache or not, wherein the second execution data stored in the cache is synchronized from the second execution data stored in the disk;
if the second execution data is stored in the cache, returning the second execution data stored in the cache to the user;
and if the second execution data is not stored in the cache, returning the historical execution data stored in the cache to the user.
4. A data management method according to any one of claims 1 to 3, wherein,
before S1, further comprising:
setting at least two cache queues;
before said determining whether historical execution data matching said second execution data is stored in said cache if said second execution data has a high real-time requirement, further comprising:
Determining a target cache queue for caching second execution data from the at least two cache queues;
caching the second execution data into the target cache queue;
before S5, further comprising:
establishing a connection associated with the target cache queue when the second execution data in the target cache queue is in a readable state;
s5 and S6 are performed through the connection.
5. The method for data management as claimed in claim 4, wherein,
after the setting at least two cache queues, before the obtaining the write request from the data storage end, further includes:
setting a queue identifier of each cache queue respectively;
setting at least two hash values;
determining the association relation between each queue identifier and at least one hash value;
after the obtaining the write request from the data storage side, before the establishing the connection associated with the target cache queue, further comprising:
determining a unique identification of the second execution data;
carrying out hash calculation on the unique identifier to obtain a hash value of the second execution data;
The determining a target cache queue for storing the second execution data from the at least two cache queues includes:
determining a target queue identification associated with the hash value of the second execution data according to the association relation;
and taking the cache queue indicated by the target queue identification as a target cache queue.
6. The method for data management as claimed in claim 1, wherein,
judging whether the first execution data is stored in a cache or not, wherein the first execution data stored in the cache is synchronized from the first execution data stored in a disk; if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user; if the first execution data is not stored in the cache, further judging whether the first execution data is stored in the disk or not, including:
determining whether the first execution data is stored in a cache or not in a first duration;
if the first execution data is stored in the cache in the first duration, returning the first execution data stored in the cache to the user;
If the first execution data is not stored in the cache within the first duration, further judging whether the first execution data is stored in a disk or not;
the step of judging whether the first execution data is stored in the disk, if so, returning the first execution data stored in the disk to the user, and if not, sending alarm information to the user comprises:
determining whether the first execution data is stored in the disk in a second duration;
if the first execution data is stored in the disk in the second time period, returning the first execution data stored in the disk to the user;
if the first execution data is not stored in the disk in the second time period, sending alarm information to the user;
wherein a sum of the first duration and the second duration is not greater than a response duration of the first read request.
7. A data management apparatus, comprising:
the system comprises a request management module, a first reading module and a second reading module, wherein the request management module is used for acquiring a first reading request from a user, and the first reading request is used for reading first execution data with high real-time requirements;
The cache management module is used for judging whether the first execution data to be read by the first read request acquired by the request management module is stored in a cache or not, wherein the first execution data stored in the cache is synchronized from the first execution data stored in a disk; if the first execution data is stored in the cache, returning the first execution data stored in the cache to the user; if the first execution data is not stored in the cache, triggering a disk management module;
the disk management module is configured to determine whether the first execution data is stored in a disk when triggered by the cache management module under a condition that the first execution data is not stored in the cache, where the first execution data stored in the disk is stored by a data storage end; if the first execution data is stored in the disk, returning the first execution data stored in the disk to the user; if the first execution data is not stored in the magnetic disk, sending alarm information to the user, wherein the alarm information is used for indicating that the first execution data is failed to update;
Further comprises: a data attribute management module;
the request management module is further configured to obtain a write request from the data storage end, where the write request is used to instruct writing second execution data into the disk;
the data attribute management module is used for judging whether the second execution data has high real-time requirements; if the second execution data has high real-time requirement, further judging whether historical execution data matched with the second execution data is stored in the cache, if so, triggering the cache management module to execute S4, otherwise, triggering the disk management module to execute S5;
the cache management module is further configured to S4: deleting the history execution data stored in the cache and triggering the disk management module to execute S5;
the disk management module is further configured to execute S5 when triggered: storing the second execution data into the disk, and deleting the historical execution data stored in the disk; s6: and synchronizing the second execution data stored in the disk to the cache.
8. The data management apparatus according to claim 7, wherein,
The cache management module is used for determining whether the first execution data is stored in the cache or not within a first duration; if the first execution data is stored in the cache in the first duration, returning the first execution data stored in the cache to the user; triggering the disk management module if the first execution data is not stored in the cache within the first duration;
the disk management module is used for determining whether the first execution data is stored in a disk in a second duration when triggered by the cache management module under the condition that the first execution data is not stored in the cache in the first duration; if the first execution data is stored in the disk in the second time period, returning the first execution data stored in the disk to the user; if the first execution data is not stored in the disk in the second time period, sending alarm information to the user; wherein a sum of the first duration and the second duration is not greater than a response duration of the first read request.
CN201910930718.3A 2019-09-29 2019-09-29 Data management method and device Active CN110764697B (en)

Publications (2)

Publication Number Publication Date
CN110764697A CN110764697A (en) 2020-02-07
CN110764697B true CN110764697B (en) 2023-08-29

Family

ID=69330728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910930718.3A Active CN110764697B (en) 2019-09-29 2019-09-29 Data management method and device

Country Status (1)

Country Link
CN (1) CN110764697B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112230863A (en) * 2020-11-09 2021-01-15 平安普惠企业管理有限公司 User data reading and writing method and device, electronic equipment and computer storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1862475A (en) * 2005-07-15 2006-11-15 华为技术有限公司 Method for managing magnetic disk array buffer storage
CN102253810A (en) * 2010-05-17 2011-11-23 腾讯科技(深圳)有限公司 Method, apparatus and system for reading data
CN105159604A (en) * 2015-08-20 2015-12-16 浪潮(北京)电子信息产业有限公司 Disk data read-write method and system
CN106785121A (en) * 2016-12-30 2017-05-31 中山大学 Battery management system with intelligent data management and battery pack abnormality alarm
CN107798062A (en) * 2017-09-20 2018-03-13 中国电力科学研究院 Unified storage method and system for substation historical data
CN110069495A (en) * 2019-03-13 2019-07-30 中科恒运股份有限公司 Data storage method, device and terminal device
CN110244911A (en) * 2019-06-20 2019-09-17 北京奇艺世纪科技有限公司 Data processing method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3458804B2 (en) * 1999-12-27 2003-10-20 日本電気株式会社 Information recording apparatus and control method thereof
US7707356B2 (en) * 2006-09-28 2010-04-27 Agere Systems Inc. Method and apparatus for scheduling disk read requests
JP2011086230A (en) * 2009-10-19 2011-04-28 Ntt Comware Corp Cache system and cache access method
KR101806394B1 (en) * 2016-07-05 2017-12-07 주식회사 리얼타임테크 A data processing method having a structure of the cache index specified to the transaction in a mobile environment dbms
US10705969B2 (en) * 2018-01-19 2020-07-07 Samsung Electronics Co., Ltd. Dedupe DRAM cache

Similar Documents

Publication Publication Date Title
CN114116613B (en) Metadata query method, device and storage medium based on distributed file system
JP5686034B2 (en) Cluster system, synchronization control method, server device, and synchronization control program
CN108566291B (en) Event processing method, server and system
CN107092628B (en) Time series data processing method and device
CN111049928B (en) Data synchronization method, system, electronic device and computer readable storage medium
CN110795395B (en) File deployment system and file deployment method
CN110706069A (en) Exception handling method, device, server and system for order payment request
CN111198845B (en) Data migration method, readable storage medium and computing device
CN112134721A (en) API gateway degradation method and terminal
CN113094430B (en) Data processing method, device, equipment and storage medium
EP3633519A1 (en) Method for storing objects, and object store gateway
CN110764697B (en) Data management method and device
CN113946427A (en) Task processing method, processor and storage medium for multi-operating system
CN112363980B (en) Data processing method and device of distributed system
JP6225606B2 (en) Database monitoring apparatus, database monitoring method, and computer program
CN106549983B (en) Database access method, terminal and server
CN108959548B (en) Service request processing method and device
CN108874319B (en) Metadata updating method, device, equipment and readable storage medium
CN112000850A (en) Method, device, system and equipment for data processing
CN113448971A (en) Data updating method based on distributed system, computing node and storage medium
CN116263646A (en) Data processing method and device, electronic equipment and storage medium
CN113542326B (en) Data caching method and device of distributed system, server and storage medium
CN111324668B (en) Database data synchronous processing method, device and storage medium
CN114077587A (en) Rule engine based business processing method, rule engine, medium and device
US10885014B2 (en) Assigning monitoring responsibilities in distributed systems using optimistic concurrency

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100176 room 07, block 2, building B, No. 12, Hongda North Road, Beijing Economic and Technological Development Zone, Beijing

Applicant after: Wanghai Kangxin (Beijing) Technology Co.,Ltd.

Address before: 100176 room 07, block 2, building B, No. 12, Hongda North Road, Beijing Economic and Technological Development Zone, Beijing

Applicant before: BEIJING NEUSOFT VIEWHIGH TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: 100176 room 801-2, 8th floor, building 3, yard 22, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Wanghai Kangxin (Beijing) Technology Co.,Ltd.

Address before: Room 07, zone 2, building B, No. 12, Hongda North Road, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: Wanghai Kangxin (Beijing) Technology Co.,Ltd.

GR01 Patent grant