CN113568932A - Cache data management method, system and storage medium - Google Patents

Cache data management method, system and storage medium

Info

Publication number
CN113568932A
Authority
CN
China
Prior art keywords
cache
task
tasks
complete
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110656323.6A
Other languages
Chinese (zh)
Inventor
董俊明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Inspur Data Technology Co Ltd
Original Assignee
Jinan Inspur Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Data Technology Co Ltd filed Critical Jinan Inspur Data Technology Co Ltd
Priority to CN202110656323.6A
Publication of CN113568932A
Legal status: Pending (Current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0709Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0793Remedial or corrective actions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a cache data management method, system and storage medium. The method comprises the following steps: monitoring the cache tasks of each node in a distributed storage cluster; in response to detecting that cache tasks exist on a plurality of nodes, sequentially pushing the cache tasks to a cache queue and detecting the integrity of each cache task in the cache queue; in response to detecting that a cache task is complete, parsing the complete cache task, classifying it based on the parsing result, and sending the classified cache tasks to a database or a distributed cache cluster for storage; and in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster. The invention ensures the integrity of cache task information, improves data storage efficiency, and guarantees the stability of fault recovery in the distributed storage cluster.

Description

Cache data management method, system and storage medium
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a method, a system, and a storage medium for managing cache data.
Background
Distributed storage is now applied to the Internet on a large scale; it offers good scalability at low cost and is widely used to build enterprise-level storage capacity. A distributed system is a system architecture composed of a group of computer nodes that communicate over a network and cooperate to accomplish a common task.
In current distributed storage cluster environments, cached data is not well managed. If a node fails, part of the node's cache data is lost and cannot be recovered automatically after the node is repaired. In addition, network fluctuation in the cluster can introduce large delays and cause data loss, which seriously affects the data security and performance of the whole cluster.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a cache data management method, system and storage medium, so as to solve the problem in the prior art that cached data lacks an effective management mechanism.
Based on the above purpose, the present invention provides a cache data management method, which comprises the following steps:
monitoring the cache tasks of each node in a distributed storage cluster;
in response to detecting that cache tasks exist on a plurality of nodes, sequentially pushing the cache tasks to a cache queue, and detecting the integrity of each cache task in the cache queue;
in response to detecting that a cache task is complete, parsing the complete cache task, classifying it based on the parsing result, and sending the classified cache tasks to a database or a distributed cache cluster for storage; and
in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster.
In some embodiments, in response to detecting that cache tasks exist on a plurality of nodes, sequentially pushing the cache tasks to the cache queue comprises:
in response to detecting that cache tasks exist on a plurality of nodes, allocating an identification number to each cache task and sequentially pushing the cache tasks to the cache queue.
In some embodiments, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster comprises:
in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster according to the identification numbers.
In some embodiments, detecting the integrity of each cache task in the cache queue comprises:
detecting the integrity of each cache task in the cache queue by identifying the data length of the cache task and the tag information of its head data and tail data.
In some embodiments, in response to detecting that a cache task is complete, parsing the complete cache task, classifying it based on the parsing result, and sending the classified cache tasks to the database or the distributed cache cluster for storage comprises:
in response to detecting that a cache task is complete, parsing the complete cache task according to data block size, classifying parsed cache tasks composed of large data blocks into a first class, and classifying parsed cache tasks composed of small data blocks into a second class;
sending the cache tasks belonging to the first class to the distributed cache cluster for temporary storage, and sending the cache tasks belonging to the second class to the database for storage.
In some embodiments, the method further comprises:
in response to detecting that a cache task is incomplete, removing the incomplete cache task from the cache queue and notifying the corresponding node to resend it.
In some embodiments, the method further comprises:
in response to a node failing, sending a failure notification to the distributed storage cluster.
In some embodiments, the method further comprises:
identifying, through a cache base library, the cache tasks that need to be encrypted among the complete cache tasks, and encrypting them.
In another aspect of the present invention, a cache data management system is further provided, including:
a cache task monitoring module configured to monitor the cache tasks of each node in the distributed storage cluster;
a cache task detection module configured to, in response to detecting that cache tasks exist on a plurality of nodes, sequentially push the cache tasks to a cache queue and detect the integrity of each cache task in the cache queue;
a cache task classification module configured to, in response to detecting that a cache task is complete, parse the complete cache task, classify it based on the parsing result, and send the classified cache tasks to a database or a distributed cache cluster for storage; and
a cache task acquisition module configured to, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquire the cache tasks lost by the repaired node from the database and/or the distributed cache cluster.
In yet another aspect of the present invention, there is also provided a computer readable storage medium storing computer program instructions which, when executed, implement any one of the methods described above.
In yet another aspect of the present invention, a computer device is provided, which includes a memory and a processor, the memory storing a computer program which, when executed by the processor, performs any one of the above methods.
The invention has at least the following beneficial technical effects:
By monitoring the cache tasks of each node in the distributed storage cluster, sequentially pushing the cache tasks to the cache queue when cache tasks are detected on a plurality of nodes, and then checking the integrity of each cache task in the cache queue, the invention ensures that only complete cache task information is screened out, which effectively mitigates the loss of cache task information caused by network jitter or large delays in the distributed storage cluster. By parsing and classifying the complete cache tasks, cache task information can be stored in parallel, improving data storage efficiency. Ordinarily, when a node fails its cache tasks are lost and cannot be recovered even after the node returns to normal; the invention therefore retrieves the cache tasks lost by the repaired node from different storage devices, ensuring the completeness and stability of fault recovery for the nodes of the distributed storage cluster.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of a cache data management method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a cache data management system/apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer-readable storage medium for implementing a cache data management method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of a computer device for executing a cache data management method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two non-identical entities with the same name or two non-identical parameters; "first" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments of the present invention. Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
In view of the foregoing, a first aspect of the embodiments of the present invention provides an embodiment of a cache data management method. Fig. 1 is a schematic diagram of an embodiment of the cache data management method provided by the present invention. As shown in fig. 1, the embodiment of the present invention comprises the following steps:
step S10, monitoring the cache tasks of each node in the distributed storage cluster;
step S20, in response to detecting that cache tasks exist on a plurality of nodes, sequentially pushing the cache tasks to a cache queue, and detecting the integrity of each cache task in the cache queue;
step S30, in response to detecting that a cache task is complete, parsing the complete cache task, classifying it based on the parsing result, and sending the classified cache tasks to a database or a distributed cache cluster for storage;
step S40, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster.
This embodiment of the invention, by monitoring the cache tasks of each node in the distributed storage cluster, sequentially pushing the cache tasks to the cache queue when cache tasks are detected on a plurality of nodes, and then checking the integrity of each cache task in the cache queue, ensures that only complete cache task information is screened out, which effectively mitigates the loss of cache task information caused by network jitter or large delays in the distributed storage cluster. By parsing and classifying the complete cache tasks, cache task information can be stored in parallel, improving data storage efficiency. Ordinarily, when a node fails its cache tasks are lost and cannot be recovered even after the node returns to normal; the invention therefore retrieves the cache tasks lost by the repaired node from different storage devices, ensuring the completeness and stability of fault recovery for the nodes of the distributed storage cluster.
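To make the workflow of steps S10 to S30 easier to follow, a minimal Python sketch of the enqueue-check-classify-store loop is given below. It is an illustration only: the CacheTask structure, the length-based integrity check, the 4 MiB block threshold, and the dictionaries standing in for the database and the distributed cache cluster are assumptions of this sketch, not details specified by the embodiment.

```python
import queue
from dataclasses import dataclass

@dataclass
class CacheTask:
    node: str              # originating node in the distributed storage cluster
    payload: bytes         # cached data carried by the task
    declared_length: int   # length announced by the sending node

BLOCK_THRESHOLD = 4 * 1024 * 1024   # assumed boundary between small and large data blocks

cache_queue: "queue.Queue[CacheTask]" = queue.Queue()   # step S20: the cache queue

def on_cache_task(task: CacheTask) -> None:
    """Step S10/S20: a monitored node reports a cache task; push it onto the queue."""
    cache_queue.put(task)

def process_queue(database: dict, cache_cluster: dict, resend_requests: list) -> None:
    """Steps S20-S30: check each task's integrity, then classify and store the complete ones."""
    task_id = 0
    while not cache_queue.empty():
        task = cache_queue.get()
        if len(task.payload) != task.declared_length:
            resend_requests.append(task.node)      # incomplete: ask the node to resend
            continue
        task_id += 1
        if len(task.payload) >= BLOCK_THRESHOLD:
            cache_cluster[task_id] = task.payload  # large blocks: distributed cache cluster
        else:
            database[task_id] = task.payload       # small blocks: database
```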
In some embodiments, in response to detecting that cache tasks exist on a plurality of nodes, sequentially pushing the cache tasks to the cache queue comprises: in response to detecting that cache tasks exist on a plurality of nodes, allocating an identification number to each cache task and sequentially pushing the cache tasks to the cache queue.
In some embodiments, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster comprises: in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster according to the identification numbers.
In the above embodiments, cache tasks are transmitted over the network in segments, so an identification number is allocated to each cache task that enters the cache queue. When a repaired node needs to recover its cache data after a failure, the cache tasks stored in the database and in the distributed cache cluster can be identified by their identification numbers.
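A short sketch of how such identification numbers could be allocated on enqueue and later used by a repaired node to look up its lost cache tasks is given below; the counter, the dictionary stores, and the function names are illustrative assumptions rather than part of the embodiment.

```python
from itertools import count

_id_sequence = count(1)   # monotonically increasing identification numbers

def enqueue_with_id(task: dict, cache_queue: list) -> int:
    """Allocate an identification number to a cache task as it enters the cache queue."""
    task["id"] = next(_id_sequence)
    cache_queue.append(task)
    return task["id"]

def recover_lost_tasks(lost_ids, database: dict, cache_cluster: dict) -> dict:
    """Step S40: a repaired node fetches each lost task by ID from whichever store holds it."""
    return {task_id: database.get(task_id) or cache_cluster.get(task_id)
            for task_id in lost_ids}
```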
In some embodiments, detecting the integrity of each cache task in the cache queue comprises: detecting the integrity of each cache task in the cache queue by identifying the data length of the cache task and the tag information of its head data and tail data.
In some embodiments, the method further comprises: in response to detecting that a cache task is incomplete, removing the incomplete cache task from the cache queue and notifying the corresponding node to resend it.
In the above embodiments, the cache queue performs cache integrity detection. Specifically, by checking the data length of each cache task and the tag information of its head data and tail data, it can be determined whether the cache task is complete, which ensures the reliability of the cache task. When an incomplete cache task is detected, it is discarded and the corresponding node is notified to resend it.
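The following sketch shows one way such an integrity check could look, assuming each cache task is framed as a head tag, a 4-byte declared length, the payload, and a tail tag; the specific tag bytes and frame layout are assumptions of the sketch rather than the format used by the embodiment.

```python
HEAD_TAG = b"\xaa\x55"   # assumed start-of-task marker
TAIL_TAG = b"\x55\xaa"   # assumed end-of-task marker

def is_complete(frame: bytes) -> bool:
    """Check one cache task frame: head tag, 4-byte declared length, payload, tail tag."""
    overhead = len(HEAD_TAG) + 4 + len(TAIL_TAG)
    if len(frame) < overhead:
        return False
    if not (frame.startswith(HEAD_TAG) and frame.endswith(TAIL_TAG)):
        return False
    declared = int.from_bytes(frame[2:6], "big")   # declared payload length
    return len(frame) - overhead == declared       # actual payload length must match

def screen_queue(tasks, notify_resend):
    """Keep complete tasks; discard incomplete ones and ask the owning node to resend them."""
    complete = []
    for node, frame in tasks:
        if is_complete(frame):
            complete.append((node, frame))
        else:
            notify_resend(node)   # removed from the queue and re-requested from the node
    return complete
```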
In some embodiments, in response to detecting that a cache task is complete, parsing the complete cache task, classifying it based on the parsing result, and sending the classified cache tasks to the database or the distributed cache cluster for storage comprises: in response to detecting that a cache task is complete, parsing the complete cache task according to data block size, classifying parsed cache tasks composed of large data blocks into a first class, and classifying parsed cache tasks composed of small data blocks into a second class; and sending the cache tasks belonging to the first class to the distributed cache cluster for temporary storage, and the cache tasks belonging to the second class to the database for storage.
In this embodiment, a cache task can be parsed through its cache tag information, and a parsed cache task is composed of either large data blocks or small data blocks. For the block size, a threshold range is set in advance on the storage capacity occupied by a data block, and large data blocks and small data blocks fall into different capacity ranges.
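A minimal sketch of this two-class dispatch is shown below; the 4 MiB threshold and the in-memory stand-ins for the database and the distributed cache cluster are illustrative assumptions.

```python
LARGE_BLOCK_THRESHOLD = 4 * 1024 * 1024   # assumed storage-capacity boundary between block classes

def classify(block_sizes):
    """First class if the task is made up of large data blocks, second class otherwise."""
    return "first" if max(block_sizes) >= LARGE_BLOCK_THRESHOLD else "second"

def dispatch(task_id, payload, block_sizes, cache_cluster: dict, database: dict) -> None:
    """Send first-class (large-block) tasks to the distributed cache cluster for temporary
    storage, and second-class (small-block) tasks to the database."""
    if classify(block_sizes) == "first":
        cache_cluster[task_id] = payload
    else:
        database[task_id] = payload
```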
In some embodiments, the method further comprises: in response to a node failing, sending a failure notification to the distributed storage cluster.
In this embodiment, a failure notification flow is provided in order to better manage the cache data of the distributed storage cluster.
In some embodiments, the method further comprises: identifying, through a cache base library, the cache tasks that need to be encrypted among the complete cache tasks, and encrypting them.
In this embodiment, the cache base library identifies the cache tasks that need to be encrypted. If a complete cache task is identified as requiring encryption, it is encrypted; correspondingly, when a cache task to be retrieved by a repaired node is encrypted, it must first be decrypted before the node can obtain it. This further ensures the security of cache task information and facilitates the security management of the distributed storage cluster.
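The sketch below illustrates this encrypt-on-store, decrypt-on-recovery behavior, using the Fernet cipher from the third-party cryptography package; the prefix rules standing in for the cache base library, the key handling, and the store layout are assumptions made for illustration only.

```python
from cryptography.fernet import Fernet   # third-party 'cryptography' package, assumed available

SENSITIVE_PREFIXES = ("user/", "billing/")   # hypothetical rules standing in for the cache base library
_key = Fernet.generate_key()
_cipher = Fernet(_key)

def needs_encryption(task_name: str) -> bool:
    """Stand-in for the cache base library lookup that flags cache tasks requiring encryption."""
    return task_name.startswith(SENSITIVE_PREFIXES)

def store_task(task_name: str, payload: bytes, store: dict) -> None:
    """Encrypt flagged tasks before storing them; store the rest as-is."""
    flagged = needs_encryption(task_name)
    store[task_name] = (flagged, _cipher.encrypt(payload) if flagged else payload)

def fetch_task(task_name: str, store: dict) -> bytes:
    """When a repaired node retrieves an encrypted task, decrypt it before handing it back."""
    flagged, data = store[task_name]
    return _cipher.decrypt(data) if flagged else data
```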
In a second aspect of the embodiments of the present invention, a cache data management system is further provided. Fig. 2 is a schematic diagram of an embodiment of the cache data management system provided by the present invention. As shown in fig. 2, the cache data management system comprises: a cache task monitoring module 10 configured to monitor the cache tasks of each node in the distributed storage cluster; a cache task detection module 20 configured to, in response to detecting that cache tasks exist on a plurality of nodes, sequentially push the cache tasks to a cache queue and detect the integrity of each cache task in the cache queue; a cache task classification module 30 configured to, in response to detecting that a cache task is complete, parse the complete cache task, classify it based on the parsing result, and send the classified cache tasks to a database or a distributed cache cluster for storage; and a cache task acquisition module 40 configured to, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquire the cache tasks lost by the repaired node from the database and/or the distributed cache cluster.
The cache data management system of this embodiment, by monitoring the cache tasks of each node in the distributed storage cluster, sequentially pushing the cache tasks to the cache queue when cache tasks are detected on a plurality of nodes, and then checking the integrity of each cache task in the cache queue, ensures that only complete cache task information is screened out, which effectively mitigates the loss of cache task information caused by network jitter or large delays in the distributed storage cluster. By parsing and classifying the complete cache tasks, cache task information can be stored in parallel, improving data storage efficiency. Ordinarily, when a node fails its cache tasks are lost and cannot be recovered even after the node returns to normal; the invention therefore retrieves the cache tasks lost by the repaired node from different storage devices, ensuring the completeness and stability of fault recovery for the nodes of the distributed storage cluster.
In some embodiments, the cache task monitoring module 10 includes a cache queue module configured to, in response to detecting that cache tasks exist on a plurality of nodes, allocate an identification number to each cache task and push the cache tasks to the cache queue in sequence.
In some embodiments, the cache task acquisition module 40 is configured to, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquire, according to the identification numbers, the cache tasks lost by the repaired node from the database and/or the distributed cache cluster.
In some embodiments, the cache task detection module 20 includes an integrity detection module configured to detect the integrity of each cache task in the cache queue by identifying the data length of the cache task and the tag information of its head data and tail data.
In some embodiments, the cache task classification module 30 is configured to, in response to detecting that a cache task is complete, parse the complete cache task according to data block size, classify parsed cache tasks composed of large data blocks into a first class, and classify parsed cache tasks composed of small data blocks into a second class; and to send the cache tasks belonging to the first class to the distributed cache cluster for temporary storage and the cache tasks belonging to the second class to the database for storage.
In some embodiments, the system further includes a task resending module configured to, in response to detecting that a cache task is incomplete, remove the incomplete cache task from the cache queue and notify the corresponding node to resend it.
In some embodiments, the system further comprises a failure notification module configured to issue a failure notification to the distributed storage cluster in response to a node failing.
In some embodiments, the system further includes an encryption module configured to identify, through the cache base library, the cache tasks that need to be encrypted among the complete cache tasks, and to encrypt them.
In a third aspect of the embodiments of the present invention, a computer-readable storage medium is further provided. Fig. 3 is a schematic diagram of a computer-readable storage medium implementing the cache data management method according to an embodiment of the present invention. As shown in fig. 3, the computer-readable storage medium 3 stores computer program instructions 31 which, when executed by a processor, implement the steps of the method described above, including the following embodiments:
In some embodiments, in response to detecting that cache tasks exist on a plurality of nodes, sequentially pushing the cache tasks to the cache queue comprises: in response to detecting that cache tasks exist on a plurality of nodes, allocating an identification number to each cache task and sequentially pushing the cache tasks to the cache queue.
In some embodiments, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster comprises: in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster according to the identification numbers.
In some embodiments, detecting the integrity of each cache task in the cache queue comprises: detecting the integrity of each cache task in the cache queue by identifying the data length of the cache task and the tag information of its head data and tail data.
In some embodiments, in response to detecting that a cache task is complete, parsing the complete cache task, classifying it based on the parsing result, and sending the classified cache tasks to the database or the distributed cache cluster for storage comprises: in response to detecting that a cache task is complete, parsing the complete cache task according to data block size, classifying parsed cache tasks composed of large data blocks into a first class, and classifying parsed cache tasks composed of small data blocks into a second class; and sending the cache tasks belonging to the first class to the distributed cache cluster for temporary storage, and the cache tasks belonging to the second class to the database for storage.
In some embodiments, the steps further comprise: in response to detecting that a cache task is incomplete, removing the incomplete cache task from the cache queue and notifying the corresponding node to resend it.
In some embodiments, the steps further comprise: in response to a node failing, sending a failure notification to the distributed storage cluster.
In some embodiments, the steps further comprise: identifying, through the cache base library, the cache tasks that need to be encrypted among the complete cache tasks, and encrypting them.
It is to be understood that, where no conflict arises, all of the embodiments, features and advantages set forth above for the cache data management method according to the present invention apply equally to the cache data management system and the storage medium according to the present invention.
In a fourth aspect of the embodiments of the present invention, there is further provided a computer device comprising a memory 402 and a processor 401, the memory storing a computer program which, when executed by the processor, implements the method of any one of the above embodiments.
Fig. 4 is a schematic hardware structure diagram of an embodiment of a computer device for executing a cache data management method according to the present invention. Taking the computer device shown in fig. 4 as an example, the computer device includes a processor 401 and a memory 402, and may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 4 illustrates an example of a connection by a bus. The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the cache data management system. The output device 404 may include a display device such as a display screen.
The memory 402, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the cache data management method in the embodiment of the present application. The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of the cache data management method, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 402 may optionally include memory located remotely from processor 401, which may be connected to local modules via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 401 executes various functional applications of the server and data processing by running nonvolatile software programs, instructions, and modules stored in the memory 402, that is, implements the cache data management method of the above-described method embodiment.
Finally, it should be noted that the computer-readable storage medium (e.g., the memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which can act as external cache memory. By way of example and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items. The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples. Within the spirit of the embodiments of the invention, technical features of the above embodiment or of different embodiments may also be combined, and there are many other variations of the different aspects of the embodiments described above that are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like made within the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A cache data management method, characterized by comprising the following steps:
monitoring the cache tasks of each node in a distributed storage cluster;
in response to detecting that cache tasks exist on a plurality of nodes, sequentially pushing the cache tasks to a cache queue, and detecting the integrity of each cache task in the cache queue;
in response to detecting that a cache task is complete, parsing the complete cache task, classifying it based on the parsing result, and sending the classified cache tasks to a database or a distributed cache cluster for storage; and
in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster.
2. The method according to claim 1, wherein, in response to detecting that cache tasks exist on a plurality of nodes, sequentially pushing the cache tasks to the cache queue comprises:
in response to detecting that cache tasks exist on a plurality of nodes, allocating an identification number to each cache task and sequentially pushing the cache tasks to the cache queue.
3. The method according to claim 2, wherein, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster comprises:
in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquiring the cache tasks lost by the repaired node from the database and/or the distributed cache cluster according to the identification numbers.
4. The method according to claim 1, wherein detecting the integrity of each cache task in the cache queue comprises:
detecting the integrity of each cache task in the cache queue by identifying the data length of the cache task and the tag information of its head data and tail data.
5. The method according to claim 1, wherein, in response to detecting that a cache task is complete, parsing the complete cache task, classifying it based on the parsing result, and sending the classified cache tasks to the database or the distributed cache cluster for storage comprises:
in response to detecting that a cache task is complete, parsing the complete cache task according to data block size, classifying parsed cache tasks composed of large data blocks into a first class, and classifying parsed cache tasks composed of small data blocks into a second class;
sending the cache tasks belonging to the first class to the distributed cache cluster for temporary storage, and sending the cache tasks belonging to the second class to the database for storage.
6. The method of claim 1, further comprising:
in response to detecting that a cache task is incomplete, removing the incomplete cache task from the cache queue and notifying the corresponding node to resend it.
7. The method of claim 1, further comprising:
in response to a node failing, sending a failure notification to the distributed storage cluster.
8. The method of claim 1, further comprising:
identifying, through a cache base library, the cache tasks that need to be encrypted among the complete cache tasks, and encrypting them.
9. A cache data management system, characterized by comprising:
a cache task monitoring module configured to monitor the cache tasks of each node in a distributed storage cluster;
a cache task detection module configured to, in response to detecting that cache tasks exist on a plurality of nodes, sequentially push the cache tasks to a cache queue and detect the integrity of each cache task in the cache queue;
a cache task classification module configured to, in response to detecting that a cache task is complete, parse the complete cache task, classify it based on the parsing result, and send the classified cache tasks to a database or a distributed cache cluster for storage; and
a cache task acquisition module configured to, in response to a failed node among the nodes corresponding to the complete cache tasks being repaired, acquire the cache tasks lost by the repaired node from the database and/or the distributed cache cluster.
10. A computer-readable storage medium, characterized in that it stores computer program instructions which, when executed by a processor, implement the method according to any one of claims 1-8.
CN202110656323.6A 2021-06-11 2021-06-11 Cache data management method, system and storage medium Pending CN113568932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110656323.6A CN113568932A (en) 2021-06-11 2021-06-11 Cache data management method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110656323.6A CN113568932A (en) 2021-06-11 2021-06-11 Cache data management method, system and storage medium

Publications (1)

Publication Number Publication Date
CN113568932A true CN113568932A (en) 2021-10-29

Family

ID=78161999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110656323.6A Pending CN113568932A (en) 2021-06-11 2021-06-11 Cache data management method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113568932A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115190125A (en) * 2022-06-27 2022-10-14 京东科技控股股份有限公司 Monitoring method and device for cache cluster



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination