CN115543938A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium

Info

Publication number
CN115543938A
Authority
CN
China
Prior art keywords
data
cache queue
queue
cleaned
level cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110735667.6A
Other languages
Chinese (zh)
Inventor
高翔 (Gao Xiang)
王挺 (Wang Ting)
宋军 (Song Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110735667.6A
Publication of CN115543938A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/17 - Details of further file system functions
    • G06F16/172 - Caching, prefetching or hoarding of files

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application provides a data processing method and device, an electronic device and a storage medium, relating to the field of blockchain technology. The method comprises the following steps: acquiring target data, where the target data is data acquired from a storage medium other than a cache in response to a corresponding read request in a message queue, and the cache comprises a first-level cache queue and at least one second-level cache queue; if the first-level cache queue is determined to meet a data cleaning condition, determining the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to the data to be cleaned; determining, according to this number, the sequence number of the second-level cache queue that is to store the data to be cleaned, deleting the data to be cleaned from the first-level cache queue, storing it into the second-level cache queue with the corresponding sequence number, and storing the target data into the first-level cache queue. The embodiment of the application can achieve load balance among channels and ultimately reduce the average response time of read requests.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of blockchain technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
When a user first sends a data query request to a database, the database searches for the data in a cache. If the data to be accessed is in the cache (generally called a Cache Hit), the data is read directly from the cache. Otherwise, if the data the user queries is not in the cache, the situation is called a Cache Miss; in this case, the database first reads the data the user needs from a disk and puts it into the cache, and the user then reads the data from the cache.
Flash-based SSDs are a common form of disk. An SSD contains multiple independent channels, and the load between these channels may be unbalanced over a particular period of time: channels containing more hot data (i.e., data that is accessed frequently) carry a heavier load, while channels containing only cold data (data that is accessed infrequently) are rarely accessed. In this case, a read request sent to a heavily loaded channel experiences a long latency.
The conventional cache scrubbing strategy assumes that the miss latency of all data (i.e., the latency incurred when data not stored in the cache must be fetched from the SSD) is the same, and takes the hit rate (hits / (hits + misses), where an access counts as a hit if the accessed data is in the cache) as the main index for measuring performance. This often results in unbalanced load among channels and a higher average response time for read requests.
Disclosure of Invention
Embodiments of the present invention provide a data processing method, apparatus, electronic device and storage medium that overcome the above-mentioned problems or at least partially solve the above-mentioned problems.
In a first aspect, a data processing method is provided, and the method includes:
acquiring target data, wherein the target data is data acquired from a storage medium other than a cache in response to a corresponding read request in a message queue, and the cache comprises a first-level cache queue and at least one second-level cache queue;
if the first-level cache queue is determined to meet the data cleaning condition, determining the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to the data to be cleaned;
determining the serial number of a second-level cache queue for storing the data to be cleaned according to the number, deleting the data to be cleaned from the first-level cache queue, storing the data to be cleaned into the second-level cache queue with the corresponding serial number, and storing the target data into the first-level cache queue;
wherein the sequence number is used to indicate the priority with which the data in the corresponding second-level cache queue is cleared out of the cache; the reference read request is a read request that was stored in the message queue before the read request corresponding to the data to be cleaned and is read after the read request corresponding to the data to be cleaned.
In a second aspect, a data processing apparatus is provided, including:
the target data acquisition module is used for acquiring target data, wherein the target data is data acquired from a storage medium other than a cache in response to a corresponding read request in the message queue, and the cache comprises a first-level cache queue and at least one second-level cache queue;
the data to be cleaned determining module is used for determining the data to be cleaned in the first-level cache queue and the number of the reference read requests corresponding to the data to be cleaned if the first-level cache queue is determined to meet the data cleaning condition;
the data unloading module is used for determining the serial number of a second-level cache queue for storing the data to be cleaned according to the number, deleting the data to be cleaned from the first-level cache queue, storing the data to be cleaned into the second-level cache queue with the corresponding serial number, and storing the target data into the first-level cache queue;
wherein the sequence number is used to indicate the priority with which the data in the corresponding second-level cache queue is cleared out of the cache; the reference read request is a read request that was stored in the message queue before the read request corresponding to the data to be cleaned and is read after the read request corresponding to the data to be cleaned.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the method as provided in the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer program, where the computer program includes computer instructions stored in a computer-readable storage medium, and when a processor of a computer device reads the computer instructions from the computer-readable storage medium, the processor executes the computer instructions, so that the computer device executes the steps of implementing the method provided in the first aspect.
According to the data processing method and apparatus, the electronic device and the storage medium provided by the embodiments of the present invention, target data is acquired, and if the first-level cache queue is determined to meet the data cleaning condition, the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to that data are determined. The number of reference read requests can represent how busy the channel of the storage medium (other than the cache) holding the data to be cleaned is. The sequence number of the second-level cache queue that stores the data to be cleaned is determined according to this number, and the sequence number represents the priority with which the data in the corresponding second-level cache queue is cleared out of the cache. Data in a high-priority second-level cache queue can therefore be deleted from the cache preferentially, which achieves load balance among channels and ultimately reduces the average response time of read requests.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic diagram of the interaction flow by which a file access request reads file data according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a computing environment of an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating a data processing method according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a process of determining a reference read request according to an embodiment of the present application;
FIG. 5 is a diagram illustrating a data structure of a cache according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a first-level cache queue according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 8 is an alternative structural diagram of the distributed system applied to the blockchain system according to the embodiment of the present invention;
FIG. 9 is an alternative block structure according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative and are only for the purpose of explaining the present application and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Several terms referred to in this application will first be introduced and explained:
the block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform can comprise processing modules such as user management, basic service, smart contract and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, including maintaining public/private key generation (account management), key management, and the correspondence between users' real identities and blockchain addresses (authority management); when authorized, it can supervise and audit the transactions of certain real identities and provide rule configuration for risk control (risk-control audit). The basic service module is deployed on all blockchain node devices and used to verify the validity of service requests and record valid requests to storage after consensus is completed; for a new service request, the basic service first performs interface adaptation analysis and authentication processing (interface adaptation), then encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication), and records and stores it. The smart contract module is responsible for contract registration and issuance, contract triggering and contract execution; developers can define contract logic through a programming language and publish it to the blockchain (contract registration), and according to the logic of the contract terms the contract is triggered by a key or other events and executed to complete the contract logic; the module also provides the function of upgrading and cancelling contracts. The operation monitoring module is mainly responsible for deployment, configuration modification, contract setting and cloud adaptation during product release, and for visual output of real-time status during product operation, for example: alarms, monitoring network conditions, and monitoring the health status of node devices.
The platform product service layer provides basic capability and an implementation framework of typical application, and developers can complete block chain implementation of business logic based on the basic capability and the characteristics of the superposed business. The application service layer provides the application service based on the block chain scheme for the business participants to use.
Big data refers to data sets that cannot be captured, managed and processed by conventional software tools within a certain time range; it is a massive, rapidly growing and diversified information asset whose stronger decision-making power, insight and process-optimization capability can only be realized through new processing modes. With the advent of the cloud era, big data has attracted more and more attention, and it requires special techniques to process large amounts of data effectively within a tolerable elapsed time.
Referring to fig. 1, which exemplarily shows the interaction flow by which a file access request reads file data according to an embodiment of the present application, the data processing method provided in the embodiment of the present application may be used in a (local or distributed) file system based on a large-capacity mechanical disk. The file system comprises program instructions for the file access logic and program instructions for the cache. The file access logic refers to the organization of a file from the user's point of view, that is, the data the user can directly process and the structure of that data; it can also be understood as how the data is logically organized in the file. The program instructions of the file access logic are executed by a processor of the terminal. The cache program instructions are used to manage a cache, which may be a memory in the form of random access memory (RAM) or the like. Based on the program instructions of the file access logic, the processor may manage the memory to implement the corresponding functions of the embodiments of the present application.
When a file access request (a read request or a write request) reaches the processor, the file system searches for the corresponding file in the cache. If it is found, the target data can be read directly from the cache. If it is not found, the target data must be read from the disk and stored into the cache, which involves reading from and writing to the disk; once the target data is stored in the cache, it can be read quickly from the cache.
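The following is a minimal sketch of this read path; the cache and disk interfaces here are assumed placeholders, not APIs from the embodiment:

```python
# A sketch of the hit/miss flow described above: check the cache first;
# on a miss, read from disk, populate the cache, then serve the data.
# `cache` and `disk` are hypothetical objects with get/put and read.
def read_file(path, cache, disk):
    data = cache.get(path)
    if data is None:              # cache miss
        data = disk.read(path)    # slow path: fetch from disk
        cache.put(path, data)     # populate the cache for later requests
    return data                   # later requests for `path` hit the cache
```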
The embodiment of the application can be applied to a terminal and is specifically executed by a processor of the terminal. The terminal here may include, but is not limited to: smart phones, tablets, laptops, desktops, and the like. The processor of the terminal can read instructions (such as file access requests) from the memory and the local cache, place them into the instruction register, and issue control instructions to complete the execution of an instruction. However, the processor cannot read programs or data directly from the disk, so the memory serves as the component that communicates directly with the processor: all programs run in memory, and the memory temporarily stores the processor's operating data and the data exchanged with the disk, acting as a bridge between the processor and the disk. When the terminal runs, the processor writes part of the data on the disk into the cache, alleviating the mismatch between the high-speed CPU and slow storage access.
Turning to FIG. 2, a schematic diagram of a computing environment of an embodiment of the present application is illustrated. The host 110 may submit an input/output (I/O) request to the storage controller 120 via the server 130 to access data (e.g., tracks, logical block addresses, units of storage, groups of units (e.g., columns, rows, or arrays of units), sectors, fields, etc.) in the storage device 140.
The storage controller 120 includes one or more processors 1201 and a cache 1202 that caches data for the storage device 140. Processor 1201 may include a single Central Processing Unit (CPU), a core or group of cores on a single CPU, or a group of processing resources on one or more CPUs. The cache 1202 has data buffered therein, which is transmitted between the host 110 and the storage device 140, and the cache 1202 is divided into a first-level cache queue and a second-level cache queue.
The processor 1201 responds to a read request by acquiring target data from the storage device 140. If the first-level cache queue meets the data cleaning condition, it determines the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to that data; it then determines, according to this number, the sequence number of the second-level cache queue that will store the data to be cleaned, deletes the data to be cleaned from the first-level cache queue, stores it into the second-level cache queue with the corresponding sequence number, and stores the target data into the first-level cache queue. The sequence number indicates the priority with which the data in the corresponding second-level cache queue is cleared out of the cache 1202; that is, the clearing order of data in the cache 1202 is determined according to each read request's "first-in, later-out" count (the number of read requests stored in the message queue before the read request corresponding to the data to be cleaned but read after it), which effectively mitigates the load imbalance of the storage device 140. When data requested by other read requests is already stored in the cache 1202, the data is acquired directly from the cache 1202, improving data acquisition efficiency.
Alternatively, some or all of the functionality may be implemented as microcode or firmware in a hardware device in the storage controller 120, for example in an Application Specific Integrated Circuit (ASIC).
The storage may include one or more storage devices 140 known in the art, such as a solid state drive (SSD) composed of solid-state electronic components, NAND memory cells, electrically erasable programmable read-only memory (EEPROM), flash memory, a flash disk, random access memory (RAM) drives, storage-class memory (SCM), phase change memory (PCM), resistive random access memory, spin-transfer-torque memory, conductive-bridging RAM, magnetic hard disk drives, optical disks, tape, and the like. The storage devices 140 may further be configured as an array of devices, such as simple disk bundles, direct-access storage devices, redundant arrays of independent disks, virtualization devices, and the like. Further, the storage devices 140 may include heterogeneous storage devices from different vendors or from the same vendor.
The memory may include any suitable volatile or non-volatile storage.
The server 130 in this embodiment may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
The method executed by the server in the embodiment of the present application may be implemented in a cloud computing mode. Cloud computing is a computing model that distributes computing tasks over a resource pool formed by a large number of computers, so that various application systems can obtain computing power, storage space and information services as needed. The network that provides the resources is called the "cloud". To the user, the resources in the "cloud" appear infinitely expandable, available at any time, obtainable on demand, and paid for per use.
As a basic capability provider of cloud computing, a cloud computing resource pool (called an IaaS (Infrastructure as a Service) platform for short) is established, and multiple types of virtual resources are deployed in the resource pool for external clients to use selectively.
According to logical function division, a PaaS (Platform as a Service) layer can be deployed on the IaaS (Infrastructure as a Service) layer, and a SaaS (Software as a Service) layer on the PaaS layer; SaaS can also be deployed directly on IaaS. PaaS is a platform on which software runs, such as a database or a web container. SaaS is business software of various kinds, such as a web portal or an SMS bulk sender. Generally speaking, SaaS and PaaS are upper layers relative to IaaS.
Alternatively, host 110 may be connected to memory controller 120 through a bus interface (e.g., a Peripheral Component Interconnect (PCI) bus interface and other interfaces known in the art).
The connection between the channels and chips of the storage device 140 has undergone a transition from single-channel to multi-channel: in a single-channel structure all chips are connected to the same channel, while in a multi-channel structure each channel mounts several NAND flash chips. Taking the SSD as an example, performance generally increases in proportion to the number of channels; the more channels there are, the more commands can be read and written concurrently. Chips can share the same channel through command interleaving. This connection scheme has the following problem: when the application load is uneven, chips belonging to different channels have different utilization rates, so some channels are always busy while other channels remain idle because their mounted chips see little use. The channel utilization is therefore uneven, and the read/write performance of the SSD cannot be fully exploited.
The application provides a data processing method, an apparatus, an electronic device and a computer-readable storage medium, which aim to solve the above technical problems in the prior art.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Referring to fig. 3, a schematic flow chart of a data processing method according to an embodiment of the present application is exemplarily shown, and as shown in the drawing, the method includes:
s101, obtaining target data, wherein the target data is obtained from a storage medium except a cache in response to a corresponding read request in a message queue.
When the data requested by a read request is already stored in the cache, it can be read quickly and returned to the sender of the request. When the data requested by a read request is not stored in the cache, it must be acquired from other storage media (such as the storage device shown in fig. 2), in which case reading is slow; therefore, once the data has been acquired, it needs to be stored into the cache.
The cache comprises a first-level cache queue and at least one second-level cache queue; that is, the cache is divided into two levels. Data acquired from other storage media is first stored into the first-level cache queue. Data in the first-level cache queue is continually removed according to a preset rule and stored into a second-level cache queue, and data in the second-level cache queues is in turn continually deleted according to a preset rule, thereby keeping the storage space of the cache in dynamic balance.
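A minimal structural sketch of this two-level layout follows; the class and method names are illustrative assumptions, not identifiers from the patent:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Sketch of a cache with one first-level queue and several
    second-level queues, one per eviction-priority sequence number."""
    def __init__(self, num_secondary: int):
        # OrderedDict preserves insertion order: the last entry plays
        # the MRU end and the first entry the LRU end of each queue.
        self.primary = OrderedDict()
        self.secondary = [OrderedDict() for _ in range(num_secondary)]

    def admit(self, key, data):
        # Data fetched from other storage media enters the first-level
        # queue at its MRU end.
        self.primary[key] = data
        self.primary.move_to_end(key)  # refresh position on re-insert
```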
S102, if the first-level cache queue is determined to meet the data cleaning condition, determining the number of data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to the data to be cleaned.
Specifically, the access frequency of each piece of data in the first-level cache queue is determined; if data whose access frequency is lower than a preset threshold exists in the first-level cache queue, it is determined that the first-level cache queue meets the data cleaning condition; correspondingly, the data in the first-level cache queue whose access frequency is lower than the preset threshold is taken as the data to be cleaned.
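A small sketch of this access-frequency condition, under assumed names:

```python
# The first-level queue meets the cleaning condition when it holds data
# whose access frequency is below a preset threshold, and exactly that
# data becomes the data to be cleaned. `access_freq` maps cached keys
# to access frequencies (an assumed bookkeeping structure).
def find_data_to_clean(access_freq: dict, threshold: float):
    to_clean = [k for k, f in access_freq.items() if f < threshold]
    meets_condition = bool(to_clean)
    return meets_condition, to_clean
```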
During cache cleaning, the cache needs to identify which data comes from an idle channel, but the cache itself does not know the internal structure or parallelism of other storage media. A read request points to a logical address, while parallelism is determined by the physical address, and the mapping between the physical and logical addresses of the storage medium is dynamic, so the prior art cannot tell from the logical address which physical channel the data comes from.
The embodiment of the application observes that the delay experienced by a read request sent to an idle channel is relatively short: once a read request returns quickly, its channel is an idle channel. Specifically, a read request is first added to the tail of the message queue, and when the data corresponding to the read request is returned from the SSD, the read request is deleted from the message queue; that is, read requests enter the queue in order but leave it out of order. For a request R sent to an idle channel, the dwell time in the message queue is short; when request R leaves the queue, there are still many read requests that entered the queue before R whose data has not yet been returned, because those requests were sent to busy channels. Such requests are called reference read requests with respect to request R, and the more reference read requests there are, the more idle the channel of request R is. The reference read requests can therefore serve as a measure of how idle a channel is.
Based on the finding, the embodiment of the application measures the idle degree of the channel according to the number of the reference read requests corresponding to the data. The reference read request in the embodiment of the present application is a read request that is stored in a message queue before a read request corresponding to data to be cleaned and is read after the read request corresponding to the data to be cleaned.
Referring to fig. 4, which exemplarily shows a flow diagram for determining reference read requests according to an embodiment of the present application: each message bit can store one read request; each time a read request is received, it is added to the tail of the queue and its number in the queue is marked. As can be seen in fig. 4, a total of 10 read requests are stored in the message queue in succession, occupying 10 message bits. A message bit marked with the letter "C" indicates that the data corresponding to the read request stored in that bit has been returned from the SSD, and a message bit marked with the letter "W" indicates that the data has not yet been returned from the SSD.
In (a), the data corresponding to the read requests numbered 0 to 2, which entered the message queue earliest, has all been returned from the SSD, while the data corresponding to the read requests numbered 3 to 9 has not. In (b), the data for the read request numbered 9 returns from the SSD before that of the read requests numbered 3 to 8, so the flag of the 10th message bit is updated to "C"; since the data of 6 read requests that entered the message queue earlier (numbers 3 to 8) has not yet been returned, the number of reference read requests for request 9 is 6. In (c), the data for the read request numbered 7 is returned from the SSD, and its number of reference read requests can be determined to be 4 (numbers 3 to 6). In (d), the data for the read request numbered 8 is returned from the SSD; of the read requests that entered the queue earlier, numbers 3 to 6 remain unprocessed, so the number of reference read requests for request 8 is 4.
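A sketch reproducing the counting rule of fig. 4 with assumed names (requests enter the message queue in order and may complete out of order; on completion, a request's reference-read-request count is the number of earlier requests still pending):

```python
class MessageQueue:
    def __init__(self):
        self.pending = []  # request numbers, in arrival order

    def submit(self, req_id):
        self.pending.append(req_id)   # add at the tail of the queue

    def complete(self, req_id):
        # Earlier requests that are still pending are the reference
        # read requests of `req_id`.
        idx = self.pending.index(req_id)
        self.pending.pop(idx)
        return idx

q = MessageQueue()
for i in range(10):
    q.submit(i)
for i in range(3):
    q.complete(i)        # requests 0..2 return first, as in fig. 4(a)
print(q.complete(9))     # 6: requests 3..8 still pending, fig. 4(b)
print(q.complete(7))     # 4: requests 3..6 still pending, fig. 4(c)
print(q.complete(8))     # 4: requests 3..6 still pending, fig. 4(d)
```

The printed counts match the values derived for fig. 4 above.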
S103, determining the serial number of a secondary cache queue for storing the data to be cleaned according to the number, deleting the data to be cleaned from the primary cache queue, storing the data to be cleaned into the secondary cache queue with the corresponding serial number, and storing the target data into the primary cache queue;
before data is stored in a cache, it needs to be judged and whether a cache queue meets a data clearing condition, for example, the data clearing condition may be that a storage space in a first-level cache queue is not enough to store the data, and for example, storage bits in the first-level cache queue are full.
As the foregoing embodiment shows, the greater the number of reference read requests corresponding to a piece of data, the more idle the SSD channel where that data resides. The embodiment of the present application may therefore set up at least one second-level cache queue according to the number of reference read requests; for example, second-level cache queue 1 stores data whose number of reference read requests is 1 to 3, second-level cache queue 2 stores data whose number is 4 to 6, and so on. This yields a correspondence between the sequence number of a second-level cache queue and the load level of an SSD channel, where the sequence number indicates the priority with which the data in the corresponding second-level cache queue is cleared out of the cache. Subsequently, the second-level cache queue corresponding to lightly loaded channels can be deleted preferentially, balancing the load among channels and reducing the average response time of read requests.
According to the data processing method, target data is acquired, and if the first-level cache queue is determined to meet the data cleaning condition, the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to that data are determined. The number of reference read requests can represent how busy the channel of the storage medium (other than the cache) holding the data to be cleaned is. The sequence number of the second-level cache queue that stores the data to be cleaned is determined according to this number, and the sequence number represents the priority with which the data in the corresponding second-level cache queue is cleared out of the cache. Data in a high-priority second-level cache queue can therefore be deleted from the cache preferentially, achieving load balance among channels and ultimately reducing the average response time of read requests.
On the basis of the foregoing embodiments, as an optional embodiment, determining, according to the number, the sequence number of the second-level cache queue storing the data to be cleaned includes:
inputting the number into a preset increasing function to obtain the sequence number, output by that function, of the second-level cache queue storing the data to be cleaned; or
inputting the number into a preset decreasing function to obtain the sequence number, output by that function, of the second-level cache queue storing the data to be cleaned.
It can be understood that if the sequence number of the second-level cache queue is obtained through the preset increasing function, then the larger the sequence number of a second-level cache queue, the more preferentially its data is deleted from the cache; if the sequence number is obtained through the preset decreasing function, then the smaller the sequence number of a second-level cache queue, the more preferentially its data is deleted from the cache.
Specifically, the preset increasing function of the embodiment of the present application may be a logarithmic function whose base is a preset value greater than 1, for example log₂N, where N denotes the number of reference read requests. Taking the data numbered 9 in fig. 4 as an example, the number of reference read requests for that data is 6; computing log₂6 ≈ 2.58 and taking the integer part gives 2, so the data is determined to be stored in the second-level cache queue with sequence number 2.
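A one-line version of this mapping, assuming floor(log₂N) as the sequence-number function:

```python
import math

def secondary_seq(ref_count: int) -> int:
    # Integer part of log2(ref_count); e.g. 6 -> floor(2.58...) = 2.
    return int(math.log2(ref_count)) if ref_count >= 1 else 0

assert secondary_seq(6) == 2   # the fig. 4 example above
assert secondary_seq(1) == 0
assert secondary_seq(8) == 3
```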
Referring to fig. 5, which exemplarily shows a schematic diagram of a data structure of a cache according to an embodiment of the present application: as shown in the drawing, the cache includes a first-level cache queue and n+1 second-level cache queues, and the sequence numbers of the second-level cache queues indicate the priority with which their data is deleted from the cache. In the figure, the larger the sequence number, the higher the priority; the sequence numbers in this embodiment are obtained from a preset increasing function. Data bits in each cache queue (both the first-level and the second-level cache queues) are represented by small rectangular boxes; a filled data bit indicates that data is stored there, and an unfilled data bit indicates that it is empty. It can be seen that data in a cache queue is not necessarily deleted in the order of the data bits.
On the basis of the foregoing embodiments, as an optional embodiment, the data processing method further includes: if the available capacity of the cache is determined to be smaller than a preset threshold, performing data cleaning starting from the non-empty second-level cache queue with the highest priority until the available capacity of the cache is no longer smaller than the preset threshold.
Specifically, if the sequence numbers of the second-level cache queues are obtained from the preset increasing function, data cleaning starts from the non-empty second-level cache queue with the largest sequence number until the available capacity of the cache is not smaller than the preset threshold; or
if the sequence numbers of the second-level cache queues are obtained from the preset decreasing function, data cleaning starts from the non-empty second-level cache queue with the smallest sequence number until the available capacity of the cache is not smaller than the preset threshold.
When a second-level cache queue is cleaned, all of its data may be deleted, or only part of it; this is not specifically limited in the embodiments of the present application.
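A minimal eviction sketch under the increasing-function convention (the largest sequence number has the highest eviction priority; all names are assumed):

```python
def clean_cache(secondary_queues, available, threshold, size_of):
    # Scan second-level queues from the largest sequence number down,
    # evicting from each queue's LRU end until enough space is free.
    for queue in reversed(secondary_queues):
        while queue and available < threshold:
            _key, data = queue.popitem(last=False)  # pop the LRU end
            available += size_of(data)
        if available >= threshold:
            break
    return available
```

With the decreasing-function convention, the scan would simply start from the smallest sequence number instead.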
In an optional embodiment, the acquiring target data according to the embodiment of the present application may include the following steps:
receiving a read request, and storing the read request to a message queue;
when a read request is read from the message queue, if the data corresponding to at least one read request is acquired from a storage medium other than the cache, that data is taken as the target data.
In the embodiment of the present application, a received read request is stored in the message queue. When the data corresponding to a read request is already stored in the cache, it can be obtained quickly; that is, such a request's time in the message queue is much shorter than that of a request whose data must be obtained from a storage medium other than the cache. It therefore does not affect the calculation of the number of reference read requests for data obtained from storage media other than the cache.
On the basis of the above embodiments, the first-level cache queue of the embodiment of the present application includes a most-recently-used MRU end and a least-recently-used LRU end.
In the embodiment of the application, because multiple pieces of data can be stored in the first-level cache queue, the data in the first-level cache queue needs to be ordered. In the first-level cache queue, the data whose most recent access time is closest to the current time is the MRU data, and its position is the MRU end; the data whose most recent access time is furthest from the current time is the LRU data, and its position is the LRU end.
Further, determining data to be cleared in the first-level cache queue includes:
determining at least one data to be cleaned from an LRU end of a first-level cache queue;
storing target data to a first-level cache queue, comprising:
and storing the target data to the MRU end of the first-level cache queue.
Referring to fig. 6, an embodiment of a first-level cache queue 200 of an embodiment of the present application is illustratively shown, having an MRU end 201 and an LRU end 202. The MRU end 201 holds the data most recently added to the first-level cache queue 200 or most recently accessed in it. Deletion starts at the LRU end 202: the data at the LRU end is selected for removal from the first-level cache queue 200. As data is added at the MRU end 201, other data moves down toward the LRU end 202. If there is not enough space to add data at the MRU end 201, data may be deleted from the LRU end 202 to make room for the new data to be added to the first-level cache queue 200.
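The MRU/LRU mechanics of fig. 6 can be sketched with an OrderedDict (an assumption; the patent does not prescribe a particular data structure):

```python
from collections import OrderedDict

lru = OrderedDict()
for k in ("a", "b", "c"):
    lru[k] = f"data-{k}"          # "c" now sits at the MRU end

lru.move_to_end("a")              # accessing "a" promotes it to the MRU end
victim = lru.popitem(last=False)  # eviction takes the LRU end
print(victim)                     # ('b', 'data-b')
```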
On the basis of the above embodiments, as an alternative embodiment, the second-level cache queue includes an MRU end and an LRU end.
The data cleaning is carried out from a second-level cache queue which is not empty and has the highest priority, and the method comprises the following steps:
and clearing data from the LRU end of the second-level cache queue which is not empty and has the highest priority.
Specifically, the data cleaning is performed from the second-level cache queue with the largest sequence number that is not empty, and includes: performing data cleaning from the LRU end of the second-level cache queue with the largest sequence number which is not empty; the data cleaning is carried out from the second-level cache queue with the minimum sequence number which is not empty, and the method comprises the following steps: and performing data cleaning from the LRU end of the second-level cache queue with the minimum sequence number which is not empty.
That is, when an increasing function determines the second-level cache queue storing the data, data is cleared from the LRU end of the non-empty second-level cache queue with the largest sequence number; when a decreasing function determines it, data is cleared from the LRU end of the non-empty second-level cache queue with the smallest sequence number. For example, with an increasing function, if the sequence numbers of the second-level cache queues run up to N, then once the second-level cache queue with sequence number N has been emptied, cleaning proceeds to the queue with sequence number N-1, and so on until the available space of the cache meets the preset condition.
Because the second-level cache queue in which a piece of data is stored depends on the number of reference read requests corresponding to that data, and data continuously moves from the first-level cache queue to the second-level cache queues, deleting only from the highest-priority second-level cache queue could leave data in lower-priority second-level cache queues never deleted. Therefore, the embodiment of the present application further includes:
determining the access frequency of each data in at least one secondary cache queue;
if the data with the access frequency lower than the preset threshold value exists in the at least one second-level cache queue, deleting the data with the access frequency lower than the preset threshold value from the corresponding second-level cache queue, and storing the data into the second-level cache queue with the priority higher than that of the corresponding second-level cache queue.
In this way, dynamic management of the data in the second-level cache queues' storage space can be achieved, further improving the load balance among the channels.
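A sketch of this demotion pass, under the increasing-function convention (a larger sequence number means a higher eviction priority; names assumed):

```python
def demote_cold_data(secondary_queues, access_freq, threshold):
    # Walk from the second-highest sequence number downward so that an
    # item demoted in this pass is not demoted again immediately.
    for seq in range(len(secondary_queues) - 2, -1, -1):
        queue = secondary_queues[seq]
        cold = [k for k in queue if access_freq(k) < threshold]
        for key in cold:
            data = queue.pop(key)
            # Move to the queue with the next-higher eviction priority.
            secondary_queues[seq + 1][key] = data
```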
An embodiment of the present application provides a data processing apparatus, and as shown in fig. 7, the apparatus may include: the system comprises a target data acquisition module 101, a to-be-cleaned data determination module 102 and a data unloading module 103, and specifically:
a target data obtaining module 101, configured to obtain target data, where the target data is data obtained from a storage medium other than a cache in response to a corresponding read request in a message queue, and the cache includes a first-level cache queue and at least one second-level cache queue;
a to-be-cleaned data determining module 102, configured to determine, if it is determined that the primary cache queue meets a data cleaning condition, the data to be cleaned in the primary cache queue and the number of reference read requests corresponding to the data to be cleaned;
the data unloading module 103 is configured to determine, according to the number, a sequence number of a secondary cache queue storing the data to be cleaned, delete the data to be cleaned from the primary cache queue, store the data to be cleaned in the secondary cache queue having a corresponding sequence number, and store the target data in the primary cache queue;
wherein the sequence number is used to indicate the priority with which the data in the corresponding second-level cache queue is cleared out of the cache; the reference read request is a read request that was stored in the message queue before the read request corresponding to the data to be cleaned and is read after the read request corresponding to the data to be cleaned.
The data processing apparatus provided in the embodiment of the present invention specifically executes the processes of the foregoing method embodiments; for details, refer to the contents of the foregoing data processing method embodiments, which are not repeated here. According to the data processing apparatus provided by the embodiment of the invention, target data is acquired, and if the first-level cache queue is determined to meet the data cleaning condition, the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to that data are determined. The number of reference read requests can represent how busy the channel of the storage medium (other than the cache) holding the data to be cleaned is. The sequence number of the second-level cache queue that stores the data to be cleaned is determined according to this number, and the sequence number represents the priority with which the data in the corresponding second-level cache queue is cleared out of the cache. Data in a high-priority second-level cache queue can therefore be deleted from the cache preferentially, achieving load balance among channels and ultimately reducing the average response time of read requests.
On the basis of the foregoing embodiments, as an optional embodiment, the to-be-cleaned data determining module includes:
the first sequence number determining submodule inputs the number to a preset increasing function and obtains the sequence number of a secondary cache queue which is output by the preset increasing function and stores the data to be cleaned; or
And the second sequence number determining submodule inputs the number to the preset decreasing function and obtains the sequence number of the second-level cache queue which is output by the preset increasing function and stores the data to be cleaned.
On the basis of the foregoing embodiments, as an alternative embodiment, the data processing apparatus further includes:
and the clearing module is used for clearing data from a second-level cache queue which is not empty and has the highest priority level until the available capacity of the cache is not less than the preset threshold value if the available capacity of the cache is determined to be less than the preset threshold value.
On the basis of the foregoing embodiments, as an optional embodiment, the target data obtaining module includes:
the request receiving submodule is used for receiving the read request and storing the read request to the message queue;
and the screening module is used for taking the data as target data if the data corresponding to at least one read request is acquired from a storage medium except the cache when the read request is read from the message queue.
On the basis of the above embodiments, as an optional embodiment, the first-level cache queue includes a most-recently-used MRU end and a least-recently-used LRU end;
the to-be-cleaned data determination module comprises:
the data to be cleaned determining submodule is used for determining at least one piece of data to be cleaned from an LRU end of the primary cache queue;
the data unloading module comprises:
and the first data unloading submodule is used for storing the target data to the MRU end of the first-level cache queue.
On the basis of the above embodiments, as an optional embodiment, the preset increasing function is a logarithmic function with a preset value larger than 1 as a base.
On the basis of the above embodiments, as an optional embodiment, the second-level cache queue includes an MRU end and an LRU end;
the data unloading module comprises:
the second data unloading submodule is used for storing the data to be cleaned to the MRU end of the second-level cache queue with the corresponding sequence number;
the cleaning module is specifically configured to: and performing data cleaning from the LRU end of the second-level cache queue which is not empty and has the highest priority.
On the basis of the foregoing embodiments, as an alternative embodiment, the data processing apparatus further includes:
the access frequency determining module is used for determining the access frequency of each data in at least one secondary cache queue;
and the second-level queue updating module is used for deleting the data with the access frequency lower than the preset threshold value from the corresponding second-level cache queue and storing the data into the second-level cache queue with the priority higher than that of the corresponding second-level cache queue if the data with the access frequency lower than the preset threshold value exists in at least one second-level cache queue.
The apparatus related to the embodiment of the present invention may be part of a distributed system formed by a client and a plurality of nodes (computing devices in any form in the access network, such as servers and user terminals) connected through network communication.
Taking a distributed system as an example of a blockchain system, referring to fig. 8, fig. 8 is an optional structural schematic diagram of the distributed system 300 applied to the blockchain system provided in the embodiment of the present invention. It is formed by a plurality of nodes 400 (computing devices in any form in the access network, such as servers and user terminals) and a client 500; a Peer-to-Peer (P2P) network is formed between the nodes, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join and become a node; a node comprises a hardware layer, a middle layer, an operating system layer and an application layer.
Referring to the functions of each node in the block chain system shown in fig. 8, the related functions include:
1) Routing, a basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) The application, which is deployed in a blockchain to implement specific services according to actual business requirements; it records data related to the implemented functions to form record data, carries a digital signature in the record data to indicate the source of the task data, and sends the record data to other nodes in the blockchain system, so that the other nodes add the record data to a temporary block when the source and integrity of the record data are verified successfully.
For example, the services implemented by the application include:
2.1) Wallet, used for providing functions of conducting electronic money transactions, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system; after the other nodes verify it successfully, the record data of the transaction is stored in a temporary block of the blockchain as an acknowledgment that the transaction is valid). Of course, the wallet also supports querying the electronic money remaining at an electronic money address.
2.2) Shared ledger, used for providing functions of storing, querying and modifying account data; it sends the record data of operations on account data to other nodes in the blockchain system, and after the other nodes verify that the record data is valid, the record data is stored in a temporary block as an acknowledgment that the account data is valid, and a confirmation is also sent to the node that initiated the operation.
2.3) Smart contracts, computerized agreements that can enforce the terms of a contract, implemented as code deployed on the shared ledger and executed when certain conditions are met; according to actual business requirements, the code completes automated transactions, such as querying the logistics status of goods purchased by a buyer and transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods. Of course, smart contracts are not limited to contracts for executing trades; they may also execute contracts that process received information.
3) Blockchain, which comprises a series of blocks that are linked to one another in the chronological order of their generation. Once added to the blockchain, a new block cannot be removed, and the blocks record the record data submitted by nodes in the blockchain system.
When the embodiment of the present application is applied to a blockchain system, data will be represented by blocks, when one node in the blockchain system sends recorded data to other nodes in the blockchain system, the one node serves as a sending end, the other nodes serve as receiving ends, and a method for the sending end and the receiving end to perform data processing may specifically refer to the above embodiment, which is not described in detail herein.
Referring to fig. 9, fig. 9 is an alternative schematic diagram of a block structure (Block Structure) provided in the embodiment of the present invention. Each block includes a hash value of the transaction records stored in the block (the hash value of the block) and the hash value of the previous block; the blocks are connected by these hash values to form a blockchain. A block may also include information such as a timestamp of the time of block generation. A blockchain is essentially a decentralized database, a string of data blocks associated by cryptography; each data block contains related information used to verify the validity (anti-counterfeiting) of its information and to generate the next block.
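A minimal sketch of this hash chaining, assuming SHA-256 and JSON serialization (the embodiment does not prescribe a particular hash function, so both choices are illustrative):

    import hashlib
    import json
    import time

    def block_hash(block):
        # Hash the block contents, excluding the hash field itself.
        payload = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def make_block(records, prev_hash):
        # Each block carries its transaction records, the hash value of the
        # previous block, and a generation timestamp; the prev_hash links
        # connect the blocks into a chain.
        block = {"records": records, "prev_hash": prev_hash, "timestamp": time.time()}
        block["hash"] = block_hash(block)
        return block

    genesis = make_block(["genesis"], prev_hash="0" * 64)
    block1 = make_block(["tx-a", "tx-b"], prev_hash=genesis["hash"])
    assert block1["prev_hash"] == genesis["hash"]  # blocks connected by hash values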
An embodiment of the present application provides an electronic device, including a memory and a processor, wherein at least one program is stored in the memory for execution by the processor. When executed by the processor, the program implements the following: obtaining target data; if the first-level cache queue is determined to meet the data cleaning condition, determining the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to the data to be cleaned, where this number can represent how busy the load is on the channel of the storage medium (other than the cache) that holds the data to be cleaned; and determining, according to the number, the sequence number of the second-level cache queue that will store the data to be cleaned, where the sequence number indicates the priority with which data in the corresponding second-level cache queue is cleared out of the cache. In this way, data in a second-level cache queue with a high priority can be deleted from the cache preferentially, load balancing among the channels is achieved, and the average response time of read requests is ultimately reduced.
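A minimal sketch of this demotion flow, under assumptions the embodiment leaves open: a first-level queue backed by an OrderedDict, four second-level queues, a base-2 logarithmic mapping from the reference-read-request count to a sequence number (cf. claim 6), and the convention that queue 0 is cleared first; all identifiers are illustrative:

    import math
    from collections import OrderedDict, deque

    NUM_L2_QUEUES = 4   # illustrative; the embodiment leaves the count open
    L1_CAPACITY = 8     # illustrative data-cleaning condition: L1 is full

    l1 = OrderedDict()                            # first-level queue, MRU at the end
    l2 = [deque() for _ in range(NUM_L2_QUEUES)]  # second-level queues by sequence number

    def seq_number(ref_requests):
        # Increasing logarithmic mapping, clamped to the available queues:
        # a busier channel (more reference read requests) yields a higher
        # sequence number, i.e. a lower clearing priority under the
        # queue-0-first convention.
        return min(int(math.log2(ref_requests + 1)), NUM_L2_QUEUES - 1)

    def insert(key, data, ref_count_of):
        # ref_count_of(k) returns the number of reference read requests for
        # k, i.e. requests stored before k's read request but read after it.
        if len(l1) >= L1_CAPACITY:                       # cleaning condition met
            victim_key, victim = l1.popitem(last=False)  # delete from the LRU end
            q = seq_number(ref_count_of(victim_key))
            l2[q].appendleft((victim_key, victim))       # store at the L2 MRU end
        l1[key] = data                                   # target data to the MRU end

    def clean_one():
        # Under capacity pressure, clear from the LRU end of the non-empty
        # second-level queue with the highest priority (queue 0 first here).
        for queue in l2:
            if queue:
                return queue.pop()
        return None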
In an alternative embodiment, an electronic device is provided. As shown in fig. 10, the electronic device 4000 includes a processor 4001 and a memory 4003. The processor 4001 is coupled to the memory 4003, for example via a bus 4002. Optionally, the electronic device 4000 may further comprise a transceiver 4004. It should be noted that in practical applications the transceiver 4004 is not limited to one, and the structure of the electronic device 4000 does not constitute a limitation on the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules and circuits described in connection with this disclosure. The processor 4001 may also be a combination that performs a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 4002 may include a path that carries information between the aforementioned components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean there is only one bus or one type of bus.
The memory 4003 may be a ROM (Read-Only Memory) or another type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or another type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The memory 4003 is used to store the application code for executing the scheme of the present application, and execution is controlled by the processor 4001. The processor 4001 is configured to execute the application code stored in the memory 4003 to implement what is shown in the foregoing method embodiments.
The present application provides a computer-readable storage medium on which a computer program is stored; when run on a computer, the program enables the computer to execute the corresponding content in the foregoing method embodiments. Compared with the prior art: target data is obtained; if the first-level cache queue is determined to meet the data cleaning condition, the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to the data to be cleaned are determined, this number representing how busy the load is on the channel of the storage medium (other than the cache) that holds the data to be cleaned; and the sequence number of the second-level cache queue that stores the data to be cleaned is determined according to the number, the sequence number indicating the priority with which data in the corresponding second-level cache queue is cleared out of the cache. Data in a second-level cache queue with a high priority can thus be deleted from the cache preferentially, load balancing among the channels is achieved, and the average response time of read requests is ultimately reduced.
Embodiments of the present application provide a computer program comprising computer instructions stored in a computer-readable storage medium. When the processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, the computer device executes the content shown in the foregoing method embodiments, with the same advantages over the prior art as described above.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.

Claims (11)

1. A data processing method, comprising:
acquiring target data, wherein the target data is data acquired, in response to a corresponding read request in a message queue, from a storage medium other than a cache, and the cache comprises a first-level cache queue and at least one second-level cache queue;
if it is determined that the first-level cache queue meets a data cleaning condition, determining the data to be cleaned in the first-level cache queue and the number of reference read requests corresponding to the data to be cleaned;
determining, according to the number, the sequence number of a second-level cache queue for storing the data to be cleaned, deleting the data to be cleaned from the first-level cache queue, storing the data to be cleaned into the second-level cache queue with the corresponding sequence number, and storing the target data into the first-level cache queue;
wherein the sequence number is used to indicate the priority with which the data in the corresponding second-level cache queue is cleared out of the cache; and the reference read requests are read requests that were stored in the message queue before the read request corresponding to the data to be cleaned but are read after it.
2. The data processing method according to claim 1, wherein the determining, according to the number, the sequence number of the second-level cache queue for storing the data to be cleaned comprises:
inputting the number into a preset increasing function to obtain the sequence number, output by the preset increasing function, of the second-level cache queue storing the data to be cleaned; or
inputting the number into a preset decreasing function to obtain the sequence number, output by the preset decreasing function, of the second-level cache queue storing the data to be cleaned.
3. The data processing method according to claim 1 or 2, further comprising:
and if the available capacity of the cache is determined to be smaller than a preset threshold, starting data cleaning from a second-level cache queue which is never empty and has the highest priority until the available capacity of the cache is not smaller than the preset threshold.
4. The data processing method of claim 1, wherein the acquiring target data comprises:
receiving a read request and storing the read request into the message queue;
when the read request is read from the message queue, if the data corresponding to the read request is acquired from a storage medium other than the cache, taking the data as the target data.
5. The data processing method of claim 1, wherein the first-level cache queue comprises a most recently used (MRU) end and a least recently used (LRU) end;
the determining the data to be cleaned in the first-level cache queue comprises:
determining at least one piece of data to be cleaned from the LRU end of the first-level cache queue; and
the storing the target data into the first-level cache queue comprises:
storing the target data to the MRU end of the first-level cache queue.
6. The data processing method of claim 2, wherein the preset increasing function is a logarithmic function whose base is a predetermined value greater than 1.
7. The data processing method of claim 3, wherein the second-level cache queue comprises an MRU end and an LRU end;
the storing the data to be cleaned into the second-level cache queue with the corresponding sequence number comprises:
storing the data to be cleaned to the MRU end of the second-level cache queue with the corresponding sequence number; and
the performing data cleaning starting from the non-empty second-level cache queue with the highest priority comprises:
performing data cleaning from the LRU end of the non-empty second-level cache queue with the highest priority.
8. The data processing method of claim 2, further comprising:
determining the access frequency of each piece of data in the at least one second-level cache queue;
if data whose access frequency is lower than a preset threshold exists in the at least one second-level cache queue, deleting that data from the corresponding second-level cache queue and storing it into a second-level cache queue whose priority is higher than that of the corresponding second-level cache queue.
9. A data processing apparatus, characterized by comprising:
a target data obtaining module, configured to obtain target data, where the target data is data obtained from a storage medium other than a cache in response to a corresponding read request in a message queue, and the cache includes a first-level cache queue and at least one second-level cache queue;
the data to be cleaned determining module is used for determining the number of the data to be cleaned in the first-level cache queue and the number of the reference read requests corresponding to the data to be cleaned if the first-level cache queue is determined to meet the data cleaning condition;
the data unloading module is used for determining the serial number of a secondary cache queue for storing the data to be cleaned according to the number, deleting the data to be cleaned from the primary cache queue, storing the data to be cleaned into the secondary cache queue with the corresponding serial number, and storing the target data into the primary cache queue;
wherein, the sequence number is used for indicating the priority of the data in the corresponding second-level buffer queue being cleared out of the buffer; the reference read request is a read request which is stored in a message queue before the read request corresponding to the data to be cleaned and is read after the read request corresponding to the data to be cleaned.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the data processing method according to any of claims 1 to 8 when executing the program.
11. A computer-readable storage medium, characterized in that it stores computer instructions that cause a computer to perform the steps of the data processing method according to any one of claims 1 to 8.
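A hedged sketch of the frequency-based move in claim 8 above; the deque layout, the convention that queue index 0 has the highest clearing priority, and the threshold handling are illustrative assumptions, not the claimed implementation:

    from collections import deque

    def move_cold_data(l2_queues, access_freq, threshold):
        # l2_queues: second-level queues ordered so that index 0 is cleared
        # first (highest priority, by assumption); access_freq maps a key to
        # its observed access frequency.
        for i in range(1, len(l2_queues)):  # queue 0 already has the highest priority
            for entry in list(l2_queues[i]):
                key, _ = entry
                if access_freq.get(key, 0) < threshold:
                    l2_queues[i].remove(entry)          # delete from its current queue
                    l2_queues[i - 1].appendleft(entry)  # higher-priority queue, MRU end

    queues = [deque(), deque([("k1", "v1"), ("k2", "v2")])]
    move_cold_data(queues, {"k1": 0, "k2": 9}, threshold=5)
    assert ("k1", "v1") in queues[0] and ("k2", "v2") in queues[1]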
CN202110735667.6A 2021-06-30 2021-06-30 Data processing method and device, electronic equipment and storage medium Pending CN115543938A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110735667.6A CN115543938A (en) 2021-06-30 2021-06-30 Data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110735667.6A CN115543938A (en) 2021-06-30 2021-06-30 Data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115543938A (en)

Family

ID=84716710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110735667.6A Pending CN115543938A (en) 2021-06-30 2021-06-30 Data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115543938A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116405377A (en) * 2023-06-07 2023-07-07 太初(无锡)电子科技有限公司 Network state detection method, protocol conversion component, equipment and storage medium
CN116405377B (en) * 2023-06-07 2023-08-08 太初(无锡)电子科技有限公司 Network state detection method, protocol conversion component, equipment and storage medium
CN117076699A (en) * 2023-10-13 2023-11-17 南京奥看信息科技有限公司 Multi-picture acceleration processing method and device and electronic equipment
CN117076699B (en) * 2023-10-13 2023-12-12 南京奥看信息科技有限公司 Multi-picture acceleration processing method and device and electronic equipment
CN117272352A (en) * 2023-11-21 2023-12-22 北京国科天迅科技股份有限公司 Multi-core parallel encryption and decryption method and device, computer equipment and storage medium
CN117272352B (en) * 2023-11-21 2024-01-30 北京国科天迅科技股份有限公司 Multi-core parallel encryption and decryption method and device, computer equipment and storage medium
CN117453435A (en) * 2023-12-20 2024-01-26 北京开源芯片研究院 Cache data reading method, device, equipment and storage medium
CN117453435B (en) * 2023-12-20 2024-03-15 北京开源芯片研究院 Cache data reading method, device, equipment and storage medium
CN117555824A (en) * 2024-01-12 2024-02-13 深圳中微电科技有限公司 Cache storage architecture in GPU simulator based on MVP architecture
CN117555824B (en) * 2024-01-12 2024-07-30 深圳中微电科技有限公司 Cache storage architecture in GPU simulator based on MVP architecture

Similar Documents

Publication Publication Date Title
CN115543938A (en) Data processing method and device, electronic equipment and storage medium
US10884810B1 (en) Workload management using blockchain-based transaction deferrals
US10657526B2 (en) System and method to dynamically setup a private sub-blockchain based on agility of transaction processing
CN107967124A (en) A kind of distribution persistence memory storage system and method
CN103368867B (en) The method and system for the object that cache communicates through network with secondary site
Tai et al. Improving flash resource utilization at minimal management cost in virtualized flash-based storage systems
US10776013B2 (en) Performing workload balancing of tracks in storage areas assigned to processing units
US11151037B2 (en) Using track locks and stride group locks to manage cache operations
US11169927B2 (en) Efficient cache management
US10387309B2 (en) High-performance distributed caching
CN104102693A (en) Object processing method and device
US11914894B2 (en) Using scheduling tags in host compute commands to manage host compute task execution by a storage device in a storage system
CN111737168A (en) Cache system, cache processing method, device, equipment and medium
CN110321331A (en) The object storage system of storage address is determined using multistage hash function
US10884849B2 (en) Mirroring information on modified data from a primary storage controller to a secondary storage controller for the secondary storage controller to use to calculate parity data
US11416176B2 (en) Function processing using storage controllers for load sharing
US20190332471A1 (en) Receiving, at a secondary storage controller, information on modified data from a primary storage controller to use to calculate parity data
US11061676B2 (en) Scatter gather using key-value store
US10606776B2 (en) Adding dummy requests to a submission queue to manage processing queued requests according to priorities of the queued requests
CN102833295B (en) Data manipulation method and device in distributed cache system
CN114661239A (en) Data interaction system and method based on NVME hard disk
CN108628551A (en) A kind of data processing method and device
CN114707134A (en) High-performance password card security management method, device and system
US11126371B2 (en) Caching file data within a clustered computing system
US10452546B2 (en) Cache utility modeling for automated cache configuration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40080360
Country of ref document: HK