WO2023020085A1 - 基于多级缓存的数据处理方法及系统 - Google Patents

基于多级缓存的数据处理方法及系统 Download PDF

Info

Publication number
WO2023020085A1
WO2023020085A1 PCT/CN2022/098669 CN2022098669W WO2023020085A1 WO 2023020085 A1 WO2023020085 A1 WO 2023020085A1 CN 2022098669 W CN2022098669 W CN 2022098669W WO 2023020085 A1 WO2023020085 A1 WO 2023020085A1
Authority
WO
WIPO (PCT)
Prior art keywords
hash ring
level cache
storage node
current
node
Prior art date
Application number
PCT/CN2022/098669
Other languages
English (en)
French (fr)
Inventor
蔡尚志
王盛
Original Assignee
上海哔哩哔哩科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110963175.2A external-priority patent/CN113672524B/zh
Application filed by 上海哔哩哔哩科技有限公司 filed Critical 上海哔哩哔哩科技有限公司
Publication of WO2023020085A1 publication Critical patent/WO2023020085A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes

Definitions

  • the present application relates to the field of computer technology, and in particular to a multi-level cache-based data processing method, system, computer equipment, and computer-readable storage medium.
  • the purpose of the embodiments of the present application is to provide a multi-level cache-based data processing method, system, computer equipment, and computer-readable storage medium to solve the following problem: the current multi-level cache easily causes performance waste.
  • An aspect of the embodiments of the present application provides a multi-level cache-based data processing method used in a server, wherein the server includes a first-level cache and a second-level cache, the first-level cache includes a plurality of first-type storage nodes, and the second-level cache includes a plurality of second-type storage nodes; the data processing method includes: in response to a read request, selecting a target node through the current hash ring; judging whether the target node is any one of the plurality of first-type storage nodes; and
  • if the target node is not any one of the plurality of first-type storage nodes, reading the response data used to respond to the read request from the plurality of second-type storage nodes, and returning the response data.
  • the method further includes: updating the current hash ring at a preset frequency.
  • the current hash ring is the first hash ring
  • the updating of the current hash ring at a preset frequency includes:
  • the current hash ring is updated from the first hash ring to the second hash ring according to the current state of each first-type storage node.
  • the current state of each first-type storage node includes a current disk state value of each first-type storage node
  • the updating of the current hash ring from the first hash ring to the second hash ring according to the current state of each first-type storage node includes:
  • The obtaining of the current disk state value of each first-type storage node includes: acquiring the current disk state value of each first-type storage node according to the IO queue and request delay of that storage node.
  • The obtaining of the current disk state value of each first-type storage node according to its IO queue and request delay includes: judging whether the number of IO queue entries of the i-th storage node is greater than a first preset value, wherein the i-th storage node is any one of the plurality of first-type storage nodes, 1≤i≤M, i is an integer, and M is the number of the plurality of first-type storage nodes;
  • if the number of IO queue entries of the i-th storage node is greater than the first preset value, decrementing the current disk state value of the i-th storage node by 1;
  • if the number of IO queue entries of the i-th storage node is not greater than the first preset value, obtaining the request delay of the i-th storage node; if the request delay is greater than a second preset value, decrementing the current disk state value of the i-th storage node by 1; and, if the request delay is not greater than the second preset value, incrementing the current disk state value of the i-th storage node by 1.
  • The generating of the second hash ring according to the new hash ring and the current disk state values of the first-type storage nodes includes: inserting the node information of the i-th storage node into N positions of the new hash ring according to the current disk state value of the i-th storage node, where the value of N equals the current disk state value of the i-th storage node; the i-th storage node is any one of the plurality of first-type storage nodes, 1≤i≤M, i is an integer, and M is the number of the plurality of first-type storage nodes.
  • In some embodiments, the server further includes a memory; the selecting of a target node through the current hash ring in response to the read request includes: determining, according to the read request, whether the response data exists in the memory; and, if the memory does not contain the response data, determining the target node through the current hash ring.
  • In some embodiments, the method further includes: if the target node is a first-type storage node, determining whether the target node contains the response data; if the target node contains the response data, reading the response data from the target node; and, if the target node does not contain the response data, reading the response data from the second-level cache.
  • the multiple first-type storage nodes correspond to multiple SSD disks; the multiple second-type storage nodes correspond to multiple HDD disks.
  • An aspect of the embodiments of the present application provides a multi-level cache-based data processing system for use in a server, wherein the server includes a first-level cache and a second-level cache, the first-level cache includes a plurality of first-type storage nodes, and the second-level cache includes a plurality of second-type storage nodes; the data processing system includes:
  • a selection module, configured to select a target node through the current hash ring in response to a read request;
  • a judging module, configured to judge whether the target node is any one of the plurality of first-type storage nodes; and
  • a returning module, configured to, if the target node is not any one of the plurality of first-type storage nodes, read response data for responding to the read request from the plurality of second-type storage nodes and return the response data.
  • An aspect of the embodiments of the present application provides a computer device, which includes a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, wherein the processor executes the computer-readable instructions to implement the steps of the above multi-level cache-based data processing method.
  • An aspect of the embodiments of the present application further provides a computer-readable storage medium storing computer-readable instructions, and the computer-readable instructions can be executed by at least one processor, so that the at least one processor executes the steps of the above multi-level cache-based data processing method.
  • the multi-level cache-based data processing method, system, device and computer-readable storage medium provided by the embodiment of the present application:
  • If the target node is not any one of the plurality of first-type storage nodes, it indicates that the following situation has probably occurred: the occupying nodes (invalid nodes) occupy relatively many positions in the current hash ring and the plurality of first-type storage nodes occupy relatively few, which means that the first-level cache is relatively busy at this time. Therefore, the read request is directed to the second-level cache and answered there, instead of entering the queue of the first-level cache and waiting.
  • The solution described in this embodiment can improve the response speed of the read request and reduce the pressure on the first-level cache.
  • FIG. 1 schematically shows an application environment diagram of a data processing method based on a multi-level cache according to an embodiment of the present application
  • FIG. 2 schematically shows a flowchart of a multi-level cache-based data processing method according to Embodiment 1 of the present application
  • FIG. 3 schematically shows a flowchart of newly added steps of a multi-level cache-based data processing method according to Embodiment 1 of the present application;
  • FIG. 4 schematically shows a sub-step diagram of step S300 in FIG. 3;
  • Fig. 5 schematically shows a sub-step diagram of step S402 in Fig. 4;
  • FIG. 6 schematically shows a sub-step diagram of step S500 in FIG. 5;
  • FIG. 7 schematically shows a sub-step diagram of step S404 in FIG. 4;
  • FIG. 8 schematically shows a sub-step diagram of step S200 in FIG. 2;
  • FIG. 9 schematically shows a flow chart of newly added steps of the multi-level cache-based data processing method according to Embodiment 1 of the present application.
  • FIG. 10 schematically shows a flow chart of steps of an application example of a multi-level cache-based data processing method
  • FIG. 11 schematically shows the update flowchart of the hash ring in Fig. 10;
  • Fig. 12 schematically shows the sub-step flowchart of the hash ring update in Fig. 11;
  • Fig. 13 schematically shows the sub-step flow chart of calculating the disk status value in Fig. 12;
  • FIG. 14 schematically shows a block diagram of a multi-level cache-based data processing system according to Embodiment 2 of the present application.
  • FIG. 15 schematically shows a schematic diagram of a hardware architecture of a computer device suitable for implementing a multi-level cache-based data processing method according to Embodiment 3 of the present application.
  • CDN (Content Delivery Network) server: an edge server deployed at various locations. The CDN server can use the load balancing, content distribution and scheduling modules of a central platform to let users obtain the required content nearby, reduce network congestion, and improve user access response speed and hit rate.
  • SSD (Solid State Disk or Solid State Drive): generally refers to a solid-state drive.
  • HDD (Hard Disk Drive): generally refers to a mechanical hard disk.
  • Consistent hashing / hash ring: a special hash algorithm designed for distributed caching. When a server is removed or added, it changes the mapping between existing service requests and the servers that process them as little as possible. A hash ring can represent the disk structure or disk mapping of a distributed cache.
  • Occupancy value (placeholder): occupies one node of the consistent hash ring, represents a null or invalid value, and is used to direct overflow traffic to the HDD disks.
  • Q value: the upper limit of the IO queue (typically 128).
  • L value: the upper limit of the latency of one read/write disk task; exceeding this value is generally considered to affect the user experience (typically 500 milliseconds).
  • 99th percentile: the value at the 99% position when the data is sorted.
  • IO queue: the queue of tasks the disk is currently executing.
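  • To make the last four terms concrete, the following sketch (illustrative Python; the sample numbers and the helper are assumptions, not part of the patent) checks one disk's recent statistics against the Q and L thresholds:

```python
# Sketch: checking one disk's recent I/O statistics against the Q and L limits.
# The sample data, threshold values and helper function are illustrative assumptions.
Q_VALUE = 128    # upper limit of the IO queue
L_VALUE = 500.0  # latency upper limit of one read/write disk task, in milliseconds

def percentile_99(latencies_ms):
    """Return the value at the 99% position of the sorted samples."""
    ordered = sorted(latencies_ms)
    index = min(len(ordered) - 1, int(len(ordered) * 0.99))
    return ordered[index]

recent_latencies = [12.0, 35.5, 8.2, 640.0, 22.1]   # hypothetical samples
io_queue_depth = 96                                  # hypothetical current queue depth

disk_is_busy = (io_queue_depth > Q_VALUE
                or percentile_99(recent_latencies) > L_VALUE)
```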
  • the three-level cache includes: memory, SSD layer and HDD layer.
  • the process of responding to an access request is as follows: traverse the memory to query the response data, if the memory query fails, traverse the SSD layer to find the response data, and if the SSD layer query fails, then query the HDD layer.
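  • The lookup order described above can be sketched as a simple cascade (a minimal sketch assuming each cache layer exposes a get(key) method; none of these objects are defined by the patent):

```python
# Sketch of the baseline three-level lookup: memory first, then the SSD layer,
# then the HDD layer. Each layer is assumed to expose get(key) -> data or None.
def baseline_lookup(key, memory_cache, ssd_layer, hdd_layer):
    data = memory_cache.get(key)
    if data is not None:
        return data
    data = ssd_layer.get(key)      # overflow requests queue behind earlier SSD tasks
    if data is not None:
        return data
    return hdd_layer.get(key)      # final fallback; may still miss
```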
  • There is uncertainty in the performance ratio between the combined HDD layer and the combined SSD layer. The factors that lead to this uncertainty are as follows: the number of HDD disks is much greater than the number of SSD disks; failures cause some SSD disks to be removed; wear causes the performance of SSD disks to degrade; and an HDD layer whose data is carefully arranged may perform no worse than, or even better than, the SSD layer. Traffic that overflows from the SSD layer is queued to wait for earlier tasks to complete, and if the configured timeout is shorter than this waiting time plus the execution time, the performance of the SSD layer is wasted and the user experience suffers.
  • the present application aims to provide a data processing technology based on multi-level caching.
  • Specifically, the traffic requested from the SSD layer can be adjusted dynamically according to the state of the SSD layer, so that the quality of service is guaranteed.
  • For example, the current state of each SSD disk is computed by periodically checking its IO queue and request delay, and the consistent hash ring of the SSD disks is adjusted accordingly (that is, a new hash ring is generated and replaces the old one); the new hash ring is then used for cache control, so as to control the proportion of request traffic that reaches the SSD layer.
  • FIG. 1 schematically shows a schematic diagram of an environment application of a multi-level cache-based data processing method according to Embodiment 1 of the present application.
  • the mobile terminal 2 can access the CDN server 4 through one or more networks.
  • the mobile terminal 2 may be any type of computing device, such as a smartphone, tablet, laptop or the like.
  • the mobile terminal 2 can access the server 4 through a browser or a special program to acquire content and present the content to the user.
  • the content may include video, audio, commentary, text data and/or the like.
  • the one or more networks include various network devices such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, proxy devices, and/or the like.
  • One or more networks may include physical links, such as coaxial cable links, twisted pair cable links, fiber optic links, combinations thereof, and the like.
  • the network may include wireless links, such as cellular links, satellite links, Wi-Fi links, and the like.
  • the CDN server 4 can be configured as a multi-level cache, such as:
  • Second level cache which may include HDD tier and SSD tier.
  • Level 3 cache which may include HDD tier, SSD tier, and memory.
  • the present application provides multiple embodiments to introduce data processing technologies based on multi-level caching. For details, refer to the following.
  • Embodiment 1 provides a multi-level cache-based data processing method, which can be executed in a server (such as the CDN server 4).
  • the server includes a first-level cache and a second-level cache, the first-level cache includes multiple first-type storage nodes, and the second-level cache includes multiple second-type storage nodes.
  • the first level cache and the second level cache provide differentiated services.
  • the first-level cache may preferentially cache data with a higher access frequency.
  • Correspondingly, the first-type storage nodes and the second-type storage nodes provide differentiated read and write performance. For example:
  • the multiple first-type storage nodes correspond to multiple SSD disks
  • the multiple second-type storage nodes correspond to multiple HDD disks.
  • FIG. 2 schematically shows a flowchart of a multi-level cache-based data processing method according to Embodiment 1 of the present application.
  • the multi-level cache-based data processing method may include steps S200-S204, wherein:
  • Step S200 in response to a read request, select a target node through the current hash ring.
  • the user may send the reading request to the server through the mobile terminal, so as to obtain corresponding data from the server.
  • For example, user data, videos, and text may be requested from the server.
  • As an example, selecting a target node through the current hash ring can be achieved through the following steps: (1) obtain the resource identifier of the requested file; (2) calculate a hash value from the resource identifier using the preset hash algorithm; and (3) determine the target node in the current hash ring according to the hash value. (A code sketch of this lookup is given after the notes below.)
  • the storage information of each first-type storage node is inserted and updated into the current hash ring. Therefore, the current hash ring may represent the current disk mapping of each first-type storage node.
  • the current hash ring corresponds to the occupancy node and each first-type storage node.
  • Each first-type storage node may occupy one position (hash slot) of the current hash ring, may occupy several positions at the same time, or may occupy no position at all.
  • the storage nodes of the first type not occupying any positions are not on the current hash ring.
  • the first type of storage node that occupies more positions has a greater probability of being selected.
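  • A minimal sketch of the hash-ring lookup described in the steps above (the use of md5, the ring size and the data layout are assumptions; the patent only requires a preset hash algorithm and a ring whose slots hold either a first-type storage node or an occupying/placeholder value):

```python
import bisect
import hashlib

# An occupying (placeholder) slot holds an invalid value; a hit on it sends the
# request to the second-level cache (HDD layer).
PLACEHOLDER = None

def ring_position(key, ring_size=2**32):
    """Map a string onto the ring. md5 is an assumed choice; the patent only
    speaks of a preset hash algorithm."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % ring_size

def select_target_node(resource_id, ring):
    """ring: sorted list of (position, node) pairs, where node is an SSD-node id
    or PLACEHOLDER. Returns the first entry clockwise from the resource's hash."""
    if not ring:
        return PLACEHOLDER
    positions = [position for position, _ in ring]
    index = bisect.bisect_left(positions, ring_position(resource_id))
    if index == len(ring):   # wrap around the ring
        index = 0
    return ring[index][1]
```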
  • Step S202 judging whether the target node is any one of the plurality of first-type storage nodes.
  • the current hash ring corresponds to occupying nodes.
  • the occupancy node is an invalid node, and its occupancy value is a null value or an invalid value, and is used to guide the overflow traffic (partial read request) to the second-level cache.
  • Step S204 if the target node is not any one of the plurality of first-type storage nodes, read response data for responding to the read request from the plurality of second-type storage nodes, and Return the response data.
  • the target node is not any one of the plurality of first-type storage nodes, it may be explained that the target node is an occupancy node, so the read request may be directed to the second-level cache.
  • the second-level cache will check whether there is the response data inside according to the read request. If there is the response data, it will be returned to the mobile terminal. If there is no response data, return failure or blank information, or import the read request into the third-level cache if the server is configured with a third-level cache.
  • If the target node is not any one of the plurality of first-type storage nodes, it indicates that the following situation may have occurred: the occupying nodes occupy relatively many positions in the current hash ring and the plurality of first-type storage nodes occupy relatively few, which means that the first-level cache is relatively busy at this time. Therefore, the read request is directed to the second-level cache and answered there, instead of entering the queue of the first-level cache and waiting.
  • the method described in this embodiment can improve the response speed of the read request and reduce the pressure of the first-level cache.
  • This embodiment is especially applicable to the following situation: there is uncertainty in the performance ratio between the combined second-level cache and the combined first-level cache, for example because the second-level cache contains far more disks than the first-level cache, some first-type storage nodes are removed because of failures or degraded by wear, or a second-level cache whose data is carefully arranged performs no worse than, or even better than, the first-level cache.
  • In that situation, if the first-level cache layer is busy, read requests would still be queued to wait for earlier tasks to complete, and only after those tasks complete would the first-level cache query and respond, which undoubtedly reduces the response speed.
  • It should be noted that in this embodiment the read request is imported into the second-level cache only when it is determined that the target node is not any first-type storage node of the first-level cache, thereby avoiding wasting the performance of the first-level cache.
  • For example, this avoids the performance waste caused by the following approach: setting a timeout for importing requests into the second-level cache, where the timeout may be shorter than the waiting time plus the execution time, so that the performance of the first-level cache layer is wasted.
  • the method may further include: updating the current hash ring at a preset frequency.
  • the latest status of each first-type storage node in the first-level cache is obtained by periodically updating the current hash ring, thereby improving the effectiveness of selection. For example, the more idle the first type of storage node, the more space it occupies in the current hash ring, and thus the easier it is to be selected.
  • Correspondingly, the busier a first-type storage node is, the fewer positions it occupies in the current hash ring, and the harder it is to be selected.
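  • A minimal sketch of the periodic update, assuming a rebuild_ring() function that returns a freshly built ring (see the construction sketch later in this section) and an illustrative 10-second period; the patent only specifies "a preset frequency":

```python
import threading
import time

def start_ring_refresher(cache_state, rebuild_ring, period_seconds=10.0):
    """Swap in a freshly built hash ring once per period. cache_state is assumed
    to hold the ring in a current_ring attribute read by the request path."""
    def loop():
        while True:
            time.sleep(period_seconds)
            cache_state.current_ring = rebuild_ring()   # atomic reference swap
    threading.Thread(target=loop, daemon=True).start()
```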
  • the current hash ring is the first hash ring.
  • Updating the current hash ring at a preset frequency can be achieved through the following operation: step S300, updating the current hash ring from the first hash ring to a second hash ring according to the current state of each first-type storage node.
  • In this embodiment, the current state represents the working state of the corresponding first-type storage node, and updating the current hash ring means updating the probability of each first-type storage node being selected according to its working state.
  • the current state of each first-type storage node includes a current disk state value of each first-type storage node.
  • The disk state value of each first-type storage node may be set to an initial value of 50, or another value, when the server starts the service. As shown in FIG. 4, updating the current hash ring from the first hash ring to the second hash ring according to the current state of each first-type storage node can be achieved through the following operations:
  • step S400, constructing a new hash ring; step S402, obtaining the current disk state value of each first-type storage node; step S404, generating the second hash ring according to the new hash ring and the current disk state values of the first-type storage nodes; and step S406, updating the current hash ring from the first hash ring to the second hash ring.
  • It should be noted that a placeholder value is also inserted into the new hash ring after step S400 is performed.
  • a new hash ring can be effectively constructed so that target nodes can be selected more effectively according to actual conditions.
  • As an example, as shown in FIG. 5, obtaining the current disk state value of each first-type storage node can be achieved through the following operation: step S500, acquiring the current disk state value of each first-type storage node according to the IO queue and request delay of that storage node.
  • In this embodiment, the working status of each first-type storage node (for example, whether it is busy) can be determined from its IO queue, request delay, and so on, so that the current disk state value of each first-type storage node is updated more effectively and a more effective new hash ring (that is, the second hash ring) is obtained, which guarantees the effectiveness of the second hash ring and improves response efficiency.
  • As shown in FIG. 6: step S600, judging whether the number of IO queue entries of the i-th storage node is greater than a first preset value, wherein the i-th storage node is any one of the plurality of first-type storage nodes, 1≤i≤M, i is an integer, and M is the number of the plurality of first-type storage nodes; step S602, if the number of IO queue entries of the i-th storage node is greater than the first preset value, decrementing the current disk state value of the i-th storage node by 1; step S604, if the number of IO queue entries of the i-th storage node is not greater than the first preset value, obtaining the request delay of the i-th storage node; step S606, if the request delay of the i-th storage node is greater than a second preset value, decrementing the current disk state value of the i-th storage node by 1; and step S608, if the request delay of the i-th storage node is not greater than the second preset value, incrementing the current disk state value of the i-th storage node by 1.
  • the first preset value may be 100
  • the second preset value may be 0.
  • the current disk status value of the i-th storage node may increase by 1, decrease by 1 or remain unchanged. This self-increment or self-decrement is performed on the basis of the disk status value of the previous cycle. Due to this gradual adjustment based on the previous cycle, the stability of the current hash ring and the stability of data processing are ensured.
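  • The per-cycle adjustment of steps S600-S608 can be sketched as follows (the default thresholds reuse the Q and L values of the application example, 128 and 500 ms, and the 0..100 clamp follows the initial value of 50 and the limits given later; these concrete numbers are assumptions where the claims leave the preset values open):

```python
def update_disk_state(state_value, io_queue_depth, p99_delay_ms,
                      queue_limit=128, delay_limit_ms=500.0):
    """One update cycle for a first-type storage node (steps S600-S608):
    decrement when the IO queue or the request delay exceeds its preset value,
    otherwise increment; the result is clamped to the 0..100 range used in the
    application example (initial value 50)."""
    if io_queue_depth > queue_limit:
        state_value -= 1          # queue too deep: the node is busy
    elif p99_delay_ms > delay_limit_ms:
        state_value -= 1          # delay too high: the node is busy
    else:
        state_value += 1          # the node looks idle: give it more traffic
    return max(0, min(100, state_value))
```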
  • As an example, as shown in FIG. 7, generating the second hash ring according to the new hash ring and the current disk state values of the first-type storage nodes can be realized by the following operation: step S700, inserting the node information of the i-th storage node into N positions of the new hash ring according to the current disk state value of the i-th storage node, where the value of N equals the current disk state value of the i-th storage node; the i-th storage node is any one of the plurality of first-type storage nodes, 1≤i≤M, i is an integer, and M is the number of the plurality of first-type storage nodes. In this embodiment, the larger the current disk state value of the i-th storage node, the more idle it is; it therefore occupies more positions in the hash ring and is more likely to be selected when a read request is received, so that it is fully used to provide the response service.
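  • A minimal sketch of step S700, reusing ring_position() and PLACEHOLDER from the lookup sketch above; deriving a slot position from "node_id#k" is an illustrative convention, not something the patent specifies:

```python
def build_second_hash_ring(disk_state_values):
    """Generate the second hash ring: one placeholder entry plus, for every
    first-type storage node, N entries where N equals its current disk state
    value (step S700). disk_state_values maps node id -> current state value."""
    ring = [(ring_position("placeholder#0"), PLACEHOLDER)]
    for node_id, state_value in disk_state_values.items():
        for k in range(state_value):                     # N = current disk state value
            ring.append((ring_position(f"{node_id}#{k}"), node_id))
    ring.sort(key=lambda entry: entry[0])
    return ring
```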
  • the server is configured with a three-level cache, including memory, a first-level cache, and a second-level cache.
  • As shown in FIG. 8, selecting a target node through the current hash ring in response to the read request can be achieved through the following operations: step S800, determining, according to the read request, whether the response data exists in the memory; and step S802, if the memory does not contain the response data, determining the target node through the current hash ring.
  • the data with the highest access frequency can be stored in the memory, so that the server can respond to the user's read request faster.
  • As an example, as shown in FIG. 9, the method may further include: step S900, if the target node is a first-type storage node, determining whether the target node contains the response data; step S902, if the target node contains the response data, reading the response data from the target node; and step S904, if the target node does not contain the response data, reading the response data from the second-level cache.
  • In this embodiment, when the target node does not contain the response data, the read request is imported into the second-level cache and the second-level cache responds, so as to meet the user's needs and improve the user experience.
  • the CDN server includes a memory, an SSD layer, and an HDD layer.
  • the memory may include a volatile storage medium with high-speed access performance, such as RAM and other media.
  • the SSD layer is composed of multiple SSD disks.
  • The HDD layer is composed of multiple mechanical (HDD) disks.
  • S1000 The CDN server starts a read-write service, and initializes each SSD disk of the SSD layer.
  • the disk state value of each SSD disk may be set to an initial value of 50 when starting the service. It should be noted that the disk state values of the respective SSD disks may be used as a basis for updating the hash value later.
  • the CDN server receives the read request sent by the user.
  • step S1004 The CDN server determines whether there is response data in the memory according to the read request. If there is the response data in the memory, go to step S1006, otherwise go to step S1008.
  • S1006 Read the response data from the memory, and return the response data to the user. The process ends.
  • the CDN server determines the target node through the current hash ring.
  • the current hash ring is used to represent the disk structure or disk mapping of the CDN server.
  • S1010 The CDN server judges whether the target node is an SSD disk.
  • If yes, go to step S1012; otherwise, go to step S1016.
  • the hash ring includes a occupying node, and if the target node is a occupying node, it is determined that the target node is not an SSD disk.
  • S1012 The CDN server determines whether the target node has response data.
  • If yes, go to step S1014; otherwise, go to step S1016.
  • S1014 The CDN server reads the response data from the target node and returns the response data. The process ends.
  • S1016 The CDN server queries whether the HDD layer contains the response data. If yes, go to step S1018; otherwise, a failure response is returned and the process ends.
  • S1018 The CDN server reads the response data from the HDD layer and returns the response data. The process ends.
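  • Putting steps S1002-S1018 together, the read path of this application example can be sketched as follows (illustrative Python building on the earlier sketches; the cache objects and node identifiers are assumptions):

```python
def handle_read_request(resource_id, memory_cache, ssd_nodes, hdd_layer, current_ring):
    """Sketch of steps S1002-S1018. ssd_nodes maps a node id to an object with
    get(key); a placeholder hit (target is PLACEHOLDER) goes straight to the HDD
    layer, as does an SSD miss."""
    data = memory_cache.get(resource_id)                       # S1004 / S1006
    if data is not None:
        return data
    target = select_target_node(resource_id, current_ring)     # S1008
    if target is not PLACEHOLDER and target in ssd_nodes:      # S1010
        data = ssd_nodes[target].get(resource_id)              # S1012 / S1014
        if data is not None:
            return data
    return hdd_layer.get(resource_id)                          # S1016 / S1018; None means a failed response
```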
  • the CDN server starts the read and write service, and the current hash ring is updated at a preset cycle.
  • the operation is as follows:
  • S1100 Set the disk state value of each SSD disk to an initial value of 50.
  • S1102 Update the hash ring once per preset period, so as to refresh the current hash ring.
  • As shown in FIG. 12, one update of the current hash ring may include the following steps:
  • S1200 Construct a new hash ring.
  • S1202 Insert a placeholder value into the new hash ring.
  • S1204 Obtain the SSD disk list.
  • S1206 Take the i-th SSD disk, where i is initially 1 and i is a positive integer.
  • S1208 Calculate the disk state value of the i-th SSD disk.
  • S1210 Assign the disk state value of the i-th SSD disk to k, where k is a variable. The larger the k value here, the more times the disk information of the i-th SSD disk is inserted into the new hash ring, and the greater the probability of the disk being selected.
  • S1212 Determine whether k is greater than 0. If yes, go to step S1214; otherwise, go to step S1218.
  • S1214 Insert the disk information (such as a disk identifier) of the i-th SSD disk into the new hash ring.
  • S1216 Decrement k by 1 and return to step S1212.
  • S1218 Determine whether the i-th SSD disk is the last SSD disk in the SSD disk list.
  • If not, update the value of i and return to step S1206. If yes, go to step S1220.
  • S1220 After the disk information of each SSD disk in the SSD disk list has been inserted into the new hash ring through steps S1206-S1218, load the resulting final new hash ring (that is, the second hash ring) as the current hash ring.
  • calculating the disk state value of the i-th SSD disk in step S1208 may include the following steps:
  • S1300 Obtain the IO queue of the i-th SSD disk, such as the average IO queue within a period of time.
  • S1302 Determine whether the number of IO queues is greater than the Q value.
  • The Q value is the preset upper limit of the IO queue.
  • If yes, go to step S1310; otherwise, go to step S1304.
  • S1304 Obtain the 99th-percentile delay of the i-th SSD disk.
  • S1306 Determine whether the 99th percentile delay of the i-th SSD disk is greater than the L value.
  • the L value is a preset upper limit value of the delay of a read/write disk task.
  • If yes, go to step S1310; otherwise, go to step S1308.
  • S1308 Increment the disk state value of the i-th SSD disk by 1 to obtain the current disk state value. It should be noted that the maximum value of the current disk state value is 100.
  • S1310 Decrement the disk state value of the i-th SSD disk by 1 to obtain the current disk state value. It should be noted that the minimum value of the current disk state value is 0.
  • FIG. 14 schematically shows a block diagram of a multi-level cache-based data processing system according to Embodiment 2 of the present application.
  • the data processing system is used in a server, and the server includes a first-level cache and a second-level cache, the first-level cache includes a plurality of first-type storage nodes, and the second-level cache includes a plurality of second-type Storage nodes.
  • the multi-level cache-based data processing system can be divided into one or more program modules, and one or more program modules are stored in a storage medium and executed by one or more processors to complete the embodiments of the present application .
  • the program module referred to in the embodiment of the present application refers to a series of computer-readable instruction segments capable of accomplishing specific functions. The following description will specifically introduce the functions of each program module in this embodiment.
  • the multi-level cache-based data processing system 1400 may include a selection module 1410, a judgment module 1420 and a return module 1430, wherein:
  • a selection module 1410 configured to select a target node through the current hash ring in response to the read request
  • a judging module 1420 configured to judge whether the target node is any one of the plurality of first-type storage nodes
  • a returning module 1430, configured to, if the target node is not any one of the plurality of first-type storage nodes, read response data for responding to the read request from the plurality of second-type storage nodes and return the response data.
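  • As a rough illustration of how these three modules could cooperate (the class, method names and constructor are assumptions; the patent only defines the modules' responsibilities):

```python
class MultiLevelCacheDataProcessingSystem:
    """Illustrative wiring of the three program modules of Embodiment 2."""

    def __init__(self, selection_module, judgment_module, return_module):
        self.selection_module = selection_module  # picks a target node via the current hash ring
        self.judgment_module = judgment_module    # checks whether the target is a first-type node
        self.return_module = return_module        # reads and returns the response data

    def handle(self, read_request):
        target = self.selection_module.select(read_request)
        if not self.judgment_module.is_first_type(target):
            return self.return_module.read_from_second_level(read_request)
        return self.return_module.read_from_target(target, read_request)
```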
  • the system further includes a hash ring update module (not shown), the hash ring update module is configured to: update the current hash ring at a preset frequency.
  • the current hash ring is the first hash ring; the hash ring update module is also used for:
  • the current hash ring is updated from the first hash ring to the second hash ring according to the current state of each first-type storage node.
  • In some embodiments, the current state of each first-type storage node includes a current disk state value of each first-type storage node; the hash ring update module is further configured to: construct a new hash ring; insert a placeholder value into the new hash ring; obtain the current disk state value of each first-type storage node; generate the second hash ring according to the new hash ring and the current disk state values of the first-type storage nodes; and update the current hash ring from the first hash ring to the second hash ring.
  • In some embodiments, the hash ring update module is further configured to: acquire the current disk state value of each first-type storage node according to the IO queue and request delay of that storage node.
  • In some embodiments, the hash ring update module is further configured to: judge whether the number of IO queue entries of the i-th storage node is greater than a first preset value, wherein the i-th storage node is any one of the plurality of first-type storage nodes, 1≤i≤M, i is an integer, and M is the number of the plurality of first-type storage nodes; if the number of IO queue entries of the i-th storage node is greater than the first preset value, decrement the current disk state value of the i-th storage node by 1; if the number of IO queue entries of the i-th storage node is not greater than the first preset value, obtain the request delay of the i-th storage node; if the request delay is greater than a second preset value, decrement the current disk state value of the i-th storage node by 1; and, if the request delay is not greater than the second preset value, increment the current disk state value of the i-th storage node by 1.
  • In some embodiments, the hash ring update module is further configured to: insert the node information of the i-th storage node into N positions of the new hash ring according to the current disk state value of the i-th storage node, where the value of N equals the current disk state value of the i-th storage node; the i-th storage node is any one of the plurality of first-type storage nodes, 1≤i≤M, i is an integer, and M is the number of the plurality of first-type storage nodes.
  • In some embodiments, the server further includes a memory; the selection module 1410 is further configured to: determine, according to the read request, whether the response data exists in the memory; and, if the memory does not contain the response data, determine the target node through the current hash ring.
  • In some embodiments, the returning module 1430 is further configured to: if the target node is a first-type storage node, determine whether the target node contains the response data; if the target node contains the response data, read the response data from the target node; and, if the target node does not contain the response data, read the response data from the second-level cache.
  • the multiple first-type storage nodes correspond to multiple SSD disks; the multiple second-type storage nodes correspond to multiple HDD disks.
  • FIG. 15 schematically shows a schematic diagram of a hardware architecture of a computer device 10000 suitable for implementing a multi-level cache-based data processing method according to Embodiment 3 of the present application.
  • the computer device 10000 can be the server 4 or a part of the CDN server 4 .
  • the computer device 10000 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions.
  • it may be a rack server, a blade server, a tower server, or a cabinet server (including an independent server, or a server cluster composed of multiple servers) and the like.
  • the computer device 10000 at least includes, but is not limited to, a memory 10010, a processor 10020, and a network interface 10030 that can communicate with each other through a system bus, wherein:
  • the memory 10010 includes at least one type of computer-readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc.
  • the memory 10010 may be an internal storage module of the computer device 10000 , such as a hard disk or memory of the computer device 10000 .
  • the memory 10010 can also be an external storage device of the computer device 10000, such as a plug-in hard disk equipped on the computer device 10000, a smart memory card (Smart Media Card, referred to as SMC), a secure digital (Secure Digital (referred to as SD) card, flash memory card (Flash Card) and so on.
  • the memory 10010 may also include both an internal storage module of the computer device 10000 and an external storage device thereof.
  • the memory 10010 is generally used to store the operating system installed in the computer device 10000 and various application software, such as program codes of a data processing method based on multi-level cache.
  • the memory 10010 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 10020 may be a central processing unit (Central Processing Unit, CPU for short), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments.
  • the processor 10020 is generally used to control the overall operation of the computer device 10000 , such as performing control and processing related to data interaction or communication with the computer device 10000 .
  • the processor 10020 is configured to run program codes stored in the memory 10010 or process data.
  • the network interface 10030 may include a wireless network interface or a wired network interface, and the network interface 10030 is generally used to establish a communication link between the computer device 10000 and other computer devices.
  • the network interface 10030 is used to connect the computer device 10000 with an external terminal through a network, and establish a data transmission channel and a communication link between the computer device 10000 and an external terminal.
  • the network can be Intranet, Internet, Global System of Mobile Communication (GSM for short), Wideband Code Division Multiple Access (WCDMA for short), 4G network , 5G network, Bluetooth (Bluetooth), Wi-Fi and other wireless or wired networks.
  • FIG. 15 only shows a computer device having components 10010-10030, but it should be understood that implementing all of the illustrated components is not a requirement and that more or fewer components may alternatively be implemented.
  • In this embodiment, the multi-level cache-based data processing method stored in the memory 10010 can also be divided into one or more program modules, which are executed by one or more processors (the processor 10020 in this embodiment) to complete the embodiments of the present application.
  • the present application also provides a computer-readable storage medium used in a server, wherein the server includes a first-level cache and a second-level cache, the first-level cache includes a plurality of first-type storage nodes, and the The second-level cache includes multiple second-type storage nodes.
  • the computer-readable storage medium has computer-readable instructions stored thereon, and when the computer-readable instructions are executed by the processor, the following steps are implemented:
  • the target node is not any one of the plurality of first-type storage nodes, read the response data used to respond to the read request from the plurality of second-type storage nodes, and return the response data.
  • the computer-readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), Magnetic Memory, Magnetic Disk, Optical Disk, etc.
  • the computer-readable storage medium may be an internal storage unit of a computer device, such as a hard disk or a memory of the computer device.
  • the computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk equipped on the computer device, a smart memory card (Smart Media Card, referred to as SMC), a secure digital ( Secure Digital (referred to as SD) card, flash memory card (Flash Card), etc.
  • the computer-readable storage medium may also include both the internal storage unit of the computer device and its external storage device.
  • the computer-readable storage medium is generally used to store the operating system and various application software installed on the computer device, such as the program code of the multi-level cache-based data processing method in the embodiment.
  • the computer-readable storage medium can also be used to temporarily store various types of data that have been output or will be output.
  • Obviously, those skilled in the art should understand that the modules or steps of the above embodiments of the present application can be implemented by general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices.
  • Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be executed in an order different from that described herein, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module.
  • In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

一种基于多级缓存的数据处理方法,用于服务器中,其中,所述服务器包括第一级缓存和第二级缓存,所述第一级缓存包括多个第一类存储节点,所述第二级缓存包括多个第二类存储节点;所述数据处理方法包括:响应于读取请求,通过当前哈希环选择一个目标节点(S200);判断所述目标节点是否是所述多个第一类存储节点中的任意一个(S202);若所述目标节点不是所述多个第一类存储节点中的任意一个,则从所述多个第二类存储节点中读取用于响应所述读取请求的响应数据,并返回所述响应数据(S204)。所述方法可以提高所述读取请求的响应速度和降低第一级缓存的压力。

Description

基于多级缓存的数据处理方法及系统
本申请申明2021年8月20日递交的申请号为202110963175.2、名称为"基于多级缓存的数据处理方法及系统"的中国专利申请的优先权，该中国专利申请的整体内容以参考的方式结合在本申请中。
技术领域
本申请涉及计算机技术领域,尤其涉及一种基于多级缓存的数据处理方法、***、计算机设备和计算机可读存储介质。
背景技术
随着互联网和计算机技术的发展,数据规模越来越大。为应对大规模的数据写入或读取,人们开始使用多级缓存技术,将不同访问频次的数据存储在不同层级缓存中。鉴于访问频次、读写速度和存储设备成本等因素,不同层级缓存一般采用不同类型的存储设备。本发明人发现,当前多级缓存容易造成性能浪费。
发明内容
本申请实施例的目的是提供一种基于多级缓存的数据处理方法、***、计算机设备及计算机可读存储介质,用于解决以下问题:当前多级缓存容易造成性能浪费。
本申请实施例的一个方面提供了一种基于多级缓存的数据处理方法,用于服务器中,其中,所述服务器包括第一级缓存和第二级缓存,所述第一级缓存包括多个第一类存储节点,所述第二级缓存包括多个第二类存储节点;所述数据处理方法包括:
响应于读取请求,通过当前哈希环选择一个目标节点;
判断所述目标节点是否是所述多个第一类存储节点中的任意一个;及
若所述目标节点不是所述多个第一类存储节点中的任意一个,则从所述多个第二类存储节点中读取用于响应所述读取请求的响应数据,并返回所述响应数据。
可选的,还包括:以预设频率更新所述当前哈希环。
可选的,所述当前哈希环为第一哈希环;
所述以预设频率更新所述当前哈希环,包括:
根据各个第一类存储节点的当前状态,将所述当前哈希环从所述第一哈希环更新为第二哈希环。
可选的,所述各个第一类存储节点的当前状态包括所述各个第一类存储节点的当前磁盘状态值;
所述根据各个第一类存储节点的当前状态,将所述当前哈希环从所述第一哈希环更新为第二哈希环,包括:
构造一个新哈希环;
根据所述新哈希环和所述各个第一类存储节点的当前磁盘状态值,生成所述第二哈希环;及
将所述当前哈希环从所述第一哈希环更新为所述第二哈希环。
可选的,所述获取所述各个第一类存储节点的当前磁盘状态值,包括:
根据所述各个第一类存储节点的IO队列和请求延时,获取所述各个第一类存储节点的当前磁盘状态值。
可选的,所述根据所述各个第一类存储节点的IO队列和请求延时,获取所述各个第一类存储节点的当前磁盘状态值,包括:
判断第i个存储节点的IO队列的数量是否大于第一预设值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量;
若所述第i个存储节点的IO队列的数量大于所述第一预设值,则对所述第i个存储节点的当前磁盘状态值自减1;
若所述第i个存储节点的IO队列的数量不大于所述第一预设值,则获取所述第i个存储节点的请求延时;
若所述第i个存储节点的请求延时大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值自减1;
若所述第i个存储节点的请求延时不大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值增1。
可选的,所述根据所述新哈希环和所述各个第一类存储节点的当前磁盘状态值,生成所述第二哈希环,包括:
根据第i个存储节点的当前磁盘状态值,将所述第i个存储节点的节点信息分别***到所述新哈希的N个位置处,所述N的值等于所述第i个存储节点的当前磁盘状态值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量。
可选的,所述服务器还包括内存;所述响应于读取请求,通过当前哈希环选择一个目标节点,包括:
根据所述读取请求,确定所述内存中是否有所述响应数据;及
如果所述内存中没有所述响应数据,则通过所述当前哈希环确定所述目标节点。
可选的,还包括:
若所述目标节点是所述第一类存储节点,则确定所述目标节点中是否有所述响应数据;
若所述目标节点中有所述响应数据,则从所述目标节点中读取所述响应数据;
若所述目标节点中没有所述响应数据,则从所述第二级缓存中读取所述响应数据。
可选的,所述多个第一类存储节点对应为多个SSD磁盘;所述多个第二类存储节点对应为多个HDD磁盘。
本申请实施例的一个方面又提供了一种基于多级缓存的数据处理系统，用于服务器中，其中，所述服务器包括第一级缓存和第二级缓存，所述第一级缓存包括多个第一类存储节点，所述第二级缓存包括多个第二类存储节点；所述数据处理系统包括：
选择模块,用于响应于读取请求,通过当前哈希环选择一个目标节点;
判断模块,用于判断所述目标节点是否是所述多个第一类存储节点中的任意一个;及
返回模块,用于若所述目标节点不是所述多个第一类存储节点中的任意一个,则从所述多个第二类存储节点中读取用于响应所述读取请求的响应数据,并返回所述响应数据。
本申请实施例的一个方面又提供了一种计算机设备,所述计算机设备包括存储器、处 理器以及存储在存储器上并可在处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时用于实现如上述基于多级缓存的数据处理方法的步骤。
本申请实施例的一个方面又提供了一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机可读指令,所述计算机可读指令可被至少一个处理器所执行,以使所述至少一个处理器执行如上述基于多级缓存的数据处理方法的步骤。
本申请实施例提供的基于多级缓存的数据处理方法、***、设备及计算机可读存储介质:
若所述目标节点不是所述多个第一类存储节点中的任意一个,则说明大概出现以下情况:在所述当前哈希环中的占位节点(无效节点)占据的位置偏多,所述多个第一类存储节点占据的位置偏少,即说明此时所述第一级缓存基于比较忙碌的状态。因此,在此时将所述读取请求导向到所述第二级缓存中,由所述第二级缓存进行响应,而不是让读取请求进入所述第一级缓存的队列中进行排队。本实施例所述的方案,可以提高所述读取请求的响应速度和降低第一级缓存的压力。
附图说明
图1示意性示出了根据本申请实施例的基于多级缓存的数据处理方法的应用环境图;
图2示意性示出了根据本申请实施例一的基于多级缓存的数据处理方法的流程图;
图3示意性示出了根据本申请实施例一的基于多级缓存的数据处理方法的新增步骤流程图;
图4示意性示出了图3中的步骤S300的子步骤图;
图5示意性示出了图4中的步骤S402的子步骤图;
图6示意性示出了图5中的步骤S500的子步骤图;
图7示意性示出了图4中的步骤S404的子步骤图;
图8示意性示出了图2中的步骤S200的子步骤图;
图9示意性示出了根据本申请实施例一的基于多级缓存的数据处理方法的新增步骤流程图;
图10示意性示出了的基于多级缓存的数据处理方法的应用示例的步骤流程图;
图11示意性示出了图10中的哈希环的更新流程图;
图12示意性示出了图11中的哈希环更新的子步骤流程图;
图13示意性示出了图12中的计算磁盘状态值的子步骤流程图;
图14示意性示出了根据本申请实施例二的基于多级缓存的数据处理系统的框图；及
图15示意性示出了根据本申请实施例三的适于实现基于多级缓存的数据处理方法的计算机设备的硬件架构示意图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
需要说明的是,在本申请实施例中涉及“第一”、“第二”等的描述仅用于描述目的,而 不能理解为指示或暗示其相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。另外,各个实施例之间的技术方案可以相互结合,但是必须是以本领域普通技术人员能够实现为基础,当技术方案的结合出现相互矛盾或无法实现时应当认为这种技术方案的结合不存在,也不在本申请要求的保护范围之内。
在本申请的描述中,需要理解的是,步骤前的数字标号并不标识执行步骤的前后顺序,仅用于方便描述本申请及区别每一步骤,因此不能理解为对本申请的限制。
以下为本申请的术语解释:
CDN(Content Delivery Network,内容分发网络)服务器,为部署在各地的边缘服务器。CDN服务器可以通过中心平台的负载均衡、内容分发、调度等功能模块,使用户就近获取所需内容,降低网络拥塞,提高用户访问响应速度和命中率。
SSD(Solid State Disk或Solid State Drive):一般指固态硬盘。
HDD(Hard Disk Drive):一般指机械硬盘。
一致性哈希/哈希环:是一种特殊的哈希算法,目的是解决分布式缓存的问题。在移除或者添加一个服务器时,能够尽可能小地改变已存在的服务请求与处理请求服务器之间的映射关系。哈希环可表示分布式缓存的磁盘结构或磁盘映射。
占位值:占了一致性哈希环的一个节点,表示空值或者无效值,用来将溢出流量导到HDD盘中。
Q值:IO队列的上限值(一般为128)。
L值:一个读/写磁盘任务的延时上限,一般认为超过这个值会影响用户体验(一般为500毫秒)。
99分位:数据排序中在99%位置上的数值。
IO队列:磁盘当前正在执行的任务队列。
随着互联网和计算机技术的发展,数据规模越来越大。为应对大规模的数据写入或读取,人们开始使用多级缓存技术,将不同访问频次的数据存储在不同层级缓存中。鉴于访问频次、读写速度和存储设备成本等因素,不同层级缓存一般采用不同类型的存储设备。本发明人发现,当前多级缓存容易造成性能浪费。
以本发明人了解的三级缓存为例,其包括:内存、SSD层和HDD层。
响应一个访问请求的流程如下:遍历内存查询响应数据,在内存查询失败的情况下则遍历SSD层找到响应数据,在SSD层查询失败的情况下则查询HDD层。
上述流程有以下问题:
组合出的HDD层和组合出的SSD层性能比例存在不确定性。导致不确定的因素如下:HDD磁盘数量远多于SSD磁盘数量、故障导致部分SSD磁盘被下掉、磨损导致SSD磁盘的性能下降以及数据经过精心排布的HDD层的性能并不弱于甚至强于SSD层的性能。SSD层溢出的流量将排队等待前面的任务完成,而如果设置的超时小于等待加上执行的时间,将导致SSD层的性能浪费,影响用户体验。
有鉴于此,本申请旨在提供基于多级缓存的数据处理技术。具体的,可以根据SSD层的状态,动态调整请求到SSD层的流量,从而使服务质量得到保障。例如,通过周期检测各个SSD磁盘的IO队列和请求延时计算出各个SSD磁盘的当前状态,以此来调整SSD磁盘的一致性哈希环(即生成新哈希环,替换旧哈希环),并利用新哈希环来进行缓存控制, 达到控制SSD层的请求流量占比。
图1示意性示出了根据本申请实施例一的基于多级缓存的数据处理方法的环境应用示意图。如图所示,移动终端2可以通过有一个或多个网络访问CDN服务器4。
移动终端2可以是任意类型的计算设备,诸如智能手机、平板设备、膝上型计算机等。移动终端2可以通过浏览器或专门程序访问服务器4获取内容并给用户呈现所述内容。所述内容可以包括视频,音频,评论,文本数据和/或类似物。
所述一个或多个网络包括各种网络设备,例如路由器,交换机,多路复用器,集线器,调制解调器,网桥,中继器,防火墙,代理设备和/或类似。一个或多个网络可以包括物理链路,例如同轴电缆链路,双绞线电缆链路,光纤链路,其组合等。网络可以包括无线链路,诸如蜂窝链路,卫星链路,Wi-Fi链路等。
所述CDN服务器4可以被配置为多级缓存,如:
二级缓存,其可以包括HDD层和SSD层。
三级缓存,其可以包括HDD层、SSD层和内存。
本申请提供了多个实施例介绍基于多级缓存的数据处理技术,具体参照下文。
实施例一
本实施例一提供了一种基于多级缓存的数据处理方法,该方法可执行在服务器中(如CDN服务器4)中。所述服务器包括第一级缓存和第二级缓存,所述第一级缓存包括多个第一类存储节点,所述第二级缓存包括多个第二类存储节点。
作为示例,所述第一级缓存和所述第二级缓存提供差异化服务。为提高服务效率和质量,相对于所述第二级缓存,所述第一级缓存可以优先缓存较高访问频次的数据。相应的,第一类存储节点和第二级存储节点提供差异化读写性能。例如:
所述多个第一类存储节点对应为多个SSD磁盘;
所述多个第二类存储节点对应为多个HDD磁盘。
图2示意性示出了根据本申请实施例一的基于多级缓存的数据处理方法的流程图。如图2所示,该基于多级缓存的数据处理方法可以包括步骤S200~S204,其中:
步骤S200,响应于读取请求,通过当前哈希环选择一个目标节点。
用户可以通过移动终端向所述服务器发送所述读取请求,以从所述服务器获取相应的数据。举例来说,可以向所述服务器请求获取用户数据、视频,以及文本等。
作为示例,通过当前哈希环选择一个目标节点,可以通过如下步骤实现:(1)获取所述请求文件的资源标识;(2)根据所述资源标识和预设哈希算法进行计算,得到哈希值;(3)根据所述哈希值在所述当前哈希环确定所述目标节点。
在本实施例中，各个第一类存储节点的存储信息被插入并更新到所述当前哈希环中。因此，所述当前哈希环可以表示所述各个第一类存储节点当前的磁盘映射。
具体的,所述当前哈希环对应占位节点和各个第一类存储节点。每个第一存储节点可以占用所述当前哈希环的其中一个位置(哈希槽),可能同时占用多个位置,当然也可能没有占用任何位置。没有占用任何位置的第一类存储节点不在所述当前哈希环上。占用位置越多的第一类存储节点,则选中的概率越大。
步骤S202,判断所述目标节点是否是所述多个第一类存储节点中的任意一个。
如前文所述,所述当前哈希环对应有占位节点。所述占位节点为无效节点,其占位值 为空值或无效值,用于将溢出流量(部分读取请求)导向所述第二级缓存。
步骤S204,若所述目标节点不是所述多个第一类存储节点中的任意一个,则从所述多个第二类存储节点中读取用于响应所述读取请求的响应数据,并返回所述响应数据。
若所述目标节点不是所述多个第一类存储节点中的任意一个,则可以说明所述目标节点为占位节点,因此可以将所述读取请求导向到所述第二级缓存中。所述第二级缓存会根据所述读取请求,查询其内部是否有所述响应数据。如果有所述响应数据,则返回给移动终端。如果没有所述响应数据,则返回失败或空白信息,或在所述服务器配置有第三级缓存的情况下将所述读取请求导入第三级缓存。
本实施例提供的基于多级缓存的数据处理方法:
若所述目标节点不是所述多个第一类存储节点中的任意一个,则说明大概出现以下情况:在所述当前哈希环中的占位节点占据的位置偏多,所述多个第一类存储节点占据的位置偏少,即说明此时所述第一级缓存基于比较忙碌的状态。因此,在此时将所述读取请求导向到所述第二级缓存中,由所述第二级缓存进行响应,而不是让读取请求进入所述第一级缓存的队列中进行排队。本实施例所述的方法,可以提高所述读取请求的响应速度和降低第一级缓存的压力。
本实施例尤其适用于以下情况:
组合出的第二级缓存和组合出的第一级缓存性能比例存在不确定性。导致不确定的因素如下:第二级缓存的磁盘数量远多于第一级缓存的磁盘数量、因为故障导致部分第一类存储节点被剔除、因磨损导致部分第一类存储节点的性能下降,以及数据经过精心排布的第二级缓存的性能并不弱于甚至强于第一级缓存的性能。
在该种情况下,若第一级缓存层忙碌时,读取请求仍然排队等待前面的任务完成,并在前面的任务完成之后,由第一级缓存查询和响应,无疑会降低响应速度。
面对这种情况,可以直接导入到第二级缓存中,由第二级缓存快速响应。
需要说明的是,本实施例是在确定目标节点不是第一级缓存中的任意一个第一类存储节点的情况下,才将读取请求导入到第二级缓存中,从而避免第一级缓存的性能浪费。例如以下这种方式导致的性能浪费:设置导入第二级缓存的超时时间,该超时时间可能小于等待加上执行的时间,导致第一级缓存层的性能浪费。
以下提供一些可选方案。
作为示例,所述方法还可以包括:以预设频率更新所述当前哈希环。在本实施例中,通过周期性地更新所述当前哈希环,得到所述第一级缓存中各个第一类存储节点的最新情况,提高选择的有效性。举例来说,越空闲的第一类存储节点,在所述当前哈希环中的占位增加,从而越容易在被选中。相应的,越忙碌的存第一类储节点,在所述当前哈希环中的占位减少,从而越难在被选中。
作为示例,所述当前哈希环为第一哈希环。如图3所示,所述以预设频率更新所述当前哈希环,可以通过以下操作实现:步骤S300,根据各个第一类存储节点的当前状态,将所述当前哈希环从所述第一哈希环更新为第二哈希环。本实施例中,所述当前状态可以为表示相应第一类存储节点的工作状态,更新所述当前哈希环,即:根据各个第一类存储节点的工作状态更新各个第一类存储节点被选中的概率。
作为示例,所述各个第一类存储节点的当前状态包括所述各个第一类存储节点的当前磁盘状态值。各个第一类存储节点的磁盘状态值,在服务器启动服务时可以为初始值50或 其他值。如图4所示,所述根据各个第一类存储节点的当前状态,将所述当前哈希环从所述第一哈希环更新为第二哈希环,可以通过如下操作实现:步骤S400,构造一个新哈希环;步骤S402,获取所述各个第一类存储节点的当前磁盘状态值;步骤S404,根据所述新哈希环和所述各个第一类存储节点的当前磁盘状态值,生成所述第二哈希环;及步骤S406,将所述当前哈希环从所述第一哈希环更新为所述第二哈希环。需要说明的是,所述新哈希环还***有占位值在执行步骤S400之后。本实施例中,可以有效地构建完成一个新哈希环,以便能够根据实际情况更加有效地选择目标节点。
作为示例,如图5所示,所述获取所述各个第一类存储节点的当前磁盘状态值,可以通过如下操作实现:步骤S500,根据所述各个第一类存储节点的IO队列和请求延时,获取所述各个第一类存储节点的当前磁盘状态值。本实施例中,可以基于IO队列、请求延时等确定各个第一类存储节点的工作状态(如,是否处于忙碌状态),从而更有效地更新各个第一类存储节点的当前磁盘状态值,得到更加有效的新哈希环(即,第二哈希环),保障第二哈希环的有效性,提升响应效率。
作为示例,如图6所示,所述根据所述各个第一类存储节点的IO队列和请求延时,获取所述各个第一类存储节点的当前磁盘状态值,可以通过如下操作实现:步骤S600,判断第i个存储节点的IO队列的数量是否大于第一预设值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量;步骤S602,若所述第i个存储节点的IO队列的数量大于所述第一预设值,则对所述第i个存储节点的当前磁盘状态值自减1;步骤S604,若所述第i个存储节点的IO队列的数量不大于所述第一预设值,则获取所述第i个存储节点的请求延时;步骤S606,若所述第i个存储节点的请求延时大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值自减1;步骤S608,若所述第i个存储节点的请求延时不大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值增1。在本实施例中,若所述第i个存储节点的初始磁盘状态被设置为50,则所述第一预设值可以为100,第二预设值可以为0。基于上文内容,当前哈希环的每次更新,所述第i个存储节点的当前磁盘状态值可能自增1、自减1或不变。这个自增或自减在上一个周期的磁盘状态值的基础上进行的。由于这种基于上一周期的渐进式调整,确保了当前哈希环的稳定性和数据处理稳定性。
作为示例,如图7所示,所述根据所述新哈希环和所述各个第一类存储节点的当前磁盘状态值,生成所述第二哈希环,可以通过如下操作实现:步骤S700,根据第i个存储节点的当前磁盘状态值,将所述第i个存储节点的节点信息分别***到所述新哈希的N个位置处,所述N的值等于所述第i个存储节点的当前磁盘状态值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量。本实施例中,第i个存储节点的当前磁盘状态值越大,说明第i个存储节点越空闲,因此,第i个存储节点在所述哈希环中占据更多的位置。当接收到一个读取请求时,第i个存储节点则更容易被选中,以充分被用于提供响应服务。
作为示例,所述服务器配配置有三级缓存,即包括内存、第一级缓存、第二级缓存。如图8所示,所述响应于读取请求,通过当前哈希环选择一个目标节点,可以通过如下操作实现:步骤S800,根据所述读取请求,确定所述内存中是否有所述响应数据;及步骤S802,如果所述内存中没有所述响应数据,则通过所述当前哈希环确定所述目标节点。在本实施例中,可以将最高访问频次的数据存储于所述内存中,以使本服务器能够更快地响应用户 的读取请求。
作为示例,如图9所示,所述方法还可以包括:步骤S900,若所述目标节点是所述第一类存储节点,则确定所述目标节点中是否有所述响应数据;步骤S902,若所述目标节点中有所述响应数据,则从所述目标节点中读取所述响应数据;步骤S904,若所述目标节点中没有所述响应数据,则从所述第二级缓存中读取所述响应数据。在本实施例中,在目标节点没有所述响应数据的情形下,将所述读取请求导入到第二级缓存中,有第二级缓存进行响应,以满足用于需求,提高用户体验。
为方便理解,以下结合图10提供一个应用示例:
该应用示例用于CDN服务器中，该CDN服务器包括内存、SSD层、HDD层。其中，内存可以包括具有高速存取性能的易失性存储介质，如RAM等介质。所述SSD层由多个SSD磁盘组合而成。所述HDD层由多个机械磁盘组合而成。
S1000:CDN服务器启动读写服务,并对所述SSD层的各个SSD磁盘进行初始化。
例如:可以在启动服务时将所述各个SSD磁盘的磁盘状态值设置为初始值50。需要说明的是,所述各个SSD磁盘的磁盘状态值可以作为后面更新哈希值的依据。
S1002:CDN服务器接收用户发送的读取请求。
S1004:CDN服务器根据所述读取请求,确定所述内存中是否有响应数据。若所述内存中是否有所述响应数据,则进入步骤S1006,否则进入步骤S1008。
S1006:从所述内存中读取所述响应数据,并将所述响应数据返回给用户。流程结束。
S1008:CDN服务器通过当前哈希环确定目标节点。
所述当前哈希环,用于表示CDN服务器的磁盘结构或磁盘映射。
S1010:CDN服务器判断所述目标节点是否是SSD磁盘。
如果是,进入步骤S1012,否则进入步骤S1016。
所述哈希环中包括占位节点,若所述目标节点为占位节点,则判定所述目标节点不是SSD磁盘。
S1012:CDN服务器判断所述目标节点是否有响应数据。
如果是,进入步骤S1014,否则进入步骤S1016。
S1014:CDN服务器从所述目标节点读取所述响应数据,并返回所述响应数据。流程结束。
S1016:CDN服务器查询HDD层是否有响应数据。
如果是,进入步骤S1018,否则返回响应失败,流程结束。
S1018:CDN服务器从HDD层读取所述响应数据,并返回所述响应数据。流程结束。
如图11所示,CDN服务器启动读写服务,当前哈希环以预设周期更新,操作如下:
S1100:将所述各个SSD磁盘的磁盘状态值设置为初始值50。
S1102:以预设周期更新一次哈希环,用于更新所述当前哈希环。
如图12所示,以所述当前哈希环的其中一次更新为例,其可以包括如下步骤:
S1200:构造一个新哈希环。
S1202：将一个占位值插入到该新哈希环。
S1204:获取SSD磁盘列表。
S1206:取第i个SSD磁盘,i的初始为1,i为正整数。
S1208:计算第i个SSD磁盘的磁盘状态值。
S1210:将第i个SSD磁盘的磁盘状态值赋值给k,k为变量。此处k值越大,表示第i个SSD磁盘的磁盘信息***到这个新哈希环的次数越多,磁盘被选中的概率越大。
S1212:判断k是否大于0。若是,进入步骤S1214,否则进入步骤S1218。
S1214：将第i个SSD磁盘的磁盘信息（如磁盘标识等）插入到所述新哈希环中。
S1216:k自减1,并返回步骤S1212。
S1218:判断第i个SSD磁盘是否是SSD磁盘列表中的最后一个SSD磁盘。
如果不是,更新i值,并返回步骤S1206。如果是,则进入步骤S1220。
S1220：通过步骤S1206～S1218将所述SSD磁盘列表中的各个SSD磁盘的磁盘信息插入到所述新哈希环之后，将得到的最终的新哈希环（即第二哈希环），载入为当前哈希环。
如图13所示,步骤S1208中的计算第i个SSD磁盘的磁盘状态值,可以包括如下步骤:
S1300:获取第i个SSD磁盘的IO队列,如一段时间内的平均IO队列。
S1302:判断IO队列的数量是否大于Q值。Q值为预设设置的IO队列的上限值。
如果是,进入步骤S1310,否则进入步骤S1304。
S1304:获取第i个SSD磁盘的99分位延时。
S1306:判断第i个SSD磁盘的99分位延时是否大于L值。L值为预先设置的一个读/写磁盘任务的延时上限值。
如果是,进入步骤S1310,否则进入还S1308。
S1308:第i个SSD磁盘的磁盘状态值自增1,得到当前磁盘状态值。
需要说明的是,当前磁盘状态值的最大值为100。
S1310:第i个SSD磁盘的磁盘状态值自减1,得到当前磁盘状态值。
需要说明的是,当前磁盘状态值的最小值为0。
实施例二
图14示意性示出了根据本申请实施例二的基于多级缓存的数据处理系统的框图。该数据处理系统用于服务器中，所述服务器包括第一级缓存和第二级缓存，所述第一级缓存包括多个第一类存储节点，所述第二级缓存包括多个第二类存储节点。该基于多级缓存的数据处理系统可以被分割成一个或多个程序模块，一个或者多个程序模块被存储于存储介质中，并由一个或多个处理器所执行，以完成本申请实施例。本申请实施例所称的程序模块是指能够完成特定功能的一系列计算机可读指令段，以下描述将具体介绍本实施例中各程序模块的功能。
如图14所示，该基于多级缓存的数据处理系统1400可以包括选择模块1410、判断模块1420和返回模块1430，其中：
选择模块1410,用于响应于读取请求,通过当前哈希环选择一个目标节点;
判断模块1420,用于判断所述目标节点是否是所述多个第一类存储节点中的任意一个;及
返回模块1430,用于若所述目标节点不是所述多个第一类存储节点中的任意一个,则从所述多个第二类存储节点中读取用于响应所述读取请求的响应数据,并返回所述响应数据。
可选的，所述系统还包括哈希环更新模块（未图示），所述哈希环更新模块用于：以预设频率更新所述当前哈希环。
可选的,所述当前哈希环为第一哈希环;所述哈希环更新模块还用于:
根据各个第一类存储节点的当前状态,将所述当前哈希环从所述第一哈希环更新为第二哈希环。
可选的,所述各个第一类存储节点的当前状态包括所述各个第一类存储节点的当前磁盘状态值;所述哈希环更新模块还用于:
构造一个新哈希环;
将一个占位值插入到所述新哈希环中；
获取所述各个第一类存储节点的当前磁盘状态值;
根据所述新哈希环和所述各个第一类存储节点的当前磁盘状态值,生成所述第二哈希环;及
将所述当前哈希环从所述第一哈希环更新为所述第二哈希环。
可选的,所述哈希环更新模块还用于:
根据所述各个第一类存储节点的IO队列和请求延时,获取所述各个第一类存储节点的当前磁盘状态值。
可选的,所述哈希环更新模块还用于:
判断第i个存储节点的IO队列的数量是否大于第一预设值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量;
若所述第i个存储节点的IO队列的数量大于所述第一预设值,则对所述第i个存储节点的当前磁盘状态值自减1;
若所述第i个存储节点的IO队列的数量不大于所述第一预设值,则获取所述第i个存储节点的请求延时;
若所述第i个存储节点的请求延时大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值自减1;
若所述第i个存储节点的请求延时不大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值增1。
可选的,所述哈希环更新模块还用于:
根据第i个存储节点的当前磁盘状态值，将所述第i个存储节点的节点信息分别插入到所述新哈希环的N个位置处，所述N的值等于所述第i个存储节点的当前磁盘状态值；其中，所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点，1≤i≤M，i为整数，M为所述多个第一类存储节点的数量。
可选的,所述服务器还包括内存;所述选择模块1410还用于:
根据所述读取请求,确定所述内存中是否有所述响应数据;及
如果所述内存中没有所述响应数据,则通过所述当前哈希环确定所述目标节点。
可选的,所述返回模块1430还用于:
若所述目标节点是所述第一类存储节点,则确定所述目标节点中是否有所述响应数据;
若所述目标节点中有所述响应数据,则从所述目标节点中读取所述响应数据;
若所述目标节点中没有所述响应数据,则从所述第二级缓存中读取所述响应数据。
可选的,所述多个第一类存储节点对应为多个SSD磁盘;所述多个第二类存储节点对应为多个HDD磁盘。
实施例三
图15示意性示出了根据本申请实施例三的适于实现基于多级缓存的数据处理方法的计算机设备10000的硬件架构示意图。本实施例中,计算机设备10000可以为服务器4或者作为CDN服务器4的一部分。本实施例中,计算机设备10000是一种能够按照事先设定或者存储的指令,自动进行数值计算和/或信息处理的设备。例如,可以是机架式服务器、刀片式服务器、塔式服务器或机柜式服务器(包括独立的服务器,或者多个服务器所组成的服务器集群)等。如图15所示,计算机设备10000至少包括但不限于:可通过***总线相互通信链接存储器10010、处理器10020、网络接口10030。其中:
存储器10010至少包括一种类型的计算机可读存储介质,可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等。在一些实施例中,存储器10010可以是计算机设备10000的内部存储模块,例如该计算机设备10000的硬盘或内存。在另一些实施例中,存储器10010也可以是计算机设备10000的外部存储设备,例如该计算机设备10000上配备的插接式硬盘,智能存储卡(Smart Media Card,简称为SMC),安全数字(Secure Digital,简称为SD)卡,闪存卡(Flash Card)等。当然,存储器10010还可以既包括计算机设备10000的内部存储模块也包括其外部存储设备。本实施例中,存储器10010通常用于存储安装于计算机设备10000的操作***和各类应用软件,例如基于多级缓存的数据处理方法的程序代码等。此外,存储器10010还可以用于暂时地存储已经输出或者将要输出的各类数据。
处理器10020在一些实施例中可以是中央处理器(Central Processing Unit,简称为CPU)、控制器、微控制器、微处理器、或其他数据处理芯片。该处理器10020通常用于控制计算机设备10000的总体操作,例如执行与计算机设备10000进行数据交互或者通信相关的控制和处理等。本实施例中,处理器10020用于运行存储器10010中存储的程序代码或者处理数据。
网络接口10030可包括无线网络接口或有线网络接口,该网络接口10030通常用于在计算机设备10000与其他计算机设备之间建立通信链接。例如,网络接口10030用于通过网络将计算机设备10000与外部终端相连,在计算机设备10000与外部终端之间的建立数据传输通道和通信链接等。网络可以是企业内部网(Intranet)、互联网(Internet)、全球移动通讯***(Global System of Mobile communication,简称为GSM)、宽带码分多址(Wideband Code Division Multiple Access,简称为WCDMA)、4G网络、5G网络、蓝牙(Bluetooth)、Wi-Fi等无线或有线网络。
需要指出的是,图15仅示出了具有部件10010-10030的计算机设备,但是应该理解的是,并不要求实施所有示出的部件,可以替代的实施更多或者更少的部件。
在本实施例中,存储于存储器10010中的基于多级缓存的数据处理方法还可以被分割为一个或者多个程序模块,并由一个或多个处理器(本实施例为处理器10020)所执行,以完成本申请实施例。
实施例四
本申请还提供一种计算机可读存储介质,用于服务器中,其中,所述服务器包括第一 级缓存和第二级缓存,所述第一级缓存包括多个第一类存储节点,所述第二级缓存包括多个第二类存储节点。计算机可读存储介质其上存储有计算机可读指令,计算机可读指令被处理器执行时实现以下步骤:
响应于读取请求,通过当前哈希环选择一个目标节点;
判断所述目标节点是否是所述多个第一类存储节点中的任意一个;及
若所述目标节点不是所述多个第一类存储节点中的任意一个,则从所述多个第二类存储节点中读取用于响应所述读取请求的响应数据,并返回所述响应数据。
本实施例中,计算机可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等。在一些实施例中,计算机可读存储介质可以是计算机设备的内部存储单元,例如该计算机设备的硬盘或内存。在另一些实施例中,计算机可读存储介质也可以是计算机设备的外部存储设备,例如该计算机设备上配备的插接式硬盘,智能存储卡(Smart Media Card,简称为SMC),安全数字(Secure Digital,简称为SD)卡,闪存卡(Flash Card)等。当然,计算机可读存储介质还可以既包括计算机设备的内部存储单元也包括其外部存储设备。本实施例中,计算机可读存储介质通常用于存储安装于计算机设备的操作***和各类应用软件,例如实施例中基于多级缓存的数据处理方法的程序代码等。此外,计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的各类数据。
显然,本领域的技术人员应该明白,上述的本申请实施例的各模块或各步骤可以用通用的计算装置来实现,它们可以集中在单个的计算装置上,或者分布在多个计算装置所组成的网络上,可选地,它们可以用计算装置可执行的程序代码来实现,从而,可以将它们存储在存储装置中由计算装置来执行,并且在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤,或者将它们分别制作成各个集成电路模块,或者将它们中的多个模块或步骤制作成单个集成电路模块来实现。这样,本申请实施例不限制于任何特定的硬件和软件结合。
需要说明的是,本申请技术方案主要/特别针对基于DASH流媒体的Web播放器的优化。另外,以上仅为本申请的优选实施例,并非因此限制本申请的专利保护范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种基于多级缓存的数据处理方法,用于服务器中,其中,所述服务器包括第一级缓存和第二级缓存,所述第一级缓存包括多个第一类存储节点,所述第二级缓存包括多个第二类存储节点;所述数据处理方法包括:
    响应于读取请求,通过当前哈希环选择一个目标节点;
    判断所述目标节点是否是所述多个第一类存储节点中的任意一个;及
    若所述目标节点不是所述多个第一类存储节点中的任意一个,则从所述多个第二类存储节点中读取用于响应所述读取请求的响应数据,并返回所述响应数据。
  2. 根据权利要求1所述的基于多级缓存的数据处理方法,其中,还包括:
    以预设频率更新所述当前哈希环。
  3. 根据权利要求2所述的基于多级缓存的数据处理方法,其中,所述当前哈希环为第一哈希环;
    所述以预设频率更新所述当前哈希环,包括:
    根据各个第一类存储节点的当前状态,将所述当前哈希环从所述第一哈希环更新为第二哈希环。
  4. 根据权利要求3所述的基于多级缓存的数据处理方法,其中,所述各个第一类存储节点的当前状态包括所述各个第一类存储节点的当前磁盘状态值;
    所述根据各个第一类存储节点的当前状态,将所述当前哈希环从所述第一哈希环更新为第二哈希环,包括:
    构造一个新哈希环;
    获取所述各个第一类存储节点的当前磁盘状态值;
    根据所述新哈希环和所述各个第一类存储节点的当前磁盘状态值,生成所述第二哈希环;及
    将所述当前哈希环从所述第一哈希环更新为所述第二哈希环。
  5. 根据权利要求4所述的基于多级缓存的数据处理方法,其中,
    所述获取所述各个第一类存储节点的当前磁盘状态值,包括:
    根据所述各个第一类存储节点的IO队列和请求延时,获取所述各个第一类存储节点的当前磁盘状态值。
  6. 根据权利要求5所述的基于多级缓存的数据处理方法,其中,所述根据所述各个第一类存储节点的IO队列和请求延时,获取所述各个第一类存储节点的当前磁盘状态值,包括:
    判断第i个存储节点的IO队列的数量是否大于第一预设值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量;
    若所述第i个存储节点的IO队列的数量大于所述第一预设值,则对所述第i个存储节点的当前磁盘状态值自减1;
    若所述第i个存储节点的IO队列的数量不大于所述第一预设值,则获取所述第i个存储节点的请求延时;
    若所述第i个存储节点的请求延时大于所述第二预设值,则对所述第i个存储节点的当 前磁盘状态值自减1;
    若所述第i个存储节点的请求延时不大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值增1。
  7. 根据权利要求4至6任意一项所述的基于多级缓存的数据处理方法,其中,所述根据所述新哈希环和所述各个第一类存储节点的当前磁盘状态值,生成所述第二哈希环,包括:
    根据第i个存储节点的当前磁盘状态值,将所述第i个存储节点的节点信息分别***到所述新哈希的N个位置处,所述N的值等于所述第i个存储节点的当前磁盘状态值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量。
  8. 根据权利要求1至7任意一项所述的基于多级缓存的数据处理方法,其中,所述服务器还包括内存;所述响应于读取请求,通过当前哈希环选择一个目标节点,包括:
    根据所述读取请求,确定所述内存中是否有所述响应数据;及
    如果所述内存中没有所述响应数据,则通过所述当前哈希环确定所述目标节点。
  9. 根据权利要求1至8任意一项所述的基于多级缓存的数据处理方法,其中,还包括:
    若所述目标节点是所述第一类存储节点,则确定所述目标节点中是否有所述响应数据;
    若所述目标节点中有所述响应数据,则从所述目标节点中读取所述响应数据;
    若所述目标节点中没有所述响应数据,则从所述第二级缓存中读取所述响应数据。
  10. 根据权利要求1至9任意一项所述的基于多级缓存的数据处理方法,其中:
    所述多个第一类存储节点对应为多个SSD磁盘;
    所述多个第二类存储节点对应为多个HDD磁盘。
  11. 一种基于多级缓存的数据处理***,用于服务器中,其中,所述服务器包括第一级缓存和第二级缓存,所述第一级缓存包括多个第一类存储节点,所述第二级缓存包括多个第二类存储节点;所述数据处理***包括:
    选择模块,用于响应于读取请求,通过当前哈希环选择一个目标节点;
    判断模块,用于判断所述目标节点是否是所述多个第一类存储节点中的任意一个;及
    返回模块,用于若所述目标节点不是所述多个第一类存储节点中的任意一个,则从所述多个第二类存储节点中读取用于响应所述读取请求的响应数据,并返回所述响应数据。
  12. 根据权利要求11所述的基于多级缓存的数据处理***,其中,所述***还包括哈希环更新模块,用于:
    以预设频率更新所述当前哈希环。
  13. 根据权利要求12所述的基于多级缓存的数据处理***,其中,所述当前哈希环为第一哈希环;
    所述哈希环更新模块还用于:
    根据各个第一类存储节点的当前状态,将所述当前哈希环从所述第一哈希环更新为第二哈希环。
  14. 根据权利要求13所述的基于多级缓存的数据处理***,其中,所述各个第一类存储节点的当前状态包括所述各个第一类存储节点的当前磁盘状态值;
    所述哈希环更新模块还用于:
    构造一个新哈希环;
    获取所述各个第一类存储节点的当前磁盘状态值;
    根据所述新哈希环和所述各个第一类存储节点的当前磁盘状态值,生成所述第二哈希环;及
    将所述当前哈希环从所述第一哈希环更新为所述第二哈希环。
  15. 根据权利要求14所述的基于多级缓存的数据处理***,其中,所述哈希环更新模块还用于:
    根据所述各个第一类存储节点的IO队列和请求延时,获取所述各个第一类存储节点的当前磁盘状态值。
  16. 根据权利要求15所述的基于多级缓存的数据处理***,其中,所述哈希环更新模块还用于:
    判断第i个存储节点的IO队列的数量是否大于第一预设值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量;
    若所述第i个存储节点的IO队列的数量大于所述第一预设值,则对所述第i个存储节点的当前磁盘状态值自减1;
    若所述第i个存储节点的IO队列的数量不大于所述第一预设值,则获取所述第i个存储节点的请求延时;
    若所述第i个存储节点的请求延时大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值自减1;
    若所述第i个存储节点的请求延时不大于所述第二预设值,则对所述第i个存储节点的当前磁盘状态值增1。
  17. 根据权利要求14至16任意一项所述的基于多级缓存的数据处理***,其中,所述哈希环更新模块还用于:
    根据第i个存储节点的当前磁盘状态值,将所述第i个存储节点的节点信息分别***到所述新哈希的N个位置处,所述N的值等于所述第i个存储节点的当前磁盘状态值;其中,所述第i个存储节点为所述多个第一类存储节点中任意一个存储节点,1≤i≤M,i为整数,M为所述多个第一类存储节点的数量。
  18. 根据权利要求11至17任意一项所述的基于多级缓存的数据处理***,其中,所述服务器还包括内存;所述选择模块还用于:
    根据所述读取请求,确定所述内存中是否有所述响应数据;及
    如果所述内存中没有所述响应数据,则通过所述当前哈希环确定所述目标节点。
  19. 一种计算机设备,所述计算机设备包括存储器、处理器以及存储在存储器上并可在处理器上运行的计算机可读指令,其中,所述处理器执行所述计算机可读指令时用于实现权利要求1至10中任意一项所述的基于多级缓存的数据处理方法的步骤。
  20. 一种计算机可读存储介质,其中,所述计算机可读存储介质内存储有计算机可读指令,所述计算机可读指令可被至少一个处理器所执行,以使所述至少一个处理器执行权利要求1至10中任意一项所述的基于多级缓存的数据处理方法的步骤。
PCT/CN2022/098669 2021-08-20 2022-06-14 基于多级缓存的数据处理方法及*** WO2023020085A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110963175.2 2021-08-20
CN202110963175.2A CN113672524B (zh) 2021-08-20 基于多级缓存的数据处理方法及***

Publications (1)

Publication Number Publication Date
WO2023020085A1 true WO2023020085A1 (zh) 2023-02-23

Family

ID=78544932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/098669 WO2023020085A1 (zh) 2021-08-20 2022-06-14 基于多级缓存的数据处理方法及***

Country Status (1)

Country Link
WO (1) WO2023020085A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117785949A (zh) * 2024-02-28 2024-03-29 云南省地矿测绘院有限公司 一种数据缓存方法、电子设备、存储介质以及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170318114A1 (en) * 2016-05-02 2017-11-02 Netapp, Inc. Methods for managing multi-level flash storage and devices thereof
CN107707631A (zh) * 2017-09-18 2018-02-16 北京潘达互娱科技有限公司 数据获取方法及装置
CN110096227A (zh) * 2019-03-28 2019-08-06 北京奇艺世纪科技有限公司 数据存储方法、数据处理方法、装置、电子设备及计算机可读介质
CN110858201A (zh) * 2018-08-24 2020-03-03 阿里巴巴集团控股有限公司 数据处理方法及***、处理器、存储介质
CN112162987A (zh) * 2020-10-12 2021-01-01 北京字跳网络技术有限公司 数据处理方法、装置、设备及存储介质
CN112486672A (zh) * 2020-11-17 2021-03-12 中国人寿保险股份有限公司 一种服务内存缓存调用方法和装置
CN113672524A (zh) * 2021-08-20 2021-11-19 上海哔哩哔哩科技有限公司 基于多级缓存的数据处理方法及***

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170318114A1 (en) * 2016-05-02 2017-11-02 Netapp, Inc. Methods for managing multi-level flash storage and devices thereof
CN107707631A (zh) * 2017-09-18 2018-02-16 北京潘达互娱科技有限公司 数据获取方法及装置
CN110858201A (zh) * 2018-08-24 2020-03-03 阿里巴巴集团控股有限公司 数据处理方法及***、处理器、存储介质
CN110096227A (zh) * 2019-03-28 2019-08-06 北京奇艺世纪科技有限公司 数据存储方法、数据处理方法、装置、电子设备及计算机可读介质
CN112162987A (zh) * 2020-10-12 2021-01-01 北京字跳网络技术有限公司 数据处理方法、装置、设备及存储介质
CN112486672A (zh) * 2020-11-17 2021-03-12 中国人寿保险股份有限公司 一种服务内存缓存调用方法和装置
CN113672524A (zh) * 2021-08-20 2021-11-19 上海哔哩哔哩科技有限公司 基于多级缓存的数据处理方法及***

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117785949A (zh) * 2024-02-28 2024-03-29 云南省地矿测绘院有限公司 一种数据缓存方法、电子设备、存储介质以及装置
CN117785949B (zh) * 2024-02-28 2024-05-10 云南省地矿测绘院有限公司 一种数据缓存方法、电子设备、存储介质以及装置

Also Published As

Publication number Publication date
CN113672524A (zh) 2021-11-19

Similar Documents

Publication Publication Date Title
CN109947668B (zh) 存储数据的方法和装置
AU2015229200B2 (en) Coordinated admission control for network-accessible block storage
EP2822236B1 (en) Network bandwidth distribution method and terminal
US20170193416A1 (en) Reducing costs related to use of networks based on pricing heterogeneity
US20190342418A1 (en) Efficient High Availability and Storage Efficiency in a Multi-Site Object Storage Environment
EP3128420A1 (en) Service flow control method, controller and system in object-based storage system
US7849167B2 (en) Dynamic distributed adjustment of maximum use of a shared storage resource
US10091126B2 (en) Cloud system, control method thereof, management server and control method thereof
WO2020019743A1 (zh) 流量控制方法及装置
US9207983B2 (en) Methods for adapting application services based on current server usage and devices thereof
US10250673B1 (en) Storage workload management using redirected messages
WO2017124972A1 (zh) 一种资源缓存管理方法及***和装置
US11281511B2 (en) Predictive microservice systems and methods
WO2014028234A1 (en) Virtual desktop policy control
WO2023020085A1 (zh) 基于多级缓存的数据处理方法及***
CN112148430A (zh) 一种虚拟网络功能的虚拟机在线安全迁移的方法
US11500577B2 (en) Method, electronic device, and computer program product for data processing
EP3369238B1 (en) Method, apparatus, computer-readable medium and computer program product for cloud file processing
US20220086097A1 (en) Stream allocation using stream credits
US10681110B2 (en) Optimized stream management
CN112600761A (zh) 一种资源分配的方法、装置及存储介质
CN110727738A (zh) 基于数据分片的全局路由***、电子设备及存储介质
US20130239114A1 (en) Fine Grained Adaptive Throttling of Background Processes
US11354164B1 (en) Robotic process automation system with quality of service based automation
CN114265648B (zh) 编码调度方法、服务器及客户端和获取远程桌面的***

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22857398

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE