WO2019091322A1 - Virtual machine snapshot processing method, apparatus and system - Google Patents

Virtual machine snapshot processing method, apparatus and system

Info

Publication number
WO2019091322A1
WO2019091322A1 (PCT/CN2018/113335)
Authority
WO
WIPO (PCT)
Prior art keywords
disk
snapshot
virtual machine
block
disk block
Prior art date
Application number
PCT/CN2018/113335
Other languages
English (en)
French (fr)
Inventor
佘海斌
鲁振伟
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Publication of WO2019091322A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/445: Program loading or initiating
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Definitions

  • The present application relates to the field of storage technologies, and in particular, to a virtual machine snapshot processing method, apparatus, and system.
  • A virtual machine (VM) is a complete computer system, simulated by software, that has full hardware system functionality and runs in a completely isolated environment.
  • A VM runs on a host in an independent environment, which helps improve host security and makes it possible to run multiple operating systems on a single host, so VMs are increasingly popular.
  • Thanks to these advantages of VMs, more and more Internet applications are starting to run on VMs.
  • The traffic of Internet applications fluctuates greatly; when a traffic peak arrives, a large number of VMs must be started in a short time.
  • A VM boots from a disk snapshot that contains the operating system and other data needed to run the VM. Disk snapshots are typically stored in a snapshot center.
  • When a disk block needs to be read during VM boot, it is first read from the VM's disk; if it is not found there, it is requested from the snapshot center. When a large number of VMs start in a short period, the snapshot center may need to handle a large number of concurrent requests, putting it under heavy concurrency pressure.
  • Aspects of the present application provide a virtual machine snapshot processing method, apparatus, and system for relieving the concurrency pressure on the snapshot center.
  • An embodiment of the present application provides a virtual machine snapshot processing method, including:
  • receiving a read disk request issued by a first virtual machine during startup, where the read disk request requests a first disk block in the disk snapshot required for the first virtual machine to boot;
  • querying the first disk block in the disk of the first virtual machine and in a snapshot cache, respectively, where the snapshot cache stores the disk blocks in the disk snapshot whose frequency of use meets a set requirement;
  • when the first disk block is found in the disk of the first virtual machine or in the snapshot cache, returning the first disk block to the first virtual machine.
  • An embodiment of the present application further provides a virtual machine snapshot processing apparatus, configured for:
  • receiving a read disk request issued by a first virtual machine during startup, where the read disk request requests a first disk block in the disk snapshot required for the first virtual machine to boot;
  • querying the first disk block in the disk of the first virtual machine and in a snapshot cache, respectively, where the snapshot cache stores the disk blocks in the disk snapshot whose frequency of use meets a set requirement;
  • when the first disk block is found in the disk of the first virtual machine or in the snapshot cache, returning the first disk block to the first virtual machine.
  • An embodiment of the present application further provides an electronic device, including a memory and a processor;
  • the memory is configured to store a program;
  • the processor, coupled to the memory, is configured to execute the program to:
  • receive a read disk request issued by a first virtual machine during startup, where the read disk request requests a first disk block in the disk snapshot required for the first virtual machine to boot;
  • query the first disk block in the disk of the first virtual machine and in a snapshot cache, respectively, where the snapshot cache stores the disk blocks in the disk snapshot whose frequency of use meets a set requirement;
  • when the first disk block is found in the disk of the first virtual machine or in the snapshot cache, return the first disk block to the first virtual machine.
  • The present application further provides a cloud computing system, including a computing cluster, a storage cluster, and a snapshot center;
  • the computing cluster is configured to provide computing resources for a first virtual machine, where the first virtual machine runs in the computing cluster;
  • the storage cluster is configured to provide a disk for the first virtual machine and a snapshot cache, where the snapshot cache stores the disk blocks, in the disk snapshot required for the first virtual machine to boot, whose frequency of use meets a set requirement;
  • the snapshot center is configured to store the disk snapshot;
  • the storage cluster includes a storage management device, and the storage management device is configured to:
  • receive a read disk request issued by the first virtual machine during startup, where the read disk request requests a first disk block in the disk snapshot;
  • query the first disk block in the disk of the first virtual machine and in the snapshot cache, respectively; and, when the first disk block is found in the disk of the first virtual machine or in the snapshot cache, return the first disk block to the first virtual machine.
  • In the embodiments of the present application, a snapshot cache is added for the virtual machine and stores the disk blocks, in the disk snapshot required during virtual machine startup, whose frequency of use meets a set requirement. On this basis, when a read disk request issued by the virtual machine during startup is received,
  • the disk block requested by the virtual machine is queried in the virtual machine's disk and in the snapshot cache, and is returned to the virtual machine when it is found in either. Because the snapshot cache stores the disk blocks of the disk snapshot whose frequency of use meets the set requirement, the probability that a read disk request hits the required disk block is increased and the probability of requesting the disk block from the snapshot center is reduced, thereby relieving the overall concurrency pressure on the snapshot center.
  • FIG. 1 is a schematic diagram of an exemplary storage-compute separated cloud computing architecture according to an exemplary embodiment of the present application;
  • FIG. 2a is a schematic diagram of a process in which an IO thread and a Lazyload thread process disk blocks during the startup of a first VM according to another exemplary embodiment of the present application;
  • FIG. 2b is a schematic diagram of another process in which an IO thread and a Lazyload thread process disk blocks during the startup of a first VM according to yet another exemplary embodiment of the present application;
  • FIG. 3a is a schematic flowchart of a virtual machine snapshot processing method according to yet another exemplary embodiment of the present application;
  • FIG. 3b is a schematic flowchart of a virtual machine snapshot processing method according to yet another exemplary embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of a virtual machine snapshot processing apparatus according to yet another embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of an electronic device according to yet another embodiment of the present application.
  • To address the problem in the prior art that the snapshot center is under heavy concurrency pressure, the embodiments of the present application provide a solution.
  • Its main principle is: add a snapshot cache for the VM that stores the disk blocks, in the disk snapshot required during VM startup, whose frequency of use meets a set requirement.
  • On this basis, when a read disk request issued by the VM during startup is received, the disk block requested by the VM is queried in the VM's disk and in the snapshot cache, and is returned to the VM
  • when it is found in the VM's disk or in the snapshot cache.
  • Because the snapshot cache stores the disk blocks of the disk snapshot whose frequency of use meets the set requirement, the probability that a read disk request hits the required disk block is increased and the probability of requesting the disk block from the snapshot center is reduced, thereby relieving the overall concurrency pressure on the snapshot center.
  • FIG. 1 is a schematic diagram of a storage-compute separated cloud computing architecture according to an exemplary embodiment of the present application.
  • As shown in FIG. 1, the architecture 10 includes a computing cluster 101, a storage cluster (EBS) 102, and a snapshot center 103.
  • There may be one or more computing clusters 101.
  • There may also be one or more storage clusters 102.
  • There is generally one snapshot center 103, but it is not limited to one.
  • The computing cluster 101 includes multiple compute nodes.
  • A compute node is a device with certain computing capability and generally includes components such as a processor, memory, and an Ethernet controller.
  • For example, a compute node can be a personal computer, a desktop computer, or a server.
  • The compute nodes in the computing cluster 101 can be devices of the same type or of different types.
  • The computing cluster 101 is primarily responsible for providing computing resources to VMs, which can run on the compute nodes in the computing cluster 101.
  • The storage cluster 102 includes multiple storage devices.
  • A storage device is a device with certain storage capability and generally includes a processor, a system bus, and at least one physical storage medium such as a hard disk or memory.
  • For example, a storage device can be a server, a desktop computer, a personal computer, a mobile phone, a tablet, or a database.
  • A server can be a conventional server, a cloud server, a cloud host, a virtual center, and the like.
  • The storage devices in the storage cluster 102 may be of the same type or of different types.
  • In addition to the storage devices, the storage cluster 102 also includes a storage management device.
  • The storage management device is primarily responsible for the storage management logic of the storage cluster 102. In deployment, it can be deployed separately from the storage devices, or it can be deployed on one or some of the storage devices.
  • The storage cluster 102 is mainly responsible for providing storage resources to VMs, for example providing a disk (VM disk) for each VM; that is, the disks required by the VMs running in the computing cluster 101 are located in the storage cluster 102, implementing a storage-compute separated architecture.
  • The snapshot center 103 is mainly responsible for storing the disk snapshots required for VMs to boot; a disk snapshot includes the operating system and other data required during VM startup, but is not limited thereto. Optionally, the snapshot center 103 may use object storage, which has a lower storage cost.
  • Multiple VMs can run on the compute nodes of the computing cluster 101.
  • Some VMs use the same disk snapshot, and some use different disk snapshots.
  • The snapshot center 103 stores the disk snapshots that each VM needs to use.
  • For different VMs, regardless of whether they use the same disk snapshot, the boot process that relies on a disk snapshot is similar.
  • For ease of description, this embodiment takes a first VM as an example.
  • The first VM can be any VM running on a compute node.
  • When a user needs to run the first VM, a disk can be created for the first VM in the storage cluster 102.
  • For example, the user can input a disk creation instruction to the storage management device through a command interface provided by the storage management device.
  • Disk creation instructions vary with the storage system.
  • According to the disk creation instruction, the storage management device can create a disk for the first VM on one or some of the storage devices.
  • During the first VM's startup, the first VM's disk is mainly used to store the disk snapshot required for that startup.
  • In this embodiment, the disk snapshot includes multiple disk blocks, and each disk block is assigned an index number (Index) in order.
  • A disk block is the smallest storage unit of the disk snapshot and also the smallest loading unit.
  • Different disk blocks may be of the same or different sizes. Where disk blocks are of the same size, the size is not limited; for example, a disk block may be 200 MB, 300 MB, or 500 MB.
  • During startup, the first VM can issue a read disk request to the storage management device.
  • According to the read disk request, the storage management device queries the first VM's disk for the disk block requested by the first VM.
  • When the disk block is found in the first VM's disk, it is returned to the first VM; when it is not found there, it is requested from the snapshot center 103.
  • When a large number of VMs start in a short time, the storage management device must request disk blocks for each VM from the snapshot center 103,
  • so the concurrency pressure on the snapshot center 103 is large.
  • To relieve this pressure, in this embodiment the storage management device selects a storage space in the storage cluster 102 as a snapshot cache.
  • The snapshot cache is mainly used to store disk blocks whose frequency of use in the disk snapshot meets a set requirement.
  • The disk blocks whose frequency of use meets the set requirement may be the several most frequently used disk blocks, several disk blocks whose frequency of use exceeds a set frequency threshold, or several disk blocks whose frequency of use falls within a certain interval.
  • The frequency of use of a disk block may be its frequency of use within a certain period of time, such as the last week or the most recent month.
  • In this way, the probability that a disk block stored in the snapshot cache is hit while the first VM reads the disk is relatively high, for example above a set probability threshold.
  • During startup, the first VM may issue a read disk request to the storage management device; the request carries the index number of the disk block to be read.
  • For ease of description, the disk block requested by the read disk request is referred to as the first disk block; the first disk block is a disk block in the disk snapshot required for the first VM to boot.
  • After receiving the read disk request issued by the first VM during startup, the storage management device can query the first disk block in the first VM's disk and in the snapshot cache, respectively.
  • When the first disk block is found in the first VM's disk or in the snapshot cache, it is returned to the first VM without being requested from the snapshot center 103.
  • When the first disk block is found in neither the first VM's disk nor the snapshot cache, it is requested from the snapshot center 103, and the first disk block returned by the snapshot center 103 is stored into the first VM's disk for the first VM to use.
  • In this way, the probability that the first VM hits the required disk block directly in the storage cluster 102 when reading the disk is increased to some extent, which reduces the probability of requesting disk blocks from the snapshot center 103 and relieves the concurrency pressure on the snapshot center 103.
  • Both the snapshot cache and the first VM's disk reside in the storage space of the storage cluster 102.
  • The performance of the storage management device when querying the snapshot cache and when querying the first VM's disk is basically the same,
  • so the order in which the snapshot cache and the first VM's disk are queried is not limited.
  • For example, the snapshot cache can be queried first, and the first VM's disk queried when the first disk block is not found in the snapshot cache.
  • Alternatively, the first VM's disk may be queried first, and the snapshot cache queried when the first disk block is not found in the first VM's disk.
  • Preferably, because the first VM's disk is exclusive to the first VM while the snapshot cache is shared by all VMs, the first VM's disk may be queried first; when the first disk block is found in the first VM's disk,
  • the snapshot cache does not need to be queried at all, which reduces the pressure on the snapshot cache.
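To make the lookup order concrete, the following is a minimal, illustrative sketch of the read path described above. It is not the patent's implementation; the class name, member names (vm_disk, snapshot_cache, snapshot_center), and the fetch interface are assumptions introduced purely for illustration.

```python
# Minimal sketch of the read path: query the VM's exclusive disk first, then
# the shared snapshot cache, and fall back to the snapshot center only when
# both miss. All names here are hypothetical.

class ReadPath:
    def __init__(self, vm_disk, snapshot_cache, snapshot_center):
        self.vm_disk = vm_disk                  # dict: index -> block, exclusive to this VM
        self.snapshot_cache = snapshot_cache    # dict: index -> block, shared by all VMs
        self.snapshot_center = snapshot_center  # authoritative store of the full disk snapshot

    def read_block(self, index):
        # 1. The VM's own disk is queried first, sparing the shared cache.
        block = self.vm_disk.get(index)
        if block is not None:
            return block
        # 2. On a disk miss, try the snapshot cache of frequently used blocks.
        block = self.snapshot_cache.get(index)
        if block is not None:
            return block
        # 3. Only on a double miss is the snapshot center contacted; the
        #    returned block is stored in the VM's disk for subsequent reads.
        block = self.snapshot_center.fetch(index)
        self.vm_disk[index] = block
        return block
```

The disk-first order matches the preferred embodiment: the shared snapshot cache is only consulted when the VM's exclusive disk misses, and the snapshot center only when both miss.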
  • In an optional embodiment, before the first VM is started, the storage management device may derive the frequency of use of each disk block in the disk snapshot from how other VMs used the disk blocks during their startup,
  • and store the disk blocks whose frequency of use meets the set requirement into the snapshot cache.
  • Other VMs refer to VMs that use the same disk snapshot as the first VM and are started before the first VM.
  • In another optional embodiment, the storage management device may track the usage of the disk blocks in the disk snapshot by the first VM, by other VMs currently starting, and by other VMs started before the first VM,
  • derive the frequency of use of each disk block, and update the disk blocks in the snapshot cache in real time according to those frequencies.
  • In other words, the disk blocks stored in the snapshot cache change dynamically.
  • For example, the storage management device can count each VM's usage of each disk block in the disk snapshot over the most recent period of time and store the N most frequently used disk blocks into the snapshot cache.
  • N is a preset value and a positive integer.
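As a rough sketch of the top-N selection just described, the snippet below counts block reads over a recent window and keeps the N most frequently used blocks. The counter, the rebuild trigger, and all names are illustrative assumptions, not details from the patent.

```python
from collections import Counter

# Hypothetical helper that keeps the snapshot cache filled with the N most
# frequently used disk blocks, so the cache contents change dynamically.

class CacheUpdater:
    def __init__(self, snapshot_center, n_blocks):
        self.usage = Counter()                  # index -> reads in the recent window
        self.snapshot_center = snapshot_center  # assumed to expose fetch(index)
        self.n_blocks = n_blocks                # preset positive integer N

    def record_read(self, index):
        # Called whenever any VM's boot reads a block of this disk snapshot.
        self.usage[index] += 1

    def rebuild_cache(self):
        # Keep the N most frequently used blocks; a real system would call
        # this periodically and also reset or age the counters per window.
        hot = [idx for idx, _ in self.usage.most_common(self.n_blocks)]
        return {idx: self.snapshot_center.fetch(idx) for idx in hot}
```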
  • During the first VM's startup, the storage management device may, in addition to providing disk blocks in response to the first VM's read disk requests, lazily load (Lazyload) the disk snapshot into the first VM's disk at disk block granularity. These two operations can be performed in parallel.
  • For example, the storage management device can start two threads: a Lazyload thread and an IO thread.
  • The Lazyload thread is mainly responsible for loading the disk blocks in the disk snapshot into the first VM's disk in the disk block loading order, in the Lazyload manner.
  • The IO thread is primarily responsible for processing the read disk requests issued by the first VM.
  • During startup, the first VM issues a read disk request to the IO thread (read disk request 1 in FIG. 2a); the read disk request carries the index number of the disk block to be read.
  • The IO thread queries the first VM's disk (read disk block 2 in FIG. 2a) and the snapshot cache (read disk block 3 in FIG. 2a) in turn.
  • When the disk block identified by the index number is found in neither, the IO thread instructs the Lazyload thread to request the disk block identified by that index number from the snapshot center 103 (read disk block 5 in FIG. 2a).
  • Normally, the Lazyload thread loads the disk blocks of the disk snapshot into the first VM's disk in the disk block loading order, in the Lazyload manner.
  • When instructed by the IO thread, the Lazyload thread can preferentially read the disk block identified by the index number from the snapshot center 103 and store it in the first VM's disk (write disk block 6 in FIG. 2a) so that the first
  • VM can use the disk block as soon as possible.
  • Afterwards, the Lazyload thread can continue to load the remaining disk blocks of the disk snapshot into the first VM's disk in the disk block loading order, in the Lazyload manner.
  • In FIG. 2a, the snapshot cache contains disk blocks d1, d2, ..., dk,
  • and the snapshot center contains disk blocks d1, d2, ..., dn; this is only an example and is not limiting.
  • With the snapshot cache in place, the hit rate of the IO thread is greatly improved and the VM's dependence on the Lazyload thread is reduced, so the Lazyload thread can be slowed down appropriately, minimizing (or even eliminating) its impact on the storage cluster 102 and the snapshot center 103.
  • In an optional embodiment, the Lazyload thread loads disk blocks in the disk block loading order, in the Lazyload manner, as follows: in the disk block loading order, the Lazyload thread first queries the snapshot cache for the disk block that needs to be loaded into the first VM's
  • disk (read disk block 7 in FIG. 2a).
  • When the disk block that needs to be loaded into the first VM's disk is not found in the snapshot cache, it is requested from the snapshot center 103 (read
  • disk block 5 in FIG. 2a), and the disk block returned by the snapshot center 103 is stored into the first VM's disk (write disk block 6 in FIG. 2a).
  • When the disk block that needs to be loaded into the first VM's disk is found in the snapshot cache, it is read from the snapshot cache and stored into the first VM's
  • disk (write disk block 6 in FIG. 2a). It can be seen that whenever a disk block to be loaded can be read from the snapshot cache, a request for that disk block to the snapshot center 103 is avoided, which further relieves the pressure on the snapshot center 103.
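The following sketch illustrates how a Lazyload worker of this kind might interleave the normal loading order with blocks the IO thread urgently needs. The queue-based signalling and all names are assumptions for illustration only, not the patent's implementation.

```python
import queue

# Hypothetical Lazyload worker: walks the disk block loading order, prefers
# the snapshot cache over the snapshot center, and serves blocks the IO
# thread missed on ahead of the normal order.

def lazyload_worker(load_order, vm_disk, cache, center, priority_q):
    """priority_q: queue.Queue of block indexes the IO thread missed on."""
    for index in load_order:
        # Blocks the IO thread urgently needs jump to the front of the work.
        while True:
            try:
                urgent = priority_q.get_nowait()
            except queue.Empty:
                break
            if urgent not in vm_disk:
                vm_disk[urgent] = center.fetch(urgent)
        if index in vm_disk:
            continue  # already loaded, e.g. as an urgent request
        # Prefer the shared snapshot cache; only cache misses reach the center.
        block = cache.get(index)
        vm_disk[index] = block if block is not None else center.fetch(index)

# The IO thread signals a double miss (disk and cache both missed) simply by:
#     priority_q.put(missed_index)
```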
  • In an optional embodiment, before loading, a disk block priority list may be read from the snapshot cache or the snapshot center 103; the disk block priority list records the
  • order in which the first VM uses the disk blocks in the disk snapshot, and this order is taken as the disk block loading order.
  • Taking the order in which the first VM uses the disk blocks as the order in which the Lazyload thread loads them means that the disk blocks used earliest during the first VM's startup are loaded into the first VM's disk first, which helps the first VM start quickly.
  • One way to obtain the disk block priority list includes: collecting statistics, in advance, on the first VM's usage of each disk block in the disk snapshot during historical startups to obtain the order in which the first VM uses the disk blocks; storing the index numbers of the disk blocks into the disk block priority list in that order; and storing the disk block priority list in the snapshot cache or the snapshot center 103 for the Lazyload thread to use.
  • One way to collect the first VM's usage of each disk block during a historical startup includes: recording the read disk requests issued by the first VM during the historical startup and the order of the disk blocks they request, and then storing the index numbers of the requested disk blocks into the disk block priority list in the order of the read disk requests.
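A possible way to realize this statistics step is sketched below: the read disk requests recorded during a historical startup are de-duplicated in arrival order to yield the disk block priority list. The function and variable names are hypothetical.

```python
# Build the disk block priority list from a recorded historical boot:
# requests are kept in arrival order and de-duplicated, so the list holds
# each block's index once, in first-use order.

def build_priority_list(historical_read_requests):
    """historical_read_requests: iterable of disk block index numbers, in the
    order the first VM issued read disk requests during a historical startup."""
    priority_list, seen = [], set()
    for index in historical_read_requests:
        if index not in seen:       # record only the first use of each block
            seen.add(index)
            priority_list.append(index)
    return priority_list

# Example: requests [7, 0, 7, 3, 0, 12] yield the loading order [7, 0, 3, 12].
```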
  • The foregoing mainly describes the case where the first VM reads the disk during startup.
  • In addition to reads, the first VM also writes to disk blocks during startup,
  • for example to modify some data.
  • These write operations may involve disk blocks that already exist in the first VM's disk, and may also involve disk blocks that do not yet exist there.
  • For a disk block that does not yet exist in the first VM's disk, one option is to first load the disk block into the first VM's disk and then perform the write operation on it, but this is inefficient.
  • Instead, in this embodiment, the storage management device creates a sparse file for the first VM's disk; the sparse file is mainly used to temporarily store incremental data of certain disk blocks of the first VM's disk.
  • The size of the sparse file is the same as the size of the first VM's disk.
  • During startup, the first VM may issue a write disk request to the storage management device; the request carries the index number of the disk block on which a write operation needs to be performed.
  • For ease of description, the disk block involved in a write disk request is referred to as the second disk block.
  • The storage management device receives the write disk request and determines, according to it, whether the second disk block exists in the first VM's disk. When the first VM's disk contains the second disk block, the write operation is performed directly in the second disk block; when it does not, the incremental data of the write disk request
  • is written into the sparse file corresponding to the first VM's disk.
  • In this way, the write operation completes without waiting for the disk block to be loaded, which improves write efficiency.
  • Optionally, a bitmap file can also be used to record the location of the incremental data in the sparse file.
  • Each bit in the bitmap file corresponds to a sector of the sparse file and records that sector's usage status: if incremental data is stored in a sector, the corresponding bit in the bitmap file is marked valid, for example set to 1. On this basis, after the incremental data of a write disk request is written into the sparse file corresponding to the first VM's disk, the location of that incremental data in the sparse file can be recorded in the bitmap file corresponding to the sparse file, that is, the bits corresponding to the affected sectors are marked valid.
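The write path and bitmap bookkeeping described above might look roughly like the following sketch. The 512-byte sector size, the bytearray-backed "files", and the helper names are assumptions made purely for illustration.

```python
# Hypothetical sparse-file overlay with a bitmap marking used sectors.

SECTOR_SIZE = 512

class SparseOverlay:
    def __init__(self, disk_size):
        self.sparse = bytearray(disk_size)                 # same size as the VM's disk
        self.bitmap = bytearray(disk_size // SECTOR_SIZE)  # one flag per sector

    def write_delta(self, offset, data):
        # Incremental data for a block absent from the VM's disk goes into the
        # sparse file immediately; no need to wait for the block to be loaded.
        self.sparse[offset:offset + len(data)] = data
        # Mark every sector the write touched as valid (set to 1) in the bitmap.
        first = offset // SECTOR_SIZE
        last = (offset + len(data) - 1) // SECTOR_SIZE
        for sector in range(first, last + 1):
            self.bitmap[sector] = 1

    def has_delta(self, offset, length):
        # The bitmap answers "is there incremental data here?" without
        # scanning the sparse file itself.
        first = offset // SECTOR_SIZE
        last = (offset + length - 1) // SECTOR_SIZE
        return any(self.bitmap[first:last + 1])
```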
  • The foregoing describes the snapshot-cache-based read disk process and the sparse-file-based write disk process separately. Snapshot caches and sparse files can be used alone or in combination. The following mainly describes how the storage management device processes disk blocks during the first VM's startup using the snapshot cache and the sparse file in combination.
  • In an optional embodiment, after receiving a read disk request, the storage management device may first query the first VM's disk for the first disk block requested by the first VM (read
  • disk block 2 in FIG. 2b). When the first disk block is found in the first VM's disk, the sparse file corresponding to the first VM's disk is further queried for incremental data of the first disk block (read delta data 5 in FIG. 2b). If no incremental data of the first disk block is found in the sparse file, the queried first disk block is returned to the first VM. If incremental data of the first disk block is found in the sparse file, the first disk block and its incremental data are merged to obtain merged disk block data, and the merged disk block data is returned to the first VM.
  • When the first disk block is not found in the first VM's disk, the snapshot cache is queried for it (read disk block 1 in FIG. 2b). When the first disk block is found in the snapshot cache, the sparse file corresponding to the first VM's disk is further queried for incremental data of the first disk block (read delta data 5 in FIG. 2b). If no incremental data of the first disk block is found in the sparse file, the queried first disk block is returned to the first VM. If incremental data of the first disk block is found in the sparse file, the first disk block and its incremental data are merged to obtain merged disk block data, and the merged disk block data is returned to the first VM.
  • When the first disk block is found in neither the first VM's disk nor the snapshot cache,
  • the storage management device (e.g., its Lazyload thread) requests the first disk block from the snapshot center 103 (read disk block 6 in FIG. 2b)
  • and queries the sparse file corresponding to the first VM's disk for incremental data of the first disk block (read delta data 5 in FIG. 2b). If no incremental data of the first disk block is found in the sparse file, the first disk block returned by the snapshot center 103 is stored into the first VM's disk for the first VM to use. If incremental data of the first disk block is found in the sparse file, the first disk block and its incremental data are merged to obtain merged disk block data, and the merged disk block data is stored into the first VM's disk.
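The merge of a base disk block with its incremental data, used in each of the branches above, can be sketched as follows. Representing the sparse file's incremental data as (offset, bytes) pairs is an assumption for illustration, not the patent's data layout.

```python
# Merge a base disk block with its incremental data before the block is
# returned to the VM or stored into the VM's disk.

def merge_block(base_block, deltas):
    """base_block: bytes of the block as read from disk, cache, or center.
    deltas: list of (offset_within_block, bytes) taken from the sparse file."""
    merged = bytearray(base_block)
    for offset, data in deltas:
        merged[offset:offset + len(data)] = data  # newer data overwrites base
    return bytes(merged)

# If the sparse file holds no incremental data for the block, deltas is empty
# and the base block is returned unchanged, matching the behavior above.
```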
  • During the first VM's startup, the storage management device can also load disk blocks into the first VM's disk in the disk block loading order, in the Lazyload manner.
  • Optionally, this process includes: in the disk block loading order, querying the snapshot cache for the disk block that needs to be loaded into the first VM's disk (read disk block 1 in FIG. 2b).
  • When the disk block that needs to be loaded is not found in the snapshot cache, it is requested from the snapshot center 103 (read
  • disk block 6 in FIG. 2b), and the sparse file is queried for incremental data of that disk block (read delta data 5 in FIG. 2b). If no incremental data is found in the sparse file, the disk block returned by the snapshot center 103 is stored into the first VM's disk (write disk block 3 in FIG. 2b). If incremental data is found in the sparse file, the disk block returned by the snapshot center 103 is merged with that incremental data to form new disk block data, and the new disk block data is stored into the first VM's disk (write disk block 3 in FIG. 2b).
  • When the disk block that needs to be loaded is found in the snapshot cache, the sparse file is queried for its incremental data (read delta data 5 in FIG. 2b). If no incremental data is found in the sparse file, the disk block read from the snapshot cache is stored into the first VM's disk (write disk block 3 in FIG. 2b).
  • If incremental data is found in the sparse file, the disk block read from the snapshot cache is merged with that incremental data to form new disk block data, and the new disk block data is stored into the first VM's disk (write disk block 3 in FIG. 2b).
  • During this process, the first VM writes to the disk in the same way as in the sparse-file-based write disk process described above (write delta data 4 in FIG. 2b), and the details are not repeated here.
  • In FIG. 2b, the snapshot cache contains disk blocks d1, d2, ..., dk;
  • the snapshot center contains disk blocks d1, d2, ..., dn; the VM disk contains disk blocks
  • d1, d2, ..., dj; and the sparse file contains incremental data d1, d2, ..., dm. This is only an example and is not limiting.
  • Incremental data and a disk block denoted by the same symbol belong together; for example, incremental data d1 is the incremental data of disk block d1, incremental data d2 is the incremental data of disk block d2, and so on.
  • FIG. 3a is a schematic flowchart of a virtual machine snapshot processing method according to an exemplary embodiment of the present application. The method is mainly performed by a storage management device. As shown in FIG. 3a, the method includes:
  • 301: Receive a read disk request issued by a first VM during startup, where the read disk request requests a first disk block in the disk snapshot required for the first VM to boot.
  • 302: Query the first disk block in the first VM's disk and in a snapshot cache, respectively, where the snapshot cache stores the disk blocks in the disk snapshot whose frequency of use meets a set requirement.
  • 303: When the first disk block is found in the first VM's disk or in the snapshot cache, return the first disk block to the first VM.
  • In this embodiment, a snapshot cache is provided to store the disk blocks whose frequency of use meets the set requirement.
  • The disk blocks whose frequency of use meets the set requirement may be the several most frequently used disk blocks, several disk blocks whose frequency of use exceeds a set frequency threshold, or several disk blocks whose frequency of use falls within a certain interval.
  • The frequency of use of a disk block may be its frequency of use within a certain period of time, such as the last week or the most recent month.
  • In this way, the probability that a disk block stored in the snapshot cache is hit while the first VM reads the disk is relatively high, for example above a set probability threshold.
  • In step 302, the disk block requested by the first VM may be queried in the first VM's disk and in the snapshot cache, respectively.
  • When the requested disk block is found in either, it is returned to the first VM without being requested from the snapshot center 103.
  • When the requested disk block is found in neither the first VM's disk nor the snapshot cache, it is requested from the snapshot center 103, and the disk block returned by the snapshot center 103 is stored into the first VM's disk for the first VM to use.
  • In this way, the probability that the first VM hits the required disk block in the snapshot cache is increased to some extent, reducing the probability of requesting disk
  • blocks from the snapshot center and thus relieving the concurrency pressure on the snapshot center.
  • Optionally, one implementation of step 302 is: query the snapshot cache first, and query the first VM's disk when the first disk block is not found in the snapshot cache.
  • Alternatively, the first VM's disk may be queried first, and the snapshot cache queried when the first disk block is not found in the first VM's disk.
  • Preferably, because the first VM's disk is exclusive to the first VM while the snapshot cache is shared by all VMs, the first VM's disk may be queried first; when the first disk block is found in the first VM's disk, the snapshot cache need not be queried, which reduces the concurrency pressure on the snapshot cache.
  • Optionally, when the first disk block is found in neither the first VM's disk nor the snapshot cache, the first disk block may be requested from the snapshot center that stores the disk snapshot, and the first disk block returned by the snapshot center is stored into the first VM's disk.
  • Optionally, before the first VM is started, the frequency of use of each disk block may be derived from how other VMs used the disk blocks in the disk snapshot during their startup;
  • the disk blocks whose frequency of use meets the set requirement are then stored into the snapshot cache. In this way, the disk blocks in the snapshot cache can be used during the first VM's startup.
  • Optionally, during the first VM's startup, the disk blocks in the disk snapshot may also be loaded into the first VM's disk in the Lazyload manner, according to a disk block loading order.
  • Optionally, the snapshot cache may first be queried, in the disk block loading order, for the disk block that needs to be loaded into the first VM's disk; when that disk block is not found in the snapshot cache, it is requested from the snapshot center, and the disk block returned by the snapshot center is stored into the first VM's disk.
  • Optionally, before this, the disk block priority list can be read from the snapshot cache or the snapshot center; the disk block priority list records the order in which the first VM uses the disk blocks in the disk snapshot, and this order is taken as the disk block loading order. The disk blocks are then loaded into the first VM's disk in the order in which the first VM uses the disk blocks in the disk snapshot.
  • Optionally, the disk block priority list can be generated before it is read from the snapshot cache or the snapshot center.
  • One way to obtain the disk block priority list includes:
  • collecting statistics on the first VM's usage of each disk block in the disk snapshot during historical startups to obtain the order in which the first VM uses the disk blocks; storing the index numbers of the disk blocks into the disk block priority list in that order; and storing the disk block priority list in the snapshot cache or the snapshot center.
  • The foregoing mainly describes the case where the first VM reads the disk during startup.
  • The first VM also writes to the disk during startup.
  • For a disk block that does not yet exist in the first VM's disk, one option is to first load the disk block into the first VM's disk and then perform the write operation on it, but this is inefficient.
  • Instead, a sparse file is created for the first VM's disk; the sparse file is mainly used to temporarily store incremental data of certain disk blocks of the first VM's disk.
  • Based on the sparse file, a method for snapshot processing during a disk write operation is shown in FIG. 3b and includes the following steps:
  • 304: Receive a write disk request issued by the first VM during startup, where the write disk request requests a write operation on a second disk block in the disk snapshot.
  • 305: Determine whether the second disk block is present in the first VM's disk. If the determination result is no, go to step 306; if the determination result is yes, go to step 307.
  • 306: Write the incremental data of the write disk request into the sparse file corresponding to the first VM's disk.
  • 307: Perform the write operation directly in the second disk block in the first VM's disk.
  • During startup, the first VM may issue a write disk request to the storage management device; the request carries the index number of the disk block on which the write operation needs to be performed.
  • According to the write disk request, the storage management device determines whether the disk block the first VM needs to write is present in the first VM's disk.
  • When the first VM's disk contains that disk block, the write operation is performed directly in the disk block; when the first VM's disk does not contain the disk block the first VM needs to write, the incremental data of the write disk request is written into the sparse file corresponding to the first VM's disk.
  • In this way, the write operation completes without waiting for the disk block to be loaded, which improves write efficiency.
  • Optionally, a bitmap file can also be used to record the location of the incremental data in the sparse file.
  • Each bit in the bitmap file corresponds to a sector of the sparse file and records that sector's usage status: if incremental data is stored in a sector, the corresponding bit in the bitmap file is marked valid, for example set to 1. On this basis, after the incremental data of a write disk request is written into the sparse file corresponding to the first VM's disk, the location of that incremental data in the sparse file can be recorded in the bitmap file corresponding to the sparse file, that is, the bits corresponding to the affected sectors are marked valid.
  • It should be noted that the steps of the method provided by the foregoing embodiments may all be executed by the same device, or the method may be executed by different devices.
  • For example, the execution body of steps 301 to 303 may be device A; alternatively, the execution body of steps 301 and 302 may be device A while the execution body of step 303 is device B; and so on.
  • FIG. 4 is a schematic structural diagram of a virtual machine snapshot processing apparatus according to another embodiment of the present disclosure.
  • the virtual machine snapshot processing apparatus includes a receiving module 401, a query module 402, and a providing module 403.
  • The receiving module 401 is configured to receive a read disk request issued by the first VM during startup, where the read disk request requests a first disk block in the disk snapshot required for the first VM to boot.
  • The query module 402 is configured to query the first disk block in the first VM's disk and in the snapshot cache, respectively, where the snapshot cache stores the disk blocks in the disk snapshot whose frequency of use meets the set requirement.
  • The providing module 403 is configured to return the first disk block to the first VM when the first disk block is found in the first VM's disk or in the snapshot cache.
  • Optionally, the providing module 403 is further configured to: when the first disk block is found in neither the first VM's disk nor the snapshot cache, request the first disk block from the snapshot center that stores the disk snapshot, and store the first disk block returned by the snapshot center into the first VM's disk.
  • Optionally, the virtual machine snapshot processing apparatus further includes a statistics module configured to, before the first VM is started, derive
  • the frequency of use of each disk block from how other VMs used the disk blocks in the disk snapshot during their startup, and store the disk blocks whose frequency of use meets the set requirement into the snapshot cache.
  • Optionally, the query module 402 is further configured to load the disk blocks in the disk snapshot into the first VM's disk in the Lazyload manner, in the disk block loading order, during the first VM's startup.
  • When loading the disk blocks of the disk snapshot, the query module 402 is specifically configured to:
  • query the snapshot cache, in the disk block loading order, for the disk block that needs to be loaded into the first VM's disk; and, when that disk block is not found in the snapshot cache, request it from the snapshot center
  • and store the disk block returned by the snapshot center into the first VM's disk.
  • Optionally, the query module 402 is further configured to: before querying the snapshot cache for disk blocks to be loaded in the disk block loading order,
  • read the disk block priority list from the snapshot cache or the snapshot center, where the disk block priority list records the order in which the first VM uses the disk blocks in the disk snapshot, and take that order as the disk block loading order.
  • Optionally, the statistics module is further configured to:
  • collect statistics on the first VM's usage of each disk block in the disk snapshot during historical startups to obtain the order in which the first VM uses the disk blocks; store the index numbers of the disk blocks into the disk block priority list in that order; and store the disk block priority list in the snapshot cache or the snapshot center.
  • Optionally, the receiving module 401 is further configured to: receive a write disk request issued by the first VM during startup, where the write disk request requests a write operation on a second disk block in the disk snapshot; and, when the second disk block is not present in the first VM's disk,
  • write the incremental data of the write disk request into the sparse file corresponding to the first VM's disk.
  • Optionally, the receiving module 401 is further configured to record, in the bitmap file corresponding to the sparse file, the location of the incremental data in the sparse file.
  • Optionally, the query module 402 is specifically configured to: query the sparse file for incremental data of the first disk block; and, when incremental data of the first disk block is found in the sparse file, merge the first disk block with its incremental data to obtain merged disk block data.
  • Accordingly, when returning the first disk block to the first VM, the providing module 403 is specifically configured to return the merged disk block data to the first VM.
  • the virtual machine snapshot processing device provided in this embodiment may be used to perform the process in the foregoing snapshot method embodiment, and the working principle thereof is not described again. For details, refer to the description of the method embodiment.
  • In this embodiment, the virtual machine snapshot processing apparatus adds a snapshot cache for the VM that stores the disk blocks in the disk snapshot whose frequency of use meets the set requirement. When the apparatus receives a
  • read disk request issued by the VM during startup,
  • it queries the VM's disk and the snapshot cache for the requested disk block and returns the disk block to the VM when it is found in the VM's disk or in the snapshot cache.
  • Because the snapshot cache stores the disk blocks of the disk snapshot whose frequency of use meets the set requirement, the apparatus increases the probability that a read disk request hits the required disk block and reduces the probability of requesting the disk block from the snapshot center, thereby relieving the overall concurrency pressure on the snapshot center.
  • As shown in FIG. 5, the virtual machine snapshot processing apparatus described above can be implemented as an electronic device including a memory 500, a processor 501, and a communication component 502.
  • The communication component 502 is configured to receive a read disk request issued by a first VM during startup, where the read disk request requests a first disk block in the disk snapshot required for the first VM to boot.
  • The processor 501, coupled to the memory 500, executes the program stored there to:
  • query the first disk block in the first VM's disk and in the snapshot cache, respectively, where the snapshot cache stores the disk blocks in the disk snapshot whose frequency of use meets the set requirement; and, when the first disk block is found in the first VM's disk or in the snapshot cache, return the first disk block to the first VM.
  • Accordingly, the communication component 502 is further configured to return the first disk block to the first VM.
  • Optionally, the processor 501 is further configured to: when the first disk block is found in neither the first VM's disk nor the snapshot cache,
  • request the first disk block from the snapshot center storing the disk snapshot through the communication component 502, and store the first disk block returned by the snapshot center, received through the communication component 502,
  • into the first VM's disk.
  • Accordingly, the communication component 502 is further configured to request the first disk block from the snapshot center storing the disk snapshot and to receive the first disk block returned by the snapshot center so that it can be stored into the first VM's disk.
  • Optionally, before the first VM is started, the processor 501 is further configured to: derive the frequency of use of each disk block from how other VMs used the disk blocks in the disk snapshot during their startup;
  • and store the disk blocks whose frequency of use meets the set requirement into the snapshot cache.
  • Optionally, the processor 501 is further configured to: during the first VM's startup, load the disk blocks in the disk snapshot into the first VM's disk in the Lazyload manner, in the disk block loading order.
  • When loading the disk blocks of the disk snapshot, the processor 501 is specifically configured to:
  • query the snapshot cache, in the disk block loading order, for the disk block that needs to be loaded into the first VM's disk; and, when that disk block is not found in the snapshot cache, request it from the snapshot center through the communication component 502
  • and store the disk block returned by the snapshot center, received through the communication component 502, into the first VM's disk.
  • Accordingly, the communication component 502 is further configured to request from the snapshot center the disk block that needs to be loaded into the first VM's disk and to receive the disk block returned by the snapshot center so that it can be stored into the first VM's disk.
  • Optionally, the processor 501 is further configured to: before querying the snapshot cache for disk blocks to be loaded in the disk block loading order,
  • read the disk block priority list from the snapshot cache or the snapshot center, where the disk block priority list records the order in which the first VM uses the disk blocks in the disk snapshot, and take that order as the disk block loading order.
  • Optionally, before reading the disk block priority list from the snapshot cache or the snapshot center, the processor 501 is further configured to:
  • collect statistics on the first VM's usage of each disk block in the disk snapshot during historical startups to obtain the order in which the first VM uses the disk blocks; store the index numbers of the disk blocks into the disk block priority list in that order; and store the disk block priority list in the snapshot cache or the snapshot center.
  • Optionally, the communication component 502 is further configured to receive a write disk request issued by the first VM during startup, where the write disk request requests a write operation on a second disk block in the disk snapshot.
  • Accordingly, the processor 501 is further configured to: when the second disk block is not present in the first VM's disk, write the incremental data of the write disk request into
  • the sparse file corresponding to the first VM's disk.
  • Optionally, the processor 501 is further configured to record, in the bitmap file corresponding to the sparse file, the location of the incremental data in the sparse file.
  • Optionally, the processor 501 is further configured to: query the sparse file for incremental data of the first disk block; when incremental data of the first disk block is found in the sparse file, merge the first disk block with its incremental data to obtain merged disk block data; and return the merged disk block data to the first VM through the communication component 502.
  • the communication component 502 is specifically configured to return the merged disk block data to the first VM.
  • Optionally, as shown in FIG. 5, the electronic device further includes a display 503, a power supply component 504, an audio component 505, and other components. FIG. 5 schematically shows only some of the components, which does not mean that the electronic device includes only the components shown in FIG. 5.
  • the communication component in Figure 5 can be configured to facilitate wired or wireless communication between the device to which the communication component belongs and other devices.
  • the device to which the communication component belongs can access a wireless network based on a communication standard such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component further includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • RFID radio frequency identification
  • IrDA infrared data association
  • UWB ultra-wideband
  • Bluetooth Bluetooth
  • the display in FIG. 5 may include a screen whose screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the power component in Figure 5 provides power to the various components of the device to which the power component belongs.
  • the power components can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power to the devices to which the power components belong.
  • the audio component in Figure 5 is configured to output and/or input audio signals.
  • the audio component includes a microphone (MIC) that is configured to receive an external audio signal when the device to which the audio component belongs is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal can be further stored in a memory or transmitted via a communication component.
  • the audio component further includes a speaker for outputting an audio signal.
  • The electronic device provided by this embodiment of the present application is configured to process virtual machine snapshots: a snapshot cache is added for the VM to store the disk blocks, in the disk snapshot required during VM startup, whose frequency of use meets the set requirement. On this basis, when the electronic device receives a read disk request issued by the VM during startup, it queries the VM's disk and the snapshot cache for the requested disk block and returns the disk block to the VM when it is found in the VM's disk or in the snapshot cache.
  • Because the snapshot cache stores the disk blocks of the disk snapshot whose frequency of use meets the set requirement, the probability that a read disk request hits the required disk block is increased and the probability of requesting the disk block from the snapshot center is reduced, thereby relieving the overall concurrency pressure on the snapshot center.
  • The embodiment of the present application further provides a computer readable storage medium storing a computer program that, when executed, can implement the following:
  • receiving a read disk request issued by a first VM during startup, where the read disk request requests a first disk block in the disk snapshot required for the first VM to boot; querying the first disk block in the first VM's disk and in a snapshot cache, respectively, where the snapshot cache stores the disk blocks in the disk snapshot whose frequency of use meets the set requirement;
  • when the first disk block is found in the first VM's disk or in the snapshot cache, returning the first disk block to the first VM.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • These computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including an instruction apparatus that
  • implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing,
  • and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent storage, random access memory (RAM), and/or non-volatile memory in computer readable media, such as read-only memory (ROM) or flash memory (flash RAM).
  • Memory is an example of a computer readable medium.
  • Computer readable media includes both permanent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined herein, computer readable media does not include transitory computer readable media such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiments of the present application provide a virtual machine snapshot processing method, apparatus, and system. In the embodiments, a snapshot cache is added for the VM and stores the disk blocks, in the disk snapshot required during VM startup, whose frequency of use meets a set requirement. On this basis, when a read disk request issued by the VM during startup is received, the requested disk block is queried in the VM's disk and in the snapshot cache, and is returned to the VM when it is found in the VM's disk or in the snapshot cache. Because the snapshot cache stores the disk blocks of the disk snapshot whose frequency of use meets the set requirement, the probability that a read disk request hits the required disk block is increased and the probability of requesting the disk block from the snapshot center is reduced, thereby relieving the overall concurrency pressure on the snapshot center.

Description

虚拟机快照处理方法、装置及***
本申请要求2017年11月08日递交的申请号为201711091731.1、发明名称为“虚拟机快照处理方法、装置及***”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及存储技术领域,尤其涉及一种虚拟机快照处理方法、装置及***。
Background
A virtual machine (VM) is a complete computer system that is simulated in software, provides full hardware-system functionality, and runs in a fully isolated environment. A VM runs on a host with an independent runtime environment, which helps improve host security and allows multiple operating systems to run on a single host, so VMs have become increasingly popular.
Owing to these advantages, more and more Internet applications run on VMs. The traffic of Internet applications fluctuates considerably, and when a traffic peak arrives, a large number of VMs must be started within a short time. A VM starts from a disk snapshot, which contains the operating system and other data the VM needs to run; disk snapshots are generally stored in a snapshot center.
When a disk block is needed during VM startup, it is first read from the VM's disk; if it is not found there, it is requested from the snapshot center. When a large number of VMs start within a short time, the snapshot center may have to handle a large number of requests concurrently and thus comes under heavy concurrency pressure.
Summary
Various aspects of the present application provide a virtual machine snapshot processing method, apparatus, and system for relieving the concurrency pressure on the snapshot center.
An embodiment of the present application provides a virtual machine snapshot processing method, including:
receiving a read-disk request issued by a first virtual machine during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first virtual machine;
querying the first disk block in the first virtual machine's disk and in a snapshot cache, respectively, the snapshot cache storing disk blocks of the disk snapshot whose usage frequency meets a set requirement; and
returning the first disk block to the first virtual machine when the first disk block is found in the first virtual machine's disk or in the snapshot cache.
An embodiment of the present application further provides a virtual machine snapshot processing apparatus, configured for:
receiving a read-disk request issued by a first virtual machine during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first virtual machine;
querying the first disk block in the first virtual machine's disk and in a snapshot cache, respectively, the snapshot cache storing disk blocks of the disk snapshot whose usage frequency meets a set requirement; and
returning the first disk block to the first virtual machine when the first disk block is found in the first virtual machine's disk or in the snapshot cache.
An embodiment of the present application further provides an electronic device, including a memory and a processor;
the memory being configured to store a program;
the processor, coupled to the memory, being configured to execute the program to:
receive a read-disk request issued by a first virtual machine during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first virtual machine;
query the first disk block in the first virtual machine's disk and in a snapshot cache, respectively, the snapshot cache storing disk blocks of the disk snapshot whose usage frequency meets a set requirement; and
when the first disk block is found in the first virtual machine's disk or in the snapshot cache, return the first disk block to the first virtual machine.
The present application further provides a cloud computing system, including a computing cluster, a storage cluster, and a snapshot center;
the computing cluster being configured to provide computing resources for a first virtual machine, the first virtual machine running in the computing cluster;
the storage cluster being configured to provide the first virtual machine's disk and a snapshot cache, the snapshot cache storing disk blocks, of the disk snapshot required for starting the first virtual machine, whose usage frequency meets a set requirement;
the snapshot center being configured to store the disk snapshot;
the storage cluster including a storage management device, the storage management device being configured to:
receive a read-disk request issued by the first virtual machine during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first virtual machine;
query the first disk block in the first virtual machine's disk and in the snapshot cache, respectively, the snapshot cache storing disk blocks of the disk snapshot whose usage frequency meets the set requirement; and
when the first disk block is found in the first virtual machine's disk or in the snapshot cache, return the first disk block to the first virtual machine.
In the embodiments of the present application, a snapshot cache is added for virtual machines to store those disk blocks, of the disk snapshot required during VM startup, whose usage frequency meets a set requirement. On this basis, when a read-disk request issued by a virtual machine during startup is received, the requested disk block is queried in both the virtual machine's disk and the snapshot cache, and is returned to the virtual machine when found in either. Because the snapshot cache stores the disk blocks of the disk snapshot whose usage frequency meets the set requirement, the probability that a read-disk request hits the required disk block is increased, the probability of requesting disk blocks from the snapshot center is reduced, and the overall concurrency pressure on the snapshot center is thereby relieved.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their descriptions are used to explain the present application and do not unduly limit it. In the drawings:
FIG. 1 is a schematic diagram of an exemplary storage-compute separated cloud computing architecture provided by an exemplary embodiment of the present application;
FIG. 2a is a schematic diagram of one process in which an IO thread and a Lazyload thread handle disk blocks during startup of a first VM, provided by another exemplary embodiment of the present application;
FIG. 2b is a schematic diagram of another process in which the IO thread and the Lazyload thread handle disk blocks during startup of the first VM, provided by yet another exemplary embodiment of the present application;
FIG. 3a is a schematic flowchart of a virtual machine snapshot processing method provided by yet another exemplary embodiment of the present application;
FIG. 3b is a schematic flowchart of a virtual machine snapshot processing method provided by yet another exemplary embodiment of the present application;
FIG. 4 is a schematic structural diagram of a virtual machine snapshot processing apparatus provided by a further embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device provided by a further embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions of the present application are described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
To address the heavy concurrency pressure on the snapshot center in the prior art, the embodiments of the present application provide a solution whose main principle is as follows: a snapshot cache is added for VMs to store those disk blocks, of the disk snapshot required during VM startup, whose usage frequency meets a set requirement. On this basis, when a read-disk request issued by a VM during startup is received, the requested disk block is queried in both the VM's disk and the snapshot cache, and is returned to the VM when found in either. Because the snapshot cache stores the disk blocks of the disk snapshot whose usage frequency meets the set requirement, the probability that a read-disk request hits the required disk block is increased and the probability of requesting disk blocks from the snapshot center is reduced, thereby relieving the overall concurrency pressure on the snapshot center.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the drawings.
FIG. 1 is a schematic diagram of a storage-compute separated cloud computing architecture provided by an exemplary embodiment of the present application. As shown in FIG. 1, the architecture 10 includes a computing cluster 101, a storage cluster (EBS) 102, and a snapshot center 103. There may be one or more computing clusters 101, and one or more storage clusters 102. There is generally one snapshot center 103, though not necessarily only one.
The computing cluster 101 includes multiple computing nodes. A computing node is a device with a certain computing capability and generally includes components such as a processor, memory, and an Ethernet controller. For example, a computing node may be a personal computer, a desktop computer, or a server. The computing nodes in the computing cluster 101 may be implemented by devices of the same type or of different types. The computing cluster 101 is mainly responsible for providing computing resources to VMs, which can run on its computing nodes.
The storage cluster 102 includes multiple storage devices. A storage device is a device with a certain storage capability and generally includes a processor, a system bus, and at least one physical storage medium such as a hard disk or memory. For example, a storage device may be a server, a desktop computer, a personal computer, a mobile phone, a tablet computer, a database, and so on. A server may be a conventional server, a cloud server, a cloud host, a virtual center, etc. The storage devices in the storage cluster 102 may be of the same type or of different types.
In addition to storage devices, the storage cluster 102 includes a storage management device, which is mainly responsible for the storage management logic of the storage cluster 102. In deployment, the storage management device may be deployed independently of the storage devices, or on one or more of them. The storage cluster 102 is mainly responsible for providing storage resources to VMs, for example providing each VM with a disk (VM disk); that is, the disks required by the VMs running in the computing cluster 101 reside in the storage cluster 102, realizing a storage-compute separated architecture.
The snapshot center 103 is mainly responsible for storing the disk snapshots required for VM startup. A disk snapshot includes, but is not limited to, the operating system and other data required during VM startup. Optionally, the snapshot center 103 may use object storage, which has a relatively low storage cost.
Multiple VMs may run on the computing nodes of the computing cluster 101. Some of these VMs use the same disk snapshot and some use different ones. The snapshot center 103 stores the disk snapshots each VM needs. For different VMs, whether or not they use the same disk snapshot, the snapshot-based startup process is similar. For ease of description and distinction, this embodiment takes a first VM as an example, where the first VM may be any VM running on a computing node.
When a user needs to run the first VM, a disk can be created for it in the storage cluster 102. For example, the user may input a disk creation instruction to the storage management device through a command interface provided by the storage management device. Disk creation instructions vary with the storage system. The storage management device can create a disk for the first VM on one or more storage devices according to the disk creation instruction. During startup of the first VM, the first VM's disk is mainly used to store the disk snapshot required by the startup process.
In this embodiment, a disk snapshot includes multiple disk blocks, and each disk block is assigned an index number (Index) in order. A disk block is the smallest storage unit of a disk snapshot and also the smallest loading unit. Different disk blocks may be of the same or different sizes. When disk blocks are of the same size, the size is not limited; for example, a disk block may be 200 MB, 300 MB, or 500 MB.
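To make the indexing concrete, the minimal Python sketch below maps a byte offset in the virtual disk to the index of the block that contains it, assuming fixed-size blocks; the 200 MB constant and the helper names are illustrative assumptions, not details fixed by this application.

```python
BLOCK_SIZE = 200 * 1024 * 1024  # assumed fixed block size: 200 MB

def block_index(byte_offset: int) -> int:
    """Index of the disk block containing the given byte offset;
    blocks are numbered 0, 1, 2, ... in order."""
    return byte_offset // BLOCK_SIZE

def block_span(index: int) -> tuple[int, int]:
    """[start, end) byte range covered by the block with this index."""
    start = index * BLOCK_SIZE
    return start, start + BLOCK_SIZE
```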
During startup of the first VM, when the first VM needs to use a disk block (that is, needs to read the disk), it can issue a read-disk request to the storage management device. According to the request, the storage management device queries the first VM's disk for the requested disk block. If the block is found in the first VM's disk, it is returned to the first VM; if not, it is requested from the snapshot center 103.
When a large number of VMs start within a short time and the requested disk blocks are not found in the VMs' disks, the storage management device has to request disk blocks from the snapshot center 103 for all of the VMs at once, which puts heavy concurrency pressure on the snapshot center 103.
To solve this problem, the storage management device selects a piece of storage space in the storage cluster 102 as a snapshot cache. The snapshot cache is mainly used to store the disk blocks of the disk snapshot whose usage frequency meets a set requirement. Optionally, these may be the most frequently used blocks, the blocks whose usage frequency exceeds a set frequency threshold, or the blocks whose usage frequency falls within a certain interval. The usage frequency of a disk block may be its usage frequency within a certain period, for example within the last week or the last month. The disk blocks stored in the snapshot cache have a relatively high probability of being hit while the first VM reads the disk, for example higher than a set probability threshold.
On this basis, during startup of the first VM, when the first VM needs to use a disk block (that is, needs to read the disk), it can issue a read-disk request to the storage management device. The request carries the index number of the disk block, pointing to the block being requested. For ease of description and distinction, the block requested by the read-disk request is called the first disk block; it is some block in the disk snapshot required for starting the first VM. On receiving the read-disk request issued by the first VM during startup, the storage management device can query the first disk block in the first VM's disk and in the snapshot cache, respectively. When the first disk block is found in the first VM's disk or in the snapshot cache, it is returned to the first VM without being requested from the snapshot center 103. When it is found in neither, the storage management device requests it from the snapshot center 103 and stores the block returned by the snapshot center 103 into the first VM's disk for the first VM to use.
In this embodiment, by storing the frequently used disk blocks of the disk snapshot in the snapshot cache, the probability that the first VM hits the required disk block directly in the storage cluster 102 when reading the disk is increased to some extent, which in turn reduces the probability of requesting disk blocks from the snapshot center 103 and relieves its concurrency pressure.
Both the snapshot cache and the first VM's disk reside in the storage space of the storage cluster 102, so the storage management device queries them with essentially the same performance, and the order in which they are queried need not be limited. For example, the snapshot cache may be queried first and the first VM's disk only when the first disk block is not found in the cache; or the first VM's disk may be queried first and the snapshot cache only when the block is not found in the disk.
Further, considering that the first VM's disk is exclusive to the first VM while the snapshot cache is shared by all VMs, the first VM's disk may be queried first; when the first disk block is found there, the snapshot cache need not be queried, which relieves the pressure on the snapshot cache.
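A minimal sketch of this disk-first read path follows, with Python dictionaries standing in for the VM's disk, the snapshot cache, and the snapshot center; the function name and the storage model are illustrative assumptions, not the application's implementation.

```python
def handle_read_request(vm_disk: dict, snapshot_cache: dict,
                        snapshot_center: dict, block_index: int) -> bytes:
    """Serve a read-disk request: the VM's own disk first (it is
    exclusive to the VM), then the shared snapshot cache, and only
    on a double miss the snapshot center."""
    if block_index in vm_disk:              # query the VM's disk first
        return vm_disk[block_index]
    if block_index in snapshot_cache:       # fall back to the hot-block cache
        return snapshot_cache[block_index]
    block = snapshot_center[block_index]    # double miss: go to the center
    vm_disk[block_index] = block            # persist for later reads
    return block
```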
In the above or following embodiments, before the first VM starts, the storage management device can obtain the usage frequency of each disk block in the disk snapshot according to how other VMs used those blocks during their startup, and store the blocks whose usage frequency meets the set requirement into the snapshot cache. Here, the other VMs are VMs that use the same disk snapshot as the first VM and started before it. Of course, during startup of the first VM, the storage management device may also obtain the usage frequency of each block according to its use by the first VM, by other VMs currently starting, and by VMs started before the first VM, and update the blocks in the snapshot cache in real time according to these frequencies; the blocks stored in the snapshot cache thus change dynamically. For example, the storage management device can count how each VM used the blocks of the disk snapshot over a recent period and store the N most frequently used blocks into the snapshot cache, where N is a preset positive integer.
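The sketch below illustrates one plausible top-N selection under stated assumptions: block usage is tallied from a log of (timestamp, block index) read events within a recent window, and the snapshot cache is refreshed to hold the N hottest blocks. The event-log format, the window, and all names are assumptions for illustration.

```python
import time
from collections import Counter

WINDOW = 7 * 24 * 3600   # assumed statistics window: the last week
N = 1024                 # preset number of hot blocks kept in the cache

def rebuild_snapshot_cache(read_events, snapshot_cache, snapshot_center):
    """read_events: (timestamp, block_index) pairs recorded from boots of
    VMs that use the same disk snapshot. Keeps the N most frequently
    used blocks of the recent window in the snapshot cache."""
    cutoff = time.time() - WINDOW
    freq = Counter(idx for ts, idx in read_events if ts >= cutoff)
    hot = {idx for idx, _ in freq.most_common(N)}
    for idx in hot - snapshot_cache.keys():
        snapshot_cache[idx] = snapshot_center[idx]   # warm missing hot blocks
    for idx in snapshot_cache.keys() - hot:
        del snapshot_cache[idx]                      # evict blocks gone cold
```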
In some embodiments, during startup of the first VM, the storage management device can, on the one hand, provide disk blocks to the first VM in response to its read-disk requests and, on the other hand, lazily load (Lazyload) the disk snapshot into the first VM's disk at disk-block granularity. These two operations can run in parallel.
In one implementation, the storage management device can start two threads: a Lazyload thread and an IO thread. The Lazyload thread is mainly responsible for loading the disk blocks of the disk snapshot into the first VM's disk in lazy-load fashion, following a disk-block loading order. The IO thread is mainly responsible for handling the read-disk requests issued by the first VM.
Referring to FIG. 2a, when the first VM needs to read the disk, it issues a read-disk request to the IO thread (read-disk request ① in FIG. 2a), carrying the index number of the block to be read. According to the request, the IO thread queries the first VM's disk (read block ② in FIG. 2a) and the snapshot cache (read block ③ in FIG. 2a) in turn; if the block identified by the index number is found in the first VM's disk or in the snapshot cache, it is returned to the first VM (return block ④ in FIG. 2a); if not, the IO thread instructs the Lazyload thread to request the block identified by the index number from the snapshot center 103 (read block ⑤ in FIG. 2a). Before receiving such an instruction, the Lazyload thread loads blocks of the disk snapshot into the first VM's disk in lazy-load fashion, following the block loading order. On receiving the instruction, the Lazyload thread can preferentially read the identified block from the snapshot center 103 and store it into the first VM's disk (write block ⑥ in FIG. 2a) so that the first VM can use the block as soon as possible. After reading that block from the snapshot center 103, the Lazyload thread continues loading blocks into the first VM's disk in lazy-load fashion, following the block loading order.
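The following sketch shows one way the IO thread and the Lazyload thread could cooperate, assuming a thread-safe queue through which the IO thread hands urgent misses to the Lazyload thread; the queue-based signalling, the dictionary storage model, and the simplified miss handling (no retry path back to the VM) are illustrative assumptions. The parenthesized numbers mirror the circled steps of FIG. 2a.

```python
import queue
import threading

urgent = queue.Queue()   # IO thread -> Lazyload thread: cache misses (5)

def io_thread(vm_disk, snapshot_cache, requests, reply):
    for block_index in requests:                     # read-disk requests (1)
        if block_index in vm_disk:                   # read VM disk (2)
            reply(vm_disk[block_index])              # return block (4)
        elif block_index in snapshot_cache:          # read snapshot cache (3)
            reply(snapshot_cache[block_index])       # return block (4)
        else:
            urgent.put(block_index)                  # delegate the miss (5)

def lazyload_thread(vm_disk, snapshot_cache, snapshot_center, load_order):
    for block_index in load_order:
        while not urgent.empty():                    # urgent misses first
            miss = urgent.get()
            vm_disk[miss] = snapshot_center[miss]    # write block (6)
        if block_index not in vm_disk:
            block = snapshot_cache.get(block_index)  # try the cache (7)
            if block is None:
                block = snapshot_center[block_index]
            vm_disk[block_index] = block             # write block (6)

# Illustrative wiring:
# threading.Thread(target=io_thread, args=(disk, cache, reqs, print)).start()
# threading.Thread(target=lazyload_thread,
#                  args=(disk, cache, center, order)).start()
```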
It should be noted that FIG. 2a assumes the snapshot cache contains blocks d1, d2, ..., dk and the snapshot center contains blocks d1, d2, ..., dn; this is only an example and is not limiting.
As can be seen from the above, with the frequently used blocks of the disk snapshot stored in the snapshot cache, the hit rate of the IO thread is greatly improved, which reduces the VM's dependence on the Lazyload thread. The speed of the Lazyload thread can therefore be kept as low as possible, minimizing its impact on the storage cluster 102 and the snapshot center 103, even to the point of having essentially no impact.
In an exemplary embodiment, on top of the snapshot cache, the Lazyload thread loads blocks in lazy-load fashion according to the block loading order as follows: following the block loading order, it queries the snapshot cache for the block to be loaded into the first VM's disk (read block ⑦ in FIG. 2a). When that block is not found in the snapshot cache, the Lazyload thread requests the block from the snapshot center 103 (read block ⑤ in FIG. 2a) and stores the block returned by the snapshot center 103 into the first VM's disk (write block ⑥ in FIG. 2a). When the block is found in the snapshot cache, the Lazyload thread stores the block found in the snapshot cache into the first VM's disk (write block ⑥ in FIG. 2a). Thus, when the block to be loaded can be read from the snapshot cache, there is no need to request it from the snapshot center 103, which reduces the number of block requests sent to the snapshot center 103 and further relieves its pressure.
Further, before loading blocks into the first VM's disk according to the block loading order, the Lazyload thread can read a disk-block priority list from the snapshot cache or the snapshot center 103. The priority list stores the order in which the first VM uses the blocks of the disk snapshot, and this usage order is taken as the block loading order. Using the order in which the first VM uses the blocks as the Lazyload thread's loading order helps ensure that the blocks used earliest during startup are loaded into the first VM's disk first, which raises the probability that the first VM's read-disk requests hit in its own disk, reduces the number of snapshot-cache queries triggered by those requests, and relieves the pressure on the snapshot cache. Especially in scenarios where many VMs start within a short time, this relieves the concurrency pressure that the VMs' read-disk requests place on the snapshot cache.
In an exemplary embodiment, one way to obtain the disk-block priority list includes: collecting statistics in advance on how the first VM used the blocks of the disk snapshot during historical startups to obtain the first VM's usage order of the blocks; storing the index numbers of the blocks into the priority list according to that usage order; and storing the priority list into the snapshot cache or the snapshot center 103 for the Lazyload thread to use.
Further optionally, one way to collect statistics on how the first VM used the blocks of the disk snapshot during historical startups includes: recording the read-disk requests issued by the first VM during historical startups and their order; collecting the index numbers of the blocks requested by those read-disk requests; and then storing the index numbers into the priority list in the order of the read-disk requests.
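A minimal sketch of building such a priority list from a historical boot trace follows, assuming the trace is simply the block indices in request order; repeated requests are collapsed so that each block keeps its first-use position. The helper name is illustrative.

```python
def build_priority_list(boot_trace):
    """boot_trace: block indices in the order the VM requested them
    during a historical startup. Returns each index once, in first-use
    order, for use as the Lazyload loading order."""
    seen = set()
    priority = []
    for idx in boot_trace:
        if idx not in seen:
            seen.add(idx)
            priority.append(idx)
    return priority

# Example: a trace with repeats -> de-duplicated first-use order.
assert build_priority_list([3, 0, 3, 7, 0, 1]) == [3, 0, 7, 1]
```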
The above embodiments mainly describe the first VM reading the disk during startup. Besides reading, the first VM also writes the disk during startup: a write operation may be needed on some disk block, for example to modify certain data. Such writes may involve blocks already present in the first VM's disk, or blocks not yet present there. For a write to a block not present in the first VM's disk, the block could first be loaded into the disk and then written, but this is inefficient. Instead, the storage management device creates a sparse file (shadow file) for the first VM's disk; the sparse file is mainly used to temporarily store the incremental data of certain blocks of the first VM's disk. Optionally, the sparse file has the same size as the first VM's disk.
Based on this sparse file, when the first VM needs to write the disk, it can issue a write-disk request to the storage management device, carrying the index number of the block to be written, pointing to that block. For ease of description, the block involved in the write-disk request is called the second disk block. On receiving the write-disk request, the storage management device determines whether the second disk block exists in the first VM's disk. If the first VM's disk contains the second disk block, the write is performed directly on that block; if not, the incremental data of the write-disk request is written into the sparse file corresponding to the first VM's disk. In this embodiment, when a write targets a block not present in the first VM's disk, the write can complete without waiting for the block to finish loading, which helps improve write efficiency.
Further, a bitmap file can be used to record the positions of incremental data in the sparse file. Each bit in the bitmap file corresponds to a sector of the sparse file and records that sector's usage state. If a sector holds incremental data, the corresponding bit in the bitmap file is set valid, for example set to 1. On this basis, after the incremental data of a write-disk request is written into the sparse file corresponding to the first VM's disk, the position of the incremental data in the sparse file can be recorded in the bitmap file corresponding to the sparse file, that is, the bits of the corresponding sectors are set valid.
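The sketch below illustrates this write path under stated assumptions: a 512-byte sector, a bytearray standing in for the bitmap file, a dictionary standing in for the VM's disk, and offsets expressed relative to the start of the virtual disk; all names are illustrative, not taken from the application.

```python
SECTOR_SIZE = 512  # assumed sector size of the sparse (shadow) file

def handle_write_request(vm_disk, sparse_file, bitmap, block_index,
                         offset, data, block_size):
    """Write in place when the VM disk already holds the target block;
    otherwise record the delta in the sparse file and mark the bitmap."""
    if block_index in vm_disk:
        block = bytearray(vm_disk[block_index])
        start = offset - block_index * block_size   # offset within the block
        block[start:start + len(data)] = data
        vm_disk[block_index] = bytes(block)
    else:
        write_delta(sparse_file, bitmap, offset, data)

def write_delta(sparse_file, bitmap, offset, data):
    """Store incremental data at its disk offset in the sparse file and
    set the bitmap bit of every sector the write covers (1 bit/sector)."""
    sparse_file.seek(offset)
    sparse_file.write(data)
    first = offset // SECTOR_SIZE
    last = (offset + len(data) - 1) // SECTOR_SIZE
    for sector in range(first, last + 1):
        bitmap[sector // 8] |= 1 << (sector % 8)
```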
The above embodiments describe the snapshot-cache-based disk-read process and the sparse-file-based disk-write process separately. The snapshot cache and the sparse file can be used independently or in combination. The following embodiments focus on how the storage management device, using the snapshot cache and the sparse file together, processes disk blocks during startup of the first VM.
Referring to FIG. 2b, when the first VM issues a read-disk request during startup, the storage management device (for example the IO thread) can query the first VM's disk for the requested first disk block (read block ② in FIG. 2b). When the first disk block is found in the first VM's disk, the sparse file corresponding to the first VM's disk is further queried for the incremental data of the first disk block (read incremental data ⑤ in FIG. 2b). If no incremental data of the first disk block is found in the sparse file, the first disk block as found is returned to the first VM. If incremental data is found, the first disk block is merged with its incremental data to obtain merged block data, and the merged block data is returned to the first VM.
When the first disk block is not found in the first VM's disk, the snapshot cache is queried next (read block ① in FIG. 2b). When the first disk block is found in the snapshot cache, the sparse file corresponding to the first VM's disk is further queried for its incremental data (read incremental data ⑤ in FIG. 2b). If no incremental data is found in the sparse file, the first disk block as found is returned to the first VM. If incremental data is found, the first disk block is merged with its incremental data to obtain merged block data, and the merged block data is returned to the first VM.
When the first disk block is not found in the snapshot cache either, the storage management device (for example the Lazyload thread) requests it from the snapshot center 103 (read block ⑥ in FIG. 2b) and queries the sparse file corresponding to the first VM's disk for its incremental data (read incremental data ⑤ in FIG. 2b). If no incremental data is found in the sparse file, the first disk block returned by the snapshot center 103 is stored into the first VM's disk for the first VM to use. If incremental data is found, the first disk block is merged with its incremental data to obtain merged block data, and the merged block data is stored into the first VM's disk.
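One plausible way to merge a base block with its sparse-file delta is sketched below, reusing SECTOR_SIZE and the per-sector bitmap layout from the previous sketch: sectors flagged in the bitmap are read from the sparse file, while all other bytes come from the base block. The block geometry and names are again assumptions.

```python
def sector_has_delta(bitmap, sector):
    """True if this sector of the sparse file holds incremental data."""
    return bool(bitmap[sector // 8] & (1 << (sector % 8)))

def merge_block(base_block, sparse_file, bitmap, block_index, block_size):
    """Overlay the sparse file's flagged sectors onto a base block that
    came from the VM disk, the snapshot cache, or the snapshot center."""
    merged = bytearray(base_block)
    block_start = block_index * block_size
    first_sector = block_start // SECTOR_SIZE
    for i in range(block_size // SECTOR_SIZE):
        if sector_has_delta(bitmap, first_sector + i):
            sparse_file.seek(block_start + i * SECTOR_SIZE)
            merged[i * SECTOR_SIZE:(i + 1) * SECTOR_SIZE] = \
                sparse_file.read(SECTOR_SIZE)
    return bytes(merged)
```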
In addition, the storage management device (for example the Lazyload thread) can load blocks into the first VM's disk in lazy-load fashion, following the block loading order. The process includes: following the block loading order, querying the snapshot cache for the block to be loaded into the first VM's disk (read block ① in FIG. 2b). When that block is not found in the snapshot cache, the block is requested from the snapshot center 103 (read block ⑥ in FIG. 2b), and the sparse file is queried for its incremental data (read incremental data ⑤ in FIG. 2b). If no incremental data of the block is found in the sparse file, the block returned by the snapshot center 103 is stored into the first VM's disk (③ in FIG. 2b). If incremental data is found, the block returned by the snapshot center 103 is merged with the incremental data found in the sparse file to form new block data, and the new block data is stored into the first VM's disk (③ in FIG. 2b).
When the block to be loaded is found in the snapshot cache, the sparse file is queried for its incremental data (read incremental data ⑤ in FIG. 2b). If no incremental data is found, the block found in the snapshot cache is stored into the first VM's disk (③ in FIG. 2b). If incremental data is found, the block found in the snapshot cache is merged with the incremental data found in the sparse file to form new block data, and the new block data is stored into the first VM's disk (③ in FIG. 2b).
In the scenario where the snapshot cache and the sparse file are used together, the first VM's disk-write process is the same as the sparse-file-based write process described above (write incremental data ④ in FIG. 2b) and is not repeated here.
It should be noted that FIG. 2b assumes the snapshot cache contains blocks d1, d2, ..., dk; the snapshot center contains blocks d1, d2, ..., dn; the VM disk contains blocks d1, d2, ..., dj; and the sparse file contains incremental data d1, d2, ..., dm. This is only an example and is not limiting. Incremental data and disk blocks denoted by the same symbol belong together; for example, incremental data d1 is the incremental data of block d1, incremental data d2 is the incremental data of block d2, and so on.
It can thus be seen that combining the snapshot cache with the sparse file both preserves disk read/write efficiency and relieves the concurrency pressure on the snapshot center 103.
FIG. 3a is a schematic flowchart of a virtual machine snapshot processing method provided by an exemplary embodiment of the present application. The method is mainly executed by the storage management device and, as shown in FIG. 3a, includes:
301. Receive a read-disk request issued by a first virtual machine (VM) during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first VM.
302. Query the first disk block in the first VM's disk and in a snapshot cache, respectively; the snapshot cache stores disk blocks of the disk snapshot whose usage frequency meets a set requirement.
303. When the first disk block is found in the first VM's disk or in the snapshot cache, return the first disk block to the first VM.
In this embodiment, a snapshot cache stores the disk blocks of the disk snapshot whose usage frequency meets a set requirement. Optionally, these may be the most frequently used blocks, the blocks whose usage frequency exceeds a set frequency threshold, or the blocks whose usage frequency falls within a certain interval. The usage frequency of a block may be its usage frequency within a certain period, for example within the last week or the last month. The blocks stored in the snapshot cache have a relatively high probability of being hit while the first VM reads the disk, for example higher than a set probability threshold.
Based on the snapshot cache, when the first VM issues a read-disk request during startup, the requested block can be queried in the first VM's disk and in the snapshot cache, respectively. When the block is found in either, it is returned to the first VM without being requested from the snapshot center. When it is found in neither, the block is requested from the snapshot center, and the block returned by the snapshot center is stored into the first VM's disk for the first VM to use.
In this embodiment, storing the blocks of the disk snapshot whose usage frequency meets the set requirement in the snapshot cache increases, to some extent, the probability that the first VM hits the required block in the snapshot cache, which in turn reduces the probability of requesting blocks from the snapshot center and relieves its concurrency pressure.
In an optional implementation, step 302 can be implemented as follows: query the snapshot cache first and the first VM's disk only when the first disk block is not found in the cache; or query the first VM's disk first and the snapshot cache only when the block is not found in the disk. Considering that the first VM's disk is exclusive to the first VM while the snapshot cache is shared by all VMs, the first VM's disk may be queried first; when the first disk block is found there, the snapshot cache need not be queried, which relieves the concurrency pressure on the snapshot cache.
Further, when the first disk block is found in neither the first VM's disk nor the snapshot cache, the first disk block can be requested from the snapshot center storing the disk snapshot, and the first disk block returned by the snapshot center is stored into the first VM's disk.
In an optional implementation, before the first VM starts, the usage frequency of each disk block in the disk snapshot can be obtained according to how other VMs used those blocks during their startup, and the blocks whose usage frequency meets the set requirement are stored into the snapshot cache. In this way, the blocks in the snapshot cache can be used during startup of the first VM.
In an optional implementation, besides querying the first disk block for the first VM according to its read-disk request, the disk blocks of the disk snapshot can also be loaded into the first VM's disk during its startup in lazy-load fashion, following a disk-block loading order.
Further, following the block loading order, the snapshot cache can be queried for the block to be loaded into the first VM's disk; when the block to be loaded is not found in the snapshot cache, it is requested from the snapshot center, and the block returned by the snapshot center is stored into the first VM's disk.
Still further, before querying the snapshot cache for the block to be loaded according to the block loading order, a disk-block priority list can be read from the snapshot cache or the snapshot center. The priority list stores the order in which the first VM uses the blocks of the disk snapshot; this usage order is taken as the block loading order, so that blocks are loaded into the first VM's disk in the order in which the first VM uses them.
In an optional implementation, before the disk-block priority list is read from the snapshot cache or the snapshot center, it can be obtained as follows:
collecting statistics on how the first VM used the blocks during historical startups to obtain the first VM's usage order of the blocks;
storing the index numbers of the blocks into the disk-block priority list according to the first VM's usage order of the blocks; and
storing the disk-block priority list into the snapshot cache or the snapshot center.
The above method embodiments mainly describe the first VM reading the disk during startup. Besides reading, the first VM also writes the disk during startup and may need to write a block that is not present in its disk. In that case, the block to be written could first be loaded into the first VM's disk and then written, but this is inefficient. Instead, in this embodiment, a sparse file is created for the first VM's disk, mainly used to temporarily store the incremental data of certain blocks of the first VM's disk.
Based on this sparse file, a snapshot processing method applied during a disk-write operation is shown in FIG. 3b and includes the following steps:
304. Receive a write-disk request issued by the first VM during startup, the write-disk request being used to request a write operation on a second disk block in the disk snapshot.
305. Determine whether the first VM's disk contains the second disk block; if not, execute step 306; if so, execute step 307.
306. Write the incremental data of the write-disk request into the sparse file corresponding to the first VM's disk.
307. Write the incremental data of the write-disk request into the second disk block.
In this embodiment, when the first VM needs to write the disk, it can issue a write-disk request to the storage management device, carrying the index number of the block to be written. According to the request, the storage management device determines whether the block to be written exists in the first VM's disk. If the disk contains the block, the write is performed directly on that block; if not, the incremental data of the write-disk request is written into the sparse file corresponding to the first VM's disk. In this embodiment, when a write targets a block not present in the first VM's disk, the write can complete without waiting for the block to finish loading, which helps improve write efficiency.
Further, a bitmap file can be used to record the positions of incremental data in the sparse file. Each bit in the bitmap file corresponds to a sector of the sparse file and records that sector's usage state. If a sector holds incremental data, the corresponding bit is set valid, for example set to 1. On this basis, after the incremental data of a write-disk request is written into the sparse file corresponding to the first VM's disk, the position of the incremental data in the sparse file can be recorded in the corresponding bitmap file, that is, the bits of the corresponding sectors are set valid.
It should be noted that the snapshot cache and the sparse file described above can be used independently or in combination; the combined process is shown in FIG. 2b and is not repeated here.
It should be noted that the steps of the methods provided by the above embodiments may all be executed by the same device, or the methods may be executed by different devices. For example, steps 301 to 303 may be executed by device A; or steps 301 and 302 may be executed by device A and step 303 by device B; and so on.
In addition, some of the flows described in the above embodiments and drawings contain multiple operations that appear in a particular order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein or in parallel. Sequence numbers such as 301 and 302 are merely used to distinguish different operations and do not by themselves imply any execution order. Moreover, these flows may include more or fewer operations, which may be executed sequentially or in parallel. It should also be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, etc.; they do not imply an order, nor do they restrict "first" and "second" to different types.
FIG. 4 is a schematic structural diagram of a virtual machine snapshot processing apparatus provided by a further embodiment of the present application. As shown in FIG. 4, the apparatus includes a receiving module 401, a querying module 402, and a providing module 403.
The receiving module 401 is configured to receive a read-disk request issued by a first VM during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first VM.
The querying module 402 is configured to query the first disk block in the first VM's disk and in a snapshot cache, respectively, where the snapshot cache stores disk blocks of the disk snapshot whose usage frequency meets a set requirement.
The providing module 403 is configured to return the first disk block to the first VM when it is found in the first VM's disk or in the snapshot cache.
In an optional implementation, the providing module 403 is further configured to: when the first disk block is found in neither the first VM's disk nor the snapshot cache, request the first disk block from the snapshot center storing the disk snapshot, and store the first disk block returned by the snapshot center into the first VM's disk.
In an optional implementation, the apparatus further includes a statistics module configured to: before the first VM starts, obtain the usage frequency of each disk block in the disk snapshot according to how other VMs used those blocks during their startup, and store the blocks whose usage frequency meets the set requirement into the snapshot cache.
In an optional implementation, the querying module 402 is further configured to load the disk blocks of the disk snapshot into the first VM's disk during the first VM's startup in lazy-load fashion, following a disk-block loading order.
Further, when loading the blocks of the disk snapshot, the querying module 402 is specifically configured to:
following the block loading order, query the snapshot cache for the block to be loaded into the first VM's disk; and
when the block to be loaded is not found in the snapshot cache, request it from the snapshot center and store the block returned by the snapshot center into the first VM's disk.
Further, before querying the snapshot cache for the block to be loaded according to the block loading order, the querying module 402 is further configured to:
read a disk-block priority list from the snapshot cache or the snapshot center, the priority list storing the order in which the first VM uses the blocks of the disk snapshot; and
take the first VM's usage order of the blocks as the block loading order.
Still further, before the querying module 402 reads the priority list from the snapshot cache or the snapshot center, the statistics module is further configured to:
collect statistics on how the first VM used the blocks during historical startups to obtain the first VM's usage order of the blocks;
store the index numbers of the blocks into the priority list according to the first VM's usage order of the blocks; and
store the priority list into the snapshot cache or the snapshot center.
In an optional implementation, the receiving module 401 is further configured to:
receive a write-disk request issued by the first VM during startup, the write-disk request being used to request a write operation on a second disk block in the disk snapshot; and
when the first VM's disk does not contain the second disk block, write the incremental data of the write-disk request into the sparse file corresponding to the first VM's disk.
Further, the receiving module 401 is further configured to record, in the bitmap file corresponding to the sparse file, the position of the incremental data in the sparse file.
Correspondingly, the querying module 402 is specifically configured to: query the sparse file for the incremental data of the first disk block; and when incremental data of the first disk block is found in the sparse file, merge the first disk block with its incremental data to obtain merged block data.
Correspondingly, based on the merged block data, the providing module 403, when returning the first disk block to the first VM, is specifically configured to return the merged block data to the first VM.
The virtual machine snapshot processing apparatus provided by this embodiment can be used to execute the flows in the above method embodiments; its working principle is not repeated here, and the description of the method embodiments can be referred to for details.
In the virtual machine snapshot processing apparatus provided by the embodiments of the present application, a snapshot cache is added for VMs, storing those disk blocks, of the disk snapshot required during VM startup, whose usage frequency meets a set requirement. On this basis, when the apparatus receives a read-disk request issued by a VM during startup, it queries the requested block in the VM's disk and in the snapshot cache, respectively, and returns the block to the VM when found in either. Because the snapshot cache stores the blocks of the disk snapshot whose usage frequency meets the set requirement, the apparatus can increase the probability that a read-disk request hits the required block, reduce the probability of requesting blocks from the snapshot center, and thereby relieve the snapshot center's overall concurrency pressure.
The internal functions and structure of the virtual machine snapshot processing apparatus are described above. As shown in FIG. 5, in practice the apparatus can be implemented as an electronic device including a memory 500, a processor 501, and a communication component 502;
the communication component 502 is configured to receive a read-disk request issued by a first VM during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first VM;
the memory 500 is configured to store a program;
the processor 501, coupled to the memory 500, is configured to execute the program to:
query the first disk block in the first VM's disk and in a snapshot cache, respectively, where the snapshot cache stores disk blocks of the disk snapshot whose usage frequency meets a set requirement; and
when the first disk block is found in the first VM's disk or in the snapshot cache, return it to the first VM through the communication component 502;
the communication component 502 is further configured to return the first disk block to the first VM.
In an optional implementation, the processor 501 is further configured to:
when the first disk block is found in neither the first VM's disk nor the snapshot cache, request it from the snapshot center storing the disk snapshot through the communication component 502, and store the first disk block returned by the snapshot center into the first VM's disk through the communication component 502. Correspondingly, the communication component 502 is further configured to request the first disk block from the snapshot center storing the disk snapshot and to store the first disk block returned by the snapshot center into the first VM's disk.
In an optional implementation, before the first VM starts, the processor 501 is further configured to:
obtain the usage frequency of each disk block in the disk snapshot according to how other VMs used those blocks during their startup; and
store the blocks whose usage frequency meets the set requirement into the snapshot cache.
In an optional implementation, during startup of the first VM, the processor 501 is further configured to load the disk blocks of the disk snapshot into the first VM's disk in lazy-load fashion, following a disk-block loading order.
Further, when loading the blocks of the disk snapshot, the processor 501 is specifically configured to:
following the block loading order, query the snapshot cache for the block to be loaded into the first VM's disk; and
when the block to be loaded is not found in the snapshot cache, request it from the snapshot center through the communication component 502 and store the block returned by the snapshot center into the first VM's disk through the communication component 502. Correspondingly, the communication component 502 is further configured to request the block to be loaded from the snapshot center and to store the block returned by the snapshot center into the first VM's disk.
Further, before querying the snapshot cache for the block to be loaded according to the block loading order, the processor 501 is further configured to:
read a disk-block priority list from the snapshot cache or the snapshot center, the priority list storing the order in which the first VM uses the blocks of the disk snapshot; and
take the first VM's usage order of the blocks as the block loading order.
Still further, before reading the priority list from the snapshot cache or the snapshot center, the processor 501 is further configured to:
collect statistics on how the first VM used the blocks during historical startups to obtain the first VM's usage order of the blocks;
store the index numbers of the blocks into the priority list according to the first VM's usage order of the blocks; and
store the priority list into the snapshot cache or the snapshot center.
In an optional implementation, the communication component 502 is further configured to receive a write-disk request issued by the first VM during startup, the write-disk request being used to request a write operation on a second disk block in the disk snapshot. Correspondingly, based on the write-disk request received by the communication component 502, the processor 501 is further configured to: when the first VM's disk does not contain the second disk block, write the incremental data of the write-disk request into the sparse file corresponding to the first VM's disk.
Further, after writing the incremental data of the write-disk request into the sparse file corresponding to the first VM's disk, the processor 501 is further configured to record, in the bitmap file corresponding to the sparse file, the position of the incremental data in the sparse file.
Correspondingly, the processor 501 is further configured to: query the sparse file for the incremental data of the first disk block; when incremental data of the first disk block is found in the sparse file, merge the first disk block with its incremental data to obtain merged block data; and return the merged block data to the first VM through the communication component 502. Correspondingly, the communication component 502 is specifically configured to return the merged block data to the first VM.
Further, as shown in FIG. 5, the electronic device also includes other components such as a display 503, a power supply component 504, and an audio component 505. FIG. 5 only schematically shows some components, which does not mean the electronic device includes only the components shown in FIG. 5.
The communication component in FIG. 5 can be configured to facilitate wired or wireless communication between the device it belongs to and other devices. The device can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display in FIG. 5 may include a screen, which may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with the touch or swipe operation.
The power supply component in FIG. 5 provides power to the various components of the device it belongs to. The power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device.
The audio component in FIG. 5 is configured to output and/or input audio signals. For example, the audio component includes a microphone (MIC) configured to receive external audio signals when the device is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory or transmitted via the communication component. In some embodiments, the audio component further includes a speaker for outputting audio signals.
The electronic device provided by the embodiments of the present application is used to process virtual machine snapshots. A snapshot cache is added for VMs, storing those disk blocks, of the disk snapshot required during VM startup, whose usage frequency meets a set requirement. On this basis, when the electronic device receives a read-disk request issued by a VM during startup, it queries the requested block in the VM's disk and in the snapshot cache, respectively, and returns the block to the VM when found in either. Because the snapshot cache stores the blocks of the disk snapshot whose usage frequency meets the set requirement, the probability that a read-disk request hits the required block is increased, the probability of requesting blocks from the snapshot center is reduced, and the snapshot center's overall concurrency pressure is thereby relieved.
Correspondingly, an embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed, can implement:
receiving a read-disk request issued by a first VM during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first VM;
querying the first disk block in the first VM's disk and in a snapshot cache, respectively, where the snapshot cache stores disk blocks of the disk snapshot whose usage frequency meets a set requirement; and
when the first disk block is found in the first VM's disk or in the snapshot cache, returning the first disk block to the first VM.
When executed, the computer program can implement not only the above steps but also the other steps of the foregoing method embodiments, which are not detailed here.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above descriptions are merely embodiments of the present application and are not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (13)

  1. A virtual machine snapshot processing method, comprising:
    receiving a read-disk request issued by a first virtual machine during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first virtual machine;
    querying the first disk block in the first virtual machine's disk and in a snapshot cache, respectively, the snapshot cache storing disk blocks of the disk snapshot whose usage frequency meets a set requirement; and
    returning the first disk block to the first virtual machine when the first disk block is found in the first virtual machine's disk or in the snapshot cache.
  2. The method according to claim 1, further comprising:
    when the first disk block is found in neither the first virtual machine's disk nor the snapshot cache, requesting the first disk block from a snapshot center storing the disk snapshot; and
    storing the first disk block returned by the snapshot center into the first virtual machine's disk.
  3. The method according to claim 1, further comprising:
    before the first virtual machine starts, obtaining the usage frequency of each disk block in the disk snapshot according to how other virtual machines used the disk blocks of the disk snapshot during their startup; and
    storing the disk blocks whose usage frequency meets the set requirement into the snapshot cache.
  4. The method according to claim 1, further comprising:
    during startup of the first virtual machine, loading the disk blocks of the disk snapshot into the first virtual machine's disk in lazy-load fashion, following a disk-block loading order.
  5. The method according to claim 4, wherein loading the disk blocks of the disk snapshot into the first virtual machine's disk according to the disk-block loading order comprises:
    following the disk-block loading order, querying the snapshot cache for the disk block to be loaded into the first virtual machine's disk; and
    when the disk block to be loaded into the first virtual machine's disk is not found in the snapshot cache, requesting the disk block to be loaded from the snapshot center, and storing the disk block returned by the snapshot center into the first virtual machine's disk.
  6. The method according to claim 5, wherein before querying the snapshot cache for the disk block to be loaded into the first virtual machine's disk according to the disk-block loading order, the method further comprises:
    reading a disk-block priority list from the snapshot cache or the snapshot center, the disk-block priority list storing the order in which the first virtual machine uses the disk blocks of the disk snapshot; and
    taking the first virtual machine's usage order of the disk blocks as the disk-block loading order.
  7. The method according to claim 6, wherein before reading the disk-block priority list from the snapshot cache or the snapshot center, the method comprises:
    collecting statistics on how the first virtual machine used the disk blocks during historical startups, to obtain the first virtual machine's usage order of the disk blocks;
    storing the index numbers of the disk blocks into the disk-block priority list according to the first virtual machine's usage order of the disk blocks; and
    storing the disk-block priority list into the snapshot cache or the snapshot center.
  8. The method according to any one of claims 1 to 7, further comprising:
    receiving a write-disk request issued by the first virtual machine during startup, the write-disk request being used to request a write operation on a second disk block in the disk snapshot; and
    when the first virtual machine's disk does not contain the second disk block, writing the incremental data of the write-disk request into the sparse file corresponding to the first virtual machine's disk.
  9. The method according to claim 8, wherein after writing the incremental data of the write-disk request into the sparse file corresponding to the first virtual machine's disk, the method further comprises:
    recording, in the bitmap file corresponding to the sparse file, the position of the incremental data in the sparse file.
  10. The method according to claim 8, wherein returning the first disk block to the first virtual machine comprises:
    querying the sparse file for the incremental data of the first disk block;
    when the incremental data of the first disk block is found in the sparse file, merging the first disk block with the incremental data of the first disk block to obtain merged disk-block data; and
    returning the merged disk-block data to the first virtual machine.
  11. A virtual machine snapshot processing apparatus, configured for:
    receiving a read-disk request issued by a first virtual machine during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first virtual machine;
    querying the first disk block in the first virtual machine's disk and in a snapshot cache, respectively, the snapshot cache storing disk blocks of the disk snapshot whose usage frequency meets a set requirement; and
    returning the first disk block to the first virtual machine when the first disk block is found in the first virtual machine's disk or in the snapshot cache.
  12. An electronic device, comprising a memory, a processor, and a communication component;
    the communication component being configured to receive a read-disk request issued by a first virtual machine during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first virtual machine;
    the memory being configured to store a program;
    the processor, coupled to the memory, being configured to execute the program to:
    query the first disk block in the first virtual machine's disk and in a snapshot cache, respectively, the snapshot cache storing disk blocks of the disk snapshot whose usage frequency meets a set requirement; and
    when the first disk block is found in the first virtual machine's disk or in the snapshot cache, return the first disk block to the first virtual machine through the communication component;
    the communication component being further configured to return the first disk block to the first virtual machine.
  13. A cloud computing system, comprising a computing cluster, a storage cluster, and a snapshot center;
    the computing cluster being configured to provide computing resources for a first virtual machine, the first virtual machine running in the computing cluster;
    the storage cluster being configured to provide the first virtual machine's disk and a snapshot cache, the snapshot cache storing disk blocks, of the disk snapshot required for starting the first virtual machine, whose usage frequency meets a set requirement;
    the snapshot center being configured to store the disk snapshot;
    the storage cluster including a storage management device, the storage management device being configured to:
    receive a read-disk request issued by the first virtual machine during startup, the read-disk request being used to request a first disk block in the disk snapshot required for starting the first virtual machine;
    query the first disk block in the first virtual machine's disk and in the snapshot cache, respectively, the snapshot cache storing disk blocks of the disk snapshot whose usage frequency meets the set requirement; and
    when the first disk block is found in the first virtual machine's disk or in the snapshot cache, return the first disk block to the first virtual machine.
PCT/CN2018/113335 2017-11-08 2018-11-01 Virtual machine snapshot processing method, apparatus, and system WO2019091322A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711091731.1A CN109753340B (zh) 2017-11-08 2017-11-08 Virtual machine snapshot processing method, apparatus, and system
CN201711091731.1 2017-11-08

Publications (1)

Publication Number Publication Date
WO2019091322A1 true WO2019091322A1 (zh) 2019-05-16

Family

ID=66401985

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113335 WO2019091322A1 (zh) 2017-11-08 2018-11-01 Virtual machine snapshot processing method, apparatus, and system

Country Status (2)

Country Link
CN (1) CN109753340B (zh)
WO (1) WO2019091322A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120210068A1 (en) * 2011-02-15 2012-08-16 Fusion-Io, Inc. Systems and methods for a multi-level cache
US20130198459A1 (en) * 2012-01-27 2013-08-01 Fusion-Io, Inc. Systems and methods for a de-duplication cache
CN106406981A (zh) * 2016-09-18 2017-02-15 深圳市深信服电子科技有限公司 Method for reading and writing disk data, and virtual machine monitor

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103365741B (zh) * 2012-03-30 2016-05-04 伊姆西公司 Method and device for snapshot and recovery of virtual machine clusters
CN102662751B (zh) * 2012-03-30 2016-05-11 浪潮电子信息产业股份有限公司 Method for improving the availability of a virtual machine system based on live migration
US9798489B2 (en) * 2014-07-02 2017-10-24 Hedvig, Inc. Cloning a virtual disk in a storage platform
CN104767643A (zh) * 2015-04-09 2015-07-08 喜舟(上海)实业有限公司 Virtual-machine-based disaster recovery backup system
CN105224391B (zh) * 2015-10-12 2018-10-12 浪潮(北京)电子信息产业有限公司 Online backup method and system for virtual machines

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120210068A1 (en) * 2011-02-15 2012-08-16 Fusion-Io, Inc. Systems and methods for a multi-level cache
US20130198459A1 (en) * 2012-01-27 2013-08-01 Fusion-Io, Inc. Systems and methods for a de-duplication cache
CN106406981A (zh) * 2016-09-18 2017-02-15 深圳市深信服电子科技有限公司 一种读、写磁盘数据的方法及虚拟机监视器

Also Published As

Publication number Publication date
CN109753340B (zh) 2021-07-13
CN109753340A (zh) 2019-05-14

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18875242

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18875242

Country of ref document: EP

Kind code of ref document: A1