WO2022143717A1 - Virtual machine migration method, device, and system


Info

Publication number
WO2022143717A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
virtual machine
end device
server
source
Prior art date
Application number
PCT/CN2021/142291
Other languages
English (en)
French (fr)
Inventor
龙鹏
龚磊
黄智超
Original Assignee
Huawei Cloud Computing Technologies Co., Ltd. (华为云计算技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co., Ltd. (华为云计算技术有限公司)
Priority to EP21914451.6A (published as EP4258113A1)
Publication of WO2022143717A1
Priority to US18/343,250 (published as US20230333877A1)

Classifications

    • G06F 13/12: Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • G06F 13/28: Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access (DMA), cycle steal
    • G06F 9/4856: Task life-cycle (stopping, restarting, resuming execution), resumption being on a different machine, e.g. task migration, virtual machine migration
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 2009/4557: Distribution of virtual machine instances; migration and load balancing
    • G06F 2209/509: Offload

Definitions

  • the present invention relates to the technical field of cloud computing, and in particular, to a virtual machine migration method, device and system.
  • a virtual machine refers to a complete computer system, simulated by software, with complete hardware system functions and running in a completely isolated environment. Through virtual machine software, one or more virtual computers can be simulated on a physical computer. These virtual machines work exactly like real computers: operating systems and applications can be installed in them, and they can access network resources, and so on.
  • the traditional virtualization technology is mainly composed of computing virtualization and input/output (I/O) virtualization.
  • both the management platform and the data plane consume the computing resources of the physical server, so the physical server cannot provide all of its resources to users, resulting in a certain degree of resource waste.
  • in addition, the virtual machines in a cluster often need to be rescheduled, and a large amount of server computing resources is occupied during the virtual machine scheduling process.
  • the embodiments of the present invention disclose a virtual machine migration method, device and system.
  • virtual machine migration is realized by means of an offload card, which can reduce the resource occupation of the server during virtual machine migration, improve the efficiency and security of the migration, and reduce migration complexity and cost.
  • the present application provides a method for migrating a virtual machine, the method comprising: a first front-end device sends memory dirty page address information and device status information of the source virtual machine to a first back-end device through a first internal channel, wherein the first front-end device is deployed on the source server, the first back-end device is deployed on a first offload card inserted into the source server, and the first internal channel is provided between the first offload card and the source server;
  • the first back-end device reads the memory dirty pages from the memory of the source server according to the memory dirty page address information through the first internal channel, and sends the memory dirty pages, the memory dirty page address information, and the device status information to a second back-end device through an external channel, where the second back-end device is deployed on a second offload card inserted into the destination server.
  • in the above solution, the first back-end device in the first offload card acquires the memory dirty pages from the memory of the source server according to the memory dirty page address information sent by the first front-end device, and then sends the memory dirty pages, together with the memory dirty page address information and the device status information, to the second back-end device in the second offload card, so that the device state and memory of the destination virtual machine can be set on the destination server. The virtual machine can thus be migrated online, and the work of migrating the source virtual machine's memory dirty pages according to the dirty page address information is undertaken by the first offload card, which effectively reduces the resource occupation of the source server.
  • the second back-end device sends the device status information to the second front-end device through a second internal channel, wherein the second internal channel is provided between the second offload card and the destination server, and the second front-end device is deployed on the destination server;
  • the second front-end device sets the device state of the destination virtual machine according to the device state information;
  • the second back-end device writes the memory dirty pages into the memory of the destination server according to the memory dirty page address information through the second internal channel.
  • in the above solution, after receiving the memory dirty page address information and the memory dirty pages, the second back-end device in the second offload card directly writes the memory dirty pages into the memory of the destination server according to the memory dirty page address information; the destination server only needs to set the device state of the destination virtual machine according to the device status information, which reduces the resource occupation of the destination server and improves its resource utilization.
  • the external channel includes a first data link and a second data link, where the first data link is used to transmit the device status information, and the second data link is used to transmit the memory dirty page and the address information of the memory dirty page.
  • different data types are transmitted over different data links, so that the first back-end device or the second back-end device can distinguish the data without parsing the content of the transmitted data and can process it further by direct memory access (DMA), which effectively improves the migration efficiency of the virtual machine, reduces device complexity, and improves device reliability.
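  • purely as an illustrative sketch (not the patent's implementation), the following C program shows how a sender could keep device-state data and memory-page data on two separate TCP links, so that the receiver can tell the data types apart from the connection alone; the peer address, port numbers, and payloads are assumptions.

```c
/* Hypothetical sketch: an external channel built from two TCP links, one per
 * data type, so the receiver never has to parse message contents. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static int tcp_connect(const char *ip, uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &peer.sin_addr);
    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        close(fd);
        return -1;
    }
    return fd;
}

int main(void) {
    /* Assumed peer offload-card address and ports. */
    int state_link = tcp_connect("192.0.2.10", 7001); /* first link: device state */
    int page_link  = tcp_connect("192.0.2.10", 7002); /* second link: dirty pages */
    if (state_link < 0 || page_link < 0)
        return 1;

    const char cpu_state[] = "cpu-state-blob"; /* placeholder device state */
    const char dirty_page[4096] = { 0 };       /* placeholder 4 KiB page */
    send(state_link, cpu_state, sizeof(cpu_state), 0);
    send(page_link, dirty_page, sizeof(dirty_page), 0);

    close(state_link);
    close(page_link);
    return 0;
}
```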
  • the first back-end device compresses and encrypts the memory dirty pages and device status information of the source virtual machine; the second back-end device decompresses and decrypts the memory dirty pages and device status information of the source virtual machine.
  • optimization technologies such as data compression and data encryption can be flexibly added, which can further reduce the occupation of server computing resources and improve the scalability of virtual machine migration.
  • the first data link and the second data link are implemented through a transmission control protocol (TCP) link or a user datagram protocol (UDP) link.
  • data transmission between the first offload card and the second offload card can be performed based on multiple network protocols, and the first offload card can flexibly select a TCP link or a UDP link to transmit migration data.
  • the first internal channel and the second internal channel are implemented through a VSOCK link.
  • data transmission between the first offload card and the source server, and between the second offload card and the destination server, can be carried out over a peripheral component interconnect express (PCIe) interface, for example via a VSOCK link, to improve data transmission efficiency.
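  • as a hedged illustration of such an internal channel, the C sketch below sets up a Linux VSOCK endpoint of the kind a front-end device might listen on; the port number and the payload are hypothetical.

```c
/* Hypothetical sketch: a front-end device listening on the internal
 * (VSOCK-over-PCIe) channel and sending one placeholder message. */
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_VSOCK, SOCK_STREAM, 0);
    struct sockaddr_vm addr;
    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_ANY; /* accept a peer with any CID */
    addr.svm_port = 5000;          /* assumed internal-channel port */

    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    /* Wait for the back-end device on the offload card to connect. */
    int chan = accept(srv, NULL, NULL);
    if (chan >= 0) {
        const char msg[] = "dirty-page-addr-info"; /* placeholder payload */
        send(chan, msg, sizeof(msg), 0);
        close(chan);
    }
    close(srv);
    return 0;
}
```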
  • the present application provides a virtual machine migration system, the virtual machine online migration system comprising: a source server, a first offload card, a destination server, and a second offload card, wherein:
  • the first front-end device sends the memory dirty page address information and device status information of the source virtual machine to the first back-end device through the first internal channel, wherein the first front-end device is deployed in the source server, the first back-end device is deployed in the first offload card inserted into the source server, and the first internal channel is provided between the first offload card and the source server;
  • the first back-end device reads the memory dirty pages from the memory of the source server according to the memory dirty page address information through the first internal channel, and sends the memory dirty pages, the memory dirty page address information, and the device status information to the second back-end device through an external channel, where the second back-end device is deployed in the second offload card inserted into the destination server.
  • the second back-end device sends the device status information to the second front-end device through a second internal channel, wherein the second internal channel is provided between the second offload card and the destination server, and the second front-end device is deployed in the destination server; the second front-end device sets the device state of the destination virtual machine according to the device status information; the second back-end device writes the memory dirty pages into the memory of the destination server according to the memory dirty page address information through the second internal channel.
  • the external channel includes a first data link and a second data link, wherein the first data link is used to transmit the device status information, and the second data link is used to transmit the memory dirty page and the address information of the memory dirty page.
  • the first back-end apparatus compresses and encrypts the memory dirty pages and device status information of the source virtual machine; the second back-end apparatus decompresses and decrypts the memory dirty pages and device status information of the source virtual machine.
  • the first data link and the second data link are implemented through a transmission control protocol (TCP) link or a user datagram protocol (UDP) link.
  • the first internal channel and the second internal channel are implemented through a VSOCK link.
  • the present application provides an offload card, comprising: a receiving module configured to receive, through a first internal channel, the memory dirty page address information and device status information of a source virtual machine sent by a first front-end device, wherein the first front-end device is deployed in the source server; a processing module configured to read memory dirty pages from the memory of the source server according to the memory dirty page address information through the first internal channel; and a sending module configured to send the memory dirty pages, the memory dirty page address information, and the device status information to a second back-end device through an external channel, where the second back-end device is deployed in an offload card inserted into the destination server.
  • the present application provides an offload card, the offload card being inserted into the source server, with a first internal channel provided between the offload card and the source server. The offload card includes a processor and a memory; the processor executes a program in the memory so as to perform the following method: receiving, through the first internal channel, the memory dirty page address information and device status information of the source virtual machine sent by the first front-end device, wherein the first front-end device is deployed in the source server; reading memory dirty pages from the memory of the source server according to the memory dirty page address information through the first internal channel; and sending the memory dirty pages, the memory dirty page address information, and the device status information, through an external channel, to another offload card inserted into the destination server.
  • FIG. 1 is a schematic diagram of a virtualization technology architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a virtualization technology architecture based on hardware offloading provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a virtual machine online migration process provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of virtual machine migration based on a TCP connection provided by an embodiment of the present application
  • FIG. 5 is a schematic diagram of virtual machine migration based on RDMA provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a virtual machine online migration system provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a server system provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a method for establishing a network connection provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a connection relationship of various devices provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of online migration of a virtual machine provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an offload card provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of another offload card provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • the cloud management platform provides an access interface that lists the cloud services provided by the public cloud. Tenants can access the cloud management platform through browsers or other clients and pay on the platform for the corresponding cloud services. After a cloud service is purchased, the cloud management platform grants the tenant permission to access it, so that the tenant can remotely access the cloud service and perform the corresponding configuration.
  • Public cloud usually refers to cloud services provided by cloud providers for tenants. Tenants can access the cloud management platform through the Internet, and purchase and use cloud services provided by public clouds on the cloud management platform.
  • the core attribute of public cloud is shared resource services.
  • the public cloud can be realized through the data center of the public cloud service provider.
  • the data center is equipped with multiple physical servers, and the multiple physical servers provide computing resources, network resources and storage resources required for cloud services.
  • a virtual machine refers to a complete computer system, simulated by software, with complete hardware system functions and running in a completely isolated environment. All work that can be done on a physical computer can be done in a virtual machine. When a virtual machine is created on a computer, part of the physical machine's hard disk and memory capacity is used as the virtual machine's hard disk and memory capacity. Each virtual machine has an independent hard disk and operating system, and can be operated just like a physical machine.
  • Quick Emulator (QEMU) is an open-source emulator and virtual machine monitor (VMM).
  • QEMU mainly provides two functions for users: one is as a user-mode emulator, using a dynamic code translation mechanism to execute code built for an architecture different from the host's; the other is as a virtual machine monitor, emulating a whole system and, together with other VMMs, using the virtualization support provided by hardware to create virtual machines with performance close to that of the host.
  • VSOCK is a protocol type that provides a network socket programming interface; it provides an abstraction of the transmission control protocol (TCP) / internet protocol (IP) stack and exposes a set of interfaces through which the functions of the TCP/IP protocol suite can be used uniformly and conveniently.
  • direct memory access (DMA) is a capability that allows devices on a computer motherboard to send data directly to memory, rather than copying the data through the central processing unit (CPU) as in traditional memory access. Moving data this way avoids the participation of the operating system and the CPU and greatly reduces CPU overhead.
  • Dirty memory pages refer to memory pages in the source virtual machine that need to be synchronized to the destination virtual machine to ensure memory consistency between the source virtual machine and the destination virtual machine.
  • Online migration is also referred to as live migration or hot migration.
  • it refers to the situation in which, in the data center of the public cloud service provider, a source server requires a firmware upgrade, restart, or power-off maintenance, or is in some other situation that affects the operation of its applications. The cloud management platform then needs to select another server in the data center as the destination server, with the same specifications as the source server, copy the memory pages of the virtual machine on the source server to the destination server, and mount the source server's network disk on the destination server, so that the destination server can run the applications of the source server.
  • the memory pages of the source virtual machine are migrated to the destination server in real time.
  • the migration process has only a very short downtime.
  • during migration, the virtual machine keeps running on the source server until the memory pages of the destination virtual machine in the destination server are completely consistent with those of the source virtual machine (or very nearly so, for example more than 99% of the memory pages are the same).
  • after a very short switchover (for example, within seconds), the cloud management platform transfers control of the virtual machine to the destination virtual machine, and the destination virtual machine continues to run on the destination server.
  • because the switchover time is very short, the tenant does not perceive that the virtual machine has been switched; the migration process is transparent to the tenant. Online migration is therefore suitable for scenarios with high business continuity requirements.
  • a virtual machine manager (VMM) is implemented through the operating system kernel, and the virtual machine manager can manage and maintain the virtual machines created by the operating system.
  • the server 100 includes physical hardware resources 110 .
  • the physical hardware resources 110 specifically include computing resources 1110 , storage resources 1120 and network resources 1130 , and a management page client 1210 is deployed in the virtual machine manager 120 .
  • the virtual machine manager 120 virtualizes the computing resources 1110 through the computing virtualization program 1220 and provides them to the virtual machines 130, 140, and 150 created by the server 100; the virtual machine manager 120 virtualizes the storage resources 1120 and network resources 1130 through the IO virtualization program 1230 and provides them to the virtual machines 130, 140, and 150.
  • by purchasing virtual machines of different specifications, tenants obtain different amounts of computing resources, network resources, and storage resources.
  • in addition to the virtual machines, the server also deploys management control plane programs, such as the management plane client, to connect and communicate with the cloud management platform, receive the virtual machine management commands sent by the cloud management platform, and report the state of the virtual machines back to the platform. The interaction between the management plane client and the data plane occupies the computing resources of the server, so the server cannot provide all of its resources to the tenant, resulting in a certain degree of waste.
  • the server 210 and the offload card 220 are connected through a peripheral component interconnect express (PCIe) bus. The server 210 virtualizes its computing resources through the computing virtualization program 21210 in the virtual machine manager 2120 and provides them to the virtual machines 2130, 2140, and 2150; through the IO virtualization program 21220 in the virtual machine manager 2120, the server 210 also virtualizes the storage resources 2220 and network resources 2230 on the offload card 220 and provides them to the same virtual machines.
  • tenants purchase virtual machines of different specifications to use the computing resources of the server and the storage and network resources of the offload card. The management plane client 2210 is also deployed on the offload card 220 and is used to manage and maintain the cloud services provided by the server.
  • cloud service providers usually need to schedule the virtual machines in the data center (i.e., server clusters), which requires online migration of virtual machines: migrating a tenant's virtual machine from its current physical server to another physical server, where it continues to work.
  • the virtual machine mainly includes three types of elements: CPU, memory, and I/O devices.
  • in each round of the iterative copy, the memory dirty pages generated in that round are transmitted to the destination server. If the iteration converges, the source server suspends the virtual machine, transmits the last round of dirty pages to the destination server, also transmits the CPU and device state to the destination server, and finally transmits the migration end flag to the destination server;
  • the destination server continues to receive the data sent by the source server and, after each piece of data is received, judges whether it contains the end mark. If it does not, the data is processed according to its type: if memory dirty pages are received, they are copied to the specified locations in the virtual machine's memory; if CPU and device status information is received, the CPU and device state of the virtual machine are set; if the end flag is received, the virtual machine is resumed immediately.
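  • the C sketch below illustrates this receive-and-dispatch loop; the one-byte message types and the scripted input are assumptions made for the example, not the patent's wire format.

```c
/* Hypothetical sketch of the destination-side dispatch loop: process each
 * message by type until the end flag arrives, then resume the VM. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

enum msg_type { MSG_DIRTY_PAGE = 1, MSG_DEVICE_STATE = 2, MSG_END_FLAG = 3 };

struct msg { uint8_t type; const char *payload; };

/* Scripted stand-in for data arriving from the source server. */
static const struct msg script[] = {
    { MSG_DIRTY_PAGE,   "page@0x1000" },
    { MSG_DEVICE_STATE, "cpu-state"   },
    { MSG_END_FLAG,     ""            },
};
static size_t next_msg = 0;

static int recv_message(uint8_t *type, const char **payload) {
    if (next_msg >= sizeof(script) / sizeof(script[0]))
        return -1; /* channel closed */
    *type = script[next_msg].type;
    *payload = script[next_msg].payload;
    next_msg++;
    return 0;
}

int main(void) {
    uint8_t type;
    const char *payload;

    while (recv_message(&type, &payload) == 0) {
        switch (type) {
        case MSG_DIRTY_PAGE:   /* copy the page to its guest address */
            printf("copy dirty page: %s\n", payload);
            break;
        case MSG_DEVICE_STATE: /* set the VM's CPU and device state */
            printf("set device state: %s\n", payload);
            break;
        case MSG_END_FLAG:     /* end mark: resume the VM at once */
            printf("end flag received: resume destination VM\n");
            return 0;
        }
    }
    return 0;
}
```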
  • the service interruption time is related to the time between the source server suspending the virtual machine and the destination server resuming the virtual machine.
  • QEMU uses a multi-round iterative memory dirty page transmission algorithm to reduce the data volume of the last round of memory dirty pages, thereby reducing the interruption time.
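  • a minimal sketch of such a multi-round iterative algorithm is shown below, assuming an illustrative link bandwidth, downtime budget, and re-dirtying rate; the convergence test (remaining dirty data versus what the link can move within the downtime budget) mirrors the suspension criterion described later in step S108.

```c
/* Hypothetical sketch of multi-round iterative pre-copy: keep sending dirty
 * pages while the VM runs, and suspend the VM only once the remaining dirty
 * data fits in one short downtime window. All numbers are simulated. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Assumed: 10 Gbit/s link and a 100 ms acceptable downtime. */
    const uint64_t bytes_per_second = 10ull * 1000 * 1000 * 1000 / 8;
    const uint64_t downtime_budget  = bytes_per_second / 10;

    uint64_t dirty_bytes = 8ull << 30; /* simulated: 8 GiB dirty initially */
    int round = 0;

    while (dirty_bytes >= downtime_budget) {
        printf("round %d: send %llu dirty bytes while the VM keeps running\n",
               round, (unsigned long long)dirty_bytes);
        dirty_bytes /= 8; /* simulated: guest re-dirties 1/8 of what was sent */
        round++;
    }

    printf("converged: suspend VM, send last %llu bytes, device state, end flag\n",
           (unsigned long long)dirty_bytes);
    return 0;
}
```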
  • the source server 410 includes a virtual machine 4110 and a network interface controller (NIC) 4120, and an operating system 41110 is deployed in the virtual machine 4110. The destination server 420 is similar in structure to the source server 410. A TCP connection is established between the source server 410 and the destination server 420, and the virtual machine 4110 in the source server 410 runs the live-migration thread to complete the virtual machine online migration process described in FIG. 3; all data involved in the migration process is sent to the destination server 420 through the TCP connection.
  • the live-migration thread is executed by the source server, which results in a high resource occupancy rate on the source server: during virtual machine migration it occupies almost a full CPU.
  • moreover, the live-migration thread runs for a long time, occupying server resources throughout, which reduces the utilization rate of server resources and is not conducive to applying other data optimization technologies, such as data compression and data encryption, since these would further consume the server's computing resources and raise its compute occupancy even higher.
  • the source server 510 includes a virtual machine 5110 and an RDMA communication unit 5120.
  • An operating system 51110 is deployed in the virtual machine 5110.
  • the structure of the destination server 520 is similar to that of the source server 510.
  • the source server 510 and the destination server 520 establish an RDMA connection through their RDMA communication units and transfer data via the RDMA protocol.
  • the virtual machine 5110 runs the live-migration thread to complete the virtual machine online migration process described in FIG. 3, and the data to be migrated is transmitted through the RDMA connection.
  • although the RDMA connection improves data transmission efficiency compared with a TCP connection, the live-migration thread is likewise executed by the source server; it still consumes 0.3-0.5 of a CPU's computing resources, resulting in a high resource occupancy rate.
  • in addition, because the computer's memory is accessed and transmitted directly by the RDMA hardware, other data optimization technologies (such as data compression and data encryption) cannot be added at the software level.
  • hardware devices that support RDMA technology must also be installed in the server, which increases deployment and maintenance costs.
  • the present application provides a virtual machine online migration method that uses the resources on the offload card to complete the processing of the virtual machine's memory dirty pages, thereby reducing the consumption of server computing resources, reducing the occupancy rate of server resources, and improving the efficiency and security of virtual machine migration.
  • the technical solutions of the embodiments of the present application can be applied to any system that needs to perform online migration of virtual machines, and are especially suitable for scenarios in which the server has no network protocol stack and is connected to other servers through an offload card.
  • FIG. 6 is a schematic structural diagram of an online virtual machine migration system provided by the present application.
  • the online migration system of the present application includes: a cloud management platform 610 and multiple server systems, which may include a server system 620 and a server system 630. The server system 620 may include a server 6210 and an offload card 6220, where the server 6210 runs a VMM 62110, a virtual machine 62120, and a virtual machine 62130.
  • the structure of the server system 630 is similar to that of the server system 620.
  • the cloud management platform 610 can connect to each offload card through the network; an offload card is connected to its server through a preset interface, for example a PCIe interface, and the different offload cards and servers communicate with one another through the network. The online migration system can be deployed in the data center of a public cloud service provider, and the cloud management platform 610 is used to manage the multiple server systems.
  • FIG. 7 is a schematic structural diagram of a server system provided by the present application.
  • the server system includes a server 710 and an offload card 720.
  • the server 710 may include a hardware layer and a software layer. The software layer includes a guest operating system, a VMM, and the like; the hardware layer includes one or more processors (e.g., a CPU, a graphics processing unit (GPU), or a neural-network processing unit (NPU)), memory, chips (such as a root complex (RC) chip), and other hardware.
  • the offload card 720 can be an application-specific integrated circuit (ASIC) board or a field-programmable gate array (FPGA) board, among others. It also includes a hardware layer and a software layer; the hardware layer includes hardware such as one or more processors, chips, and network cards, where the capability of the offload card's processor may be weaker than that of the server's processor.
  • the server 710 runs a VMM 7110, a virtual machine 7120, and a first front-end device 7130.
  • the first front-end device 7130 may be deployed inside the virtual machine 7120 or outside the virtual machine 7120.
  • this application does not limit this. If the server system is the source server system, the first front-end device 7130 is responsible for controlling the virtual machine migration process, which mainly includes tracking the dirty pages of the virtual machine's memory, saving device status information (such as CPU state), and reporting virtual machine migration events.
  • the first front-end device 7130 is not responsible for processing and transmitting the dirty pages of the virtual machine's memory; it only notifies the first back-end device 7210, through the internal channel (for example, a PCIe interface), of the dirty pages that need to be processed and transmitted.
  • a first back-end device 7210 runs in the offload card 720 and is responsible for data processing and transmission during virtual machine migration: the memory dirty pages are acquired by DMA, optimized together with the device status information passed in by the first front-end device 7130, and then sent to the destination server.
  • if the server system is the destination server system, the first front-end device 7130 sets the device state and reports virtual machine migration events according to the received device status information, but no longer receives the dirty pages of the virtual machine's memory; the first back-end device 7210 receives the address information of the virtual machine's memory dirty pages together with the dirty pages themselves, writes the pages to the corresponding locations in the virtual machine's memory through DMA, and also receives the device status information and the like.
  • virtual machine migration must be assisted by the first offload card of the source server and the second offload card of the destination server, and a network connection must be established between the first offload card and the second offload card before data can be exchanged and transmitted. Therefore, before the online migration of a virtual machine, the network connection topology between the devices needs to be established first.
  • FIG. 8 is a flowchart of a method for establishing a network connection provided by the present application. As shown in Figure 8, the method includes:
  • S801 Start the virtual machine in the destination server, and create a second front-end device.
  • after the destination server is powered on, it starts the virtual machine running on it and then creates the second front-end device. Once created, the second front-end device runs as the server of the internal channel and waits for the back-end device in the same environment to connect.
  • the above-mentioned internal channel may be a PCIe-based transmission link, such as a VSOCK link.
  • the second offload card is inserted into the destination server. After the destination server creates the second front-end device, the second offload card starts the second back-end device. Upon starting, the second back-end device first establishes a connection, as the client of the internal channel, with the internal channel server (i.e., the second front-end device). After this connection is established, the second back-end device runs as the server of the external channel, waiting for the client of the external channel to connect.
  • the above-mentioned external channel may be a transmission link based on various network transmission protocols, such as a transmission control protocol (TCP) link or a user datagram protocol (UDP) link.
  • the first back-end device first establishes a connection, as the client of the external channel, with the external channel server (i.e., the second back-end device). After this connection is established, the first back-end device runs as the server of the internal channel, waiting for the client of the internal channel (i.e., the first front-end device in the source server) to connect.
  • the source server creates a first front-end device.
  • the first front-end device establishes a connection, as the client of the internal channel, with the first back-end device in the first offload card.
  • after the connection is established, the first front-end device and the first back-end device can transfer data through the internal channel.
  • after the first front-end device in the source server has established a connection with the first back-end device in the first offload card, and the second front-end device in the destination server has established a connection with the second back-end device in the second offload card, data can be transmitted through the internal channels; after the first back-end device in the first offload card has established a connection with the second back-end device in the second offload card, data can be transmitted through the external channel (see FIG. 9). Once every device between the source end and the destination end has completed connection establishment, the data to be migrated can be smoothly moved from the source server to the destination server during the online migration of the virtual machine.
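  • as an illustration of this start-up order, the C sketch below plays the role of the second back-end device: it first connects as the client of the internal (VSOCK) channel and then listens as the server of the external (TCP) channel; the CID and port values are assumptions.

```c
/* Hypothetical sketch of the second back-end device's start-up order:
 * internal channel first (as client), external channel second (as server). */
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Step 1: internal channel, as client, toward the second front-end. */
    int internal = socket(AF_VSOCK, SOCK_STREAM, 0);
    struct sockaddr_vm host = { 0 };
    host.svm_family = AF_VSOCK;
    host.svm_cid = VMADDR_CID_HOST; /* the destination server */
    host.svm_port = 5000;           /* assumed internal-channel port */
    if (connect(internal, (struct sockaddr *)&host, sizeof(host)) < 0) {
        perror("internal channel");
        return 1;
    }

    /* Step 2: external channel, as server, awaiting the first back-end. */
    int external = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in any = { .sin_family = AF_INET,
                               .sin_port = htons(7001), /* assumed port */
                               .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(external, (struct sockaddr *)&any, sizeof(any));
    listen(external, 1);
    int peer = accept(external, NULL, NULL); /* first offload card connects */

    /* Migration data now flows: external channel in, internal channel out. */
    close(peer);
    close(external);
    close(internal);
    return 0;
}
```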
  • FIG. 10 is a schematic diagram of online migration of a virtual machine provided by the present application.
  • the source offload card can be mounted with a network disk and provide the network disk to the source server for use.
  • the tenant can store the tenant's data in the network disk.
  • network disks can also be cloud services. Tenants can purchase network disks on the cloud management platform and mount them to the source server.
  • the migration method of the embodiment of the present application includes the following steps:
  • S101 The cloud management platform sends a migration command to the source server and the destination server respectively.
  • the migration command is used to instruct the source server to migrate the virtual machine to be migrated online to the destination server, and the data to be migrated includes memory dirty page address information, memory dirty pages, and device status information of the virtual machine.
  • the migration command is issued when the migration conditions are met; these conditions can be that the source server needs a firmware upgrade, restart, or power-off maintenance, or other conditions that affect the normal operation of the source server. The cloud management platform learns of these conditions in advance and, after selecting a suitable destination server in the data center accordingly, sends the migration command to the source server and the destination server.
  • S102 The source server sends the full amount of memory pages to the destination server through the first offload card and the second offload card.
  • the first front-end device in the source server first sends the address information of the full set of memory pages of the virtual machine to be migrated to the first back-end device in the first offload card through the internal channel. After receiving the address information, the first back-end device obtains the full memory pages by DMA and sends the full memory pages and their address information to the second back-end device in the second offload card through an external channel.
  • the second back-end device in the second offload card receives the full memory pages and their address information, and directly writes the full memory pages into the memory of the destination virtual machine through DMA according to the received address information; the second front-end device can optionally perform checks, for example checking whether the addresses are legal.
  • the second back-end device sets the memory of the target virtual machine according to the full amount of memory pages, so that the memory of the target virtual machine is consistent with the memory of the virtual machine to be migrated.
  • the migration of memory pages is realized.
  • the network resources and storage resources of the target virtual machine are also the same as those of the virtual machine to be migrated.
  • during this process the tenant can still access the virtual machine to be migrated on the source server, and the operating system of the source server continues to perform write operations on the memory of the virtual machine to be migrated, thereby generating memory dirty pages; at the same time, the first offload card may also perform DMA write operations on that memory, likewise generating memory dirty pages.
  • the first offload card must obtain the dirty memory pages generated in the above two situations and send them to the second offload card.
  • the second offload card updates the full memory copy according to these dirty pages, ensuring that the memory dirty pages generated by the virtual machine to be migrated before the migration of network and storage resources completes are synchronized to the target virtual machine.
  • S103 The source server sends the memory dirty page address information and the device status information to the first offload card.
  • the first front-end device in the source server enables the dirty page tracking function to track the dirty pages generated by the operating system in the memory of the source virtual machine, thereby generating the memory dirty page location information of the source virtual machine's memory.
  • the operating system generates dirty pages in the memory of the source virtual machine.
  • the processor in the source server writes data to the memory of the source virtual machine when the operating system is running, which involves modifying the data in the memory page.
  • the first front-end device can record which memory pages have been modified in this case.
  • the location information of the dirty pages in the memory may be a memory dirty page bitmap: if data has been written to a memory page, its bitmap value is 1; if no data has been written to the memory page, its bitmap value is 0. The memory dirty page bitmap records the memory page numbers and records 0 or 1 for each memory page number.
  • the location information of the dirty pages in the memory can also be implemented in other ways. According to the location information of the dirty pages in the memory, it can be known which memory page in the source virtual machine is modified.
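  • a minimal C sketch of such a bitmap is shown below, assuming 4 KiB pages and an illustrative guest memory size.

```c
/* Minimal sketch of a memory dirty page bitmap: one bit per guest page,
 * 1 = data was written to the page (dirty), 0 = clean. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SHIFT 12   /* assumed 4 KiB pages */
#define NUM_PAGES  1024 /* illustrative guest size: 4 MiB */

static uint8_t bitmap[NUM_PAGES / 8];

static void mark_dirty(uint64_t guest_addr) {
    uint64_t page = guest_addr >> PAGE_SHIFT;
    bitmap[page / 8] |= (uint8_t)(1u << (page % 8));
}

static int is_dirty(uint64_t page) {
    return (bitmap[page / 8] >> (page % 8)) & 1u;
}

int main(void) {
    memset(bitmap, 0, sizeof(bitmap));
    mark_dirty(0x3000); /* the OS wrote to page 3 */
    mark_dirty(0x5000); /* a DMA write hit page 5 */

    for (uint64_t p = 0; p < NUM_PAGES; p++)
        if (is_dirty(p))
            printf("page %llu is dirty\n", (unsigned long long)p);
    return 0;
}
```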
  • the first front-end device also records and saves the device status information of the source virtual machine. After completing the dirty page tracking and the recording of device status information, the first front-end device sends the memory dirty page address information and the device status information of the source virtual machine to the first back-end device in the first offload card through the internal channel.
  • when the first front-end device sends data to the first back-end device, it can select different links according to the data type: for data related to the memory of the source virtual machine (such as the memory dirty page address information) it selects one link, and for data unrelated to the memory of the source virtual machine (such as the device status information) it selects another link.
  • S104 The first offload card acquires the memory dirty pages from the source server according to the memory dirty page address information.
  • after the first back-end device in the first offload card receives the memory dirty page address information, it obtains the memory dirty pages generated by the operating system from the memory of the source server through DMA transfer.
  • S105 The first offload card sends the memory dirty page address information, the memory dirty pages, and the device status information to the second offload card.
  • the first back-end device in the first offload card sends the received memory dirty page address information, memory dirty pages, and device status information to the second back-end device in the second offload card through an external channel.
  • likewise, the first back-end device can select different transmission links: for data related to the memory of the source virtual machine it selects one link, and for data unrelated to the memory of the source virtual machine it selects another link.
  • in addition, the first back-end device may further optimize the data according to actual needs, for example by compressing it, encrypting it, or applying zero-page optimization, which can improve data transmission performance during the online migration of the virtual machine, improve the efficiency and security of virtual machine migration, reduce resource consumption, and save migration costs.
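  • as one possible realization of the compression step (the patent does not mandate any particular library), the following C sketch compresses a dirty page with zlib before it would be sent over the external channel; build with -lz.

```c
/* Hypothetical sketch: zlib compression of one dirty page on the offload
 * card; the peer would call uncompress() before its DMA write. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void) {
    unsigned char page[4096];
    memset(page, 0xAB, sizeof(page)); /* stand-in dirty page content */

    unsigned char out[8192];
    uLongf out_len = sizeof(out);
    int rc = compress2(out, &out_len, page, sizeof(page), Z_BEST_SPEED);
    if (rc != Z_OK) {
        fprintf(stderr, "compress2 failed: %d\n", rc);
        return 1;
    }
    printf("page: %zu bytes -> %lu bytes on the wire\n",
           sizeof(page), (unsigned long)out_len);
    return 0;
}
```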
  • S106 The second offload card writes the memory dirty pages into the memory of the destination server according to the memory dirty page address information.
  • after the second back-end device in the second offload card receives the data, the data is first decompressed and decrypted if necessary, and then, according to the received memory dirty page address information, the memory dirty pages are written directly into the memory of the destination server through DMA.
  • S107 The second offload card sends the memory dirty page address information and the device status information to the destination server.
  • the second back-end device in the second offload card sends the memory dirty page address information and device status information to the second front-end device in the destination server through the internal channel. After receiving the data, the second front-end device parses its content and performs the corresponding processing according to its meaning, for example setting the device state of the target virtual machine according to the device status information, or checking whether the addresses are legal according to the memory dirty page address information.
  • S108 The source server determines whether the criterion for suspending the source virtual machine has been reached. If the suspension criterion is not met, the process returns to step S103; if it is met, step S109 is executed: the migration process ends and the cloud management platform is notified that the migration is complete.
  • specifically, the first front-end device in the source server determines whether the data volume of the memory dirty pages generated by the operating system in the source virtual machine is less than what the current network bandwidth can carry. If the dirty page data volume is greater than or equal to that capacity, the source virtual machine does not meet the suspension criterion: after the first back-end device obtains the memory dirty pages already generated by the operating system in the source virtual machine, the operating system keeps generating new ones, so the first back-end device must repeatedly acquire the new memory dirty pages generated by the operating system and send them to the second back-end device, until the data volume of the newly generated memory dirty pages is less than what the current network bandwidth can carry. At that point the source virtual machine meets the suspension criterion and is suspended, and the cloud management platform is notified that the migration is complete.
  • S110 The target server notifies the cloud management platform that the target virtual machine is ready.
  • after completing the device state setting and memory setting of the target virtual machine, the second front-end device of the destination server notifies the cloud management platform that the target virtual machine is ready.
  • the tenant remotely logs in to the virtual machine according to the IP address of the source virtual machine.
  • because the switching process is very short and can be kept within seconds, the tenant is generally not aware of it; the above migration process is therefore invisible to the tenant, and the tenant experience is guaranteed even while the virtual machine is being migrated.
  • the embodiment of the present application can thus migrate virtual machines without the tenant perceiving it, using the computing resources of the offload card to migrate the dirty pages of the virtual machine's memory during the migration process, without consuming the computing resources of the server. This effectively reduces resource occupation on the server, improves server resource utilization and migration efficiency, and ensures migration security.
  • FIG. 11 is a schematic structural diagram of an offload card provided by an embodiment of the present application.
  • the offload card includes: a receiving module 10, a processing module 11, and a sending module 12, wherein:
  • the receiving module 10 is configured to receive memory dirty page address information and device status information of the source virtual machine sent by the first front-end device through the first internal channel, wherein the first front-end device is arranged in the source server;
  • the processing module 11 is configured to read memory dirty pages from the memory of the source server according to the memory dirty page address information through the first internal channel;
  • the sending module 12 is configured to send the memory dirty pages, the memory dirty page address information, and the device status information to the second back-end device through an external channel, where the second back-end device is deployed in the offload card inserted into the destination server.
  • it should be noted that each module in the offload card may perform the steps described in FIG. 8 to FIG. 10; please refer to FIG. 8 to FIG. 10 and the related descriptions, which are not repeated here.
  • An embodiment of the present application provides a server system, where the server system includes a server and an offload card, and the offload card can be inserted into the server.
  • the server includes one or more processors 20, a communication interface 21, and a memory 22, where the processor 20, the communication interface 21, and the memory 22 can be connected through a bus 23.
  • the bus may be a PCIe bus or another high-speed bus.
  • the processor 20 includes one or more general-purpose processors, where a general-purpose processor can be any type of device capable of processing electronic instructions, including a central processing unit (CPU), a microprocessor, a microcontroller, a host processor, a controller, an application-specific integrated circuit (ASIC), and the like.
  • the processor 20 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 22, which enable the server to provide a wide variety of services.
  • the processor 20 can execute programs or process data to perform at least a portion of the methods discussed herein.
  • the communication interface 21 may be a wired interface (eg, an Ethernet interface) for communicating with the client.
  • the communication interface 21 can adopt a protocol suite on top of TCP/IP, such as the RAAS protocol, the remote function call (RFC) protocol, the simple object access protocol (SOAP), the simple network management protocol (SNMP), the common object request broker architecture (CORBA) protocol, distributed protocols, and so on.
  • the memory 22 may include volatile memory, such as random access memory (RAM); it may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the above types of memory.
  • the memory can be used to store the guest operating system as well as the VMM.
  • the above server can be used to perform the steps performed by the source server or the destination server shown in FIG. 8 to FIG. 10; please refer to FIG. 8 to FIG. 10 and the related descriptions.
  • the offload card includes one or more processors 30, a communication interface 31, and a memory 32.
  • the processor 30, the communication interface 31, and the memory 32 may be connected through a bus 34.
• the processor 30 includes one or more general-purpose processors, where a general-purpose processor may be any type of device capable of processing electronic instructions, including a central processing unit (Central Processing Unit, CPU), a microprocessor, a microcontroller, a main processor, a controller, an ASIC (Application Specific Integrated Circuit), and so on.
• the processor 30 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 32, which enable the client to provide a wide variety of services.
• the processor 30 can execute programs or process data to perform at least part of the methods discussed herein.
• the communication interface 31 may be a wired interface (for example, an Ethernet interface) for communicating with a server or a user.
• the communication interface 31 can adopt a protocol suite on top of TCP/IP, such as the RAAS protocol, the Remote Function Call (RFC) protocol, the Simple Object Access Protocol (SOAP), the Simple Network Management Protocol (SNMP), the Common Object Request Broker Architecture (CORBA) protocol, distributed protocols, and so on.
• the memory 32 may include volatile memory (Volatile Memory), such as random access memory (Random Access Memory, RAM); the memory may also include non-volatile memory (Non-Volatile Memory), such as read-only memory (Read-Only Memory, ROM), flash memory (Flash Memory), a hard disk drive (Hard Disk Drive, HDD), or a solid-state drive (Solid-State Drive, SSD); the memory may also include a combination of the above types of memory.
• the memory 32 can be used to store the sending module, the processing module, and the receiving module.
• the above-mentioned offload card can be used to perform the steps performed by the first offload card or the second offload card as shown in FIG. 8 to FIG. 10; for details, please refer to FIG. 8 to FIG. 10 and the related descriptions.
  • Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, part or all of the steps described in the foregoing method embodiments can be implemented.
  • Embodiments of the present application also provide a computer program product, which, when run on a computer or a processor, causes the computer or processor to execute one or more steps in any one of the above methods. If each component module of the above-mentioned device is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in the computer-readable storage medium.
• the size of the sequence numbers of the above-mentioned processes does not imply an order of execution; the execution order of each process should be determined by its functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present application.
• in the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners.
• the apparatus embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division methods: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
• the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium.
• the technical solution of the present application in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
• the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
• the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
  • the modules in the apparatus of the embodiment of the present application may be combined, divided and deleted according to actual needs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Stored Programmes (AREA)

Abstract

A virtual machine live migration method, apparatus, and system, used to migrate a source virtual machine running on a source server to a destination server. The method includes: a first front-end device sends dirty page address information and device state information of the source virtual machine to a first back-end device through a first internal channel, where the first front-end device is disposed in the source server, the first back-end device is disposed in a first offload card inserted into the source server, and the first internal channel is provided between the first offload card and the source server; the first back-end device reads dirty memory pages from the memory of the source server according to the dirty page address information, and sends the dirty memory pages, the dirty page address information, and the device state information to a second back-end device through an external channel, where the second back-end device is disposed in a second offload card inserted into the destination server. The above method can reduce the resource occupation of the servers and lower their resource occupancy rate.

Description

Virtual machine migration method, apparatus, and system
Technical Field
The present invention relates to the field of cloud computing technologies, and in particular, to a virtual machine migration method, apparatus, and system.
Background
A virtual machine is a complete computer system that is simulated by software, has complete hardware system functions, and runs in a completely isolated environment. With virtual machine software, one or more virtual computers can be simulated on a single physical computer. These virtual machines work exactly like real computers: an operating system and applications can be installed on them, they can access network resources, and so on.
Traditional virtualization technology mainly consists of compute virtualization and input/output (I/O) virtualization. As a core technology of cloud scenarios, it shares one physical server among multiple users at the granularity of virtual machines, enabling users to use physical resources conveniently and flexibly under the premise of secure isolation, and greatly improving the utilization of physical resources.
In current virtualization architectures, both the management plane and the data plane need to use the computing resources of the physical server, so the physical server cannot provide all of its resources to users, which causes a certain degree of resource waste. In addition, to maximize the utilization of physical resources, the virtual machines in a cluster often need to be scheduled appropriately, and the virtual machine scheduling process occupies a large amount of the server's computing resources.
Therefore, how to reduce the occupation of the physical server's computing resources during virtual machine migration and lower the physical server's resource occupancy rate during migration is a problem that urgently needs to be solved.
Summary
Embodiments of the present invention disclose a virtual machine migration method, apparatus, and system. By implementing virtual machine migration through offload cards, the resource occupation of the servers during migration can be reduced, the efficiency and security of virtual machine migration can be improved, and the complexity and cost of migration can be lowered.
According to a first aspect, this application provides a virtual machine migration method. The method includes: a first front-end device sends dirty page address information and device state information of the source virtual machine to a first back-end device through a first internal channel, where the first front-end device is disposed in the source server, the first back-end device is disposed in a first offload card inserted into the source server, and the first internal channel is provided between the first offload card and the source server; the first back-end device reads dirty memory pages from the memory of the source server according to the dirty page address information through the first internal channel, and sends the dirty memory pages, the dirty page address information, and the device state information to a second back-end device through an external channel, where the second back-end device is disposed in a second offload card inserted into the destination server.
In this embodiment of the application, the first back-end device in the first offload card obtains the dirty memory pages from the memory of the source server according to the dirty page address information sent by the first front-end device, and then sends the obtained dirty pages, together with the dirty page address information and the device state information, to the second back-end device in the second offload card, so that the device state and memory of the destination virtual machine in the destination server can be set accordingly. This enables live migration of the virtual machine. Moreover, the work of live-migrating the source virtual machine's dirty memory pages according to the dirty page address information is carried out by the first offload card, which can effectively reduce the resource occupation of the source server and lower its resource occupancy rate.
With reference to the first aspect, in a possible implementation of the first aspect, the second back-end device sends the device state information to a second front-end device through a second internal channel, where the second internal channel is provided between the second offload card and the destination server, and the second front-end device is disposed in the destination server; the second front-end device sets the device state of the destination virtual machine according to the device state information; and the second back-end device writes the dirty memory pages into the memory of the destination server according to the dirty page address information through the second internal channel.
In this embodiment of the application, after receiving the dirty page address information and the dirty memory pages, the second back-end device in the second offload card directly writes the dirty pages into the memory of the destination server according to the dirty page address information; the destination server only needs to set the device state of the destination virtual machine according to the device state information. This reduces the resource occupation of the destination server and improves its resource utilization.
With reference to the first aspect, in a possible implementation of the first aspect, the external channel includes a first data link and a second data link, where the first data link is used to transmit the device state information, and the second data link is used to transmit the dirty memory pages and the dirty page address information.
In this embodiment of the application, transmitting different data types over different data links allows the first back-end device or the second back-end device to distinguish the data without parsing the transmitted content, and to apply further processing by direct memory access (DMA) when the transmitted data is determined to be virtual machine dirty page data. This can effectively improve migration efficiency, reduce device complexity, and improve device reliability.
With reference to the first aspect, in a possible implementation of the first aspect, the first back-end device compresses and encrypts the dirty memory pages and the device state information of the source virtual machine; the second back-end device decompresses and decrypts the dirty memory pages and the device state information of the source virtual machine.
In the solution provided by this application, optimization techniques such as data compression and data encryption can be flexibly added during live migration, which can further reduce the occupation of the servers' computing resources and improve the scalability of virtual machine migration.
With reference to the first aspect, in a possible implementation of the first aspect, the first data link and the second data link are implemented through Transmission Control Protocol (TCP) links or User Datagram Protocol (UDP) links.
In the solution provided by this application, data can be transmitted between the first offload card and the second offload card based on a variety of network protocols; the first offload card can flexibly choose a TCP link or a UDP link to transmit the migration data.
With reference to the first aspect, in a possible implementation of the first aspect, the first internal channel and the second internal channel are implemented through VSOCK links.
In the solution provided by this application, data transmission between the first offload card and the source server, and between the second offload card and the destination server, can be based on a Peripheral Component Interconnect Express (PCIe) interface, for example a VSOCK link, improving data transmission efficiency.
According to a second aspect, this application provides a virtual machine migration system. The virtual machine live migration system includes a source server, a first offload card, a destination server, and a second offload card.
A first front-end device sends dirty page address information and device state information of a source virtual machine to a first back-end device through a first internal channel, where the first front-end device is disposed in the source server, the first back-end device is disposed in the first offload card inserted into the source server, and the first internal channel is provided between the first offload card and the source server.
The first back-end device reads dirty memory pages from the memory of the source server according to the dirty page address information through the first internal channel, and sends the dirty memory pages, the dirty page address information, and the device state information to a second back-end device through an external channel, where the second back-end device is disposed in the second offload card inserted into the destination server.
With reference to the second aspect, in a possible implementation of the second aspect, the second back-end device sends the device state information to a second front-end device through a second internal channel, where the second internal channel is provided between the second offload card and the destination server, and the second front-end device is disposed in the destination server; the second front-end device sets the device state of the destination virtual machine according to the device state information; and the second back-end device writes the dirty memory pages into the memory of the destination server according to the dirty page address information through the second internal channel.
With reference to the second aspect, in a possible implementation of the second aspect, the external channel includes a first data link and a second data link, where the first data link is used to transmit the device state information, and the second data link is used to transmit the dirty memory pages and the dirty page address information.
With reference to the second aspect, in a possible implementation of the second aspect, the first back-end device compresses and encrypts the dirty memory pages and the device state information of the source virtual machine; the second back-end device decompresses and decrypts the dirty memory pages and the device state information of the source virtual machine.
With reference to the second aspect, in a possible implementation of the second aspect, the first data link and the second data link are implemented through Transmission Control Protocol (TCP) links or User Datagram Protocol (UDP) links.
With reference to the second aspect, in a possible implementation of the second aspect, the first internal channel and the second internal channel are implemented through VSOCK links.
According to a third aspect, this application provides an offload card, including: a receiving module, configured to receive dirty page address information and device state information of a source virtual machine sent by a first front-end device through a first internal channel, where the first front-end device is disposed in the source server; a processing module, configured to read dirty memory pages from the memory of the source server according to the dirty page address information through the first internal channel; and a sending module, configured to send the dirty memory pages, the dirty page address information, and the device state information to a second back-end device through an external channel, where the second back-end device is disposed in an offload card inserted into the destination server.
According to a fourth aspect, this application provides an offload card. The offload card is inserted into a source server, a first internal channel is provided between the offload card and the source server, and the offload card includes a processor and a memory. The processor executes a program in the memory to perform the following method: receiving dirty page address information and device state information of a source virtual machine sent by a first front-end device through the first internal channel, where the first front-end device is disposed in the source server; reading dirty memory pages from the memory of the source server according to the dirty page address information through the first internal channel; and sending the dirty memory pages, the dirty page address information, and the device state information through an external channel to another offload card inserted into the destination server.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments. Obviously, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of a virtualization technology architecture according to an embodiment of this application;
FIG. 2 is a schematic diagram of a hardware-offload-based virtualization technology architecture according to an embodiment of this application;
FIG. 3 is a schematic diagram of a virtual machine live migration process according to an embodiment of this application;
FIG. 4 is a schematic diagram of virtual machine migration over a TCP connection according to an embodiment of this application;
FIG. 5 is a schematic diagram of RDMA-based virtual machine migration according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of a virtual machine live migration system according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of a server system according to an embodiment of this application;
FIG. 8 is a schematic flowchart of a network connection establishment method according to an embodiment of this application;
FIG. 9 is a schematic diagram of the connection relationships between the devices according to an embodiment of this application;
FIG. 10 is a schematic diagram of virtual machine live migration according to an embodiment of this application;
FIG. 11 is a schematic structural diagram of an offload card according to an embodiment of this application;
FIG. 12 is a schematic structural diagram of another offload card according to an embodiment of this application;
FIG. 13 is a schematic structural diagram of a server according to an embodiment of this application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings. Obviously, the described embodiments are merely some rather than all of the embodiments of this application.
First, some terms and related technologies involved in this application are explained with reference to the accompanying drawings to facilitate understanding by a person skilled in the art.
A cloud management platform provides an access interface that lists the cloud services offered by a public cloud. A tenant can access the cloud management platform through a browser or another client and pay on the platform to purchase a corresponding cloud service. After the purchase, the cloud management platform grants the tenant permission to access the cloud service, so that the tenant can remotely access the cloud service and configure it accordingly.
A public cloud usually refers to cloud services provided by a cloud provider for tenants. Tenants can access the cloud management platform through the Internet, and purchase and use the cloud services offered by the public cloud on that platform. The core attribute of a public cloud is shared resource services. A public cloud can be implemented through the data center of the public cloud service provider, in which multiple physical servers are deployed to provide the computing, network, and storage resources required by the cloud services.
A virtual machine is a complete computer system that is simulated by software, has complete hardware system functions, and runs in a completely isolated environment. Any work that can be done on a physical computer can be done in a virtual machine. When a virtual machine is created on a computer, part of the physical machine's hard disk and memory capacity is used as the virtual machine's hard disk and memory capacity. Each virtual machine has an independent hard disk and operating system, and the virtual machine can be operated just like a physical machine.
The quick emulator (QEMU) is an open-source emulator and virtual machine monitor (VMM). QEMU mainly provides two functions for users: first, as a user-mode emulator, it uses a dynamic code translation mechanism to execute code for an architecture different from the host's; second, as a virtual machine monitor, it emulates a complete system and uses other VMMs to exploit the virtualization support provided by the hardware, creating virtual machines whose performance approaches that of the host.
VSOCK is a protocol type that provides a network socket programming interface; it provides an abstraction of the Transmission Control Protocol (TCP)/Internet Protocol (IP) and exposes a set of interfaces through which the functionality of the TCP/IP protocol suite can be used in a unified and convenient way.
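For ease of understanding only, the following is a minimal sketch of a VSOCK server and client in Python (Linux, Python 3.7+). The port number is an arbitrary illustrative choice and is not something this application specifies:

```python
import socket

def vsock_server(port=9999):
    # Runs as the server side of an internal channel and echoes one message.
    srv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    srv.bind((socket.VMADDR_CID_ANY, port))     # accept from any CID
    srv.listen(1)
    conn, (peer_cid, peer_port) = srv.accept()
    data = conn.recv(4096)                      # receive one message
    conn.sendall(data)                          # echo it back
    conn.close()
    srv.close()

def vsock_client(cid=socket.VMADDR_CID_HOST, port=9999):
    # Runs as the client side, connecting to the server above.
    cli = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    cli.connect((cid, port))
    cli.sendall(b"ping")
    reply = cli.recv(4096)
    cli.close()
    return reply
```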
Direct memory access (DMA) is a capability that allows devices on a computer motherboard to send data directly to memory, without moving it through data copies performed by the central processing unit (CPU) as in traditional memory access. It can avoid the involvement of the operating system and the CPU, which greatly reduces CPU overhead.
Dirty memory pages are the memory pages in the source virtual machine that need to be synchronized to the destination virtual machine, thereby ensuring memory consistency between the source virtual machine and the destination virtual machine.
Live migration, also known as real-time migration or hot migration, in the embodiments of the present invention refers to the following: in the data center of a public cloud service provider, when a source server needs a firmware upgrade, a restart, power-off maintenance, or faces another situation affecting application operation, the cloud management platform needs to select another server in the data center as the destination server, whose specifications are the same as those of the source server, copy the memory pages of the virtual machine in the source server to the destination server, and mount the source server's network disk to the destination server, so that the destination server can run the source server's applications.
Specifically, in the process of live-migrating memory pages, the memory pages of the source virtual machine are migrated to the destination server in real time while the source virtual machine keeps running normally. To keep the source virtual machine available during migration, there is only a very short downtime in the migration process. In the earlier phase of the migration, the virtual machine runs on the source server; when the memory page migration reaches a certain stage, the memory pages of the destination virtual machine on the destination server are exactly consistent (or very close to exactly consistent, for example more than 99% of the memory pages are identical) with those of the source virtual machine. After a very brief switchover (for example, within seconds), the cloud management platform transfers the tenant's control over the source virtual machine to the destination virtual machine, and the destination virtual machine continues to run on the destination server. For the virtual machine itself, because the switchover is so brief, the tenant does not perceive that the virtual machine has been switched. Since the migration process is transparent to the tenant, live migration is suitable for scenarios with high requirements on business continuity.
A virtual machine manager (VMM) is implemented through the operating system kernel; the virtual machine manager can manage and maintain the virtual machines created by the operating system.
At present, to improve the utilization of physical resources, cloud service providers share one physical server among multiple tenants at the granularity of virtual machines, and the physical server also hosts the cloud management platform to manage and maintain the cloud services it provides. As shown in FIG. 1, the server 100 includes physical hardware resources 110, which specifically include computing resources 1110, storage resources 1120, and network resources 1130. A management page client 1210, a compute virtualization program 1220, and an I/O virtualization program 1230 are deployed in the virtual machine manager 120. The virtual machine manager 120 virtualizes the computing resources 1110 through the compute virtualization program 1220 and provides them to the virtual machine 130, the virtual machine 140, and the virtual machine 150 created on the server 100; the virtual machine manager 120 virtualizes the storage resources 1120 and the network resources 1130 through the I/O virtualization program 1230 and provides them to the virtual machine 130, the virtual machine 140, and the virtual machine 150. Tenants can obtain different computing, network, and storage resources by purchasing virtual machines of different specifications. It can be seen that, in addition to the virtual machines, the server also hosts management-plane programs such as the management page client, which connects and communicates with the cloud management platform, receives the virtual machine management commands sent by the cloud management platform, and reports the virtual machines' states back to it. Both the management-plane client and the data-plane interactions occupy the server's computing resources, so the server cannot provide all of its resources to tenants, which causes a certain degree of waste.
To further improve the resource utilization of the server and let tenants fully use the server's resources, an offload card with certain computing, storage, and network resources is inserted into the server, and all components other than the computing resources are deployed to run on the offload card, so that the server's resources can be completely allocated to the virtual machines. As shown in FIG. 2, the server 210 and the offload card 220 are connected through Peripheral Component Interconnect Express (PCIe). The server 210 virtualizes the computing resources 21110 through the compute virtualization program 21210 in the virtual machine manager 2120 and provides them to the virtual machine 2130, the virtual machine 2140, and the virtual machine 2150; the server 210 virtualizes the storage resources 2220 and the network resources 2230 in the offload card 220 through the I/O virtualization program 21220 in the virtual machine manager 2120 and provides them to the virtual machine 2130, the virtual machine 2140, and the virtual machine 2150. Tenants use the server's computing resources and the offload card's storage and network resources by purchasing virtual machines of different specifications. A management page client 2210 is also deployed in the offload card 220 to manage and maintain the cloud services provided by the server. It can be seen that, by offloading the management page client from the server to the offload card and using the offload card to communicate with the cloud management platform, the server's resources can be fully allocated to the virtual machines, which improves the server's resource utilization.
It should be noted that, to maximize the utilization of physical resources, cloud service providers usually schedule the virtual machines in a data center (that is, a server cluster) appropriately, which requires live migration of virtual machines: without the tenant's awareness, a tenant's virtual machine is migrated from the physical server where it currently resides to another physical server and continues to work. A virtual machine mainly consists of three types of elements: CPU, memory, and I/O devices. When migrating a virtual machine, it is only necessary to obtain the states of these three types of elements on the source server and then transfer them to the destination server for restoration to complete the live migration. As shown in FIG. 3, taking QEMU as an example, the live migration process is as follows. First, the source server transmits the virtual machine's dirty memory pages to the destination server and then judges whether the dirty pages have converged; if not, it continues to transmit dirty pages to the destination server; if they have converged, the source server pauses the virtual machine, transmits the last round of dirty pages to the destination server together with the CPU and device state, and finally transmits an end-of-migration flag to the destination server. The destination server continuously receives the data sent by the source server and, after each reception, checks whether it contains the end flag. If it does not, the destination server processes the data according to its type: for example, if dirty memory pages are received, they are copied to the specified locations in the virtual machine's memory; if CPU and device state information is received, the virtual machine's CPU and device state are set. If the end flag is received, the virtual machine is immediately resumed; at this point, the live migration of the virtual machine from the source server to the destination server is complete. It can be seen from the above process that the service interruption time is related to the duration between the source server pausing the virtual machine and the destination server resuming it; QEMU uses a multi-round iterative dirty page transmission control algorithm to reduce the data volume of the last round of dirty pages and thereby shorten the interruption time.
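As a toy illustration of this pre-copy loop (not QEMU's actual implementation), the following sketch treats memory as a dict of page number to bytes and the connection as a plain list; the convergence test against available bandwidth is simplified to a page-count threshold:

```python
# Toy pre-copy loop, following the flow in FIG. 3. "pages", "get_dirty",
# "pause" and "device_state" are stand-ins for the real VM interfaces,
# which this application does not specify.
def precopy_migrate(pages, get_dirty, pause, device_state, channel, threshold):
    channel.append(("pages", dict(pages)))       # initial full memory copy
    while True:
        dirty = get_dirty()                      # pages dirtied since last round
        if len(dirty) < threshold:               # simplified convergence test
            break
        channel.append(("pages", dirty))         # iterative dirty-page rounds
    pause()                                      # stop-and-copy: pause the VM
    channel.append(("pages", get_dirty()))       # last round of dirty pages
    channel.append(("state", device_state()))    # CPU and device state
    channel.append(("end", None))                # end-of-migration flag
```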
To complete the live migration process shown in FIG. 3, before the migration QEMU establishes a TCP connection between the source server and the destination server. As shown in FIG. 4, the source server 410 includes a virtual machine 4110 and a network interface controller (NIC) 4120; an operating system 41110 is deployed in the virtual machine 4110; the destination server 420 has a structure similar to that of the source server 410; and a TCP connection is established between the source server 410 and the destination server 420. The virtual machine 4110 in the source server 410 runs a live migration thread to complete the live migration process described in FIG. 3, and all data involved in the migration process is sent to the destination server 420 over the TCP connection. It is worth noting that the live migration thread is executed by the source server, which results in a high resource occupancy rate on the source server, taking up almost one full CPU computing resource during the migration. In addition, if the memory specification of the virtual machine to be migrated is large, the live migration thread lasts longer, occupying server resources for a long time and lowering the server's resource utilization. It is also not conducive to applying other data optimization techniques, such as data compression and data encryption, because these would consume even more of the server's computing resources and make its computing resource occupancy rate even higher.
To further improve data transmission efficiency during virtual machine migration, live migration can be implemented based on remote direct memory access (RDMA) technology. As shown in FIG. 5, the source server 510 includes a virtual machine 5110 and an RDMA communication unit 5120; an operating system 51110 is deployed in the virtual machine 5110; the destination server 520 has a structure similar to that of the source server 510; and the source server 510 and the destination server 520 establish an RDMA connection using their RDMA communication units and transmit data through the RDMA protocol. As in FIG. 4 above, the virtual machine 5110 runs a live migration thread to complete the live migration process described in FIG. 3, and the data to be migrated is transmitted over the RDMA connection. Although an RDMA connection improves data transmission efficiency compared with a TCP connection, the live migration thread is likewise executed by the source server and still consumes 0.3-0.5 CPU computing resources, so the resource occupancy rate remains high. Furthermore, because the virtual machine memory is accessed and transmitted directly by the RDMA hardware, other data optimization techniques (such as data compression and data encryption) cannot be added at the software level. In addition, the server must be fitted with hardware devices supporting RDMA, which increases application and maintenance costs.
Based on the above, this application provides a virtual machine live migration method that uses the resources on the offload card to complete the processing of the virtual machine's dirty memory pages, thereby reducing the consumption of the server's computing resources, lowering the server's resource occupancy rate, and improving the efficiency and security of virtual machine migration.
The technical solutions of the embodiments of this application can be applied to any system that needs live migration of virtual machines, and are especially suitable for scenarios where the server has no network protocol stack and is connected to other servers through an offload card.
Refer to FIG. 6, which is a schematic structural diagram of a virtual machine live migration system provided by this application. As shown in FIG. 6, the live migration system of this application includes a cloud management platform 610 and multiple server systems, where the server systems may include a server system 620 and a server system 630. The server system 620 may include a server 6210 and an offload card 6220, where a VMM 62110, a virtual machine 62120, and a virtual machine 62130 run on the server 6210. The structure of the server system 630 is similar to that of the server system 620. The cloud management platform 610 can connect to each offload card through a network; an offload card can be connected to its server through a preset interface, for example a PCIe interface; and different offload cards and servers can communicate with each other through the network. The live migration system can be deployed in the data center of a public cloud service provider, and the cloud management platform 610 is used to manage the multiple server systems.
Refer to FIG. 7, which is a schematic structural diagram of a server system provided by this application. As shown in FIG. 7, the server system includes a server 710 and an offload card 720. The server 710 may include a hardware layer and a software layer; the software layer includes a guest operating system, a VMM, and the like, and the hardware layer includes hardware such as one or more processors (for example, a CPU, a graphics processing unit (GPU), or a neural-network processing unit (NPU)), memory, and chips (for example, a root complex (RC) chip). The offload card 720 may be an application-specific integrated circuit (ASIC) card or a field-programmable gate array (FPGA) card; it also includes a hardware layer and a software layer. Its hardware layer includes hardware such as one or more processors, chips, and a network interface card, where the processing capability of its processors may be weaker than that of the processors in the server 710; its software layer includes various processing units (for example, I/O processing units) to handle the processes related to virtual machine migration. It should be understood that the offload card 720 can also be connected to a network disk through the network interface card, so that the offload card can forward I/O requests from the server to the network disk for processing. In a possible implementation, a VMM 7110, a virtual machine 7120, and a first front-end device 7130 run on the server 710. The first front-end device 7130 may be deployed inside the virtual machine 7120 or outside it, which is not limited in this application. If this server system is the source server system, the first front-end device 7130 is responsible for controlling the virtual machine migration process, mainly including tracking the virtual machine's dirty memory pages, saving the device state information (for example, the CPU state), and reporting migration events. It should be noted that the first front-end device 7130 is not responsible for processing and transmitting the virtual machine's dirty memory pages; it only notifies the first back-end device 7210, through the internal channel (for example, a PCIe interface), of the dirty pages that need to be processed and transmitted. A first back-end device 7210 runs in the offload card 720 and is responsible for data processing and transmission during the migration, for example obtaining the dirty pages by DMA according to the dirty page addresses input by the first front-end device 7130, optimizing the device state information input by the first front-end device 7130, and then sending the data to the destination server. Similarly, if this server system is the destination server system, the first front-end device 7130 sets the device state according to the received device state information and reports migration events, but no longer receives dirty memory pages; the first back-end device 7210 receives the dirty page address information and the dirty pages, writes them to the corresponding locations in the virtual machine's memory through DMA, and also receives the device state information and so on.
It can be seen that, during live migration, the processing and transmission of the virtual machine's dirty memory pages are completed by the first back-end device in the offload card without occupying the server's computing resources; the server's resources can be fully utilized, which lowers the server's resource occupancy rate and effectively improves the utilization of server resources.
Live migration between the source server and the destination server requires the assistance of the source server's first offload card and the destination server's second offload card. Network connections must first be established between the source server and the first offload card, between the destination server and the second offload card, and between the first offload card and the second offload card before data can be exchanged and transmitted. Therefore, before live migration, the network connection topology between the devices must first be established.
Refer to FIG. 8, which is a flowchart of a network connection establishment method provided by this application. As shown in FIG. 8, the method includes:
S801: Start the virtual machine in the destination server and create the second front-end device.
Specifically, after being powered on, the destination server starts the virtual machine running in it and then creates the second front-end device. After its creation, the second front-end device runs as the server side of the internal channel, waiting for a back-end device in the same environment to connect.
In a specific embodiment, the above internal channel may be a PCIe-based transport link, for example a VSOCK link.
S802: Start the second back-end device in the second offload card.
Specifically, the second offload card is inserted into the destination server. After the destination server creates the second front-end device, the second offload card starts the second back-end device. After starting, the second back-end device first establishes a connection, as the client side of the internal channel, with the internal channel's server side (that is, the second front-end device); once this connection is established, the second back-end device runs as the server side of the external channel, waiting for the external channel's client to connect.
In a specific embodiment, the above external channel may be a transport link based on various network transmission protocols, for example a Transmission Control Protocol (TCP) link or a User Datagram Protocol (UDP) link.
S803: Start the first back-end device in the first offload card.
Specifically, after the first back-end device of the first offload card is started, it first establishes a connection, as the client side of the external channel, with the external channel's server side (that is, the second back-end device); once this connection is established, the first back-end device runs as the server side of the internal channel, waiting for the internal channel's client (that is, the first front-end device in the source server) to connect.
S804: The source server creates the first front-end device.
Specifically, after the source server creates the first front-end device, the first front-end device establishes a connection, as the client side of the internal channel, with the first back-end device in the first offload card, and the first front-end device and the first back-end device can then transmit data through the internal channel.
It can be seen that, by executing the method flow described in FIG. 8 above, after the connections are established, the first front-end device in the source server and the first back-end device in the first offload card, as well as the second front-end device in the destination server and the second back-end device in the second offload card, can transmit data through the internal channels, and the first back-end device in the first offload card and the second back-end device in the second offload card can transmit data through the external channel. As shown in FIG. 9, once the devices at the source and destination have completed connection establishment, it can be guaranteed that during live migration the data to be migrated can be smoothly migrated from the source server to the destination server; the bring-up order is condensed in the sketch below.
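For reference, the bring-up order of S801-S804 can be condensed as follows (roles only; the concrete socket calls would follow the VSOCK sketch above for the internal channels and ordinary TCP/UDP sockets for the external channel, and all CIDs and ports are deployment-specific assumptions):

```python
# 1. destination server    second front-end -> VSOCK server (internal channel 2)
# 2. dest. offload card    second back-end  -> VSOCK client to (1), then
#                                             TCP server   (external channel)
# 3. source offload card   first back-end   -> TCP client to (2), then
#                                             VSOCK server (internal channel 1)
# 4. source server         first front-end  -> VSOCK client to (3)
```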
With reference to the system architecture shown in FIG. 7 and the network connection establishment method flow shown in FIG. 8, the live migration process of a virtual machine is described in detail below. Refer to FIG. 10, which is a schematic diagram of virtual machine live migration provided by this application. Optionally, in the initial state, the source offload card may have a network disk mounted and provide the network disk to the source server for use; after remotely logging in to the source server, a tenant can store the tenant's data in the network disk. It is worth noting that the network disk can also be a cloud service: the tenant can purchase a network disk on the cloud management platform and mount it to the source server.
Specifically, the migration method of this embodiment of the application includes the following steps:
S101: The cloud management platform sends migration commands to the source server and the destination server respectively.
Specifically, the migration command instructs the source server to live-migrate the virtual machine to be migrated to the destination server. The data to be migrated includes the virtual machine's dirty page address information, dirty memory pages, device state information, and so on. The migration command may include the source server's IP address, the source server's MAC address, the destination server's IP address, the destination server's MAC address, or other address information that can identify the source server and the destination server.
In addition, the migration command is issued when migration conditions are met. The migration conditions may be that the source server needs a firmware upgrade, a restart, power-off maintenance, or faces another situation affecting its normal operation. The cloud management platform can learn of such situations in advance, select in the data center a destination server suitable as the migration target according to the situation, and then send the migration command to the source server and the destination server.
S102: The source server sends the full set of memory pages to the destination server through the first offload card and the second offload card.
Specifically, the first front-end device in the source server first sends the address information of the full memory pages of the virtual machine to be migrated to the first back-end device in the first offload card through the internal channel. After receiving the address information, the first back-end device obtains the full memory pages by DMA and then sends the full memory pages and their address information to the second back-end device in the second offload card through the external channel. Correspondingly, the second back-end device in the second offload card receives the full memory pages and their address information, writes the full memory pages directly to the specified locations by DMA according to the received address information, and then sends the address information of the full memory pages to the second front-end device in the destination server through the internal channel; after receiving the address information, the second front-end device can optionally perform checks, for example checking whether the addresses are legal. The second back-end device sets the memory of the target virtual machine according to the full memory pages, so that the memory of the target virtual machine is consistent with that of the virtual machine to be migrated.
In general, once the second back-end device has set the full memory, the migration of memory pages is accomplished. However, this embodiment of the application must also ensure that the network resources and storage resources of the target virtual machine are the same as those of the virtual machine to be migrated. Therefore, after the full memory pages are set on the destination server and before the network and storage resources are migrated from the source server to the destination server, tenants can still access the virtual machine to be migrated on the source server; the source server's operating system continues to perform write operations on the memory of the virtual machine to be migrated, thereby producing dirty memory pages, and at the same time the first offload card may also perform DMA write operations on that memory, likewise producing dirty memory pages.
Therefore, the first offload card must obtain the dirty memory pages produced in both of the above situations and send them to the second offload card, which updates the full memory according to these dirty pages, thereby ensuring that the dirty pages produced by the virtual machine to be migrated before the migration of its network and storage resources completes can be synchronized on the target virtual machine.
S103: The source server sends the dirty page address information and the device state information to the first offload card.
Specifically, the first front-end device in the source server turns on the dirty page tracking function to track the dirty pages produced by the operating system in the source virtual machine's memory, thereby producing dirty page location information for the source virtual machine's memory.
The operating system producing dirty pages in the source virtual machine's memory specifically means that, while running the operating system, the processor in the source server performs data write operations on the source virtual machine's memory, which involves modifications to the data in memory pages; the first front-end device can record which memory pages are modified in this situation.
It is worth noting that in this embodiment of the application the dirty page location information may be a dirty page bitmap. The dirty page bitmap can mark the memory pages of the source virtual machine's operating system with 0s and 1s: when data has been written to a memory page, its bitmap value is 1; when no data has been written to a memory page, its bitmap value is 0. The dirty page bitmap records the memory page numbers and records 0 or 1 for each page number. Of course, the dirty page location information can also be implemented in other ways; from the dirty page location information it can be learned which memory pages in the source virtual machine have been modified.
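As an illustrative sketch only (the page size and the page numbering are assumptions; this application does not fix either), a dirty page bitmap of this kind can be kept as follows:

```python
PAGE_SIZE = 4096  # assumed page size; not specified by this application

class DirtyPageBitmap:
    """Toy bitmap: bit value 1 means the page was written, 0 means clean."""

    def __init__(self, mem_bytes):
        n_pages = (mem_bytes + PAGE_SIZE - 1) // PAGE_SIZE
        self.bits = bytearray((n_pages + 7) // 8)

    def mark_dirty(self, addr):
        # Set the bit for the page containing the written address.
        page = addr // PAGE_SIZE
        self.bits[page // 8] |= 1 << (page % 8)

    def drain(self):
        """Return the dirty page numbers and clear the bitmap for the next round."""
        dirty = [i * 8 + b
                 for i, byte in enumerate(self.bits) if byte
                 for b in range(8) if byte & (1 << b)]
        self.bits = bytearray(len(self.bits))
        return dirty
```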
Further, the first front-end device also records and saves the device state information of the source virtual machine. After completing the dirty page tracking and the device state recording, the first front-end device sends the source virtual machine's dirty page address information and device state information to the first back-end device in the first offload card through the internal channel.
It should be noted that, when sending data to the first back-end device, the first front-end device can choose different links according to the data type: for example, data related to the source virtual machine's memory (such as the dirty page address information) is sent over one link, and data unrelated to the source virtual machine's memory (such as the device state information) is sent over another link.
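A minimal sketch of this per-type link selection follows; the type tags are invented here for illustration and are not names this application defines:

```python
# Route migration data onto two links by type, as described above: any
# memory-related payload (dirty pages and their addresses) goes on one
# link, everything else (for example device state) on the other.
MEMORY_TAGS = {"dirty_pages", "dirty_page_addrs"}

def send_on_link(mem_link, other_link, tag, payload):
    # mem_link and other_link are assumed to be connected sockets.
    link = mem_link if tag in MEMORY_TAGS else other_link
    link.sendall(payload)
```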
S104: The first offload card obtains the dirty memory pages from the source server according to the dirty page address information.
Specifically, after the first back-end device in the first offload card receives the dirty page address information, it obtains the dirty memory pages produced by the operating system from the source server's memory by DMA transfer.
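Purely as a stand-in for the DMA transfer (the real transfer is performed by hardware, not by software like this), the per-address page fetch of S104 can be pictured as:

```python
PAGE_SIZE = 4096  # assumed page size, as in the bitmap sketch above

def read_dirty_pages(memory, dirty_addrs):
    # "memory" is a flat bytearray standing in for the source server's RAM;
    # a real offload card would read these ranges by DMA instead.
    return {addr: bytes(memory[addr:addr + PAGE_SIZE]) for addr in dirty_addrs}
```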
S105: The first offload card sends the dirty page address information, the dirty memory pages, and the device state information to the second offload card.
Specifically, the first back-end device in the first offload card sends the received dirty page address information, dirty memory pages, and device state information to the second back-end device in the second offload card through the external channel.
Similar to the description in S103 above, the first back-end device can choose different transmission links for different types of data: data related to the source virtual machine's memory is sent over one link, and data unrelated to the source virtual machine's memory is sent over another link.
Optionally, before sending the data, the first back-end device can further optimize the data according to actual needs, for example by compressing it, encrypting it, or applying zero-page optimization, which can improve data transmission performance during live migration, improve the efficiency and security of the migration, reduce resource consumption, and save migration costs.
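One possible shape for such a compress-then-encrypt step is sketched below. This is an illustration only: zlib compression and AES-GCM via the third-party cryptography package are choices made here for the example, not techniques this application mandates.

```python
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def pack(payload, key):
    # Compress, then encrypt with AES-GCM; a random 12-byte nonce is
    # prepended so the receiving side can decrypt.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, zlib.compress(payload), None)

def unpack(blob, key):
    # Reverse of pack(): split off the nonce, decrypt, then decompress.
    nonce, ciphertext = blob[:12], blob[12:]
    return zlib.decompress(AESGCM(key).decrypt(nonce, ciphertext, None))
```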
S106: The second offload card writes the dirty memory pages into the destination server's memory according to the dirty page address information.
Specifically, after the second back-end device in the second offload card receives the data, if the data has been compressed and encrypted, the data must first be decompressed and decrypted; then, according to the received dirty page address information, the dirty memory pages are written directly into the destination server's memory by DMA transfer.
S107: The second offload card sends the dirty page address information and the device state information to the destination server.
Specifically, the second back-end device in the second offload card sends the dirty page address information and the device state information to the second front-end device in the destination server through the internal channel. After receiving the data, the second front-end device parses its content and processes it according to its specific meaning, for example setting the device state of the target virtual machine according to the device state information, or checking whether the addresses are legal according to the dirty page address information.
S108: The source server determines whether the criterion for pausing the source virtual machine has been reached. If the pause criterion has not been reached, return to step S103; if it has been reached, perform step S109, that is, end the migration process and notify the cloud management platform that the migration is complete.
Specifically, the first front-end device in the source server judges whether the volume of dirty memory pages produced by the operating system in the source virtual machine is smaller than the capacity of the current network bandwidth. If the volume of dirty pages is greater than or equal to the capacity of the current network bandwidth, the source virtual machine does not meet the shutdown criterion; consequently, between the time the first back-end device obtains the dirty pages produced by the operating system in the source virtual machine and the time the source virtual machine is shut down, the operating system produces new dirty pages. The first back-end device therefore needs to repeat the steps of obtaining the new dirty pages produced by the operating system in the source virtual machine and sending them to the second back-end device, until the volume of the new dirty pages is smaller than the capacity of the current network bandwidth. At that point, the source virtual machine meets the shutdown criterion and is shut down, and the cloud management platform is notified that the migration is complete.
S110: The destination server notifies the cloud management platform that the target virtual machine is ready.
Specifically, after completing the device state setting and the memory setting of the target virtual machine, the second front-end device of the destination server notifies the cloud management platform that the target virtual machine is ready. At this point, when a tenant remotely logs in to the source virtual machine using the source virtual machine's IP address, it is actually the target virtual machine that is logged in to; but because the switchover process is very brief and can be kept within seconds, tenants are generally unaware of it. Therefore, the above migration process can be imperceptible to tenants, guaranteeing the tenant experience while migrating the virtual machine.
In summary, this embodiment of the application can migrate virtual machines without tenants' awareness, and during the migration the computing resources of the offload cards are used to complete the migration of the virtual machine's dirty memory pages without consuming the servers' computing resources, which can effectively reduce the resource occupation of the servers, effectively improve the servers' resource utilization and the migration efficiency, and guarantee migration security.
The method of the embodiments of this application has been described in detail above. To facilitate better implementation of the above solutions of the embodiments of this application, related devices for implementing the above solutions are correspondingly provided below.
Refer to FIG. 11, which is a schematic structural diagram of an offload card provided by an embodiment of this application. As shown in FIG. 11, the offload card includes a receiving module 10, a processing module 11, and a sending module 12, where:
the receiving module 10 is configured to receive the dirty page address information and the device state information of the source virtual machine sent by the first front-end device through the first internal channel, where the first front-end device is disposed in the source server;
the processing module 11 is configured to read dirty memory pages from the source server's memory according to the dirty page address information through the first internal channel;
the sending module 12 is configured to send the dirty memory pages, the dirty page address information, and the device state information to the second back-end device through the external channel, where the second back-end device is disposed in the offload card inserted into the destination server.
For brevity, the offload card is not described in detail here; for details, refer to FIG. 6, FIG. 7, and the related descriptions. In addition, the modules in the offload card can perform the steps performed by the corresponding modules in FIG. 8 to FIG. 10; for details, refer to FIG. 8 to FIG. 10 and the related descriptions, which are not repeated here.
An embodiment of this application provides a server system, where the server system includes a server and an offload card, and the offload card can be inserted into the server. As shown in FIG. 12, the server includes one or more processors 20, a communication interface 21, and a memory 22, where the processor 20, the communication interface 21, and the memory 22 can be connected through a bus 23. The bus may be a PCIe bus or another high-speed bus.
The processor 20 includes one or more general-purpose processors, where a general-purpose processor may be any type of device capable of processing electronic instructions, including a central processing unit (Central Processing Unit, CPU), a microprocessor, a microcontroller, a main processor, a controller, an ASIC (Application Specific Integrated Circuit), and so on. The processor 20 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 22, which enable the server to provide a wide variety of services. For example, the processor 20 can execute programs or process data to perform at least part of the methods discussed herein.
The communication interface 21 may be a wired interface (for example, an Ethernet interface) for communicating with the client. When the communication interface 21 is a wired interface, it can adopt a protocol suite on top of TCP/IP, for example the RAAS protocol, the Remote Function Call (RFC) protocol, the Simple Object Access Protocol (SOAP), the Simple Network Management Protocol (SNMP), the Common Object Request Broker Architecture (CORBA) protocol, distributed protocols, and so on.
The memory 22 may include volatile memory (Volatile Memory), such as random access memory (Random Access Memory, RAM); the memory may also include non-volatile memory (Non-Volatile Memory), such as read-only memory (Read-Only Memory, ROM), flash memory (Flash Memory), a hard disk drive (Hard Disk Drive, HDD), or a solid-state drive (Solid-State Drive, SSD); the memory may also include a combination of the above types of memory. The memory can be used to store the guest operating system as well as the VMM.
It can be understood that the above server can be used to perform the steps performed by the source server or the destination server in FIG. 8 to FIG. 10; for details, refer to FIG. 8 to FIG. 10 and the related descriptions.
As shown in FIG. 13, the offload card includes one or more processors 30, a communication interface 31, and a memory 32, where the processor 30, the communication interface 31, and the memory 32 can be connected through a bus 34.
The processor 30 includes one or more general-purpose processors, where a general-purpose processor may be any type of device capable of processing electronic instructions, including a central processing unit (Central Processing Unit, CPU), a microprocessor, a microcontroller, a main processor, a controller, an ASIC (Application Specific Integrated Circuit), and so on. The processor 30 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 32, which enable the client to provide a wide variety of services. For example, the processor 30 can execute programs or process data to perform at least part of the methods discussed herein.
The communication interface 31 may be a wired interface (for example, an Ethernet interface) for communicating with a server or a user. When the communication interface 31 is a wired interface, it can adopt a protocol suite on top of TCP/IP, for example the RAAS protocol, the Remote Function Call (RFC) protocol, the Simple Object Access Protocol (SOAP), the Simple Network Management Protocol (SNMP), the Common Object Request Broker Architecture (CORBA) protocol, distributed protocols, and so on.
The memory 32 may include volatile memory (Volatile Memory), such as random access memory (Random Access Memory, RAM); the memory may also include non-volatile memory (Non-Volatile Memory), such as read-only memory (Read-Only Memory, ROM), flash memory (Flash Memory), a hard disk drive (Hard Disk Drive, HDD), or a solid-state drive (Solid-State Drive, SSD); the memory may also include a combination of the above types of memory. The memory 32 can be used to store the sending module, the processing module, and the receiving module.
It can be understood that the above offload card can be used to perform the steps performed by the first offload card or the second offload card in FIG. 8 to FIG. 10; for details, refer to FIG. 8 to FIG. 10 and the related descriptions.
An embodiment of this application further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, some or all of the steps of any one of the methods recorded in the above method embodiments can be implemented.
An embodiment of this application further provides a computer program product which, when run on a computer or a processor, causes the computer or processor to execute one or more steps of any one of the above methods. If the component modules of the above devices are implemented in the form of software functional units and sold or used as independent products, they can be stored in the computer-readable storage medium.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, refer to the related descriptions of the other embodiments.
It should be understood that "first", "second", "third", "fourth", and the various numerals involved herein are merely distinctions made for convenience of description and are not used to limit the scope of this application.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should also be understood that, in the various embodiments of this application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The steps in the methods of the embodiments of this application can be reordered, combined, and deleted according to actual needs.
The modules in the apparatuses of the embodiments of this application can be combined, divided, and deleted according to actual needs.
Finally, the above embodiments are merely intended to illustrate the technical solutions of this application rather than to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications can still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements can be made to some of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of this application.

Claims (14)

  1. A virtual machine migration method, characterized in that it is used to migrate a source virtual machine running on a source server to a destination server, the method comprising the following steps:
    a first front-end device sends dirty page address information and device state information of the source virtual machine to a first back-end device through a first internal channel, wherein the first front-end device is disposed in the source server, the first back-end device is disposed in a first offload card inserted into the source server, and the first internal channel is provided between the first offload card and the source server;
    the first back-end device reads dirty memory pages from the memory of the source server according to the dirty page address information through the first internal channel, and sends the dirty memory pages, the dirty page address information, and the device state information to a second back-end device through an external channel, wherein the second back-end device is disposed in a second offload card inserted into the destination server.
  2. The method according to claim 1, characterized by further comprising:
    the second back-end device sends the device state information to a second front-end device through a second internal channel, wherein the second internal channel is provided between the second offload card and the destination server, and the second front-end device is disposed in the destination server;
    the second front-end device sets the device state of a destination virtual machine according to the device state information;
    the second back-end device writes the dirty memory pages into the memory of the destination server according to the dirty page address information through the second internal channel.
  3. The method according to claim 1 or 2, characterized in that the external channel comprises a first data link and a second data link, wherein the first data link is used to transmit the device state information, and the second data link is used to transmit the dirty memory pages and the dirty page address information.
  4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    the first back-end device compresses and encrypts the dirty memory pages and the device state information of the source virtual machine;
    the second back-end device decompresses and decrypts the dirty memory pages and the device state information of the source virtual machine.
  5. The method according to any one of claims 1 to 4, characterized in that
    the first data link and the second data link are implemented through Transmission Control Protocol (TCP) links or User Datagram Protocol (UDP) links.
  6. The method according to any one of claims 1 to 5, characterized in that
    the first internal channel and the second internal channel are implemented through VSOCK links.
  7. A virtual machine migration system, characterized in that the virtual machine live migration system comprises a source server, a first offload card, a destination server, and a second offload card, wherein
    a first front-end device sends dirty page address information and device state information of a source virtual machine to a first back-end device through a first internal channel, wherein the first front-end device is disposed in the source server, the first back-end device is disposed in the first offload card inserted into the source server, and the first internal channel is provided between the first offload card and the source server;
    the first back-end device reads dirty memory pages from the memory of the source server according to the dirty page address information through the first internal channel, and sends the dirty memory pages, the dirty page address information, and the device state information to a second back-end device through an external channel, wherein the second back-end device is disposed in the second offload card inserted into the destination server.
  8. The system according to claim 7, characterized in that
    the second back-end device sends the device state information to a second front-end device through a second internal channel, wherein the second internal channel is provided between the second offload card and the destination server, and the second front-end device is disposed in the destination server;
    the second front-end device sets the device state of a destination virtual machine according to the device state information;
    the second back-end device writes the dirty memory pages into the memory of the destination server according to the dirty page address information through the second internal channel.
  9. The system according to claim 7 or 8, characterized in that the external channel comprises a first data link and a second data link, wherein the first data link is used to transmit the device state information, and the second data link is used to transmit the dirty memory pages and the dirty page address information.
  10. The system according to any one of claims 7 to 9, characterized in that
    the first back-end device compresses and encrypts the dirty memory pages and the device state information of the source virtual machine;
    the second back-end device decompresses and decrypts the dirty memory pages and the device state information of the source virtual machine.
  11. The system according to any one of claims 7 to 10, characterized in that
    the first data link and the second data link are implemented through Transmission Control Protocol (TCP) links or User Datagram Protocol (UDP) links.
  12. The system according to any one of claims 7 to 11, characterized in that
    the first internal channel and the second internal channel are implemented through VSOCK links.
  13. An offload card, characterized by comprising:
    a receiving module, configured to receive dirty page address information and device state information of a source virtual machine sent by a first front-end device through a first internal channel, wherein the first front-end device is disposed in the source server;
    a processing module, configured to read dirty memory pages from the memory of the source server according to the dirty page address information through the first internal channel;
    a sending module, configured to send the dirty memory pages, the dirty page address information, and the device state information to a second back-end device through an external channel, wherein the second back-end device is disposed in an offload card inserted into the destination server.
  14. An offload card, characterized in that the offload card is inserted into a source server, a first internal channel is provided between the offload card and the source server, and the offload card comprises a processor and a memory, the processor executing a program in the memory to perform the following method:
    receiving dirty page address information and device state information of a source virtual machine sent by a first front-end device through the first internal channel, wherein the first front-end device is disposed in the source server;
    reading dirty memory pages from the memory of the source server according to the dirty page address information through the first internal channel;
    sending the dirty memory pages, the dirty page address information, and the device state information through an external channel to another offload card inserted into a destination server.
PCT/CN2021/142291 2020-12-29 2021-12-29 Virtual machine migration method, apparatus, and system WO2022143717A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21914451.6A EP4258113A1 (en) 2020-12-29 2021-12-29 Method, apparatus, and system for migrating virtual machine
US18/343,250 US20230333877A1 (en) 2020-12-29 2023-06-28 Virtual machine migration method, apparatus, and system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202011600628.7 2020-12-29
CN202011600628 2020-12-29
CN202110476568.0A 2021-04-29 Virtual machine migration method, apparatus, and system
CN202110476568.0 2021-04-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/343,250 Continuation US20230333877A1 (en) 2020-12-29 2023-06-28 Virtual machine migration method, apparatus, and system

Publications (1)

Publication Number Publication Date
WO2022143717A1 true WO2022143717A1 (zh) 2022-07-07

Family

ID=82136479

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/142291 2020-12-29 2021-12-29 Virtual machine migration method, apparatus, and system WO2022143717A1 (zh)

Country Status (4)

Country Link
US (1) US20230333877A1 (zh)
EP (1) EP4258113A1 (zh)
CN (1) CN114691287A (zh)
WO (1) WO2022143717A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116560802A (zh) * 2023-07-05 2023-08-08 Kylin Software Co., Ltd. Adaptive virtual machine live migration method and system based on virtual machine load
CN116700904A (zh) * 2023-08-08 2023-09-05 Suzhou Inspur Intelligent Technology Co., Ltd. Memory snapshot generation method and apparatus, computer device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110320556A1 (en) * 2010-06-29 2011-12-29 Microsoft Corporation Techniques For Migrating A Virtual Machine Using Shared Storage
CN108874506A (zh) * 2018-06-08 2018-11-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Live migration method and apparatus for a virtual machine pass-through device
CN109739618A (zh) * 2018-12-10 2019-05-10 New H3C Cloud Computing Technology Co., Ltd. Virtual machine migration method and apparatus
CN111722909A (zh) * 2020-06-12 2020-09-29 Inspur Electronic Information Industry Co., Ltd. Virtual machine migration method, system, device, and storage medium
CN111736945A (zh) * 2019-08-07 2020-10-02 Beijing Jingdong Shangke Information Technology Co., Ltd. SmartNIC-based virtual machine live migration method, apparatus, device, and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110320556A1 (en) * 2010-06-29 2011-12-29 Microsoft Corporation Techniques For Migrating A Virtual Machine Using Shared Storage
CN108874506A (zh) * 2018-06-08 2018-11-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Live migration method and apparatus for a virtual machine pass-through device
CN109739618A (zh) * 2018-12-10 2019-05-10 New H3C Cloud Computing Technology Co., Ltd. Virtual machine migration method and apparatus
CN111722909A (zh) * 2020-06-12 2020-09-29 Inspur Electronic Information Industry Co., Ltd. Virtual machine migration method, system, device, and storage medium
CN111736945A (zh) * 2019-08-07 2020-10-02 Beijing Jingdong Shangke Information Technology Co., Ltd. SmartNIC-based virtual machine live migration method, apparatus, device, and medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116560802A (zh) * 2023-07-05 2023-08-08 Kylin Software Co., Ltd. Adaptive virtual machine live migration method and system based on virtual machine load
CN116560802B (zh) * 2023-07-05 2023-09-26 Kylin Software Co., Ltd. Adaptive virtual machine live migration method and system based on virtual machine load
CN116700904A (zh) * 2023-08-08 2023-09-05 Suzhou Inspur Intelligent Technology Co., Ltd. Memory snapshot generation method and apparatus, computer device, and storage medium
CN116700904B (zh) * 2023-08-08 2023-11-03 Suzhou Inspur Intelligent Technology Co., Ltd. Memory snapshot generation method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN114691287A (zh) 2022-07-01
EP4258113A1 (en) 2023-10-11
US20230333877A1 (en) 2023-10-19

Similar Documents

Publication Publication Date Title
JP6055310B2 (ja) Virtual storage target offload techniques
KR101530472B1 (ko) Method and apparatus for remote delivery of managed USB services via a mobile computing device
US10120705B2 (en) Method for implementing GPU virtualization and related apparatus, and system
US8176153B2 (en) Virtual server cloning
KR101956411B1 (ko) Delivering a single end-user experience to a client from multiple servers
WO2022143717A1 (zh) Virtual machine migration method, apparatus, and system
US8893013B1 (en) Method and apparatus for providing a hybrid computing environment
US20130086200A1 (en) Live Logical Partition Migration with Stateful Offload Connections Using Context Extraction and Insertion
US8762544B2 (en) Selectively communicating data of a peripheral device to plural sending computers
WO2022143714A1 (zh) Server system, and virtual machine creation method and apparatus
CN113312143A (zh) Cloud computing system, command processing method, and virtualization simulation apparatus
WO2022267427A1 (zh) Virtual machine migration method and system, and electronic device
US20230152978A1 (en) Data Access Method and Related Device
CN117135189A (zh) Server access method and apparatus, storage medium, and electronic device
Guay et al. Early experiences with live migration of SR-IOV enabled InfiniBand
US10579431B2 (en) Systems and methods for distributed management of computing resources
CN114115703A (zh) Bare metal server live migration method and system
US11601515B2 (en) System and method to offload point to multipoint transmissions
CN112965790B (zh) PXE protocol-based virtual machine boot method and electronic device
US11500754B2 (en) Graph-based data multi-operation system
US20230026015A1 (en) Migration of virtual computing storage resources using smart network interface controller acceleration
WO2022041839A1 (zh) Bare metal server live migration method and system
WO2023215029A1 (en) User triggered virtual machine cloning for recovery/availability/scaling
CN118245071A (zh) System installation method, apparatus, device, and readable storage medium
CN117938663A (zh) Virtual machine migration method and apparatus, electronic device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21914451

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021914451

Country of ref document: EP

Effective date: 20230704

NENP Non-entry into the national phase

Ref country code: DE