CN115827169B - Virtual machine migration method and device, electronic equipment and medium - Google Patents


Info

Publication number: CN115827169B (application number CN202310075635.7A)
Other versions: CN115827169A (Chinese, zh)
Inventors: 吴重云, 邓鹏程
Assignee: Tianyi Cloud Technology Co Ltd
Legal status: Active (granted)
Prior art keywords: dirty page, dirty page collection, virtual machine

Classifications

    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An embodiment of the invention provides a virtual machine migration method and apparatus, an electronic device, and a storage medium. The method comprises: during live migration of a virtual machine, if the virtual machine generates a virtual machine exit event because a preset memory area becomes full, counting the exit events to obtain corresponding count information; gathering the change trend of the dirty page collection rate from the virtual machine's dirty page collection thread; and adjusting the dirty page collection rate of the dirty page collection thread according to the count information and/or the change trend of the dirty page collection rate. This provides a way to dynamically adjust the workload of the dirty page collection thread according to the service pressure inside the virtual machine, where that pressure is characterized by the count of virtual machine exit events and the change trend of the dirty page collection rate. Under high pressure the method dynamically accelerates dirty page collection, thereby reducing virtual machine exit events and improving the availability of the guest's services.

Description

Virtual machine migration method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a virtual machine migration method, a virtual machine migration apparatus, an electronic device, and a computer readable storage medium.
Background
In the field of cloud computing, online (live) migration of virtual machines is an important capability, frequently used by operations staff when handling faulty hosts. Live migration requires that memory data, device state, disk data and so on be migrated without interrupting the guest virtual machine's services — ideally the guest does not even notice that a migration took place. During live migration, if a memory area of the guest virtual machine is modified, a mechanism is needed to record that page information; such pages are called memory dirty pages. After one round of memory copying completes, the virtualization layer migrates the dirty pages produced during that round to the destination in the next round. Through successive iterations, all of the guest's latest memory contents eventually reach the destination, execution is switched over, and the live (hot) migration completes.
Dirty ring, a newer memory dirty page tracking feature, is better suited to live migration of large-memory virtual machines and collects memory dirty pages more flexibly: data can be gathered per vCPU (virtual processor), and further features can be built on that data. In practice, however, when the migrated virtual machine is under heavy service pressure, migration becomes time-consuming and guest service performance drops sharply during migration.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention provide a virtual machine migration method and a corresponding virtual machine migration apparatus, electronic device, and computer-readable storage medium that overcome, or at least partially solve, the foregoing problems.
The embodiment of the invention discloses a virtual machine migration method, which comprises the following steps:
during live migration of a virtual machine, if the virtual machine generates a virtual machine exit event based on a preset-memory-area-full event, counting the virtual machine exit event to obtain corresponding count information;
gathering, from a dirty page collection thread of the virtual machine, the change trend of the dirty page collection rate;
and adjusting the dirty page collection rate of the dirty page collection thread according to the count information and/or the change trend of the dirty page collection rate, so that the reclamation rate of the preset memory area in the virtual machine is adjusted by adjusting the dirty page collection rate.
Optionally, the count information includes first count information corresponding to the current round of dirty page collection and second count information corresponding to the previous round of dirty page collection, and adjusting the dirty page collection rate of the dirty page collection thread according to the count information and/or the change trend of the dirty page collection rate includes:
if the count value corresponding to the first count information is greater than the count value corresponding to the second count information, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection.
Optionally, the change trend of the dirty page collection rate includes a first change trend of the dirty page collection rate of the present round and a second change trend of the dirty page collection rate of the previous round, and the adjusting the dirty page collection rate of the dirty page collection thread according to the count information and/or the change trend of the dirty page collection rate includes:
and if the count value corresponding to the first count information is not greater than the count value corresponding to the second count information, adjusting the dirty page collection rate of the dirty page collection thread according to the first change trend and the second change trend.
Optionally, gathering the change trend of the dirty page collection rate from the dirty page collection thread of the virtual machine includes:
counting, from the dirty page collection thread of the virtual machine, number information of the dirty pages collected; the number information includes first number information collected in the current round and second number information collected in the previous round;
if the number value corresponding to the first number information is greater than the number value corresponding to the second number information, determining that the first change trend of the current round's dirty page collection rate is an increasing trend;
and if the number value corresponding to the first number information is less than the number value corresponding to the second number information, determining that the first change trend of the current round's dirty page collection rate is a decreasing trend.
Optionally, adjusting the dirty page collection rate of the dirty page collection thread according to the first change trend and the second change trend includes:
if the first change trend is consistent with the second change trend, increasing the count value corresponding to preset stable count information by a preset value; the preset stable count information is used to measure the stability of the change trend; the change trend includes an increasing trend and a decreasing trend;
if the increased count value corresponding to the preset stable count information meets a preset stable count threshold condition and the first change trend is an increasing trend, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection;
and if the increased count value corresponding to the preset stable count information meets the preset stable count threshold condition and the first change trend is a decreasing trend, reducing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection.
Optionally,
if the increased count value corresponding to the preset stable count information meets a preset stable count threshold condition and the first change trend is an increasing trend, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection includes:
if the increased count value corresponding to the preset stable count information is equal to a preset stable count threshold and the first change trend is an increasing trend, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection;
if the increased count value corresponding to the preset stable count information meets the preset stable count threshold condition and the first change trend is a decreasing trend, reducing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection includes:
if the increased count value corresponding to the preset stable count information is equal to the preset stable count threshold and the first change trend is a decreasing trend, reducing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection.
Optionally, the method further comprises:
and if the first change trend is inconsistent with the second change trend, setting the count value corresponding to the preset stable count information as a preset initial value.
Optionally, the increasing the dirty page collection rate of the dirty page collection thread for a next round of dirty page collection includes:
and reducing the sleep time of the dirty page collecting thread according to a preset first proportion so as to increase the dirty page collecting rate of the next round of dirty page collecting of the dirty page collecting thread by reducing the sleep time of the dirty page collecting thread.
Optionally, the reducing the dirty page collection rate of the dirty page collection thread for a next round of dirty page collection includes:
and increasing the sleep time of the dirty page collecting thread according to a preset second proportion so as to reduce the dirty page collecting rate of the next round of dirty page collecting of the dirty page collecting thread by increasing the sleep time of the dirty page collecting thread.
Optionally, the method further comprises:
if the reduced sleep time of the dirty page collection thread is not within the preset sleep time interval, maintaining the sleep time of the dirty page collection thread unchanged;
and if the increased sleep time of the dirty page collection thread is not within the preset sleep time interval, maintaining the sleep time of the dirty page collection thread unchanged.
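A minimal sketch of the sleep-time mechanism above, in C: the rate is raised by shrinking the collection thread's sleep interval and lowered by growing it, and a scaled value outside the preset interval leaves the sleep time unchanged. The concrete ratios, bounds, and names here are assumptions, not values from the patent:

```c
#include <assert.h>

#define SLEEP_MIN_MS 100    /* lower bound of the preset sleep interval (assumed) */
#define SLEEP_MAX_MS 1000   /* upper bound of the preset sleep interval (assumed) */
#define FIRST_RATIO  0.5    /* "preset first proportion": shrinks sleep time */
#define SECOND_RATIO 2.0    /* "preset second proportion": grows sleep time */

/* Scale the collection thread's sleep time; if the scaled value falls
 * outside the preset interval, keep the current sleep time unchanged. */
long scale_sleep_ms(long sleep_ms, double ratio) {
    long next = (long)(sleep_ms * ratio);
    if (next < SLEEP_MIN_MS || next > SLEEP_MAX_MS)
        return sleep_ms;
    return next;
}
```

Clamping by "refuse the change" rather than "saturate at the bound" follows the claim's wording that an out-of-interval result maintains the sleep time unchanged.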
Optionally, counting, from the dirty page collection thread of the virtual machine, the number information of the dirty pages collected includes:
traversing the virtual processors of the virtual machine and determining the preset memory area corresponding to each virtual processor;
storing, through the kernel of the virtual machine, the collected dirty page information into the preset memory area, the preset memory area being a shared memory area;
and collecting dirty page information from the preset memory area through the dirty page collection thread, and counting the number information of the dirty pages collected.
Optionally, the virtual machine includes a virtualization component, and the virtualization component includes the dirty page collection thread; the virtualization component is a QEMU component.
The embodiment of the invention also discloses a virtual machine migration device, which comprises:
the counting module is used for counting the virtual machine exit event to obtain corresponding count information if, during live migration of the virtual machine, the virtual machine generates a virtual machine exit event based on a preset-memory-area-full event;
the statistics module is used for counting the change trend of the dirty page collection rate from the dirty page collection threads of the virtual machine;
the adjustment module is used for adjusting the dirty page collection rate of the dirty page collection thread according to the counting information and/or the dirty page collection rate change trend so as to adjust the recovery rate of the preset memory area in the virtual machine by adjusting the dirty page collection rate.
Optionally, the count information includes first count information corresponding to the current round of dirty page collection and second count information corresponding to the previous round of dirty page collection, and the adjustment module includes:
and the increment sub-module is used for increasing the dirty page collection rate of the next round of dirty page collection of the dirty page collection thread if the count value corresponding to the first count information is larger than the count value corresponding to the second count information.
Optionally, the change trend of the collection rate of the dirty pages includes a first change trend of the collection rate of the dirty pages of the present round and a second change trend of the collection rate of the dirty pages of the previous round, and the adjustment module includes:
and the adjustment sub-module is used for adjusting the dirty page collection rate of the dirty page collection thread according to the first change trend and the second change trend if the count value corresponding to the first count information is not larger than the count value corresponding to the second count information.
Optionally, the statistics module includes:
a statistics sub-module, used for counting, from the dirty page collection thread of the virtual machine, the number information of the dirty pages collected; the number information includes first number information collected in the current round and second number information collected in the previous round;
a first determining sub-module, used for determining that the first change trend of the current round's dirty page collection rate is an increasing trend if the number value corresponding to the first number information is greater than the number value corresponding to the second number information;
and a second determining sub-module, used for determining that the first change trend of the current round's dirty page collection rate is a decreasing trend if the number value corresponding to the first number information is less than the number value corresponding to the second number information.
Optionally, the adjusting sub-module includes:
the first increasing unit is used for increasing the count value corresponding to the preset stable count information by a preset value if the first change trend is consistent with the second change trend; the preset stability counting information is used for measuring the stability of the change trend; the change trend includes an increasing trend and a decreasing trend;
the second increasing unit is used for increasing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread if the count value corresponding to the increased preset stable count information meets a preset stable count threshold condition and the first change trend is an increasing trend;
and the reducing unit is used for reducing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread if the count value corresponding to the increased preset stable count information meets the preset stable count threshold condition and the first change trend is a reduction trend.
Optionally,
the second increasing unit includes:
a first increment subunit, configured to increment the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round if the incremented count value corresponding to the preset stable count information is equal to a preset stable count threshold and the first change trend is an increment trend;
the lowering unit includes:
and the reducing subunit is configured to reduce the dirty page collection rate of the dirty page collection thread for performing the next round of dirty page collection if the count value corresponding to the increased preset stable count information is equal to the preset stable count threshold and the first change trend is a reducing trend.
Optionally, the apparatus further comprises:
the setting module is configured to set a count value corresponding to the preset stable count information to a preset initial value if the first variation trend is inconsistent with the second variation trend.
Optionally, the second increasing unit includes:
a reducing subunit, configured to reduce the sleep time of the dirty page collecting thread according to a preset first ratio, so as to increase the dirty page collecting rate of the dirty page collecting thread for performing dirty page collection in a next round by reducing the sleep time of the dirty page collecting thread.
Optionally, the lowering unit includes:
and the second increasing subunit is used for increasing the sleep time of the dirty page collecting thread according to a preset second proportion so as to reduce the dirty page collecting rate of the next round of dirty page collecting of the dirty page collecting thread by increasing the sleep time of the dirty page collecting thread.
Optionally, the apparatus further comprises:
the first maintenance module is used for maintaining the sleep time of the dirty page collection thread unchanged if the reduced sleep time of the dirty page collection thread is not in a preset sleep time interval;
and the second maintenance module is used for maintaining the sleep time of the dirty page collection thread unchanged if the increased sleep time of the dirty page collection thread is not in the preset sleep time interval.
Optionally, the statistics submodule includes:
the traversing and determining unit is used for traversing the virtual processor of the virtual machine and determining the preset memory area corresponding to the virtual processor;
the storage unit is used for storing the collected dirty page information into the preset memory area through the kernel of the virtual machine; the preset memory area is a shared memory area;
And the collection and statistics unit is used for collecting the dirty page information from the preset memory area through the dirty page collection thread and counting the collected number information of the dirty pages.
Optionally, the virtual machine includes a virtualization component that includes the dirty page collection thread; the virtualized component is a QEMU component.
The embodiment of the invention also discloses an electronic device, which comprises: a processor, a memory, and a computer program stored on the memory and capable of running on the processor, which when executed by the processor implements a virtual machine migration method as described above.
The embodiment of the invention also discloses a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the virtual machine migration method described above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, during live migration of a virtual machine, if the virtual machine generates a virtual machine exit event based on a preset-memory-area-full event, the exit events can be counted, the change trend of the dirty page collection rate can be gathered from the dirty page collection thread of the virtual machine, and the dirty page collection rate of the dirty page collection thread can be adjusted based on the count information and/or the change trend. This reduces the frequency of virtual machine exits caused by the preset memory area becoming full, optimizes guest service performance during live migration, effectively reduces the impact on guest services, and improves their availability during migration. The scheme thus dynamically adjusts the workload of the dirty page collection thread according to the service pressure inside the virtual machine, that pressure being characterized by the count of virtual machine exit events and the change trend of the dirty page collection rate. Under high pressure, dirty page collection is dynamically accelerated, reducing virtual machine exit events and improving the availability of guest services.
Drawings
FIG. 1 is a flowchart illustrating steps of a virtual machine migration method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of another virtual machine migration method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a virtual machine migration method according to an embodiment of the present invention;
fig. 4 is a block diagram of a virtual machine migration apparatus according to an embodiment of the present invention.
Detailed Description
So that the above-recited objects, features and advantages of the present invention can be understood in detail, the invention is described below in more detail with reference to embodiments, some of which are illustrated in the appended drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments derived by a person skilled in the art from the embodiments of the invention fall within the scope of protection of the invention.
At present, live migration of a virtual machine marks dirty pages in one of two modes: the bitmap mode and the dirty ring mode.
In the bitmap mode, dirty page information is recorded in a bitmap inside the kernel. When user space queries dirty pages from the kernel through an ioctl, the kernel copies the bitmap from kernel space to user space, and QEMU (the virtualization emulator), after obtaining the dirty page information, decides which memory pages to send next.
In the dirty ring mode, a shared memory area called a ring is mapped between kernel and user space. After collecting a dirty page, the kernel inserts information such as the page offset into the ring, and the virtualization emulator obtains dirty page information directly from the shared ring, without copying bitmap data from kernel space to user space. Each vCPU corresponds to one ring, so dirty pages can be gathered separately per CPU — something the bitmap mode cannot do.
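A toy model of the producer/consumer exchange over one ring may help fix ideas: the kernel pushes dirty page offsets, user space pops them and marks the slot reset so the kernel can reuse it, and a full ring rejects the push — which in the real system forces a virtual machine exit. This is a simplified sketch; the actual KVM dirty ring layout, flags, and synchronization differ:

```c
#include <assert.h>
#include <stdbool.h>

#define RING_SIZE 8  /* the real default is 4096 entries; shrunk for the sketch */

typedef struct {
    unsigned long gfn;  /* offset (guest frame number) of the dirty page */
    bool reset;         /* set by the consumer so the slot can be reused */
} ring_entry_t;

typedef struct {
    ring_entry_t slots[RING_SIZE];
    int head, tail, used;
} dirty_ring_t;

/* Producer (kernel side): insert a dirty page; a full ring means failure,
 * which in the real system triggers a virtual machine exit. */
bool ring_push(dirty_ring_t *r, unsigned long gfn) {
    if (r->used == RING_SIZE)
        return false;
    r->slots[r->tail] = (ring_entry_t){ gfn, false };
    r->tail = (r->tail + 1) % RING_SIZE;
    r->used++;
    return true;
}

/* Consumer (user-space collection thread): take one entry and mark it
 * reset so the kernel can reclaim the slot for a new dirty page. */
bool ring_pop(dirty_ring_t *r, unsigned long *gfn) {
    if (r->used == 0)
        return false;
    *gfn = r->slots[r->head].gfn;
    r->slots[r->head].reset = true;
    r->head = (r->head + 1) % RING_SIZE;
    r->used--;
    return true;
}
```

The model makes the failure mode concrete: if the consumer does not keep up, `ring_push` starts failing, and every failed push corresponds to a disruptive guest exit.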
The current dirty ring collection thread collects dirty pages at a fixed calling frequency and pays no attention to the service pressure inside the virtual machine. If the pressure inside the virtual machine is high, the collection thread cannot collect dirty pages — and thereby trigger reclamation of the shared dirty page ring space — quickly enough, so the virtual machine exits frequently and guest service performance suffers severely.
In other words, when the service pressure inside the virtual machine is high, the user-space dirty page collection thread cannot collect dirty pages in time, so the shared ring space is not reclaimed in time; the kernel then triggers a virtual machine exit when marking dirty pages, and frequent exits make the performance of services inside the virtual machine drop markedly, to the point of being almost unusable. Accordingly, the present invention provides a virtual machine migration method and a corresponding virtual machine migration apparatus, electronic device, and computer-readable storage medium that overcome, or at least partially solve, the above problems.
One of the core concepts of the embodiments of the present invention is that, during live migration of a virtual machine, if the virtual machine generates a virtual machine exit event based on a preset-memory-area-full event, the exit events may be counted, the change trend of the dirty page collection rate may be gathered from the dirty page collection thread of the virtual machine, and the dirty page collection rate of the dirty page collection thread may be adjusted based on the count information and/or the change trend, so as to reduce the frequency of virtual machine exits caused by the preset memory area becoming full, optimize guest service performance during live migration, effectively reduce the impact on guest services, and improve their availability during migration. The scheme dynamically adjusts the workload of the dirty page collection thread according to the service pressure inside the virtual machine, that pressure being characterized by the count of virtual machine exit events and the change trend of the dirty page collection rate; under high pressure, dirty page collection is dynamically accelerated, reducing virtual machine exit events and improving the availability of guest services.
Referring to FIG. 1, a flowchart of the steps of a virtual machine migration method provided by an embodiment of the present invention is shown; the method may specifically include the following steps:
step 101, in the process of virtual machine thermal migration, if the virtual machine generates a virtual machine exit event based on a preset memory area filling event, counting the virtual machine exit event to obtain corresponding counting information.
The preset memory area may be a shared memory area called ring. The preset memory area full event may be a ring full event, in the preset memory area ring, the number of the ring that can be contained is limited, default is 4096, if the kernel finds that the ring is full when the kernel puts dirty page information, an exception is triggered, which may cause a virtual machine to exit once, that is, a virtual machine exit event is generated based on the preset memory area full event.
In the embodiment of the invention, the virtual machine exit event can be counted to obtain the corresponding counting information.
Step 102, gathering, from the dirty page collection thread of the virtual machine, the change trend of the dirty page collection rate.
The virtual machine includes a virtualization component, and the virtualization component includes a dirty page collection thread. The dirty page collection thread may be a thread dedicated to collecting memory dirty pages; it collects them for the migration thread to query when deciding whether to migrate memory.
In practice, if the virtual machine exits because the ring is full, one round of memory dirty page collection is triggered and the memory dirty page information is synchronized into the user-space dirty page data structure, after which the kernel can insert new memory dirty page data into the ring.
The change trend of the dirty page collection rate refers to whether that rate is increasing or decreasing, which in turn reflects whether dirty pages are being produced faster (more dirty pages in the same time period) or slower (fewer dirty pages in the same time period).
In the embodiment of the invention, the change trend of the dirty page collection rate can be counted from the dirty page collection threads of the virtual machine.
Step 103, adjusting the dirty page collection rate of the dirty page collection thread according to the count information and/or the change trend of the dirty page collection rate, so as to adjust the reclamation rate of the preset memory area in the virtual machine by adjusting the dirty page collection rate.
According to the count information for virtual machine exit events and the gathered change trend of the dirty page collection rate, the dirty page collection rate of the dirty page collection thread is adjusted. Dynamically tuning the rate through these two factors influences the reclamation rate of the preset memory area: it reduces frequent virtual machine exits caused by dirty page space running out when dirty pages are collected too slowly, optimizes guest service performance during live migration, lessens interference with guest services, and improves their availability. The preset memory area is the ring buffer that stores dirty page information, so dynamically adjusting the dirty page collection rate directly influences how fast the ring buffer space is reclaimed.
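Putting the two signals together, one adjustment round of step 103 might be sketched as below. For brevity, the stability counter from the optional refinements is omitted here, and the doubling/halving factors are assumed values:

```c
#include <assert.h>

/* One adjustment round: if ring-full exits grew versus last round,
 * collect faster; otherwise consult the dirty page rate trend across
 * three rounds of page counts (current, previous, one before that). */
double next_rate(double rate,
                 long exits_cur, long exits_prev,
                 long pages_cur, long pages_prev, long pages_prev2) {
    if (exits_cur > exits_prev)
        return rate * 2.0;                    /* pressure grew: speed up */
    int cur_inc  = pages_cur  > pages_prev;   /* first change trend */
    int prev_inc = pages_prev > pages_prev2;  /* second change trend */
    if (cur_inc == prev_inc)                  /* consistent trend */
        return cur_inc ? rate * 2.0 : rate * 0.5;
    return rate;                              /* inconsistent: leave as-is */
}
```

Note the priority ordering this encodes: a rise in the exit count overrides the trend signal, because exits are the direct symptom the scheme tries to suppress.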
In summary, in the embodiment of the present invention, during live migration of a virtual machine, if the virtual machine generates a virtual machine exit event based on a preset-memory-area-full event, the exit events may be counted, the change trend of the dirty page collection rate may be gathered from the dirty page collection thread of the virtual machine, and the dirty page collection rate of the dirty page collection thread may be adjusted based on the count information and/or the change trend, thereby reducing the frequency of virtual machine exits caused by the preset memory area becoming full, optimizing guest service performance during live migration, effectively reducing the impact on guest services, and improving their availability during migration. The scheme dynamically adjusts the workload of the dirty page collection thread according to the service pressure inside the virtual machine — characterized by the count of exit events and the change trend of the dirty page collection rate — and dynamically accelerates dirty page collection under high pressure, reducing virtual machine exit events and improving the availability of guest services.
Referring to fig. 2, which shows a step flowchart of another virtual machine migration method provided by an embodiment of the present invention, the method may specifically include the following steps:
in step 201, during live migration of the virtual machine, if the virtual machine generates a virtual machine exit event based on a preset memory area filling event, the virtual machine exit events are counted to obtain corresponding count information.
The current dirty ring dirty page tracking technology of the kernel of a KVM (Kernel-based Virtual Machine, a system virtualization module) is a ring data structure in shared memory that is memory-mapped between the kernel and user space. The kernel KVM directly places memory dirty page information into the ring; the user-mode QEMU periodically takes the dirty pages out of the ring and sets the flag of each taken ring entry to reset, telling the kernel that it can reclaim that entry's space for the next insertion of a new dirty page. The user-mode QEMU has a dedicated thread for collecting memory dirty pages; this thread is responsible for collecting memory dirty pages for the migration thread to query, so as to control whether to migrate memory. That is, the virtual machine includes a virtualized component, namely a QEMU component, which includes the dirty page collection thread.
If the kernel exits because the ring is full, a round of memory dirty page collection is triggered, and the memory dirty page information is synchronized into a user-mode dirty page data structure so that the kernel can insert new memory dirty page data into the ring. In this process, the kernel KVM plays the role of producer and the user-mode QEMU plays the role of consumer, the two being linked by the shared preset memory area (the ring); a scheme is therefore needed to balance the pace (rate) of producer and consumer.
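The producer/consumer relationship described above can be illustrated with a small, self-contained model. This is not the actual KVM/QEMU implementation (which lives in kernel and QEMU C code); the class and method names below are hypothetical, chosen only to mirror the roles in the text: the kernel pushes dirty-page entries, user space harvests them and resets the slots, and a push into a full ring forces a VM exit.

```python
from collections import deque

class DirtyRing:
    """Toy model of one per-vCPU dirty ring. The kernel (producer) pushes
    dirty-page entries; the user-mode collector (consumer) harvests them
    and resets the slots. A push into a full ring forces a VM exit."""

    def __init__(self, size):
        self.size = size
        self.entries = deque()
        self.ring_full_exits = 0  # VM exits caused by the ring being full

    def kernel_mark_dirty(self, gfn):
        """Producer side: returns False (and records a VM exit) when full."""
        if len(self.entries) >= self.size:
            self.ring_full_exits += 1
            return False
        self.entries.append(gfn)
        return True

    def userspace_harvest(self):
        """Consumer side: take every entry out and mark its slot reusable."""
        harvested = list(self.entries)
        self.entries.clear()  # resetting the entries lets the kernel reuse them
        return harvested
```

With a ring of size 2, a third dirty page cannot be inserted until the consumer harvests, which is exactly the imbalance the subsequent rate-adjustment scheme tries to avoid.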
If the client service pressure inside the virtual machine is great, memory dirty pages are generated too fast, causing the kernel to mark dirty pages frequently and insert them into the dirty ring. If the user-mode QEMU triggers dirty page collection and ring space release only at a fixed interval (for example, 1 second), the kernel may be unable to insert new dirty pages, causing the virtual machine to exit and the client service to be suspended. The greater the pressure, the more frequent the virtual machine exits; client services can be impacted and may even appear nearly unusable.
In the embodiment of the invention, during live migration of the virtual machine, if a virtual machine exit event is generated due to a preset memory area filling event, the virtual machine exit events can be counted to obtain corresponding count information.
Step 202, counting the change trend of the dirty page collection rate from the dirty page collection thread of the virtual machine.
In the embodiment of the invention, the change trend of the dirty page collection rate can be counted from the dirty page collection threads of the virtual machine. The dirty-page collection rate change trend may refer to an increasing trend or a decreasing trend of the dirty-page collection rate.
The dirty page collection rate variation trend includes a first variation trend of the dirty page collection rate of the present round and a second variation trend of the dirty page collection rate of the previous round. The trend of the current round of dirty-page collection rate may be referred to as a first trend, and the trend of the last round of dirty-page collection rate may be referred to as a second trend.
In an alternative embodiment of the present invention, the statistics of the dirty page collection rate change trend from the dirty page collection thread of the virtual machine in step 202 may specifically include the following sub-steps:
in substep S11, the number of collected dirty pages is counted from the dirty page collection thread of the virtual machine.
In the substep S12, if the number value corresponding to the first number information is greater than the number value corresponding to the second number information, the first trend of the current dirty page collection rate is determined to be an increasing trend.
In the substep S13, if the number value corresponding to the first number information is smaller than the number value corresponding to the second number information, the first trend of the current dirty page collection rate is determined to be a decreasing trend.
The quantity information includes first quantity information collected in the present round and second quantity information collected in the previous round: the number of dirty pages collected in the present round may be referred to as the first quantity information, and the number of dirty pages collected in the previous round may be referred to as the second quantity information. The change trend includes an increasing trend and a decreasing trend.
If the number value corresponding to the first number information is larger than the number value corresponding to the second number information, determining that the first change trend of the collecting rate of the dirty pages is an increasing trend; and if the number value corresponding to the first number information is smaller than the number value corresponding to the second number information, determining that the first change trend of the round of dirty page collection rate is a decreasing trend.
The number of dirty pages collected in the present round may be compared with the number collected in the previous round to determine whether the number of dirty pages is increasing or decreasing. If it increases, the change trend of the present round's dirty page collection rate is an increasing trend; if it decreases, the change trend is a decreasing trend.
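The trend determination in substeps S12 and S13 reduces to a comparison of two counts. A minimal sketch follows; the function name is illustrative, and the behaviour for equal counts is an assumption, since the text only specifies the strictly-greater and strictly-smaller cases.

```python
def collection_trend(curr_pages, prev_pages):
    """Compare this round's harvested dirty-page count (first quantity
    information) with last round's (second quantity information).
    Returns 'increase' or 'decrease'; returns None for equal counts,
    a case the text does not specify (assumption)."""
    if curr_pages > prev_pages:
        return "increase"
    if curr_pages < prev_pages:
        return "decrease"
    return None
```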
In an alternative embodiment of the present invention, the sub-step S11 of counting the number of collected dirty pages from the dirty page collection thread of the virtual machine may specifically include the following sub-steps:
traversing a virtual processor of the virtual machine, and determining a preset memory area corresponding to the virtual processor; storing the collected dirty page information into a preset memory area through the kernel of the virtual machine; and collecting dirty page information from a preset memory area through a dirty page collecting thread, and counting the collected number information of the dirty pages.
The preset memory area is a shared memory area.
In practical application, all virtual processors (VCPUs) on the virtual machine can be traversed and their corresponding preset memory areas determined. The collected dirty page information is stored into the preset memory areas by the kernel of the virtual machine; dirty page entries not yet collected by the dirty page collection thread can be found in the shared preset memory area (the ring); the dirty page collection thread collects the dirty page information from the preset memory areas and counts the number of dirty pages collected. In addition, offset information of each dirty page, for example an offset field, may be extracted from the dirty page entry in the preset memory area, and the dirty page flag bit of the memory corresponding to the offset may be saved in the user-mode dirty_bitmap.
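The traversal just described can be sketched as follows. This is a simplified model, not QEMU code: each per-vCPU ring is modelled as a list of page offsets and the user-mode dirty_bitmap as a set; all names are hypothetical.

```python
def harvest_all_vcpus(vcpu_rings, dirty_bitmap):
    """Walk every vCPU's preset memory area (modelled as a list of page
    offsets), move each entry's offset into the user-mode dirty bitmap
    (modelled as a set), and return how many entries were collected."""
    collected = 0
    for ring in vcpu_rings:
        while ring:
            offset = ring.pop(0)      # take the entry out of the ring
            dirty_bitmap.add(offset)  # set this page's dirty flag bit
            collected += 1            # the slot is now free for the kernel
    return collected
```

The returned count is exactly the per-round quantity information fed into the trend statistics of substeps S12 and S13.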
The counting information comprises first counting information corresponding to the dirty page collection of the round and second counting information corresponding to the dirty page collection of the previous round. The counted number of virtual machine exit events in the present dirty page collection may be referred to as first count information, and the counted number of virtual machine exit events in the previous dirty page collection may be referred to as second count information.
In step 203, if the count value corresponding to the first count information is greater than the count value corresponding to the second count information, the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread is increased.
In the embodiment of the invention, if the count value corresponding to the first count information is greater than the count value corresponding to the second count information, the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread can be increased.
In step 204, if the count value corresponding to the first count information is not greater than the count value corresponding to the second count information, the dirty page collection rate of the dirty page collection thread is adjusted according to the first change trend and the second change trend.
In the embodiment of the invention, if the count value corresponding to the first count information is not greater than the count value corresponding to the second count information, the dirty page collection rate of the dirty page collection thread can be adjusted according to the first change trend and the second change trend.
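The two-way decision of steps 203 and 204 can be summarized in a few lines. A minimal sketch, with an invented function name; the trend-based branch is passed in as a plain value here purely for illustration.

```python
def next_round_direction(curr_exits, prev_exits, trend_decision):
    """Steps 203/204: if the ring-full VM exit count grew this round,
    speed up the next round unconditionally; otherwise defer to the
    jitter-filtered trend decision."""
    if curr_exits > prev_exits:
        return "speed_up"
    return trend_decision
```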
In an alternative embodiment of the present invention, in step 204, the dirty page collection rate of the dirty page collection thread is adjusted according to the first variation trend and the second variation trend, which specifically includes the following sub-steps:
in the substep S21, if the first variation trend is consistent with the second variation trend, the count value corresponding to the preset stable count information is increased by a preset value.
The preset stable count information is used to measure the stability of the change trend. The stable count measures the degree of stability: if the change trends obtained in two consecutive rounds of statistics are consistent, the trend is considered stable and the count is incremented; if they are inconsistent, the trend is considered unstable, and the count value corresponding to the preset stable count information can be cleared to indicate that the current change trend is unstable. For example, if the previous trend is an increasing trend and the current trend is also an increasing trend, the count is incremented; if the previous trend is an increasing trend and the current trend is a decreasing trend, the count is reset; if the previous trend is a decreasing trend and the current trend is also a decreasing trend, the count is incremented.
In the embodiment of the invention, if the first variation trend is consistent with the second variation trend, the count value corresponding to the preset stable count information is increased by a preset value.
In the substep S22, if the count value corresponding to the increased preset stable count information meets the preset stable count threshold condition and the first trend is an increasing trend, the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread is increased.
In the substep S23, if the count value corresponding to the increased preset stable count information meets the preset stable count threshold condition and the first variation trend is a decreasing trend, the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection is reduced.
If the count value corresponding to the increased preset stable count information meets the preset stable count threshold condition and the first change trend is an increasing trend, increasing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread; and if the count value corresponding to the increased preset stable count information meets the preset stable count threshold condition and the first change trend is a decreasing trend, reducing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread.
In an optional embodiment of the present invention, if the count value corresponding to the increased preset stable count information in the substep S22 meets the preset stable count threshold condition and the first trend of change is an increasing trend, the next round of dirty page collection rate of the dirty page collection thread is increased, which specifically includes the following substeps:
if the count value corresponding to the increased preset stable count information is equal to the preset stable count threshold and the first change trend is an increasing trend, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection.
In an optional embodiment of the present invention, in the substep S23, if the count value corresponding to the increased preset stable count information meets the preset stable count threshold condition and the first trend of change is a decreasing trend, the dirty page collecting rate of the next round of dirty page collection by the dirty page collecting thread is decreased, which specifically includes the following substeps:
if the count value corresponding to the increased preset stable count information is equal to the preset stable count threshold value and the first change trend is a decreasing trend, reducing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread.
In an alternative embodiment of the invention, the following steps may be performed:
If the first change trend is inconsistent with the second change trend, setting a count value corresponding to the preset stable count information as a preset initial value.
In practical application, whether the number of dirty pages in the present round increases or decreases relative to the previous round is counted, and it is determined whether the change trend has changed. If the change trend changes, the stable count is reset; if it is unchanged (two consecutive increasing trends or two consecutive decreasing trends), the stable count is incremented. The stable count is not incremented beyond 3 (the preset stable count threshold). If the stable count is 3 and the change trend of the present round's dirty page collection rate is an increasing trend, the dirty page collection rate of the next round of dirty page collection can be increased, specifically by setting the real adjustment direction flag of the next round to speed_up (increase the dirty page collection rate); if the stable count is 3 and the change trend of the present round's dirty page collection rate is a decreasing trend, the dirty page collection rate of the next round can be reduced, specifically by setting the real adjustment direction flag of the next round to speed_down (reduce the dirty page collection rate).
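The stable-count mechanism above can be modelled compactly. This is an illustrative sketch, not the patented implementation; the class name, constants, and the choice to cap the count at the threshold are all taken from the worked example in the text (threshold 3), while the "keep" return value for not-yet-stable rounds is an assumption.

```python
SPEED_UP, SPEED_DOWN, KEEP = "speed_up", "speed_down", "keep"

class JitterFilter:
    """Model of the jitter-filtering step: only act on a trend after it
    has held for STABLE_THRESHOLD consecutive rounds (3 in the text's
    example); a trend flip resets the stable count to its initial value."""
    STABLE_THRESHOLD = 3

    def __init__(self):
        self.stable_count = 0
        self.prev_trend = None

    def observe(self, trend):
        if trend == self.prev_trend:
            if self.stable_count < self.STABLE_THRESHOLD:
                self.stable_count += 1  # trend held: count toward stability
        else:
            self.stable_count = 0       # trend flipped: treat as jitter
        self.prev_trend = trend
        if self.stable_count == self.STABLE_THRESHOLD:
            return SPEED_UP if trend == "increase" else SPEED_DOWN
        return KEEP
```

A transient one-round fluctuation therefore never changes the adjustment direction, which is precisely the purpose of this filtering step.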
In an alternative embodiment of the present invention, the increasing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread in the substep S22 may specifically include the following substeps:
and reducing the sleep time of the dirty page collecting thread according to a preset first proportion so as to increase the dirty page collecting rate of the next round of dirty page collecting of the dirty page collecting thread by reducing the sleep time of the dirty page collecting thread.
In an alternative embodiment of the present invention, the reducing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread in the substep S23 may specifically include the following substeps:
and increasing the sleep time of the dirty page collecting thread according to a preset second proportion so as to reduce the dirty page collecting rate of the next round of dirty page collecting of the dirty page collecting thread by increasing the sleep time of the dirty page collecting thread.
In an alternative embodiment of the invention, the following steps may be performed:
if the sleep time of the reduced dirty page collecting thread is not in the preset sleep time interval, maintaining the sleep time of the dirty page collecting thread unchanged.
If the sleep time of the increased dirty page collecting thread is not in the preset sleep time interval, maintaining the sleep time of the dirty page collecting thread unchanged.
In practical application, assume the initial sleep time T of the dirty page collection thread is 1000 milliseconds. If the real adjustment direction flag is speed_up, the sleep time is adjusted to 50% of its original value; if the flag is speed_down, the sleep time is adjusted to 200% of its original value. The adjustable range of the sleep time (the preset sleep time interval) is 50 milliseconds to 1000 milliseconds; outside this interval, the original value is kept without adjustment.
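The sleep-time adjustment with clamping can be written in a few lines. A minimal sketch using the concrete values from the text (ratios 50% and 200%, interval 50-1000 ms); the function name is hypothetical.

```python
T_MIN_MS, T_MAX_MS = 50, 1000  # preset sleep time interval from the text

def adjust_sleep(sleep_ms, direction):
    """Scale the collector thread's sleep time: halve it on speed_up
    (preset first ratio 50%), double it on speed_down (preset second
    ratio 200%); keep the old value if the result would leave the
    [50, 1000] ms interval."""
    if direction == "speed_up":
        candidate = sleep_ms * 0.5
    elif direction == "speed_down":
        candidate = sleep_ms * 2.0
    else:
        return sleep_ms
    if T_MIN_MS <= candidate <= T_MAX_MS:
        return candidate
    return sleep_ms  # out of range: keep the sleep time unchanged
```

Starting from 1000 ms, repeated speed_up decisions step the sleep time through 500, 250, 125, and ~62.5 ms before the lower clamp takes effect.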
Dynamically adjusting the sleep time of the dirty page collection thread in fact adjusts its workload: lowering the sleep time accelerates dirty page collection, adapting to faster dirty page generation and reducing the number of virtual machine exits caused by the ring being full.
Typically, memory dirty pages are not generated frequently; if the sleep time is adjusted too short, the dirty page collection thread idles unnecessarily and wastes processor resources. Therefore, the working frequency of the dirty page collection thread needs to be dynamically adjusted according to the real load of the virtual machine's client services to reach an equilibrium state, ensuring that client service performance does not drop rapidly because the ring is full, while the dirty page collection thread does not frequently idle and waste system resources when the pressure is low.
In order to enable those skilled in the art to better understand steps 201 to 204 of the present embodiment, the following description is given by way of example:
referring to fig. 3, a flowchart of a virtual machine migration method according to an embodiment of the present invention is shown, where a specific flow includes:
1. The dirty page collection thread of the virtual machine acquires the dirty ring lock.
2. All VCPUs on the virtual machine are traversed, all uncollected dirty page entries are found in the shared memory area (the ring), the offset of each dirty page is taken out of its entry, and the memory dirty page flag bit corresponding to the offset is stored into the dirty_bitmap of the user-mode space.
3. The number of dirty pages collected in the present round is counted.
4. The dirty ring lock is released.
5. The kernel KVM is notified to reclaim the ring's dirty page space.
6. The count information of virtual machine exit events caused by the ring being full is queried.
7. If the current count has increased relative to the last stored value, the real adjustment direction flag of the next round of the dirty page collection thread is set to accelerate the dirty page collection rate (speed_up), and the dirty page collection frequency dynamic adjustment process is entered directly.
8. If the current count has not increased relative to the last stored value, a jitter-filtering process is executed: the number of dirty pages collected in the present round is counted in QEMU's dirty ring dirty page collection thread, it is determined whether the present round's dirty page collection rate increased or decreased relative to the previous round, and it is further determined whether that change trend changed relative to the previous round. If the change trend changes, the stable count is reset; if it is unchanged, the stable count is incremented, and it is not incremented beyond the preset stable count threshold. If the stable count equals the preset stable count threshold and the change trend of the present round's dirty page collection rate is an increasing trend, the real adjustment direction of the next round is set to speed_up; if the stable count equals the threshold and the trend is a decreasing trend, the real adjustment direction of the next round is set to speed_down.
9. The dirty page collection frequency dynamic adjustment process is executed: the sleep time T is initially 1000 milliseconds; if the real adjustment direction is speed_up, the sleep time is adjusted to 50% of its original value; if it is speed_down, the sleep time is adjusted to 200% of its original value. The preset sleep time interval is 50 milliseconds to 1000 milliseconds; beyond this interval, the original value is kept without adjustment.
10. The thread sleeps for the sleep time T calculated by the dirty page collection frequency dynamic adjustment process.
11. The next round of memory dirty page collection continues, following the same steps 1-10.
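Steps 1-10 above can be consolidated into a single per-round function. This is an illustrative model only, not the QEMU implementation: the state dictionary keys and the function name are hypothetical, the concrete constants (threshold 3, ratios 0.5/2.0, interval 50-1000 ms) come from the text, and carrying the previous trend forward when the counts are equal is an assumption the text does not specify.

```python
def collection_round(state, pages_this_round, ring_full_exits):
    """One illustrative pass over steps 1-10: given this round's harvested
    dirty-page count and cumulative ring-full exit count, decide the
    adjustment direction and derive the next sleep time (milliseconds)."""
    direction = "keep"
    if ring_full_exits > state["prev_exits"]:
        direction = "speed_up"  # exits grew: speed up immediately (step 7)
    else:                        # otherwise filter jitter (step 8)
        if pages_this_round > state["prev_pages"]:
            trend = "increase"
        elif pages_this_round < state["prev_pages"]:
            trend = "decrease"
        else:
            trend = state["prev_trend"]  # equal counts: assumption
        if trend is not None and trend == state["prev_trend"]:
            state["stable"] = min(state["stable"] + 1, 3)  # cap at threshold
        else:
            state["stable"] = 0                            # jitter: reset
        state["prev_trend"] = trend
        if state["stable"] == 3:
            direction = "speed_up" if trend == "increase" else "speed_down"
    # step 9: scale the sleep time and clamp to the preset interval
    factor = {"speed_up": 0.5, "speed_down": 2.0}.get(direction, 1.0)
    candidate = state["sleep_ms"] * factor
    if 50 <= candidate <= 1000:
        state["sleep_ms"] = candidate
    state["prev_exits"] = ring_full_exits
    state["prev_pages"] = pages_this_round
    return state["sleep_ms"]
```

Under sustained pressure (growing exit counts or a stable increasing trend), the sleep time halves round by round; when the trend oscillates, the stable count keeps resetting and the sleep time holds steady.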
Virtual machine migration was tested using the above method; the test data are shown in Table 1. QEMU's own guestperf tool was used as the test tool. "Memory pressure" indicates how many GB of memory dirty pages are generated per second; the performance index "time spent updating 1 GB of memory" represents the efficiency with which the client accesses memory, and the shorter the time, the higher the performance. Because memory performance greatly affects client services, the client's memory access performance largely represents the availability of the client service. Comparison of the test data shows that the optimization scheme can effectively improve service availability during virtual machine live migration under high-pressure conditions, while also shortening migration time.
Table 1 (test data; provided as an image in the original publication)
The invention provides a method for improving virtual machine memory access performance during dirty-ring-based online migration, addressing the problem that the virtual machine's online live migration dirty ring feature causes client performance to drop rapidly, or even become nearly unavailable, in certain high-pressure scenarios. The method can dynamically adjust the working frequency of the dirty page collection thread according to the load of the migrated virtual machine, so that memory dirty pages are synchronized to user mode more quickly and smoothly, avoiding the degradation of client service performance caused by a large number of virtual machine exits. It can also accelerate dirty page synchronization, improve memory migration efficiency, and shorten the virtual machine's online migration time.
According to the invention, the frequency at which the user-mode QEMU collects dirty pages can be dynamically adjusted, and the pace of the kernel dirty page producer and the dirty page consumer can be balanced while dynamically adapting to different dirty page pressures, effectively reducing frequent virtual machine exits caused by insufficient space in the shared memory area ring, reducing the impact on client services, and improving client service availability during migration. When dynamically adjusting the load of the dirty page collection thread, the invention introduces a jitter-filtering process to solve the problem of continuous back-and-forth adjustment caused by transient dirty page fluctuations, and introduces a dirty page collection frequency dynamic adjustment process to dynamically adapt to changes in client virtual machine service pressure and to different pressure levels. The method can dynamically accelerate dirty page collection under high pressure, thereby reducing virtual machine exits and greatly improving client service availability; the jitter-filtering process and the dirty page collection frequency dynamic adjustment process allow the dynamic adjustment to proceed smoothly and correctly.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 4, a structural block diagram of a virtual machine migration apparatus provided by an embodiment of the present invention may specifically include the following modules:
the counting module 401 is configured to, if the virtual machine generates a virtual machine exit event based on a preset memory area filling event during live migration of the virtual machine, count the virtual machine exit events to obtain corresponding count information;
a statistics module 402, configured to count a dirty page collection rate change trend from a dirty page collection thread of the virtual machine;
the adjustment module 403 is configured to adjust a dirty page collection rate of the dirty page collection thread according to the count information and/or the dirty page collection rate variation trend, so as to adjust a recovery rate of the preset memory area in the virtual machine by adjusting the dirty page collection rate.
In an embodiment of the present invention, the count information includes first count information corresponding to a current round of dirty page collection and second count information corresponding to a previous round of dirty page collection, and the adjustment module includes:
and the increment sub-module is used for increasing the dirty page collection rate of the next round of dirty page collection of the dirty page collection thread if the count value corresponding to the first count information is larger than the count value corresponding to the second count information.
In an embodiment of the present invention, the change trend of the collection rate of the dirty pages includes a first change trend of the collection rate of the dirty pages of the present round and a second change trend of the collection rate of the dirty pages of the previous round, and the adjustment module includes:
and the adjustment sub-module is used for adjusting the dirty page collection rate of the dirty page collection thread according to the first change trend and the second change trend if the count value corresponding to the first count information is not larger than the count value corresponding to the second count information.
In an embodiment of the present invention, the statistics module includes:
a statistics sub-module, configured to count the number information of the collected dirty pages from the dirty page collection thread of the virtual machine; the quantity information comprises first quantity information collected in the round and second quantity information collected in the previous round;
A first determining submodule, configured to determine that the first trend of change of the current dirty page collection rate is an increasing trend if the number value corresponding to the first number information is greater than the number value corresponding to the second number information;
and the second determining submodule is used for determining that the first change trend of the collecting rate of the dirty pages is a reducing trend if the number value corresponding to the first number information is smaller than the number value corresponding to the second number information.
In an embodiment of the present invention, the adjustment submodule includes:
the first increasing unit is used for increasing the count value corresponding to the preset stable count information by a preset value if the first change trend is consistent with the second change trend; the preset stability counting information is used for measuring the stability of the change trend; the change trend includes an increasing trend and a decreasing trend;
the second increasing unit is used for increasing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread if the count value corresponding to the increased preset stable count information meets a preset stable count threshold condition and the first change trend is an increasing trend;
And the reducing unit is used for reducing the dirty page collection rate of the next round of dirty page collection by the dirty page collection thread if the count value corresponding to the increased preset stable count information meets the preset stable count threshold condition and the first change trend is a reduction trend.
In an embodiment of the present invention, the second increasing unit includes:
a first increment subunit, configured to increment the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round if the incremented count value corresponding to the preset stable count information is equal to a preset stable count threshold and the first change trend is an increment trend;
the lowering unit includes:
and the reducing subunit is configured to reduce the dirty page collection rate of the dirty page collection thread for performing the next round of dirty page collection if the count value corresponding to the increased preset stable count information is equal to the preset stable count threshold and the first change trend is a reducing trend.
In an embodiment of the present invention, the apparatus further includes:
the setting module is configured to set the count value corresponding to the preset stable count information to a preset initial value if the first change trend is inconsistent with the second change trend.
In an embodiment of the present invention, the second adding unit includes:
a reducing subunit, configured to reduce the sleep time of the dirty page collecting thread according to a preset first ratio, so as to increase the dirty page collecting rate of the dirty page collecting thread for performing dirty page collection in a next round by reducing the sleep time of the dirty page collecting thread.
In an embodiment of the present invention, the reducing unit includes:
and the second increasing subunit is used for increasing the sleep time of the dirty page collecting thread according to a preset second proportion so as to reduce the dirty page collecting rate of the next round of dirty page collecting of the dirty page collecting thread by increasing the sleep time of the dirty page collecting thread.
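The sleep-time scaling performed by the reducing subunit and the second increasing subunit can be sketched as below. The concrete ratios are assumptions: the patent only speaks of a "preset first proportion" and a "preset second proportion".

```python
def scale_sleep_time(sleep_ms, action, first_ratio=0.5, second_ratio=0.5):
    """Shorter sleep -> higher dirty page collection rate, and vice versa.
    first_ratio/second_ratio are assumed values for the preset proportions."""
    if action == "increase_rate":
        return sleep_ms * (1 - first_ratio)   # cut sleep by the first proportion
    if action == "decrease_rate":
        return sleep_ms * (1 + second_ratio)  # stretch sleep by the second proportion
    return sleep_ms
```

Tuning the rate through the thread's sleep time, rather than through an explicit token bucket, keeps the collector loop simple: it does a fixed amount of work per wakeup, so the wakeup interval alone determines the collection rate.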
In an embodiment of the present invention, the apparatus further includes:
the first maintenance module is used for maintaining the sleep time of the dirty page collection thread unchanged if the reduced sleep time of the dirty page collection thread is not in a preset sleep time interval;
and the second maintenance module is used for maintaining the sleep time of the dirty page collection thread unchanged if the increased sleep time of the dirty page collection thread is not in the preset sleep time interval.
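The clamping behavior of the two maintenance modules can be sketched as below; the interval bounds are assumed values, since the patent only refers to a "preset sleep time interval".

```python
# Bounds of the "preset sleep time interval" are assumed values.
SLEEP_MIN_MS, SLEEP_MAX_MS = 1.0, 1000.0

def apply_sleep_update(current_ms, proposed_ms):
    """Keep the old sleep time when the proposed value leaves the allowed interval."""
    if SLEEP_MIN_MS <= proposed_ms <= SLEEP_MAX_MS:
        return proposed_ms
    return current_ms
```

The interval keeps the controller from running away in either direction: repeated halving cannot drive the sleep time to zero (a busy loop), and repeated stretching cannot stall collection entirely.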
In an embodiment of the present invention, the statistics submodule includes:
The traversing and determining unit is used for traversing the virtual processor of the virtual machine and determining the preset memory area corresponding to the virtual processor;
the storage unit is used for storing the collected dirty page information into the preset memory area through the kernel of the virtual machine; the preset memory area is a shared memory area;
and the collection and statistics unit is used for collecting the dirty page information from the preset memory area through the dirty page collection thread and counting the collected number information of the dirty pages.
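The traversal-and-count flow of the three units above can be sketched as below. The sketch models each per-vCPU shared memory area as a simple list; the real mechanism (e.g. a KVM-style per-vCPU dirty ring filled by the kernel) is not specified here, so all names are illustrative.

```python
def collect_round(vcpu_regions):
    """Drain each vCPU's shared memory area and count this round's dirty pages."""
    count = 0
    for region in vcpu_regions:        # traverse every virtual processor's area
        while region:                  # entries written by the virtual machine kernel
            region.pop(0)              # the collection thread consumes one record
            count += 1
    return count                       # the "number information" for this round
```

Draining the areas is what frees space in the shared memory; the fuller those areas stay, the more often the guest triggers the memory-area-filling exit event that the counting module tracks.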
In an embodiment of the present invention, the virtual machine includes a virtualization component, the virtualization component including the dirty page collection thread; the virtualized component is a QEMU component.
In summary, in the embodiment of the present invention, during live migration of a virtual machine, if the virtual machine generates a virtual machine exit event based on a preset memory area filling event, the virtual machine exit event is counted, the change trend of the dirty page collection rate is determined from the dirty page collection thread of the virtual machine, and the dirty page collection rate of the dirty page collection thread is adjusted based on the count information and/or the change trend. This reduces the frequency of virtual machine exit events caused by the preset memory area filling event, optimizes guest service performance during live migration, effectively reduces the impact on the guest service, and improves the availability of the guest service during migration. In this way, the workload of the dirty page collection thread is adjusted dynamically according to the service pressure inside the virtual machine, where that pressure is represented by the count of virtual machine exit events and the change trend of the dirty page collection rate: under high pressure, dirty page collection is accelerated, which reduces virtual machine exit events and improves guest service availability.
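Putting the pieces together, one round of the feedback loop summarized above might look like the following sketch. All names and the threshold value are illustrative assumptions; the decision order (exit count first, then trend stability) follows the scheme as described.

```python
def choose_rate_action(exit_now, exit_prev, trend_now, trend_prev,
                       stable_count, threshold=3):
    """Return (action, new_stable_count) for the next collection round."""
    if exit_now > exit_prev:
        # Ring-full exits are rising: the guest dirties memory faster than the
        # collector drains it, so collect faster regardless of the rate trend.
        return "increase_rate", stable_count
    if trend_now != trend_prev:
        return "keep_rate", 0                      # reset the stability counter
    stable_count += 1
    if stable_count >= threshold:
        action = "increase_rate" if trend_now == "increasing" else "decrease_rate"
        return action, stable_count
    return "keep_rate", stable_count
```

The exit-event count acts as the fast path (immediate speedup under pressure), while the trend comparison acts as the slow path that settles the rate when pressure is steady.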
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
An embodiment of the invention also provides an electronic device, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the processes of the above virtual machine migration method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again.
An embodiment of the invention also provides a computer readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the processes of the above virtual machine migration method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again.
In this specification, each embodiment is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical and similar parts between the embodiments may be referred to one another.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The foregoing has described in detail a virtual machine migration method, a virtual machine migration apparatus, an electronic device and a computer readable storage medium. Specific examples have been applied herein to illustrate the principles and embodiments of the present invention, and the above examples are only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope in accordance with the ideas of the present invention; in view of the above, the contents of this description should not be construed as limiting the present invention.

Claims (11)

1. A virtual machine migration method, the method comprising:
in the process of virtual machine live migration, if the virtual machine generates a virtual machine exit event based on a preset memory area filling event, counting the virtual machine exit event to obtain corresponding counting information;
counting the change trend of the dirty page collection rate from the dirty page collection thread of the virtual machine;
the counting the change trend of the dirty page collection rate from the dirty page collection thread of the virtual machine comprises the following steps:
counting the collected number information of the dirty pages from the dirty page collection thread of the virtual machine; the quantity information comprises first quantity information collected in the round and second quantity information collected in the previous round;
If the number value corresponding to the first number information is larger than the number value corresponding to the second number information, determining that the first change trend of the round of dirty page collection rate is an increasing trend;
if the number value corresponding to the first number information is smaller than the number value corresponding to the second number information, determining that the first change trend of the current round of dirty page collection rate is a decreasing trend;
according to the counting information and/or the change trend of the dirty page collection rate, the dirty page collection rate of the dirty page collection thread is adjusted, so that the recovery rate of the preset memory area in the virtual machine is adjusted by adjusting the dirty page collection rate;
the counting information comprises first counting information corresponding to the collection of the dirty pages of the round and second counting information corresponding to the collection of the dirty pages of the previous round, the change trend of the collection rate of the dirty pages comprises a first change trend of the collection rate of the dirty pages of the round and a second change trend of the collection rate of the dirty pages of the previous round, and the adjustment of the collection rate of the dirty pages collection thread is carried out according to the counting information and/or the change trend of the collection rate of the dirty pages, and comprises the following steps:
if the count value corresponding to the first count information is larger than the count value corresponding to the second count information, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection;
If the count value corresponding to the first count information is not greater than the count value corresponding to the second count information, adjusting the dirty page collection rate of the dirty page collection thread according to the first change trend and the second change trend;
wherein said adjusting said dirty page collection rate of said dirty page collection thread according to said first trend and said second trend comprises:
if the first change trend is consistent with the second change trend, increasing a count value corresponding to preset stable count information by a preset value; the preset stability counting information is used for measuring the stability of the change trend; the change trend includes an increasing trend and a decreasing trend;
if the count value corresponding to the increased preset stable count information meets a preset stable count threshold condition and the first change trend is an increasing trend, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection;
and if the count value corresponding to the increased preset stable count information meets a preset stable count threshold condition and the first change trend is a decreasing trend, reducing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection.
2. The method of claim 1, wherein
if the count value corresponding to the increased preset stable count information meets a preset stable count threshold condition and the first change trend is an increasing trend, increasing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round, including:
if the count value corresponding to the increased preset stable count information is equal to a preset stable count threshold and the first change trend is an increasing trend, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection;
if the count value corresponding to the increased preset stable count information meets a preset stable count threshold condition and the first change trend is a decreasing trend, reducing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round, including:
and if the count value corresponding to the increased preset stable count information is equal to the preset stable count threshold and the first change trend is a decreasing trend, reducing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round.
3. The method according to claim 1, wherein the method further comprises:
And if the first change trend is inconsistent with the second change trend, setting the count value corresponding to the preset stable count information as a preset initial value.
4. The method of claim 1, wherein the increasing the dirty page collection rate for a next round of dirty page collection by the dirty page collection thread comprises:
and reducing the sleep time of the dirty page collecting thread according to a preset first proportion so as to increase the dirty page collecting rate of the next round of dirty page collecting of the dirty page collecting thread by reducing the sleep time of the dirty page collecting thread.
5. The method of claim 1, wherein the reducing the dirty page collection rate for a next round of dirty page collection by the dirty page collection thread comprises:
and increasing the sleep time of the dirty page collecting thread according to a preset second proportion so as to reduce the dirty page collecting rate of the next round of dirty page collecting of the dirty page collecting thread by increasing the sleep time of the dirty page collecting thread.
6. The method according to claim 4 or 5, characterized in that the method further comprises:
if the reduced sleep time of the dirty page collecting thread is not in the preset sleep time interval, maintaining the sleep time of the dirty page collecting thread unchanged;
If the increased sleep time of the dirty page collecting thread is not in the preset sleep time interval, maintaining the sleep time of the dirty page collecting thread unchanged.
7. The method of claim 1, wherein the counting the number of collected dirty pages from the dirty page collection thread of the virtual machine comprises:
traversing a virtual processor of the virtual machine, and determining the preset memory area corresponding to the virtual processor;
storing the collected dirty page information into the preset memory area through the kernel of the virtual machine; the preset memory area is a shared memory area;
and collecting dirty page information from the preset memory area through the dirty page collecting thread, and counting the collected number information of the dirty pages.
8. The method of claim 7, wherein the virtual machine comprises a virtualization component that includes the dirty page collection thread; the virtualized component is a QEMU component.
9. A virtual machine migration apparatus, the apparatus comprising:
the counting module is used for counting the virtual machine exit event to obtain corresponding counting information if the virtual machine generates the virtual machine exit event based on the preset memory area filling event in the virtual machine live migration process;
The statistics module is used for counting the change trend of the dirty page collection rate from the dirty page collection threads of the virtual machine;
the statistics module is further used for counting the collected quantity information of the dirty pages from the dirty page collection thread of the virtual machine; the quantity information comprises first quantity information collected in the current round and second quantity information collected in the previous round; if the number value corresponding to the first number information is larger than the number value corresponding to the second number information, determining that the first change trend of the current round of dirty page collection rate is an increasing trend; if the number value corresponding to the first number information is smaller than the number value corresponding to the second number information, determining that the first change trend of the current round of dirty page collection rate is a decreasing trend;
the adjustment module is used for adjusting the dirty page collection rate of the dirty page collection thread according to the counting information and/or the dirty page collection rate change trend so as to adjust the recovery rate of the preset memory area in the virtual machine by adjusting the dirty page collection rate;
wherein the counting information comprises first counting information corresponding to the collection of the dirty pages of the round and second counting information corresponding to the collection of the dirty pages of the previous round, the change trend of the collection rate of the dirty pages comprises a first change trend of the collection rate of the dirty pages of the round and a second change trend of the collection rate of the dirty pages of the previous round,
The adjustment module is further configured to increase the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round if the count value corresponding to the first count information is greater than the count value corresponding to the second count information; if the count value corresponding to the first count information is not greater than the count value corresponding to the second count information, adjusting the dirty page collection rate of the dirty page collection thread according to the first change trend and the second change trend;
the adjusting module is further configured to increase a count value corresponding to preset stable count information by a preset value if the first variation trend is consistent with the second variation trend; the preset stability counting information is used for measuring the stability of the change trend; the change trend includes an increasing trend and a decreasing trend; if the count value corresponding to the increased preset stable count information meets a preset stable count threshold condition and the first change trend is an increasing trend, increasing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection; and if the count value corresponding to the increased preset stable count information meets a preset stable count threshold condition and the first change trend is a decreasing trend, reducing the dirty page collection rate of the dirty page collection thread for the next round of dirty page collection.
10. An electronic device, comprising: a processor, a memory and a computer program stored on the memory and capable of running on the processor, which when executed by the processor implements a virtual machine migration method as claimed in any one of claims 1-8.
11. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, which computer program, when executed by a processor, implements a virtual machine migration method according to any one of claims 1-8.
CN202310075635.7A 2023-02-07 2023-02-07 Virtual machine migration method and device, electronic equipment and medium Active CN115827169B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310075635.7A CN115827169B (en) 2023-02-07 2023-02-07 Virtual machine migration method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN115827169A (en) 2023-03-21
CN115827169B (en) 2023-06-23

Family

ID=85520866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310075635.7A Active CN115827169B (en) 2023-02-07 2023-02-07 Virtual machine migration method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN115827169B (en)



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 100007 room 205-32, floor 2, building 2, No. 1 and No. 3, qinglonghutong a, Dongcheng District, Beijing

Patentee after: Tianyiyun Technology Co.,Ltd.

Address before: 100093 Floor 4, Block E, Xishan Yingfu Business Center, Haidian District, Beijing

Patentee before: Tianyiyun Technology Co.,Ltd.