CN111752710A - Data center PUE dynamic optimization method, system, equipment and readable storage medium - Google Patents

Publication number
CN111752710A
CN111752710A (application CN202010582733.6A; granted as CN111752710B)
Authority
CN
China
Prior art keywords
pue
server
data center
migrated
operation state
Prior art date
Legal status
Granted
Application number
CN202010582733.6A
Other languages
Chinese (zh)
Other versions
CN111752710B (en)
Inventor
李凌
翟天一
张鑫
钱声攀
李哲
申连腾
刘建杰
王树岭
贾强
李宇曜
Current Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
State Grid Shanghai Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
State Grid Shanghai Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, China Electric Power Research Institute Co Ltd CEPRI, State Grid Shanghai Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202010582733.6A
Publication of CN111752710A
Application granted
Publication of CN111752710B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/505: Allocation of resources to service a request, the resource being a machine, considering the load
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/5019: Workload prediction
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention belongs to the field of data centers and discloses a data center PUE dynamic optimization method, system, device, and readable storage medium. The method comprises: obtaining operation data of each server in a data center and determining each server's operation state from that data; obtaining a plurality of computation tasks to be migrated; and scheduling those tasks, according to the operation data, onto the servers in a low-load operation state until all low-load servers reach a full-load operation state. By scheduling the computation tasks to be migrated onto low-load servers so that they reach full load, the invention resolves the large long-term deviation between the actual and theoretical PUE values caused by uneven server load, and effectively reduces the data center's PUE value.

Description

Data center PUE dynamic optimization method, system, equipment and readable storage medium
Technical Field
The invention belongs to the field of data centers and relates to a data center PUE dynamic optimization method, system, device, and readable storage medium.
Background
Amid the new wave of the digital economy, novel business models are developing rapidly, and the technologies that carry them, such as 5G, the Internet of Things, cloud computing, big data, and artificial intelligence, are driving explosive growth of hyperscale data centers as the solid foundation of the Internet. At the same time, data centers face complex operation-and-maintenance management and high energy consumption; the development of new infrastructure brings both opportunities and challenges to large data center construction, and operation-and-maintenance management and energy-efficiency optimization play an important role in data centers' sustainable development.
PUE (Power Usage Effectiveness) is the ratio of the total energy consumed by a data center to the energy used by its IT load, and is an important index for evaluating a data center's energy-use efficiency. At present, the actual PUE value of most data centers deviates substantially from the theoretical PUE value, mainly because server load is unevenly distributed.
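The PUE ratio just defined can be computed directly; the sketch below is illustrative (the function name and the sample energy values are not from the patent) and simply divides total facility energy by IT-load energy.

```python
# Illustrative sketch of the PUE definition: total facility energy / IT-load energy.
# Values are made up for illustration; PUE >= 1.0 for any real facility.
def pue(total_facility_kwh: float, it_load_kwh: float) -> float:
    """Return the Power Usage Effectiveness ratio."""
    if it_load_kwh <= 0:
        raise ValueError("IT load energy must be positive")
    return total_facility_kwh / it_load_kwh

print(round(pue(1500.0, 1000.0), 2))  # 1.5
```

Raising the IT-load share of total energy (the denominator) is exactly what drives the ratio down, which is the lever the method below exploits.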
Disclosure of Invention
The invention aims to overcome the prior-art defect of a large deviation between a data center's actual and theoretical PUE values, and provides a data center PUE dynamic optimization method, system, device, and readable storage medium.
To this end, the invention adopts the following technical scheme:
in one aspect of the present invention, a data center PUE dynamic optimization method includes the following steps:
acquiring operation data information of each server in a data center, and determining the operation state of each server according to the operation data information; wherein the operation state comprises a low-load operation state and a full-load operation state;
acquiring a plurality of computation tasks to be migrated, and scheduling them, according to the operation data information, onto the servers in the low-load operation state until all low-load servers reach the full-load operation state; the computation tasks to be migrated are migratable computation tasks from outside the data center.
The data center PUE dynamic optimization method is further improved as follows:
the specific method for acquiring the operation data information of each server in the data center comprises the following steps:
acquiring the energy consumption data, computation tasks, and system resource utilization of each server by monitoring the data center's DCIM system, the operating system's computation task list, and the data center's resource utilization; and taking the energy consumption data, computation tasks, and system resource utilization as the operation data information.
The specific method for determining the operation state of each server according to the operation data information comprises the following steps:
determining the servers whose system resource utilization is 80% or more to be in the full-load operation state, and the remaining servers to be in the low-load operation state.
The specific method for respectively scheduling the plurality of computation tasks to be migrated to each server in the low-load operation state according to the operation data information comprises the following steps:
pre-judging the characteristic information of each calculation task in the server in a low-load running state through a PUE prediction model, and pre-judging the characteristic information of each calculation task to be migrated through the PUE prediction model, wherein the characteristic information comprises resource occupation information, running time and energy consumption values;
according to the feature information of the computation tasks in each low-load server and the feature information of each computation task to be migrated, scheduling the plurality of computation tasks to be migrated onto the low-load servers respectively.
The specific method for prejudging the characteristic information of the calculation task in each server in the low-load operation state through the PUE prediction model comprises the following steps:
acquiring the category of the calculation task in each server in the low-load operation state, selecting a corresponding PUE prediction model according to the category of the calculation task, and prejudging the characteristic information of the calculation task in each server in the low-load operation state through the corresponding PUE prediction model; and the corresponding PUE prediction model is obtained by training according to historical data of the same type of calculation tasks.
The specific method for prejudging the characteristic information of each to-be-migrated computing task through the PUE prediction model comprises the following steps of:
acquiring the category of each to-be-migrated computing task, selecting a corresponding PUE prediction model according to the category of the to-be-migrated computing task, and prejudging the characteristic information of each to-be-migrated computing task through the corresponding PUE prediction model; and the corresponding PUE prediction model is obtained by training according to historical data of the same type of calculation tasks.
The historical data includes historical run time and historical energy consumption values.
In another aspect of the present invention, a data center PUE dynamic optimization system includes:
the operation state acquisition module is used for acquiring operation data information of each server in the data center and determining the operation state of each server according to the operation data information; wherein the operation state comprises a low-load operation state and a full-load operation state; and
and the computation task migration module is used for acquiring a plurality of computation tasks to be migrated and scheduling them, according to the operation data information, onto the servers in the low-load operation state until all low-load servers reach the full-load operation state; the computation tasks to be migrated are migratable computation tasks from outside the data center.
In another aspect of the present invention, a terminal device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the data center PUE dynamic optimization method when executing the computer program.
In still another aspect of the present invention, a computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps of the data center PUE dynamic optimization method described above.
Compared with the prior art, the invention has the following beneficial effects:
In the data center PUE dynamic optimization method of the invention, the operation data of each server in the data center is obtained and each server's operation state is determined from it; migratable computation tasks from outside the data center are then obtained as the tasks to be migrated and scheduled onto the low-load servers until all of them reach the full-load state. This resolves uneven server load in the data center, keeps the servers at full load, raises the energy used by the IT load relative to total facility energy, reduces the data center's PUE value, and thereby achieves dynamic PUE optimization.
Drawings
FIG. 1 is a logic diagram of a data center PUE dynamic optimization method according to the present invention;
FIG. 2 is a block diagram of a flow of a data center PUE dynamic optimization method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a data center PUE dynamic optimization system according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention is described in further detail below with reference to the accompanying drawings:
In practice, the inventors found that a traditional data center's PUE level is determined mainly by hardware design. As the number of data centers and network transmission capacity keep growing, further optimizing PUE while still supporting heterogeneous computation tasks requires jointly optimizing the computation tasks and the PUE, thereby advancing green, energy-saving operation of traditional data centers. The invention addresses this by giving the data center a dynamic PUE optimization capability through a cross-domain scheduling technique for computation tasks, so as to reduce the gap between the actual and theoretical PUE values and further promote green, energy-saving development of data centers.
Referring to fig. 1, the logic of the data center PUE dynamic optimization method of the invention is as follows. First, the current computation tasks and overall power consumption are monitored, including current-task monitoring, computation-task prediction, and server power-consumption monitoring. On this basis, service migration scheduling is prepared: service data and computing resources are deconstructed to obtain computation tasks, which are migrated asynchronously. The computation tasks are then migrated and scheduled onto the servers in the data center that are in the low-load operation state, so that those servers reach a full-load (or high-load) operation state. "Full load" and "high load" differ only in definition; both denote the opposite of the low-load state. For example, servers with a system resource utilization of 80% or more may be defined as full load (or high load), and servers below 80% utilization as low load.
Referring to fig. 2, a data center PUE dynamic optimization method provided by an embodiment of the present invention is shown, and the data center PUE dynamic optimization method is implemented according to the above logic, and includes the following steps: s1: acquiring operation data information of each server in a data center, and determining the operation state of each server according to the operation data information; wherein the operation state comprises a low-load operation state and a full-load operation state; s2: and acquiring a plurality of computing tasks to be migrated, and respectively scheduling the plurality of computing tasks to be migrated to each server in a low-load running state according to the running data information until all the servers in the low-load running state reach a full-load running state, wherein the computing tasks to be migrated are migratable computing tasks outside the data center.
In this embodiment, the operation data of each server in the data center is obtained first. The operation data includes energy consumption data, computation tasks, and system resource utilization, which are obtained by monitoring the data center's DCIM system (Data Center Infrastructure Management system), the operating system's computation task list, and the data center's resource utilization, respectively.
Specifically, a DCIM system generally comprises six parts: basic monitoring, IT operation-and-maintenance management, asset management, capacity management, overhaul and work-order management, and standard interface functions. The basic monitoring part performs the power-and-environment monitoring function: it collects the operation states of the data center's key components and equipment through front-end sensors and the intelligent interfaces of the equipment itself, and these states include the energy consumption data of each server, so monitoring the DCIM system yields each server's energy consumption. The operating system's computation task list is obtained through the cloud platform management system; it covers the computation tasks of every server in the data center, and each task carries the identity of the server it runs on, so monitoring the task list yields the computation tasks on each server. The data center's resource utilization is obtained through the operating system's task manager; it characterizes the resource usage of the whole data center and includes the system resource utilization of each server, so monitoring it yields each server's utilization.
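The three monitored sources can be merged into one per-server record of "operation data information" as described above; this is a hypothetical sketch, and the function name, field names, and input shapes are assumptions, not APIs named by the patent.

```python
# Hypothetical sketch: merge DCIM energy readings, the OS computation task list,
# and per-server resource utilization into one operation-data record per server.
def collect_operation_data(dcim_energy, task_list, resource_usage):
    """Return {server_id: {energy_kwh, tasks, utilization}} from the three sources."""
    return {
        server: {
            "energy_kwh": dcim_energy[server],          # from the DCIM basic monitoring
            "tasks": task_list.get(server, []),          # from the OS computation task list
            "utilization": resource_usage[server],       # from the task manager
        }
        for server in dcim_energy
    }
```

A record like this is the input both to the state classification below and to the later scheduling step.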
The operation state of each server is then determined from the operation data. In this embodiment it is determined from the system resource utilization: a server with a utilization of 80% or more is classified as being in the full-load operation state, and the remaining servers as being in the low-load operation state. System resource utilization is interpreted differently for different server types: for a CPU server it is the CPU occupancy; for a GPU server, the GPU occupancy; and for a storage server, the occupancy of storage space.
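The classification rule above reduces to a single threshold comparison; the sketch below uses the embodiment's 80% figure, with hypothetical server names and a hypothetical function name.

```python
# Sketch of the state classification described above, using the embodiment's
# 80% threshold; server ids are illustrative.
FULL_LOAD_THRESHOLD = 0.80

def classify_servers(utilization: dict) -> dict:
    """Map server id -> 'full_load' or 'low_load' by system resource utilization."""
    return {
        server: "full_load" if u >= FULL_LOAD_THRESHOLD else "low_load"
        for server, u in utilization.items()
    }

states = classify_servers({"srv-1": 0.92, "srv-2": 0.40, "srv-3": 0.80})
# srv-1 and srv-3 classify as full load; srv-2 as low load
```

Note that 80% exactly counts as full load ("80% or more"), matching the embodiment's wording.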
Those skilled in the art will appreciate that the 80% figure is only one way to define full load, chosen with the servers' actual performance in mind; a threshold of 90% or more could equally define the full-load operation state. The higher the threshold (within what the server's own performance allows), the more computation tasks can later be scheduled onto a single server, so the server sustains a high level of resource utilization. This maximizes the energy used by the IT load within the data center; since PUE is the ratio of total facility energy to IT-load energy, the PUE value is reduced more effectively and PUE optimization of the data center is achieved.
After determining each server's load state, the servers in the low-load operation state are selected, and the obtained computation tasks to be migrated are scheduled onto them, according to the operation data, until all low-load servers reach the full-load state. The tasks to be migrated are migratable computation tasks from outside the data center; scheduling them into the data center via a cross-domain scheduling technique raises the IT energy consumption of all the data center's servers and thereby achieves PUE optimization.
Specifically, the computation tasks to be migrated are usually chosen from model-training tasks, big-data analysis tasks, and/or business-system background tasks. A model-training task comprises the training of a linear or nonlinear model. A big-data analysis task comprises big-data analysis and processing algorithms, including data-processing algorithms such as data fitting, parameter estimation, and interpolation, and planning algorithms such as linear programming, integer programming, multivariate programming, and quadratic programming. A business-system background task is a background processing task of a business system or cloud platform.
Computation migration mainly comprises six steps: migration environment sensing, task division, migration decision, task uploading, server execution, and result return; task division and the migration decision are the two core links. Taking a big-data analysis task as an example: when it is selected as a task to be migrated, the migration environment is first sensed in the current network, including the state of the servers able to provide the migration service, the number and computing performance of the remaining virtual machines, and the channel conditions of the wireless network; this provides reference information for the subsequent steps. Second, the big-data analysis task is divided: migratable tasks are generally program tasks that need no interaction with local equipment, often data-processing tasks; the subtasks formed by the division exchange data with one another but can be executed separately, and they are the subject of the next step, the migration decision. The migration decision is the core link of the migration process: it settles whether a task is migrated and over which channel. After the decision, the selected computation tasks are migrated to the server for execution; result return is the last link, in which the computation result is sent back once the server finishes the task.
In this embodiment, before migration, the feature information of the computation tasks already running on each low-load server and of each task to be migrated is obtained; the feature information includes resource occupation, run time, and energy consumption. The resource occupation, run time, and energy consumption of the tasks on each low-load server are analyzed and combined with that server's overall resources and overall energy consumption to obtain its remaining resources and remaining energy budget. These are compared with the resource occupation and energy consumption of each task to be migrated, and a task is migrated to a low-load server only when its resource occupation is smaller than that server's remaining resources. Meanwhile, tasks to be migrated whose run time roughly matches that of the tasks already on a server are preferred, to reduce the number of migrations. The run time of each task to be migrated is also analyzed: a suitable task runs once and then finishes; a task that loops indefinitely and never stops is unsuitable for migration, since migrating it would be ineffective or would drive the server into an endless loop.
Through this selection, the computation tasks to be migrated are distributed sensibly across the low-load servers, so that after migration no server's overall system resources are exceeded and the computation can proceed normally.
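The fit check just described (a task is placed only where its demand is within the server's remaining capacity) can be sketched as a simple greedy assignment. The first-fit policy, the dataclass names, and the normalized-capacity representation are assumptions for illustration; the patent itself does not prescribe a particular matching algorithm.

```python
# Minimal greedy sketch of the matching described above: each migratable task is
# assigned to the first low-load server whose remaining capacity covers its
# predicted resource demand. All names and the first-fit policy are illustrative.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    remaining_resources: float  # normalized free capacity, 0..1

@dataclass
class Task:
    name: str
    resource_demand: float      # predicted resource occupation, 0..1

def schedule(tasks, servers):
    """First-fit-decreasing assignment of tasks onto low-load servers."""
    placement = {}
    for task in sorted(tasks, key=lambda t: t.resource_demand, reverse=True):
        for srv in servers:
            if task.resource_demand <= srv.remaining_resources:
                srv.remaining_resources -= task.resource_demand
                placement[task.name] = srv.name
                break  # task placed; unplaceable tasks are simply left out
    return placement
```

Sorting tasks by descending demand is a common bin-packing heuristic that reduces the chance a large task is left without a server; the run-time matching and loop-detection checks described above would be additional filters before this step.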
In this embodiment, the feature information of the computation tasks on each low-load server, and likewise the feature information of each task to be migrated, is obtained by prediction with a PUE prediction model.
When predicting the feature information of the tasks on the servers, the category of the tasks on each low-load server is first obtained, the corresponding PUE prediction model is selected according to that category, and the feature information is then predicted with it; each model is trained on historical data of tasks of the same category. Task categories include model-training tasks and big-data analysis tasks: the former involve training a linear or nonlinear model, and the latter involve big-data analysis and processing algorithms. When predicting the feature information of each task to be migrated, the task's category is likewise obtained first, the corresponding PUE prediction model is selected, and the feature information is predicted with it. Using a category-specific model for each computation task effectively improves prediction accuracy.
The historical data of a computation task comprises historical run times and historical energy consumption values, which are split into a training set and a test set. An initial prediction model is trained with the training set to obtain a candidate model; the candidate model is evaluated on the test set, and the predicted values are compared with the actual values to obtain evaluation parameters; the model parameters are then corrected and the model retrained according to those evaluation parameters, yielding the final PUE prediction model.
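The train/test workflow above can be illustrated with a toy model. The patent does not specify a model family, so this sketch assumes a simple least-squares linear fit of energy consumption against run time; the data points and the mean-absolute-error evaluation parameter are made up for illustration.

```python
# Toy sketch of the described workflow: fit on a training split of historical
# (run time, energy) pairs, then compute an evaluation parameter on a test split.
# The linear model, data, and MAE metric are illustrative assumptions.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Historical (run-time hours, energy kWh) pairs for one task category.
history = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0), (5, 9.8), (6, 12.1)]
train, test = history[:4], history[4:]                     # train/test split
a, b = fit_linear([x for x, _ in train], [y for _, y in train])
mae = sum(abs((a * x + b) - y) for x, y in test) / len(test)  # evaluation parameter
```

If `mae` were unacceptably large, the "correct and retrain" step described above would adjust the model (or its inputs) and refit before the model is used for prediction.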
Referring to fig. 3, an embodiment of the present invention provides a data center PUE dynamic optimization system, which may be used to execute the method of the foregoing embodiments. The system comprises an operation state acquisition module and a computation task migration module. The operation state acquisition module obtains the operation data information of each server in the data center and determines each server's operation state from it, where the operation state comprises a low-load operation state and a full-load operation state. The computation task migration module obtains a plurality of computation tasks to be migrated and schedules them, according to the operation data information, onto the servers in the low-load operation state until all low-load servers reach the full-load operation state; the computation tasks to be migrated are migratable computation tasks from outside the data center.
In another embodiment, the data center PUE dynamic optimization system includes an operation state obtaining module and a calculation task migration module, and may further include a feature information prediction module, which is specifically configured to: and pre-judging the characteristic information of each calculation task in the server in a low-load running state through the PUE prediction model, and pre-judging the characteristic information of each calculation task to be migrated through the PUE prediction model, wherein the characteristic information comprises resource occupation information, running time and energy consumption values.
In another embodiment, when the characteristic information prediction module is configured to predict the characteristic information of the calculation task in the server in each low-load operation state through the PUE prediction model, the characteristic information prediction module is specifically configured to: acquiring the category of the calculation task in each server in the low-load operation state, selecting a corresponding PUE prediction model according to the category of the calculation task, and prejudging the characteristic information of the calculation task in each server in the low-load operation state through the corresponding PUE prediction model; and the corresponding PUE prediction model is obtained by training according to historical data of the same type of calculation tasks.
When the feature information prediction module is used for prejudging the feature information of each to-be-migrated computing task through the PUE prediction model, the feature information prediction module is specifically used for: acquiring the category of each to-be-migrated computing task, selecting a corresponding PUE prediction model according to the category of the to-be-migrated computing task, and prejudging the characteristic information of each to-be-migrated computing task through the corresponding PUE prediction model; and the corresponding PUE prediction model is obtained by training according to historical data of the same type of calculation tasks.
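The category-to-model selection described in the two paragraphs above amounts to a lookup table keyed by task category. A minimal sketch, with all names hypothetical:

```python
class ModelRegistry:
    """Hypothetical registry mapping a task category to the PUE prediction
    model trained on historical data of that same category of task."""

    def __init__(self):
        self._models = {}

    def register(self, category, model):
        """`model` is any callable taking a task and returning its
        pre-judged feature info (resource occupation, runtime, energy)."""
        self._models[category] = model

    def predict_features(self, task):
        """Select the corresponding PUE prediction model by the task's
        category and pre-judge the task's feature information with it."""
        model = self._models.get(task["category"])
        if model is None:
            raise KeyError(f"no PUE model trained for category {task['category']!r}")
        return model(task)
```

The same registry serves both cases: tasks already running on low-load servers and tasks waiting to be migrated are dispatched to the model trained on their own category's history.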
Based on the above description of the method embodiments and the device embodiments, those skilled in the art can understand that the data center PUE dynamic optimization method of the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Referring to fig. 4, an embodiment of the present invention further provides a terminal device, where the terminal device at least includes a processor, an input device, an output device, and a computer storage medium. The processor, input device, output device, and computer storage medium within the terminal may be connected by a bus or other means.
A computer storage medium may be stored in the memory of the terminal; the computer storage medium is used to store a computer program comprising program instructions, and the processor is used to execute the program instructions stored by the computer storage medium. The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. As the computing core and control core of the terminal, the processor is adapted to load and execute one or more instructions to implement a corresponding method flow or function. In an embodiment, the processor according to the embodiment of the present invention may be used to perform the data center PUE dynamic optimization method, including: acquiring operation data information of each server in a data center and determining the operation state of each server according to the operation data information, where the operation state includes a low-load operation state and a full-load operation state; and acquiring a plurality of calculation tasks to be migrated and scheduling them, according to the operation data information, to the servers in the low-load operation state until all servers in the low-load operation state reach the full-load operation state; and so on.
An embodiment of the present invention also provides a computer storage medium (memory), which is a storage device in the terminal device and is used for storing programs and data. It can be understood that the computer storage medium here may include a storage medium built into the terminal device and may also include an extended storage medium supported by the terminal device. The computer storage medium provides a storage space that stores the operating system of the terminal. One or more instructions, which may be one or more computer programs (including program code), are also stored in this storage space and are adapted to be loaded and executed by the processor. The computer storage medium may be a high-speed RAM or a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by a processor to implement the corresponding steps of the method described above with respect to the data center PUE dynamic optimization method embodiment; in a specific implementation, one or more instructions in a computer storage medium are loaded by a processor and perform the following steps: acquiring operation data information of each server in a data center, and determining the operation state of each server according to the operation data information; wherein the operation state comprises a low-load operation state and a full-load operation state; and acquiring a plurality of computing tasks to be migrated, and respectively scheduling the plurality of computing tasks to be migrated to each server in a low-load running state according to the running data information until all the servers in the low-load running state reach a full-load running state, wherein the computing tasks to be migrated are migratable computing tasks outside the data center.
In another embodiment, when obtaining the operation data information of each server in the data center, the one or more instructions are loaded and specifically executed by the processor: monitoring a DCIM system, an operating system calculation task list and a resource utilization rate of a data center, and acquiring energy consumption data, calculation tasks and a system resource utilization rate of each server in the data center; and determining the energy consumption data, the calculation tasks and the system resource utilization rate as the operation data information.
In another embodiment, when the operation state of each server is determined according to the operation data information, the one or more instructions are loaded and specifically executed by the processor to: determine servers whose system resource utilization rate is 80% or higher to be in the full-load operation state, and determine the remaining servers to be in the low-load operation state.
In another embodiment, when a plurality of to-be-migrated computing tasks are respectively scheduled to each server in a low-load operating state according to the operating data information, the one or more instructions are loaded and specifically executed by the processor: pre-judging the characteristic information of each calculation task in the server in a low-load running state through a PUE prediction model, and pre-judging the characteristic information of each calculation task to be migrated through the PUE prediction model, wherein the characteristic information comprises resource occupation information, running time and energy consumption values; and according to the characteristic information of the computing task in each low-load running state server and the characteristic information of each computing task to be migrated, respectively scheduling the plurality of computing tasks to be migrated to each low-load running state server.
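The second step above — placing tasks using the pre-judged feature information — can be sketched as an energy-aware assignment. The function and parameter names are illustrative assumptions; `predict(task, server)` stands in for the PUE prediction model's pre-judged energy consumption value of running that task on that server.

```python
def schedule_by_prediction(low_load_servers, tasks, predict):
    """For each calculation task to be migrated, consult the PUE
    prediction model (`predict(task, server) -> predicted energy value`)
    and pick the low-load server with the smallest predicted energy cost.
    Returns the migration plan as a list of (task, server) pairs."""
    plan = []
    for task in tasks:
        best = min(low_load_servers, key=lambda s: predict(task, s))
        plan.append((task, best))
    return plan
```

In a complete system this would be combined with the capacity check from the classification step, so a placement never pushes a server past the full-load threshold.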
In another embodiment, when the feature information of the computation task in the server in each low-load operation state is predicted by the PUE prediction model, the one or more instructions are loaded and specifically executed by the processor: acquiring the category of the calculation task in each server in the low-load operation state, selecting a corresponding PUE prediction model according to the category of the calculation task, and prejudging the characteristic information of the calculation task in each server in the low-load operation state through the corresponding PUE prediction model; and the corresponding PUE prediction model is obtained by training according to historical data of the same type of calculation tasks.
In another embodiment, when the feature information of each to-be-migrated computing task is pre-judged by the PUE prediction model, the one or more instructions are loaded and specifically executed by the processor: acquiring the category of each to-be-migrated computing task, selecting a corresponding PUE prediction model according to the category of the to-be-migrated computing task, and prejudging the characteristic information of each to-be-migrated computing task through the corresponding PUE prediction model; and the corresponding PUE prediction model is obtained by training according to historical data of the same type of calculation tasks.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (10)

1. A data center PUE dynamic optimization method is characterized by comprising the following steps:
acquiring operation data information of each server in a data center, and determining the operation state of each server according to the operation data information; wherein the operation state comprises a low-load operation state and a full-load operation state;
and acquiring a plurality of computing tasks to be migrated, and respectively scheduling the plurality of computing tasks to be migrated to each server in a low-load running state according to the running data information until all the servers in the low-load running state reach a full-load running state, wherein the computing tasks to be migrated are migratable computing tasks outside the data center.
2. The data center PUE dynamic optimization method according to claim 1, wherein the specific method for acquiring the operation data information of each server in the data center is as follows:
acquiring energy consumption data, calculation tasks and the system resource utilization rate of each server in the data center by monitoring a DCIM system, an operating system calculation task list and the resource utilization rate of the data center; and determining the energy consumption data, the calculation tasks and the system resource utilization rate as the operation data information.
3. The data center PUE dynamic optimization method according to claim 2, wherein the specific method for determining the operation state of each server according to the operation data information is as follows:
determining servers whose system resource utilization rate is 80% or higher to be in a full-load operation state, and determining the remaining servers to be in a low-load operation state.
4. The data center PUE dynamic optimization method according to claim 2, wherein the specific method for respectively scheduling the plurality of computation tasks to be migrated to each server in the low-load operation state according to the operation data information comprises:
pre-judging the characteristic information of each calculation task in the server in a low-load running state through a PUE prediction model, and pre-judging the characteristic information of each calculation task to be migrated through the PUE prediction model, wherein the characteristic information comprises resource occupation information, running time and energy consumption values;
and according to the characteristic information of the computing task in each low-load running state server and the characteristic information of each computing task to be migrated, respectively scheduling the plurality of computing tasks to be migrated to each low-load running state server.
5. The data center PUE dynamic optimization method according to claim 4, wherein the specific method for prejudging the feature information of the calculation task in each server in the low-load operation state through the PUE prediction model comprises the following steps:
acquiring the category of the calculation task in each server in the low-load operation state, selecting a corresponding PUE prediction model according to the category of the calculation task, and prejudging the characteristic information of the calculation task in each server in the low-load operation state through the corresponding PUE prediction model; and the corresponding PUE prediction model is obtained by training according to historical data of the same type of calculation tasks.
6. The data center PUE dynamic optimization method according to claim 4, wherein the specific method for prejudging the feature information of each to-be-migrated computing task through the PUE prediction model is as follows:
acquiring the category of each to-be-migrated computing task, selecting a corresponding PUE prediction model according to the category of the to-be-migrated computing task, and prejudging the characteristic information of each to-be-migrated computing task through the corresponding PUE prediction model; and the corresponding PUE prediction model is obtained by training according to historical data of the same type of calculation tasks.
7. The data center PUE dynamic optimization method according to claim 5 or 6, wherein the historical data comprises historical running time and historical energy consumption values.
8. A data center PUE dynamic optimization system is characterized by comprising:
the operation state acquisition module is used for acquiring operation data information of each server in the data center and determining the operation state of each server according to the operation data information; wherein the operation state comprises a low-load operation state and a full-load operation state; and
the computing task migration module is used for acquiring a plurality of computing tasks to be migrated, and respectively scheduling the plurality of computing tasks to be migrated to each server in a low-load running state according to the running data information until all the servers in the low-load running state reach a full-load running state, wherein the computing tasks to be migrated are migratable computing tasks outside the data center.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the data center PUE dynamic optimization method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, and the computer program, when being executed by a processor, implements the steps of the data center PUE dynamic optimization method according to any one of claims 1 to 7.
CN202010582733.6A 2020-06-23 2020-06-23 Data center PUE dynamic optimization method, system and equipment and readable storage medium Active CN111752710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010582733.6A CN111752710B (en) 2020-06-23 2020-06-23 Data center PUE dynamic optimization method, system and equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010582733.6A CN111752710B (en) 2020-06-23 2020-06-23 Data center PUE dynamic optimization method, system and equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111752710A true CN111752710A (en) 2020-10-09
CN111752710B CN111752710B (en) 2023-01-31

Family

ID=72676862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010582733.6A Active CN111752710B (en) 2020-06-23 2020-06-23 Data center PUE dynamic optimization method, system and equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111752710B (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030126202A1 (en) * 2001-11-08 2003-07-03 Watt Charles T. System and method for dynamic server allocation and provisioning
US20100333105A1 (en) * 2009-06-26 2010-12-30 Microsoft Corporation Precomputation for data center load balancing
CN102232282A (en) * 2010-10-29 2011-11-02 华为技术有限公司 Method and apparatus for realizing load balance of resources in data center
CN103412635A (en) * 2013-08-02 2013-11-27 清华大学 Energy-saving method and energy-saving device of data center
CN104991854A (en) * 2015-06-12 2015-10-21 北京奇虎科技有限公司 Method and system for monitoring and statistics of server sources
CN105141541A (en) * 2015-09-23 2015-12-09 浪潮(北京)电子信息产业有限公司 Task-based dynamic load balancing scheduling method and device
CN105208119A (en) * 2015-09-21 2015-12-30 重庆大学 Cloud data central task allocation method, device and system
US20160328273A1 (en) * 2015-05-05 2016-11-10 Sap Se Optimizing workloads in a workload placement system
WO2017025696A1 (en) * 2015-08-07 2017-02-16 Khalifa University of Science, Technology, and Research Methods and systems for workload distribution
CN106899660A (en) * 2017-01-26 2017-06-27 华南理工大学 Cloud data center energy-saving distribution implementation method based on trundle gray forecast model
US20170315838A1 (en) * 2016-04-29 2017-11-02 Hewlett Packard Enterprise Development Lp Migration of virtual machines
CN107977271A (en) * 2017-12-21 2018-05-01 郑州云海信息技术有限公司 A kind of data center's total management system load-balancing method
CN108694068A (en) * 2017-03-29 2018-10-23 丛林网络公司 For the method and system in virtual environment
CN111104222A (en) * 2019-12-16 2020-05-05 上海众源网络有限公司 Task processing method and device, computer equipment and storage medium
WO2020119051A1 (en) * 2018-12-10 2020-06-18 平安科技(深圳)有限公司 Cloud platform resource usage prediction method and terminal device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhang Xiaobing et al.: "Dynamic scheduling strategy of cloud computing resources based on virtual machine migration", Software Guide (《软件导刊》) *
Qian Yurong et al.: "Research on energy-aware dynamic virtual machine migration strategy in cloud environments", Microelectronics & Computer (《微电子学与计算机》) *
Wei Liang et al.: "Virtual machine consolidation algorithm based on workload prediction", Journal of Electronics & Information Technology (《电子与信息学报》) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930724A (en) * 2020-10-14 2020-11-13 腾讯科技(深圳)有限公司 Data migration method and device, storage medium and electronic equipment
CN111930724B (en) * 2020-10-14 2021-03-16 腾讯科技(深圳)有限公司 Data migration method and device, storage medium and electronic equipment
CN114328472A (en) * 2022-03-15 2022-04-12 北京数腾软件科技有限公司 AI-based data migration method and system
CN115907202A (en) * 2022-12-13 2023-04-04 中国通信建设集团设计院有限公司 Data center PUE calculation analysis method and system under double-carbon background
CN115907202B (en) * 2022-12-13 2023-10-24 中国通信建设集团设计院有限公司 Data center PUE (physical distribution element) calculation analysis method and system under double-carbon background

Also Published As

Publication number Publication date
CN111752710B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
CN111752710B (en) Data center PUE dynamic optimization method, system and equipment and readable storage medium
Wu Multi-objective decision-making for mobile cloud offloading: A survey
CN109324875B (en) Data center server power consumption management and optimization method based on reinforcement learning
CN103677958B (en) The resource regulating method and device of a kind of virtual cluster
CN103473139A (en) Virtual machine cluster resource allocation and scheduling method
CN113157422A (en) Cloud data center cluster resource scheduling method and device based on deep reinforcement learning
CN103916438B (en) Cloud testing environment scheduling method and system based on load forecast
CN111984381A (en) Kubernetes resource scheduling optimization method based on historical data prediction
CN110417686B (en) Cloud resource dynamic scheduling system
CN102707995A (en) Service scheduling method and device based on cloud computing environments
CN103699443A (en) Task distributing method and scanner
CN115543626A (en) Power defect image simulation method adopting heterogeneous computing resource load balancing scheduling
Akoglu et al. Putting data science pipelines on the edge
Mazidi et al. An autonomic decision tree‐based and deadline‐constraint resource provisioning in cloud applications
CN110990160A (en) Static security analysis container cloud elastic expansion method based on load prediction
Lu et al. InSTechAH: Cost-effectively autoscaling smart computing hadoop cluster in private cloud
CN105426247A (en) HLA federate planning and scheduling method
Lordan et al. Energy-aware programming model for distributed infrastructures
CN117076882A (en) Dynamic prediction management method for cloud service resources
CN117149410A (en) AI intelligent model based training, scheduling, commanding and monitoring system
CN103685541B (en) IaaS cloud system operating rate device for controlling dynamically, system and method
CN116541128A (en) Load adjusting method, device, computing equipment and storage medium
CN105162844B (en) A kind of method and device of task distribution
De Vleeschauwer et al. 5Growth Data-driven AI-based Scaling
CN114021733B (en) Model training optimization method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant