CN114968556A - Data center energy consumption management method and system - Google Patents

Data center energy consumption management method and system

Info

Publication number
CN114968556A
CN114968556A (application number CN202210449494.6A)
Authority
CN
China
Prior art keywords
server
time delay
waiting time
model
servers
Prior art date
Legal status
Pending
Application number
CN202210449494.6A
Other languages
Chinese (zh)
Inventor
周显敬
刘虎
汪寒雨
黄银地
王春枝
吴珺
严灵毓
叶志伟
Current Assignee
Wuhan Zhuoer Information Technology Co ltd
Hubei University of Technology
Original Assignee
Wuhan Zhuoer Information Technology Co ltd
Hubei University of Technology
Priority date
Filing date
Publication date
Application filed by Wuhan Zhuoer Information Technology Co ltd and Hubei University of Technology
Priority to CN202210449494.6A
Publication of CN114968556A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3058Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
    • G06F11/3062Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations where the monitored property is the power consumption
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5019Workload prediction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/508Monitor
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Power Sources (AREA)

Abstract

The invention provides a data center energy consumption management method and system. The method comprises: obtaining historical access traffic data of a data center and training a long short-term memory (LSTM) network on the historical access traffic data to obtain a traffic prediction model; counting the types and numbers of servers corresponding to the historical access traffic, and counting the average user waiting delay; constructing a model of the influence of the working environment on server performance; predicting the access traffic of different time periods based on the traffic prediction model, allocating a corresponding number of servers in each period under the constraint that the average waiting delay is smaller than a predetermined threshold, and setting the corresponding refrigeration equipment through the influence model with minimum energy consumption as the target; and performing feedback adjustment of the number of servers and the refrigeration equipment based on the measured average user waiting delay and sampling of the working environment. With this scheme, the energy consumption of the data center can be reduced while normal user experience is preserved.

Description

Data center energy consumption management method and system
Technical Field
The invention relates to the field of computers, in particular to a data center energy consumption management method and system.
Background
A data center generally provides computing, management, transmission and storage services for network data and houses a large number of server hosts. Because the number of servers in a data center is large, a large amount of electric energy is consumed both by server operation and by controlling the server ambient temperature; under current low-carbon development requirements, the energy consumption of data centers needs to be reduced.
Existing energy consumption control methods mostly adjust the energy consumption of the refrigeration system, for example by setting the air conditioning temperature or the number of running air conditioners, on the premise that the hosts work at a normal ambient temperature. However, these schemes do not consider the power consumption of the servers themselves, so the energy consumption of the data center remains high.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data center energy consumption management method and system to address the relatively high energy consumption of existing data centers.
In a first aspect of the embodiments of the present invention, a data center energy consumption management method is provided, comprising:
obtaining historical access traffic data of a data center, and training a long short-term memory (LSTM) network on the historical access traffic data to obtain a traffic prediction model;
counting the types and numbers of servers corresponding to the historical access traffic, and counting the average user waiting delay;
constructing a model of the influence of the working environment on server performance;
predicting the access traffic of different time periods based on the traffic prediction model, allocating a corresponding number of servers in each period under the constraint that the average waiting delay is smaller than a predetermined threshold, and setting the corresponding refrigeration equipment through the influence model with minimum energy consumption as the target;
and performing feedback adjustment of the number of servers and the refrigeration equipment based on the measured average user waiting delay and sampling of the working environment.
In a second aspect of the embodiments of the present invention, a data center energy consumption management system is provided, comprising:
a model training module, configured to obtain historical access traffic data of the data center, and to train a long short-term memory (LSTM) network on the historical access traffic data to obtain a traffic prediction model;
a data statistics module, configured to count the types and numbers of servers corresponding to the historical access traffic, and to count the average user waiting delay;
a model construction module, configured to construct a model of the influence of the working environment on server performance;
a configuration module, configured to predict the access traffic of different time periods based on the traffic prediction model, to allocate a corresponding number of servers in each period under the constraint that the average waiting delay is smaller than a predetermined threshold, and to set the corresponding refrigeration equipment through the influence model with minimum energy consumption as the target;
and a feedback adjustment module, configured to perform feedback adjustment of the number of servers and the refrigeration equipment based on the measured average user waiting delay and sampling of the working environment.
In a third aspect of the embodiments of the present invention, an electronic device is provided that includes at least a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the method according to the first aspect are implemented.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided that stores a computer program; when executed by a processor, the computer program implements the steps of the method provided in the first aspect.
In the embodiments of the invention, the number of servers is configured according to changes in the access traffic of the data center, and the refrigeration equipment is configured according to the influence of the working environment on server energy consumption and computational efficiency. The number of servers can therefore be set reasonably without affecting user service, and the refrigeration equipment can be set reasonably while accounting for the influence of ambient temperature on server power consumption and operating efficiency, so the power consumption of the servers and the refrigeration equipment, and thus the total energy consumption of the data center, is effectively reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required for the embodiments or for the description of the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a method for managing energy consumption of a data center according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a data center energy consumption management system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises" and "comprising," when used in this specification and claims, and in the accompanying drawings and figures, are intended to cover non-exclusive inclusions, such that a process, method or system, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements. In addition, "first" and "second" are used to distinguish different objects, and are not used to describe a specific order.
Referring to fig. 1, fig. 1 is a schematic flowchart of a data center energy consumption management method provided in an embodiment of the present invention, including:
s101, obtaining historical access flow data of a data center, and training a long and short memory network based on the historical access flow data to obtain a flow prediction model;
for the provided data center service of a specific client or part of clients, the server access flow data corresponding to the client can be obtained. Alternatively, the server user access traffic data may be obtained for a portion of the data center or for all of the server user access traffic data. For the servers which are determined to be capable of providing the customer service, historical access flow corresponding to the servers in different periods is counted.
Based on historical traffic data of different time periods, the constructed traffic prediction model can be trained to obtain the trained traffic prediction model, and the trained traffic prediction model is used for predicting access traffic corresponding to different current time periods.
For example, the access traffic of one day can be counted in two hours as a time period, and the traffic of the current next time period can be predicted through training of a traffic prediction model.
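By way of illustration only, the following sketch shows how such a traffic prediction model might be trained. The window length, network size, synthetic traffic series and training hyperparameters are assumptions of this example and are not specified by the embodiment.

```python
# Illustrative sketch (not the embodiment's implementation): an LSTM that learns to
# predict the access traffic of the next 2-hour period from the preceding periods.
# The window length, hidden size and synthetic data are assumptions of this example.
import math
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                       # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])         # traffic of the next period

def make_windows(series, window=12):
    """Slice a per-period traffic series into (history window, next value) pairs."""
    xs = [series[i:i + window] for i in range(len(series) - window)]
    ys = [series[i + window] for i in range(len(series) - window)]
    x = torch.tensor(xs, dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(ys, dtype=torch.float32).unsqueeze(-1)
    return x, y

# Synthetic "historical access traffic" (one value per 2-hour period), normalised to [0, 1].
history = [0.5 + 0.4 * math.sin(i * math.pi / 6) for i in range(240)]
x, y = make_windows(history)

model = TrafficLSTM()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):                        # train the traffic prediction model
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

print(f"predicted traffic for the next period: {model(x[-1:]).item():.3f}")
```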
S102, counting the types and numbers of servers corresponding to the historical access traffic, and counting the average user waiting delay;
For the historical traffic data of each time period, the number of servers corresponding to different traffic volumes needs to be obtained; for example, when the access volume exceeds one million, three servers need to be started, and different traffic ranges require different numbers of running servers. At the same time, the user waiting delays observed when different numbers of servers are running under different access traffic volumes are collected.
The peak access volume and average access volume of each time period, together with the corresponding maximum and average server numbers, are counted; the peak waiting delay and average waiting delay of each time period are calculated; and a relational model between the data access volume of the data center, the number of servers and the waiting delay is constructed.
In some embodiments, when the peak waiting delay exceeds a certain value, additional servers may be considered. Alternatively, when the peak access volume exceeds a certain value, additional servers need to be considered.
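A minimal sketch of this bookkeeping follows. The sample records, the per-server capacity and the load-to-delay relation are invented for illustration; the embodiment only requires that such statistics and a relational model exist, not these particular formulas.

```python
# Illustrative sketch: per-period peak/average statistics and a toy relational model
# linking access volume, server count and waiting delay. All numbers are assumptions.
from collections import defaultdict
from statistics import mean

# (period index, access volume, servers running, observed waiting delay in ms)
records = [
    (0,   800_000, 2, 120.0), (0, 1_200_000, 3, 140.0),
    (1, 2_100_000, 5, 150.0), (1, 2_500_000, 5, 165.0),
]

per_period = defaultdict(list)
for period, accesses, servers, delay in records:
    per_period[period].append((accesses, servers, delay))

stats = {}                                   # period -> statistics used by the model
for period, rows in per_period.items():
    accesses, servers, delays = zip(*rows)
    stats[period] = {
        "peak_access": max(accesses), "avg_access": mean(accesses),
        "max_servers": max(servers),  "avg_servers": mean(servers),
        "peak_delay":  max(delays),   "avg_delay":  mean(delays),
    }

def waiting_delay(accesses, servers, capacity=500_000, base_ms=100.0):
    """Toy relational model: delay grows once per-server load passes ~70 %."""
    load = accesses / (servers * capacity)
    return base_ms * (1.0 + 5.0 * max(0.0, load - 0.7))

def servers_needed(accesses, threshold_ms=200.0):
    """Smallest server count whose modelled average delay stays under the threshold."""
    n = 1
    while waiting_delay(accesses, n) > threshold_ms:
        n += 1
    return n

print(stats[1], servers_needed(2_500_000))
```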
S103, constructing a model of the influence of the working environment on server performance;
Under different working environments, servers have different computational efficiency and power consumption. The working environment generally refers to the ambient temperature, and may also include humidity and other factors.
The influence of the working environment on server performance can be obtained from the manufacturer according to the server model, or the corresponding influence data can be measured in-house.
The construction includes: testing the relation curves between the ambient temperature and, respectively, the server computation rate and the server power consumption; and constructing the influence model of ambient temperature on server performance with minimum refrigeration equipment power consumption, minimum server power consumption and maximum server computation rate as the optimization targets.
Further, a relational model between refrigeration equipment power consumption and ambient temperature, and a relational model among ambient temperature, server power consumption and server computational efficiency, are established respectively; the minimum total power consumption of the refrigeration equipment and the servers is taken as the first optimization target, the maximum computational performance of the servers as the second optimization target, and weights are set for the optimization targets.
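As an illustration of such an influence model, the sketch below fits simple polynomials to hypothetical (temperature, computation rate, power) measurements and combines the two optimization targets with weights. The measurement values, curve forms and weights are assumptions of this example, not values prescribed by the embodiment.

```python
# Illustrative sketch: fit relation curves of ambient temperature vs. computation rate
# and vs. server power, then combine the two optimisation targets with weights.
# All measurement points, model forms and weights below are assumptions.
import numpy as np

# Hypothetical measured operating points: inlet temperature (deg C),
# relative computation rate, per-server power (kW).
temps = np.array([18.0, 22.0, 26.0, 30.0, 34.0])
rate  = np.array([1.00, 1.00, 0.97, 0.92, 0.84])
power = np.array([4.8,  5.0,  5.3,  5.8,  6.5])

rate_fit  = np.polyfit(temps, rate, 2)       # computation rate vs. temperature
power_fit = np.polyfit(temps, power, 2)      # server power vs. temperature

def cooling_power(setpoint, outside=35.0, kw_per_degree=0.9):
    """Toy model: refrigeration power grows with the temperature gap it must maintain."""
    return max(0.0, outside - setpoint) * kw_per_degree

def weighted_objective(setpoint, n_servers, w_power=0.7, w_perf=0.3):
    """First target: total power of servers + cooling (minimise).
    Second target: aggregate computation rate (maximise, hence the minus sign)."""
    total_power = n_servers * np.polyval(power_fit, setpoint) + cooling_power(setpoint)
    total_rate = n_servers * np.polyval(rate_fit, setpoint)
    return w_power * total_power - w_perf * total_rate

candidates = np.arange(18.0, 32.0, 0.5)
best = min(candidates, key=lambda t: weighted_objective(t, n_servers=5))
print(f"setpoint minimising the weighted objective: {best:.1f} deg C")
```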
S104, predicting the access traffic of different time periods based on the traffic prediction model, allocating a corresponding number of servers in each period under the constraint that the average waiting delay is smaller than a predetermined threshold, and setting the corresponding refrigeration equipment through the influence model with minimum energy consumption as the target;
Based on the traffic prediction data, the corresponding number of servers to run is allocated in advance, so that, while user requests are still served, the problems caused by starting too many or too few servers, or by temporarily starting and shutting down servers, are avoided.
The refrigeration equipment generally includes air conditioners, fans, liquid cooling devices and the like, and may also be other devices for adjusting the ambient temperature of the servers, which is not limited here.
Once the number of servers is determined, the power consumption of the servers and the refrigeration equipment needs to be reduced while normal operation of the servers in the area is ensured; that is, the ambient temperature is set and the corresponding refrigeration equipment is configured with minimum power consumption of the servers and the refrigeration equipment as the target.
Specifically, with the minimum total power consumption of the refrigeration equipment and the servers as the target, and on the premise that the computational performance of the servers satisfies the waiting delay constraint, the corresponding ambient temperature and number of refrigeration devices are determined based on the influence model; the refrigeration equipment is then set according to that ambient temperature.
Because server computational efficiency drops when the ambient temperature is too high, the user waiting delay constraint must be taken into account.
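The following sketch illustrates one way such a configuration step could be carried out: for the predicted traffic of a period it searches server count, temperature setpoint and number of cooling units for the lowest total power whose modelled average waiting delay stays under the threshold. The capacity, power and delay models are simple stand-ins for the fitted relational and influence models; their parameters are assumptions of this example.

```python
# Illustrative sketch: choose server count, temperature setpoint and cooling-unit count
# for a period so that total power is minimal while the modelled waiting delay stays
# below the threshold. The simple models and constants are assumptions of this example.
from itertools import product

PER_SERVER_CAPACITY = 500_000     # requests per period one server handles at full rate
DELAY_THRESHOLD_MS = 200.0
UNIT_CAPACITY_KW = 20.0           # cooling capacity one refrigeration unit provides

def computation_rate(temp):       # relative computation rate drops above ~26 deg C
    return 1.0 if temp <= 26 else max(0.6, 1.0 - 0.02 * (temp - 26))

def server_power(temp, n):        # kW: servers draw more power when run hotter
    return n * (5.0 + 0.08 * max(0, temp - 20))

def required_cooling(temp, n):    # kW of cooling needed to hold the setpoint
    return server_power(temp, n) * (1.0 + 0.05 * max(0, 30 - temp))

def cooling_power(temp, units):   # kW drawn by the refrigeration units themselves
    return units * (1.5 + 0.12 * max(0, 30 - temp))

def avg_delay(traffic, n, temp):  # delay grows once effective per-server load passes 70 %
    load = traffic / (n * PER_SERVER_CAPACITY * computation_rate(temp))
    return 100.0 * (1.0 + 5.0 * max(0.0, load - 0.7))

def configure(predicted_traffic):
    best = None
    for n, temp, units in product(range(1, 21), range(18, 33), range(1, 7)):
        if avg_delay(predicted_traffic, n, temp) >= DELAY_THRESHOLD_MS:
            continue                                  # waiting delay constraint violated
        if units * UNIT_CAPACITY_KW < required_cooling(temp, n):
            continue                                  # not enough cooling for this setpoint
        total = server_power(temp, n) + cooling_power(temp, units)
        if best is None or total < best[0]:
            best = (total, n, temp, units)
    return best                                       # (kW, servers, setpoint, cooling units)

print(configure(2_500_000))
```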
S105, performing feedback adjustment of the number of servers and the refrigeration equipment based on the measured average user waiting delay and sampling of the working environment.
The user waiting delay is monitored in real time and the ambient temperature is sampled so that the servers and the refrigeration equipment can be adjusted appropriately; this avoids situations in which a system error makes the user waiting delay too large, or the ambient temperature interferes with normal server operation.
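One possible shape of this feedback loop is sketched below; the thresholds, step sizes and measurement sources are assumptions of this example rather than values fixed by the embodiment.

```python
# Illustrative sketch: one round of feedback adjustment driven by the measured average
# waiting delay and a sampled rack temperature. Thresholds and step sizes are assumptions.
from dataclasses import dataclass

@dataclass
class Configuration:
    servers_on: int
    cooling_units: int

DELAY_THRESHOLD_MS = 200.0     # maximum acceptable average user waiting delay
TEMP_HIGH_C = 30.0             # above this, server operation is considered at risk
TEMP_LOW_C = 20.0              # below this, the room is being over-cooled

def feedback_adjust(cfg, measured_delay_ms, measured_temp_c):
    """Nudge the planned configuration toward the observed delay and temperature."""
    if measured_delay_ms > DELAY_THRESHOLD_MS:
        cfg.servers_on += 1                    # users waiting too long: start a server
    elif measured_delay_ms < 0.5 * DELAY_THRESHOLD_MS and cfg.servers_on > 1:
        cfg.servers_on -= 1                    # ample headroom: shut one server down
    if measured_temp_c > TEMP_HIGH_C:
        cfg.cooling_units += 1                 # too hot for reliable operation: add cooling
    elif measured_temp_c < TEMP_LOW_C and cfg.cooling_units > 1:
        cfg.cooling_units -= 1                 # over-cooled: save refrigeration power
    return cfg

cfg = Configuration(servers_on=6, cooling_units=2)
print(feedback_adjust(cfg, measured_delay_ms=230.0, measured_temp_c=31.5))
```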
With the method provided by this embodiment, the power consumption of the servers and the refrigeration equipment, and therefore the total power consumption of the data center, can be effectively reduced while normal operation of the data center is ensured.
It should be understood that the step numbers in the foregoing embodiments do not imply an execution order; the execution order of each process is determined by its function and internal logic and does not constitute any limitation on the implementation of the embodiments of the present invention.
fig. 2 is a schematic structural diagram of a data center energy consumption management system according to an embodiment of the present invention, where the system includes:
the model training module 210 is configured to obtain historical access traffic data of the data center, train the long and short memory networks based on the historical access traffic data, and obtain a traffic prediction model;
the data statistics module 220 is used for counting the types and the number of the servers corresponding to the historical access flow and counting the average waiting time delay of the user;
the counting the types and the number of the servers corresponding to the historical access flow, and the counting the average waiting time delay of the user comprises the following steps:
counting the peak access amount and the average access amount of each time period, and the corresponding maximum server number and average server number;
calculating peak value waiting time delay and average waiting time delay of each time period;
and constructing a relational model of data access quantity, server quantity and waiting time delay of the data center.
The model construction module 230 is used for constructing an influence model of the working environment corresponding to the performance of the server;
wherein, the constructing of the influence model of the working environment corresponding to the performance of the server comprises:
testing the relation curves of the environment temperature and the operation rate and the power consumption of the server respectively;
and constructing an influence model of the ambient temperature and the performance of the server by taking the minimum power consumption of the refrigeration equipment, the minimum power consumption of the server and the maximum operation rate of the server as optimization targets.
Further, respectively establishing a relational model of the power consumption of the refrigeration equipment and the ambient temperature, and a relational model of the ambient temperature, the power consumption of the server and the operational efficiency of the server; the minimum total power consumption of the refrigeration equipment and the server is taken as a first optimization target, the maximum operational performance of the server is taken as a second optimization target, and the weight of the optimization targets is set.
A configuration module 240, configured to predict access flows corresponding to different time periods based on the flow prediction model, allocate a corresponding number of servers in different time periods under a limiting condition that an average waiting time delay is smaller than a predetermined threshold, and set corresponding refrigeration equipment through the influence model with a minimum energy consumption as a target;
the total power consumption of the refrigeration equipment and the server is minimum, and on the premise that the operational performance of the server meets the waiting time delay constraint, the corresponding environment temperature and the number of the refrigeration equipment are determined based on the influence model;
and setting the refrigeration equipment based on the ambient temperature.
And the feedback adjusting module 250 is configured to perform feedback adjustment on the number of servers and the refrigeration equipment based on the average waiting time delay of the actual user and the sampling of the working environment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the module described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device is used for energy consumption management of the data center. As shown in fig. 3, the electronic device 3 of this embodiment includes a memory 310, a processor 320 and a system bus 330, the memory 310 storing an executable program 3101. Those skilled in the art will understand that the structure shown in fig. 3 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes each component of the electronic device in detail with reference to fig. 3:
the memory 310 may be used to store software programs and modules, and the processor 320 executes various functional applications and data processing of the electronic device by operating the software programs and modules stored in the memory 310. The memory 310 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as cache data) created according to the use of the electronic device, and the like. Further, memory 310 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The memory 310 stores an executable program 3101 of the data center energy consumption management method. The executable program 3101 may be divided into one or more modules/units, which are stored in the memory 310 and executed by the processor 320 to implement the data center energy consumption management described above. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the executable program 3101 in the electronic device 3. For example, the executable program 3101 may be partitioned into a model training module, a data statistics module, a model construction module, a configuration module and a feedback adjustment module.
The processor 320 is the control center of the electronic device; it connects the various parts of the whole electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 310 and calling the data stored in the memory 310, thereby monitoring the overall status of the electronic device. Optionally, the processor 320 may include one or more processing units; preferably, the processor 320 may integrate an application processor, which mainly handles the operating system and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 320.
The system bus 330 is used to connect the functional units inside the computer and can transmit data information, address information and control information; it may be, for example, a PCI bus, an ISA bus or a VESA bus. Instructions from the processor 320 are transferred to the memory 310 through the bus, the memory 310 feeds data back to the processor 320, and the system bus 330 is responsible for the data and instruction interaction between the processor 320 and the memory 310. Of course, other devices, such as network interfaces and display devices, may also be connected to the system bus 330.
In this embodiment of the present invention, the executable program executed by the processor 320 of the electronic device performs the following steps:
obtaining historical access traffic data of a data center, and training a long short-term memory (LSTM) network on the historical access traffic data to obtain a traffic prediction model;
counting the types and numbers of servers corresponding to the historical access traffic, and counting the average user waiting delay;
constructing a model of the influence of the working environment on server performance;
predicting the access traffic of different time periods based on the traffic prediction model, allocating a corresponding number of servers in each period under the constraint that the average waiting delay is smaller than a predetermined threshold, and setting the corresponding refrigeration equipment through the influence model with minimum energy consumption as the target;
and performing feedback adjustment of the number of servers and the refrigeration equipment based on the measured average user waiting delay and sampling of the working environment.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A data center energy consumption management method, comprising:
obtaining historical access traffic data of a data center, and training a long short-term memory (LSTM) network on the historical access traffic data to obtain a traffic prediction model;
counting the types and numbers of servers corresponding to the historical access traffic, and counting the average user waiting delay;
constructing a model of the influence of the working environment on server performance;
predicting the access traffic of different time periods based on the traffic prediction model, allocating a corresponding number of servers in each period under the constraint that the average waiting delay is smaller than a predetermined threshold, and setting corresponding refrigeration equipment through the influence model with minimum energy consumption as the target;
and performing feedback adjustment of the number of servers and the refrigeration equipment based on the measured average user waiting delay and sampling of the working environment.
2. The method of claim 1, wherein counting the types and numbers of servers corresponding to the historical access traffic and counting the average user waiting delay comprises:
counting the peak access volume and average access volume of each time period, together with the corresponding maximum and average server numbers;
calculating the peak waiting delay and average waiting delay of each time period;
and constructing a relational model between the data access volume of the data center, the number of servers and the waiting delay.
3. The method of claim 1, wherein constructing the model of the influence of the working environment on server performance comprises:
testing the relation curves between the ambient temperature and, respectively, the server computation rate and the server power consumption;
and constructing the influence model of ambient temperature on server performance with minimum refrigeration equipment power consumption, minimum server power consumption and maximum server computation rate as the optimization targets.
4. The method of claim 3, wherein constructing the influence model of ambient temperature on server performance comprises:
respectively establishing a relational model between refrigeration equipment power consumption and ambient temperature, and a relational model among ambient temperature, server power consumption and server computational efficiency;
and taking the minimum total power consumption of the refrigeration equipment and the servers as a first optimization target, taking the maximum computational performance of the servers as a second optimization target, and setting weights for the optimization targets.
5. The method of claim 1, wherein setting corresponding refrigeration equipment through the influence model with minimum energy consumption as the target comprises:
with the minimum total power consumption of the refrigeration equipment and the servers as the target, and on the premise that the computational performance of the servers satisfies the waiting delay constraint, determining the corresponding ambient temperature and number of refrigeration devices based on the influence model;
and setting the refrigeration equipment based on the ambient temperature.
6. A data center energy consumption management system, comprising:
a model training module, configured to obtain historical access traffic data of the data center, and to train a long short-term memory (LSTM) network on the historical access traffic data to obtain a traffic prediction model;
a data statistics module, configured to count the types and numbers of servers corresponding to the historical access traffic, and to count the average user waiting delay;
a model construction module, configured to construct a model of the influence of the working environment on server performance;
a configuration module, configured to predict the access traffic of different time periods based on the traffic prediction model, to allocate a corresponding number of servers in each period under the constraint that the average waiting delay is smaller than a predetermined threshold, and to set corresponding refrigeration equipment through the influence model with minimum energy consumption as the target;
and a feedback adjustment module, configured to perform feedback adjustment of the number of servers and the refrigeration equipment based on the measured average user waiting delay and sampling of the working environment.
7. The system of claim 6, wherein counting the types and numbers of servers corresponding to the historical access traffic and counting the average user waiting delay comprises:
counting the peak access volume and average access volume of each time period, together with the corresponding maximum and average server numbers;
calculating the peak waiting delay and average waiting delay of each time period;
and constructing a relational model between the data access volume of the data center, the number of servers and the waiting delay.
8. The system of claim 6, wherein constructing the model of the influence of the working environment on server performance comprises:
testing the relation curves between the ambient temperature and, respectively, the server computation rate and the server power consumption;
and constructing the influence model of ambient temperature on server performance with minimum refrigeration equipment power consumption, minimum server power consumption and maximum server computation rate as the optimization targets.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the data center energy consumption management method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the data center energy consumption management method according to any one of claims 1 to 5.
Application CN202210449494.6A, priority date 2022-04-26, filing date 2022-04-26: Data center energy consumption management method and system — status Pending, published as CN114968556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210449494.6A CN114968556A (en) 2022-04-26 2022-04-26 Data center energy consumption management method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210449494.6A CN114968556A (en) 2022-04-26 2022-04-26 Data center energy consumption management method and system

Publications (1)

Publication Number Publication Date
CN114968556A true CN114968556A (en) 2022-08-30

Family

ID=82978877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210449494.6A Pending CN114968556A (en) 2022-04-26 2022-04-26 Data center energy consumption management method and system

Country Status (1)

Country Link
CN (1) CN114968556A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115581056A (en) * 2022-11-09 2023-01-06 宁波亮控信息科技有限公司 Energy-saving prediction control method and system suitable for water cooling system of data center
CN116321999A (en) * 2023-05-15 2023-06-23 广州豪特节能环保科技股份有限公司 Intelligent air conditioner regulation and control method, system and medium for cloud computing data center
CN116321999B (en) * 2023-05-15 2023-08-01 广州豪特节能环保科技股份有限公司 Intelligent air conditioner regulation and control method, system and medium for cloud computing data center

Similar Documents

Publication Publication Date Title
CN114968556A (en) Data center energy consumption management method and system
EP3087503B1 (en) Cloud compute scheduling using a heuristic contention model
KR101624765B1 (en) Energy-aware server management
US9170916B2 (en) Power profiling and auditing consumption systems and methods
US20170155560A1 (en) Management systems for managing resources of servers and management methods thereof
US20150286507A1 (en) Method, node and computer program for enabling automatic adaptation of resource units
CN110196767B (en) Service resource control method, device, equipment and storage medium
US20070250837A1 (en) System and method for adjusting multiple resources across multiple workloads
US20120030356A1 (en) Maximizing efficiency in a cloud computing environment
US20150106649A1 (en) Dynamic scaling of memory and bus frequencies
WO2020253111A1 (en) Automatic expansion method and apparatus for blockchain node, and operation and maintenance terminal and storage medium
CN108038040A (en) Computer cluster performance indicator detection method, electronic equipment and storage medium
TWI542986B (en) System and method of adaptive voltage frequency scaling
US11906180B1 (en) Data center management systems and methods for compute density efficiency measurements
US20230229216A1 (en) System and methods for server power management
CN115269108A (en) Data processing method, device and equipment
CN113885794B (en) Data access method and device based on multi-cloud storage, computer equipment and medium
Cho et al. A battery lifetime guarantee scheme for selective applications in smart mobile devices
CN113791538A (en) Control method, control device and control system of machine room equipment
CN110762739B (en) Data center air conditioner control method, device, equipment and storage medium
US10802943B2 (en) Performance management system, management device, and performance management method
US20160342540A1 (en) Low latency memory and bus frequency scaling based upon hardware monitoring
CN109962941B (en) Communication method, device and server
US9846478B1 (en) Adaptive power manipulation of federated storage systems
US11526784B2 Real-time server capacity optimization tool using maximum predicted value of resource utilization determined based on historical data and confidence interval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination