CN108664328B - Accelerated sensing high-utility computing architecture optimization method - Google Patents


Publication number
CN108664328B
CN108664328B (application CN201810283808.3A)
Authority
CN
China
Prior art keywords
acceleration
server
data center
calculation
equation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810283808.3A
Other languages
Chinese (zh)
Other versions
CN108664328A (en)
Inventor
姚建国
田润
周海航
管海兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201810283808.3A priority Critical patent/CN108664328B/en
Publication of CN108664328A publication Critical patent/CN108664328A/en
Application granted granted Critical
Publication of CN108664328B publication Critical patent/CN108664328B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Power Sources (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention discloses an acceleration-aware optimization method for a high-utility computing architecture. In this architecture, each server is equipped with a multiprocessor and a compute-acceleration agent, a compute-acceleration coordinator runs at the top level of the data center, and a circuit breaker enforces power-supply safety. Each agent reports the workload on its server to the coordinator in real time, receives the acceleration scheduling result, and instructs the server to run accordingly. Once the coordinator has collected the load information of all servers, it models the current system state with the Lyapunov drift-plus-penalty method, weighs the benefit of compute acceleration against its cost, and minimizes the sum of the task-queue drift and the acceleration cost over all servers, finally achieving utility optimization of compute acceleration over a long period. The invention guarantees the availability of the data center while maximizing the long-run utility of compute acceleration.

Description

Accelerated sensing high-utility computing architecture optimization method
Technical Field
The invention relates to the field of computers, and in particular to an acceleration-aware high-utility computing architecture optimization method.
Background
Modern data centers often need to handle computation-intensive tasks, which arrive in irregular bursts and are generally of high intensity. When a large number of tasks arrive at a server, the normal processing speed causes serious task delays and lowers the service quality of the data center. Conversely, low responsiveness of a data center reduces revenue and can even lose customers. To improve the responsiveness of data centers under workload bursts, researchers first proposed the method of "compute acceleration" in 2012.
Modern server chip multiprocessors are equipped with many cores, most of which must normally stay off because of CPU thermal limits; "compute acceleration" proposes briefly turning all cores on and drawing additional power to improve responsiveness. As a valuable method, "compute acceleration" has been widely used to handle transient bursts of computational load in data centers. On the other hand, to raise the utilization of expensive power infrastructure and reduce capital costs, modern data centers widely deploy power oversubscription: if all servers ran at peak simultaneously, the total power consumption would exceed what the data center can supply. Normally the power consumption of the servers stays low and simultaneous workload bursts are very unlikely, so the data center coordinates the distribution of power as the workload on each server varies. Against this background of power oversubscription, large-scale uncoordinated "compute acceleration" is extremely dangerous for a data center: since the processors need extra power to accelerate, unconstrained servers can raise an excessive power demand that causes a power crisis or even a breakdown of the entire data center. Coordinating the "compute acceleration" of data-center servers is therefore essential. Under an uncoordinated acceleration architecture, each server decides independently and accelerates whenever it is not currently in a cooling state. The greedy data-center acceleration method is myopic: it ignores the after-effects of scheduling decisions and lets a server accelerate whenever its queue congestion currently ranks near the top, even though that server's workload at the next moment may be heavier still.
The workload of a server changes over time, so "compute acceleration" yields different benefits at different moments; moreover, to keep the temperature from rising too high, a server must dissipate heat for a period after accelerating, pausing the extra cores. Likewise, to avoid damaging the electrical infrastructure (such as circuit breakers), the number of servers that may "compute accelerate" simultaneously is limited. A good strategy is therefore needed to decide, at each moment, which servers may accelerate and which may not. "Compute acceleration" of servers adds value for the data center, but it requires additional power and cooling overhead.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide an acceleration-aware optimization method for a high-utility computing architecture, which coordinates the compute acceleration of data-center servers on the premise of guaranteeing data-center availability.
The invention is realized according to the following technical scheme:
An acceleration-aware optimization method for a high-utility computing architecture, comprising the steps of:
step S1: optimizing the data-center computing architecture: each server is equipped with a multiprocessor and runs a compute-acceleration agent program, the data center runs a compute-acceleration coordinator, and a circuit breaker enforces power-supply safety;
step S2: the compute acceleration agent reports workload information on the server to the coordinator in real time;
step S3: the coordinator performs overall scheduling of which servers perform compute acceleration: it receives the load information sent by the compute-acceleration agents on the servers, combines the gains and costs of compute acceleration across all servers, defines the long-period optimization equation of the acceleration-aware high-utility computing architecture with the aim of maximizing the computing utility of the data center, decouples that equation with the Lyapunov drift-plus-penalty method, and solves the system acceleration optimization strategy at the current moment by dynamic programming;
step S4: the agent programs receive the coordinator's compute-acceleration scheduling result, and each server operates according to it;
step S5: according to the compute-acceleration result at the current moment, repeat steps S2-S3 and iteratively solve the compute-acceleration allocation strategy for the next moment, thereby achieving utility optimization of compute acceleration over a long period.
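The control flow of steps S1-S5 can be sketched as a simple coordination loop. This is a minimal sketch, not the patented implementation; `agents` and `solve_slot` are hypothetical callables standing in for the per-server agent programs and the coordinator's Lyapunov-based per-slot solver:

```python
def coordinator_loop(agents, solve_slot, steps):
    """Steps S2-S5 as a loop: gather loads, schedule, apply, repeat.

    agents     - callables returning each server's current load (S2)
    solve_slot - callable mapping the load vector to 0/1 acceleration
                 decisions for every server (S3)
    """
    decisions = []
    for _ in range(steps):
        loads = [agent() for agent in agents]   # S2: agents report workload
        decision = solve_slot(loads)            # S3: coordinator schedules
        decisions.append(decision)              # S4: servers apply the result
    return decisions                            # S5: repeat slot by slot
```

For example, with two servers reporting loads 3 and 7 and a threshold rule that accelerates loads above 5, each slot yields the decision vector [0, 1].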
In the above technical solution, step S2 specifically includes the following steps:
step S201: suppose the large data center has N servers providing high-intensity computing power, and that in each discrete time slot t, t ∈ {0,1,2,…}, the task amounts arriving at all the servers are independent and identically distributed; from this, determine the task-queue equation of each server in the data center.
step S202: the normal task-processing speed of a server is r; when the amount of arriving tasks bursts at some moment, the processing speed must be raised by CPU compute acceleration, which increases the processing speed by a fixed multiple; from this, determine the task-execution equation of each server of the data center.
step S203: activating a server's additional cores consumes extra power, and the additional cores also generate extra heat when running, so the cooling overhead must grow to keep the data-center ambient temperature under control; the relationship between the cooling overhead and the power consumption is determined by the conversion efficiency COP of the data-center cooling system; from this, construct the energy-consumption equation of compute acceleration for the data-center servers.
step S204: the compute-acceleration benefit function is a non-decreasing function of the server workload, and the data center focuses on reducing task-processing delay; following queueing theory, a quadratic utility function evaluates the effectiveness of acceleration, where τr is the amount of tasks a server processes in one slot under normal operation and the workload of a server at each moment is evaluated by the ratio of the task queue to the task throughput under acceleration; from this, determine the benefit equation of compute acceleration for the data-center servers.
In the above technical solution, step S3 specifically includes the following steps:
step S301: combining the energy-consumption and benefit equations of each server from step S2: pursuing a higher compute-acceleration benefit raises the energy consumption of the data center; conversely, keeping the acceleration energy consumption small forgoes considerable acceleration benefit. Benefit and energy consumption must therefore be balanced from a system-wide perspective, and the acceleration-aware high-utility computing architecture long-period optimization model is defined;
step S302: decoupling the long-period optimization model based on the Lyapunov function;
step S303: solving the optimization equation at the current moment with a fully polynomial-time approximation scheme to obtain the system's compute-acceleration scheduling result at the current moment.
Compared with the prior art, the invention has the following beneficial effects:
the invention first ensures the availability of the data center. "compute acceleration" has an additional power requirement, and since power infrastructure such as circuit breakers do not allow overloads for long periods of time, the present invention severely limits the number of acceleration servers to avoid power supply emergencies. Secondly, ensuring that the 'calculation acceleration' effect in a long period is maximum. The effectiveness of "compute acceleration" at different times is very different, and a cooling process must be performed for a period of time after acceleration to dissipate heat, and at this time, "compute acceleration" cannot be performed if a larger workload comes on the server, that is, the current acceleration may deprive future acceleration opportunities, resulting in a serious backlog of task queues.
The method ties the acceleration utility to the task queue, coordinates the compute acceleration of the data center from the perspective of economic benefit, uses Lyapunov optimization theory to optimize the time-average utility of compute acceleration online, and finally obtains the scheduling decision at each moment by solving an optimization equation.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of the acceleration-aware high-utility computing architecture of the present invention;
FIG. 2 is a schematic diagram illustrating a comparison of utility optimization strategies and greedy strategies for "compute acceleration" scheduling results for a server;
FIG. 3 is a schematic diagram showing comparison of utility results of different "compute acceleration" strategies;
FIG. 4 is a schematic diagram of a comparison of time overhead for different "compute acceleration" strategies;
FIG. 5 is a diagram illustrating a comparison of the number of acceleration servers in 24 hours under a utility optimization strategy and an uncoordinated strategy.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications, obvious to those skilled in the art, can be made without departing from the spirit of the invention; all of these fall within the scope of the present invention.
FIG. 1 is a schematic diagram of the acceleration-aware high-utility computing architecture of the present invention. In this architecture, each server is equipped with a multiprocessor and a compute-acceleration agent, a compute-acceleration coordinator runs at the top level of the data center, and a circuit breaker enforces power-supply safety. Each agent reports the workload on its server to the coordinator in real time, receives the acceleration scheduling result, and instructs the server to run accordingly. After the coordinator has collected the load information of all servers, it models the current system state with the Lyapunov drift-plus-penalty method, weighs the benefit of compute acceleration against its cost, and minimizes the sum of the task-queue drift and the acceleration cost over all servers, finally achieving utility optimization of compute acceleration over a long period.
The optimization method specifically comprises the following steps:
step S1: optimizing the data-center computing system configuration: each server is equipped with a multiprocessor and runs a compute-acceleration agent program, the data center runs a compute-acceleration coordinator, and a circuit breaker enforces power-supply safety;
step S2: the compute acceleration agent reports workload information on the server to the coordinator in real time;
the specific process of step S2 is subdivided as follows:
step S201: determining the task-queue equation of each server in the data center: suppose the large data center has N servers providing high-intensity computing power, and that in each discrete time slot t, t ∈ {0,1,2,…}, the task amounts arriving at all the servers are independent and identically distributed. Let A_i(t) denote the amount of tasks arriving on the i-th server at time t, which are processed from time t+1 onward; let D_i(t) denote the amount of tasks executed on the i-th server at time t; let Q_i(t) denote the total amount of tasks waiting to be executed on the i-th server at time t. The dynamic task-queue equation on server i is:

Q_i(t+1) = max{Q_i(t) − D_i(t), 0} + A_i(t)    (1)
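Equation (1) is a standard queue recursion and can be checked with a few lines; a minimal sketch (the function name is ours):

```python
def queue_update(q, d, a):
    """Equation (1): backlog on a server after one slot.

    q - Q_i(t), tasks waiting at time t
    d - D_i(t), tasks executed during slot t
    a - A_i(t), tasks arriving during slot t (served from t+1 on)
    """
    return max(q - d, 0) + a
```

Note that arrivals are added after the truncation at zero, so a briefly idle server still accumulates the newly arrived work.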
step S202: determining the task-execution equation of each server in the data center: the normal task-processing speed of a server is r; when the amount of arriving tasks bursts at some moment, the processing speed must be raised through CPU compute acceleration. Let S_i(t) denote the state of the i-th server at time t, taking the value 0 or 1: S_i(t) = 1 means the server performs compute acceleration and S_i(t) = 0 means it does not; let α be the multiple by which the processing speed rises when the server accelerates; let τ denote the time-slot length. The equation for the task-execution amount D_i(t) is:

D_i(t) = τ(r + αr·S_i(t))    (2)
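A direct transcription of equation (2), with the acceleration decision as a 0/1 flag (names are ours):

```python
def tasks_executed(tau, r, alpha, s):
    """Equation (2): tasks executed in one slot of length tau.

    Normal rate r; compute acceleration (s == 1) adds alpha * r on top.
    """
    return tau * (r + alpha * r * s)
```

With tau = 1, r = 10 and alpha = 0.5, a non-accelerated server executes 10 tasks per slot and an accelerated one 15.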
step S203: determining the energy-consumption equation of compute acceleration for a data-center server: activating a server's additional cores consumes extra power, and the additional cores also generate extra heat when running, so the cooling cost must grow to keep the data-center ambient temperature under control. Let P_extra denote the power a server needs for compute acceleration and P_cool the power needed for the additional cooling of a server; let C_i(t) denote the energy cost of compute acceleration on the i-th server at time t; let e denote the electricity price; let Δt_cool denote the cooling time. The relationship between P_cool and P_extra is determined by the conversion efficiency COP of the data-center cooling system, as shown in equation (3):

P_cool = P_extra / COP    (3)

The energy-consumption equation of compute acceleration for a data-center server is shown in (4):

C_i(t) = e·Δt_cool·(P_extra + P_cool)·S_i(t)    (4)
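Equations (3) and (4) combine into a short cost routine. This is a sketch under the reading that the cooling power follows from the extra core power via the cooling system's COP; all names are ours:

```python
def acceleration_cost(e, dt_cool, p_extra, cop, s):
    """Equations (3)-(4): energy cost C_i(t) of one acceleration decision."""
    p_cool = p_extra / cop                        # equation (3)
    return e * dt_cool * (p_extra + p_cool) * s   # equation (4)
```

A server that does not accelerate (s = 0) incurs zero cost; the cost of accelerating grows with the electricity price, the cooling time, and the extra power draw.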
step S204: determining the benefit equation of compute acceleration for a data-center server: the compute-acceleration benefit function is a non-decreasing function of the server workload, and the data center focuses on reducing task-processing delay. Following queueing theory, a quadratic utility function evaluates the effectiveness of accelerating a server, where τr is the amount of tasks a server processes in one slot under normal operation, and the ratio of the task queue Q_i(t) to τr evaluates the workload at time t. The benefit function of compute acceleration for a data-center server is:

profit_i(t) = (Q_i(t)/(τr))²·S_i(t)    (5)
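One plausible reading of the quadratic benefit function described above, measuring workload as the queue-to-throughput ratio; the exact constants are not recoverable from the text, so this is an illustrative sketch with names of our choosing:

```python
def acceleration_profit(q, tau, r, s):
    """A quadratic utility in the load ratio Q_i(t) / (tau * r),
    earned only when the server accelerates (s == 1)."""
    load = q / (tau * r)
    return load ** 2 * s
```

The quadratic shape rewards accelerating heavily backlogged servers far more than lightly loaded ones, which is what ties the utility to task delay.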
step S3: the coordinator performs overall scheduling of which servers perform compute acceleration: it receives the load information sent by the compute-acceleration agents on the servers, combines the gains and costs of compute acceleration across all servers, defines the long-period optimization equation of the acceleration-aware high-utility computing architecture with the aim of maximizing the computing utility of the data center, decouples that equation with the Lyapunov drift-plus-penalty method, and solves the system acceleration optimization strategy at the current moment by dynamic programming.
The specific process of step S3 is subdivided as follows:
step S301: defining the long-period optimization equation of the acceleration-aware high-utility computing architecture: combining the energy-consumption and benefit equations of each server from step S2, pursuing a higher compute-acceleration benefit raises the energy consumption of the data center, while keeping the acceleration energy consumption small forgoes considerable acceleration benefit, so benefit and energy consumption must be balanced. Let w denote the weight factor of the server compute-acceleration benefit; let S(t) = (S_1(t), …, S_N(t)) denote the compute-acceleration scheduling result of the data center over all servers at time t, and let {S(t)}_{t=0}^{T−1} denote the scheduling results over a span of T slots. The optimization equation (6) is:

max lim_{T→∞} (1/T) Σ_{t=0}^{T−1} Σ_{i=1}^{N} [w·profit_i(t) − C_i(t)]    (6)

s.t. lim_{T→∞} (1/T) Σ_{t=0}^{T−1} E[Q_i(t)] < ∞, ∀i    (6a)

Σ_{i=1}^{N} S_i(t) ≤ S_min, ∀t    (6b)

S_i(t) ∈ {0,1}, ∀i, t    (6c)

Σ_{t'=t}^{t+m−1} S_i(t') ≤ 1, ∀i, t    (6d)

Constraint (6a) states that the dynamic queues cannot grow without bound, i.e. the data-center load must stay stable; constraint (6b) states that the number of servers accelerating at each moment must satisfy the safety requirement, with S_min = 0.25N taken from the circuit-breaker characteristics; constraint (6c) gives the value range of S_i(t); constraint (6d) states that extra cooling is needed after an acceleration, so at most one compute acceleration can occur within any time span of m slots.
step S302: decoupling the long-period optimization model (6) based on a Lyapunov function:
Define the Lyapunov function L(t) over the backlogged task queues of all servers at time t, so that L(t) reflects the congestion degree of the system; the Lyapunov drift Δ(t) is the difference between the Lyapunov functions at adjacent moments, and minimizing Δ(t) at every moment keeps the task queues stable:

L(t) = (1/2) Σ_{i=1}^{N} Q_i(t)²    (7)

Δ(t) = E[L(t+1) − L(t) | Q(t)]    (8)
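A numeric sketch of the Lyapunov function and its drift, assuming the standard quadratic form (the "congestion degree" is read as the scaled sum of squared backlogs) and using a one-sample drift in place of the conditional expectation; names are ours:

```python
def lyapunov(queues):
    """Equation (7): L(t) = 1/2 * sum of squared backlogs."""
    return 0.5 * sum(q * q for q in queues)

def drift(queues_now, queues_next):
    """Equation (8), one-sample version: L(t+1) - L(t)."""
    return lyapunov(queues_next) - lyapunov(queues_now)
```

A negative drift means the total backlog shrank over the slot; the drift-plus-penalty method keeps this quantity small at every moment.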
the Lyapunov optimization idea is to adopt a control strategy to simultaneously minimize the total queue deviation and the calculation acceleration cost of the system, take the opposite target of an objective function in the problem (6) as a penalty function, namely the system calculation acceleration cost, define V as a nonnegative control parameter for balancing backlog queues and penalty values, obtain the optimization target of the system at each moment as shown in a formula (9), and simultaneously meet the constraint targets (6a), (6b), (6c), (6d) and (9) of a distant model as shown in a formula (10), wherein the objective function is simplified as shown in a formula (10), and the control strategy is characterized in that
Figure BDA00016154491500000611
Order to
Figure BDA00016154491500000612
Figure BDA00016154491500000613
Denotes SiValue of (t):
Figure BDA00016154491500000614
Figure BDA0001615449150000071
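Because the objective of (10) separates across servers while constraints (6b) and (6d) only cap how many and which servers may accelerate, the per-slot schedule can be illustrated with a greedy selection. This is a simplified stand-in for the dynamic-programming solver of step S303, and all names are ours:

```python
def schedule_slot(scores, limit, cooling):
    """Pick the accelerating servers for one slot.

    scores[i]  - net per-server gain of accelerating i this slot
                 (queue-drift reduction plus V-weighted profit minus cost)
    limit      - cap on simultaneous accelerations, constraint (6b)
    cooling[i] - True while server i must still dissipate heat, (6d)
    """
    eligible = [i for i, sc in enumerate(scores) if sc > 0 and not cooling[i]]
    eligible.sort(key=lambda i: scores[i], reverse=True)
    chosen = set(eligible[:limit])
    return [1 if i in chosen else 0 for i in range(len(scores))]
```

Servers with negative score (acceleration would cost more than it gains) and servers still cooling are excluded; the remainder are ranked by score and truncated to the circuit-breaker cap.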
By solving (10) at each moment, an approximate per-slot solution of the original optimization problem (6) is obtained. The approximation gap is governed by a finite constant B bounding the Lyapunov drift, where B ≥ (1/2) Σ_{i=1}^{N} E[A_i(t)² + D_i(t)²].
Step S303: solving (10) using a fully polynomial-time approximation scheme (FPTAS):
Let P = max_i{profit(S_i)}; if all profit(S_i) are negative, take P = 1. Let K = εP/N with ε > 0, and replace each profit(S_i) by the rounded value ⌊profit(S_i)/K⌋. Running dynamic programming on these rounded values finds a near-optimal solution of the system compute-acceleration scheduling strategy at the current moment.
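The rounding step of the approximation scheme can be sketched as below; it follows the standard FPTAS recipe (scale profits by K = εP/N and floor) rather than the patent's exact code, and all names are ours:

```python
import math

def fptas_round(profits, eps):
    """Scale and floor per-server profits so that dynamic programming
    over the rounded values runs in polynomial time."""
    n = len(profits)
    p = max(profits)
    if p <= 0:              # all profits non-positive: nothing to scale
        return list(profits)
    k = eps * p / n         # K = eps * P / N
    return [math.floor(x / k) for x in profits]
```

A larger ε coarsens the rounding, trading solution quality for speed; the dynamic program then searches over the rounded profits to pick the accelerating set.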
Step S4: the agent programs receive the coordinator's compute-acceleration scheduling result, and each server operates according to it.
Step S5: according to the compute-acceleration result at the current moment, repeat steps S2-S3 and iteratively solve the compute-acceleration allocation strategy for the next moment, thereby achieving time-average utility optimization of compute acceleration over a long period.
As shown in figs. 2-5, the experimental results show that the share of system-crash time under the uncoordinated data-center compute-acceleration method is very high, about 17%, whereas it is 0% under the economically oriented high-utility compute-acceleration optimization method. Servers spend 25% of their time waiting for acceleration under the greedy method, but only 5% under the high-utility method. The acceleration time of servers under the uncoordinated and greedy methods is only 25%, while the economically oriented acceleration strategy accelerates 33% of the time. The experiments also compare the total utility of the three strategies: the economically oriented data-center acceleration strategy improves total utility by 46% over the greedy strategy and by nearly 3x over the uncoordinated strategy. These results reveal the limitations of the uncoordinated and greedy strategies, and the advantages of the economically oriented high-utility acceleration strategy.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (1)

1. An acceleration-aware optimization method for a high-utility computing architecture, comprising the steps of:
step S1: optimizing the data-center computing architecture, wherein each server is equipped with a multiprocessor and runs a compute-acceleration agent program, the data center runs a compute-acceleration coordinator, and a circuit breaker enforces power-supply safety;
step S2: the compute acceleration agent reports workload information on the server to the coordinator in real time;
step S3: the coordinator performs overall scheduling of which servers perform compute acceleration: it receives the load information sent by the compute-acceleration agents on the servers, combines the gains and costs of compute acceleration across all servers, defines the long-period optimization equation of the acceleration-aware high-utility computing architecture with the aim of maximizing the computing utility of the data center, decouples that equation with the Lyapunov drift-plus-penalty method, and solves the system acceleration optimization strategy at the current moment by dynamic programming;
step S4: the agent programs receive the coordinator's compute-acceleration scheduling result, and each server operates according to it;
step S5: according to the compute-acceleration result at the current moment, repeating steps S2-S3 and iteratively solving the compute-acceleration allocation strategy for the next moment, thereby achieving utility optimization of compute acceleration over a long period;
wherein the step S3 specifically includes the following steps:
step S301: combining the energy-consumption and benefit equations of each server from step S2: pursuing a higher compute-acceleration benefit raises the energy consumption of the data center; conversely, keeping the acceleration energy consumption small forgoes considerable acceleration benefit; therefore, benefit and energy consumption must be balanced from a system-wide perspective, and the acceleration-aware high-utility computing architecture long-period optimization model is defined;
the long-period optimization model is given by formula (6):

max lim_{T→∞} (1/T) Σ_{t=0}^{T−1} Σ_{i=1}^{N} [w·profit_i(t) − C_i(t)]    (6)

s.t. lim_{T→∞} (1/T) Σ_{t=0}^{T−1} E[Q_i(t)] < ∞, ∀i    (6a)

Σ_{i=1}^{N} S_i(t) ≤ S_min, ∀t    (6b)

S_i(t) ∈ {0,1}, ∀i, t    (6c)

Σ_{t'=t}^{t+m−1} S_i(t') ≤ 1, ∀i, t    (6d)

wherein w denotes the weight factor of the server compute-acceleration benefit, profit_i(t) and C_i(t) denote the acceleration benefit and energy cost of server i at time t, Q_i(t) denotes its task queue, S_i(t) ∈ {0,1} denotes its acceleration decision, S_min = 0.25N is the safety limit on the number of simultaneously accelerating servers, and m is the cooling span within which a server may accelerate at most once;
step S302: decoupling the long-period optimization model based on the Lyapunov function;
step S303: solving an optimization equation at the current moment by using a complete polynomial time approximation algorithm to obtain a calculation acceleration scheduling result of the system at the current moment;
wherein the step S2 specifically includes the following steps:
step S201: setting N servers providing high-strength computing power in a large data center, and determining a task queue equation of each server in the data center according to the fact that the task quantities arriving at all the servers conform to independent same distribution in each discrete time slot t and t E ∈ {0,1,2 … };
step S202: the normal speed of the server for processing tasks is r, when the amount of the tasks arriving at a certain moment is exploded, the processing speed needs to be increased, at the moment, CPU calculation acceleration needs to be carried out, and the processing speed can be increased by a fixed multiple when the server carries out calculation acceleration, so that a task execution equation of each server of the data center is determined;
step S203: when a server activates extra cores it consumes a certain amount of additional electricity, and the extra cores also generate surplus heat while running; to keep the ambient temperature of the data center under control, the cooling overhead must be increased, and the relation between cooling overhead and power consumption is determined by the coefficient of performance (COP) of the data center's cooling system; on this basis, the computation-acceleration energy-consumption equation of the data-center servers is constructed;
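Step S203's compute-plus-cooling split can be sketched with the standard data-center model: every watt drawn by the extra cores becomes heat, and removing that heat costs 1/COP watts of cooling electricity. The additive form below is an assumption; the patent's exact equation is not disclosed:

```python
def acceleration_energy(p_extra_cores, cop):
    """Per-slot energy cost of computation acceleration, assuming all
    power drawn by the activated extra cores is dissipated as heat
    that the cooling system removes at efficiency COP (heat removed
    per unit of cooling electricity)."""
    compute = p_extra_cores           # electricity drawn by the extra cores
    cooling = p_extra_cores / cop     # electricity to pump that heat back out
    return compute + cooling
```

For example, 100 W of extra core power with a COP of 4 implies 25 W of cooling overhead, 125 W in total.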
step S204: the computation-acceleration benefit function is a non-decreasing function of the server's workload; since the data center focuses on reducing task-processing delay, the effectiveness of acceleration is measured, following queueing theory, with a quadratic utility function; τr is the task volume a server processes in one time slot under normal operation, and the workload of each server at each instant is evaluated by the ratio of its task queue to the task volume processed under acceleration; on this basis, the computation-acceleration benefit equation of the data-center servers is determined.
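Step S204's benefit can be sketched as follows, with the workload ρ taken as the ratio of the backlog to the volume processed per slot under acceleration. The particular quadratic U(ρ) = αρ², the coefficient α, and the multiple s are illustrative assumptions (the patent discloses only that the utility is quadratic and non-decreasing in workload):

```python
def acceleration_benefit(queue_len, r, tau, s=2.0, alpha=1.0):
    """Quadratic acceleration-benefit sketch for one server and one slot.

    Workload rho = queue / (s * r * tau) is the backlog measured against
    the slot's accelerated processing volume; U(rho) = alpha * rho**2 is
    non-decreasing for rho >= 0, so heavier-loaded servers gain more
    from acceleration, matching the delay-reduction motivation."""
    rho = queue_len / (s * r * tau)    # workload under acceleration
    return alpha * rho ** 2
```

For instance, a backlog of 8 tasks against an accelerated throughput of 4 per slot gives ρ = 2 and a benefit of 4.0 under these assumed parameters.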
CN201810283808.3A 2018-04-02 2018-04-02 Accelerated sensing high-utility computing architecture optimization method Active CN108664328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810283808.3A CN108664328B (en) 2018-04-02 2018-04-02 Accelerated sensing high-utility computing architecture optimization method

Publications (2)

Publication Number Publication Date
CN108664328A CN108664328A (en) 2018-10-16
CN108664328B true CN108664328B (en) 2021-08-17

Family

ID=63783081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810283808.3A Active CN108664328B (en) 2018-04-02 2018-04-02 Accelerated sensing high-utility computing architecture optimization method

Country Status (1)

Country Link
CN (1) CN108664328B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106528941A (en) * 2016-10-13 2017-03-22 内蒙古工业大学 Data center energy consumption optimization resource control algorithm under server average temperature constraint
CN107172142A (en) * 2017-05-12 2017-09-15 南京邮电大学 A kind of data dispatching method for accelerating cloud computation data center to inquire about
CN107273184A (en) * 2017-06-14 2017-10-20 沈阳师范大学 A kind of optimized algorithm migrated based on high in the clouds big data with processing cost

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9424084B2 (en) * 2014-05-20 2016-08-23 Sandeep Gupta Systems, methods, and media for online server workload management

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cost-minimization method for cloud processing of multi-source big data (面向多源大数据云端处理的成本最小化方法); Xiao Wenhua et al.; Journal of Software (《软件学报》); 2016-11-29; pp. 544–562 *

Also Published As

Publication number Publication date
CN108664328A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
Feng et al. A global-energy-aware virtual machine placement strategy for cloud data centers
Liu et al. Resource preprocessing and optimal task scheduling in cloud computing environments
Zhou et al. Reducing energy costs for IBM Blue Gene/P via power-aware job scheduling
KR20110007205A (en) System and method for managing energy consumption in a compute environment
CN110308967B (en) Workflow cost-delay optimization task allocation method based on hybrid cloud
Sahoo et al. A learning automata-based scheduling for deadline sensitive task in the cloud
CN113672383A (en) Cloud computing resource scheduling method, system, terminal and storage medium
US9930109B2 (en) Methods and systems for workload distribution
Hasan et al. Hso: a hybrid swarm optimization algorithm for reducing energy consumption in the cloudlets
CN112000388A (en) Concurrent task scheduling method and device based on multi-edge cluster cooperation
Catena et al. Energy-efficient query processing in web search engines
JP3896352B2 (en) Distributed computing system
CN111782627A (en) Task and data cooperative scheduling method for wide-area high-performance computing environment
Stavrinides et al. Cost-effective utilization of complementary cloud resources for the scheduling of real-time workflow applications in a fog environment
Feng et al. Towards heat-recirculation-aware virtual machine placement in data centers
CN112433807A (en) Airflow perception type virtual machine scheduling method oriented to data center global energy consumption optimization
Zhang et al. Strategy-proof mechanism for online time-varying resource allocation with restart
Terzopoulos et al. Bag-of-task scheduling on power-aware clusters using a dvfs-based mechanism
CN108664328B (en) Accelerated sensing high-utility computing architecture optimization method
Li Optimal load distribution for multiple heterogeneous blade servers in a cloud computing environment
CN116938947A (en) Task scheduling strategy considering calculation demand constraint
CN116541175A (en) Big data information processing system and method based on computer
CN106295117B (en) A kind of passive phased-array radar resource dynamic queuing management-control method
Cao et al. A QoS-guaranteed energy-efficient VM dynamic migration strategy in cloud data centers
Nakazato et al. Optimum control method of workload placement and air conditioners in a GPU server environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant