CN109388480A - Method and device for processing cloud resources - Google Patents
Method and device for processing cloud resources
- Publication number
- CN109388480A (application CN201811293562.4A)
- Authority
- CN
- China
- Prior art keywords
- service
- host
- load
- moment
- load capacity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F9/45558—Hypervisor-specific management and integration aspects (under G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation; G06F9/45533—Hypervisors; Virtual machine monitors)
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing (under G06F9/45558)
- G06F9/505—Allocation of resources to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the load (under G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU])
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration (under G06F9/50)
- G06N3/084—Backpropagation, e.g. using gradient descent (under G06N3/02—Neural networks; G06N3/08—Learning methods)
Abstract
The invention discloses a method for processing cloud resources, comprising: grouping the load of a host by service type; predicting, using a BP neural network, the load of each service-type group at time t; and weighting the predicted loads of the groups to obtain the host's integrated load at time t. A device for processing cloud resources is also disclosed. The scheme decomposes the system load from a single-factor load into a multi-factor load, trains and predicts each factor with a BP neural network to obtain a predicted value per factor, and finally, for hosts of different service types, assigns each factor a weight in the integrated-load calculation to compute a reasonable integrated load prediction.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a method and device for processing cloud resources.
Background technique
Cloud computing resource load prediction is an important part of cloud computing platform planning; its accuracy directly affects the economy, safety, and service quality of the cloud computing system. Building, from historical load data along the time axis, a quantitative relationship between current and future cloud resource load yields short-term load-state information, which provides a reasonable basis for planning and allocating cloud resources, optimizing the performance of the cloud platform, and minimizing the cloud operator's costs.
A traditional prediction model is the autoregressive integrated moving average (ARIMA) model. An ARIMA model has three parameters, p, d, and q, where:
- p is the number of lags of the time series itself used by the model (the AR, autoregressive, term);
- d is the number of times the series must be differenced to become stationary (the I, integrated, term);
- q is the number of lags of the prediction error used by the model (the MA, moving average, term).
Its basic steps are:
(1) obtain the observed time-series data of the system;
(2) plot the data and check whether the series is stationary; a non-stationary series is first differenced d times until it becomes stationary;
(3) for the resulting stationary series, compute its autocorrelation function (ACF) and partial autocorrelation function (PACF), and determine the optimal orders p and q by analyzing the ACF and PACF plots;
(4) with d, p, and q obtained above, build the ARIMA model and then test it.
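As a sketch of steps (2)-(4) under simplifying assumptions — synthetic data, and only the AR part fitted by least squares rather than a full ARIMA estimation with an MA(q) term — the differencing-then-fit workflow might look like:

```python
import numpy as np

# Synthetic non-stationary load series (a random walk around 50% usage);
# illustrative data, not from the patent.
rng = np.random.default_rng(0)
load = 50 + np.cumsum(rng.normal(0, 0.5, 300))

# Step (2): one round of differencing (d = 1) makes a random walk stationary.
diff = np.diff(load)

# Steps (3)-(4), simplified to the AR part: fit an AR(p) model to the
# differenced series by least squares. p = 2 is chosen for illustration;
# in practice p and q come from the ACF/PACF plots.
p = 2
X = np.column_stack([diff[i:len(diff) - p + i] for i in range(p)])
y = diff[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step forecast: predicted next difference added back onto the last level.
next_level = load[-1] + diff[-p:] @ coef
```

Since the differenced random walk is white noise, the fitted coefficients stay close to zero and the forecast stays close to the last observed level.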
An ARIMA model requires the time series to be stationary, or to become stationary after differencing; it cannot capture patterns in unstable data and can essentially only capture linear relationships. Cloud resource load is itself a chaotic, nonlinear system with large fluctuations, whereas a neural network is a good nonlinear fitting tool: given a large amount of historical data, the prediction network can learn on its own and output relatively accurate load predictions, providing a decision basis for subsequent capacity planning and virtual machine load migration.
In a BP (backpropagation) neural network, the input signal enters through the input layer, is processed by the hidden layer, and is emitted by the output layer. The output value is compared with the label value; if there is an error, the error is propagated backwards from the output layer toward the input layer, and during this process the neuron weights are adjusted using gradient descent.
Predictions currently made with ARIMA or BP neural network models all treat the system load as a single factor; they account neither for the business characteristics of hosts carrying different service types nor for the relationships among the multiple factors that influence the load. For example, the CPU of a compute-type host is constantly at a high level, so the final prediction of its load level should reduce the CPU's influence weight; for a storage-type node, by contrast, a high CPU utilization indicates that its load is abnormal, so its CPU influence weight should be raised, which helps discover problems early and migrate virtual machines.
Summary of the invention
In order to solve the above technical problems, the present invention provides a method and device for processing cloud resources that can determine a reasonable integrated load prediction for a host.
To achieve the object of the invention, the present invention provides a method for processing cloud resources, comprising:
grouping the load of a host by service type;
predicting, using a BP neural network, the load of each service-type group at time t;
weighting the predicted load of each service-type group to obtain the host's integrated load at time t.
Further, predicting, using a BP neural network, the load of each service-type group at time t comprises:
for each service type, taking the loads at the n instants before time t as training data, feeding them into the BP neural network for training, and predicting the load of each service-type group at time t.
Further, the service types include:
the central processing unit of the host, the memory of the host, the disk of the host, and the connection count of the host.
Further, weighting the predicted load of each service-type group comprises:
weighting the predicted load of each group by the weight preset for its service type, and summing.
Further, after obtaining the host's integrated load at time t, the method further comprises:
raising an alert if the integrated load is determined to reach a threshold.
A device for processing cloud resources comprises a memory and a processor, wherein:
the memory is configured to store a program for processing cloud resources;
the processor is configured to read and execute the program for processing cloud resources, performing the following operations:
grouping the load of a host by service type;
predicting, using a BP neural network, the load of each service-type group at time t;
weighting the predicted load of each service-type group to obtain the host's integrated load at time t.
Further, predicting, using a BP neural network, the load of each service-type group at time t comprises:
for each service type, taking the loads at the n instants before time t as training data, feeding them into the BP neural network for training, and predicting the load of each service-type group at time t.
Further, the service types include:
the central processing unit of the host, the memory of the host, the disk of the host, and the connection count of the host.
Further, weighting the predicted load of each service-type group comprises:
weighting the predicted load of each group by the weight preset for its service type, and summing.
Further, after obtaining the host's integrated load at time t, the operations further comprise:
raising an alert if the integrated load is determined to reach a threshold.
In summary, this embodiment proposes a scheme that decomposes the system load from a single-factor load into a multi-factor load, trains and predicts each factor with a BP neural network to obtain per-factor predictions, and finally, for hosts of different service types, assigns each factor a weight in the integrated-load calculation to compute a reasonable integrated load prediction.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description, or may be learned through practice of the invention. The objects and other advantages of the invention may be realized and obtained by the structure particularly pointed out in the specification, claims, and accompanying drawings.
Detailed description of the invention
The accompanying drawings provide a further understanding of the technical solution of the present invention and constitute part of the specification; together with the embodiments of the application, they serve to explain the technical solution and do not limit it.
Fig. 1 is a flowchart of a method for processing cloud resources according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the training of a BP neural network;
Fig. 3 is a schematic diagram of a device for processing cloud resources according to an embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that, in the absence of conflict, the embodiments in the application and the features in the embodiments may be combined with one another in any manner.
The steps shown in the flowcharts of the drawings may be executed in a computer system, for example as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one given here.
Fig. 1 is a flowchart of a method for processing cloud resources according to an embodiment of the present invention. As shown in Fig. 1, the method of this embodiment comprises:
Step 101: group the load of a host by service type;
Step 102: predict, using a BP neural network, the load of each service-type group at time t;
Step 103: weight the predicted load of each service-type group to obtain the host's integrated load at time t.
Addressing the characteristics of cloud computing resource load variation and the shortcomings of existing algorithms, this embodiment proposes a method that decomposes the system load from a single-factor load into a multi-factor load, trains and predicts each factor with a BP neural network to obtain per-factor predictions, and finally, for hosts of different service types, assigns each factor a weight in the integrated-load calculation to compute a reasonable integrated load prediction.
The method of this embodiment is illustrated below by predicting a host's load L(t) at time t.
First, the cloud resource load is split into a four-tuple, denoted L: L = {Lcpu, Lmem, Ldisk, Lnet}, where Lcpu is the host's CPU usage, Lmem the host's memory usage, Ldisk the host's disk occupancy, and Lnet the host's connection count.
Second, the load of each type of factor at time t is predicted separately: Lcpu(t), Lmem(t), Ldisk(t), Lnet(t).
Suppose a load sample is taken every Δt, and the loads at the n instants before time t are used as the prediction reference, i.e. the training data. Then:
Lcpu(t) = f(Lcpu(t-Δt), Lcpu(t-2Δt), …, Lcpu(t-nΔt))
Lmem(t) = f(Lmem(t-Δt), Lmem(t-2Δt), …, Lmem(t-nΔt))
Ldisk(t) = f(Ldisk(t-Δt), Ldisk(t-2Δt), …, Ldisk(t-nΔt))
Lnet(t) = f(Lnet(t-Δt), Lnet(t-2Δt), …, Lnet(t-nΔt))
In these formulas, the load at time t is a function of the preceding n sampled points.
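The relation L(t) = f(L(t-Δt), …, L(t-nΔt)) amounts to building sliding windows over the sampled series. A minimal illustration (the sample values below are made up):

```python
import numpy as np

def sliding_windows(series, n):
    """Turn a 1-D load series into (X, y) pairs: each row of X holds the
    n samples preceding the target y, matching L(t) = f(L(t-dt), ..., L(t-n*dt))."""
    X = np.array([series[i:i + n] for i in range(len(series) - n)])
    y = np.array(series[n:])
    return X, y

cpu = [0.31, 0.35, 0.33, 0.40, 0.42, 0.45, 0.43]  # illustrative CPU-usage samples
X, y = sliding_windows(cpu, n=3)
# X[0] = [0.31, 0.35, 0.33] is the input window whose target is y[0] = 0.40.
```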
This is a nonlinear relationship that is difficult to describe with an exact mathematical expression, so this embodiment models it with a BP neural network, whose schematic diagram is shown in Fig. 2. The BP neural network consists of an input layer, a hidden layer, and an output layer.
The load samples at the preceding n instants are the model's input (n input neurons), and the prediction L(t) for time t is the system's output (1 output neuron); the hidden layer has m neurons, so the whole network is a single-hidden-layer, n-input, single-output network. Here ω_ij denotes the weights from the input layer to the hidden layer, i ∈ (1, n), j ∈ (1, m), and σ_k denotes the weights from the hidden layer to the output layer, k ∈ (1, m). L* denotes a sample value. Once enough samples are available, the samples L*(t-Δt), L*(t-2Δt), …, L*(t-nΔt) at different instants are fed repeatedly as input; after training, the network outputs a predicted value L(t), which is compared with the sample value L*(t) to obtain an error e. The BP neural network back-propagates this error and continually corrects the weights ω_ij and σ_k so that the prediction error keeps shrinking and the actual prediction L(t) approaches the sample value L*(t), finally enabling the network to predict the load at time t relatively accurately.
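The training loop described above can be sketched as a single-hidden-layer network in NumPy. The series, layer sizes, learning rate, and iteration count below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 8  # n input neurons (history length), m hidden neurons

# Illustrative periodic load series in [0.2, 0.8]; each window of n
# consecutive samples is an input, the following sample is the target L*(t).
t = np.arange(400)
series = 0.5 + 0.3 * np.sin(2 * np.pi * t / 50)
X = np.array([series[i:i + n] for i in range(len(series) - n)])
y = series[n:].reshape(-1, 1)

W1 = rng.normal(0.0, 0.5, (n, m))  # omega_ij: input -> hidden weights
W2 = rng.normal(0.0, 0.5, (m, 1))  # sigma_k: hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.3
for _ in range(5000):
    H = sigmoid(X @ W1)      # hidden-layer activations
    pred = H @ W2            # linear output neuron: predicted L(t)
    err = pred - y           # error e between prediction and sample value
    # Back-propagate e and adjust both weight matrices by gradient descent.
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ ((err @ W2.T) * H * (1.0 - H)) / len(X)

mse = float(np.mean((sigmoid(X @ W1) @ W2 - y) ** 2))
```

After training, the mean squared prediction error falls well below the raw variance of the series, which is the "error keeps shrinking" behavior the text describes.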
Third, the predicted loads of the different factors are weighted and summed; for different business host types, different weight vectors are given:
ν = {νcpu, νmem, νdisk, νnet}.
A basic principle is given here: for a compute-type host, the weight of the disk is appropriately raised; for a storage-type host, the weights of the CPU and memory are appropriately raised; and for the network factor net, a constant weight is kept.
The final integrated load is:
L = {Lcpu, Lmem, Ldisk, Lnet} · {νcpu, νmem, νdisk, νnet}^T
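The weighted combination might be sketched as follows. The concrete weight values are hypothetical, chosen only to follow the stated principle (raise the disk weight for compute hosts, raise the CPU and memory weights for storage hosts):

```python
# Hypothetical per-type weight vectors nu = {v_cpu, v_mem, v_disk, v_net};
# the patent states only the principle, not numeric values.
WEIGHTS = {
    "compute": {"cpu": 0.2, "mem": 0.2, "disk": 0.4, "net": 0.2},
    "storage": {"cpu": 0.35, "mem": 0.25, "disk": 0.2, "net": 0.2},
}

def integrated_load(pred, host_type):
    """Weighted sum L = sum over factors of v_f * L_f."""
    v = WEIGHTS[host_type]
    return sum(v[f] * pred[f] for f in ("cpu", "mem", "disk", "net"))

# Example predicted factor loads: CPU high, the rest moderate.
pred = {"cpu": 0.9, "mem": 0.5, "disk": 0.3, "net": 0.4}
compute_l = integrated_load(pred, "compute")  # de-emphasizes the high CPU
storage_l = integrated_load(pred, "storage")  # flags the abnormal CPU level
```

For the same factor predictions, the storage-type weighting yields a higher integrated load than the compute-type weighting, matching the background discussion of abnormal CPU on storage nodes.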
The method of this embodiment can predict a host's load at some future time more precisely. By setting a reasonable threshold and raising an alert when the host's load reaches it, virtual machines can be migrated away before the point in time at which the system is likely to develop problems, which to a large extent prevents system failures and reduces faults.
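The threshold-and-alert step can be sketched as follows; the 0.8 threshold is an illustrative assumption, as the patent leaves the threshold value open:

```python
def check_and_alert(predicted_load, threshold=0.8):
    """Alert when the predicted integrated load reaches the threshold, so that
    virtual machines can be migrated away before the system develops problems.
    The default threshold of 0.8 is an assumption for illustration."""
    if predicted_load >= threshold:
        return (f"ALERT: predicted integrated load {predicted_load:.2f} "
                f"has reached threshold {threshold:.2f}; consider migrating VMs")
    return None
```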
Fig. 3 is a schematic diagram of a device for processing cloud resources according to an embodiment of the present invention. As shown in Fig. 3, the device of this embodiment comprises a memory and a processor, wherein:
the memory is configured to store a program for processing cloud resources;
the processor is configured to read and execute the program for processing cloud resources, performing the following operations:
grouping the load of a host by service type;
predicting, using a BP neural network, the load of each service-type group at time t;
weighting the predicted load of each service-type group to obtain the host's integrated load at time t.
Optionally, predicting, using a BP neural network, the load of each service-type group at time t comprises:
for each service type, taking the loads at the n instants before time t as training data, feeding them into the BP neural network for training, and predicting the load of each service-type group at time t.
Optionally, the service types include:
the central processing unit of the host, the memory of the host, the disk of the host, and the connection count of the host.
Optionally, weighting the predicted load of each service-type group comprises:
weighting the predicted load of each group by the weight preset for its service type, and summing.
An embodiment of the invention also provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the method for processing cloud resources.
Those skilled in the art will appreciate that all or some of the steps of the methods disclosed above, and the functional modules/units in the systems and devices, may be implemented as software, firmware, hardware, or appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have several functions, or one function or step may be executed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
Claims (10)
1. A method for processing cloud resources, characterized by comprising:
grouping the load of a host by service type;
predicting, using a BP neural network, the load of each service-type group at time t; and
weighting the predicted load of each service-type group to obtain the host's integrated load at time t.
2. The method according to claim 1, characterized in that predicting, using a BP neural network, the load of each service-type group at time t comprises:
for each service type, taking the loads at the n instants before time t as training data, feeding them into the BP neural network for training, and predicting the load of each service-type group at time t.
3. The method according to claim 1, characterized in that the service types include:
the central processing unit of the host, the memory of the host, the disk of the host, and the connection count of the host.
4. The method according to claim 1, characterized in that weighting the predicted load of each service-type group comprises:
weighting the predicted load of each group by the weight preset for its service type, and summing.
5. The method according to claim 1, characterized in that, after obtaining the host's integrated load at time t, the method further comprises:
raising an alert if the integrated load is determined to reach a threshold.
6. A device for processing cloud resources, comprising a memory and a processor, characterized in that:
the memory is configured to store a program for processing cloud resources; and
the processor is configured to read and execute the program for processing cloud resources, performing the following operations:
grouping the load of a host by service type;
predicting, using a BP neural network, the load of each service-type group at time t; and
weighting the predicted load of each service-type group to obtain the host's integrated load at time t.
7. The device according to claim 6, characterized in that predicting, using a BP neural network, the load of each service-type group at time t comprises:
for each service type, taking the loads at the n instants before time t as training data, feeding them into the BP neural network for training, and predicting the load of each service-type group at time t.
8. The device according to claim 6, characterized in that the service types include:
the central processing unit of the host, the memory of the host, the disk of the host, and the connection count of the host.
9. The device according to claim 6, characterized in that weighting the predicted load of each service-type group comprises:
weighting the predicted load of each group by the weight preset for its service type, and summing.
10. The device according to claim 6, characterized in that, after obtaining the host's integrated load at time t, the operations further comprise:
raising an alert if the integrated load is determined to reach a threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811293562.4A CN109388480A (en) | 2018-11-01 | 2018-11-01 | A kind of method and device handling cloud resource |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109388480A true CN109388480A (en) | 2019-02-26 |
Family
ID=65428211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811293562.4A Pending CN109388480A (en) | 2018-11-01 | 2018-11-01 | A kind of method and device handling cloud resource |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109388480A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103425524A (en) * | 2013-07-17 | 2013-12-04 | 北京邮电大学 | Method and system for balancing multi-service terminal aggregation |
CN103778474A (en) * | 2012-10-18 | 2014-05-07 | 华为技术有限公司 | Resource load capacity prediction method, analysis prediction system and service operation monitoring system |
CN105791151A (en) * | 2014-12-22 | 2016-07-20 | 华为技术有限公司 | Dynamic flow control method and device |
Non-Patent Citations (1)
Title |
---|
伤口: "How to calculate the maximum load of a web server?", 《个人图书馆》 (Personal Library) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110086888A (en) * | 2019-05-15 | 2019-08-02 | 上海淇毓信息科技有限公司 | More cluster dynamic load methods, device, electronic equipment based on RabbitMQ |
CN110086888B (en) * | 2019-05-15 | 2022-05-17 | 上海淇毓信息科技有限公司 | Multi-cluster dynamic load method and device based on RabbitMQ and electronic equipment |
CN110413406A (en) * | 2019-06-27 | 2019-11-05 | 莫毓昌 | A kind of task load forecasting system and method |
CN110377430A (en) * | 2019-07-24 | 2019-10-25 | 中南民族大学 | Data migration method, equipment, storage medium and device |
CN113129048A (en) * | 2019-12-31 | 2021-07-16 | 阿里巴巴集团控股有限公司 | Resource supply method, resource supply device, electronic equipment and computer readable storage medium |
CN111581068A (en) * | 2020-04-22 | 2020-08-25 | 北京华宇信息技术有限公司 | Terminal workload calculation method and device, storage medium, terminal and cloud service system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190226 |