CN116596112A - Universal cold-hot electric load prediction method and system - Google Patents


Info

Publication number
CN116596112A
Authority
CN
China
Prior art keywords: parameter, parameters, model, load, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310374025.7A
Other languages
Chinese (zh)
Inventor
刘伟
须钢
朱明华
牛松
沈罡
周小慧
Current Assignee
Habo Energy Technology Suzhou Co ltd
Original Assignee
Habo Energy Technology Suzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Habo Energy Technology Suzhou Co ltd filed Critical Habo Energy Technology Suzhou Co ltd
Priority to CN202310374025.7A
Publication of CN116596112A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • G06F18/15Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0985Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The application provides a universal cold-hot electric load prediction method and system, wherein the method comprises the following steps: acquiring the influencing parameters related to the target load to be predicted over the historical period, calculating the correlation between each influencing parameter and the target load, and retaining the parameters with higher correlation to reduce the data dimension; preprocessing the data; establishing a plurality of basic machine learning models; feeding the training-set data into the machine learning models for learning, completing the adjustment of the learned parameters and of the various hyper-parameters, and obtaining, for each machine learning algorithm, the highest-precision prediction model applicable to this type or project of load prediction; and feeding the test-set data into each algorithm's highest-precision prediction model for accuracy verification, comparing the fitting-accuracy indices of the models in a competitive evaluation, and obtaining the optimal load prediction model. The advantages of the application are that the method is universal, reduces the labor cost of producing predictions, and greatly improves the accuracy of load prediction.

Description

Universal cold-hot electric load prediction method and system
Technical Field
The application belongs to the field of demand-side energy load prediction for buildings and industry, and in particular relates to a universal cold-hot electric load prediction method and system.
Background
With rapid urbanization, the growth of industrial manufacturing, and rising living standards, both the overall demand for energy and the required quality of supply keep increasing, and the types of energy in use are diversifying. New-energy power generation carries large uncertainty and poses growing challenges for power dispatching; in recent years, partial electricity rationing has even occurred during consumption peaks. Predicting the consumption of each energy type in advance, scheduling the energy system rationally, and improving the stability of the energy system are therefore particularly important.
Existing data-driven machine learning methods have been applied to the field of load prediction, but a single model with fixed parameters is mostly suitable for only one or a few scenarios, whereas different types of machine learning models suit load prediction for different load types and scenarios. For example, cooling and heating loads generally show strong hysteresis, while the electric load generally depends strongly on the current state. Moreover, to obtain good fitting accuracy, each machine learning model requires algorithm researchers to tune its parameters repeatedly for the scenario and load type at hand, which is time-consuming and laborious.
It follows from the above analysis that load prediction effectively supports the safe and stable dispatching of energy supply. Technically, many data-driven load prediction methods exist, but each applies to a single scenario; they cannot run multiple algorithms in parallel or adjust parameters automatically to improve fitting accuracy, so their universality needs improvement. Establishing an adaptive, universal cold-hot electric load prediction method based on multiple machine learning algorithms, which tunes and optimizes its parameters automatically and applies to multi-scenario, multi-type load prediction, is therefore a problem to be solved urgently.
Disclosure of Invention
The application aims to overcome the defects of the prior art that a single model with fixed parameters is mostly suitable for only one or a few scenarios, and that each machine learning model requires algorithm researchers to tune its parameters repeatedly for the scenario and load type in order to obtain good fitting accuracy, which is time-consuming and laborious.
In order to achieve the above object, the present application provides a universal cold-hot electric load prediction method, which comprises:
step S1: acquiring the influencing parameters related to the target load to be predicted over the historical period, calculating the correlation between each influencing parameter and the target load, setting a correlation-coefficient threshold, and retaining the parameters whose correlation exceeds the threshold to reduce the data dimension;
step S2: deleting the abnormal data, supplementing the missing data, and preprocessing the data;
step S3: establishing a plurality of basic machine learning models;
step S4: substituting the training-set data into the machine learning models for learning, completing the adjustment of the learned parameters and of the various hyper-parameters, and obtaining, for each machine learning algorithm, the highest-precision prediction model applicable to this type or project of load prediction;
step S5: substituting the test-set data into each algorithm's highest-precision prediction model for accuracy verification, and comparing the fitting-accuracy indices of the models in a competitive evaluation to obtain the optimal load prediction model.
As an improvement of the above method, the calculation in step S1 of the correlation between each influencing parameter and the target load is specifically:
the correlation coefficient is calculated as
r = Σ(X − X′)(Y − Y′) / √( Σ(X − X′)² · Σ(Y − Y′)² )
wherein: r represents the correlation coefficient between the two parameters; X represents the value of the independent-variable parameter; X′ represents the mean of the independent-variable parameter; Y represents the value of the dependent-variable parameter; and Y′ represents the mean of the dependent-variable parameter.
As an improvement of the above method, the basic machine learning models established in step S3 comprise the XGBoost algorithm and the LSTM long short-term memory neural network.
As an improvement of the above method, the XGBoost algorithm parameters are set as follows: the objective is reg:squarederror; the tree depth is 4-16; the number of trees is 20-200; the learning rate is 0.1-0.7; the L2 regularization parameter is 0.8; the minimum child-node weight threshold is 1×10⁻⁷ to 1×10⁻⁴; the loss-reduction threshold is 0; and the loss function is RMSE.
As an improvement of the above method, the LSTM long short-term memory neural network parameters are set as follows: the dropout (random inactivation) parameter is 0.1-0.4; the number of network layers is 3; the number of nodes per layer is 40-200; the loss function is MSE; the number of iterations is 15-45; the optimizer is Adam, Adadelta, or SGD; the learning rate is 1×10⁻⁴; and the batch size is 8-32.
As an improvement of the above method, the hyper-parameter tuning mode is specifically: traversing the candidate hyper-parameter combinations to obtain the best-performing combination, limiting the search interval to discrete values, and reducing the number of searches by tuning the parameters in separate groups (split-parameter search).
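As an illustrative sketch of the split-parameter idea (not part of the claimed method; the grid values and the toy error function standing in for a real validation fit are assumptions), tuning the parameters in separate groups rather than over the full Cartesian product reduces the number of evaluations:

```python
from itertools import product

def grid_search(evaluate, grid):
    """Exhaustively score every combination in a discrete grid; lower score wins."""
    names = list(grid)
    best_params, best_score = None, float("inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical validation-error surface standing in for a real model fit.
def toy_error(p):
    return (p["depth"] - 8) ** 2 + (p["n_trees"] - 100) ** 2 / 100 + (p["lr"] - 0.3) ** 2

# Splitting the search (tree parameters first, then learning rate alone) needs
# 4*5 + 4 = 24 evaluations instead of the full 4*5*4 = 80.
tree_best, _ = grid_search(lambda p: toy_error({**p, "lr": 0.3}),
                           {"depth": [4, 8, 12, 16],
                            "n_trees": [20, 50, 100, 150, 200]})
lr_best, _ = grid_search(lambda p: toy_error({**tree_best, **p}),
                         {"lr": [0.1, 0.3, 0.5, 0.7]})
best = {**tree_best, **lr_best}
```

The split search finds the same optimum here because the toy error is separable; for a real model the groups are chosen so that weakly interacting parameters are tuned apart.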
As an improvement of the above method, the hyper-parameter tuning steps used for the XGBoost algorithm are as follows:
step B1: preliminarily adjust the maximum tree depth and the number of trees over set discrete intervals; the set discrete tuning interval for the tree depth is [4, 8, 12, 16] and for the number of trees is [20, 50, 100, 150, 200]; a preliminary preferred tree depth Depth_1 and tree number Num_1 are obtained by grid search;
step B2: narrow the tuning range of the two parameters of step B1 for fine tuning: adjust within Depth_1 ± 3 with a discrete step of 1, and within Num_1 ± 20 with a discrete step of 5, obtaining the optimal tree depth Depth_2 and tree number Num_2;
step B3: adjust the learning rate over the interval [0.1, 0.3, 0.5, 0.7] to obtain the optimal learning rate lr;
step B4: compare the goodness of fit of the test set and the training set on the result of the preceding steps to judge whether over-fitting has occurred; if the goodness-of-fit difference is greater than 0.1, over-fitting is present, and the minimum child-node weight threshold is adjusted over the interval [10⁻⁴, 10⁻⁵, 10⁻⁶, 10⁻⁷] until the goodness-of-fit difference between the test set and the training set is less than 0.1, finally obtaining the optimal parameters and model.
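A minimal sketch of the coarse-then-fine pattern of steps B1 and B2, with a hypothetical error function in place of a real training run:

```python
def coarse_to_fine(evaluate, coarse_values, radius, step):
    """Coarse pass over a wide discrete interval, then a fine pass around the winner."""
    coarse_best = min(coarse_values, key=evaluate)
    fine_values = range(coarse_best - radius, coarse_best + radius + 1, step)
    return min(fine_values, key=evaluate)

# Toy error whose true optimum (depth 7) does not sit on the coarse grid.
err = lambda depth: (depth - 7) ** 2
depth_1 = min([4, 8, 12, 16], key=err)                            # B1: coarse pass
depth_2 = coarse_to_fine(err, [4, 8, 12, 16], radius=3, step=1)   # B2: fine pass
```

The coarse pass lands on 8; the fine pass within ±3 recovers the off-grid optimum 7.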
As an improvement of the above method, the hyper-parameter tuning steps for the LSTM long short-term memory neural network are as follows:
step C1: tune the optimizer and the batch size; the optimizer tuning range is [Adam, Adadelta, SGD] and the batch-size tuning range is [8, 16, 24, 32], obtaining the applicable optimizer and the optimal batch size;
step C2: preliminarily adjust the number of nodes in each network layer, starting with the first hidden layer, over a set discrete interval; the set discrete tuning interval for the first hidden layer is [40, 80, 120, 160, 200]; a preliminary preferred node number Hid_1 is obtained;
step C3: narrow the tuning range of the first hidden layer's node number from step C2 for fine tuning, adjusting within the range Hid_1 ± 2 with a discrete interval of 5, obtaining the optimal first-hidden-layer node number Hid_1_1;
step C4: repeat steps C2 and C3 for hidden layers 2 and 3 to obtain their optimal node numbers Hid_2_2 and Hid_3_3;
step C5: adjust the dropout parameter over the interval [0.1, 0.2, 0.3, 0.4] to obtain the optimal dropout parameter;
step C6: adjust the number of iterations over the interval [15, 20, 25, 30, 35, 40, 45] to obtain the optimal number of iterations.
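The stage-by-stage tuning of steps C1-C6 can be sketched as follows; the error function and the parameter names (batch, hid1, dropout, epochs) are hypothetical stand-ins for real LSTM training runs:

```python
def sequential_tune(evaluate, stages, defaults):
    """Tune hyper-parameters one stage at a time, freezing each stage's winner."""
    params = dict(defaults)
    for name, candidates in stages:
        params[name] = min(candidates, key=lambda v: evaluate({**params, name: v}))
    return params

# Hypothetical, separable error surface over the knobs named in the text.
def err(p):
    return (abs(p["batch"] - 16) + abs(p["hid1"] - 120) / 10
            + abs(p["dropout"] - 0.2) * 100 + abs(p["epochs"] - 30))

stages = [("batch", [8, 16, 24, 32]),                    # C1
          ("hid1", [40, 80, 120, 160, 200]),             # C2
          ("dropout", [0.1, 0.2, 0.3, 0.4]),             # C5
          ("epochs", [15, 20, 25, 30, 35, 40, 45])]      # C6
best = sequential_tune(err, stages,
                       {"batch": 8, "hid1": 40, "dropout": 0.1, "epochs": 15})
```

Because each stage holds the previously tuned values fixed, the total cost is the sum of the stage sizes rather than their product.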
As an improvement of the above method, the accuracy verification is specifically:
the test-set data are substituted into each algorithm's highest-precision prediction model, and the fitting-accuracy indices of each model are calculated;
the fitting-accuracy indices comprise the goodness of fit, the mean square error, and the mean absolute error;
the model with high goodness of fit and low mean square error and mean absolute error is selected as the optimal load prediction model.
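The three fitting-accuracy indices can be computed as below; this is a plain illustrative implementation, not code from the application:

```python
def goodness_of_fit(y_true, y_pred):
    """R-squared: 1 minus the ratio of residual to total sum of squares."""
    mean = sum(y_true) / len(y_true)
    sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    sst = sum((t - mean) ** 2 for t in y_true)
    return 1 - sse / sst

def mse(y_true, y_pred):
    """Mean square error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

The competitive evaluation then keeps the model with the highest goodness of fit and the lowest MSE and MAE on the test set.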
The application also provides a universal cold-hot electric load prediction system, realized on the basis of any one of the above methods, comprising:
and a data processing module: the method comprises the steps of acquiring historical data related to load, analyzing the correlation between parameters and the load, reducing the data dimension, cleaning and normalizing the related data, and finally dividing a data set into a training set and a testing set;
a load prediction model establishment module, for establishing preliminary versions of a plurality of machine learning prediction models, feeding in the training-set data for learning, and adjusting the learned parameters and hyper-parameters in each model to obtain, for each machine learning model, the optimal model for the project's load prediction; and
a model evaluation and preservation module, for feeding the test-set data into the prediction models output by the load prediction model establishment module, obtaining the prediction results, comparing them with the true values, evaluating model precision with multiple indices, and retaining the optimal prediction model.
Compared with the prior art, the advantages of the application are:
With the universal cold-hot electric load prediction method and system, once the historical data and current state data related to the load are obtained, the correlation analysis of the data is performed automatically, reducing the data dimension and the fitting cost; abnormal values are handled by automatic data cleaning; multiple machine learning models are established, with automatic adjustment of the various hyper-parameters in addition to the learned parameters; and an evaluation model is introduced to store the preferred model, so that the optimal model applicable to the scenario type is obtained. The method and system can run these processes automatically for single buildings, regional parks, and industry; they have strong universality, reduce labor cost, and greatly improve the accuracy of load prediction.
Drawings
FIG. 1 is a schematic flow chart of the universal cold-hot electric load prediction method;
FIG. 2 is a schematic diagram of an example implementation of the present application;
FIG. 3 is a flow chart illustrating an example operation of the present application;
FIG. 4 is a schematic diagram of the correlation analysis results for the load-related factors;
FIG. 5 is a schematic diagram of the LSTM neural network;
FIG. 6 is a schematic diagram of the prediction results of the XGBoost algorithm training set model;
FIG. 7 is a schematic diagram of the prediction results of the XGBoost algorithm test set model;
FIG. 8 is a schematic diagram of the model predictive results of the training set of the LSTM neural network algorithm;
FIG. 9 is a schematic diagram showing the prediction results of the LSTM neural network algorithm test set model.
Detailed Description
The technical scheme of the application is described in detail below with reference to the accompanying drawings.
The application provides a universal cold-hot electric load prediction method and system that automate the whole load prediction process for conventional energy; the machine learning models are tuned and adapted automatically, with low labor cost and high prediction accuracy, meeting the practical requirements of engineering. As shown in fig. 1, the method comprises:
step S1: acquiring the influencing parameters possibly related to the target load to be predicted over the historical period, calculating the correlation between each influencing parameter and the target load, setting a correlation-coefficient threshold, and retaining the parameters with higher correlation to actively reduce the data dimension;
step S2: deleting abnormal data with an unsupervised learning algorithm, supplementing missing data, and preprocessing the data;
step S3: establishing a plurality of basic machine learning models, and setting the key parameters and the main parameter tuning ranges in each model;
step S4: feeding the training-set data into the machine learning models for learning, completing the automatic adjustment of the learned parameters and of the various hyper-parameters, obtaining, for each machine learning algorithm, the highest-precision prediction model applicable to this type or project of load prediction, and storing it temporarily;
step S5: feeding the test-set data into each algorithm's optimal model for accuracy verification, comparing the fitting-accuracy indices of the models in a competitive evaluation, and saving the applicable optimal load prediction model.
As shown in fig. 2 and fig. 3, as an example of the present application, the process of predicting the hourly cooling load of a certain project with the universal cold-hot electric load prediction method is as follows:
Step S1: acquiring the influencing parameters possibly related to the target load to be predicted over the historical period, automatically identifying and calculating the correlation coefficient between each influencing parameter and the target load, setting a correlation-coefficient threshold, and retaining the parameters with higher correlation to actively reduce the data dimension.
The influencing parameters possibly related to the target load to be predicted include: the current outdoor temperature, current outdoor humidity, current irradiance, personnel work/rest schedule, the outdoor temperature 1 hour prior, the outdoor humidity 1 hour prior, the irradiance 1 hour prior, and the load 24 hours, 2 hours, and 1 hour prior.
The correlation coefficient used is the Pearson correlation coefficient, calculated as
r = Σ(X − X′)(Y − Y′) / √( Σ(X − X′)² · Σ(Y − Y′)² )
wherein: r is the correlation coefficient between the two parameters, X is the value of the independent-variable parameter, X′ is the mean of the independent-variable parameter, Y is the value of the dependent-variable parameter, and Y′ is the mean of the dependent-variable parameter.
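A direct transcription of the Pearson formula above into code, for illustration only:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient r between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # numerator
    sx = sqrt(sum((a - mx) ** 2 for a in x))               # sqrt of sum of squares, x
    sy = sqrt(sum((b - my) ** 2 for b in y))               # sqrt of sum of squares, y
    return cov / (sx * sy)
```

r ranges from −1 (perfect negative correlation) through 0 (no linear correlation) to +1 (perfect positive correlation).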
The correlation coefficient of each candidate parameter with the predicted load is calculated in turn; the results are shown in fig. 4 and the table below:
related parameters Pearson correlation coefficient
Current outdoor temperature 0.38
Current outdoor humidity 0.15
Current irradiance 0.58
Work and rest for personnel 0.88
Outdoor temperature for the first 1 hour 0.31
Outdoor humidity for the first 1 hour 0.14
Irradiance of the first 1 hour 0.58
Load for the first 24 hours 0.70
Load for the first 2 hours 0.74
Load for the first 1 hour 0.89
By convention, a correlation coefficient of 0.8-1.0 indicates extremely strong correlation, 0.6-0.8 strong correlation, 0.4-0.6 moderate correlation, 0.2-0.4 weak correlation, and 0.0-0.2 extremely weak or no correlation. The screening threshold is set to 0.2, so the parameters whose correlation coefficient is below 0.2 (the current outdoor humidity and the outdoor humidity 1 hour prior) are removed, completing the data dimension reduction.
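Applying the 0.2 threshold to the values of the example table can be sketched as follows (the dictionary keys are paraphrased parameter names, not identifiers from the application):

```python
# Correlation values from the example table; screen out |r| < 0.2.
correlations = {
    "current outdoor temperature": 0.38,
    "current outdoor humidity": 0.15,
    "current irradiance": 0.58,
    "personnel work/rest schedule": 0.88,
    "outdoor temperature 1 hour prior": 0.31,
    "outdoor humidity 1 hour prior": 0.14,
    "irradiance 1 hour prior": 0.58,
    "load 24 hours prior": 0.70,
    "load 2 hours prior": 0.74,
    "load 1 hour prior": 0.89,
}
THRESHOLD = 0.2
kept = {name: r for name, r in correlations.items() if abs(r) >= THRESHOLD}
```

Only the two humidity parameters fall below the threshold, leaving 8 of the 10 candidates as model inputs.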
Step S2: deleting abnormal data with an unsupervised learning algorithm, supplementing missing data, and preprocessing the data.
The unsupervised learning algorithm is the K-means clustering algorithm. Its principle is: randomly generate K cluster centers in the data; compute the distance from each sample to the K centers and assign each sample to the cluster of its nearest center; once all samples are classified, recompute each cluster's center as the centroid of all samples in that cluster; and repeat until the cluster centers no longer change.
The K-means clustering algorithm implementation flow comprises 5 steps:
step A1: randomly selecting K points as the initial centroids;
step A2: calculating the Euclidean distance between each point and each centroid;
step A3: assigning each point to the centroid with the smallest distance, forming K clusters;
step A4: calculating the mean of each cluster and updating it as the new centroid;
step A5: repeating steps A2-A4 until the clusters no longer change or the maximum number of iterations is reached.
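A minimal illustrative K-means on one-dimensional samples, following steps A1-A5 above; the sample data are invented for the sketch:

```python
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Plain K-means clustering, mirroring steps A1-A5 of the text."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)                  # A1: random initial centroids
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:                               # A2-A3: assign to nearest centroid
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # A4: recompute each centroid as its cluster mean (keep old if cluster empty)
        new = [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]
        if new == centroids:                           # A5: stop once centroids settle
            break
        centroids = new
    return sorted(centroids)

# Two obvious groups around 1.0 and 10.0; the outliers would be the far cluster.
centroids = kmeans([1.0, 1.2, 0.8, 10.0, 10.2, 9.8], k=2)
```

In the cleaning step, samples far from every centroid (or belonging to a sparse cluster) are treated as abnormal data and deleted.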
In this example, there are 4104 data groups before processing and 3805 groups after processing; 299 groups of abnormal data are deleted.
The data preprocessing method is data normalization, calculated as
x′ = (xᵢ − min(x)) / (max(x) − min(x))
wherein: xᵢ is the raw data, min(x) is the minimum value of the parameter, max(x) is the maximum value of the parameter, and x′ is the normalized data. The processed data are divided into a training set and a test set.
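The normalization formula above, transcribed directly for illustration (assuming the parameter is not constant, so that max(x) > min(x)):

```python
def min_max_normalize(values):
    """x' = (x_i - min(x)) / (max(x) - min(x)), mapping the series onto [0, 1]."""
    lo, hi = min(values), max(values)
    span = hi - lo  # assumed nonzero: a constant parameter carries no information
    return [(v - lo) / span for v in values]
```

After normalization the minimum maps to 0, the maximum to 1, and all other samples fall linearly in between, so parameters with different physical units become comparable.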
Step S3: establishing 2 or more basic machine learning models, and setting the key parameters and the main parameter tuning ranges in each model.
the machine learning is established to comprise XGBoost (Extreme Gradient Boosting) algorithm and LSTM (Long Short-Term Memory) Long-Term Memory neural network, and most load prediction scenes can be dealt with.
The XGBoost algorithm is an ensemble algorithm based on gradient-boosted trees and belongs to the supervised learning algorithms. It performs ensemble learning with multiple decision trees, each partitioning the data and selecting features differently, which improves the accuracy of the load prediction model. It uses regularization, which effectively avoids over-fitting and under-fitting and improves the robustness of the model. It can output the importance of each feature, so that the model's predictions can be understood, giving the load prediction model strong interpretability. And it uses gradient boosting, which effectively reduces the computation and speeds up training. In sum, in the field of load prediction the XGBoost algorithm offers high accuracy, strong robustness, strong interpretability, fast training, and other advantages.
As shown in FIG. 5, the LSTM long short-term memory neural network is a special RNN with four neural network layers that can learn dependencies across long sequences, solving the gradient explosion and gradient vanishing problems of time-series prediction. It forgets or memorizes by controlling the discarding or adding of information through gates. The forget gate f_t decides, from the previous cell state C_{t−1} and the input x_t, which information to discard and which to retain; the input gate passes x_t through σ and tanh respectively to determine the values to update and to generate new candidate values; together with the forget gate f_t, these update the cell state. The updated cell state C_t is passed through the tanh function and combined with the output gate o_t to produce the output h_t. The descent gradient of each parameter is then computed from the output loss, and the network parameters are updated accordingly.
The LSTM neural network algorithm can process time-series data: with its memory cell and gating mechanism it effectively captures the long-term dependencies in the time series used for load prediction, improving model accuracy. It uses regularization, which effectively avoids over-fitting and under-fitting and improves the robustness of the load prediction model. It can also account for the influence of multiple factors on the load simultaneously, further improving the accuracy of the prediction model. In sum, in the field of load prediction the LSTM neural network algorithm offers time-series capability, strong robustness, high accuracy, and other advantages.
The key parameters and their main tuning ranges are as follows:
for XGBoost algorithm:
the objective type is reg:squarederror; the tree depth is 4–16; the number of trees (iterations) is 20–200; the learning rate is 0.1–0.7; the L2 regularization parameter is 0.8; the minimum child-node weight threshold is 1×10⁻⁷ to 1×10⁻⁴; the loss-reduction threshold is 0; the loss function is RMSE; other parameters take their default values.
For LSTM long-term memory neural networks:
the data dimension equals the number of parameters retained in step B1; the random inactivation (dropout) parameter is 0.1–0.4; the number of neural network layers is 3; the number of nodes per layer is 40–200; the loss function is MSE; the number of iterations is 15–45; the optimizer is Adam, Adadelta, or SGD; the learning rate is 1×10⁻⁴; the batch size is 8–32; other parameters take their default values.
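The tuning ranges listed above can be collected as discrete search spaces. In the sketch below the dictionary key names follow common XGBoost and deep-learning conventions and are illustrative, not taken from the patent:

```python
from math import prod

# XGBoost search space, from the ranges above (key names assumed).
xgb_space = {
    "objective": ["reg:squarederror"],
    "max_depth": [4, 8, 12, 16],
    "n_estimators": [20, 50, 100, 150, 200],
    "learning_rate": [0.1, 0.3, 0.5, 0.7],
    "reg_lambda": [0.8],
    "min_child_weight": [1e-7, 1e-6, 1e-5, 1e-4],
}

# LSTM search space, from the ranges above (key names assumed).
lstm_space = {
    "dropout": [0.1, 0.2, 0.3, 0.4],
    "n_layers": [3],
    "units_per_layer": [40, 80, 120, 160, 200],
    "epochs": [15, 20, 25, 30, 35, 40, 45],
    "optimizer": ["Adam", "Adadelta", "SGD"],
    "learning_rate": [1e-4],
    "batch_size": [8, 16, 24, 32],
}

# Size of a full grid over each space.
n_xgb_combos = prod(len(v) for v in xgb_space.values())
n_lstm_combos = prod(len(v) for v in lstm_space.values())
```

The grid sizes show why the later sections search coarse-to-fine instead of exhaustively over every parameter at once.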
Step S4: bring the training-set data into the two or more machine learning models of step S3 for learning, automatically adjusting the parameters in each machine learning model by hyper-parameter tuning, to obtain for each machine learning algorithm the highest-accuracy prediction model applicable to this type of load prediction or this project, and temporarily store these models.
Hyper-parameter tuning refers to the adjustment of machine learning hyper-parameters. In conventional machine learning, besides the parameters learned internally during training, there are many hyper-parameters, and their influence on the network is also large. When load prediction faces different scenarios, machine learning usually needs different hyper-parameters to reach better prediction accuracy, but adjusting these parameters manually is time-consuming and laborious. Common hyper-parameters include:
1. Network structure: the connections between neurons, the number of layers, the number of neurons per layer, the activation function, etc.;
2. Optimization parameters: the optimization method, learning rate, and sample batch size;
3. Regularization parameters, etc.
The hyper-parameter tuning approach of this embodiment is: traverse all possible hyper-parameter combinations to obtain the best-performing combination, limit the search interval to discrete values, and reduce the number of searches with a staged (coarse-to-fine) parameter search, saving computation time.
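The traversal of all discrete hyper-parameter combinations can be sketched in pure Python. The scoring function below is a stand-in for training a model and measuring its validation loss; the names and toy values are assumptions for illustration:

```python
import itertools

def grid_search(space, score):
    """Try every discrete combination and keep the lowest-scoring one."""
    keys = list(space)
    best_params, best_score = None, float("inf")
    for values in itertools.product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score(params)  # stand-in for: train model, return validation loss
        if s < best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy score, minimised at depth 8 and 100 trees.
space = {"max_depth": [4, 8, 12, 16], "n_estimators": [20, 50, 100, 150, 200]}
best, _ = grid_search(space, lambda p: abs(p["max_depth"] - 8)
                                       + abs(p["n_estimators"] - 100) / 10)
```

Restricting each parameter to a short list of discrete values keeps the product of the lists small enough to traverse completely, which is what the staged search below exploits.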
The XGBoost algorithm is generally sensitive to the tree depth and the number of trees, so these undergo coarse and then fine adjustment. The XGBoost hyper-parameter tuning steps are as follows:
Step B1: preliminarily adjust the maximum tree depth and the number of trees over wide discrete intervals: the discrete tuning interval for the tree depth is [4, 8, 12, 16] and for the number of trees [20, 50, 100, 150, 200]; the preliminary optimum tree depth is Depth_1 and the number of trees is Num_1.
Step B2: on the result of step B1, narrow the tuning range of these 2 parameters for fine adjustment: tune within Depth_1 ± 3 with a discrete interval of 1, and within Num_1 ± 20 with a discrete interval of 5, obtaining the optimum tree depth Depth_2 and the optimum number of trees Num_2.
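Steps B1 and B2 amount to a coarse-then-fine discrete search. A minimal sketch on a single parameter, with a toy objective standing in for the model's validation error:

```python
def argmin_over(candidates, objective):
    """Return the candidate with the lowest objective (first on ties)."""
    return min(candidates, key=objective)

# Toy objective: pretend validation error is minimised at depth 6.
objective = lambda depth: (depth - 6) ** 2

# Coarse pass (as in step B1): wide discrete interval.
coarse = argmin_over([4, 8, 12, 16], objective)
# Fine pass (as in step B2): step 1 inside coarse ± 3.
fine = argmin_over(range(max(1, coarse - 3), coarse + 4), objective)
```

The coarse pass lands near the optimum with few evaluations; the fine pass then recovers the exact value inside the narrowed interval, which is far cheaper than searching every depth from 4 to 16 in one pass.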
Step B3: fine-tune the learning rate over the interval [0.1, 0.3, 0.5, 0.7] to obtain the optimum learning rate lr;
Step B4: on the result of step B2, compare the goodness of fit R² of the test set and the training set (as shown in FIGS. 7 and 8) to judge whether over-fitting occurs. If the difference in R² is greater than 0.1, the model is over-fitted and the minimum child-node weight threshold is adjusted over the interval [10⁻⁴, 10⁻⁵, 10⁻⁶, 10⁻⁷] until the difference in goodness of fit between the test set and the training set is less than 0.1; the optimum parameters and the model are then obtained and saved.
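The over-fitting check of step B4 can be sketched as follows. The `evaluate` interface and the toy (train R², test R²) pairs are assumptions for illustration, not values from the patent:

```python
def is_overfitting(r2_train, r2_test, gap=0.1):
    """Step B4's criterion: train/test R² difference above 0.1."""
    return (r2_train - r2_test) > gap

def tune_min_child_weight(evaluate, candidates=(1e-4, 1e-5, 1e-6, 1e-7)):
    """evaluate(w) -> (train R², test R²); interface assumed for illustration."""
    for w in candidates:
        r2_train, r2_test = evaluate(w)
        if not is_overfitting(r2_train, r2_test):
            return w
    return candidates[-1]

# Toy evaluator: each candidate mapped to a made-up (train R², test R²) pair.
toy = {1e-4: (0.99, 0.80), 1e-5: (0.98, 0.85), 1e-6: (0.97, 0.90), 1e-7: (0.96, 0.92)}
chosen = tune_min_child_weight(lambda w: toy[w])
```

The loop stops at the first candidate whose train/test gap falls below 0.1, mirroring the "until the difference is less than 0.1" condition.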
The optimum parameters obtained in this example are as follows:
the maximum tree depth is: 6;
the number of trees is: 190;
the learning rate is: 0.1;
the minimum child-node weight threshold is the default value and requires no adjustment.
The LSTM neural network's accuracy is usually sensitive to the number of nodes per layer, so these undergo coarse and then fine adjustment. The hyper-parameter tuning steps of the LSTM long short-term memory neural network are as follows:
Step C1: first tune the optimizer and the batch size: the optimizer tuning range is [Adam, Adadelta, SGD] and the batch-size tuning range is [8, 16, 24, 32], giving the applicable optimizer and the optimum batch size.
Step C2: preliminarily adjust the number of network nodes per layer, first using a wide discrete interval for the first hidden layer: its discrete tuning interval is [40, 80, 120, 160, 200], and the preliminary optimum is Hid_1.
Step C3: on the result of step C2, narrow the tuning range of the first hidden layer's nodes for fine adjustment: tune within Hid_1 ± 20 with a discrete interval of 5, obtaining the optimum number of first-layer hidden nodes Hid_1_1.
Step C4: repeat steps C2 and C3 to obtain the optimum numbers of hidden nodes of layers 2 and 3, Hid_2_2 and Hid_3_3.
Step C5: adjust the random inactivation parameter over the interval [0.1, 0.2, 0.3, 0.4] to obtain its optimum value.
Step C6: adjust the number of iterations over the interval [15, 20, 25, 30, 35, 40, 45] to obtain the optimum number of iterations.
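Steps C2–C4 tune the three layers' node counts in sequence, each with a coarse pass then a fine pass while the other layers stay fixed. A pure-Python sketch with a toy objective (the starting point and the objective are assumptions for illustration):

```python
def tune_nodes(objective, n_layers=3):
    """Tune each layer's node count in turn, coarse grid then fine grid."""
    best = [120] * n_layers  # arbitrary starting point for the untuned layers
    coarse_grid = [40, 80, 120, 160, 200]
    for i in range(n_layers):
        # Coarse pass (steps C2 / C4): wide discrete interval.
        best[i] = min(coarse_grid,
                      key=lambda n: objective(best[:i] + [n] + best[i + 1:]))
        # Fine pass (step C3): step 5 inside the coarse optimum ± 20.
        fine_grid = range(max(5, best[i] - 20), best[i] + 21, 5)
        best[i] = min(fine_grid,
                      key=lambda n: objective(best[:i] + [n] + best[i + 1:]))
    return best

# Toy objective, minimised exactly at the node counts found in this example.
target = [120, 120, 60]
objective = lambda nodes: sum((a - b) ** 2 for a, b in zip(nodes, target))
```

Tuning one layer at a time turns a three-dimensional grid into three one-dimensional searches, at the cost of assuming the layers' optima are largely independent.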
The optimum parameters obtained in this example are as follows:
the batch size is: 16;
the number of iterations is: 35;
the optimizer is: Adam;
the number of network nodes per layer is: 120, 120, 60;
the random inactivation parameter is: 0.2.
The LSTM neural network algorithm training set model prediction results are shown in fig. 7.
Step S5: bring the test-set data into each optimum model obtained in step S4 for accuracy verification, compare the models' fitting-accuracy indices competitively, and save the applicable optimum load prediction model.
The accuracy verification indices for each model comprise: goodness of fit (R²), mean square error (MSE), and mean absolute error (MAE).
The goodness of fit (R²) is calculated as:

R² = 1 − Σ(y_i − f_i)² / Σ(y_i − y′)²

wherein: y_i is the true value to be predicted, f_i is the model predicted value, y′ is the average of the true values, and n is the number of data points. A higher goodness of fit indicates higher model accuracy.
The mean square error (MSE) is calculated as:

MSE = (1/n) Σ(y_i − f_i)²

wherein: y_i is the true value to be predicted, f_i is the model predicted value, and n is the number of data points. A lower mean square error indicates higher model accuracy.
The mean absolute error (MAE) is calculated as:

MAE = (1/n) Σ|y_i − f_i|

wherein: y_i is the true value to be predicted, f_i is the model predicted value, and n is the number of data points. A lower mean absolute error indicates higher model accuracy.
The accuracy-index results obtained by bringing the test-set data into each model are shown in the following table:

Index | XGBoost algorithm | LSTM long short-term memory neural network
Goodness of fit (R²) | 0.97 | 0.94
Mean square error (MSE) | 559 | 905
Mean absolute error (MAE) | 9.47 | 13.24
The XGBoost algorithm and LSTM neural network algorithm test set model predictions are shown in FIGS. 7 and 9.
From the calculation results, the XGBoost algorithm has the higher fitting accuracy in this scenario, and the system saves this model as the final model for future load prediction.
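The competitive evaluation of step S5 reduces to ranking the saved models by their test-set indices. A sketch using the values from the table above (the tie-breaking order is an assumption):

```python
# Test-set indices from the table above.
results = {
    "XGBoost": {"r2": 0.97, "mse": 559, "mae": 9.47},
    "LSTM": {"r2": 0.94, "mse": 905, "mae": 13.24},
}

def pick_best(results):
    """Highest R² wins; ties broken by lower MSE, then lower MAE (assumed order)."""
    return max(results, key=lambda m: (results[m]["r2"],
                                       -results[m]["mse"],
                                       -results[m]["mae"]))

best_model = pick_best(results)
```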
The application also provides a universal cold-hot electric load prediction system, which comprises the following modules:
and a data processing module: the method is used for acquiring historical data related to the load, analyzing the correlation between the parameters and the load, reducing the data dimension, cleaning the related data, carrying out normalization processing, and finally dividing the data set into a training set and a testing set.
Load prediction model establishment module: used to establish preliminary models for a plurality of machine learning prediction models, bring the training-set data into them for learning, and automatically adjust the learning parameters and hyper-parameters in each model to obtain, for each machine learning model, the optimum model applicable to the project's load prediction.
Model evaluation and preservation module: the method comprises the steps of carrying test set data into a plurality of prediction models output by a load prediction model building module, obtaining a prediction result, comparing the prediction result with a true value, evaluating model precision by using a plurality of indexes, and reserving an optimal prediction model.
The present application may also provide a computer apparatus comprising: at least one processor, memory, at least one network interface, and a user interface. The various components in the device are coupled together by a bus system. It will be appreciated that a bus system is used to enable connected communications between these components. The bus system includes a power bus, a control bus, and a status signal bus in addition to the data bus.
The user interface may include, among other things, a display, a keyboard, or a pointing device, such as a mouse, trackball, touch pad, or touch screen.
It will be appreciated that the memory in the disclosed embodiments of this application can be volatile memory or non-volatile memory, or can include both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some implementations, the memory stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof: an operating system and application programs.
The operating system includes various system programs, such as a framework layer, a core library layer, a driving layer, and the like, and is used for realizing various basic services and processing hardware-based tasks. Applications, including various applications such as Media Player (Media Player), browser (Browser), etc., are used to implement various application services. The program implementing the method of the embodiment of the present disclosure may be contained in an application program.
In the above embodiment, the processor may be configured to call a program or instructions stored in the memory (specifically, a program or instructions stored in an application program) to perform the steps of the above method.
The method described above may be applied in a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The processor may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable gate array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The methods, steps and logic blocks disclosed above may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method as disclosed above may be embodied directly in hardware for execution by a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP devices, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general purpose processors, controllers, microcontrollers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the inventive techniques may be implemented with functional modules (e.g., procedures, functions, and so on) that perform the inventive functions. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The present application may also provide a non-volatile storage medium for storing a computer program. The steps of the above-described method embodiments may be implemented when the computer program is executed by a processor.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present application and are not limiting. Although the present application has been described in detail with reference to the embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the present application, which is intended to be covered by the appended claims.

Claims (10)

1. A universal cold and hot electrical load prediction method, the method comprising:
step S1: acquiring influence parameters related to the target load to be predicted in the history time, calculating the correlation between each influence parameter and the target load, setting a correlation coefficient threshold value, and reserving parameters of which the correlation exceeds the correlation coefficient threshold value to realize data dimension reduction;
step S2: deleting the abnormal data, supplementing the missing data, and preprocessing the data;
step S3: establishing a plurality of machine learning basic models;
step S4: substituting the training set data into a plurality of machine learning models for learning, and completing adjustment of learning parameters and adjustment of various super parameters to obtain a prediction model with highest precision, which is applicable to the load prediction of the type or the project, of different machine learning algorithms;
step S5: and carrying out accuracy verification on the test set data carried into the prediction model with the highest accuracy of each machine learning, and carrying out competition evaluation comparison on the fitting accuracy index of each model to obtain the optimal load prediction model.
2. The method according to claim 1, wherein the calculation in step S1 of the correlation between each influence parameter and the target load is specifically:
the adopted correlation coefficient calculation method is:
R = Σ(X − X′)(Y − Y′) / √( Σ(X − X′)² · Σ(Y − Y′)² )
wherein: R represents the correlation coefficient between two parameters; X represents the value of the independent variable parameter; X′ represents the average value of the independent variable parameter; Y represents the value of the dependent variable parameter; and Y′ represents the average value of the dependent variable parameter.
3. The method of claim 1, wherein the machine learning base models built in step S3 include the XGBoost algorithm and the LSTM long short-term memory neural network.
4. The universal cold and hot electrical load prediction method according to claim 3, wherein the XGBoost algorithm parameters are set as: the objective type is reg:squarederror; the tree depth is 4–16; the number of trees is 20–200; the learning rate is 0.1–0.7; the L2 regularization parameter is 0.8; the minimum child-node weight threshold is 1×10⁻⁷ to 1×10⁻⁴; the loss-reduction threshold is 0; the loss function is RMSE.
5. The universal cold and hot electrical load prediction method according to claim 3, wherein the LSTM long short-term memory neural network parameters are set as: the random inactivation parameter is 0.1–0.4; the number of neural network layers is 3; the number of nodes per layer is 40–200; the loss function is MSE; the number of iterations is 15–45; the optimizer is Adam, Adadelta, or SGD; the learning rate is 1×10⁻⁴; the batch size is 8–32.
6. The universal cold and hot electrical load prediction method according to claim 3, wherein the hyper-parameter tuning mode is specifically: traverse all possible hyper-parameter combinations to obtain the best-performing combination, limit the search interval to discrete values, and reduce the number of searches with a staged parameter search.
7. The universal cold and hot electrical load prediction method according to claim 6, wherein the hyper-parameter tuning steps used by the XGBoost algorithm are as follows:
step B1: preliminarily adjust the maximum tree depth and the number of trees using set discrete intervals; the set discrete intervals are: [4, 8, 12, 16] for the tree depth and [20, 50, 100, 150, 200] for the number of trees; obtain through grid searching a preliminary preferred tree depth Depth_1 and number of trees Num_1;
step B2: on the result of step B1, narrow the tuning range of these 2 parameters for fine adjustment: tune within Depth_1 ± 3 with a discrete interval of 1, and within Num_1 ± 20 with a discrete interval of 5, obtaining the optimum tree depth Depth_2 and number of trees Num_2;
step B3: adjust the learning rate over the interval [0.1, 0.3, 0.5, 0.7] to obtain the optimum learning rate lr;
step B4: on the result of step B2, compare the goodness of fit of the test set and the training set to judge whether over-fitting occurs; if the difference in goodness of fit is greater than 0.1, the model is over-fitted and the minimum child-node weight threshold is adjusted over the interval [10⁻⁴, 10⁻⁵, 10⁻⁶, 10⁻⁷] until the difference in goodness of fit between the test set and the training set is less than 0.1, finally obtaining the optimum parameters and the model.
8. The universal cold and hot electrical load prediction method according to claim 6, wherein the hyper-parameter tuning steps of the LSTM long short-term memory neural network are as follows:
step C1: tune the optimizer and the batch size: the optimizer tuning range is [Adam, Adadelta, SGD] and the batch-size tuning range is [8, 16, 24, 32], obtaining the applicable optimizer and the optimum batch size;
step C2: preliminarily adjust the number of network nodes per layer, first adjusting the number of first-hidden-layer nodes using a set discrete interval; the set discrete interval of the first hidden layer is [40, 80, 120, 160, 200]; obtain a preliminary preferred value Hid_1;
step C3: on the result of step C2, narrow the tuning range of the first hidden layer's nodes for fine adjustment: tune within Hid_1 ± 20 with a discrete interval of 5, obtaining the optimum number of first-layer hidden nodes Hid_1_1;
step C4: repeat steps C2 and C3 to obtain the optimum numbers of hidden nodes of layers 2 and 3, Hid_2_2 and Hid_3_3;
step C5: adjust the random inactivation parameter over the interval [0.1, 0.2, 0.3, 0.4] to obtain its optimum value;
step C6: adjust the number of iterations over the interval [15, 20, 25, 30, 35, 40, 45] to obtain the optimum number of iterations.
9. The method for predicting a universal cold-hot electrical load according to claim 1, wherein the precision verification specifically comprises:
the test set data are brought into the prediction model with highest precision for each machine learning, and fitting precision indexes of each model are calculated respectively;
the fitting precision index comprises: goodness of fit, mean square error and mean absolute error;
and selecting a model with high fitting goodness and low mean square error and average absolute error as an optimal load prediction model.
10. A universal cold-hot electrical load prediction system, implemented on the basis of any one of the methods of claims 1-9, characterized in that it comprises:
and a data processing module: the method comprises the steps of acquiring historical data related to load, analyzing the correlation between parameters and the load, reducing the data dimension, cleaning and normalizing the related data, and finally dividing a data set into a training set and a testing set;
load prediction model establishment module: the method comprises the steps of establishing a preliminary model of a plurality of machine learning prediction models, carrying training set data, carrying out learning calculation, and adjusting various learning parameters and super parameters in the model to obtain an optimal model of the plurality of machine learning models suitable for project load prediction; and
model evaluation and preservation module: the method comprises the steps of carrying test set data into a plurality of prediction models output by a load prediction model building module, obtaining a prediction result, comparing the prediction result with a true value, evaluating model precision by using a plurality of indexes, and reserving an optimal prediction model.
CN202310374025.7A 2023-04-10 2023-04-10 Universal cold-hot electric load prediction method and system Pending CN116596112A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310374025.7A CN116596112A (en) 2023-04-10 2023-04-10 Universal cold-hot electric load prediction method and system


Publications (1)

Publication Number Publication Date
CN116596112A true CN116596112A (en) 2023-08-15

Family

ID=87599704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310374025.7A Pending CN116596112A (en) 2023-04-10 2023-04-10 Universal cold-hot electric load prediction method and system

Country Status (1)

Country Link
CN (1) CN116596112A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762534A (en) * 2021-09-10 2021-12-07 广东电网有限责任公司 Building cold and heat load prediction method, device, equipment and storage medium
CN114548509A (en) * 2022-01-18 2022-05-27 湖南大学 Multi-type load joint prediction method and system for multi-energy system
CN115470862A (en) * 2022-10-06 2022-12-13 东南大学 Dynamic self-adaptive load prediction model combination method
CN115587672A (en) * 2022-11-09 2023-01-10 国网湖南省电力有限公司 Distribution transformer load prediction and heavy overload early warning method and system


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117236800A (en) * 2023-11-14 2023-12-15 齐信科技(北京)有限公司 Multi-scene self-adaptive electricity load prediction method and system
CN117236800B (en) * 2023-11-14 2024-02-20 齐信科技(北京)有限公司 Multi-scene self-adaptive electricity load prediction method and system
CN117648383A (en) * 2024-01-30 2024-03-05 中国人民解放军国防科技大学 Heterogeneous database real-time data synchronization method, device, equipment and medium
CN117648383B (en) * 2024-01-30 2024-06-11 中国人民解放军国防科技大学 Heterogeneous database real-time data synchronization method, device, equipment and medium
CN117674302A (en) * 2024-02-01 2024-03-08 浙江省白马湖实验室有限公司 Combined heat and power load scheduling method based on two-stage integrated learning
CN117674302B (en) * 2024-02-01 2024-05-03 浙江省白马湖实验室有限公司 Combined heat and power load scheduling method based on two-stage integrated learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination