CN107590565B - Method and device for constructing building energy consumption prediction model - Google Patents

Method and device for constructing building energy consumption prediction model

Info

Publication number
CN107590565B
CN107590565B CN201710806517.3A
Authority
CN
China
Prior art keywords
influence factor
neural network
network training
main influence
training model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710806517.3A
Other languages
Chinese (zh)
Other versions
CN107590565A (en
Inventor
宋扬
官泽
孔祥旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shougang Automation Information Technology Co Ltd
Original Assignee
Beijing Shougang Automation Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shougang Automation Information Technology Co Ltd filed Critical Beijing Shougang Automation Information Technology Co Ltd
Priority to CN201710806517.3A priority Critical patent/CN107590565B/en
Publication of CN107590565A publication Critical patent/CN107590565A/en
Application granted granted Critical
Publication of CN107590565B publication Critical patent/CN107590565B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention provides a method and a device for constructing a building energy consumption prediction model, wherein the method comprises the following steps: acquiring an energy consumption influence factor set; dividing it into a linear correlation influence factor set and a nonlinear correlation influence factor set; respectively constructing corresponding Bayesian network models; determining a first main influence factor, a first non-main influence factor, a second main influence factor and a second non-main influence factor based on the corresponding Bayesian network models; constructing each BP neural network training model; training each BP neural network training model respectively based on training sample data; respectively carrying out prediction inspection on each trained BP neural network training model based on preset test sample data, and outputting a prediction result value; and if the error of the prediction result value is within a preset error range, outputting an energy consumption prediction model of the linear correlation influence factors and an energy consumption prediction model of the nonlinear correlation influence factors.

Description

Method and device for constructing building energy consumption prediction model
Technical Field
The invention belongs to the technical field of data analysis in the building industry, and particularly relates to a method and a device for constructing a building energy consumption prediction model.
Background
Analysis of the building energy consumption trend has been a research hotspot for many scholars at home and abroad for years. However, regardless of the analysis method adopted, a description of the key influence factors in an uncertain energy consumption system is lacking, so the prediction precision is low when the energy consumption trend is predicted, and the trend cannot be accurately forecast.
Disclosure of Invention
Aiming at the problems in the prior art, the embodiment of the invention provides a method and a device for constructing a building energy consumption prediction model, which are used for solving the technical problem of low prediction precision when the building energy consumption trend is predicted in the prior art.
The embodiment of the invention provides a method for constructing a building energy consumption prediction model, which comprises the following steps:
acquiring building prior data, and acquiring an energy consumption influence factor set based on the prior data;
classifying the energy consumption influence factors, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set;
respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner;
respectively determining a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set, based on the corresponding Bayesian network models;
constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor;
acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data;
respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values;
and judging whether the error of the prediction result value is within a preset error range, and if so, outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor.
In the above scheme, before the first BP neural network training model is constructed based on the preprocessed first main influence factor and first non-main influence factor and the second BP neural network training model is constructed based on the preprocessed second main influence factor and second non-main influence factor, the method comprises the following step:
and performing ashing pretreatment and normalization pretreatment on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor.
In the above solution, the determining a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set, respectively, based on the corresponding Bayesian network model includes:
respectively calculating probability distribution of each node in the directed acyclic graph in corresponding Bayes network models, and respectively acquiring relative weight of each node based on the probability distribution; each node corresponds to each influence factor;
and respectively determining the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor according to the relative weight of each influence factor.
In the above scheme, the variables of the first BP neural network training model and the second BP neural network training model include:
the number of nodes of the input layer is n, the number of nodes of the hidden layer is p, and the number of nodes of the output layer is q;
the learning precision and the maximum number of learning iterations M are specified;
the hidden layer input weight is w_hi and the hidden layer output weight is w_ho; the threshold of each hidden layer node is b_h and the threshold of each output layer node is b_o;
the activation function f is given by formula GDA0002697155530000031;
the error function is
E = (1/2)·Σ_{o=1}^{q} (d_o − yo_o)²
where yo_o is any one component of the output vector of the output layer and d_o is the corresponding component of the expected output vector;
the input vector is x = (x1, x2, …, xn);
the expected output vector is d = (d1, d2, …, dq);
the input vector of the hidden layer is hi = (hi1, hi2, …, hip);
the output vector of the hidden layer is ho = (ho1, ho2, …, hop);
the input vector of the output layer is yi = (yi1, yi2, …, yiq);
the output vector of the output layer is yo = (yo1, yo2, …, yoq).
In the foregoing scheme, the acquiring training sample data includes:
segmenting the data sequences of the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor after normalization preprocessing, respectively, to form n overlapping data segments of length m+1; each data segment is one piece of training sample data.
In the foregoing solution, the training the first BP neural network training model and the second BP neural network training model based on the training sample data respectively includes:
respectively determining the partial derivative δ_o of the error function with respect to each node of the output layer of the first BP neural network training model and of the second BP neural network training model;
respectively determining the partial derivative δ_h of the error function with respect to each node of the hidden layer of the first BP neural network training model and of the second BP neural network training model;
respectively correcting the hidden layer output weight w_ho by using the partial derivative δ_o of each output layer node and ho_h;
respectively correcting the hidden layer input weight w_ih by using the partial derivative δ_h of each hidden layer node and x_i; x_i is any node in the input layer of the corresponding BP neural network model.
In the foregoing solution, the predicting the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting a prediction result value includes:
utilizing, based on the test sample data, the normalization function
y = 2·(x − x_min)/(x_max − x_min) − 1
to perform inverse analysis, and outputting the test sample data after the primary reduction; x_max is the maximum value in the test sample data sequence and x_min is the minimum value in the test sample data sequence;
utilizing the ashing reduction function
x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k)
to secondarily reduce the test sample data after the primary reduction, and outputting the test sample data after the secondary reduction;
and respectively predicting the first BP neural network training model and the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a prediction result value.
In the foregoing solution, the predicting the first BP neural network training model and the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a prediction result value includes:
predicting the first BP neural network training model based on the test sample data after the secondary reduction, and outputting a first prediction result value;
predicting the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a second prediction result value;
respectively processing the first prediction result value and the second prediction result value by using an inverse normalization processing function and a whitening processing function to obtain a first prediction value and a second prediction value;
and fitting the first predicted value and the second predicted value by using a linear regression function to obtain the predicted result value.
The embodiment of the invention also provides a device for constructing the building energy consumption prediction model, which comprises:
the acquisition unit is used for acquiring prior building data and acquiring an energy consumption influence factor set based on the prior data;
the classification unit is used for classifying the energy consumption influence factors and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set;
a first constructing unit, configured to construct a corresponding bayesian network model for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner, respectively;
a determining unit, configured to determine, based on the corresponding Bayesian network models, a first main influence factor and a first non-main influence factor in the set of linear correlation influence factors, and a second main influence factor and a second non-main influence factor in the set of nonlinear correlation influence factors, respectively;
the second construction unit is used for constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor;
the training unit is used for acquiring training sample data and respectively training the first BP neural network training model and the second BP neural network training model based on the training sample data;
the prediction unit is used for respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data and outputting prediction result values;
and the output unit is used for judging whether the error of the prediction result value is within a preset error range or not, and outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor if the error of the prediction result value is within the preset error range.
In the above scheme, the apparatus further comprises a preprocessing unit, configured to perform ashing preprocessing and normalization preprocessing on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor before the second construction unit constructs the first BP neural network training model based on the preprocessed first main influence factor and first non-main influence factor and constructs the second BP neural network training model based on the preprocessed second main influence factor and second non-main influence factor.
The embodiment of the invention provides a method and a device for constructing a building energy consumption prediction model, wherein the method comprises the following steps: acquiring prior data, and acquiring an energy consumption influence factor set based on the prior data; classifying the energy consumption influence factors, and dividing them into a linear correlation influence factor set and a nonlinear correlation influence factor set; respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner; respectively determining a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set, based on the corresponding Bayesian network models; constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and second non-main influence factor; acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data; respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values; judging whether the error of the prediction result value is within a preset error range, and if it is, outputting an energy consumption prediction model of the linear correlation influence factors and an energy consumption prediction model of the nonlinear correlation influence factors. In this way, the Bayesian network models can be used to acquire the main key factors, namely the main influence factors of building energy consumption, from events influenced by multiple building factors; and continuous training and prediction inspection with the BP neural network training models yields an energy consumption prediction model that approximates the fit of the real data, so the prediction precision of the building energy consumption prediction model can be improved and the trend of building energy consumption can be accurately predicted.
Drawings
Fig. 1 is a schematic flow chart of a method for constructing a building energy consumption prediction model according to an embodiment of the present invention;
fig. 2 is an overall schematic diagram of building energy consumption prediction model construction provided in the second embodiment of the present invention.
Detailed Description
In order to solve the technical problem of low prediction precision in the prediction of the building energy consumption trend in the prior art, the invention provides a method for constructing a building energy consumption prediction model, which comprises the following steps: acquiring prior data, and acquiring an energy consumption influence factor set based on the prior data; classifying the energy consumption influence factors, and dividing them into a linear correlation influence factor set and a nonlinear correlation influence factor set; respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner; respectively determining a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set, based on the corresponding Bayesian network models; constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and second non-main influence factor; acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data; respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values; and judging whether the error of the prediction result value is within a preset error range, and if so, outputting the energy consumption prediction model of the linear correlation influence factors and the energy consumption prediction model of the nonlinear correlation influence factors.
The technical solution of the present invention is further described in detail by the accompanying drawings and the specific embodiments.
Example one
The embodiment provides a method for constructing a building energy consumption prediction model, as shown in fig. 1, the method includes:
s101, building prior data are obtained, and an energy consumption influence factor set is obtained based on the prior data;
In this step, building prior data is first obtained, and an energy consumption influence factor set is obtained based on the prior data.
The energy consumption influence factors are then classified. Specifically, whether normalization and ashing processing are adopted is determined according to the acquired energy consumption influence factor set and the data distribution of each factor; a first-order linear regression fitting analysis is then performed between each preprocessed factor and the energy consumption value, and the energy consumption influence factors are divided into a linear correlation influence factor set and a nonlinear correlation influence factor set according to the linear relation, as sketched below.
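The split between linearly and nonlinearly correlated factors can be illustrated with a short sketch; the R² cut-off value and the use of scipy.stats.linregress are assumptions chosen for illustration, since the patent only states that a first-order linear regression fit against the energy consumption value is used:

```python
import numpy as np
from scipy import stats

def split_factors(factors: dict, energy: np.ndarray, r2_threshold: float = 0.8):
    """Split energy-consumption influence factors into linearly correlated and
    nonlinearly correlated sets by the quality of a first-order linear fit.

    factors: {factor_name: 1-D array of factor values, aligned with `energy`}
    r2_threshold: assumed cut-off on R^2; not specified in the patent.
    """
    linear_set, nonlinear_set = {}, {}
    for name, values in factors.items():
        slope, intercept, r_value, p_value, std_err = stats.linregress(values, energy)
        if r_value ** 2 >= r2_threshold:
            linear_set[name] = values      # strong linear relation with energy consumption
        else:
            nonlinear_set[name] = values   # treated as nonlinearly correlated
    return linear_set, nonlinear_set
```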
S102, respectively combining the linear correlation influence factor set and the nonlinear correlation influence factor set to construct a corresponding Bayesian network model;
After the energy consumption influence factors are divided into a linear correlation influence factor set and a nonlinear correlation influence factor set, the association relations among the factors in the linear correlation influence factor set and in the nonlinear correlation influence factor set are respectively obtained based on the prior data, and corresponding Bayesian network models are then respectively constructed according to the relations among the factors.
In principle, a Bayesian network model is formed by a directed acyclic graph. The directed acyclic graph is G(I, E), where I is the set of all nodes and E is the set of directed connecting segments. Let X_i denote the random variable of a node i in the node set I, and denote the random variable set of the node set I as X = {X_i, i ∈ I}; if the joint probability of X can be expressed as shown in formula (1), the directed acyclic graph G is said to form a Bayesian network model.
p(x) = ∏_{i∈I} p(x_i | x_pa(i))    (1)
In formula (1), pa(i) represents the parent node of node i.
Accordingly, for any set of random variables (any nodes), the joint probability distribution can be obtained by multiplying the respective local conditional probability distributions, as shown in formula (2):
p(x1,x2…,xk)=p(xk|x1,x2…,xk-1)…p(x2|x1)p(x1) (2)
then based on the probability distribution, the relative weights of each node in the directed acyclic graph can be calculated.
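Formula (1) can be illustrated with a small, purely hypothetical discrete network; the node names and probability tables below are invented for illustration and are not from the patent, and the relative-weight calculation itself is not reproduced here because the patent does not give its exact formula:

```python
def joint_probability(assignment, parents, cpt):
    """Formula (1): p(x) = prod_i p(x_i | x_pa(i)) over a directed acyclic graph.

    assignment: {node: value} for every node in the graph
    parents:    {node: tuple of parent node names}
    cpt:        {node: {(parent values tuple, value): probability}}
    """
    p = 1.0
    for node, value in assignment.items():
        parent_values = tuple(assignment[pa] for pa in parents[node])
        p *= cpt[node][(parent_values, value)]
    return p

# Toy example with hypothetical factor nodes:
parents = {"area": (), "occupancy": ("area",), "energy": ("area", "occupancy")}
cpt = {
    "area":      {((), "large"): 0.4, ((), "small"): 0.6},
    "occupancy": {(("large",), "high"): 0.7, (("large",), "low"): 0.3,
                  (("small",), "high"): 0.2, (("small",), "low"): 0.8},
    "energy":    {(("large", "high"), "high"): 0.9, (("large", "high"), "low"): 0.1,
                  (("large", "low"), "high"): 0.5, (("large", "low"), "low"): 0.5,
                  (("small", "high"), "high"): 0.6, (("small", "high"), "low"): 0.4,
                  (("small", "low"), "high"): 0.1, (("small", "low"), "low"): 0.9},
}
p = joint_probability({"area": "large", "occupancy": "high", "energy": "high"}, parents, cpt)
# p = 0.4 * 0.7 * 0.9 = 0.252
```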
S103, respectively determining a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set, based on the corresponding Bayesian network models;
In this step, after the relative weight of each node is calculated, because each node corresponds to an influence factor, the relative weight of each influence factor is correspondingly obtained, and the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor are then respectively determined according to the relative weight of each influence factor. The first main influence factor is the influence factor with the maximum relative weight in the linear correlation influence factor set; accordingly, the other influence factors in that set are the first non-main influence factors. The second main influence factor is the influence factor with the maximum relative weight in the nonlinear correlation influence factor set; correspondingly, the other influence factors in that set are the second non-main influence factors.
S104, constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor;
After the first main influence factor and the first non-main influence factor in the linear correlation influence factor set and the second main influence factor and the second non-main influence factor in the nonlinear correlation influence factor set are determined, ashing preprocessing and normalization preprocessing are performed on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor.
Specifically, taking the first main influence factor as an example, assume that the original sequence of the first main influence factor is:
X^(0) = {x^(0)(1), x^(0)(2), …, x^(0)(n)}
The sequence generated by the first-order accumulation is:
X^(1) = {x^(1)(1), x^(1)(2), …, x^(1)(n)}
wherein
x^(1)(k) = Σ_{i=1}^{k} x^(0)(i), k ≥ 1 and k ∈ N, x^(1)(0) = 0.
Let Z^(1) be the adjacent-mean sequence of X^(1); the following sequence is then generated:
Z^(1) = {z^(1)(2), z^(1)(3), …, z^(1)(n)},
z^(1)(k) = 0.5·(x^(1)(k) + x^(1)(k−1)).
The grey differential equation of the ashing processing model GM(1,1) is then:
x^(0)(k) + a·z^(1)(k) = b
Denote
û = (a, b)^T
Then the least-squares estimate of the parameters of the grey differential equation satisfies
û = (B^T·B)^(−1)·B^T·Y
wherein
B = [−z^(1)(2), 1; −z^(1)(3), 1; …; −z^(1)(n), 1],
Y = (x^(0)(2), x^(0)(3), …, x^(0)(n))^T.
Then
dx^(1)/dt + a·x^(1) = b
can be called the whitening equation of GM(1,1).
In summary, the time response sequence of the grey differential equation x^(0)(k) + a·z^(1)(k) = b of GM(1,1) is:
x̂^(1)(k+1) = (x^(0)(1) − b/a)·e^(−a·k) + b/a.
Then the reduction equation (whitening) after ashing, also known as the ashing reduction function, is
x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k).
This makes it possible to perform ashing processing on the original sequence.
Then, the data after ashing processing is normalized. Specifically, the formula
y = 2·(x − x_min)/(x_max − x_min) − 1
is used to normalize the input data into the interval [−1, 1], where, at this time, x_max is the maximum value in the data sequence of the first main influence factor, x_min is the minimum value in the data sequence of the first main influence factor, and y is the data obtained after preprocessing; the normalized data sequence of the first main influence factor is thus obtained as the input data.
Likewise, the first non-main influence factor, the second main influence factor and the second non-main influence factor may be preprocessed in the same way.
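A minimal NumPy sketch of this preprocessing, combining the first-order accumulation and least-squares parameter estimation of GM(1,1) with the [−1, 1] normalization; the formulas follow the standard GM(1,1) model, since the patent's formula images are not reproduced in this text, and the function names are assumptions:

```python
import numpy as np

def gm11_ashing(x0: np.ndarray):
    """Grey (ashing) processing of an original sequence x0 with a GM(1,1) model.
    Returns the estimated parameters (a, b) and the restored sequence x0_hat."""
    n = len(x0)
    x1 = np.cumsum(x0)                           # first-order accumulated sequence X^(1)
    z1 = 0.5 * (x1[1:] + x1[:-1])                # adjacent-mean sequence Z^(1)
    B = np.column_stack((-z1, np.ones(n - 1)))   # from x0(k) + a*z1(k) = b
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # least-squares estimate of (a, b)
    k = np.arange(1, n)
    x1_hat = np.concatenate(([x0[0]],
                             (x0[0] - b / a) * np.exp(-a * k) + b / a))
    x0_hat = np.empty(n)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                 # whitening reduction: x0_hat(k+1) = x1_hat(k+1) - x1_hat(k)
    return a, b, x0_hat

def normalize(x: np.ndarray):
    """Map a data sequence into [-1, 1]; assumed form of the normalization formula."""
    x_min, x_max = float(x.min()), float(x.max())
    y = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    return y, x_min, x_max
```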
Then constructing a first BP neural network training model based on the preprocessed first main influence factor and the preprocessed first non-main influence factor; and constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor.
Here, the first BP neural network training model and the second BP neural network training model have the same structure, and both include: an input layer, a hidden layer, and an output layer.
The variables of the first BP neural network training model and the second BP neural network training model comprise:
the number of nodes of the input layer is n, the number of nodes of the hidden layer is p, and the number of nodes of the output layer is q;
the learning precision and the maximum number of learning iterations M are specified;
the hidden layer input weight is w_hi and the hidden layer output weight is w_ho; the threshold of each hidden layer node is b_h and the threshold of each output layer node is b_o;
the activation function f is given by formula GDA0002697155530000102;
the error function is
E = (1/2)·Σ_{o=1}^{q} (d_o − yo_o)²
where yo_o is any one component of the output vector of the output layer and d_o is the corresponding component of the expected output vector;
the input vector is x = (x1, x2, …, xn);
the expected output vector is d = (d1, d2, …, dq);
the input vector of the hidden layer is hi = (hi1, hi2, …, hip);
the output vector of the hidden layer is ho = (ho1, ho2, …, hop);
the input vector of the output layer is yi = (yi1, yi2, …, yiq);
the output vector of the output layer is yo = (yo1, yo2, …, yoq).
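The forward pass of the n–p–q network defined above can be sketched as follows; the sigmoid activation and the random weight initialization are assumptions, since the patent gives the activation function only as a formula image:

```python
import numpy as np

class BPNetwork:
    """Three-layer BP network: n input nodes, p hidden nodes, q output nodes."""

    def __init__(self, n: int, p: int, q: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w_ih = rng.uniform(-0.5, 0.5, (p, n))   # hidden layer input weights
        self.w_ho = rng.uniform(-0.5, 0.5, (q, p))   # hidden layer output weights
        self.b_h = np.zeros(p)                       # hidden node thresholds
        self.b_o = np.zeros(q)                       # output node thresholds

    @staticmethod
    def f(x):
        # assumed sigmoid activation; the patent gives the function only as an image
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x: np.ndarray):
        hi = self.w_ih @ x - self.b_h    # hidden layer input vector
        ho = self.f(hi)                  # hidden layer output vector
        yi = self.w_ho @ ho - self.b_o   # output layer input vector
        yo = self.f(yi)                  # output layer output vector
        return hi, ho, yi, yo
```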
S105, acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data;
in this step, after the first BP neural network training model and the second BP neural network training model are constructed, training sample data needs to be acquired, and based on the training sample data, the first BP neural network training model and the second BP neural network training model are trained respectively.
Specifically, the data sequences of the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor after normalization preprocessing are respectively segmented to form n overlapping data segments of length m+1; each data segment is one piece of training sample data. The input is the values at the first m moments and the output is the value at the (m+1)-th moment; the window is advanced step by step to construct a sample matrix of overlapping data segments (the sample matrix has n rows and m+1 columns), as sketched below.
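A short sketch of this sliding-window sample construction (function and variable names are assumptions):

```python
import numpy as np

def build_samples(sequence: np.ndarray, m: int):
    """Slide a window of length m+1 over a preprocessed factor sequence.
    Each row holds m input values followed by the (m+1)-th value as the target,
    giving an n x (m+1) sample matrix of overlapping segments."""
    windows = [sequence[i:i + m + 1] for i in range(len(sequence) - m)]
    return np.asarray(windows)

# e.g. build_samples(np.arange(10.0), m=3) -> 7 overlapping rows of 4 values each
```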
The rows of the segmented sample matrix are fed into each training model for training, and forward output calculation and back propagation calculation are performed, wherein the back propagation calculation is used for error correction; the back propagation calculation comprises:
respectively determining the partial derivative δ_o of the error function with respect to each node of the output layer of the first BP neural network training model and of the second BP neural network training model; respectively determining the partial derivative δ_h of the error function with respect to each node of the hidden layer of the first BP neural network training model and of the second BP neural network training model; respectively correcting the hidden layer output weight w_ho by using the partial derivative δ_o of each output layer node and the output value ho_h of each hidden layer node, h being any value from 0 to p; respectively correcting the hidden layer input weight w_ih by using the partial derivative δ_h of each hidden layer node and x_i; x_i is any node in the input layer of the corresponding BP neural network model and corresponds to an influence factor in the corresponding Bayesian network model.
And after the correction, the first BP neural network training model and the second BP neural network training model are saved.
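Building on the BPNetwork sketch above, one back-propagation correction step for a single sample could look as follows; the learning rate eta and the use of the sigmoid derivative yo·(1 − yo) are assumptions, since the patent does not spell out the update formulas:

```python
import numpy as np

def train_step(net, x: np.ndarray, d: np.ndarray, eta: float = 0.1):
    """One forward/backward pass for the BPNetwork sketched above.
    Returns the squared error of the sample."""
    hi, ho, yi, yo = net.forward(x)
    # partial derivative delta_o of the error function w.r.t. each output layer node
    delta_o = (d - yo) * yo * (1.0 - yo)
    # partial derivative delta_h of the error function w.r.t. each hidden layer node
    delta_h = (net.w_ho.T @ delta_o) * ho * (1.0 - ho)
    # correct hidden layer output weights w_ho with delta_o and ho_h
    net.w_ho += eta * np.outer(delta_o, ho)
    net.b_o -= eta * delta_o
    # correct hidden layer input weights w_ih with delta_h and x_i
    net.w_ih += eta * np.outer(delta_h, x)
    net.b_h -= eta * delta_h
    return 0.5 * np.sum((d - yo) ** 2)
```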
S106, predicting the trained first BP neural network training model and second BP neural network training model respectively based on preset test sample data, and outputting a prediction result value; and judging whether the error of the prediction result value is within a preset error range, and if so, outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor.
In this step, the preset test sample data is obtained and used as input data for the first BP neural network training model and the second BP neural network training model, and the first BP neural network training model and the second BP neural network training model perform inverse normalization processing and whitening processing on the test sample data to output the restored test sample data; the restored test sample data is simply the data without the preprocessing.
Specifically, based on the test sample data, the normalization function
y = 2·(x − x_min)/(x_max − x_min) − 1
is used to perform inverse normalization, namely inverse analysis, and the test sample data after the primary reduction is output; at this time, x_max is the maximum value in the test sample data sequence and x_min is the minimum value in the test sample data sequence;
the ashing reduction function
x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k)
is used to secondarily reduce the test sample data after the primary reduction, and the test sample data after the secondary reduction is output;
then, based on the test sample data after the secondary reduction, predicting the first BP neural network training model and the second BP neural network training model respectively, and outputting a prediction result value, including:
predicting the first BP neural network training model based on the test sample data after the secondary reduction, and outputting a first prediction result value;
predicting the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a second prediction result value;
respectively processing the first prediction result value and the second prediction result value by using an inverse normalization processing function and a whitening processing function to obtain a first prediction value and a second prediction value;
The first predicted value and the second predicted value are fitted by using a linear regression function to obtain a fitting result value, namely the prediction result value. Whether the error of the prediction result value is within a preset error range is then judged with the actual building energy consumption value as a reference; if the error of the prediction result value is within the preset error range, or the accuracy of the prediction result value reaches at least 90%, the fitting result value is output as the energy consumption influence predicted value. Meanwhile, the energy consumption prediction model of the linear correlation influence factors, the energy consumption prediction model of the nonlinear correlation influence factors, the first main influence factor and the second main influence factor are output.
If the error of the prediction result value is not within the preset error range, the learning precision, the number of learning iterations, the hidden layer input weights and the hidden layer output weights of the first BP neural network training model and the second BP neural network training model are reset with the actual building energy consumption value as a reference, forming a new first BP neural network training model and a new second BP neural network training model; the first BP neural network training model and the second BP neural network training model are then trained and prediction-checked again in the same way to obtain a new prediction result value, until the error of the prediction result value is within the preset error range, at which point the final energy consumption prediction model of the linear correlation influence factors, the final energy consumption prediction model of the nonlinear correlation influence factors, the first main influence factor and the second main influence factor are output.
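A minimal sketch of this final step: both trained models predict, their outputs are denormalized, the two predictions are fused by a fitted linear regression, and the error against the actual energy consumption is checked. The grey (ashing) restoration of the predictions, which would use the ashing reduction function from the preprocessing sketch, is omitted for brevity; the 10% relative-error threshold is an assumption derived from the 90% accuracy figure above:

```python
import numpy as np

def denormalize(y: np.ndarray, x_min: float, x_max: float):
    """Inverse of the [-1, 1] normalization used in preprocessing."""
    return x_min + (y + 1.0) * (x_max - x_min) / 2.0

def fuse_and_check(pred1, pred2, actual, x_min, x_max, max_rel_error=0.10):
    """Denormalize both model outputs, fit actual = c0 + c1*p1 + c2*p2 by least
    squares, and report whether the fused prediction is within the error range."""
    p1 = denormalize(np.asarray(pred1), x_min, x_max)
    p2 = denormalize(np.asarray(pred2), x_min, x_max)
    actual = np.asarray(actual)
    A = np.column_stack((np.ones_like(p1), p1, p2))
    coeffs, *_ = np.linalg.lstsq(A, actual, rcond=None)
    fused = A @ coeffs                                   # fitted (fused) prediction result value
    rel_error = np.abs(fused - actual) / np.abs(actual)
    return fused, bool(np.all(rel_error <= max_rel_error))
```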
Example two
Corresponding to the first embodiment, the present embodiment provides an apparatus for constructing a building energy consumption prediction model, as shown in fig. 2, the apparatus includes: the device comprises an acquisition unit 21, a classification unit 22, a first construction unit 23, a determination unit 24, a second construction unit 25, a training unit 26, a prediction unit 27, an output unit 28 and a preprocessing unit 29; wherein,
First, the obtaining unit 21 is configured to obtain building prior data, and to obtain a set of energy consumption influence factors based on the prior data.
The classifying unit 22 is configured to classify the energy consumption influencing factors, specifically, determine whether to adopt normalization and ashing processing according to the acquired energy consumption influencing factor set and data distribution of each factor, perform first-order linear regression fitting analysis on each preprocessed factor and the energy consumption value, and divide the energy consumption influencing factors into a linear correlation influencing factor set and a nonlinear correlation influencing factor set through a linear relationship.
After the classifying unit 22 classifies the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set, the first constructing unit 23 is configured to respectively obtain association relations between the factors in the linear correlation influence factor set and the nonlinear correlation influence factor set based on the prior data, and then respectively construct corresponding bayesian network models according to the relations between the factors.
Theoretically, a Bayesian network model is composed of a directed acyclic graph. The directed acyclic graph is G(I, E), where I is the set of all nodes and E is the set of directed connecting segments. Let X_i denote the random variable of a node i in the node set I, and denote the random variable set of the node set I as X = {X_i, i ∈ I}; if the joint probability of X can be expressed as shown in formula (1), the directed acyclic graph G is said to form a Bayesian network model.
p(x) = ∏_{i∈I} p(x_i | x_pa(i))    (1)
In formula (1), pa(i) represents the parent node of node i.
Accordingly, for any set of random variables (any nodes), the joint probability distribution can be obtained by multiplying the respective local conditional probability distributions, as shown in formula (2):
p(x1,x2…,xk)=p(xk|x1,x2…,xk-1)…p(x2|x1)p(x1) (2)
then based on the probability distribution, the relative weights of each node in the directed acyclic graph can be calculated.
Accordingly, after the relative weight of each node is calculated, because each node corresponds to an influence factor, the determining unit 24 correspondingly obtains the relative weight of each influence factor, and then determines the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor according to the relative weight of each influence factor. The first main influence factor is the influence factor with the maximum relative weight in the linear correlation influence factor set; accordingly, the other influence factors in that set are the first non-main influence factors. The second main influence factor is the influence factor with the maximum relative weight in the nonlinear correlation influence factor set; correspondingly, the other influence factors in that set are the second non-main influence factors.
Then the preprocessing unit 29 is used to perform ashing preprocessing and normalization preprocessing on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor.
Specifically, taking the first main influence factor as an example, assume that the original sequence of the first main influence factor is:
X^(0) = {x^(0)(1), x^(0)(2), …, x^(0)(n)}
The sequence generated by the first-order accumulation is:
X^(1) = {x^(1)(1), x^(1)(2), …, x^(1)(n)}
wherein
x^(1)(k) = Σ_{i=1}^{k} x^(0)(i), k ≥ 1 and k ∈ N, x^(1)(0) = 0.
Let Z^(1) be the adjacent-mean sequence of X^(1); the following sequence is then generated:
Z^(1) = {z^(1)(2), z^(1)(3), …, z^(1)(n)},
z^(1)(k) = 0.5·(x^(1)(k) + x^(1)(k−1)).
The grey differential equation of the ashing processing model GM(1,1) is then:
x^(0)(k) + a·z^(1)(k) = b
Denote
û = (a, b)^T
Then the least-squares estimate of the parameters of the grey differential equation satisfies
û = (B^T·B)^(−1)·B^T·Y
wherein
B = [−z^(1)(2), 1; −z^(1)(3), 1; …; −z^(1)(n), 1],
Y = (x^(0)(2), x^(0)(3), …, x^(0)(n))^T.
Then
dx^(1)/dt + a·x^(1) = b
can be called the whitening equation of GM(1,1).
In summary, the time response sequence of the grey differential equation x^(0)(k) + a·z^(1)(k) = b of GM(1,1) is:
x̂^(1)(k+1) = (x^(0)(1) − b/a)·e^(−a·k) + b/a.
Then the reduction equation (whitening) after ashing is
x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k).
This makes it possible to perform ashing processing on the original sequence.
Then, the data after ashing processing is normalized. Specifically, the formula
y = 2·(x − x_min)/(x_max − x_min) − 1
is used to normalize the input data into the interval [−1, 1], where x_max is the maximum value in the data sequence of the first main influence factor, x_min is the minimum value in the data sequence of the first main influence factor, and y is the data obtained after preprocessing; the normalized data sequence of the first main influence factor is thus obtained as the input data.
Likewise, the preprocessing unit 29 may preprocess the first non-main influence factor, the second main influence factor and the second non-main influence factor in the same way.
Furthermore, the second constructing unit 25 may construct a first BP neural network training model based on the first major influencing factor and the first non-major influencing factor; and constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor.
Here, the first BP neural network training model and the second BP neural network training model have the same structure, and both include: an input layer, a hidden layer, and an output layer.
The variables of the first BP neural network training model and the second BP neural network training model comprise:
the number of nodes of the input layer is n, the number of nodes of the hidden layer is p, and the number of nodes of the output layer is q;
the learning precision and the maximum number of learning iterations M are specified;
the hidden layer input weight is w_hi and the hidden layer output weight is w_ho; the threshold of each hidden layer node is b_h and the threshold of each output layer node is b_o;
the activation function f is given by formula GDA0002697155530000161;
the error function is
E = (1/2)·Σ_{o=1}^{q} (d_o − yo_o)²
where yo_o is any one component of the output vector of the output layer and d_o is the corresponding component of the expected output vector;
the input vector is x = (x1, x2, …, xn);
the expected output vector is d = (d1, d2, …, dq);
the input vector of the hidden layer is hi = (hi1, hi2, …, hip);
the output vector of the hidden layer is ho = (ho1, ho2, …, hop);
the input vector of the output layer is yi = (yi1, yi2, …, yiq);
the output vector of the output layer is yo = (yo1, yo2, …, yoq).
After the first BP neural network training model and the second BP neural network training model are constructed, the training unit 26 is configured to obtain training sample data, and train the first BP neural network training model and the second BP neural network training model based on the training sample data, respectively.
Specifically, the training unit 26 segments the data sequences of the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor after normalization preprocessing, respectively, to form n overlapping data segments of length m+1; each data segment is one piece of training sample data. The input is the values at the first m moments and the output is the value at the (m+1)-th moment; the window is advanced step by step to construct a sample matrix of overlapping data segments (the sample matrix has n rows and m+1 columns).
The rows of the segmented sample matrix are fed into each training model for training, and forward output calculation and back propagation calculation are performed, wherein the back propagation calculation is used for error correction; the back propagation calculation comprises:
respectively determining the partial derivative δ_o of the error function with respect to each node of the output layer of the first BP neural network training model and of the second BP neural network training model; respectively determining the partial derivative δ_h of the error function with respect to each node of the hidden layer of the first BP neural network training model and of the second BP neural network training model; respectively correcting the hidden layer output weight w_ho by using the partial derivative δ_o of each output layer node and the output value ho_h of each hidden layer node, h being any value from 0 to p; respectively correcting the hidden layer input weight w_ih by using the partial derivative δ_h of each hidden layer node and x_i; x_i is any node in the input layer of the corresponding BP neural network model and corresponds to an influence factor in the corresponding Bayesian network model.
And after the correction, the first BP neural network training model and the second BP neural network training model are saved.
The prediction unit 27 is configured to predict the trained first BP neural network training model and second BP neural network training model respectively based on preset test sample data, and output a prediction result value;
In particular, based on the test sample data, the prediction unit 27 utilizes the normalization function
y = 2·(x − x_min)/(x_max − x_min) − 1
to perform inverse normalization, namely inverse analysis, and outputs the test sample data after the primary reduction; x_max is the maximum value in the test sample data sequence and x_min is the minimum value in the test sample data sequence;
it then utilizes the ashing reduction function
x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k)
to secondarily reduce the test sample data after the primary reduction, and outputs the test sample data after the secondary reduction; the restored test sample data is simply the data without the preprocessing.
Then, based on the test sample data after the secondary reduction, predicting the first BP neural network training model and the second BP neural network training model respectively, and outputting a prediction result value, including:
predicting the first BP neural network training model based on the test sample data after the secondary reduction, and outputting a first prediction result value;
predicting the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a second prediction result value;
respectively processing the first prediction result value and the second prediction result value by using an inverse normalization processing function and a whitening processing function to obtain a first prediction value and a second prediction value;
and fitting the first predicted value and the second predicted value by using a linear regression function to obtain a fitting result value, namely a predicted result value.
The output unit 28 is configured to determine whether an error of the predicted result value is within a preset error range, and output the energy consumption prediction model of the linear correlation impact factor and the energy consumption prediction model of the nonlinear correlation impact factor if the error of the predicted result value is within the preset error range.
Specifically, the output unit 28 determines whether the error of the prediction result value is within a preset error range based on the actual building energy consumption value, and outputs the fitting result value as the energy consumption influence prediction value if the error of the prediction result value is within the preset error range or the accuracy of the prediction result value reaches at least 90%. And simultaneously outputting the energy consumption prediction model of the linear correlation influence factor, the energy consumption prediction model of the nonlinear correlation influence factor, the first main influence factor and the second main influence factor.
If the error of the prediction result value is not within the preset error range, the learning precision, the number of learning iterations, the hidden layer input weights and the hidden layer output weights of the first BP neural network training model and the second BP neural network training model are reset with the actual building energy consumption value as a reference, forming a new first BP neural network training model and a new second BP neural network training model; the first BP neural network training model and the second BP neural network training model are then trained and prediction-checked again in the same way to obtain a new prediction result value, until the error of the prediction result value is within the preset error range, at which point the final energy consumption prediction model of the linear correlation influence factors, the final energy consumption prediction model of the nonlinear correlation influence factors, the first main influence factor and the second main influence factor are output.
The method and the device for constructing the building energy consumption prediction model provided by the embodiment of the invention have at least the following beneficial effects:
the embodiment of the invention provides a method and a device for a building energy consumption prediction model, wherein the method comprises the following steps: acquiring prior data, and acquiring an energy consumption influence factor set based on the prior data; classifying the energy consumption influence factors, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set; respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner; respectively determining a first main influence factor, a first non-main influence factor and a second main influence factor in the linear correlation influence factor set and a second non-main influence factor in the non-linear correlation influence factor set based on the corresponding Bayesian network model; constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor; acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data; respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values; judging whether the error of the predicted result value is within a preset error range, and if the error of the predicted result value is within the preset error range, outputting an energy consumption prediction model of the linear correlation influence factor and an energy consumption prediction model of the nonlinear correlation influence factor; therefore, the Bayesian network model can be used for acquiring main key factors from multiple building factor influence events, namely main influence factors of building energy consumption; and continuously training and predicting by using the BP neural network training model to obtain an energy consumption prediction model approximate to the fitting degree of real data, so that the prediction precision of the building energy consumption prediction model can be improved, and the trend of building energy consumption can be accurately predicted.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. that are within the spirit and principle of the present invention should be included in the present invention.

Claims (7)

1. A method of constructing a building energy consumption prediction model, the method comprising:
acquiring building prior data, and acquiring an energy consumption influence factor set based on the prior data;
classifying the energy consumption influence factors, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set;
respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner;
respectively determining a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set, based on the corresponding Bayesian network models;
constructing a first BP neural network training model based on the preprocessed first main influence factor and the preprocessed first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor;
acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data;
respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values;
judging whether the error of the predicted result value is within a preset error range, and if the error of the predicted result value is within the preset error range, outputting an energy consumption prediction model of the linear correlation influence factor and an energy consumption prediction model of the nonlinear correlation influence factor; wherein,
the determining a first main influence factor and a first non-main influence factor in the set of linear correlation influence factors, and a second main influence factor and a second non-main influence factor in the set of nonlinear correlation influence factors, respectively, based on the corresponding Bayesian network model comprises:
respectively calculating probability distribution of each node in the directed acyclic graph in corresponding Bayes network models, and respectively acquiring relative weight of each node based on the probability distribution; each node corresponds to each influence factor;
respectively determining the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor according to the relative weight of each influence factor; wherein,
before the constructing a first BP neural network training model based on the preprocessed first main influence factor and the preprocessed first non-main influence factor and the constructing a second BP neural network training model based on the preprocessed second main influence factor and the preprocessed second non-main influence factor, the method comprises the following step:
and performing ashing pretreatment and normalization pretreatment on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor.
2. The method of claim 1, wherein the variables of the first and second BP neural network training models comprise:
the number of nodes of the input layer is n, the number of nodes of the hidden layer is p, and the number of nodes of the output layer is q;
the learning precision and the maximum number of learning iterations M are specified;
the hidden layer input weight is w_hi and the hidden layer output weight is w_ho; the threshold of each hidden layer node is b_h and the threshold of each output layer node is b_o;
the activation function f is given by formula FDA0002697155520000021;
the error function is
E = (1/2)·Σ_{o=1}^{q} (d_o − yo_o)²
wherein yo_o is any one component of the output vector of the output layer and d_o is the corresponding component of the expected output vector;
the input vector is x = (x1, x2, …, xn);
the expected output vector is d = (d1, d2, …, dq);
the input vector of the hidden layer is hi = (hi1, hi2, …, hip);
the output vector of the hidden layer is ho = (ho1, ho2, …, hop);
the input vector of the output layer is yi = (yi1, yi2, …, yiq);
the output vector of the output layer is yo = (yo1, yo2, …, yoq).
3. The method of claim 1, wherein said obtaining training sample data comprises:
segmenting the data sequences of the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor after normalization preprocessing, respectively, to form n overlapping data segments of length m+1; each data segment is one piece of training sample data.
4. The method of claim 2, wherein the training the first and second BP neural network training models, respectively, based on the training sample data comprises:
respectively determining the partial derivative δ_o of the error function with respect to each node of the output layer of the first BP neural network training model and of the second BP neural network training model;
respectively determining the partial derivative δ_h of the error function with respect to each node of the hidden layer of the first BP neural network training model and of the second BP neural network training model;
respectively using the partial derivative δ_o of each node of the output layer and the output value ho_h of each node of the hidden layer to correct the output weight w_ho of the hidden layer;
respectively using the partial derivative δ_h of each node of the hidden layer and x_i to correct the input weight w_ih of the hidden layer; the x_i is any node of the input layer of the corresponding BP neural network model.
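The following sketch shows one training step consistent with claim 4's weight corrections, under the same assumed sigmoid activation and squared error as in the earlier sketch; the learning rate lr and the threshold updates are additional assumptions.

```python
import numpy as np

def backprop_step(x, d, w_ih, b_h, w_ho, b_o, lr=0.1):
    """One weight-correction step in the claim's notation."""
    hi = w_ih @ x - b_h
    ho = 1.0 / (1.0 + np.exp(-hi))
    yi = w_ho @ ho - b_o
    yo = 1.0 / (1.0 + np.exp(-yi))

    delta_o = (d - yo) * yo * (1.0 - yo)             # error derivative at each output node
    delta_h = (w_ho.T @ delta_o) * ho * (1.0 - ho)   # error derivative at each hidden node

    w_ho += lr * np.outer(delta_o, ho)   # correct w_ho with delta_o and ho_h
    w_ih += lr * np.outer(delta_h, x)    # correct w_ih with delta_h and x_i
    b_o -= lr * delta_o                  # assumed threshold updates
    b_h -= lr * delta_h
    return w_ih, b_h, w_ho, b_o
```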
5. The method of claim 1, wherein the performing prediction testing on the trained first and second BP neural network training models based on preset test sample data and outputting a prediction result value comprises:
performing inverse analysis on the test sample data with a normalization function (formula given as image FDA0002697155520000031), and outputting the once-restored test sample data y; the x_max is the maximum value in the test sample data sequence, and x_min is the minimum value in the test sample data sequence;
using a grey restoration function (formula given as image FDA0002697155520000032) to restore the once-restored test sample data a second time, and outputting the twice-restored test sample data;
and performing prediction with the first BP neural network training model and the second BP neural network training model respectively based on the twice-restored test sample data, and outputting a prediction result value.
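A sketch of the two restoration steps in claim 5: inverse min-max normalization (first restoration) followed by an inverse accumulated generating operation as the grey restoration (second restoration); both inverse forms are assumptions matching the preprocessing sketched earlier, since the claim's formulas appear only as images.

```python
import numpy as np

def denormalize(y, x_min, x_max):
    """First restoration: inverse of the assumed min-max normalization."""
    return np.asarray(y, dtype=float) * (x_max - x_min) + x_min

def grey_restore(x1):
    """Second restoration: inverse accumulated generating operation,
    x0[k] = x1[k] - x1[k-1], assumed to undo the grey preprocessing."""
    x1 = np.asarray(x1, dtype=float)
    return np.concatenate(([x1[0]], np.diff(x1)))
```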
6. The method of claim 5, wherein the performing prediction with the first and second BP neural network training models respectively based on the twice-restored test sample data and outputting a prediction result value comprises:
performing prediction with the first BP neural network training model based on the twice-restored test sample data, and outputting a first prediction result value;
performing prediction with the second BP neural network training model based on the twice-restored test sample data, and outputting a second prediction result value;
processing the first prediction result value and the second prediction result value respectively with an inverse normalization function and a whitening function to obtain a first predicted value and a second predicted value;
and fitting the first predicted value and the second predicted value with a linear regression function to obtain the prediction result value.
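For claim 6's final fusion step, the sketch below fits the two predicted value sequences to a target sequence with ordinary least squares, i.e. one possible "linear regression function"; the use of a held-out target sequence and the intercept term are assumptions.

```python
import numpy as np

def fuse_predictions(pred1, pred2, target):
    """Fit the fused prediction as a linear combination of the two predicted values."""
    pred1, pred2 = np.asarray(pred1, dtype=float), np.asarray(pred2, dtype=float)
    target = np.asarray(target, dtype=float)
    A = np.column_stack([pred1, pred2, np.ones(len(pred1))])  # with intercept
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return A @ coef, coef

# Hypothetical aligned sequences of first/second predicted values and targets
fused, coef = fuse_predictions([1.0, 1.2, 1.1], [0.9, 1.3, 1.0], [0.95, 1.25, 1.05])
```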
7. An apparatus for constructing a model for predicting energy consumption of a building, the apparatus comprising:
the acquisition unit is used for acquiring prior building data and acquiring an energy consumption influence factor set based on the prior data;
the classification unit is used for classifying the energy consumption influence factors and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set;
a first construction unit, configured to respectively construct a corresponding Bayesian network model for the linear correlation influence factor set and the nonlinear correlation influence factor set;
a determining unit, configured to determine, based on the corresponding Bayesian network models, a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set;
the second construction unit is used for constructing a first BP neural network training model based on the preprocessed first main influence factor and the preprocessed first non-main influence factor, and constructing a second BP neural network training model based on the preprocessed second main influence factor and the preprocessed second non-main influence factor;
the training unit is used for acquiring training sample data and respectively training the first BP neural network training model and the second BP neural network training model based on the training sample data;
the prediction unit is used for respectively performing prediction testing on the trained first BP neural network training model and the trained second BP neural network training model based on preset test sample data and outputting prediction result values;
the output unit is used for judging whether the error of the prediction result value is within a preset error range or not, and outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor if the error of the prediction result value is within the preset error range; wherein,
the determining unit is specifically configured to:
respectively calculating the probability distribution of each node in the directed acyclic graph of the corresponding Bayesian network model, and respectively obtaining the relative weight of each node based on the probability distribution; wherein each node corresponds to one influence factor;
respectively determining the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor according to the relative weight of each influence factor; wherein,
the device further comprises: a preprocessing unit, configured to perform grey preprocessing and normalization preprocessing on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor before the second construction unit constructs the first BP neural network training model based on the preprocessed first main influence factor and the preprocessed first non-main influence factor and constructs the second BP neural network training model based on the preprocessed second main influence factor and the preprocessed second non-main influence factor.
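A structural sketch of the apparatus in claim 7, with each claimed unit represented as one callable stage in a pipeline; the function signatures and the simple sequential wiring are assumptions made only to show how the units relate.

```python
class EnergyModelBuilder:
    """Structural sketch of the claimed apparatus: each stage stands in for one
    unit (acquisition, classification, construction, determination, training,
    prediction, output); the sequential wiring is an assumption."""

    def __init__(self, *stages):
        self.stages = stages

    def run(self, prior_building_data):
        state = prior_building_data
        for stage in self.stages:   # each unit consumes the previous unit's result
            state = stage(state)
        return state                # the two energy-consumption prediction models
```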
CN201710806517.3A 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model Active CN107590565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710806517.3A CN107590565B (en) 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710806517.3A CN107590565B (en) 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model

Publications (2)

Publication Number Publication Date
CN107590565A CN107590565A (en) 2018-01-16
CN107590565B true CN107590565B (en) 2021-01-05

Family

ID=61051121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710806517.3A Active CN107590565B (en) 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model

Country Status (1)

Country Link
CN (1) CN107590565B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232476B (en) * 2018-05-10 2024-04-16 创新先进技术有限公司 Method and device for updating test sample set
CN108764568B (en) * 2018-05-28 2020-10-23 哈尔滨工业大学 Data prediction model tuning method and device based on LSTM network
CN109063903B (en) * 2018-07-19 2021-04-09 山东建筑大学 Building energy consumption prediction method and system based on deep reinforcement learning
CN109325631A (en) * 2018-10-15 2019-02-12 华中科技大学 Electric car charging load forecasting method and system based on data mining
CN111062876B (en) * 2018-10-17 2023-08-08 北京地平线机器人技术研发有限公司 Method and device for correcting model training and image correction and electronic equipment
CN111179108A (en) * 2018-11-12 2020-05-19 珠海格力电器股份有限公司 Method and device for predicting power consumption
CN109685252B (en) * 2018-11-30 2023-04-07 西安工程大学 Building energy consumption prediction method based on cyclic neural network and multi-task learning model
CN109726936B (en) * 2019-01-24 2020-06-30 辽宁工业大学 Monitoring method for deviation correction of inclined masonry ancient tower
CN110032780A (en) * 2019-02-01 2019-07-19 浙江中控软件技术有限公司 Commercial plant energy consumption benchmark value calculating method and system based on machine learning
CN112183166B (en) * 2019-07-04 2024-07-02 北京地平线机器人技术研发有限公司 Method and device for determining training samples and electronic equipment
CN111160598A (en) * 2019-11-13 2020-05-15 浙江中控技术股份有限公司 Energy prediction and energy consumption control method and system based on dynamic energy consumption benchmark
CN111221880B (en) * 2020-04-23 2021-01-22 北京瑞莱智慧科技有限公司 Feature combination method, device, medium, and electronic apparatus
CN111859500B (en) * 2020-06-24 2023-10-10 广州大学 Prediction method and device for bridge deck elevation of rigid frame bridge
CN112230991A (en) * 2020-10-26 2021-01-15 重庆博迪盛软件工程有限公司 Software portability evaluation method based on BP neural network
CN112462708B (en) * 2020-11-19 2023-04-11 南京河海南自水电自动化有限公司 Remote diagnosis and optimized scheduling method and system for pump station
CN113552855B (en) * 2021-07-23 2023-06-06 重庆英科铸数网络科技有限公司 Industrial equipment dynamic threshold setting method and device, electronic equipment and storage medium
CN116204566B (en) * 2023-04-28 2023-07-14 深圳市欣冠精密技术有限公司 Digital factory monitoring big data processing system
CN117077854B (en) * 2023-08-15 2024-04-16 广州视声智能科技有限公司 Building energy consumption monitoring method and system based on sensor network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104269849A (en) * 2014-10-17 2015-01-07 国家电网公司 Energy managing method and system based on building photovoltaic micro-grid
CN104597842A (en) * 2015-02-02 2015-05-06 武汉理工大学 BP neutral network heavy machine tool thermal error modeling method optimized through genetic algorithm
CN104765916A (en) * 2015-03-31 2015-07-08 西南交通大学 Dynamics performance parameter optimizing method of high-speed train
CN106951611A (en) * 2017-03-07 2017-07-14 哈尔滨工业大学 A kind of severe cold area energy-saving design in construction optimization method based on user's behavior

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331737A (en) * 2014-11-21 2015-02-04 国家电网公司 Office building load prediction method based on particle swarm neural network
CN104834808A (en) * 2015-04-07 2015-08-12 青岛科技大学 Back propagation (BP) neural network based method for predicting service life of rubber absorber
CN105373830A (en) * 2015-12-11 2016-03-02 中国科学院上海高等研究院 Prediction method and system for error back propagation neural network and server
CN105631539A (en) * 2015-12-25 2016-06-01 上海建坤信息技术有限责任公司 Intelligent building energy consumption prediction method based on support vector machine
CN106161138A (en) * 2016-06-17 2016-11-23 贵州电网有限责任公司贵阳供电局 A kind of intelligence automatic gauge method and device
CN106874581B (en) * 2016-12-30 2021-03-30 浙江大学 Building air conditioner energy consumption prediction method based on BP neural network model
CN106991504B (en) * 2017-05-09 2021-07-16 南京工业大学 Building energy consumption prediction method and system based on subentry measurement time sequence and building

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104269849A (en) * 2014-10-17 2015-01-07 国家电网公司 Energy managing method and system based on building photovoltaic micro-grid
CN104597842A (en) * 2015-02-02 2015-05-06 武汉理工大学 BP neutral network heavy machine tool thermal error modeling method optimized through genetic algorithm
CN104765916A (en) * 2015-03-31 2015-07-08 西南交通大学 Dynamics performance parameter optimizing method of high-speed train
CN106951611A (en) * 2017-03-07 2017-07-14 哈尔滨工业大学 A kind of severe cold area energy-saving design in construction optimization method based on user's behavior

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Environment Modeling and Control Strategy of Solar Greenhouse; Shi Lirong; China Master's Theses Full-text Database, Agricultural Science and Technology; 2015-12-15 (No. 11); pp. 1-49 *

Also Published As

Publication number Publication date
CN107590565A (en) 2018-01-16

Similar Documents

Publication Publication Date Title
CN107590565B (en) Method and device for constructing building energy consumption prediction model
Li et al. Deep learning for high-dimensional reliability analysis
CN106847302B (en) Single-channel mixed voice time domain separation method based on convolutional neural network
CN111785014B (en) Road network traffic data restoration method based on DTW-RGCN
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN107977710A (en) Electricity consumption abnormal data detection method and device
CN107832789B (en) Feature weighting K nearest neighbor fault diagnosis method based on average influence value data transformation
CN112101487B (en) Compression method and device for fine-grained recognition model
CN110472671A (en) Based on multistage oil-immersed transformer fault data preprocess method
CN112347531B (en) Brittle marble Dan Sanwei crack propagation path prediction method and system
CN107688863A (en) The short-term wind speed high accuracy combination forecasting method that adaptive iteration is strengthened
Jahani et al. Remaining useful life prediction based on degradation signals using monotonic B-splines with infinite support
Ibragimovich et al. Effective recognition of pollen grains based on parametric adaptation of the image identification model
Teferra et al. Mapping model validation metrics to subject matter expert scores for model adequacy assessment
CN115587666A (en) Load prediction method and system based on seasonal trend decomposition and hybrid neural network
Tang et al. Bayesian augmented Lagrangian algorithm for system identification
CN110309904A (en) A kind of neural network compression method
Li et al. Label free uncertainty quantification
Singh et al. Modified mean square error algorithm with reduced cost of training and simulation time for character recognition in backpropagation neural network
de Oliveira An application of neural networks trained with kalman filter variants (ekf and ukf) to heteroscedastic time series forecasting
CN112232570A (en) Forward active total electric quantity prediction method and device and readable storage medium
KR20190134308A (en) Data augmentation method and apparatus using convolution neural network
Marushko et al. Ensembles of neural networks for forecasting of time series of spacecraft telemetry
CN111160419B (en) Deep learning-based electronic transformer data classification prediction method and device
Xie Time series prediction based on recurrent LS-SVM with mixed kernel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant