CN111507505A - Method for constructing a reservoir daily inflow prediction model - Google Patents
- Publication number: CN111507505A
- Application number: CN202010198509.7A
- Authority: CN (China)
- Prior art keywords: model, prediction, reservoir, constructing, data
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06N20/00: Machine learning
- G06N3/044: Neural network architectures; recurrent networks, e.g. Hopfield networks
- G06N3/045: Neural network architectures; combinations of networks
- G06N3/08: Neural networks; learning methods
- G06Q50/06: ICT specially adapted for specific business sectors; energy or water supply
Abstract
The invention discloses a method for constructing a reservoir daily inflow prediction model. A basic learning machine is built as follows: the input sequence is smoothed by a preprocessing step, decomposed into component series by a time-series decomposition method, a prediction model is established for each component, and the component predictions are reconstructed into a combined forecast. Several basic learning machines obtained in this way are then integrated, and the integrated model predicts the daily inflow of the reservoir. The invention mainly addresses two shortcomings of existing daily reservoir inflow prediction algorithms: insufficient mining of the characteristic information in the data and low prediction accuracy.
Description
Technical Field
The invention relates to hydrological forecasting technology, in particular to a reservoir daily inflow prediction method based on logarithmic transformation, time-series decomposition and reconstruction, neural networks, and ensemble learning. It is mainly used to predict the daily inflow of a reservoir so as to guide reservoir management and operation and reduce unnecessary release of water resources, and can be applied to drought management, flood control, irrigation, hydropower, and industrial and domestic water supply.
Background
A reservoir is an important component of water resource management, and effective reservoir operation can reduce water release. Inflow prediction is crucial to reservoir management and operation: flow forecasts support flood prevention, drought resistance, power generation, domestic water use, ecological improvement, and so on, and selecting an appropriate model for predicting future reservoir inflow is very important for water resource planning.
To predict reservoir inflow accurately, various prediction models have been proposed. They fall mainly into two types: physically based models and data-driven models.
Physically based models describe the hydrological process with mathematical functions and typically involve complex nonlinear processes with high spatial variability, so they can be very complex and limited in practice, requiring calibration against large amounts of data that are difficult to obtain in real time. Data-driven models can fully simulate the input-output relationship of a hydrological system without deep knowledge of its underlying physical processes: they map the relationship between input and output variables directly to predict inflow, which is why many hydrological researchers have turned to them.
In recent years attempts have been made to implement watershed models with neural network methods, whose advantage is that a neural network with enough hidden units can approximate any continuous function to any degree of accuracy. For example, one approach applies ensemble empirical mode decomposition to the original reservoir data, combines the resulting components into trend, period, and random terms, and predicts each term separately with deep models based on deep belief networks and neural networks.
Disclosure of Invention
To address the problems of the prior art, the method for constructing a reservoir daily inflow prediction model exploits the different sensitivities of different models to different data, overcomes the fragility and weak generalization of a single model faced with sequence data of complex character, and achieves accurate prediction of the daily inflow of a reservoir.
To solve the technical problems, the invention adopts the following technical scheme. The method for constructing the reservoir daily inflow prediction model comprises the following steps:
Step 1: smooth the daily inflow data with a logarithmic transformation: x = {x_1, x_2, …, x_u}, X = ln x, where x is the data sequence of the historical daily inflow of the reservoir to be predicted, x_u is the inflow on the u-th day, and X is the input sequence obtained by smoothing x;
Step 2: construct a plurality of basic learning machines Y = {Y_1, Y_2, …, Y_N}, where Y_N is the N-th basic learning machine and N is the number of basic learning machines to be integrated;
Step 3: learn the smoothed input sequence X with each of the N basic learning machines to obtain N prediction results y = {y_1(X), y_2(X), …, y_N(X)}, and integrate the N prediction results to obtain the final prediction result.
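As a minimal sketch of step 1 (an illustration only, not part of the patent; the sample inflow values are hypothetical), the logarithmic smoothing and its inverse look like this:

```python
import math

def log_smooth(x):
    """Step 1: damp the fluctuation of the daily inflow series element-wise."""
    return [math.log(v) for v in x]

def log_restore(x_log):
    """Invert the transform to bring predictions back to flow units."""
    return [math.exp(v) for v in x_log]

# Hypothetical daily inflows; the rainy-season spike (980) is strongly compressed.
inflow = [120.0, 135.0, 110.0, 980.0, 150.0]
smoothed = log_smooth(inflow)
restored = log_restore(smoothed)
```

In the embodiment below, the same inverse transform is applied to the integrated model's output to recover the predicted inflow value.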
In order to solve the technical problems, the invention adopts the further technical scheme that:
The prediction results of the N basic learning machines are integrated by weighted summation according to equation (1):

R = ω_1·y_1 + ω_2·y_2 + … + ω_N·y_N (1)

where R is the final prediction result, ω_i is the weight of the i-th basic learning machine, and y_i is the prediction result of the i-th basic learning machine.
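The weighted summation of equation (1) can be sketched as follows (a minimal illustration; the forecasts and equal weights are hypothetical, and the patent does not prescribe how the weights ω_i are chosen):

```python
def integrate_predictions(predictions, weights):
    """Weighted summation of N base-learner forecasts, as in equation (1):
    R[t] = sum_i w_i * y_i[t]."""
    n = len(predictions)
    horizon = len(predictions[0])
    return [sum(weights[i] * predictions[i][t] for i in range(n))
            for t in range(horizon)]

# Three hypothetical base-learner forecasts over a two-day horizon, equal weights.
y = [[10.0, 12.0], [11.0, 13.0], [12.0, 14.0]]
w = [1 / 3, 1 / 3, 1 / 3]
final = integrate_predictions(y, w)  # → [11.0, 13.0] (up to float rounding)
```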
Further, the method for constructing the basic learning machine in step 2 includes the following steps:
Step a: select one of EMD, EEMD, and wavelet decomposition, and decompose the input sequence X into the terms S = {s_1, s_2, …, s_u}, T = {t_1, t_2, …, t_u}, and P = {p_1, p_2, …, p_u}, where S is the random term, T is the trend term, and P is the period term;
Step b: select the LSTM model and the DNN model to construct three sub-network models, and predict the decomposed S, T, and P terms;
Step c: reconstruct the predicted components of the S, T, and P terms to obtain the reconstructed prediction result.
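To make steps a-c concrete, the following sketch substitutes a simple moving-average decomposition for the EMD/EEMD/wavelet methods named in the patent (an assumed stand-in, chosen only so the example stays self-contained); the decompose, predict-per-component, reconstruct flow is the same:

```python
def decompose(x, period):
    """Split a series into trend (T), period (P) and random (S) terms."""
    n = len(x)
    half = period // 2
    # Trend T: centred moving average (window shrinks at the edges).
    trend = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        trend.append(sum(x[lo:hi]) / (hi - lo))
    detrended = [x[i] - trend[i] for i in range(n)]
    # Period P: mean of the detrended series at each phase of the cycle.
    phase_mean = [sum(detrended[p::period]) / len(detrended[p::period])
                  for p in range(period)]
    periodic = [phase_mean[i % period] for i in range(n)]
    # Random S: whatever the other two components do not explain.
    random_term = [x[i] - trend[i] - periodic[i] for i in range(n)]
    return random_term, trend, periodic

def reconstruct(s_pred, t_pred, p_pred):
    """Step c: add the per-component predictions back together."""
    return [s + t + p for s, t, p in zip(s_pred, t_pred, p_pred)]

x = [10, 12, 11, 13, 10, 12, 11, 13]
S, T, P = decompose(x, period=4)
```

By construction S = X - T - P, so reconstructing the unmodified components recovers the input exactly; in the method proper, each component is first replaced by its sub-network's prediction before reconstruction.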
Further, the method for constructing the prediction component in step c includes the following steps:
Step A: construct a training set containing Q samples, where sample x_q = {x_q1, x_q2, …, x_qu, …, x_qU, x_q(U+1)};
where q denotes the q-th sample in the training set, q = 1, 2, 3, …, Q, Q is a positive integer greater than or equal to 1, u = 1, 2, 3, …, U, and U is a positive integer greater than or equal to 1;
x_qu = ln x(t), where X = {x(t), t = 1, 2, 3, …, T} is the data sequence of the component to be predicted, x(t) is the t-th value of the component to be predicted, and U is the embedding dimension of the data sequence X of the component to be predicted;
Step B: construct an initial neural network model with U input nodes and 1 output node;
Step C: train the initial neural network model with the normalized training set to obtain the component prediction model; in each sample x_q, the first U values are the input data of the neural network model and the last value is the target output corresponding to that input.
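Steps A-C slide a window of length U+1 over the component series: the first U values of each sample feed the network, and the (U+1)-th is the target. A minimal sketch with a hypothetical series:

```python
def build_training_set(series, U):
    """Sliding-window samples: inputs are U consecutive values,
    the target is the value immediately following them."""
    samples = [series[i:i + U + 1] for i in range(len(series) - U)]
    inputs = [s[:U] for s in samples]
    targets = [s[U] for s in samples]
    return inputs, targets

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
X_in, y_out = build_training_set(series, U=3)
# X_in → [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]]
# y_out → [4.0, 5.0, 6.0]
```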
Further, the number of hidden layers of the constructed initial neural network model is 1, 2 or 3, and the number of hidden nodes is 5, 10, 15, 20 or 25.
Further, the embedding dimension of the component data sequence X to be predicted is obtained by the false nearest neighbor method.
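The false nearest neighbor method can be sketched as below. This is the textbook brute-force version, not the "improved" variant the embodiment mentions, and the tolerance and threshold values are assumptions for illustration:

```python
import math

def false_nearest_fraction(x, dim, tol=10.0):
    """Fraction of nearest neighbours in `dim`-dimensional delay space that
    become 'false' (far apart) when the (dim+1)-th coordinate is added."""
    n = len(x) - dim  # delay vectors of length `dim` that still have a successor
    vecs = [x[i:i + dim] for i in range(n)]
    false = 0
    for i in range(n):
        # Brute-force nearest neighbour under squared Euclidean distance.
        j = min((k for k in range(n) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(vecs[i], vecs[k])))
        d = sum((a - b) ** 2 for a, b in zip(vecs[i], vecs[j])) ** 0.5
        extra = abs(x[i + dim] - x[j + dim])  # separation in the new coordinate
        if d > 0 and extra / d > tol:
            false += 1
    return false / n

def embedding_dimension(x, max_dim=10, threshold=0.01):
    """Smallest delay dimension whose false-neighbour fraction falls below
    `threshold`; used as U, the number of neural-network input nodes."""
    for dim in range(1, max_dim + 1):
        if false_nearest_fraction(x, dim) < threshold:
            return dim
    return max_dim

signal = [math.sin(0.5 * i) for i in range(60)]
U = embedding_dimension(signal)  # a small integer for this smooth signal
```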
The invention has the beneficial effects that:
firstly, because the fluctuation of the original reservoir warehousing quantity is relatively high, logarithmic transformation is adopted in the pretreatment of the daily warehousing flow sequence of the original reservoir, and the logarithmic transformation can slow down the fluctuation of original data, so that a neural network can learn data change characteristics more easily, particularly the warehousing quantity change characteristics in rainy seasons, and the prediction accuracy of a model can be improved;
secondly, the decomposition-and-reconstruction model construction based on time-series decomposition lets the characteristics of the time series be learned independently at multiple scales, which improves prediction accuracy;
thirdly, the invention trains basic learning machines from combinations of several decomposition methods and neural network models and integrates them into a final result. This exploits the different sensitivities of different models to different data, overcomes the fragility and weak generalization of a single model on sequence data of complex character, and achieves accurate prediction of the daily inflow of the reservoir.
Drawings
FIG. 1 is a flow chart of the method for constructing a reservoir daily inflow prediction model according to the invention;
FIG. 2 is a flow chart of the main algorithm of the construction method of the basic learning machine of the present invention.
Detailed Description
The following describes embodiments of the invention in detail with reference to the accompanying drawings, for illustrative purposes. The invention may be embodied in other forms, and various modifications and changes are possible without departing from the scope of the disclosure.
In this embodiment, after the input reservoir daily inflow data are log-transformed, an improved false nearest neighbor method is used to determine the embedding dimension, i.e. the number of input nodes of the neural network; several basic learning machines are then constructed from combinations of different decomposition algorithms and different neural network structures.
The method for constructing the reservoir daily inflow prediction model comprises the following steps:
Step 1: smooth the daily inflow data with a logarithmic transformation: x = {x_1, x_2, …, x_u}, X = ln x, where x is the data sequence of the historical daily inflow of the reservoir to be predicted, x_u is the inflow on the u-th day, and X is the input sequence obtained by smoothing x;
Step 2: construct a plurality of basic learning machines Y = {Y_1, Y_2, …, Y_N}, where Y_N is the N-th basic learning machine and N is the number of basic learning machines to be integrated;
Step 3: learn the smoothed input sequence X with each of the N basic learning machines to obtain N prediction results y = {y_1(X), y_2(X), …, y_N(X)}, and integrate the N prediction results to obtain the final prediction result.
In a preferred scheme, the sub-samples corresponding to the daily inflow data before year R-1 of the historical daily inflow sequence of the reservoir to be predicted form the training set, the sub-samples corresponding to the daily inflow data of year R-1 form the test set, and R is the current prediction year.
The method for constructing the basic learning machines of this embodiment comprises the following steps:
Step a: select one of EMD, EEMD, and wavelet decomposition, and decompose the input sequence X into the terms S = {s_1, s_2, …, s_u}, T = {t_1, t_2, …, t_u}, and P = {p_1, p_2, …, p_u}, where S is the random term, T is the trend term, and P is the period term;
Step b: select the LSTM model and the DNN model to construct three sub-network models, and predict the decomposed S, T, and P terms;
Step c: reconstruct the predicted components of the S, T, and P terms to obtain the reconstructed prediction result.
In a specific scheme, the number of hidden layers of the initial LSTM network model is 1, 2, or 3, and the number of hidden nodes is 5, 10, 15, 20, or 25. Optionally, the decomposition methods used by the invention are EMD, EEMD, and wavelet decomposition, and the neural network models adopted are LSTM and DNN; a basic learning machine may combine any of the three decomposition methods with either of the two network models.
The simulation of this embodiment is performed on a CPU with a main frequency of 3.6 GHz and 8 GB of memory, in a software environment of Python 3.5.2, TensorFlow 1.3.0, and MATLAB R2016a.
The method for constructing the reservoir daily inflow prediction model of this embodiment specifically comprises the following steps:
Step 1: smooth the daily inflow data with a logarithmic transformation, x = {x_1, x_2, …, x_u}, where x is the data sequence of the historical daily inflow of the reservoir to be predicted, x_u is the inflow on the u-th day, and X is the input sequence obtained by smoothing x;
Step 2: construct 6 basic learning machines Y = {Y_1, Y_2, …, Y_N}: EMD decomposition combined with an LSTM network, EEMD decomposition combined with an LSTM network, wavelet decomposition combined with an LSTM network, EMD decomposition combined with a CNN network, EEMD decomposition combined with a CNN network, and wavelet decomposition combined with a CNN network, 6 basic learning machines in total;
Step 3: train the 6 basic learning machines with the normalized training set; test the 6 basic learning machines with the normalized test set, and integrate the test results into the final result by weighted summation.
In this embodiment, on the basis of the above, the final result of the test set produced by the integrated model is output and the logarithm is inverted (exponentiated) to obtain the predicted value.
Using the construction method of this embodiment, the Ankang reservoir is selected as the reservoir to be predicted and its historical daily inflow data sequence is used: the daily inflow data of 1943/1/1-1971/12/31 are used for prediction, with the time-series data of 1943/1/1-1970/12/31 as the training set and the data of 1971/1/1-1971/12/31 as the test set. The experimental simulation environment of this embodiment is a CPU with a main frequency of 3.6 GHz and 8 GB of memory, with software Python 3.5.2, TensorFlow 1.3.0, and MATLAB R2016a.
Using the method proposed by the invention, this example was compared with the following 6 prediction models as comparative examples:
(1) wavelet decomposition + LSTM model;
(2) wavelet decomposition + DNN model;
(3) EMD + LSTM model;
(4) EMD + DNN model;
(5) EEMD + LSTM model;
(6) EEMD + DNN model.
Note: EMD is Empirical Mode Decomposition; EEMD is Ensemble Empirical Mode Decomposition; the LSTM model is the Long Short-Term Memory network model; the DNN model is the Deep Neural Network model.
In all of the above models, the log transformation proposed by the invention is used to preprocess the inflow sequence, and the corresponding model is then used to predict it. The results of the comparative experiment between this embodiment and the comparative examples are shown in Table 1.
TABLE 1 comparison of model prediction accuracies
Note: MAPE is the mean absolute percentage error, NRMSE is the normalized root mean square error, and R² is the coefficient of determination.
Table 1 shows the MAPE of the different models on the 1971 prediction results for the Ankang reservoir. The smaller the MAPE, the better; compared with the 6 single-algorithm models of the comparative examples, the integrated model performs best, with a prediction error (MAPE) of 11.82 percent.
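The three indices reported in Table 1 can be computed as follows (a minimal sketch; the observed and predicted values are hypothetical, not the Ankang figures):

```python
def mape(actual, pred):
    """Mean absolute percentage error, in percent (smaller is better)."""
    return 100.0 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, pred))

def nrmse(actual, pred):
    """Root mean square error normalised by the mean of the observations."""
    mse = sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)
    return mse ** 0.5 / (sum(actual) / len(actual))

def r2(actual, pred):
    """Coefficient of determination."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

actual = [100.0, 200.0, 300.0]
pred = [110.0, 190.0, 310.0]
# mape(actual, pred) → about 6.11 (percent); r2(actual, pred) → 0.985
```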
The above is only an embodiment of the invention and is not intended to limit its scope; all equivalent structures made using the contents of the specification and the drawings, or applications in other related technical fields, are encompassed by the invention.
Claims (6)
1. A method for constructing a reservoir daily inflow prediction model, characterized in that the construction method comprises the following steps:
Step 1: smooth the daily inflow data with a logarithmic transformation: x = {x_1, x_2, …, x_u}, X = ln x, where x is the data sequence of the historical daily inflow of the reservoir to be predicted, x_u is the inflow on the u-th day, and X is the input sequence obtained by smoothing x;
Step 2: construct a plurality of basic learning machines: Y = {Y_1, Y_2, …, Y_N}, where Y_N is the N-th basic learning machine and N is the number of basic learning machines to be integrated;
Step 3: learn the smoothed input sequence X with each of the N basic learning machines to obtain N prediction results: y = {y_1(X), y_2(X), …, y_N(X)}, and integrate the N prediction results to obtain the final prediction result.
2. The method for constructing a reservoir daily inflow prediction model according to claim 1, characterized in that:
the prediction results of the N basic learning machines are integrated by weighted summation according to equation (1):

R = ω_1·y_1 + ω_2·y_2 + … + ω_N·y_N (1)

where R is the final prediction result, ω_i is the weight of the i-th basic learning machine, and y_i is the prediction result of the i-th basic learning machine.
3. The method for constructing a reservoir daily inflow prediction model according to claim 1, characterized in that the method for constructing the basic learning machine in step 2 comprises the following steps:
Step a: select one of EMD, EEMD, and wavelet decomposition, and decompose the input sequence X into the terms S = {s_1, s_2, …, s_u}, T = {t_1, t_2, …, t_u}, and P = {p_1, p_2, …, p_u}, where S is the random term, T is the trend term, and P is the period term;
Step b: select the LSTM model and the DNN model to construct three sub-network models, and predict the decomposed S, T, and P terms;
Step c: reconstruct the predicted components of the S, T, and P terms to obtain the reconstructed prediction result.
4. The method for constructing the basic learning machine according to claim 3, characterized in that the method for reconstructing the prediction components in step c comprises the following steps:
Step A: construct a training set containing Q samples, where sample x_q = {x_q1, x_q2, …, x_qu, …, x_qU, x_q(U+1)};
where q denotes the q-th sample in the training set, q = 1, 2, 3, …, Q, Q is a positive integer greater than or equal to 1, u = 1, 2, 3, …, U, and U is a positive integer greater than or equal to 1;
x_qu = ln x(t), where X = {x(t), t = 1, 2, 3, …, T} is the data sequence of the component to be predicted, x(t) is the t-th value of the component to be predicted, and U is the embedding dimension of the data sequence X of the component to be predicted;
Step B: construct an initial neural network model with U input nodes and 1 output node;
Step C: train the initial neural network model with the normalized training set to obtain the component prediction model; in each sample x_q, the first U values are the input data of the neural network model and the last value is the target output corresponding to that input.
5. The method for constructing a reservoir daily inflow prediction model according to claim 4, characterized in that: the number of hidden layers of the constructed initial neural network model is 1, 2, or 3, and the number of hidden nodes is 5, 10, 15, 20, or 25.
6. The method for constructing a component prediction model according to claim 4, characterized in that the embedding dimension of the component data sequence X to be predicted is obtained by the false nearest neighbor method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010198509.7A CN111507505A (en) | 2020-03-20 | 2020-03-20 | Method for constructing a reservoir daily inflow prediction model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010198509.7A CN111507505A (en) | 2020-03-20 | 2020-03-20 | Method for constructing a reservoir daily inflow prediction model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111507505A (en) | 2020-08-07 |
Family
ID=71875859
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010198509.7A Pending CN111507505A (en) | 2020-03-20 | 2020-03-20 | Method for constructing reservoir daily input prediction model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111507505A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112668773A (en) * | 2020-12-24 | 2021-04-16 | 北京百度网讯科技有限公司 | Method and device for predicting warehousing traffic and electronic equipment |
CN112966926A (en) * | 2021-03-02 | 2021-06-15 | 河海大学 | Flood sensitivity risk assessment method based on ensemble learning |
CN112989705A (en) * | 2021-03-30 | 2021-06-18 | 海尔数字科技(上海)有限公司 | Method and device for predicting reservoir entry flow value, electronic device and medium |
CN117744884A (en) * | 2023-12-29 | 2024-03-22 | 南方电网调峰调频发电有限公司鲁布革水力发电厂 | Reservoir water flow prediction model construction method and reservoir water flow prediction method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010204974A (en) * | 2009-03-04 | 2010-09-16 | Oki Electric Ind Co Ltd | Time series data prediction device |
CN102184335A (en) * | 2011-05-20 | 2011-09-14 | 公安部上海消防研究所 | Fire disaster time sequence prediction method based on ensemble empirical mode decomposition and phase space reconstruction |
CN104239964A (en) * | 2014-08-18 | 2014-12-24 | 华北电力大学 | Ultra-short-period wind speed prediction method based on spectral clustering type and genetic optimization extreme learning machine |
CN105404939A (en) * | 2015-12-04 | 2016-03-16 | 河南许继仪表有限公司 | Short-term power load prediction method |
CN108921279A (en) * | 2018-03-26 | 2018-11-30 | 西安电子科技大学 | Reservoir day enters water prediction technique |
CN110598170A (en) * | 2019-08-06 | 2019-12-20 | 天津大学 | Data prediction method based on FEEMD decomposition time sequence |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200807 |