CN109359404B - Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion - Google Patents
Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion
- Publication number: CN109359404B (application CN201811273575.5A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A10/00—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
- Y02A10/40—Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping
Abstract
The invention relates to the field of hydrological prediction, and discloses a medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion.
Description
Technical Field
The invention relates to the field of hydrological forecasting, in particular to a medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion.
Background
Medium-and-long-term runoff forecasting (forecast lead time of 3 days to 1 year) has been one of the central problems of hydrological research in recent decades, and improving its accuracy matters greatly for water resource allocation, flood control and disaster mitigation. Existing medium-and-long-term runoff forecasting methods fall into two classes: physical models and empirical models. A physical model typically predicts future runoff by simulating the physical processes of the watershed water cycle. Building a physical model requires meteorological data such as precipitation, evaporation and temperature over the whole forecast lead time, which makes medium-and-long-term forecasting with physical models difficult.
The empirical models used in medium-and-long-term hydrological forecasting are mainly time series models and machine learning models. Previous research shows, however, that time series models have limited ability to forecast runoff series with strong nonlinearity and non-stationarity. Among machine learning methods, artificial neural networks (ANNs), including the back-propagation neural network (BPNN), radial basis function (RBF) neural network, generalized regression neural network (GRNN), Elman neural network and extreme learning machine (ELM), have achieved good results in medium-and-long-term runoff forecasting, but the performance of a single ANN model still leaves substantial room for improvement. Preprocessing the runoff time series with a data preprocessing technique is one effective way to raise medium-and-long-term forecast accuracy. Another way to improve on a single ANN model is to fuse the forecast information of several ANN models, combining the strengths of each model to further improve forecast accuracy.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides a medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion, which can effectively improve the forecasting precision of a medium-and-long-term runoff time sequence.
The technical scheme is as follows: the invention provides a medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion, comprising the following steps. Step one: acquire historical measured runoff data of a hydrological station, establish a medium-and-long-term runoff time series, and divide the sample data into a training period and a testing period. Step two: decompose the runoff time series into several independent empirical modes with the empirical wavelet transform (EWT), discard the highest-frequency mode, and sum the remaining modes linearly to obtain a reconstructed runoff series, thereby removing the redundant noise of the original series. Step three: perform phase-space reconstruction on the EWT-processed runoff series, take the resulting phase-space matrix as the input factors of the base forecasting models, and establish the input and output matrices of the ANN base forecasting models. Step four: train the RBF, ELM and Elman neural network models with the training-period input and output matrices determined in step three. Step five: take the outputs of the three base forecasting models as predictors for the GRNN model, establish new input and output matrices, train the GRNN model, and feed the test samples into the trained GRNN model to obtain the test-period predictions. Step six: evaluate the forecasts with four indices: root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and correlation coefficient (R).
Further, in the second step, for the original runoff time series x(t), the EWT decomposition proceeds as follows: (1) Compute the Fourier spectrum F(ω) of x(t) with the fast Fourier transform (FFT) algorithm. (2) Adaptively partition the frequency domain $[0,\pi]$ of F(ω) into K bands of unequal bandwidth, $[0,\omega_1],[\omega_1,\omega_2],\dots,[\omega_{K-1},\pi]$, where each wavelet filter boundary $\omega_n$ (n = 1, 2, ..., K−1) is chosen as the midpoint between two consecutive local maxima of the spectrum. (3) Construct empirical wavelets from the Meyer wavelet, determining the empirical wavelet functions $\hat{\psi}_k(\omega)$ and the empirical scaling function $\hat{\phi}_1(\omega)$:

$$\hat{\phi}_1(\omega)=\begin{cases}1, & |\omega|\le(1-\gamma)\omega_1\\ \cos\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\gamma\omega_1}\bigl(|\omega|-(1-\gamma)\omega_1\bigr)\right)\right], & (1-\gamma)\omega_1\le|\omega|\le(1+\gamma)\omega_1\\ 0, & \text{otherwise}\end{cases}$$

$$\hat{\psi}_k(\omega)=\begin{cases}1, & (1+\gamma)\omega_k\le|\omega|\le(1-\gamma)\omega_{k+1}\\ \cos\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\gamma\omega_{k+1}}\bigl(|\omega|-(1-\gamma)\omega_{k+1}\bigr)\right)\right], & (1-\gamma)\omega_{k+1}\le|\omega|\le(1+\gamma)\omega_{k+1}\\ \sin\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\gamma\omega_k}\bigl(|\omega|-(1-\gamma)\omega_k\bigr)\right)\right], & (1-\gamma)\omega_k\le|\omega|\le(1+\gamma)\omega_k\\ 0, & \text{otherwise}\end{cases}$$

In the formulas, $\beta(x)=x^4(35-84x+70x^2-20x^3)$ and $\gamma<\min_k\dfrac{\omega_{k+1}-\omega_k}{\omega_{k+1}+\omega_k}$. (4) Reconstruct the original runoff time series to obtain the different empirical modes.
Preferably, in (4), the original runoff time series is reconstructed by the following formula:

$$x(t)=W_x(0,t)*\phi_1(t)+\sum_{k=1}^{K-1}W_x(k,t)*\psi_k(t)$$

wherein $*$ denotes the convolution operation, $W_x(0,t)$ is the approximation coefficient and $W_x(k,t)$ are the detail coefficients, given by:

$$W_x(0,t)=\langle x,\phi_1\rangle=\int x(\tau)\,\overline{\phi_1(\tau-t)}\,d\tau=F^{-1}\!\bigl(F(\omega)\,\overline{\hat{\phi}_1(\omega)}\bigr)$$
$$W_x(k,t)=\langle x,\psi_k\rangle=\int x(\tau)\,\overline{\psi_k(\tau-t)}\,d\tau=F^{-1}\!\bigl(F(\omega)\,\overline{\hat{\psi}_k(\omega)}\bigr)$$

In the formulas, $\psi_k(t)$ and $\phi_1(t)$ are the empirical wavelet function and the empirical scaling function, $\overline{(\cdot)}$ denotes complex conjugation, $F^{-1}(\cdot)$ denotes the inverse Fourier transform, and $\hat{\psi}_k(\omega)$ and $\hat{\phi}_1(\omega)$ denote the Fourier transforms of $\psi_k(t)$ and $\phi_1(t)$. The empirical modes $u_k(t)$ are defined as follows:

$$u_0(t)=W_x(0,t)*\phi_1(t),\qquad u_k(t)=W_x(k,t)*\psi_k(t),\quad k=1,\dots,K-1$$
preferably, the hidden layer of the RBF neural network uses a Gaussian transfer function asActivation function, gaussian activation function is defined as:wherein x = [ x ] 1 ,x 2 ,...,x n ] T An input vector of dimension n, c i =[c i1 ,c i2 ,...,c in ] T Is the center of the ith hidden layer neuron, q i Represents the width of the gaussian function,. Is the euclidean norm; the response of the jth node of the output layer is:in the formula, w ij H and m respectively represent the number of nodes of the hidden layer and the output layer; the extreme learning machine is a single hidden layer feedforward neural network, and K training samples { x is given k ,y k K =1,2, K, where X = [ X ] 1 ,x 2 ,...,x K ] T ∈R M For the input vector, Y = [ Y = 1 ,y 2 ,...,y K ] T ∈R N Is an output vector; the output of the jth hidden layer neuron for the kth training sample is represented asWherein J represents the number of hidden neurons, g (-) represents the activation function, β j And a j Respectively representing the desired connection weight and bias between the input layer and the hidden layer; the output of the k training sample of the ELM model can be expressed as:in the formula, w j Representing weights between the hidden layer and the output layer; the hidden layer kernel mapping matrix is represented as follows:the output of the ELM network is represented as:(11) (ii) a Wherein w = [ w = 1 ,w 2 ,...,w J ] T (ii) a The goal of the ELM model is to find the most appropriate w, so that the network outputs valuesAnd the error between the actual measurement value Y is minimum, and the association weight between the hidden layer and the output layer is obtained by solving the following optimization problem:
in the formula (I), the compound is shown in the specification,is the generalized inverse of matrix D; the output of the k training sample of the ELM model is represented as:the Elman neural network is a dynamic recurrent neural network. In addition to the input, hidden and output layers, the Elman neural network includes a special recursive layer that remembers the output information of the hidden neurons at previous times and then uses this information as input to the hidden layer.
Further, in the fifth step, assuming that the joint probability density function p(X, y) of the random vector X and variable y is known, if the observed value of X is X, the regression of y with respect to X is represented as

$$\hat{Y}(X)=E[y|X]=\frac{\int_{-\infty}^{\infty}y\,p(X,y)\,dy}{\int_{-\infty}^{\infty}p(X,y)\,dy}$$

wherein E[y|X] is the predicted value of the output Y given the input X. The density function p(X, y) is unknown and is estimated by the Parzen non-parametric estimator

$$\hat{p}(X,y)=\frac{1}{N(2\pi)^{\frac{d+1}{2}}\sigma^{d+1}}\sum_{j=1}^{N}\exp\!\left(-\frac{D_j^2}{2\sigma^2}\right)\exp\!\left(-\frac{(y-Y_j)^2}{2\sigma^2}\right),\qquad D_j^2=(X-X_j)^T(X-X_j)$$

wherein $X_j$ and $Y_j$ represent the measured input value and measured output value of the j-th sample, N is the number of samples, d is the dimension of X, and σ is the smoothing parameter. From the two equations above, the output of the GRNN model can be expressed as

$$\hat{Y}(X)=\frac{\sum_{j=1}^{N}Y_j\exp\!\left(-\frac{D_j^2}{2\sigma^2}\right)}{\sum_{j=1}^{N}\exp\!\left(-\frac{D_j^2}{2\sigma^2}\right)}$$
preferably, in the sixth step, the calculation formula of RMSE, MAE, MAPE and R is as follows:
in the formula, q p (i) Is a predicted value; q. q.s o (i) Is an actual measurement value;andrespectively expressed as the mean values of the measured value and the predicted value; n is the number of sample sets.
Beneficial effects: compared with the prior art, the medium-and-long-term runoff forecasting method established by the invention achieves the following beneficial effects:
1) According to the method, the empirical wavelet transform is adopted to decompose and reconstruct the runoff time sequence, so that redundant noise in the natural runoff time sequence can be removed, the interference of nonlinear fluctuation of the runoff time sequence on runoff forecasting is reduced, and the performance of a forecasting model is improved;
2) To address the problems that a single ANN model often fails to reach the expected forecasting accuracy and that each model has its own strengths and weaknesses, the forecasts of three ANN models are used as input factors and a GRNN neural network model further forecasts the monthly runoff time series. This integrates the advantages of the different ANN forecasting models and fuses the forecast information of each base model, yielding runoff forecasts of higher accuracy.
Drawings
FIG. 1 is a flow chart of a medium-and-long-term runoff forecasting model based on empirical wavelet denoising and neural network fusion;
FIG. 2 is a graph of the EWT decomposition of the Wudongde station runoff time series and a comparison of the original and noise-canceled runoff time series;
FIG. 3 is a graph comparing the monthly runoff predictions of the GNE and EWT-GNE models in the Wudongde testing period.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings.
Embodiment 1:
The method takes the monthly runoff time series of the Wudongde hydrological station in the upper Yangtze River basin as an embodiment, and a numerical experiment verifies its effect. Fig. 1 is a flow chart of the medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion, which is implemented by the following steps:
Step one: obtain the historical measured monthly runoff data of the Wudongde hydrological station from January 1958 to December 2012 (660 sample points) and establish the monthly runoff time series. The measured monthly runoff of the first 525 months (January 1958 to September 2001) serves as the training sample, and that of the last 135 months (October 2001 to December 2012) as the testing sample.
Step two: and decomposing the Wudongde month runoff time series into a plurality of independent empirical modes by adopting EWT, removing the mode with the highest frequency in the decomposed empirical modes, and performing linear summation on the rest empirical modes to obtain a reconstructed runoff time series. The EWT decomposition of the udot station runoff time series and the comparison of the original and noise-cancelled runoff time series are shown in fig. 2.
The main idea of EWT is to adaptively partition the Fourier spectrum of the original runoff time series and to build a set of suitable wavelet filters that extract the amplitude-modulated, frequency-modulated (AM-FM) components of the series. For the original runoff time series x(t), the EWT decomposition proceeds as follows:
(1) Compute the Fourier spectrum F(ω) of x(t) with the fast Fourier transform (FFT) algorithm.
(2) Adaptively partition the frequency domain $[0,\pi]$ of F(ω) into K bands of unequal bandwidth, $[0,\omega_1],[\omega_1,\omega_2],\dots,[\omega_{K-1},\pi]$, where each wavelet filter boundary $\omega_n$ (n = 1, 2, ..., K−1) is chosen as the midpoint between two consecutive local maxima of the spectrum.
(3) Construct empirical wavelets from the Meyer wavelet, determining the empirical wavelet functions $\hat{\psi}_k(\omega)$ and the empirical scaling function $\hat{\phi}_1(\omega)$:

$$\hat{\phi}_1(\omega)=\begin{cases}1, & |\omega|\le(1-\gamma)\omega_1\\ \cos\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\gamma\omega_1}\bigl(|\omega|-(1-\gamma)\omega_1\bigr)\right)\right], & (1-\gamma)\omega_1\le|\omega|\le(1+\gamma)\omega_1\\ 0, & \text{otherwise}\end{cases}$$

$$\hat{\psi}_k(\omega)=\begin{cases}1, & (1+\gamma)\omega_k\le|\omega|\le(1-\gamma)\omega_{k+1}\\ \cos\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\gamma\omega_{k+1}}\bigl(|\omega|-(1-\gamma)\omega_{k+1}\bigr)\right)\right], & (1-\gamma)\omega_{k+1}\le|\omega|\le(1+\gamma)\omega_{k+1}\\ \sin\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\gamma\omega_k}\bigl(|\omega|-(1-\gamma)\omega_k\bigr)\right)\right], & (1-\gamma)\omega_k\le|\omega|\le(1+\gamma)\omega_k\\ 0, & \text{otherwise}\end{cases}$$

with $\beta(x)=x^4(35-84x+70x^2-20x^3)$ and $\gamma<\min_k\dfrac{\omega_{k+1}-\omega_k}{\omega_{k+1}+\omega_k}$.
(4) Reconstruct the original runoff time series to obtain the different empirical modes. The original runoff time series is reconstructed by the following formula:

$$x(t)=W_x(0,t)*\phi_1(t)+\sum_{k=1}^{K-1}W_x(k,t)*\psi_k(t)$$

wherein $*$ denotes the convolution operation, $W_x(0,t)$ is the approximation coefficient and $W_x(k,t)$ are the detail coefficients, given by:

$$W_x(0,t)=\langle x,\phi_1\rangle=\int x(\tau)\,\overline{\phi_1(\tau-t)}\,d\tau=F^{-1}\!\bigl(F(\omega)\,\overline{\hat{\phi}_1(\omega)}\bigr)$$
$$W_x(k,t)=\langle x,\psi_k\rangle=\int x(\tau)\,\overline{\psi_k(\tau-t)}\,d\tau=F^{-1}\!\bigl(F(\omega)\,\overline{\hat{\psi}_k(\omega)}\bigr)$$

In the formulas, $\psi_k(t)$ and $\phi_1(t)$ represent the empirical wavelet function and the empirical scaling function, $\overline{(\cdot)}$ denotes complex conjugation, $F^{-1}(\cdot)$ represents the inverse Fourier transform, and $\hat{\psi}_k(\omega)$ and $\hat{\phi}_1(\omega)$ denote the Fourier transforms of $\psi_k(t)$ and $\phi_1(t)$. The empirical modes $u_k(t)$ are defined as follows:

$$u_0(t)=W_x(0,t)*\phi_1(t),\qquad u_k(t)=W_x(k,t)*\psi_k(t),\quad k=1,\dots,K-1$$
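As an illustration of steps (1) to (4), the following simplified NumPy sketch replaces the Meyer-type filters with ideal (boxcar) band filters: the spectrum is segmented at midpoints between its largest local maxima, each band is inverted to an empirical mode, the highest-frequency mode is discarded as noise, and the remaining modes are summed. The `n_modes` parameter and the ideal filters are illustrative assumptions, not the patent's exact construction.

```python
import numpy as np

def ewt_denoise(x, n_modes=4):
    """Simplified EWT-style denoising sketch.

    Ideal band-pass filters stand in for the Meyer-type empirical
    wavelet filters; band boundaries are midpoints between the
    largest spectral peaks, as in step (2) of the decomposition.
    Returns (denoised_series, list_of_empirical_modes).
    """
    N = len(x)
    F = np.fft.rfft(x)
    mag = np.abs(F)
    # local maxima of the half spectrum (DC and Nyquist excluded)
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]]
    # keep the n_modes largest peaks, then sort them by frequency
    peaks = sorted(sorted(peaks, key=lambda i: mag[i])[-n_modes:])
    # boundaries: midpoints between consecutive retained peaks
    bounds = [0] + [(a + b) // 2 for a, b in zip(peaks[:-1], peaks[1:])] + [len(mag)]
    # split the spectrum into disjoint bands; invert each band -> one mode
    modes = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        Fk = np.zeros_like(F)
        Fk[lo:hi] = F[lo:hi]
        modes.append(np.fft.irfft(Fk, n=N))
    # drop the highest-frequency mode and sum the remainder (step two)
    return np.sum(modes[:-1], axis=0), modes
```

Because the ideal bands partition the spectrum, the modes sum back exactly to the original series, mirroring the reconstruction formula above.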
step three: and (3) carrying out phase space reconstruction on the noise-eliminated monthly runoff time sequence processed by the EWT, constructing a phase space matrix as an input factor of the basis forecasting model, and establishing an input matrix and an output matrix of the ANN basis forecasting model.
Step four: and (4) training the RBF, ELM and Elman neural network models respectively by adopting the input matrix and the output matrix of the training period determined in the third step.
The RBF neural network is a feedforward neural network (FNN) with a three-layer structure: an input layer formed by the input variables, a hidden layer that transforms the inputs nonlinearly, and an output layer that produces the linear network response. The hidden layer of the RBF neural network uses a Gaussian transfer function as the activation function, defined as

$$h_i(x)=\exp\!\left(-\frac{\|x-c_i\|^2}{2q_i^2}\right)$$

wherein $x=[x_1,x_2,\dots,x_n]^T$ is the n-dimensional input vector, $c_i=[c_{i1},c_{i2},\dots,c_{in}]^T$ is the center of the i-th hidden-layer neuron, $q_i$ denotes the width of the Gaussian function, and $\|\cdot\|$ is the Euclidean norm. The response of the j-th node of the output layer is

$$y_j=\sum_{i=1}^{h}w_{ij}\,h_i(x),\qquad j=1,2,\dots,m$$

wherein $w_{ij}$ is the connection weight between the i-th hidden-layer node and the j-th output-layer node, and h and m represent the numbers of hidden-layer and output-layer nodes.
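The RBF forward pass described above can be sketched in a few lines; centers, widths and output weights are assumed to have been fitted already, so this only evaluates the two formulas:

```python
import numpy as np

def rbf_forward(x, centers, widths, W):
    """RBF network response for one input vector x.

    h_i(x) = exp(-||x - c_i||^2 / (2 q_i^2))  (Gaussian hidden layer)
    y_j    = sum_i w_ij * h_i(x)              (linear output layer)
    W has shape (n_hidden, n_outputs).
    """
    h = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return W.T @ h
```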
The extreme learning machine is a single-hidden-layer feedforward neural network. Given K training samples $\{x_k,y_k\}$, k = 1, 2, ..., K, with input vectors $x_k\in R^M$ and output vectors $y_k\in R^N$, the output of the j-th hidden-layer neuron for the k-th training sample can be expressed as $g(\beta_j\cdot x_k+a_j)$, wherein J represents the number of hidden neurons, $g(\cdot)$ represents the activation function, and $\beta_j$ and $a_j$ respectively represent the connection weights and bias between the input layer and the hidden layer. The output of the ELM model for the k-th training sample can be expressed as

$$\hat{y}_k=\sum_{j=1}^{J}w_j\,g(\beta_j\cdot x_k+a_j)\qquad(9)$$

wherein $w_j$ represents the weights between the hidden layer and the output layer. The hidden-layer kernel mapping matrix can be represented as follows:

$$D=\begin{bmatrix}g(\beta_1\cdot x_1+a_1)&\cdots&g(\beta_J\cdot x_1+a_J)\\ \vdots&\ddots&\vdots\\ g(\beta_1\cdot x_K+a_1)&\cdots&g(\beta_J\cdot x_K+a_J)\end{bmatrix}\qquad(10)$$

The output of the ELM network can then be expressed as

$$\hat{Y}=Dw\qquad(11)$$

wherein $w=[w_1,w_2,\dots,w_J]^T$. The goal of the ELM model is to find the most appropriate w, so that the error between the network output $\hat{Y}$ and the measured values Y is minimal. The connection weights between the hidden layer and the output layer can be derived by solving the following optimization problem:

$$\min_w\|Dw-Y\|\qquad(12)$$

The solution of equation (12) can be found from the Moore-Penrose (MP) generalized inverse:

$$w=D^{+}Y\qquad(13)$$

wherein $D^{+}$ is the generalized inverse of the matrix D. The output of the ELM model for the k-th training sample is then given by equation (9) with these weights.
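The ELM training procedure above (random hidden layer, output weights from the MP generalized inverse) can be sketched as follows; the sigmoid activation and hidden-layer size are illustrative assumptions:

```python
import numpy as np

def train_elm(X, Y, n_hidden=20, rng=None):
    """ELM sketch: random input weights beta and biases a, sigmoid
    hidden layer D, and output weights w = D^+ Y via the
    Moore-Penrose pseudo-inverse (equations (12)-(13))."""
    rng = np.random.default_rng(rng)
    beta = rng.standard_normal((X.shape[1], n_hidden))  # input-layer weights
    a = rng.standard_normal(n_hidden)                   # hidden biases
    D = 1.0 / (1.0 + np.exp(-(X @ beta + a)))           # hidden output matrix
    w = np.linalg.pinv(D) @ Y                           # least-squares solution
    def predict(X_new):
        H = 1.0 / (1.0 + np.exp(-(X_new @ beta + a)))
        return H @ w
    return predict
```

Because only w is trained (in closed form), ELM fitting is a single pseudo-inverse rather than an iterative procedure.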
the Elman neural network is a dynamic recurrent neural network. In addition to the input, hidden and output layers, the Elman neural network includes a special recursive layer that remembers the output information of the hidden neurons at previous times and then uses this information as input to the hidden layer.
Step five: and (3) taking the outputs of the three ANN-based prediction models as the prediction factors of the GRNN model, establishing an input matrix and an output matrix again, training the GRNN model, and substituting the test samples into the trained GRNN model to obtain the predicted value of the test period.
The GRNN neural network is a special form of RBF neural network. Assuming that the joint probability density function p(X, y) of the random vector X and variable y is known, if the observed value of X is X, then the regression of y with respect to X can be expressed as

$$\hat{Y}(X)=E[y|X]=\frac{\int_{-\infty}^{\infty}y\,p(X,y)\,dy}{\int_{-\infty}^{\infty}p(X,y)\,dy}\qquad(15)$$

where E[y|X] is the predicted value of the output Y given the input X. The density function p(X, y) is usually unknown and can be estimated from a Parzen non-parametric estimate:

$$\hat{p}(X,y)=\frac{1}{N(2\pi)^{\frac{d+1}{2}}\sigma^{d+1}}\sum_{j=1}^{N}\exp\!\left(-\frac{D_j^2}{2\sigma^2}\right)\exp\!\left(-\frac{(y-Y_j)^2}{2\sigma^2}\right),\qquad D_j^2=(X-X_j)^T(X-X_j)\qquad(16)$$

In the formula, $X_j$ and $Y_j$ respectively represent the measured input value and measured output value of the j-th sample, N is the number of samples, d is the dimension of X, and σ is the smoothing parameter. From equations (15) and (16), the output of the GRNN model can be expressed as

$$\hat{Y}(X)=\frac{\sum_{j=1}^{N}Y_j\exp\!\left(-\frac{D_j^2}{2\sigma^2}\right)}{\sum_{j=1}^{N}\exp\!\left(-\frac{D_j^2}{2\sigma^2}\right)}$$
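The GRNN output formula reduces to a kernel-weighted average of the training outputs; a one-output sketch (the smoothing parameter σ is a tuning choice):

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """GRNN prediction for one query point x.

    D_j^2 = (x - X_j)^T (x - X_j); the output is the Gaussian
    kernel-weighted average  sum_j Y_j K_j / sum_j K_j,
    implementing the closed-form GRNN regression above.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)     # squared distances D_j^2
    k = np.exp(-d2 / (2.0 * sigma ** 2))        # kernel weights K_j
    return np.sum(Y_train * k) / np.sum(k)
```

No iterative training is needed: the training samples themselves are the model, and only σ is selected during the training period.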
step six: and evaluating the prediction result by adopting four evaluation indexes including Root Mean Square Error (RMSE), mean Absolute Error (MAE), mean Absolute Percent Error (MAPE) and correlation coefficient (R).
The formulas for RMSE, MAE, MAPE and R are as follows:

$$RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(q_o(i)-q_p(i)\bigr)^2}$$
$$MAE=\frac{1}{n}\sum_{i=1}^{n}\bigl|q_o(i)-q_p(i)\bigr|$$
$$MAPE=\frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{q_o(i)-q_p(i)}{q_o(i)}\right|$$
$$R=\frac{\sum_{i=1}^{n}\bigl(q_o(i)-\bar{q}_o\bigr)\bigl(q_p(i)-\bar{q}_p\bigr)}{\sqrt{\sum_{i=1}^{n}\bigl(q_o(i)-\bar{q}_o\bigr)^2\sum_{i=1}^{n}\bigl(q_p(i)-\bar{q}_p\bigr)^2}}$$

In the formulas, $q_p(i)$ is the predicted value; $q_o(i)$ is the measured value; $\bar{q}_o$ and $\bar{q}_p$ are the means of the measured and predicted values; n is the number of samples.
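The four evaluation indices can be computed directly from the formulas above; a short sketch (MAPE is returned in percent):

```python
import numpy as np

def forecast_metrics(q_o, q_p):
    """RMSE, MAE, MAPE (%) and correlation coefficient R,
    following the step-six evaluation formulas."""
    q_o = np.asarray(q_o, dtype=float)
    q_p = np.asarray(q_p, dtype=float)
    err = q_o - q_p
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / q_o))
    r = np.corrcoef(q_o, q_p)[0, 1]
    return rmse, mae, mape, r
```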
The monthly runoff time series of the Wudongde hydrological station is forecast with the model based on empirical wavelet denoising and neural network fusion (EWT-GNE) and compared with three single ANN models (RBF, ELM and Elman) and with the GRNN-based neural network fusion forecasting model (GNE). The forecast error statistics of the training and testing periods are shown in Table 1, and a comparison of the GNE and EWT-GNE forecasts in the testing period is shown in Fig. 3.
TABLE 1. Error statistics of the Wudongde station monthly runoff forecasts
As can be seen from Table 1, the RMSE, MAE and MAPE of the neural network fusion (GNE) model are smaller, and its R larger, than those of the three single ANN forecasting models, indicating that the fusion model can integrate the advantages of the three single ANN models and improve on their performance. The forecasting performance of the EWT-GNE model is in turn superior to that of the GNE model, showing that EWT can effectively remove redundant noise from the monthly runoff time series and markedly improve the accuracy of the GNE forecasting model. As can be seen from Fig. 3, the flows predicted by the EWT-GNE model are closer to the measured flow values than those of the GNE model, further demonstrating the superiority of the EWT-GNE model.
The above embodiments are merely illustrative of the technical concepts and features of the present invention, and the purpose of the embodiments is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered in the protection scope of the present invention.
Claims (6)
1. A method for forecasting medium and long term runoff based on empirical wavelet denoising and neural network fusion is characterized by comprising the following steps:
the method comprises the following steps: acquiring historical actual runoff data of a hydrological site, establishing a medium-long runoff time sequence, and dividing sample data into a training period and an inspection period;
step two: decomposing the runoff time sequence into a plurality of independent empirical modes by adopting EWT, removing the mode with the highest frequency in the decomposed empirical modes, and carrying out linear summation on the rest empirical modes to obtain a reconstructed runoff time sequence so as to eliminate redundant noise of the original runoff time sequence;
step three: performing phase space reconstruction on the runoff time sequence processed by the EWT, constructing a phase space matrix as an input factor of a base forecasting model, and establishing an input matrix and an output matrix of the ANN base forecasting model;
step four: training the RBF, ELM and Elman neural network models respectively by adopting the input matrix and the output matrix of the training period determined in the third step;
step five: taking the outputs of the three basis forecasting models as forecasting factors of the GRNN model, establishing an input matrix and an output matrix again, training the GRNN model, and substituting the test samples into the trained GRNN model to obtain a predicted value of a test period;
step six: and evaluating the prediction result by adopting four evaluation indexes including a root mean square error RMSE, an average absolute error MAE, an average absolute percentage error MAPE and a correlation coefficient R.
2. The method for forecasting the runoff in the medium and long term based on the empirical wavelet denoising and the neural network fusion of claim 1, wherein in the second step, for the time series x (t) of the original runoff, the decomposition process of the EWT is as follows:
(1) Calculating a Fourier spectrum F (omega) of the original runoff time sequence x (t) according to a Fast Fourier Transform (FFT) algorithm;
(2) Adaptively partition the frequency domain $[0,\pi]$ of the Fourier spectrum F(ω) into K bands of unequal bandwidth, $[0,\omega_1],[\omega_1,\omega_2],\dots,[\omega_{K-1},\pi]$, wherein each wavelet filter boundary $\omega_n$ (n = 1, 2, ..., K−1) is chosen as the midpoint between two consecutive local maxima;
(3) Construct empirical wavelets from the Meyer wavelet, determining the empirical wavelet functions $\hat{\psi}_k(\omega)$ and the empirical scaling function $\hat{\phi}_1(\omega)$;
(4) And reconstructing the original runoff time sequence to obtain different empirical modes.
3. The method for forecasting the medium-and-long-term runoff based on the empirical wavelet denoising and the neural network fusion as claimed in claim 2, wherein in the step (4), the original runoff time series is reconstructed by the following formula:
$$x(t)=W_x(0,t)*\phi_1(t)+\sum_{k=1}^{K-1}W_x(k,t)*\psi_k(t)$$

wherein $*$ denotes the convolution operation, $W_x(0,t)$ is the approximation coefficient and $W_x(k,t)$ are the detail coefficients, given by:

$$W_x(0,t)=\langle x,\phi_1\rangle=\int x(\tau)\,\overline{\phi_1(\tau-t)}\,d\tau=F^{-1}\!\bigl(F(\omega)\,\overline{\hat{\phi}_1(\omega)}\bigr)$$
$$W_x(k,t)=\langle x,\psi_k\rangle=\int x(\tau)\,\overline{\psi_k(\tau-t)}\,d\tau=F^{-1}\!\bigl(F(\omega)\,\overline{\hat{\psi}_k(\omega)}\bigr)$$

wherein $\psi_k(t)$ and $\phi_1(t)$ represent the empirical wavelet function and the empirical scaling function, $\overline{(\cdot)}$ denotes complex conjugation, $F^{-1}(\cdot)$ represents the inverse Fourier transform, and $\hat{\psi}_k(\omega)$ and $\hat{\phi}_1(\omega)$ denote the Fourier transforms of $\psi_k(t)$ and $\phi_1(t)$;

the empirical modes $u_k(t)$ are defined as follows:

$$u_0(t)=W_x(0,t)*\phi_1(t),\qquad u_k(t)=W_x(k,t)*\psi_k(t),\quad k=1,\dots,K-1$$
4. the method for forecasting the runoff of the medium and long periods based on the empirical wavelet denoising and the neural network fusion according to any one of claims 1 to 3, wherein in the fourth step, the hidden layer of the RBF neural network uses a Gaussian transfer function as an activation function, and the Gaussian activation function is defined as:
wherein x = [ x ] 1 ,x 2 ,...,x n ] T An input vector of dimension n, c i =[c i1 ,c i2 ,...,c in ] T Is the center of the ith hidden layer neuron, q i The width of a Gaussian function is represented, and | is | · | | | is an Euclidean norm; the response of the jth node of the output layer is:
in the formula, w ij H and m respectively represent the number of nodes of the hidden layer and the output layer;
The extreme learning machine (ELM) is a single-hidden-layer feedforward neural network. Given K training samples {(x_k, y_k)}, k = 1, 2, …, K, where X = [x_1, x_2, …, x_K]^T ∈ R^M is the input vector and Y = [y_1, y_2, …, y_K]^T ∈ R^N is the output vector, the output of the j-th hidden-layer neuron for the k-th training sample is represented as g(β_j · x_k + a_j), where J denotes the number of hidden-layer neurons, g(·) denotes the activation function, and β_j and a_j respectively denote the connection weight and the bias between the input layer and the hidden layer; the output of the ELM model for the k-th training sample is represented as:
ŷ_k = Σ_{j=1}^{J} w_j g(β_j · x_k + a_j)
where w_j denotes the weight between the j-th hidden-layer node and the output layer;
the hidden-layer kernel mapping matrix is represented as:
D = [ g(β_j · x_k + a_j) ]_{K×J}, k = 1, …, K, j = 1, …, J
the output of the ELM network is represented as:
Ŷ = D w
where w = [w_1, w_2, …, w_J]^T;
the goal of the ELM model is to find the most appropriate w such that the error between the network output Ŷ and the measured values Y is minimized; the connection weights between the hidden layer and the output layer are obtained by solving the following optimization problem:
min_w ‖D w − Y‖²   (12)
the solution of equation (12) is obtained from the Moore–Penrose (MP) generalized inverse:
ŵ = D⁺ Y
where D⁺ is the generalized inverse of the matrix D; the output of the ELM model for the k-th training sample is then:
ŷ_k = Σ_{j=1}^{J} ŵ_j g(β_j · x_k + a_j)
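The ELM training procedure above (random hidden-layer parameters, output weights from the MP generalized inverse) can be sketched as follows. The toy data, sigmoid activation, and hidden-layer size are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): K=200 samples, M=3 inputs, N=1 output
X = rng.uniform(-1, 1, size=(200, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))

J = 50                                  # hidden-layer neurons
beta = rng.normal(size=(3, J))          # input-to-hidden weights beta_j (random, untrained)
a = rng.normal(size=J)                  # hidden biases a_j (random, untrained)

# Hidden-layer mapping matrix D (K x J) with sigmoid activation g(.)
D = 1.0 / (1.0 + np.exp(-(X @ beta + a)))

# Output weights from the Moore-Penrose generalized inverse: w = D^+ Y
w = np.linalg.pinv(D) @ Y

Y_hat = D @ w                           # network output
rmse = np.sqrt(np.mean((Y - Y_hat) ** 2))
```

The key design point is that only w is learned, and in closed form, which is what makes ELM training fast compared with gradient-based backpropagation.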
The Elman neural network is a dynamic recurrent neural network; in addition to the input layer, hidden layer, and output layer, it contains a special context (recurrent) layer that memorizes the outputs of the hidden-layer neurons at previous time steps and feeds them back as additional inputs to the hidden layer.
5. The medium- and long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion according to any one of claims 1 to 3, wherein in the fifth step, assuming the random variables x and y have a known joint probability density function p(x, y), then given an observed value X of x, the regression of y on X is expressed as:
Ŷ(X) = E[y | X] = ∫ y p(X, y) dy / ∫ p(X, y) dy
where E[y | X] is the predicted value of the output y given the input X; in practice the density function p(x, y) is unknown and is estimated from the samples by Parzen nonparametric estimation:
p̂(X, y) = (1 / (N (2π)^((d+1)/2) σ^(d+1))) Σ_{j=1}^{N} exp(−(X − X_j)^T (X − X_j) / (2σ²)) exp(−(y − Y_j)² / (2σ²))
where X_j and Y_j respectively denote the measured input value and the measured output value of the j-th sample, N is the number of samples, d is the dimension of x, and σ is the smoothing parameter;
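Substituting the Parzen estimate into the regression integral makes the y-integrals solvable in closed form, so the GRNN prediction reduces to a kernel-weighted average of the training outputs. A minimal sketch, with illustrative toy data and smoothing parameter:

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma):
    """GRNN prediction: kernel-weighted average of the training outputs.

    The Parzen-window weights exp(-||x - X_j||^2 / (2 sigma^2)) carry out
    the nonparametric estimate of E[y | X] described above.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to each X_j
    k = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
    return np.sum(k * Y_train) / np.sum(k)    # weighted average of the Y_j

# Toy data (illustrative): three 1-D samples on the line y = x
X_train = np.array([[0.0], [1.0], [2.0]])
Y_train = np.array([0.0, 1.0, 2.0])
y = grnn_predict(X_train, Y_train, np.array([1.0]), sigma=0.3)
```

Note that the only free parameter is the smoothing parameter σ, which is why GRNN training amounts to a one-dimensional search over σ.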
6. The medium- and long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion according to any one of claims 1 to 3, wherein in the sixth step, RMSE, MAE, MAPE, and R are calculated as follows:
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (Q_i − Q̂_i)² )
MAE = (1/n) Σ_{i=1}^{n} |Q_i − Q̂_i|
MAPE = (100%/n) Σ_{i=1}^{n} |(Q_i − Q̂_i) / Q_i|
R = Σ_{i=1}^{n} (Q_i − Q̄)(Q̂_i − Q̄̂) / sqrt( Σ_{i=1}^{n} (Q_i − Q̄)² · Σ_{i=1}^{n} (Q̂_i − Q̄̂)² )
where Q_i and Q̂_i denote the measured and forecast runoff values at time i, Q̄ and Q̄̂ denote their respective means, and n is the number of samples.
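The four accuracy metrics named in claim 6 have standard definitions and can be computed together; this sketch uses made-up observed/forecast values purely for illustration.

```python
import numpy as np

def evaluate(obs, pred):
    """Standard forecast-accuracy metrics: RMSE, MAE, MAPE (%), and Pearson R."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = obs - pred
    rmse = np.sqrt(np.mean(err ** 2))            # root mean square error
    mae = np.mean(np.abs(err))                   # mean absolute error
    mape = 100.0 * np.mean(np.abs(err / obs))    # mean absolute percentage error
    r = np.corrcoef(obs, pred)[0, 1]             # Pearson correlation coefficient
    return rmse, mae, mape, r

# Illustrative runoff values (not from the patent)
rmse, mae, mape, r = evaluate([100, 200, 300], [110, 190, 300])
```

MAPE assumes all observed values are nonzero, which holds for runoff series but would need guarding for general data.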
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811273575.5A CN109359404B (en) | 2018-10-30 | 2018-10-30 | Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109359404A CN109359404A (en) | 2019-02-19 |
CN109359404B true CN109359404B (en) | 2023-01-13 |
Family
ID=65347339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811273575.5A Active CN109359404B (en) | 2018-10-30 | 2018-10-30 | Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109359404B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110909943A (en) * | 2019-11-27 | 2020-03-24 | 淮阴工学院 | Multi-scale multi-factor joint-driven monthly runoff probability forecasting method |
CN111079069A (en) * | 2019-12-17 | 2020-04-28 | 华中科技大学 | Prediction difficulty calculation method and system based on error distribution |
CN111275253B (en) * | 2020-01-15 | 2022-09-20 | 中国地质大学(武汉) | Runoff probabilistic prediction method and system integrating deep learning and error correction |
CN111311026A (en) * | 2020-03-19 | 2020-06-19 | 中国地质大学(武汉) | Runoff nonlinear prediction method considering data characteristics, model and correction |
CN111783363B (en) * | 2020-07-15 | 2022-05-17 | 华东交通大学 | Ionized layer prediction method based on SSA and RBF neural network model |
CN112257960A (en) * | 2020-11-12 | 2021-01-22 | 国网湖南省电力有限公司 | Reservoir basin runoff decomposition prediction method and system |
CN113176092B (en) * | 2021-04-25 | 2022-08-02 | 江苏科技大学 | Motor bearing fault diagnosis method based on data fusion and improved experience wavelet transform |
CN116701949B (en) * | 2023-08-07 | 2023-10-20 | 苏州思萃融合基建技术研究所有限公司 | Training method of spatial point location monitoring model based on regional environment data |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104881563A (en) * | 2015-01-28 | 2015-09-02 | 柳州师范高等专科学校 | Chaotic characteristic analysis and non-linear prediction method of run-off |
CN107798431A (en) * | 2017-10-31 | 2018-03-13 | 河海大学 | A kind of Medium-and Long-Term Runoff Forecasting method based on Modified Elman Neural Network |
CN107885951A (en) * | 2017-11-27 | 2018-04-06 | 河海大学 | A kind of Time series hydrological forecasting method based on built-up pattern |
Non-Patent Citations (1)
Title |
---|
"长江上游径流混沌动力特性及其集成预测研究";周建中,彭甜;《长江科学院院报》;20181015;第35卷(第10期);全文 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109359404B (en) | Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion | |
Altan et al. | A new hybrid model for wind speed forecasting combining long short-term memory neural network, decomposition methods and grey wolf optimizer | |
Huang et al. | Landslide displacement prediction using discrete wavelet transform and extreme learning machine based on chaos theory | |
Kisi et al. | Investigation of empirical mode decomposition in forecasting of hydrological time series | |
Coulibaly et al. | Nonstationary hydrological time series forecasting using nonlinear dynamic methods | |
Nassif et al. | Distributed diffusion adaptation over graph signals | |
Partal et al. | Daily precipitation predictions using three different wavelet neural network algorithms by meteorological data | |
CN110377942B (en) | Multi-model space-time modeling method based on finite Gaussian mixture model | |
CN112445131A (en) | Self-adaptive optimal tracking control method for linear system | |
Altunkaynak et al. | Comparison of discrete and continuous wavelet–multilayer perceptron methods for daily precipitation prediction | |
Li et al. | Identification method of neuro‐fuzzy‐based Hammerstein model with coloured noise | |
Paul et al. | Wavelets based artificial neural network technique for forecasting agricultural prices | |
Schaub et al. | Signal processing on simplicial complexes | |
Cintra et al. | Tracking the model: Data assimilation by artificial neural network | |
Cai et al. | Wind power forecasting based on ensemble empirical mode decomposition with generalized regression neural network based on cross-validated method | |
Latifoğlu et al. | Importance of hybrid models for forecasting of hydrological variable | |
Antari et al. | Identification of quadratic systems using higher order cumulants and neural networks: Application to model the delay of video-packets transmission | |
AU2020103329A4 (en) | A based on recursive least squares online distributed multitask graph filter algorithm. | |
Malek et al. | Imputation of time series data via Kohonen self organizing maps in the presence of missing data | |
Jaber et al. | Future smart grids creation and dimensionality reduction with signal handling on smart grid using targeted projection | |
CN113740671A (en) | Fault arc identification method based on VMD and ELM | |
CN109217844B (en) | Hyper-parameter optimization method based on pre-training random Fourier feature kernel LMS | |
Scott et al. | Nonlinear system identification and prediction using orthogonal functions | |
Ding et al. | Chaotic feature analysis and forecasting of Liujiang River runoff | |
Zhang et al. | Daily runoff prediction during flood seasons based on the VMD–HHO–KELM model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2023-12-14
Address after: Floor 1, Building 2, Phase I, E-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province, 230000
Patentee after: Dragon Totem Technology (Hefei) Co., Ltd.
Address before: Jiangsu Higher Education Park, Huai'an, Jiangsu, 223005
Patentee before: HUAIYIN INSTITUTE OF TECHNOLOGY