CN109359404B - Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion - Google Patents


Info

Publication number: CN109359404B (granted publication of application CN201811273575.5A)
Authority: CN (China)
Prior art keywords: output, runoff, empirical, neural network, forecasting
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN109359404A (application publication, Chinese)
Inventors: Peng Tian (彭甜), Ni Wei (倪伟), Zhang Chu (张楚), Xia Xin (夏鑫), Ji Jie (纪捷), Xue Xiaoming (薛小明)
Original assignee: Huaiyin Institute of Technology (application filed by Huaiyin Institute of Technology)
Current assignee: Dragon Totem Technology Hefei Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 — Computer-aided design [CAD]
    • G06F 30/20 — Design optimisation, verification or simulation
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 10/00 — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A 10/40 — Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the field of hydrological prediction, and discloses a medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion.

Description

Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion
Technical Field
The invention relates to the field of hydrological forecasting in hydrology, in particular to a medium and long term runoff forecasting method based on empirical wavelet denoising and neural network fusion.
Background
Medium- and long-term runoff forecasting (with a forecast lead time of 3 days to 1 year) has been one of the central problems of hydrological research in recent decades, and improving its accuracy is of great significance for water resource allocation, flood control, disaster mitigation, and related tasks. Existing medium- and long-term runoff forecasting methods can be divided into two classes: physical models and empirical models. A physical model typically predicts future runoff by simulating the physical processes of the basin water cycle. Building a physical model requires meteorological data such as precipitation, evaporation, and temperature over a long horizon (the lead time), which increases the difficulty of medium- and long-term forecasting with physical models.
The empirical models used in medium- and long-term hydrological forecasting mainly comprise time-series models and machine-learning models. Previous research, however, shows that time-series models have limited ability to forecast runoff series with strong nonlinearity and non-stationarity. Among machine-learning methods, artificial neural networks (ANNs), including the back-propagation neural network (BPNN), radial basis function (RBF) neural network, generalized regression neural network (GRNN), Elman neural network, and extreme learning machine (ELM), have achieved good results in the field of medium- and long-term runoff forecasting, but the performance of a single ANN model still leaves considerable room for improvement. Preprocessing the runoff time series with a data-preprocessing technique is one effective way to improve medium- and long-term forecasting accuracy. Another is to fuse the forecast information of several ANN models, exploiting the strengths of each model to further improve forecasting accuracy.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides a medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion, which can effectively improve the forecasting precision of a medium-and-long-term runoff time sequence.
The technical scheme is as follows: the invention provides a medium- and long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion, comprising the following steps.
Step one: acquire historical measured runoff data of a hydrological station, establish a medium- and long-term runoff time series, and divide the sample data into a training period and a test period.
Step two: decompose the runoff time series into several independent empirical modes with the EWT, remove the highest-frequency mode, and linearly sum the remaining modes to obtain a reconstructed runoff time series, thereby eliminating redundant noise in the original series.
Step three: perform phase-space reconstruction on the EWT-processed runoff series, construct the phase-space matrix as input factors of the base forecasting models, and establish the input and output matrices of the ANN base forecasting models.
Step four: train the RBF, ELM, and Elman neural network models with the training-period input and output matrices determined in step three.
Step five: take the outputs of the three base forecasting models as predictors of the GRNN model, establish input and output matrices again, train the GRNN model, and substitute the test samples into the trained GRNN model to obtain predictions for the test period.
Step six: evaluate the predictions with four indices: root-mean-square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and correlation coefficient (R).
Further, in the second step, for the original runoff time series x(t), the decomposition process of the EWT is as follows:
(1) Compute the Fourier spectrum F(ω) of x(t) with the fast Fourier transform (FFT) algorithm;
(2) Adaptively partition the frequency domain [0, π] of F(ω) into K frequency bands of unequal bandwidth, [0, ω_1], [ω_1, ω_2], …, [ω_{K−1}, π], where each wavelet-filter boundary ω_n (n = 1, 2, …, K−1) is chosen as the midpoint between two consecutive local maxima of the spectrum;
(3) Construct empirical wavelets according to the Meyer wavelet, determining the empirical wavelet function \hat{\psi}_k(\omega) and the empirical scaling function \hat{\phi}_k(\omega):

$$\hat{\psi}_k(\omega)=\begin{cases}1, & \omega_k+\tau_k\le|\omega|\le\omega_{k+1}-\tau_{k+1}\\ \cos\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_{k+1}+\tau_{k+1}}{2\tau_{k+1}}\right)\right], & \omega_{k+1}-\tau_{k+1}\le|\omega|\le\omega_{k+1}+\tau_{k+1}\\ \sin\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_k+\tau_k}{2\tau_k}\right)\right], & \omega_k-\tau_k\le|\omega|\le\omega_k+\tau_k\\ 0, & \text{otherwise}\end{cases}\quad(1)$$

$$\hat{\phi}_k(\omega)=\begin{cases}1, & |\omega|\le\omega_k-\tau_k\\ \cos\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_k+\tau_k}{2\tau_k}\right)\right], & \omega_k-\tau_k\le|\omega|\le\omega_k+\tau_k\\ 0, & \text{otherwise}\end{cases}\quad(2)$$

where τ_k = γω_k with γ < min_k[(ω_{k+1} − ω_k)/(ω_{k+1} + ω_k)], and β(x) = x^4(35 − 84x + 70x^2 − 20x^3);
(4) Reconstruct the original runoff time series to obtain the different empirical modes.
Preferably, in (4), the original runoff time series is reconstructed by the following formula:

$$x(t)=W_x(0,t)*\phi_1(t)+\sum_{k=1}^{K-1}W_x(k,t)*\psi_k(t)\quad(4)$$

where * is the convolution operation, W_x(0,t) is the approximation coefficient, and W_x(k,t) are the detail coefficients, given by:

$$W_x(0,t)=\langle x,\phi_1\rangle=\int x(\tau)\,\overline{\phi_1(\tau-t)}\,d\tau=F^{-1}\big[\hat{x}(\omega)\,\overline{\hat{\phi}_1(\omega)}\big]\quad(5)$$

$$W_x(k,t)=\langle x,\psi_k\rangle=\int x(\tau)\,\overline{\psi_k(\tau-t)}\,d\tau=F^{-1}\big[\hat{x}(\omega)\,\overline{\hat{\psi}_k(\omega)}\big]\quad(6)$$

where ψ_k(t) and φ_1(t) denote the empirical wavelet function and the empirical scaling function respectively, \overline{\psi_k(\tau-t)} and \overline{\phi_1(\tau-t)} denote their complex conjugates, F^{-1}(·) denotes the inverse Fourier transform, and \hat{\psi}_k(\omega) and \hat{\phi}_1(\omega) denote the Fourier transforms of ψ_k(t) and φ_1(t); the empirical modes u_k(t) are defined as follows:

$$u_0(t)=W_x(0,t)*\phi_1(t),\qquad u_k(t)=W_x(k,t)*\psi_k(t),\quad k=1,\dots,K-1\quad(7)$$
preferably, the hidden layer of the RBF neural network uses a Gaussian transfer function asActivation function, gaussian activation function is defined as:
Figure BDA0001846493830000038
wherein x = [ x ] 1 ,x 2 ,...,x n ] T An input vector of dimension n, c i =[c i1 ,c i2 ,...,c in ] T Is the center of the ith hidden layer neuron, q i Represents the width of the gaussian function,. Is the euclidean norm; the response of the jth node of the output layer is:
Figure BDA0001846493830000039
in the formula, w ij H and m respectively represent the number of nodes of the hidden layer and the output layer; the extreme learning machine is a single hidden layer feedforward neural network, and K training samples { x is given k ,y k K =1,2, K, where X = [ X ] 1 ,x 2 ,...,x K ] T ∈R M For the input vector, Y = [ Y = 1 ,y 2 ,...,y K ] T ∈R N Is an output vector; the output of the jth hidden layer neuron for the kth training sample is represented as
Figure BDA00018464938300000310
Wherein J represents the number of hidden neurons, g (-) represents the activation function, β j And a j Respectively representing the desired connection weight and bias between the input layer and the hidden layer; the output of the k training sample of the ELM model can be expressed as:
Figure BDA00018464938300000311
in the formula, w j Representing weights between the hidden layer and the output layer; the hidden layer kernel mapping matrix is represented as follows:
Figure BDA00018464938300000312
the output of the ELM network is represented as:
Figure BDA00018464938300000313
(11) (ii) a Wherein w = [ w = 1 ,w 2 ,...,w J ] T (ii) a The goal of the ELM model is to find the most appropriate w, so that the network outputs values
Figure BDA00018464938300000314
And the error between the actual measurement value Y is minimum, and the association weight between the hidden layer and the output layer is obtained by solving the following optimization problem:
Figure BDA00018464938300000315
the solution to equation (12) is found from the Moore Penrose (MP) generalized inverse:
Figure BDA00018464938300000316
in the formula (I), the compound is shown in the specification,
Figure BDA00018464938300000317
is the generalized inverse of matrix D; the output of the k training sample of the ELM model is represented as:
Figure BDA00018464938300000318
the Elman neural network is a dynamic recurrent neural network. In addition to the input, hidden and output layers, the Elman neural network includes a special recursive layer that remembers the output information of the hidden neurons at previous times and then uses this information as input to the hidden layer.
Further, in the fifth step, assuming the random variables X, y and the joint probability density function p(X, y) are known, if the observed value of X is X, the regression of y with respect to X is represented as:

$$E[y|X]=\frac{\int_{-\infty}^{\infty}y\,p(X,y)\,dy}{\int_{-\infty}^{\infty}p(X,y)\,dy}\quad(15)$$

where E[y|X] is the predicted value of the output Y given the input X; the density function p(X, y) is unknown and is estimated by Parzen non-parametric estimation:

$$\hat{p}(X,y)=\frac{1}{N(2\pi)^{(d+1)/2}\sigma^{d+1}}\sum_{j=1}^{N}\exp\!\left(-\frac{D_j^2}{2\sigma^2}\right)\exp\!\left(-\frac{(y-Y_j)^2}{2\sigma^2}\right)\quad(16)$$

where D_j^2 = (X − X_j)^T(X − X_j), X_j and Y_j respectively represent the measured input and output values of the jth sample, N is the number of samples, d is the dimension of X, and σ is a smoothing parameter; according to equations (15) and (16), the output \hat{Y}(X) of the GRNN model can be expressed as follows:

$$\hat{Y}(X)=\frac{\sum_{j=1}^{N}Y_j\exp\!\left(-D_j^2/2\sigma^2\right)}{\sum_{j=1}^{N}\exp\!\left(-D_j^2/2\sigma^2\right)}\quad(17)$$
preferably, in the sixth step, the calculation formula of RMSE, MAE, MAPE and R is as follows:
Figure BDA0001846493830000046
Figure BDA0001846493830000047
Figure BDA0001846493830000048
in the formula, q p (i) Is a predicted value; q. q.s o (i) Is an actual measurement value;
Figure BDA0001846493830000049
and
Figure BDA00018464938300000410
respectively expressed as the mean values of the measured value and the predicted value; n is the number of sample sets.
Beneficial effects: compared with the prior art, the medium- and long-term runoff forecasting method established by the invention offers the following advantages:
1) By decomposing and reconstructing the runoff time series with the empirical wavelet transform, the method removes redundant noise from the natural runoff series, reduces the interference of nonlinear fluctuations with runoff forecasting, and improves the performance of the forecasting model;
2) Because the forecasting accuracy of a single ANN model rarely reaches the desired level and each model has its own strengths and weaknesses, the method takes the forecasts of three ANN models as input factors and uses a GRNN model to forecast the monthly runoff series a second time, integrating the advantages of the different ANN base models and fusing their forecast information to obtain higher-precision runoff forecasts.
Drawings
FIG. 1 is a flow chart of a medium-and-long-term runoff forecasting model based on empirical wavelet denoising and neural network fusion;
FIG. 2 is a graph of the EWT decomposition of the Wudongde station runoff time series and a comparison of the original and denoised runoff time series;
FIG. 3 is a graph comparing the monthly runoff predictions of the GNE and EWT-GNE models for the Wudongde station test period.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings.
Embodiment 1:
the method takes the Udongde hydrology standing-month runoff time sequence of the Yangtze river upstream watershed as an embodiment, and performs example simulation to verify the effect of the method. Fig. 1 is a flow chart of a method for forecasting medium and long term runoff based on empirical wavelet denoising and neural network fusion, which is implemented by the following steps:
the method comprises the following steps: historical actual measurement month flow data (660 sample data points) of Uedod hydrology station from 1 month to 12 months in 1958 are obtained, a month flow time series is established, the actual measurement value of the month flow in the first 525 months (from 1 month to 9 months in 1958) is used as a training sample, and the actual measurement value of the month flow in the later 135 months (from 10 months to 12 months in 2012 in 2001) is used as an inspection sample.
Step two: decompose the Wudongde monthly runoff time series into several independent empirical modes with the EWT, remove the highest-frequency mode, and linearly sum the remaining modes to obtain the reconstructed runoff time series. The EWT decomposition of the Wudongde station runoff series and the comparison of the original and denoised series are shown in fig. 2.
The main idea of the EWT is to adaptively partition the Fourier spectrum of the original runoff time series and to establish a group of appropriate wavelet filters that extract the amplitude-modulated and frequency-modulated (AM-FM) components of the series. For the original runoff time series x(t), the decomposition process of the EWT is as follows:
(1) Compute the Fourier spectrum F(ω) of the original runoff time series x(t) with the fast Fourier transform (FFT) algorithm.
(2) Adaptively partition the frequency domain [0, π] of F(ω) into K frequency bands of unequal bandwidth, [0, ω_1], [ω_1, ω_2], …, [ω_{K−1}, π], where each wavelet-filter boundary ω_n (n = 1, 2, …, K−1) is chosen as the midpoint between two consecutive local maxima of the spectrum.
(3) Construct empirical wavelets according to the Meyer wavelet, determining the empirical wavelet function \hat{\psi}_k(\omega) and the empirical scaling function \hat{\phi}_k(\omega):

$$\hat{\psi}_k(\omega)=\begin{cases}1, & \omega_k+\tau_k\le|\omega|\le\omega_{k+1}-\tau_{k+1}\\ \cos\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_{k+1}+\tau_{k+1}}{2\tau_{k+1}}\right)\right], & \omega_{k+1}-\tau_{k+1}\le|\omega|\le\omega_{k+1}+\tau_{k+1}\\ \sin\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_k+\tau_k}{2\tau_k}\right)\right], & \omega_k-\tau_k\le|\omega|\le\omega_k+\tau_k\\ 0, & \text{otherwise}\end{cases}\quad(1)$$

$$\hat{\phi}_k(\omega)=\begin{cases}1, & |\omega|\le\omega_k-\tau_k\\ \cos\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_k+\tau_k}{2\tau_k}\right)\right], & \omega_k-\tau_k\le|\omega|\le\omega_k+\tau_k\\ 0, & \text{otherwise}\end{cases}\quad(2)$$

where τ_k = γω_k with γ < min_k[(ω_{k+1} − ω_k)/(ω_{k+1} + ω_k)], and β(x) = x^4(35 − 84x + 70x^2 − 20x^3).
(4) Reconstruct the original runoff time series to obtain the different modes. The original runoff time series is reconstructed by the following formula:

$$x(t)=W_x(0,t)*\phi_1(t)+\sum_{k=1}^{K-1}W_x(k,t)*\psi_k(t)\quad(4)$$

where * is the convolution operation, W_x(0,t) is the approximation coefficient, and W_x(k,t) are the detail coefficients, given by:

$$W_x(0,t)=\langle x,\phi_1\rangle=\int x(\tau)\,\overline{\phi_1(\tau-t)}\,d\tau=F^{-1}\big[\hat{x}(\omega)\,\overline{\hat{\phi}_1(\omega)}\big]\quad(5)$$

$$W_x(k,t)=\langle x,\psi_k\rangle=\int x(\tau)\,\overline{\psi_k(\tau-t)}\,d\tau=F^{-1}\big[\hat{x}(\omega)\,\overline{\hat{\psi}_k(\omega)}\big]\quad(6)$$

where ψ_k(t) and φ_1(t) denote the empirical wavelet function and the empirical scaling function respectively, \overline{\psi_k(\tau-t)} and \overline{\phi_1(\tau-t)} denote their complex conjugates, F^{-1}(·) denotes the inverse Fourier transform, and \hat{\psi}_k(\omega) and \hat{\phi}_1(\omega) denote the Fourier transforms of ψ_k(t) and φ_1(t).
The empirical modes u_k(t) are defined as follows:

$$u_0(t)=W_x(0,t)*\phi_1(t),\qquad u_k(t)=W_x(k,t)*\psi_k(t),\quad k=1,\dots,K-1\quad(7)$$
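The decompose / drop-highest-mode / recombine workflow of step two can be sketched numerically. The following is a deliberately simplified stand-in for the EWT — ideal boxcar band-pass filters and a naive spectral peak search instead of the Meyer-type filter bank described above — so it only illustrates the overall idea, not the patent's exact transform:

```python
import numpy as np

def simple_ewt(x, n_bands):
    """Crude EWT-style decomposition: keep the n_bands largest spectral peaks,
    place band boundaries at midpoints between consecutive peaks, and band-pass
    with ideal (boxcar) filters rather than Meyer-type transition zones."""
    N = len(x)
    spec = np.fft.rfft(x)
    mag = np.abs(spec)
    # local maxima of the half-spectrum (endpoints excluded)
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]]
    peaks = sorted(sorted(peaks, key=lambda i: mag[i])[-n_bands:])
    bounds = [0] + [(a + b) // 2 for a, b in zip(peaks, peaks[1:])] + [len(mag)]
    modes = []
    for lo, hi in zip(bounds, bounds[1:]):
        band = np.zeros_like(spec)
        band[lo:hi] = spec[lo:hi]          # keep one frequency band
        modes.append(np.fft.irfft(band, n=N))
    return modes

t = np.arange(256)
low = np.sin(2 * np.pi * 5 * t / 256)          # "signal" component
high = 0.3 * np.sin(2 * np.pi * 60 * t / 256)  # "noise" component
modes = simple_ewt(low + high, n_bands=2)
denoised = sum(modes[:-1])                     # drop the highest-frequency mode
```

Because the boxcar bands partition the spectrum exactly, the modes sum back to the original series, and dropping the last mode removes the high-frequency component — the same denoising logic applied to the runoff series in step two.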
Step three: perform phase-space reconstruction on the denoised monthly runoff time series produced by the EWT, construct the phase-space matrix as input factors of the base forecasting models, and establish the input and output matrices of the ANN base forecasting models.
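The phase-space matrix of step three is a standard time-delay embedding. A sketch follows, where the embedding dimension `dim` and delay `tau` are illustrative choices (the patent does not fix them in this passage):

```python
import numpy as np

def phase_space_matrix(x, dim, tau=1):
    """Time-delay embedding: row t of X is [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]
    and the corresponding forecast target is y(t) = x(t + dim*tau)."""
    n = len(x) - dim * tau
    X = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    y = x[dim * tau : dim * tau + n]
    return X, y

# On a toy series 0..9 with dim=3, each row holds three consecutive values
# and the target is the next value.
X, y = phase_space_matrix(np.arange(10.0), dim=3)
```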
Step four: train the RBF, ELM, and Elman neural network models respectively with the training-period input and output matrices determined in step three.
The RBF neural network is a multi-layer feedforward neural network (FNN) with a three-layer structure: an input layer composed of the input variables, a hidden layer that applies a nonlinear transformation to them, and an output layer that produces the network's linear response. The hidden layer of the RBF neural network uses a Gaussian transfer function as the activation function, defined as:

$$\theta_i(x)=\exp\!\left(-\frac{\|x-c_i\|^2}{q_i^2}\right)\quad(8)$$

where x = [x_1, x_2, …, x_n]^T is the input vector of dimension n, c_i = [c_{i1}, c_{i2}, …, c_{in}]^T is the center of the ith hidden-layer neuron, q_i denotes the width of the Gaussian function, and ‖·‖ is the Euclidean norm. The response of the jth node of the output layer is:

$$y_j=\sum_{i=1}^{h}w_{ij}\theta_i(x),\quad j=1,\dots,m\quad(9)$$

where w_{ij} is the connection weight between the ith hidden-layer node and the jth output-layer node, and h and m respectively denote the numbers of nodes of the hidden layer and the output layer.
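A minimal numerical sketch of such an RBF network follows, with centers taken from the training inputs and a hand-picked width — both assumptions, since this passage does not prescribe how centers and widths are selected:

```python
import numpy as np

def gaussian_design(X, centers, width):
    # theta_i(x) = exp(-||x - c_i||^2 / q^2) with a shared width q
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / width**2)

def train_rbf(X, y, centers, width):
    # fit the output-layer weights w_ij by linear least squares
    G = gaussian_design(X, centers, width)
    w, *_ = np.linalg.lstsq(G, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    return gaussian_design(X, centers, width) @ w

# Toy regression: approximate one period of a sine on [0, 1].
X = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
centers, width = X[::4], 0.2        # illustrative choices
w = train_rbf(X, y, centers, width)
pred = predict_rbf(X, centers, width, w)
```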
The extreme learning machine is a single-hidden-layer feedforward neural network. Given K training samples {x_k, y_k}, k = 1, 2, …, K, where X = [x_1, x_2, …, x_K]^T ∈ R^M is the input vector and Y = [y_1, y_2, …, y_K]^T ∈ R^N is the output vector, the output of the jth hidden-layer neuron for the kth training sample can be expressed as

$$h_j(x_k)=g(\beta_j\cdot x_k+a_j)\quad(10)$$

where J denotes the number of hidden neurons, g(·) denotes the activation function, and β_j and a_j respectively denote the connection weights and bias between the input layer and the hidden layer. The output of the ELM model for the kth training sample can be expressed as:

$$\hat{y}_k=\sum_{j=1}^{J}w_j\,g(\beta_j\cdot x_k+a_j)$$

where w_j denotes the weights between the hidden layer and the output layer.
The hidden-layer mapping matrix can be represented as follows:

$$D=\begin{bmatrix}g(\beta_1\cdot x_1+a_1)&\cdots&g(\beta_J\cdot x_1+a_J)\\ \vdots&\ddots&\vdots\\ g(\beta_1\cdot x_K+a_1)&\cdots&g(\beta_J\cdot x_K+a_J)\end{bmatrix}$$

The output of the ELM network can be expressed as:

$$\hat{Y}=Dw\quad(11)$$

where w = [w_1, w_2, …, w_J]^T.
The goal of the ELM model is to find the most appropriate w, so that the error between the network output \hat{Y} and the measured values Y is minimized. The connection weights between the hidden layer and the output layer can be derived by solving the following optimization problem:

$$\min_{w}\|Dw-Y\|\quad(12)$$

The solution to equation (12) can be found from the Moore–Penrose (MP) generalized inverse:

$$\hat{w}=D^{\dagger}Y\quad(13)$$

where D^{\dagger} is the generalized inverse of matrix D.
The output of the ELM model on the training samples can then be expressed as:

$$\hat{Y}=DD^{\dagger}Y\quad(14)$$
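The ELM recipe above (random, fixed hidden layer; output weights from the pseudoinverse) is short enough to write out directly. The sigmoid activation, weight scale, and hidden-layer size below are illustrative assumptions, not values specified by the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(X, y, n_hidden, seed=0):
    """Random input weights B (the beta_j) and biases a stay fixed; only the
    output weights w are fitted, via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    B = rng.normal(scale=5.0, size=(X.shape[1], n_hidden))  # input weights
    a = rng.uniform(-5.0, 5.0, size=n_hidden)               # biases
    D = sigmoid(X @ B + a)          # hidden-layer mapping matrix
    w = np.linalg.pinv(D) @ y       # w = D^+ Y
    return B, a, w

def predict_elm(X, B, a, w):
    return sigmoid(X @ B + a) @ w

# Toy regression: with more hidden units than samples, the training fit
# from the pseudoinverse is (numerically) exact.
X = np.linspace(0.0, 1.0, 25)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
B, a, w = train_elm(X, y, n_hidden=60)
pred = predict_elm(X, B, a, w)
```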
the Elman neural network is a dynamic recurrent neural network. In addition to the input, hidden and output layers, the Elman neural network includes a special recursive layer that remembers the output information of the hidden neurons at previous times and then uses this information as input to the hidden layer.
Step five: take the outputs of the three ANN base forecasting models as the predictors of the GRNN model, establish an input matrix and an output matrix again, train the GRNN model, and substitute the test samples into the trained GRNN model to obtain the predicted values of the test period.
The GRNN neural network is a special form of the RBF neural network. Assuming the random variables X, y and the joint probability density function p(X, y) are known, if the observed value of X is X, then the regression of y with respect to X can be expressed as:

$$E[y|X]=\frac{\int_{-\infty}^{\infty}y\,p(X,y)\,dy}{\int_{-\infty}^{\infty}p(X,y)\,dy}\quad(15)$$

where E[y|X] is the predicted value of the output Y given the input X. The density function p(X, y) is usually unknown and can be estimated by Parzen non-parametric estimation:

$$\hat{p}(X,y)=\frac{1}{N(2\pi)^{(d+1)/2}\sigma^{d+1}}\sum_{j=1}^{N}\exp\!\left(-\frac{D_j^2}{2\sigma^2}\right)\exp\!\left(-\frac{(y-Y_j)^2}{2\sigma^2}\right)\quad(16)$$

where D_j^2 = (X − X_j)^T(X − X_j), X_j and Y_j respectively denote the measured input and output values of the jth sample, N is the number of samples, d is the dimension of X, and σ is a smoothing parameter. According to equations (15) and (16), the output \hat{Y}(X) of the GRNN model can be expressed as follows:

$$\hat{Y}(X)=\frac{\sum_{j=1}^{N}Y_j\exp\!\left(-D_j^2/2\sigma^2\right)}{\sum_{j=1}^{N}\exp\!\left(-D_j^2/2\sigma^2\right)}\quad(17)$$
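In code, the GRNN prediction is simply a Gaussian-kernel weighted average of the training targets, which also shows how the step-five fusion would be wired: each input row holds the three base-model forecasts (RBF, ELM, Elman) for one month. The numbers below are toy values, not the patent's data:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN output: weighted average of training outputs, with Gaussian
    kernel weights exp(-D_j^2 / (2 sigma^2)), D_j^2 = ||x - X_j||^2."""
    d2 = np.sum((X_query[:, None, :] - X_train[None, :, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2.0 * sigma**2))
    return (K @ y_train) / K.sum(axis=1)

# Toy fusion: columns = hypothetical RBF / ELM / Elman monthly forecasts,
# targets = observed runoff for those months.
X_train = np.array([[100.0, 110.0, 105.0],
                    [200.0, 190.0, 205.0],
                    [300.0, 310.0, 295.0]])
y_train = np.array([104.0, 198.0, 302.0])
pred = grnn_predict(X_train, y_train, np.array([[200.0, 190.0, 205.0]]), sigma=5.0)
```

With a small smoothing parameter, a query that coincides with a training input reproduces that sample's target almost exactly; larger σ blends neighbouring samples more.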
step six: and evaluating the prediction result by adopting four evaluation indexes including Root Mean Square Error (RMSE), mean Absolute Error (MAE), mean Absolute Percent Error (MAPE) and correlation coefficient (R).
The formulas for RMSE, MAE, MAPE, and R are as follows:

$$RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(q_o(i)-q_p(i)\big)^2}\quad(18)$$

$$MAE=\frac{1}{N}\sum_{i=1}^{N}\big|q_o(i)-q_p(i)\big|\quad(19)$$

$$MAPE=\frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{q_o(i)-q_p(i)}{q_o(i)}\right|\quad(20)$$

$$R=\frac{\sum_{i=1}^{N}\big(q_o(i)-\bar{q}_o\big)\big(q_p(i)-\bar{q}_p\big)}{\sqrt{\sum_{i=1}^{N}\big(q_o(i)-\bar{q}_o\big)^2\sum_{i=1}^{N}\big(q_p(i)-\bar{q}_p\big)^2}}\quad(21)$$

where q_p(i) is the predicted value, q_o(i) is the measured value, \bar{q}_o and \bar{q}_p respectively denote the means of the measured and predicted values, and N is the number of samples.
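The four indices translate directly into code; the example values below are small enough to check by hand:

```python
import numpy as np

def rmse(q_o, q_p):
    return float(np.sqrt(np.mean((q_o - q_p) ** 2)))

def mae(q_o, q_p):
    return float(np.mean(np.abs(q_o - q_p)))

def mape(q_o, q_p):
    # expressed as a percentage of the measured values
    return float(100.0 * np.mean(np.abs((q_o - q_p) / q_o)))

def corr(q_o, q_p):
    do, dp = q_o - q_o.mean(), q_p - q_p.mean()
    return float(np.sum(do * dp) / np.sqrt(np.sum(do**2) * np.sum(dp**2)))

q_o = np.array([1.0, 2.0, 3.0, 4.0])   # measured
q_p = np.array([1.0, 2.0, 3.0, 5.0])   # predicted (one miss of +1)
```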
The monthly runoff time series of the Wudongde hydrological station is forecast with the model based on empirical wavelet denoising and neural network fusion (EWT-GNE) and compared with three single ANN models (RBF, ELM, and Elman) and the GRNN-based neural network fusion forecasting model (GNE). The forecast errors of the training and test periods are given in Table 1, and a comparison of the GNE and EWT-GNE forecasts for the Wudongde test period is shown in fig. 3.
TABLE 1 Error statistics of the Wudongde station monthly runoff forecasting results
[Table image not reproduced in this text.]
As can be seen from table 1, the RMSE, MAE, and MAPE of the neural network fusion GNE model are smaller than those of the three single ANN prediction models, and R is larger than those of the three ANN models, which indicates that the neural network fusion model can integrate the advantages of the three single ANN models and improve the performance of the single ANN prediction model. The forecasting effect of the EWT-GNE model is superior to that of the GNE model, which shows that the EWT can effectively remove redundant noise in the monthly runoff time sequence and remarkably improve the precision of the GNE forecasting model. As can be seen from fig. 3, compared to the GNE model, the predicted flow rate of the EWT-GNE model is closer to the measured flow rate value, further proving the superiority of the EWT-GNE model.
The above embodiments are merely illustrative of the technical concepts and features of the present invention, and the purpose of the embodiments is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered in the protection scope of the present invention.

Claims (6)

1. A method for forecasting medium and long term runoff based on empirical wavelet denoising and neural network fusion is characterized by comprising the following steps:
the method comprises the following steps: acquiring historical measured runoff data of a hydrological station, establishing a medium- and long-term runoff time series, and dividing the sample data into a training period and a test period;
step two: decomposing the runoff time sequence into a plurality of independent empirical modes by adopting EWT, removing the mode with the highest frequency in the decomposed empirical modes, and carrying out linear summation on the rest empirical modes to obtain a reconstructed runoff time sequence so as to eliminate redundant noise of the original runoff time sequence;
step three: performing phase space reconstruction on the runoff time sequence processed by the EWT, constructing a phase space matrix as an input factor of a base forecasting model, and establishing an input matrix and an output matrix of the ANN base forecasting model;
step four: training the RBF, ELM and Elman neural network models respectively by adopting the input matrix and the output matrix of the training period determined in the third step;
step five: taking the outputs of the three basis forecasting models as forecasting factors of the GRNN model, establishing an input matrix and an output matrix again, training the GRNN model, and substituting the test samples into the trained GRNN model to obtain a predicted value of a test period;
step six: and evaluating the prediction result by adopting four evaluation indexes including a root mean square error RMSE, an average absolute error MAE, an average absolute percentage error MAPE and a correlation coefficient R.
2. The method for forecasting the runoff in the medium and long term based on the empirical wavelet denoising and the neural network fusion of claim 1, wherein in the second step, for the time series x (t) of the original runoff, the decomposition process of the EWT is as follows:
(1) Calculating a Fourier spectrum F (omega) of the original runoff time sequence x (t) according to a Fast Fourier Transform (FFT) algorithm;
(2) The frequency domain [0, π] of the Fourier spectrum F(ω) is adaptively partitioned into K frequency bands of unequal bandwidth, [0, ω_1], [ω_1, ω_2], …, [ω_{K−1}, π], wherein each wavelet-filter boundary ω_n (n = 1, 2, …, K−1) is chosen as the midpoint between two consecutive local maxima;
(3) An empirical wavelet is constructed according to the Meyer wavelet, determining the empirical wavelet function \hat{\psi}_k(\omega) and the empirical scaling function \hat{\phi}_k(\omega):

$$\hat{\psi}_k(\omega)=\begin{cases}1, & \omega_k+\tau_k\le|\omega|\le\omega_{k+1}-\tau_{k+1}\\ \cos\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_{k+1}+\tau_{k+1}}{2\tau_{k+1}}\right)\right], & \omega_{k+1}-\tau_{k+1}\le|\omega|\le\omega_{k+1}+\tau_{k+1}\\ \sin\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_k+\tau_k}{2\tau_k}\right)\right], & \omega_k-\tau_k\le|\omega|\le\omega_k+\tau_k\\ 0, & \text{otherwise}\end{cases}\quad(1)$$

$$\hat{\phi}_k(\omega)=\begin{cases}1, & |\omega|\le\omega_k-\tau_k\\ \cos\!\left[\frac{\pi}{2}\beta\!\left(\frac{|\omega|-\omega_k+\tau_k}{2\tau_k}\right)\right], & \omega_k-\tau_k\le|\omega|\le\omega_k+\tau_k\\ 0, & \text{otherwise}\end{cases}\quad(2)$$

wherein τ_k = γω_k with γ < min_k[(ω_{k+1} − ω_k)/(ω_{k+1} + ω_k)], and β(x) = x^4(35 − 84x + 70x^2 − 20x^3);
(4) The original runoff time series is reconstructed to obtain the different empirical modes.
3. The method for forecasting the medium-and-long-term runoff based on the empirical wavelet denoising and the neural network fusion as claimed in claim 2, wherein in the step (4), the original runoff time series is reconstructed by the following formula:
$$x(t)=W_x(0,t)*\phi_1(t)+\sum_{k=1}^{K-1}W_x(k,t)*\psi_k(t)\quad(4)$$

wherein * is the convolution operation, W_x(0,t) is the approximation coefficient, and W_x(k,t) are the detail coefficients, given by:

$$W_x(0,t)=\langle x,\phi_1\rangle=\int x(\tau)\,\overline{\phi_1(\tau-t)}\,d\tau=F^{-1}\big[\hat{x}(\omega)\,\overline{\hat{\phi}_1(\omega)}\big]\quad(5)$$

$$W_x(k,t)=\langle x,\psi_k\rangle=\int x(\tau)\,\overline{\psi_k(\tau-t)}\,d\tau=F^{-1}\big[\hat{x}(\omega)\,\overline{\hat{\psi}_k(\omega)}\big]\quad(6)$$

in the formulas, ψ_k(t) and φ_1(t) respectively denote the empirical wavelet function and the empirical scaling function, \overline{\psi_k(\tau-t)} and \overline{\phi_1(\tau-t)} denote their complex conjugates, F^{-1}(·) denotes the inverse Fourier transform, and \hat{\psi}_k(\omega) and \hat{\phi}_1(\omega) denote the Fourier transforms of ψ_k(t) and φ_1(t);
the empirical modes u_k(t) are defined as follows:

$$u_0(t)=W_x(0,t)*\phi_1(t),\qquad u_k(t)=W_x(k,t)*\psi_k(t),\quad k=1,\dots,K-1\quad(7)$$
4. the method for forecasting the runoff of the medium and long periods based on the empirical wavelet denoising and the neural network fusion according to any one of claims 1 to 3, wherein in the fourth step, the hidden layer of the RBF neural network uses a Gaussian transfer function as an activation function, and the Gaussian activation function is defined as:
Figure FDA0003918771550000032
wherein x = [ x ] 1 ,x 2 ,...,x n ] T An input vector of dimension n, c i =[c i1 ,c i2 ,...,c in ] T Is the center of the ith hidden layer neuron, q i The width of a Gaussian function is represented, and | is | · | | | is an Euclidean norm; the response of the jth node of the output layer is:
Figure FDA0003918771550000033
in the formula, w ij H and m respectively represent the number of nodes of the hidden layer and the output layer;
The extreme learning machine (ELM) is a single-hidden-layer feedforward neural network. Given $K$ training samples $\{x_k, y_k\}$, $k = 1, 2, \ldots, K$, where $X = [x_1, x_2, \ldots, x_K]^T \in R^{K \times M}$ is the input matrix and $Y = [y_1, y_2, \ldots, y_K]^T \in R^{K \times N}$ is the output matrix, the output of the $j$-th hidden-layer neuron for the $k$-th training sample is represented as:

$$h_j(x_k) = g(a_j \cdot x_k + b_j)$$

wherein $J$ represents the number of hidden-layer neurons, $g(\cdot)$ represents the activation function, and $a_j$ and $b_j$ respectively represent the randomly assigned connection weights and bias between the input layer and the hidden layer; the output of the ELM model for the $k$-th training sample is represented as:

$$\hat{y}_k = \sum_{j=1}^{J} w_j\, g(a_j \cdot x_k + b_j)$$

in the formula, $w_j$ represents the weight between the hidden layer and the output layer;
the hidden-layer kernel mapping matrix is represented as follows:

$$D = \begin{bmatrix} g(a_1 \cdot x_1 + b_1) & \cdots & g(a_J \cdot x_1 + b_J) \\ \vdots & \ddots & \vdots \\ g(a_1 \cdot x_K + b_1) & \cdots & g(a_J \cdot x_K + b_J) \end{bmatrix}_{K \times J}$$

the output of the ELM network is represented as:

$$\hat{Y} = D w$$

wherein $w = [w_1, w_2, \ldots, w_J]^T$;
The goal of the ELM model is to find the most appropriate $w$ such that the error between the network output $\hat{Y}$ and the measured values $Y$ is minimal; the connection weights between the hidden layer and the output layer are obtained by solving the following optimization problem:

$$\min_{w} \|D w - Y\| \qquad (12)$$

the solution of equation (12) is obtained from the Moore-Penrose (MP) generalized inverse:

$$\hat{w} = D^{\dagger} Y \qquad (13)$$

in the formula, $D^{\dagger}$ is the generalized inverse of matrix $D$; the output of the ELM model for the $k$-th training sample is then represented as:

$$\hat{y}_k = \sum_{j=1}^{J} \hat{w}_j\, g(a_j \cdot x_k + b_j)$$
The Elman neural network is a dynamic recurrent neural network; in addition to the input layer, hidden layer and output layer, it contains a special context (recurrent) layer, which memorizes the outputs of the hidden-layer neurons at previous time steps and then feeds this information back as input to the hidden layer.
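The context-layer feedback described here amounts to the recurrence $h_t = g(W_x x_t + W_c h_{t-1} + b)$; a sketch of one forward pass follows (the weight shapes and the tanh activation are illustrative assumptions, not taken from the patent):

```python
import numpy as np

# Hedged sketch of an Elman forward pass; weight shapes and the tanh
# activation are illustrative assumptions.

def elman_forward(xs, Wx, Wc, Wo, b, bo):
    """h_t = tanh(Wx x_t + Wc h_{t-1} + b): the context layer stores the
    previous hidden state and feeds it back into the hidden layer."""
    h = np.zeros(Wc.shape[0])        # context layer starts at zero
    outputs = []
    for x in xs:
        h = np.tanh(Wx @ x + Wc @ h + b)
        outputs.append(Wo @ h + bo)  # linear output layer
    return np.array(outputs)
```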
5. The medium- and long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion according to any one of claims 1 to 3, wherein in step five, assuming the random variables $x$, $y$ and their joint probability density function $p(x, y)$ are known, if the observed value of $x$ is $X$, then the regression of $y$ on $X$ is expressed as:

$$E[y \mid X] = \frac{\int_{-\infty}^{\infty} y\, p(X, y)\, dy}{\int_{-\infty}^{\infty} p(X, y)\, dy} \qquad (15)$$

where $E[y \mid X]$ is the predicted value of the output $y$ given the input $X$; in practice the density function $p(x, y)$ is unknown and is estimated by Parzen nonparametric estimation:

$$\hat{p}(X, Y) = \frac{1}{N (2\pi)^{(d+1)/2} \sigma^{d+1}} \sum_{j=1}^{N} \exp\left(-\frac{D_j^2}{2\sigma^2}\right) \exp\left(-\frac{(Y - Y_j)^2}{2\sigma^2}\right) \qquad (16)$$

in the formula, $D_j^2 = (X - X_j)^T (X - X_j)$; $X_j$ and $Y_j$ respectively represent the measured input value and measured output value of the $j$-th sample, $N$ is the number of samples, $d$ is the dimension of $x$, and $\sigma$ is the smoothing parameter;

according to equations (15) and (16), the output $\hat{y}(X)$ of the GRNN model is represented as follows:

$$\hat{y}(X) = \frac{\sum_{j=1}^{N} Y_j \exp\left(-\frac{D_j^2}{2\sigma^2}\right)}{\sum_{j=1}^{N} \exp\left(-\frac{D_j^2}{2\sigma^2}\right)} \qquad (17)$$
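The GRNN output above is simply a Gaussian-kernel weighted average of the stored training targets, which makes prediction a few lines of code. A sketch under the same notation (illustrative; no training beyond storing the samples is needed):

```python
import numpy as np

# Sketch of GRNN prediction as a Gaussian-kernel weighted average of the
# stored training targets; sigma is the smoothing parameter.

def grnn_predict(x, X_train, Y_train, sigma):
    d2 = np.sum((X_train - x) ** 2, axis=1)   # D_j^2 = (x - X_j)^T (x - X_j)
    k = np.exp(-d2 / (2.0 * sigma ** 2))      # pattern-layer activations
    return np.sum(k * Y_train) / np.sum(k)    # summation / division layers
```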
6. The medium- and long-term runoff forecasting method according to any one of claims 1 to 3, wherein in step six, the calculation formulas of RMSE, MAE, MAPE and R are as follows:

$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(q_o(i) - q_p(i)\right)^2}$$

$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left|q_o(i) - q_p(i)\right|$$

$$MAPE = \frac{100\%}{n} \sum_{i=1}^{n} \left|\frac{q_o(i) - q_p(i)}{q_o(i)}\right|$$

$$R = \frac{\sum_{i=1}^{n} \left(q_o(i) - \bar{q}_o\right)\left(q_p(i) - \bar{q}_p\right)}{\sqrt{\sum_{i=1}^{n} \left(q_o(i) - \bar{q}_o\right)^2 \sum_{i=1}^{n} \left(q_p(i) - \bar{q}_p\right)^2}}$$

in the formula, $q_p(i)$ is the predicted value; $q_o(i)$ is the measured value; $\bar{q}_o$ and $\bar{q}_p$ respectively represent the means of the measured values and the predicted values; $n$ is the number of samples.
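These four criteria can be computed directly with numpy; a small sketch (MAPE is returned in percent, and R is the Pearson correlation coefficient):

```python
import numpy as np

# Direct sketch of the four evaluation criteria.

def evaluate(q_o, q_p):
    """Return RMSE, MAE, MAPE (in percent) and the correlation coefficient R
    for measured series q_o and predicted series q_p."""
    err = q_o - q_p
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / q_o))   # assumes q_o has no zeros
    r = np.corrcoef(q_o, q_p)[0, 1]
    return rmse, mae, mape, r
```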
CN201811273575.5A 2018-10-30 2018-10-30 Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion Active CN109359404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811273575.5A CN109359404B (en) 2018-10-30 2018-10-30 Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion


Publications (2)

Publication Number Publication Date
CN109359404A CN109359404A (en) 2019-02-19
CN109359404B true CN109359404B (en) 2023-01-13

Family

ID=65347339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811273575.5A Active CN109359404B (en) 2018-10-30 2018-10-30 Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion

Country Status (1)

Country Link
CN (1) CN109359404B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909943A (en) * 2019-11-27 2020-03-24 淮阴工学院 Multi-scale multi-factor joint-driven monthly runoff probability forecasting method
CN111079069A (en) * 2019-12-17 2020-04-28 华中科技大学 Prediction difficulty calculation method and system based on error distribution
CN111275253B (en) * 2020-01-15 2022-09-20 中国地质大学(武汉) Runoff probabilistic prediction method and system integrating deep learning and error correction
CN111311026A (en) * 2020-03-19 2020-06-19 中国地质大学(武汉) Runoff nonlinear prediction method considering data characteristics, model and correction
CN111783363B (en) * 2020-07-15 2022-05-17 华东交通大学 Ionized layer prediction method based on SSA and RBF neural network model
CN112257960A (en) * 2020-11-12 2021-01-22 国网湖南省电力有限公司 Reservoir basin runoff decomposition prediction method and system
CN113176092B (en) * 2021-04-25 2022-08-02 江苏科技大学 Motor bearing fault diagnosis method based on data fusion and improved experience wavelet transform
CN116701949B (en) * 2023-08-07 2023-10-20 苏州思萃融合基建技术研究所有限公司 Training method of spatial point location monitoring model based on regional environment data

Citations (3)

Publication number Priority date Publication date Assignee Title
CN104881563A (en) * 2015-01-28 2015-09-02 柳州师范高等专科学校 Chaotic characteristic analysis and non-linear prediction method of run-off
CN107798431A (en) * 2017-10-31 2018-03-13 河海大学 A kind of Medium-and Long-Term Runoff Forecasting method based on Modified Elman Neural Network
CN107885951A (en) * 2017-11-27 2018-04-06 河海大学 A kind of Time series hydrological forecasting method based on built-up pattern


Non-Patent Citations (1)

Title
"Study on Chaotic Dynamic Characteristics and Integrated Prediction of Runoff in the Upper Yangtze River"; Zhou Jianzhong, Peng Tian; Journal of Yangtze River Scientific Research Institute; 20181015; Vol. 35, No. 10; full text *


Similar Documents

Publication Publication Date Title
CN109359404B (en) Medium-and-long-term runoff forecasting method based on empirical wavelet denoising and neural network fusion
Altan et al. A new hybrid model for wind speed forecasting combining long short-term memory neural network, decomposition methods and grey wolf optimizer
Huang et al. Landslide displacement prediction using discrete wavelet transform and extreme learning machine based on chaos theory
Kisi et al. Investigation of empirical mode decomposition in forecasting of hydrological time series
Coulibaly et al. Nonstationary hydrological time series forecasting using nonlinear dynamic methods
Nassif et al. Distributed diffusion adaptation over graph signals
Partal et al. Daily precipitation predictions using three different wavelet neural network algorithms by meteorological data
CN110377942B (en) Multi-model space-time modeling method based on finite Gaussian mixture model
CN112445131A (en) Self-adaptive optimal tracking control method for linear system
Altunkaynak et al. Comparison of discrete and continuous wavelet–multilayer perceptron methods for daily precipitation prediction
Li et al. Identification method of neuro‐fuzzy‐based Hammerstein model with coloured noise
Paul et al. Wavelets based artificial neural network technique for forecasting agricultural prices
Schaub et al. Signal processing on simplicial complexes
Cintra et al. Tracking the model: Data assimilation by artificial neural network
Cai et al. Wind power forecasting based on ensemble empirical mode decomposition with generalized regression neural network based on cross-validated method
Latifoğlu et al. Importance of hybrid models for forecasting of hydrological variable
Antari et al. Identification of quadratic systems using higher order cumulants and neural networks: Application to model the delay of video-packets transmission
AU2020103329A4 (en) A based on recursive least squares online distributed multitask graph filter algorithm.
Malek et al. Imputation of time series data via Kohonen self organizing maps in the presence of missing data
Jaber et al. Future smart grids creation and dimensionality reduction with signal handling on smart grid using targeted projection
CN113740671A (en) Fault arc identification method based on VMD and ELM
CN109217844B (en) Hyper-parameter optimization method based on pre-training random Fourier feature kernel LMS
Scott et al. Nonlinear system identification and prediction using orthogonal functions
Ding et al. Chaotic feature analysis and forecasting of Liujiang River runoff
Zhang et al. Daily runoff prediction during flood seasons based on the VMD–HHO–KELM model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231214

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 223005 Huaian 1 Jiangsu Higher Education Park

Patentee before: HUAIYIN INSTITUTE OF TECHNOLOGY