CN115169439A - Method and system for predicting effective wave height based on sequence-to-sequence network - Google Patents

Method and system for predicting effective wave height based on sequence-to-sequence network

Info

Publication number
CN115169439A
CN115169439A
Authority
CN
China
Prior art keywords
sequence
data set
wave data
wave
wave height
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210678407.4A
Other languages
Chinese (zh)
Other versions
CN115169439B (en)
Inventor
谭家明
朱俊星
李小勇
任小丽
邓科峰
汪祥
赵娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202210678407.4A
Publication of CN115169439A
Application granted
Publication of CN115169439B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an effective wave height prediction method and system based on a sequence-to-sequence network. An original wave data set is obtained, and lag variables are added to the original wave data set to obtain a first wave data set; feature selection is performed on the first wave data set by using a random forest algorithm to obtain a second wave data set; a sequence-to-sequence model with a memory-layer-based attention mechanism is constructed, and effective wave height prediction training is performed on the second wave data set by using the sequence-to-sequence model to obtain a trained sequence-to-sequence model; effective wave height prediction is then performed on the data to be predicted through the trained sequence-to-sequence model to obtain the final effective wave height prediction value. The method can improve the prediction precision of the effective wave height and has a better prediction effect on abnormal values.

Description

Method and system for predicting effective wave height based on sequence-to-sequence network
Technical Field
The invention relates to the technical field of sea wave parameter calculation, in particular to an effective wave height prediction method and system based on a sequence-to-sequence network.
Background
Extreme waves threaten the life safety of human beings, destroy coastal engineering and have great influence on the production activities of people. Accurate and rapid effective wave height prediction has important significance in the fields of air route planning, port construction, wave power generation, military activities and the like. However, the ocean is a complex chaotic system, and due to uncertainty of the ocean environment and complex interaction between meteorological elements, it is difficult to accurately predict the effective wave height.
The most widely used methods for predicting the effective wave height are physics-based numerical methods. These methods describe waves using mathematical models based on changes in physical conditions and processes, and are often complex and difficult to solve. In addition, such methods struggle to mine the deep latent features contained in ocean data, which are valuable for effective wave height prediction, and to apply these features to the wave prediction process. With the rise of artificial intelligence, effective wave height prediction methods based on machine learning have developed rapidly; they can mine valuable latent features from ocean data and achieve a good prediction effect. As effective wave height prediction receives increasing attention, the prediction effect has steadily improved, but the following challenges remain to be solved:
(1) Existing methods rarely assess in detail the potentially valuable effective wave height prediction features in buoy data;
(2) The prediction accuracy needs to be further improved;
(3) Most methods do not have an ideal prediction effect on abnormal values.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an effective wave height prediction method and system based on a sequence-to-sequence network, which can improve the effective wave height prediction precision and have better prediction effect on abnormal values.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for predicting a significant wave height based on a sequence-to-sequence network, including:
acquiring an original wave data set, and adding a lag variable into the original wave data set to acquire a first wave data set;
performing feature selection on the first wave data set by using a random forest algorithm to obtain a second wave data set;
constructing a sequence-to-sequence model with a memory layer-based attention mechanism, and performing effective wave height prediction training on the second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model;
and performing effective wave height prediction on data to be predicted through the trained sequence-to-sequence model to obtain a final effective wave height prediction value.
Compared with the prior art, the first aspect of the invention has the following beneficial effects:
the method comprises the steps of obtaining an original wave data set, and adding a lag variable into the original wave data set to obtain a first wave data set; selecting the characteristics of the first wave data set by using a random forest algorithm to obtain a second wave data set; constructing a sequence-to-sequence model with an attention mechanism based on a memory layer, and performing effective wave height prediction training on the second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model; and performing effective wave height prediction on data to be predicted through the trained sequence-to-sequence model to obtain a final effective wave height prediction value. The method can enhance the characteristics of the original wave data set by adding a lag variable into the original wave data set; the importance of each feature in the first wave data set relative to the effective wave height is sequenced through a random forest algorithm, so that feature selection is carried out on the first wave data set, irrelevant or redundant features are eliminated, gains are brought to prediction, and the accuracy of a prediction model is improved; the method also comprises the steps of constructing a sequence-to-sequence model with an attention mechanism based on a memory layer, and performing effective wave height prediction training on the second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model; the method comprises the steps of carrying out effective wave height prediction on data to be predicted through a trained sequence-to-sequence model to obtain a final effective wave height prediction value, constructing a sequence-to-sequence network with an attention mechanism based on a memory layer to reduce the occurrence of forgetting problems in long sequence prediction, capturing sudden changes of effective wave heights better, and further improving the accuracy of the prediction model, so that the effective wave height prediction precision is improved, and a better prediction effect is achieved on abnormal values.
According to some embodiments of the invention, the obtaining the raw wave data set, adding a lag variable to the raw wave data set, obtaining a first wave data set, comprises:
and adding historical effective wave height data of the current wave data in the original wave data set into the original wave data set as a lag variable to obtain a first wave data set.
According to some embodiments of the invention, the selecting features of the first wave data set using a random forest algorithm to obtain a second wave data set comprises:
obtaining an effective wave height observation value of the first wave data set, initializing a historical correlation coefficient to zero, initializing the second wave data set as an empty data set, initializing a patience value to zero, and presetting a maximum patience value;
sorting the features in the first wave data set from high to low according to feature importance by using a random forest algorithm to obtain a sorted first wave data set;
adding the features in the sorted first wave data set to the second wave data set from high to low according to feature importance, and calculating a predicted value of the effective wave height through a neural network each time a feature is added to the second wave data set;
calculating a current correlation coefficient between the predicted value of the effective wave height and the observed value of the effective wave height, and comparing the current correlation coefficient with the historical correlation coefficient:
if the current correlation coefficient is greater than the historical correlation coefficient, adding the feature in the first wave data set to the second wave data set;
if the current correlation coefficient is less than or equal to the historical correlation coefficient, increasing the patience value by one and removing the feature in the first wave data set corresponding to the current correlation coefficient;
and finishing the judgment after all the data in the first wave data set have been judged or the patience value exceeds the maximum patience value, obtaining the second wave data set after feature selection.
According to some embodiments of the invention, the constructing a sequence-to-sequence model with a memory-layer based attention mechanism comprises:
adopting a gated recurrent unit network as an encoder and a decoder, and mining the relationship between preceding and subsequent values in a time sequence based on the encoder and the decoder;
adding a memory layer in an attention mechanism, and constructing a sequence-to-sequence model according to the encoder, the decoder and the attention mechanism.
According to some embodiments of the invention, the encoder is built by the following formula:
Input: X = (x_1, x_2, ..., x_t)
z_t = σ(W_z · [h_{t-1}, x_t])
r_t = σ(W_r · [h_{t-1}, x_t])
h̃_t = tanh(W · [r_t * h_{t-1}, x_t])
Output: h_t = (1 − z_t) * h_{t-1} + z_t * h̃_t
where X represents the current input data of each cell in the gated recurrent unit network, x_t represents the data at the current time, h_{t-1} represents the hidden state passed from the previous time t−1, z_t represents the update gate, r_t represents the reset gate, h̃_t represents the intermediate memory at the current time, which combines the memory of the previous time with the input of the current time, W_z represents the weight matrix of the update gate z_t, W_r represents the weight matrix of the reset gate r_t, W represents the weight matrix of the intermediate memory h̃_t, h_t represents the hidden state at the current time, Input represents the input of the gated recurrent unit network, and Output represents the output of the gated recurrent unit network.
According to some embodiments of the invention, the decoder is built by the following formula:
Input: X = (c_1, c_2, ..., c_k)
s_0 = h_t
z_k = σ(W_z · [s_{k-1}, c_k])
r_k = σ(W_r · [s_{k-1}, c_k])
s̃_k = tanh(W · [r_k * s_{k-1}, c_k])
s_k = (1 − z_k) * s_{k-1} + z_k * s̃_k
Output: y_k = s_k
where X denotes the input of the decoder, which is obtained through the attention mechanism, s_0 = h_t denotes that the last hidden state h_t of the encoder is used as the initial input s_0 of the hidden layer of the decoder, z_k represents the update gate, r_k represents the reset gate, s̃_k represents the intermediate memory at the current time, which combines the memory of the previous time with the input of the current time, W_z represents the weight matrix of the update gate z_k, W_r represents the weight matrix of the reset gate r_k, W represents the weight matrix of the intermediate memory s̃_k, s_k represents the hidden state at the current time, and y_k = s_k denotes that the hidden state s_k at the current time is used as the output y_k of the gated recurrent unit network.
According to some embodiments of the invention, the attention mechanism is established by the following formula:
score(h_i, s_j) = v^T · tanh(W[h_i; s_j] + b)
ω_{ij} = Softmax(score(h_i, s_j))
h̄_j = Σ_i ω_{ij} h_i
c_{j+1} = Linear([h̄_j; s_j])
where h_i represents the hidden state of the encoder at time i, s_j represents the hidden state of the decoder at time j, the corresponding output score is obtained from h_i and s_j, v^T and b represent bias parameters in the neural network, ω_{ij} represents the weight between the encoder and decoder hidden states, computed by normalizing the scores with the softmax function, h̄_j represents the weighted average of the encoder outputs, and c_{j+1} represents the output of the memory layer, which uses the hidden-unit outputs of the encoder for the prediction of the next moment; it is obtained by splicing h̄_j and s_j in the feature dimension and applying a linear transformation, and serves as the input of the decoder at time t_{j+1}.
In a second aspect, an embodiment of the present invention provides a system for predicting a significant wave height based on a sequence-to-sequence network, including:
the wave data acquisition unit is used for acquiring an original wave data set, and adding a lag variable into the original wave data set to obtain a first wave data set;
the second wave data set acquisition unit is used for selecting the characteristics of the first wave data set by using a random forest algorithm to obtain a second wave data set;
the sequence-to-sequence model training unit is used for constructing a sequence-to-sequence model with an attention mechanism based on a memory layer, and performing effective wave height prediction training on the second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model;
and the final effective wave height predicted value obtaining unit is used for performing effective wave height prediction on data to be predicted through the trained sequence-to-sequence model to obtain a final effective wave height predicted value.
Compared with the prior art, the second aspect of the invention has the following beneficial effects:
the system comprises a first wave data set, a second wave data set and a third wave data set, wherein the first wave data set obtains an original wave data set, and a lag variable is added into the original wave data set to obtain a first wave data set; the second wave data set acquisition unit selects the characteristics of the first wave data set by using a random forest algorithm to obtain a second wave data set; the sequence-to-sequence model training unit constructs a sequence-to-sequence model with an attention mechanism based on a memory layer, and effective wave height prediction training is carried out on the second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model; and finally, the effective wave height predicted value obtaining unit carries out effective wave height prediction on data to be predicted through the trained sequence-to-sequence model to obtain the final effective wave height predicted value. The system can enhance the characteristics of the original wave data set by adding a lag variable into the original wave data set; the importance of each feature in the first wave data set relative to the effective wave height is sequenced through a random forest algorithm, so that feature selection is carried out on the first wave data set, irrelevant or redundant features are eliminated, gains are brought to prediction, and the accuracy of a prediction model is improved; the system also constructs a sequence-to-sequence model with an attention mechanism based on a memory layer, and performs effective wave height prediction training on the second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model; the method comprises the steps of conducting effective wave height prediction on data to be predicted through a trained sequence-to-sequence model to obtain a final effective wave height prediction value, constructing a sequence-to-sequence network with an attention mechanism based on a memory layer to reduce the occurrence of forgetting problems in long sequence prediction, simultaneously capturing sudden changes of effective wave height better, and further improving the accuracy of the prediction model, so that the effective wave height prediction precision is improved, and the method has a better prediction effect on abnormal values.
In a third aspect, an embodiment of the present invention provides a sequence-to-sequence network-based significant wave height prediction apparatus, including at least one control processor and a memory communicatively connected to the at least one control processor; the memory stores instructions executable by the at least one control processor to cause the at least one control processor to perform a sequence-to-sequence network based method of significant wave height prediction as described above.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform a method for sequence-to-sequence network-based significant wave height prediction as described above.
It is to be understood that the advantageous effects of the third aspect to the fourth aspect compared to the related art are the same as the advantageous effects of the first aspect compared to the related art, and reference may be made to the related description in the first aspect, which is not repeated herein.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a method for predicting a significant wave height based on a sequence-to-sequence network according to an embodiment of the present invention;
FIG. 2 is a block diagram of a sequence-to-sequence model with a memory-layer based attention mechanism provided in accordance with one embodiment of the present invention;
fig. 3 is a block diagram of a system for predicting a significant wave height based on a sequence-to-sequence network according to an embodiment of the present invention.
Detailed Description
The technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making any creative effort, shall fall within the protection scope of the disclosure. It should be noted that the features of the embodiments and examples of the present disclosure may be combined with each other without conflict. In addition, the purpose of the drawings is to graphically supplement the description in the written portion of the specification so that a person can intuitively and visually understand each technical feature and the whole technical solution of the present disclosure, but it should not be construed as limiting the scope of the present disclosure.
In the description of the invention, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The most widely used methods for forecasting the effective wave height are physics-based numerical methods. These methods describe waves using mathematical models based on changes in physical conditions and processes, and are often complex and difficult to solve. In addition, such methods struggle to mine the deep latent features contained in ocean data, which are valuable for effective wave height prediction, and to apply these features to the wave prediction process. With the rise of artificial intelligence, effective wave height prediction methods based on machine learning have developed rapidly; they can mine valuable latent features from ocean data and achieve a good prediction effect. As effective wave height prediction receives increasing attention, the prediction effect has steadily improved, but the following challenges remain to be solved:
(1) Existing methods rarely assess in detail the potentially valuable effective wave height prediction features in buoy data;
(2) The prediction accuracy needs to be further improved;
(3) Most methods do not have an ideal prediction effect on outliers.
In order to solve the problems, the characteristics in the original wave data set can be enhanced by adding a lag variable in the original wave data set; the importance of each feature in the first wave data set relative to the effective wave height is sequenced through a random forest algorithm, so that feature selection is carried out on the first wave data set, irrelevant or redundant features are eliminated, gains are brought to prediction, and the accuracy of a prediction model is improved; the invention also constructs a sequence-to-sequence model with an attention mechanism based on a memory layer, and adopts the sequence-to-sequence model to carry out effective wave height prediction training on the second wave data set to obtain a trained sequence-to-sequence model; the method comprises the steps of conducting effective wave height prediction on data to be predicted through a trained sequence-to-sequence model to obtain a final effective wave height prediction value, constructing a sequence-to-sequence network with an attention mechanism based on a memory layer to reduce the occurrence of forgetting problems in long sequence prediction, simultaneously capturing sudden changes of effective wave height better, and further improving the accuracy of the prediction model, so that the effective wave height prediction precision is improved, and the method has a better prediction effect on abnormal values.
Referring to fig. 1, an embodiment of the present invention provides a method for predicting a significant wave height based on a sequence-to-sequence network, including the steps of:
s100, acquiring an original wave data set, and adding a lag variable into the original wave data set to acquire a first wave data set;
specifically, an original wave data set is obtained, and a lag variable is added to the original wave data set to obtain a first wave data set.
For example, assume the original wave data set is X ∈ R^{n×T_1}, where n represents the number of features of the buoy data and T_1 represents the number of time steps of each feature, i.e., X = (x_1, x_2, ..., x_{T_1}). The effective wave height sequence of length T_2 to be predicted is expressed as Y = (y_1, y_2, ..., y_{T_2}).
The aim of this embodiment is to find a function F that is good enough to minimize the error between the predicted and actual values:
Y = F(X)
This embodiment utilizes a neural network to mine the functional relationship between the input features of the T_1 time steps and the feature to be predicted (effective wave height) over the T_2 time steps.
According to the principle of the autoregressive moving average model, the data sequence formed by the feature to be predicted changing with time is regarded as a random sequence, and the change of this random sequence reflects the continuity of the original wave data in time. On the one hand, the predicted feature may be affected by other features; on the other hand, it has its own law of variation. The way the predicted value Y_t is influenced by the previous T time periods is shown in the following formula:
Y_t = β_1 Y_{t-1} + β_2 Y_{t-2} + ... + β_T Y_{t-T} + Z_t
The predicted value Y_t of the predicted feature at time t can thus be viewed as a linear combination of the values at previous times plus a remainder term, which is consistent with the idea of the multi-layer perceptron (MLP). Based on this heuristic, in the effective wave height prediction problem, this embodiment adds the historical effective wave height data of the current wave data in the original wave data set to the original wave data set as lag variables to obtain the first wave data set. In this embodiment, the wave data from 1 to 12 hours before the current time are added to the current wave data as lag variables of 1 to 12 steps. It should be noted that this embodiment does not limit the lag variables to the data from 1 to 12 hours; this time period can be adjusted as needed.
In this embodiment, the characteristics in the raw wave data set can be enhanced by adding a hysteresis variable to the raw wave data set.
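As an illustration of the lag-variable construction described above, the following sketch (not part of the patent; the column name "swh", the hourly sampling assumption, and the 12-step window are illustrative assumptions) shows how 1- to 12-hour lagged effective wave height columns could be appended to a buoy data table with pandas:

```python
import pandas as pd

def add_lag_features(df: pd.DataFrame, target_col: str = "swh", max_lag: int = 12) -> pd.DataFrame:
    """Append 1..max_lag lagged copies of the target column as new features.

    Assumes df is indexed by time with a constant (e.g. hourly) sampling step,
    so shifting by k rows corresponds to a lag of k hours.
    """
    out = df.copy()
    for k in range(1, max_lag + 1):
        out[f"{target_col}_lag{k}"] = out[target_col].shift(k)
    # The first max_lag rows have no complete lag history; drop them.
    return out.dropna()

# Example (hypothetical file and column names):
# raw = pd.read_csv("buoy.csv", parse_dates=["time"], index_col="time")
# first_wave_data_set = add_lag_features(raw, target_col="swh", max_lag=12)
```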
S200, selecting characteristics of the first wave data set by using a random forest algorithm to obtain a second wave data set;
specifically, the random forest algorithm is used as a combination of a bagging algorithm and a decision tree algorithm, and is widely applied to feature selection. Suppose a given feature X has c 1 ,X 2 ,...,X c Data of (2)Set, intended to calculate different features X i And X j The relative importance between i ≠ j. In the random forest algorithm, the importance of a feature refers to the average contribution degree of the feature in each decision tree of the random forest, and is calculated by the kini index. The formula of the kini index is as follows:
GI_h = 1 − Σ_{k=1}^{K} p_{hk}^2
where K denotes that all samples can be classified into K classes, and p_{hk} denotes the proportion of class k in node h. After determining the Gini index of node h, the importance of feature X_j at node h, i.e., the change in the Gini index before and after the branching of node h, can be computed:
VIM_{j,h}^{(Gini)} = GI_h − GI_m − GI_o
where GI_m and GI_o respectively denote the Gini indexes of the two new nodes after node h branches. Denoting the set of nodes at which feature X_j appears in decision tree i as H, the importance VIM_j^{(i)} of feature X_j in tree i is:
VIM_j^{(i)} = Σ_{h∈H} VIM_{j,h}^{(Gini)}
Supposing there are N trees in the random forest, the importance score VIM_j of feature X_j is:
VIM_j = (1/N) Σ_{i=1}^{N} VIM_j^{(i)}
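A minimal sketch of this importance-ranking step, using scikit-learn's impurity-based feature importances as a stand-in for the formulas above (the estimator settings are assumptions, not values fixed by the patent; for a regression target the trees use variance reduction rather than the classification Gini index, but the averaging over trees mirrors the VIM_j score):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def rank_features_by_importance(X: pd.DataFrame, y: pd.Series, n_trees: int = 100) -> pd.Series:
    """Return the features sorted from most to least important.

    feature_importances_ averages each feature's impurity decrease over all
    trees in the forest, analogous to VIM_j above.
    """
    forest = RandomForestRegressor(n_estimators=n_trees, random_state=0)
    forest.fit(X, y)
    importance = pd.Series(forest.feature_importances_, index=X.columns)
    return importance.sort_values(ascending=False)

# ranked = rank_features_by_importance(first_wave_data_set.drop(columns="swh"),
#                                      first_wave_data_set["swh"])
```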
due to the fact that the marine environment difference of different areas is large, the feature selection result of data of each station also has certain difference. In addition, although the random forest algorithm ranks the importance of each feature of the wave data relative to the height of the significant wave, it is also necessary to test in practice to specifically select which features are input to the model to achieve the best predictive effect. Therefore, the embodiment designs an adaptive feature selection algorithm for each site, and the specific process of the algorithm is as follows:
obtaining an effective wave height observation value of the first wave data set, initializing a historical correlation coefficient to zero, initializing the second wave data set as an empty data set, initializing a patience value to zero, and presetting a maximum patience value;
sorting the features in the first wave data set from high to low according to feature importance by using the random forest algorithm to obtain a sorted first wave data set;
adding the features in the sorted first wave data set to the second wave data set from high to low according to feature importance, and calculating an effective wave height predicted value through a neural network each time a feature is added to the second wave data set;
calculating the current correlation coefficient between the effective wave height predicted value and the effective wave height observed value, where the correlation coefficient is calculated as:
CC = Σ_i (y_i − ȳ)(p_i − p̄) / sqrt( Σ_i (y_i − ȳ)^2 · Σ_i (p_i − p̄)^2 )
where y_i is the effective wave height observation, p_i is the effective wave height prediction, ȳ is the average of the effective wave height observations, and p̄ is the average of the effective wave height predictions;
comparing the current correlation coefficient with the historical correlation coefficient:
if the current correlation coefficient is greater than the historical correlation coefficient, adding the feature in the first wave data set to the second wave data set;
if the current correlation coefficient is less than or equal to the historical correlation coefficient, increasing the patience value by one and removing the feature in the first wave data set corresponding to the current correlation coefficient;
and finishing the judgment after all the data in the first wave data set have been judged or the patience value exceeds the maximum patience value, obtaining the second wave data set after feature selection.
After the importance ranking of the features is obtained through the random forest, the embodiment uses a forward selection method to sequentially add the features to the second wave data set from high importance to low importance. After each addition, the present embodiment performs 24-hour effective wave height prediction using the current second wave data set, and records the correlation coefficient between the prediction result and the true value. And finally, selecting a feature set corresponding to the optimal correlation coefficient as the input of the sequence to the sequence model.
In this embodiment, the importance of each feature in the first wave data set with respect to the effective wave height is ranked by a random forest algorithm, and a feature set corresponding to an optimal correlation coefficient is selected as an input of a sequence to a sequence model, thereby eliminating irrelevant or redundant features, increasing the correlation between features, providing a gain for prediction, and improving the accuracy of the prediction model.
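The adaptive forward-selection loop described above can be sketched as follows (an illustrative sketch only: the helper `predict_swh` stands in for the neural-network prediction step and is assumed, the maximum patience of 3 is an arbitrary example, and the correlation coefficient is computed with numpy):

```python
import numpy as np

def corr_coef(obs: np.ndarray, pred: np.ndarray) -> float:
    """Pearson correlation coefficient between observed and predicted wave heights."""
    return float(np.corrcoef(obs, pred)[0, 1])

def select_features(ranked_features, data, y_obs, predict_swh, max_patience: int = 3):
    """Forward selection with a patience counter, as in the adaptive algorithm above.

    ranked_features: feature names sorted by random forest importance (high to low)
    predict_swh:     callable(data, selected_features) -> predicted wave height series
    """
    selected, best_cc, patience = [], 0.0, 0
    for feat in ranked_features:
        candidate = selected + [feat]
        cc = corr_coef(y_obs, predict_swh(data, candidate))
        if cc > best_cc:            # the feature improves the correlation: keep it
            selected, best_cc = candidate, cc
        else:                       # no improvement: drop the feature, lose patience
            patience += 1
            if patience > max_patience:
                break
    return selected

# second_wave_features = select_features(list(ranked.index), first_wave_data_set,
#                                        first_wave_data_set["swh"].values, predict_swh)
```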
S300, constructing a sequence-to-sequence model with an attention mechanism based on a memory layer, and performing effective wave height prediction training on a second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model;
specifically, referring to fig. 2, in the present embodiment, due to the defects of the recurrent neural network, even the recurrent neural network such as the optimized long-time memory network (LSTM) or gated recurrent unit network (GRU) can be reduced as much as possible and cannot avoid the occurrence of forgetting when inputting a long sequence. To alleviate this problem, this embodiment adds an attention mechanism with a memory-based layer to the sequence-to-sequence model (Seq 2 Seq). In particular, the present embodiment introduces a neural network to compute the weights of the output contributions of the encoder to the decoder. Based on the weights, a weighted average output of the encoder is then calculated, which relates the particular decoding of the decoder to certain outputs of the encoder, as follows:
using a network of gated recursive elements as an encoder and a decoder, mining a relationship between previous and subsequent values in a time series based on the encoder and the decoder, wherein,
the encoder is built by the following formula:
Input: X = (x_1, x_2, ..., x_t)
z_t = σ(W_z · [h_{t-1}, x_t])
r_t = σ(W_r · [h_{t-1}, x_t])
h̃_t = tanh(W · [r_t * h_{t-1}, x_t])
Output: h_t = (1 − z_t) * h_{t-1} + z_t * h̃_t
where X represents the current input data of each cell in the gated recurrent unit network, x_t represents the data at the current time, h_{t-1} represents the hidden state passed from the previous time t−1, z_t represents the update gate, r_t represents the reset gate, h̃_t represents the intermediate memory at the current time, which combines the memory of the previous time with the input of the current time, W_z represents the weight matrix of the update gate z_t, W_r represents the weight matrix of the reset gate r_t, W represents the weight matrix of the intermediate memory h̃_t, h_t represents the hidden state at the current time, Input represents the input of the gated recurrent unit network, and Output represents the output of the gated recurrent unit network.
The decoder is built by the following formula:
Input: X = (c_1, c_2, ..., c_k)
s_0 = h_t
z_k = σ(W_z · [s_{k-1}, c_k])
r_k = σ(W_r · [s_{k-1}, c_k])
s̃_k = tanh(W · [r_k * s_{k-1}, c_k])
s_k = (1 − z_k) * s_{k-1} + z_k * s̃_k
Output: y_k = s_k
where X denotes the input of the decoder, which is obtained through the attention mechanism, s_0 = h_t denotes that the last hidden state h_t of the encoder is used as the initial input s_0 of the hidden layer of the decoder, z_k represents the update gate, r_k represents the reset gate, s̃_k represents the intermediate memory at the current time, which combines the memory of the previous time with the input of the current time, W_z represents the weight matrix of the update gate z_k, W_r represents the weight matrix of the reset gate r_k, W represents the weight matrix of the intermediate memory s̃_k, s_k represents the hidden state at the current time, and y_k = s_k denotes that the hidden state s_k at the current time is used as the output y_k of the gated recurrent unit network.
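For readers who prefer code, the encoder-decoder pair described by the formulas above corresponds roughly to the following PyTorch sketch (layer sizes, the single-layer configuration, and the final linear head are assumptions: the gate equations are implemented internally by nn.GRU/nn.GRUCell, and the linear head mapping s_k to a scalar wave height is added only to make the sketch runnable, whereas the formulas above take y_k = s_k directly):

```python
import torch
import torch.nn as nn

class GRUEncoder(nn.Module):
    """Encoder: a GRU that consumes the input feature sequence and returns all hidden states."""
    def __init__(self, n_features: int, hidden_size: int):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size, batch_first=True)

    def forward(self, x):                 # x: (batch, T1, n_features)
        outputs, h_last = self.gru(x)     # outputs: (batch, T1, hidden), h_last: (1, batch, hidden)
        return outputs, h_last

class GRUDecoderCell(nn.Module):
    """Decoder: one GRU step driven by the attention/memory-layer context c_k."""
    def __init__(self, context_size: int, hidden_size: int):
        super().__init__()
        self.gru = nn.GRUCell(context_size, hidden_size)
        self.out = nn.Linear(hidden_size, 1)   # assumed head: maps s_k to the wave height y_k

    def forward(self, c_k, s_prev):            # c_k: (batch, context), s_prev: (batch, hidden)
        s_k = self.gru(c_k, s_prev)
        return self.out(s_k), s_k
```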
Adding a memory layer in the attention mechanism, constructing a sequence to a sequence model according to an encoder, a decoder and the attention mechanism, and establishing the attention mechanism with the memory-based layer by the following formula:
score(h_i, s_j) = v^T · tanh(W[h_i; s_j] + b)
ω_{ij} = Softmax(score(h_i, s_j))
h̄_j = Σ_i ω_{ij} h_i
c_{j+1} = Linear([h̄_j; s_j])
where h_i represents the hidden state of the encoder at time i, s_j represents the hidden state of the decoder at time j, the corresponding output score is obtained from h_i and s_j, v^T and b represent bias parameters in the neural network, ω_{ij} represents the weight between the encoder and decoder hidden states, computed by normalizing the scores with the softmax function, h̄_j represents the weighted average of the encoder outputs, and c_{j+1} represents the output of the memory layer, which uses the hidden-unit outputs of the encoder for the prediction of the next moment; it is obtained by splicing h̄_j and s_j in the feature dimension and applying a linear transformation, and serves as the input of the decoder at time t_{j+1}.
And performing effective wave height prediction training on the second wave data set by adopting a sequence-to-sequence model to obtain a trained sequence-to-sequence model.
In this embodiment, a sequence-to-sequence model with an attention mechanism is constructed, and a gated recurrent unit network is used as the encoder and decoder to mine the relationship between preceding and subsequent values in the time series. In order to reduce the occurrence of the forgetting problem when a long sequence is input, this embodiment also designs a memory layer that increases the weight of the hidden layer of the decoder in the attention mechanism. Meanwhile, the network depth of the encoder and the decoder is increased, which improves the generalization and feature extraction capabilities of the network. By constructing the sequence-to-sequence network with the memory-layer-based attention mechanism, this embodiment reduces the occurrence of the forgetting problem in long-sequence prediction, better captures sudden changes of the effective wave height, and further improves the accuracy of the prediction model.
And S400, performing effective wave height prediction on data to be predicted through the trained sequence-to-sequence model to obtain a final effective wave height prediction value.
Specifically, the effective wave height prediction is performed on data to be predicted through a trained sequence-to-sequence model to obtain a final effective wave height prediction value, and the specific process is as follows:
The data to be predicted are input into the trained sequence-to-sequence model, and the last hidden state h_{t_1} of the encoder in the trained sequence-to-sequence model is used as the initial input s_0 of the hidden layer of the decoder. Let h_i denote the hidden state of the encoder at time i, i = 1, 2, ..., t_1, and let s_j denote the hidden state of the decoder at time j, j = t_1+1, t_1+2, ..., t_1+t_2. The corresponding output score is then calculated from these two vectors by the following formula:
score(h_i, s_j) = v^T · tanh(W[h_i; s_j] + b)
Then, according to the obtained output scores, the weights ω_{ij} between the hidden states of the encoder and the decoder are calculated by normalizing the scores with the softmax function:
ω_{ij} = Softmax(score(h_i, s_j))
From the calculated weights, the weighted average h̄_j of the encoder outputs is obtained:
h̄_j = Σ_{i=1}^{t_1} ω_{ij} h_i
In order to fully utilize the decoding information and use the prediction result of the previous step for the next prediction, this embodiment designs a memory layer that stores the output s_j of each step of the decoder. The input c_{j+1} of the decoder at time t_{j+1} is obtained by splicing h̄_j and s_j in the feature dimension and applying a linear transformation:
c_{j+1} = Linear([h̄_j; s_j])
The final effective wave height predicted value y_j of the data to be predicted is then obtained through the decoder, and the effective wave height prediction of the data to be predicted is completed through the above steps.
In this embodiment, the characteristics in the original wave data set can be enhanced by adding a lag variable to the original wave data set; the importance of each feature in the first wave data set relative to the effective wave height is sequenced through a random forest algorithm, so that feature selection is carried out on the first wave data set, irrelevant or redundant features are eliminated, gains are brought to prediction, and the accuracy of a prediction model is improved; in the embodiment, a sequence-to-sequence model with an attention mechanism based on a memory layer is constructed, and the sequence-to-sequence model is adopted to carry out effective wave height prediction training on the second wave data set, so that a trained sequence-to-sequence model is obtained; the method comprises the steps of carrying out effective wave height prediction on data to be predicted through a trained sequence-to-sequence model to obtain a final effective wave height prediction value, constructing a sequence-to-sequence network with an attention mechanism based on a memory layer to reduce the occurrence of forgetting problems in long sequence prediction, capturing sudden changes of effective wave heights better, and further improving the accuracy of the prediction model, so that the effective wave height prediction precision is improved, and a better prediction effect is achieved on abnormal values.
Referring to fig. 3, an embodiment of the present invention provides a system for predicting a significant wave height based on a sequence-to-sequence network, including:
a first wave data set obtaining unit 100, configured to obtain an original wave data set, and add a lag variable to the original wave data set to obtain a first wave data set;
a second wave data set obtaining unit 200, configured to perform feature selection on the first wave data set by using a random forest algorithm to obtain a second wave data set;
a sequence-to-sequence model training unit 300, configured to construct a sequence-to-sequence model with a memory layer-based attention mechanism, and perform effective wave height prediction training on the second wave data set by using the sequence-to-sequence model to obtain a trained sequence-to-sequence model;
and a final effective wave height predicted value obtaining unit 400, configured to perform effective wave height prediction on data to be predicted through the trained sequence-to-sequence model, to obtain a final effective wave height predicted value.
It should be noted that, since the system for predicting the significant wave height based on the sequence-to-sequence network in the present embodiment is based on the same inventive concept as the above-mentioned method for predicting the significant wave height based on the sequence-to-sequence network, the corresponding contents in the method embodiments are also applicable to the system embodiments, and are not described in detail herein.
The embodiment of the invention also provides an effective wave height prediction device based on a sequence-to-sequence network, which comprises: at least one control processor and a memory for communicative connection with the at least one control processor.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The non-transitory software programs and instructions required to implement a sequence-to-sequence network-based significant wave height prediction method of the above embodiments are stored in a memory, and when executed by a processor, perform the sequence-to-sequence network-based significant wave height prediction method of the above embodiments, for example, perform the above-described method steps S100 to S400 in fig. 1.
The above described system embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions, which are executed by one or more control processors, and can cause the one or more control processors to execute a method for predicting a significant wave height based on a sequence-to-sequence network in the above method embodiments, for example, perform the functions of the method steps S100 to S400 in fig. 1 described above.
Through the above description of the embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by software plus a general hardware platform. Those skilled in the art will appreciate that all or part of the processes of the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, and the computer program may be stored in a computer readable storage medium, and when executed, may include the processes of the embodiments of the methods. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that the present invention is not limited to the above embodiments, and various equivalent modifications or substitutions can be made without departing from the spirit of the present invention and the scope of the present invention is defined by the appended claims.

Claims (10)

1. A method for predicting the effective wave height based on a sequence-to-sequence network is characterized by comprising the following steps:
acquiring an original wave data set, and adding a lag variable into the original wave data set to acquire a first wave data set;
performing feature selection on the first wave data set by using a random forest algorithm to obtain a second wave data set;
constructing a sequence-to-sequence model with a memory layer-based attention mechanism, and performing effective wave height prediction training on the second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model;
and performing effective wave height prediction on data to be predicted through the trained sequence-to-sequence model to obtain a final effective wave height prediction value.
2. The method according to claim 1, wherein the obtaining a raw wave data set, adding a lag variable to the raw wave data set, and obtaining a first wave data set comprises:
and adding historical effective wave height data of the current wave data in the original wave data set into the original wave data set as a lag variable to obtain a first wave data set.
3. The method for predicting the effective wave height based on the sequence-to-sequence network according to claim 1, wherein the step of performing feature selection on the first wave data set by using a random forest algorithm to obtain a second wave data set comprises:
obtaining an effective wave height observation value of the first wave data set, initializing a historical correlation coefficient to be zero, initializing a second wave data set to be a null data set, initializing a patience value to be zero, and presetting a maximum patience value;
sorting the features in the first wave data set from high to low according to feature importance by using a random forest algorithm to obtain a sorted first wave data set;
adding the characteristics in the ordered first wave data set to the second wave data set from high to low according to characteristic importance, and calculating an effective wave height predicted value through a neural network when data is added to the second wave data set once;
calculating a current correlation coefficient between the predicted value of the significant wave height and the observed value of the significant wave height, and judging the current correlation coefficient and the historical correlation coefficient:
adding features in the first wave data set to the second wave data set if the current correlation coefficient is greater than the historical correlation coefficient;
if the current correlation coefficient is less than or equal to the historical correlation coefficient, increasing the patience value by one, and removing the characteristic in the first wave data set corresponding to the current correlation coefficient;
and finishing the judgment after all the data in the first wave data set are judged or the patience value is increased to be larger than the maximum patience value, and obtaining a second wave data set after the characteristics are selected.
4. The method for predicting the significant wave height based on the sequence-to-sequence network according to claim 1, wherein the constructing of the sequence-to-sequence model with the attention mechanism based on the memory layer comprises:
adopting a gated recurrent unit network as an encoder and a decoder, and mining the relationship between preceding and subsequent values in a time sequence based on the encoder and the decoder;
adding a memory layer in an attention mechanism, and constructing a sequence-to-sequence model according to the encoder, the decoder and the attention mechanism.
5. The method of claim 4, wherein the encoder is constructed by the following formula:
Input: X = (x_1, x_2, ..., x_t)
z_t = σ(W_z · [h_{t-1}, x_t])
r_t = σ(W_r · [h_{t-1}, x_t])
h̃_t = tanh(W · [r_t * h_{t-1}, x_t])
Output: h_t = (1 − z_t) * h_{t-1} + z_t * h̃_t
where X represents the current input data of each cell in the gated recurrent unit network, x_t represents the data at the current time, h_{t-1} represents the hidden state passed from the previous time t−1, z_t represents the update gate, r_t represents the reset gate, h̃_t represents the intermediate memory at the current time, which combines the memory of the previous time with the input of the current time, W_z represents the weight matrix of the update gate z_t, W_r represents the weight matrix of the reset gate r_t, W represents the weight matrix of the intermediate memory h̃_t, h_t represents the hidden state at the current time, Input represents the input of the gated recurrent unit network, and Output represents the output of the gated recurrent unit network.
6. The method of claim 5, wherein the decoder is constructed by the following formula:
Input: X = (c_1, c_2, ..., c_k)
s_0 = h_t
z_k = σ(W_z · [s_{k-1}, c_k])
r_k = σ(W_r · [s_{k-1}, c_k])
s̃_k = tanh(W · [r_k * s_{k-1}, c_k])
s_k = (1 − z_k) * s_{k-1} + z_k * s̃_k
Output: y_k = s_k
where X denotes the input of the decoder, which is obtained through the attention mechanism, s_0 = h_t denotes that the last hidden state h_t of the encoder is used as the initial input s_0 of the hidden layer of the decoder, z_k represents the update gate, r_k represents the reset gate, s̃_k represents the intermediate memory at the current time, which combines the memory of the previous time with the input of the current time, W_z represents the weight matrix of the update gate z_k, W_r represents the weight matrix of the reset gate r_k, W represents the weight matrix of the intermediate memory s̃_k, s_k represents the hidden state at the current time, and y_k = s_k denotes that the hidden state s_k at the current time is used as the output y_k of the gated recurrent unit network.
7. The method of claim 6, wherein the attention mechanism is established by the following formula:
score(h_i, s_j) = v^T · tanh(W[h_i; s_j] + b)
ω_{ij} = Softmax(score(h_i, s_j))
h̄_j = Σ_i ω_{ij} h_i
c_{j+1} = Linear([h̄_j; s_j])
where h_i represents the hidden state of the encoder at time i, s_j represents the hidden state of the decoder at time j, the corresponding output score is obtained from h_i and s_j, v^T and b represent bias parameters in the neural network, ω_{ij} represents the weight between the encoder and decoder hidden states, computed by normalizing the scores with the softmax function, h̄_j represents the weighted average of the encoder outputs, and c_{j+1} represents the output of the memory layer, which uses the hidden-unit outputs of the encoder for the prediction of the next moment; it is obtained by splicing h̄_j and s_j in the feature dimension and applying a linear transformation, and serves as the input of the decoder at time t_{j+1}.
8. A system for prediction of significant wave height based on a sequence-to-sequence network, comprising:
the wave data acquisition unit is used for acquiring an original wave data set, and adding a lag variable into the original wave data set to obtain a first wave data set;
the second wave data set acquisition unit is used for selecting the characteristics of the first wave data set by using a random forest algorithm to obtain a second wave data set;
the sequence-to-sequence model training unit is used for constructing a sequence-to-sequence model with an attention mechanism based on a memory layer, and performing effective wave height prediction training on the second wave data set by adopting the sequence-to-sequence model to obtain a trained sequence-to-sequence model;
and the final effective wave height predicted value obtaining unit is used for performing effective wave height prediction on data to be predicted through the trained sequence-to-sequence model to obtain a final effective wave height predicted value.
9. A sequence-to-sequence network based significant wave height prediction apparatus, comprising at least one control processor and a memory communicatively connected to the at least one control processor; wherein the memory stores instructions executable by the at least one control processor to cause the at least one control processor to perform the sequence-to-sequence network based significant wave height prediction method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform a method of sequence-to-sequence network based significant wave height prediction as claimed in any one of claims 1 to 7.
CN202210678407.4A 2022-06-16 2022-06-16 Effective wave height prediction method and system based on sequence-to-sequence network Active CN115169439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210678407.4A CN115169439B (en) 2022-06-16 2022-06-16 Effective wave height prediction method and system based on sequence-to-sequence network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210678407.4A CN115169439B (en) 2022-06-16 2022-06-16 Effective wave height prediction method and system based on sequence-to-sequence network

Publications (2)

Publication Number Publication Date
CN115169439A (en) 2022-10-11
CN115169439B CN115169439B (en) 2023-07-07

Family

ID=83485931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210678407.4A Active CN115169439B (en) 2022-06-16 2022-06-16 Effective wave height prediction method and system based on sequence-to-sequence network

Country Status (1)

Country Link
CN (1) CN115169439B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050514A (en) * 2014-05-29 2014-09-17 河海大学 Sea wave significant wave height long-term trend prediction method based on reanalysis data
CN113159389A (en) * 2021-03-25 2021-07-23 大连海事大学 Financial time sequence prediction method based on deep forest generation countermeasure network
CN114398819A (en) * 2021-11-26 2022-04-26 中国石油大学(华东) Method and system for predicting effective wave height of unstructured grid based on deep learning
CN114549925A (en) * 2022-01-18 2022-05-27 大连理工大学 Sea wave effective wave height time sequence prediction method based on deep learning
CN114445634A (en) * 2022-02-28 2022-05-06 南京信息工程大学 Sea wave height prediction method and system based on deep learning model
CN114519311A (en) * 2022-04-21 2022-05-20 中国海洋大学 Prediction method, system, storage medium and application of total harbor basin wave effective wave height

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JICHAO WANG et al.: "Forecasting of significant wave height based on gated recurrent unit network in the Taiwan Strait and its adjacent waters", Water, pages 1-21 *
MOHAMMAD PIRHOOSHYARAN et al.: "Forecasting, hindcasting and feature selection of ocean waves via recurrent and sequence-to-sequence networks", Ocean Engineering, pages 1-14 *
NAWIN RAJ et al.: "An EEMD-BiLSTM Algorithm Integrated with Boruta Random Forest Optimiser for Significant Wave Height Forecasting along Coastal Areas of Queensland, Australia", Remote Sensing, pages 1-20 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115711612A (en) * 2022-11-02 2023-02-24 中国人民解放军国防科技大学 Method for predicting effective wave height of wave
CN115711612B (en) * 2022-11-02 2024-06-11 中国人民解放军国防科技大学 Prediction method for effective wave height of waves
CN116340868A (en) * 2023-03-27 2023-06-27 中国科学院南海海洋研究所 Wave feature set forecasting method and device based on machine learning
CN116449462A (en) * 2023-06-19 2023-07-18 山东省计算中心(国家超级计算济南中心) Method, system, storage medium and equipment for predicting effective wave height space-time sequence of sea wave
CN116449462B (en) * 2023-06-19 2023-10-03 山东省计算中心(国家超级计算济南中心) Method, system, storage medium and equipment for predicting effective wave height space-time sequence of sea wave
CN117791856A (en) * 2023-12-20 2024-03-29 武汉人云智物科技有限公司 Power grid fault early warning method and device based on inspection robot
CN117909713A (en) * 2024-01-03 2024-04-19 中国人民解放军国防科技大学 Wave data characteristic selection method, device and equipment based on unbalanced learning

Also Published As

Publication number Publication date
CN115169439B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN115169439B (en) Effective wave height prediction method and system based on sequence-to-sequence network
Lian et al. Landslide displacement prediction with uncertainty based on neural networks with random hidden weights
CN112668804B (en) Method for predicting broken track of ground wave radar ship
CN112712214B (en) Method, system, device and storage medium for predicting track of maritime search and rescue object
CN107944648A (en) Accurate forecasting method for the fuel consumption rate of large ships at sailing speed
CN114912673A (en) Water level prediction method based on whale optimization algorithm and long-term and short-term memory network
Lopes et al. Artificial neural networks approaches for predicting the potential for hydropower generation: a case study for Amazon region
CN109460874B (en) Significant wave height prediction method based on deep learning
CN115545334B (en) Land utilization type prediction method and device, electronic equipment and storage medium
CN115018193A (en) Time series wind energy data prediction method based on LSTM-GA model
CN112508177A (en) Network structure searching method and device, electronic equipment and storage medium
Cruz et al. Prediction intervals with LSTM networks trained by joint supervision
CN117271979A (en) Deep learning-based equatorial Indian ocean surface ocean current velocity prediction method
CN116822722A (en) Water level prediction method, system, device, electronic equipment and medium
Xiao et al. Mixture of deep neural networks for instancewise feature selection
CN115936208A (en) Short-term power load prediction method and device based on deep hybrid learning model
CN111428420B (en) Method and device for predicting sea surface flow velocity, computer equipment and storage medium
Brunner et al. Using state predictions for value regularization in curiosity driven deep reinforcement learning
CN114139783A (en) Wind power short-term power prediction method and device based on nonlinear weighted combination
Stalder et al. Probabilistic modeling of lake surface water temperature using a Bayesian spatio-temporal graph convolutional neural network
Doudkin et al. Spacecraft Telemetry Time Series Forecasting With Ensembles of Neural Networks
Shi et al. An Attention-based Context Fusion Network for Spatiotemporal Prediction of Sea Surface Temperature
CN116070714B (en) Cloud edge cooperative training method and system based on federal learning and neural architecture search
Abdelkader et al. On the Utilization of an Ensemble of Meta-Heuristics for Simulating Energy Consumption in Buildings
Tan et al. OSP-FEAN: Optimizing Significant Wave Height Prediction with Feature Engineering and Attention Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant