CN114742206A - Rainfall intensity estimation method for comprehensive multi-space-time scale Doppler radar data - Google Patents


Info

Publication number
CN114742206A
CN114742206A (Application No. CN202210417830.9A)
Authority
CN
China
Prior art keywords
data
factors
radar
model
rainfall
Prior art date
Legal status
Granted
Application number
CN202210417830.9A
Other languages
Chinese (zh)
Other versions
CN114742206B (en)
Inventor
刘欢欢
田伟
沈凯令
易雷
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202210417830.9A
Publication of CN114742206A
Application granted
Publication of CN114742206B
Status: Active

Classifications

    • G06N 3/045 Combinations of networks
    • G06F 16/215 Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G06F 16/22 Indexing; Data structures therefor; Storage structures
    • G06F 16/254 Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 50/26 Government or public services
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Development Economics (AREA)
  • Biophysics (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Marketing (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Primary Health Care (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to the technical field of deep-learning quantitative rainfall estimation, and in particular to a rainfall intensity estimation method for comprehensive multi-space-time-scale Doppler radar data, comprising the following steps: acquiring Doppler single-polarization radar data and ground automatic weather station observations; acquiring radar reflectivity factors, meteorological factors and geographic factors; converting polar coordinates into Cartesian coordinates; performing K-nearest-neighbor interpolation and data slicing; making labels; designing a self-attention module and building the model; training the model and tuning its parameters; and using the tuned optimal model to estimate the precipitation intensity of the test set. Compared with traditional rainfall estimation methods, the method can better combine the meteorological factors at multiple scales that are beneficial to rainfall estimation while suppressing unfavorable factors to a certain degree; the designed model can learn the microphysical characteristics of the precipitation field, effectively uses the meteorological and geographic factors, and combines them with the radar reflectivity factors, thereby reducing errors and producing more accurate estimates.

Description

Rainfall intensity estimation method for comprehensive multi-space-time scale Doppler radar data
Technical Field
The invention relates to the technical field of deep-learning quantitative rainfall estimation, and in particular to a rainfall intensity estimation method for comprehensive multi-space-time-scale Doppler radar data.
Background
Rainfall intensity estimation is an important research direction in meteorology and is closely related to people's daily lives. In recent years, strong convective weather in summer has become more frequent, and natural disasters such as urban waterlogging, floods and debris flows have seriously threatened people's lives and property and caused severe economic losses to the country. Doppler radar plays an increasingly important role in rainfall intensity estimation, and accurate, timely rainfall intensity estimation is of great significance for disaster prevention and mitigation.
However, the precipitation process is particularly complex, and high-resolution, high-accuracy rainfall intensity estimation is a challenging task. Conventional radar rainfall estimation research often focuses on the small-scale radar reflectivity factors around the rain gauge and seldom considers the influence of meteorological factors and of the geographical environment around the rain gauge on rainfall.
Disclosure of Invention
The invention aims to provide a rainfall intensity estimation method for comprehensive multi-space-time-scale Doppler radar data, so as to solve the problems described in the background section.
The technical solution of the invention is as follows: the rainfall intensity estimation method for comprehensive multi-space-time-scale Doppler radar data comprises the following steps:
s1, acquiring Doppler single polarization radar data and ground automatic meteorological station observation data;
s2, acquiring radar reflectivity factors, meteorological factors and geographic factors;
s3, using the composite reflectivity, converting the radar data in the polar coordinate system into grid data in a Cartesian coordinate system, gridding the data with inverse distance weighted interpolation, removing noise using the Mahalanobis distance, performing quality control on the radar data, and acquiring accurate gridded radar data with latitude and longitude;
s4, acquiring the 1 × 400 × 400 data sets of all sites at all times in the Cartesian coordinate system, cutting them to obtain three single-layer data sets, normalizing all the data, and combining the three single-layer data sets into one data sample for storage;
s5, taking the actual precipitation at the site as the ground-truth label, dividing the data into a test set and a training set in a 2:8 ratio, and finally storing all the data in matrix form;
s6, establishing a rainfall intensity estimation model designed by using a deep learning technology;
s7, initializing the neuron weights, number of training epochs, learning rate and learning-rate decay coefficient of the model, obtaining a precipitation estimate through the feature extraction network and a fully connected neural network, calculating the loss of the prediction result, and obtaining the optimal network model and parameters;
s8, inputting the data in the test set into the network model as input layer data to obtain corresponding forecast precipitation data;
s9, selecting an evaluation index for measuring model performance, measuring the correlation between the true value and the estimated value, and analyzing in time and space dimensions respectively according to the result to obtain the optimal result.
Further, in S1, the Doppler radar base data and the ground station precipitation data are respectively obtained from the China Meteorological Data Network.
Further, in S2, the radar reflectivity factor is the main input and the meteorological and geographic factors are auxiliary inputs; the meteorological factors mainly use temperature and humidity, the geographic factor uses elevation, and the data are preliminarily preprocessed.
Further, in S3, the grid point closest to the national weather station is selected as the center of the reflectivity factor, multi-scale input is adopted, and the radar base data in polar coordinates are converted into grid data in Cartesian coordinates.
Further, in S3, the step of removing noise includes: denoising and filtering with conventional echo image processing methods and removing pixel points with values smaller than 70.
Further, in S4, the 1 × 400 × 400 data sets of all sites at all times in the Cartesian coordinate system are obtained and cut with the national weather station as the center to obtain single-layer data sets of sizes 1 × 100 × 100, 1 × 50 × 50 and 1 × 25 × 25, respectively; all the data are normalized, and the three single-layer data sets are combined into one data sample for storage.
Further, in S6, precipitation features are extracted from the radar data images using hybrid dilated convolution; down-sampling is performed with max pooling to remove redundant information, compress the features and reduce network complexity; a non-local module is used to enlarge the receptive field of the higher network layers so that the acquired information is more widely distributed; and a designed multi-scale attention module is used to balance the large-scale and small-scale images centered on the site.
Further, in S7, a weighted combination of the mean square error (MSE) and the mean absolute error (MAE) is used as the loss function to calculate the loss of the prediction result; back propagation is performed through the neural network, the gradient of each weight is calculated, the weights are updated according to a gradient descent algorithm, and the neuron weights are adjusted continuously until the training-set error is within a reasonable range, at which point network training stops and the optimal network model and parameters are obtained.
Further, in S9, the root mean square error (RMSE), mean absolute error (MAE) and correlation coefficient (CC) are used as the evaluation indexes for measuring model performance.
Compared with the prior art, the rainfall intensity estimation method for comprehensive multi-space-time-scale Doppler radar data provided by the invention has the following improvements and advantages:
First: the radar reflectivity factor is used as the main input and the meteorological and geographic factors are used as auxiliary input variables, with temperature and humidity as the meteorological factors and elevation as the geographic factor; the data are processed, the training set is used to train the model and tune its parameters, and a model that can be applied to actual rainfall intensity estimation is finally obtained. The model makes reasonable use of historical rainfall observation data, improves the accuracy of rainfall intensity estimation, reasonably accounts for the influence of cloud-cluster motion paths and cloud-cluster size on the rainfall values measured by the rain gauge and for the influence of the geographical environment around the rain gauge on rainfall, can estimate regional rainfall intensity, and has good application prospects;
Second: the model is entirely based on convolutional neural networks, and a new multi-scale self-attention module is designed to better fuse the factors at different scales that are beneficial to rainfall estimation. With the network structure unchanged, the input data gain several useful dimensions: the meteorological factors within the region are interpolated from the meteorological factor data of the national weather stations using kriging interpolation with a spherical model, then matched with the reflectivity factors in time and space and used together as input, so that the influence of multi-scale spatio-temporal data on the actual precipitation at the site is reasonably considered;
Third: the invention improves the accuracy of estimating rainfall from radar data and, using deep learning techniques, demonstrates the effectiveness of multi-scale radar reflectivity factors together with meteorological and geographic factors as auxiliary input variables in the rainfall intensity estimation task. The designed multi-scale attention module can better combine the meteorological factors at multiple scales that are beneficial to rainfall estimation and suppresses adverse factors to a certain degree; the designed model can learn the microphysical characteristics of the precipitation field, effectively uses the meteorological and geographic factors, and combines them with the radar reflectivity factors, thereby reducing errors and producing more accurate estimates.
Drawings
The invention is further explained below with reference to the figures and examples:
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of hybrid dilated convolution in this embodiment;
FIG. 3 is a schematic diagram showing a non-local module structure in the present embodiment;
FIG. 4 is a schematic structural diagram of a multi-scale attention module in the present embodiment;
FIG. 5 is a schematic view of a model structure in the present embodiment;
fig. 6 is a schematic diagram of the structure of the fully-connected layer in this embodiment.
Detailed Description
The present invention is described in detail below, and technical solutions in embodiments of the present invention are clearly and completely described, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Aiming at the insufficient accuracy of traditional precipitation estimation methods, the invention addresses the problem of improving the accuracy of precipitation estimation from radar data and designs a multi-scale deep learning model with a self-attention module. The effectiveness of multi-scale radar reflectivity factors, together with meteorological and geographic factors as covariates, in the quantitative precipitation estimation task is demonstrated. Because changes in the cloud clusters affect the rain gauge, the large-scale feature maps among the multiple scales are used to learn the complex changes and motion of the cloud clusters over a wider area, while the small-scale feature maps learn the spatial information near the rain gauge that is more strongly correlated with rainfall. Taking the spatial correlation of the meteorological and geographic factors into account, two dimensions are used as auxiliary variables to capture their spatial characteristics. The designed self-attention mechanism module better combines the factors at multiple scales that are favorable to rainfall estimation and suppresses adverse factors to a certain degree. Compared with traditional rainfall estimation methods that take a single scale as input, the model learns the microphysical process characteristics of the precipitation field, effectively uses the meteorological and geographic factors, combines them with the radar reflectivity factors, depicts the precipitation phenomenon more objectively, further reduces errors, and obtains more accurate rainfall intensity estimates.
The specific technical scheme of the invention is as follows:
as shown in fig. 1, the rainfall intensity estimation method of the integrated multi-spatiotemporal doppler radar data includes the following steps:
s1, acquiring Doppler radar reflectivity data and ground station precipitation data respectively from the China Meteorological Data Network (http://data.cma.cn) according to historical precipitation records;
s2, preliminarily preprocessing the data, with the radar reflectivity factor as the main input and the meteorological and geographic factors as auxiliary inputs, where the meteorological factors mainly use temperature and humidity and the geographic factor uses elevation;
s3, using the composite reflectivity, converting the radar data in the polar coordinate system into grid data in a Cartesian coordinate system, and gridding the whole field with inverse distance weighted interpolation:

Z(x, y) = Σ_{i=1}^{n} (Z_i / h_i^p) / Σ_{i=1}^{n} (1 / h_i^p)

p is any positive real number, typically p = 2; h_i is the distance from the discrete point to the interpolation point:

h_i = sqrt( (x - x_i)^2 + (y - y_i)^2 )

(x, y) are the coordinates of the interpolation point and (x_i, y_i) are the coordinates of the discrete point. The weight of each discrete point can further be written as

w_i = [ (R - h_i) / (R · h_i) ]^2 / Σ_{j=1}^{n} [ (R - h_j) / (R · h_j) ]^2

where R is the distance from the interpolation point to the farthest discrete point and n is the total number of discrete points. Noise is removed using the Mahalanobis distance, quality control is applied to the radar data, accurate gridded radar data with latitude and longitude are finally obtained, and step 4 is executed;
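By way of illustration only (not part of the original disclosure), the inverse distance weighted gridding step can be sketched in NumPy as follows; the radius-limited weight form, the epsilon guard and the function name idw_interpolate are assumptions of this sketch.

```python
# Illustrative sketch of inverse distance weighted (IDW) interpolation in NumPy.
# The radius-limited weight form mirrors the formulas above; grid layout and the
# epsilon guard against division by zero are assumptions.
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query):
    """xy_known: (n, 2) discrete points, z_known: (n,), xy_query: (m, 2)."""
    # h[q, i]: distance from query point q to discrete point i
    h = np.sqrt(((xy_query[:, None, :] - xy_known[None, :, :]) ** 2).sum(axis=-1))
    R = h.max(axis=1, keepdims=True)                 # distance to the farthest discrete point
    w = ((R - h) / (R * np.maximum(h, 1e-6))) ** 2   # radius-limited IDW weights
    return (w * z_known[None, :]).sum(axis=1) / w.sum(axis=1)

# Example: scatter 5 radar samples onto 2 query grid points
pts = np.random.rand(5, 2)
vals = np.random.rand(5)
grid = np.array([[0.2, 0.3], [0.7, 0.6]])
print(idw_interpolate(pts, vals, grid))
```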
s4, acquiring 1 × 400 × 400 data sets of all sites at all times under a Cartesian coordinate system, cutting to obtain three single-layer data sets, normalizing all data, and merging the three single-layer data sets into one data sample for storage;
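For illustration, one possible NumPy sketch of the centered slicing and normalization in s4 is given below; the station position at the grid center and the min-max normalization are assumptions, since the text does not fix these details.

```python
# Illustrative sketch of the s4 slicing: cut 100/50/25-pixel windows centered on
# the station pixel out of a 1 x 400 x 400 field and min-max normalize each window.
import numpy as np

def crop_center(field, size, row, col):
    """Cut a size x size window of a (channels, H, W) field centered on (row, col)."""
    half = size // 2
    return field[:, row - half:row - half + size, col - half:col - half + size]

def make_sample(field, row=200, col=200, sizes=(100, 50, 25)):
    crops = []
    for s in sizes:
        c = crop_center(field, s, row, col).astype(np.float32)
        c = (c - c.min()) / (c.max() - c.min() + 1e-6)   # min-max normalization
        crops.append(c)
    return crops                                          # kept together as one data sample

sample = make_sample(np.random.rand(1, 400, 400))
print([c.shape for c in sample])   # [(1, 100, 100), (1, 50, 50), (1, 25, 25)]
```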
s5, dividing the data into a test set and a training set according to the ratio of 2:8 by taking the actual precipitation of the site as a ground truth label, and finally storing all the data in a matrix form;
s6, establishing a rainfall intensity estimation model designed by using a deep learning technology, and executing the step 7;
in this embodiment, the radar data image features are extracted using a mixed hole convolution. The mixed hole convolution adds zero pixels in the characteristic mapping of the standard convolution kernel for filling, thereby reducing the calculated amount and achieving the purpose of enlarging the receptive field. Compared with the common convolution, the cavity convolution can improve the resolution of the sampling image and realize dense feature extraction in the depth CNN under the condition of not increasing the parameter number. For a common convolution kernel with the size of K, the size of a corresponding hole convolution kernel is K + (K-1) × (R-1), wherein R is the hole rate when the characteristic diagram is sampled. The method adopts a mixed hole convolution (HDC) mode to build the network, avoids the grid effect from breaking the continuity between local information, and can sample a complete area of the original characteristic diagram. That is, for a number N of convolutional layers, each layer has a convolutional kernel size of K and an expansion ratio of [ r1,r2,…,ri]The maximum expansion ratio thereof needs to satisfy the following formula:
Mi=max[Mi+1-2ri,Mi+1-2(Mi+1-ri),ri]
wherein r isiExpansion ratio of i-th layer, MiIs the maximum expansion rate of the ith layer. By HDC, the receptive field can be expanded without losing local information, capturing more global information, as shown in fig. 2.
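As an illustration (not part of the original disclosure), a minimal PyTorch sketch of an HDC feature-extraction block is given below; the channel widths, the dilation series (1, 2, 5) and the placement of the max pooling are assumptions chosen to satisfy the condition above.

```python
# Illustrative hybrid dilated convolution (HDC) stack in PyTorch; max pooling
# provides the down-sampling described in S6.
import torch
import torch.nn as nn

class HDCBlock(nn.Module):
    def __init__(self, in_ch=3, out_ch=32, dilations=(1, 2, 5)):
        super().__init__()
        layers, ch = [], in_ch
        for d in dilations:
            # padding=d keeps the spatial size for a 3x3 kernel with dilation d
            layers += [nn.Conv2d(ch, out_ch, kernel_size=3, dilation=d, padding=d),
                       nn.ReLU(inplace=True)]
            ch = out_ch
        self.body = nn.Sequential(*layers)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.body(x))

# e.g. a batch of 1 x 3 x 100 x 100 multi-scale crops -> 1 x 32 x 50 x 50 features
print(HDCBlock()(torch.randn(1, 3, 100, 100)).shape)
```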
The non-local module considers all positions, whereas convolution and sequential operations cannot take so much information into account. It can directly compute the interaction between any two positions regardless of their distance, is efficient and effective, needs only a few layers, accepts inputs of various scales, is easy to combine with other models, and can capture global spatio-temporal features, assign different weights, and finally aggregate the spatio-temporal features at each position. The specific structure is shown in FIG. 3, and the equation is as follows:

y_i = (1 / C(x)) Σ_{∀j} f(x_i, x_j) g(x_j)

where x is the input and y is the output; the function f computes the feature similarity between the i-th position of x and the j-th position of x, the function g computes a feature representation of the j-th position of x, and C(x) is used for normalization. It can be seen that the feature at the i-th position of y is a weighted average of the features at all positions of x. When the embedded Gaussian form is selected for f,

f(x_i, x_j) = exp( θ(x_i)^T φ(x_j) )

and (1 / C(x)) f(x_i, x_j) is equal to a SoftMax computed along the j dimension, so that

y = SoftMax( θ(x)^T φ(x) ) g(x)

which is also a form of self-attention. In the present invention, 1 × 1 convolution operations are used to obtain the three different feature mappings. Multiplying the different mappings yields the pairwise similarity scores between global pixel points, which are converted by a SoftMax function into weight scores of the global information for each pixel point. The output z_i at each position is then a weighted sum of the global information:

z_i = W_z y_i + x_i

Adding the input as a residual term in the equation makes the non-local module more stable.
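For illustration, a minimal PyTorch sketch of such an embedded-Gaussian non-local block is given below; the channel sizes and the halved inner dimension are assumptions, and the class is a generic rendering of the equations above rather than the exact module of FIG. 3.

```python
# Illustrative embedded-Gaussian non-local block matching the equations above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)   # 1x1 feature mappings
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.w_z = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # b x HW x c'
        k = self.phi(x).flatten(2)                     # b x c' x HW
        v = self.g(x).flatten(2).transpose(1, 2)       # b x HW x c'
        attn = F.softmax(q @ k, dim=-1)                # pairwise similarity, SoftMax over j
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return self.w_z(y) + x                         # residual term: z_i = W_z * y_i + x_i

print(NonLocalBlock()(torch.randn(1, 32, 25, 25)).shape)
```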
The multi-scale attention module balances the large-scale image centered on the site with the small-scale image. It accepts two inputs, a small-scale feature map x_M and a large-scale feature map x_L; the specific module structure is shown in FIG. 4. Through a feature mapping, the small-scale feature map serves as the Key and the large-scale feature map serves as the Query. Multiplying the Key mapping by the Query mapping gives a pixel-by-pixel similarity score matrix between x_M and x_L, as follows:

G_{i,j} = (W_K * (x_M)_i)^T * (W_Q * (x_L)_j)

The similarity scores are normalized by a SoftMax function and used as weights to aggregate the small-scale information for each large-scale position. To make the module more stable, the input is connected to the end of the module as a shortcut;
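One possible PyTorch rendering of such a cross-scale attention module is sketched below for illustration; the value mapping W_V, the SoftMax axis and the channel sizes are assumptions, since the text specifies only the similarity-score formula and the shortcut connection.

```python
# Illustrative cross-scale attention sketch: the small-scale map x_M supplies
# Key/Value, the large-scale map x_L supplies Query.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttention(nn.Module):
    def __init__(self, channels=32, inter=16):
        super().__init__()
        self.w_k = nn.Conv2d(channels, inter, 1)
        self.w_q = nn.Conv2d(channels, inter, 1)
        self.w_v = nn.Conv2d(channels, inter, 1)   # assumed value mapping W_V
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x_m, x_l):
        b, _, hl, wl = x_l.shape
        k = self.w_k(x_m).flatten(2)                    # b x c' x Nm   (Key from x_M)
        q = self.w_q(x_l).flatten(2).transpose(1, 2)    # b x Nl x c'   (Query from x_L)
        v = self.w_v(x_m).flatten(2).transpose(1, 2)    # b x Nm x c'
        g = q @ k                                       # b x Nl x Nm similarity scores G
        attn = F.softmax(g, dim=-1)                     # normalize over small-scale pixels
        y = (attn @ v).transpose(1, 2).reshape(b, -1, hl, wl)
        return self.out(y) + x_l                        # shortcut connection

x_m, x_l = torch.randn(1, 32, 12, 12), torch.randn(1, 32, 25, 25)
print(MultiScaleAttention()(x_m, x_l).shape)
```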
s7, initializing the neuron weights, number of training epochs, learning rate and learning-rate decay coefficient of the model, obtaining a precipitation estimate through the feature extraction network and a fully connected neural network, and calculating the loss of the prediction result using a weighted combination of the mean square error (MSE) and the mean absolute error (MAE) as the loss function; back propagation is performed through the neural network, the gradient of each weight is calculated, the weights are updated according to a gradient descent algorithm, and the neuron weights are adjusted continuously until the training-set error is within a reasonable range, at which point network training stops and the optimal network model and parameters are obtained, as shown in FIG. 5;
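As an illustration, a minimal PyTorch sketch of the weighted MSE + MAE loss and one back-propagation update is given below; the weight alpha, the Adam optimizer and the stand-in network are assumptions, not the tuned configuration of the embodiment.

```python
# Illustrative weighted MSE + MAE loss and one gradient-descent update in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mse_mae_loss(pred, target, alpha=0.5):
    """Weighted combination of mean square error and mean absolute error."""
    return alpha * F.mse_loss(pred, target) + (1 - alpha) * F.l1_loss(pred, target)

# Stand-in for the feature-extraction network plus fully connected head
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 25 * 25, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x, y = torch.randn(8, 3, 25, 25), torch.rand(8, 1)   # dummy batch of crops and rain labels
optimizer.zero_grad()
loss = mse_mae_loss(model(x), y)
loss.backward()      # back-propagate the gradient of every weight
optimizer.step()     # gradient-descent update of the neuron weights
print(loss.item())
```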
s8, inputting the data in the test set into the network model as input layer data to obtain corresponding forecast precipitation data, as shown in FIG. 6;
s9, using the root mean square error (RMSE), the mean absolute error (MAE) and the correlation coefficient (CC) as the evaluation indexes for measuring model performance, with the following formulas:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (E_i - O_i)^2 )

MAE = (1/n) Σ_{i=1}^{n} | E_i - O_i |

CC = Σ_{i=1}^{n} (O_i - Ō)(E_i - Ē) / sqrt( Σ_{i=1}^{n} (O_i - Ō)^2 · Σ_{i=1}^{n} (E_i - Ē)^2 )

where O_i is the observed precipitation, E_i is the estimated precipitation, and Ō and Ē are their respective means. The correlation coefficient (CC) is used to measure the correlation between the real values and the estimated values, and the results are analyzed in the time and space dimensions respectively to obtain the optimal result.
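For illustration, the three evaluation indexes can be computed with the following NumPy sketch; the function names and the sample values are placeholders.

```python
# Illustrative NumPy implementation of the three evaluation indexes above.
import numpy as np

def rmse(obs, est):
    return np.sqrt(np.mean((est - obs) ** 2))

def mae(obs, est):
    return np.mean(np.abs(est - obs))

def cc(obs, est):
    o, e = obs - obs.mean(), est - est.mean()
    return (o * e).sum() / np.sqrt((o ** 2).sum() * (e ** 2).sum())

obs = np.array([0.0, 1.2, 3.5, 0.4])   # observed precipitation (mm)
est = np.array([0.1, 1.0, 3.0, 0.6])   # estimated precipitation (mm)
print(rmse(obs, est), mae(obs, est), cc(obs, est))
```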
In S9, the root mean square error (RMSE), the mean absolute error (MAE) and the correlation coefficient (CC) are used as the evaluation indexes for measuring model performance; the correlation coefficient measures the correlation between the real values and the estimated values, and the results are analyzed in the time and space dimensions respectively to obtain the optimal result. RMSE is one of the most commonly used evaluation indexes in the experiments; it is strongly affected by outliers and reflects the upper bound of the error between the real and estimated values. MAE is less affected by outliers and reflects the overall error between the real and estimated values. CC measures the correlation between the real and estimated values; in this study, a larger CC together with smaller RMSE and MAE values indicates the superiority of the final model.
Specifically, the model parameters are set as follows: the total number of training epochs is 500, and the learning rate is initialized to 0.0001 and varied dynamically; when it no longer changes for more than 20 rounds, training stops automatically, so that the model converges quickly without diverging. All data are normalized, which further improves the convergence speed of the model.
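For illustration, one way such a schedule could be realized in PyTorch is sketched below; the ReduceLROnPlateau scheduler, the stand-in model and the placeholder validation loss are assumptions about implementation details the text does not specify.

```python
# Illustrative rendering of the schedule above: lr 1e-4, at most 500 epochs,
# dynamic learning rate, early stop after 20 rounds without improvement.
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                               # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=5)

best, stall = float("inf"), 0
for epoch in range(500):
    val_loss = torch.rand(1).item()                   # placeholder for the epoch's validation loss
    scheduler.step(val_loss)                          # dynamic learning-rate adjustment
    if val_loss < best:
        best, stall = val_loss, 0
    else:
        stall += 1
        if stall >= 20:                               # no change for 20 rounds: stop early
            break
```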
Compared with the traditional Z-R relationship, it is found that the Z-R relationship cannot fit the relation between the reflectivity factor and rainfall well and differs considerably from the ground truth. The BPN network used here performs better than the traditional Z-R relationship, which effectively demonstrates the advantage of deep learning methods in fitting rainfall to reflectivity factors. The CNN, in turn, captures the spatial structure ignored by the BP network, as shown by the CNN model comparison results. Among the different CNN methods, adding meteorological and geographic factors as auxiliary variables yields more accurate rainfall values than using the reflectivity factor alone as input, which further verifies the correlation of rainfall with the meteorological and geographic spatial environment.
The previous description is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. The rainfall intensity estimation method for comprehensive multi-space-time-scale Doppler radar data, characterized by comprising the following steps:
s1, obtaining Doppler single polarization radar data and ground automatic weather station observation data;
s2, acquiring radar reflectivity factors, meteorological factors and geographic factors;
s3, using the composite reflectivity, converting the radar data in the polar coordinate system into grid data in a Cartesian coordinate system, gridding the data with inverse distance weighted interpolation, removing noise using the Mahalanobis distance, performing quality control on the radar data, and acquiring accurate gridded radar data with latitude and longitude;
s4, acquiring the 1 × 400 × 400 data sets of all sites at all times in the Cartesian coordinate system, cutting them to obtain three single-layer data sets, normalizing all the data, and combining the three single-layer data sets into one data sample for storage;
s5, dividing the data into a test set and a training set according to the ratio of 2:8 by taking the actual precipitation of the site as a ground truth label, and finally storing all the data in a matrix form;
s6, establishing a rainfall intensity estimation model designed by using a deep learning technology;
s7, initializing the neuron weights, number of training epochs, learning rate and learning-rate decay coefficient of the model, obtaining a precipitation estimate through the feature extraction network and a fully connected neural network, calculating the loss of the prediction result, and obtaining the optimal network model and parameters;
s8, inputting the data in the test set into the network model as input layer data to obtain corresponding forecast precipitation data;
s9, selecting an evaluation index for measuring model performance, measuring the correlation between the true value and the estimated value, and analyzing in time and space dimensions respectively according to the result to obtain the optimal result.
2. The method of claim 1, wherein the method comprises: in S1, the Doppler radar base data and the ground station precipitation data are respectively obtained from the China Meteorological Data Network.
3. The method of claim 1, wherein the method comprises: in S2, the radar reflectivity factor is the main input, the meteorological factor and the geographic factor are the auxiliary inputs, the meteorological factor mainly uses the temperature and the humidity, the geographic factor adopts the elevation, and the data are subjected to preliminary preprocessing.
4. The method of claim 1, wherein the method comprises: in S3, the grid point nearest to the national weather station is selected as the center of the reflectivity factor, multi-scale input is adopted, and then radar base data in polar coordinates are converted into grid data in Cartesian coordinates.
5. The method of claim 1, wherein the method comprises: in S3, the step of removing noise includes: denoising and filtering with conventional echo image processing methods and removing pixel points with values smaller than 70.
6. The method of claim 1, wherein the method comprises: in S4, the 1 × 400 × 400 data sets of all sites at all times in the cartesian coordinate system are obtained, the cutting is performed with the national weather site as the center, the single-layer data sets of 1 × 100 × 100, 1 × 50 × 50, and 1 × 25 × 25 are respectively obtained, all the data are normalized, and the three single-layer data sets are combined into one data sample to be stored.
7. The method of claim 1, wherein the method comprises: in S6, precipitation features are extracted from the radar data images using hybrid dilated convolution; down-sampling is performed with max pooling to remove redundant information, compress the features and reduce network complexity; the non-local module is used to enlarge the receptive field of the higher network layers so that the acquired information is more widely distributed; and the large-scale and small-scale images centered on the site are balanced using a designed multi-scale attention module.
8. The method of claim 7, wherein the method comprises: in S7, calculating loss of the prediction result by using a weighted combination of Mean Square Error (MSE) and Mean Absolute Error (MAE) as a loss function; and (3) performing back propagation by using a neural network, calculating the gradient of each weight, updating the weight according to a gradient descent algorithm, continuously adjusting the weight of the neuron, stopping network training until the error of the training set is within a reasonable range, and obtaining an optimal network model and parameters.
9. The method of claim 1, wherein the method comprises: in S9, the Root Mean Square Error (RMSE), the Mean Absolute Error (MAE), and the Correlation Coefficient (CC) are used as evaluation indexes for measuring the model performance.
CN202210417830.9A 2022-04-20 2022-04-20 Rainfall intensity estimation method for comprehensive multi-time space-scale Doppler radar data Active CN114742206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210417830.9A CN114742206B (en) 2022-04-20 2022-04-20 Rainfall intensity estimation method for comprehensive multi-time space-scale Doppler radar data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210417830.9A CN114742206B (en) 2022-04-20 2022-04-20 Rainfall intensity estimation method for comprehensive multi-time space-scale Doppler radar data

Publications (2)

Publication Number Publication Date
CN114742206A true CN114742206A (en) 2022-07-12
CN114742206B CN114742206B (en) 2023-07-25

Family

ID=82282727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210417830.9A Active CN114742206B (en) 2022-04-20 2022-04-20 Rainfall intensity estimation method for comprehensive multi-time space-scale Doppler radar data

Country Status (1)

Country Link
CN (1) CN114742206B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107807907A (en) * 2017-09-08 2018-03-16 中国电力科学研究院 A kind of precipitation classification method and system
CN109782375A (en) * 2019-01-07 2019-05-21 华东交通大学 Precipitation intensity estimating and measuring method and system based on big data
CN110346844A (en) * 2019-07-15 2019-10-18 南京恩瑞特实业有限公司 Quantitative Precipitation estimating and measuring method of the NRIET based on cloud classification and machine learning
CN111625993A (en) * 2020-05-25 2020-09-04 中国水利水电科学研究院 Small watershed surface rainfall interpolation method based on mountainous terrain and rainfall characteristic prediction
CN112965146A (en) * 2021-04-14 2021-06-15 中国水利水电科学研究院 Quantitative precipitation estimation method combining meteorological radar and rainfall barrel observation data
CN113791415A (en) * 2021-09-15 2021-12-14 南京信息工程大学 Radar quantitative precipitation estimation method based on deep learning
CN113936142A (en) * 2021-10-13 2022-01-14 成都信息工程大学 Rainfall approach forecasting method and device based on deep learning

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
PANQU WANG ET AL.: "Understanding Convolution for Semantic Segmentation", 《IEEE》, 31 December 2018 (2018-12-31), pages 1451-1460
WEI TIAN ET AL.: "Radar Reflectivity and Meteorological Factors Merging Based Precipitation Estimation Neural Network", 《EARTH AND SPACE SCIENCE》, 31 December 2021 (2021-12-31), pages 1-19
WEI TIAN ET AL.: "Ground radar precipitation estimation with deep learning approaches in meteorological private cloud", 《JOURNAL OF CLOUD COMPUTING》, 31 December 2020 (2020-12-31), pages 1-12
XIAOLONG WANG ET AL.: "Non-local Neural Networks", 《IEEE》, 31 December 2018 (2018-12-31), pages 7794-7803

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824372A (en) * 2023-06-21 2023-09-29 中国水利水电科学研究院 Urban rainfall prediction method based on Transformer
CN116824372B (en) * 2023-06-21 2023-12-08 中国水利水电科学研究院 Urban rainfall prediction method based on Transformer
CN117455809A (en) * 2023-10-24 2024-01-26 武汉大学 Image mixed rain removing method and system based on depth guiding diffusion model
CN117455809B (en) * 2023-10-24 2024-05-24 武汉大学 Image mixed rain removing method and system based on depth guiding diffusion model

Also Published As

Publication number Publication date
CN114742206B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
US20220043182A1 (en) Spatial autocorrelation machine learning-based downscaling method and system of satellite precipitation data
CN113936142B (en) Precipitation proximity forecasting method and device based on deep learning
Si et al. Hybrid solar forecasting method using satellite visible images and modified convolutional neural networks
CN114742206B (en) Rainfall intensity estimation method for comprehensive multi-time space-scale Doppler radar data
CN109001736B (en) Radar echo extrapolation method based on deep space-time prediction neural network
CN108761574A (en) Rainfall evaluation method based on Multi-source Information Fusion
CN114254561A (en) Waterlogging prediction method, waterlogging prediction system and storage medium
CN103743402B (en) A kind of underwater intelligent self adaptation Approach of Terrain Matching of topographic information based amount
CN113496104A (en) Rainfall forecast correction method and system based on deep learning
CN113255972B (en) Short-term rainfall prediction method based on Attention mechanism
CN112949407B (en) Remote sensing image building vectorization method based on deep learning and point set optimization
CN114445634A (en) Sea wave height prediction method and system based on deep learning model
CN111145245A (en) Short-time approaching rainfall forecasting method and system and computer readable storage medium
CN114049545B (en) Typhoon intensity determining method, system, equipment and medium based on point cloud voxels
CN112966853A (en) Urban road network short-term traffic flow prediction method based on space-time residual error mixed model
CN117556197B (en) Typhoon vortex initialization method based on artificial intelligence
CN116933621A (en) Urban waterlogging simulation method based on terrain feature deep learning
CN115792853A (en) Radar echo extrapolation method based on dynamic weight loss
CN115691049A (en) Convection birth early warning method based on deep learning
CN115049013A (en) Short-term rainfall early warning model fusion method combining linearity and SVM
CN114882373A (en) Multi-feature fusion sandstorm prediction method based on deep neural network
CN111488974B (en) Ocean wind energy downscaling method based on deep learning neural network
CN116699731B (en) Tropical cyclone path short-term forecasting method, system and storage medium
CN117131991A (en) Urban rainfall prediction method and platform based on hybrid neural network
CN111811465A (en) Method for predicting sea wave effective wave height based on multi-sine function decomposition neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant