CN114240796B - Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN - Google Patents
- Publication number
- CN114240796B CN114240796B CN202111578218.1A CN202111578218A CN114240796B CN 114240796 B CN114240796 B CN 114240796B CN 202111578218 A CN202111578218 A CN 202111578218A CN 114240796 B CN114240796 B CN 114240796B
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- cloud
- sensing image
- discriminator
- fog
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The application discloses a GAN-based remote sensing image cloud and fog removal method, device and storage medium, wherein the method comprises the following steps: dividing acquired remote sensing data with cloud and fog into visibility levels; training on the training set of each visibility level in turn, in order from high visibility to low, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud-removed remote sensing image; inputting the real clear remote sensing image and the generated cloud-removed remote sensing image into the discriminator, so that the discriminator can distinguish the real clear remote sensing image from the cloud-removed remote sensing image; training the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level; and interacting the cloud and fog removal GAN model with a remote sensing image application system to obtain feedback and update the generator parameters, generating a personalized cloud and fog removal GAN model corresponding to that remote sensing image application system.
Description
Technical Field
The application relates to the technical field of remote sensing, in particular to a remote sensing image cloud and fog removing method, equipment and storage medium based on GAN.
Background
A generative adversarial network (GAN) is one of the most important approaches of recent years for unsupervised learning over complex distributions. A GAN consists of a generating network (Generator) and a discriminating network (Discriminator), which produce high-quality output through a mutual game: by sampling from a complex probability distribution and learning against each other, the two neural networks are trained jointly. GAN technology is now widely used in many fields.
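The generator-discriminator game can be made concrete with a small numerical sketch. The following illustrative numpy code estimates the standard GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]; it is not part of the patent, and `gan_value` is a hypothetical helper named for exposition only.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of the GAN value function
    V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].
    d_real: discriminator scores in (0, 1) on real samples.
    d_fake: discriminator scores in (0, 1) on generated samples."""
    eps = 1e-12  # numerical guard against log(0)
    return float(np.mean(np.log(d_real + eps)) +
                 np.mean(np.log(1.0 - d_fake + eps)))

rng = np.random.default_rng(0)
# A well-trained discriminator scores real samples near 1 and fakes near 0,
# driving the value up; the generator is trained to push the value back down.
confident = gan_value(rng.uniform(0.8, 0.99, 100), rng.uniform(0.01, 0.2, 100))
# When the generator fools the discriminator, all scores hover near 0.5.
fooled = gan_value(rng.uniform(0.45, 0.55, 100), rng.uniform(0.45, 0.55, 100))
```

A confident discriminator always attains a strictly higher value than a fooled one here, which is exactly the tension the alternating training in this patent exploits.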
Remote sensing technology has been widely applied in recent years. Multispectral and panchromatic images captured by satellites are fused into remote sensing images of higher spatial and spectral resolution. Compared with other technical means, remote sensing has advantages in acquiring basic geographic data, resource information and emergency disaster data, and is widely used in national economic and military fields. However, remote sensing images are highly susceptible to cloud interference during imaging: the remote sensing information of cloud-occluded areas is lost or biased, which greatly affects the accuracy of the remote sensing data and reduces the efficiency of remote sensing applications.
Methods based on the spectral characteristics of remote sensing images, such as the haze optimized transformation (HOT) and the background suppressed haze thickness index (BSHTI), can remove cloud and fog, but do not always give satisfactory results.
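As a hedged illustration of the spectral-feature approach mentioned above, the numpy sketch below computes a HOT-style haze index. The slope angle `theta` and the thresholding rule are placeholder assumptions (in practice the clear-sky line is fitted from hand-picked clear pixels); nothing here comes from the patent itself.

```python
import numpy as np

def hot_index(blue, red, theta=0.9):
    """HOT-style haze index (sketch).
    Pixels are projected perpendicular to an assumed 'clear-sky line'
    of slope angle theta (radians) in blue-red spectral space; hazy
    pixels drift off that line toward higher blue reflectance, so they
    receive larger index values. theta is illustrative, not fitted."""
    return np.sin(theta) * blue - np.cos(theta) * red

# Toy 2x2 scene: top row clear (near the line), bottom row hazy (extra blue).
blue = np.array([[0.10, 0.12], [0.45, 0.50]])
red  = np.array([[0.08, 0.10], [0.12, 0.13]])
hot = hot_index(blue, red)
haze_mask = hot > hot[0].max()  # crude threshold taken from the clear row
```

The resulting `haze_mask` flags only the bottom (hazy) row, illustrating why such indices work for detection yet give no principled way to reconstruct the occluded surface, which motivates the GAN approach of this application.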
Disclosure of Invention
The application provides a GAN-based remote sensing image cloud and fog removal method, which solves the technical problem that acquired remote sensing images are inaccurate due to occlusion by cloud and fog.
A remote sensing image cloud and fog removal method based on a generative adversarial network (GAN), the GAN comprising a generator and a discriminator, the method comprising:
acquiring remote sensing data with cloud and fog, and dividing the data into visibility levels;
dividing the cloudy remote sensing data into a training set per visibility level, and training on each level's training set in turn, in order from high visibility to low; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud-removed remote sensing image;
inputting a real clear remote sensing image and the generated cloud-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that it can distinguish the real clear remote sensing image from the cloud-removed remote sensing image;
training the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level;
and interacting the cloud and fog removal GAN model with a remote sensing image application system to obtain feedback and update the generator parameters, generating a personalized cloud and fog removal GAN model corresponding to that remote sensing image application system.
In one embodiment of the present application, dividing the cloudy remote sensing data into a training set per visibility level, training each level's set in turn from high visibility to low, fixing the discriminator's model parameters during training, inputting each level's training set into the generator, and generating a cloud-removed remote sensing image specifically includes: dividing the visibility into levels L1 to Ln, with visibility decreasing sequentially from L1 to Ln; and acquiring, for each level from L1 to Ln in turn, the corresponding cloudy remote sensing training set, fixing the model parameters of the discriminator, inputting the training set RSData-TD of each visibility level into the generator for training, and generating the cloud-removed remote sensing image RSImg-TD corresponding to each visibility level.
In one embodiment of the present application, after the cloud-removed remote sensing image is generated, the method further comprises: forming (RSData-TD, RSImg-TD) data pairs from the training set RSData-TD and the cloud-removed remote sensing images RSImg-TD; forming (RSData-TD, RSReal-TD) data pairs from the training set RSData-TD and the real clear remote sensing images RSReal-TD; and requiring the discriminator to output a negative value when given a (RSData-TD, RSImg-TD) data pair and a positive value when given a (RSData-TD, RSReal-TD) data pair.
In one embodiment of the application, the method further comprises: updating the network parameters of the generator by gradient descent during training, and outputting (RSData-TD, RSImg-TD) data pairs until the discriminator can no longer distinguish (RSData-TD, RSImg-TD) data pairs from (RSData-TD, RSReal-TD) data pairs.
In one embodiment of the present application, inputting the real clear remote sensing image and the generated cloud-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that it can distinguish the two, specifically includes: inputting a real clear remote sensing image and the generated cloud-removed remote sensing image into the discriminator, fixing the network parameters of the generator, training the discriminator, and obtaining the error between the real clear remote sensing image and the generated cloud-removed remote sensing image according to a loss function; and back-propagating the error and updating the network parameters of the discriminator D, so that (RSData-TD, RSImg-TD) data pairs passed through the discriminator D output a negative low score and (RSData-TD, RSReal-TD) data pairs output a positive high score, enabling the discriminator to effectively distinguish (RSData-TD, RSReal-TD) data pairs from (RSData-TD, RSImg-TD) data pairs.
In one embodiment of the present application, before acquiring the remote sensing data with cloud and fog, the method further includes: collecting remote sensing data, annotating the data, and marking cloud and fog area, thickness and visibility level; performing model training according to the remote sensing data and the annotation results to generate a cloud and fog region detection model; and inputting the output of the cloud and fog region detection model together with cloud-free remote sensing data into a cloud cover model for training, so that cloud-free remote sensing data passed through the cloud cover model yields remote sensing data with cloud cover.
In one embodiment of the application, the method further comprises: judging the visibility level of the cloud and fog region according to the cloud and fog region detection model; selecting the cloud and fog removal GAN model of the corresponding level according to the visibility level; generating a cloud-removed remote sensing image with that GAN model; and cropping the cloud and fog region and filling the identified region according to the cloud-removed remote sensing image to generate the final remote sensing image.
In one embodiment of the application, the method further comprises: continuously collecting remote sensing data, and optimizing the cloud and fog region detection model and the cloud cover model to produce a more accurate data set for training the cloud and fog removal GAN model; subdividing the visibility levels according to feedback from the remote sensing image application system and continuously optimizing the cloud and fog removal GAN model to generate more reasonable and accurate cloud-removed remote sensing images; and adjusting the algorithm of the remote sensing image application system according to the generated cloud-removed remote sensing image, further optimizing the service system based on remote sensing image analysis.
A GAN-based remote sensing image cloud and fog removal device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquire remote sensing data with cloud and fog, and divide the data into visibility levels;
divide the cloudy remote sensing data into a training set per visibility level, and train on each level's training set in turn, in order from high visibility to low; during training, fix the model parameters of the discriminator and input the training set corresponding to each visibility level into the generator to generate a cloud-removed remote sensing image;
input a real clear remote sensing image and the generated cloud-removed remote sensing image into the discriminator, and train and update the parameters of the discriminator so that it can distinguish the real clear remote sensing image from the cloud-removed remote sensing image;
train the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level;
and interact the cloud and fog removal GAN model with a remote sensing image application system to obtain feedback and update the generator parameters, generating a personalized cloud and fog removal GAN model corresponding to that remote sensing image application system.
A non-volatile storage medium storing computer executable instructions configured to:
acquire remote sensing data with cloud and fog, and divide the data into visibility levels;
divide the cloudy remote sensing data into a training set per visibility level, and train on each level's training set in turn, in order from high visibility to low; during training, fix the model parameters of the discriminator and input the training set corresponding to each visibility level into the generator to generate a cloud-removed remote sensing image;
input a real clear remote sensing image and the generated cloud-removed remote sensing image into the discriminator, and train and update the parameters of the discriminator so that it can distinguish the real clear remote sensing image from the cloud-removed remote sensing image;
train the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level;
and interact the cloud and fog removal GAN model with a remote sensing image application system to obtain feedback and update the generator parameters, generating a personalized cloud and fog removal GAN model corresponding to that remote sensing image application system.
The application provides a GAN-based remote sensing image cloud and fog removal method, device and storage medium with at least the following beneficial effects. Using a GAN and deep learning, a cloud and fog removal GAN model for remote sensing data is constructed; the model fully considers the characteristics of remote sensing images, effectively exploits the correlation among the multiple bands of the remote sensing data, and combines the characteristics of different cloud and fog thicknesses, so that a remote sensing image with a more accurate cloud and fog removal effect is generated. Compared with traditional cloud and fog elimination techniques, the GAN can better discover the deep connection between cloud and fog occlusion and ground features, generate more reasonable and accurate remote sensing images, and eliminate the remote sensing information deviation caused by cloud-occluded areas. A cloud target detection model is used to identify the specific cloud and fog region and determine its thickness and visibility level; on the one hand this reduces the area over which defogged content must be generated and preserves the accuracy of unoccluded areas, and on the other hand several targeted models are formed for the different visibility levels, so that a more suitable model can be selected for each cloud condition and the remote sensing image achieves a better cloud elimination effect. Models are trained in order of gradually decreasing cloud and fog visibility, with each level initialized from the parameters of the previous level, and the generator and discriminator within each model are trained alternately, so convergence is reached more rapidly and training efficiency is improved.
In addition, joint training with the interfaced application system forms a more accurate and reasonable personalized model that meets the actual business requirements of the remote sensing image application; feedback data are continuously collected to optimize the model and further improve its accuracy, while the actual service application system can also be optimized, forming an overall optimal service system based on remote sensing image analysis.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a schematic step diagram of a remote sensing image cloud and fog removing method based on GAN according to an embodiment of the present application;
Fig. 2 is a training diagram of a cloud and mist removal GAN model according to an embodiment of the present application;
fig. 3 is a schematic diagram of a device composition of a remote sensing image cloud and fog removing method based on GAN according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be clearly and completely described in connection with the following specific embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In one embodiment of the application, a remote sensing image cloud removal model based on a generative adversarial network is designed by fully considering the correlation among the multiple bands of remote sensing data and combining the characteristics of different cloud and fog thicknesses. Basic models at different levels are formed by alternately training the generator and the discriminator in the model, and interactive retraining against the practical remote sensing image application system yields a more accurate and reasonable personalized model that meets the practical service requirements of the remote sensing image application. Combined with the spectral characteristics of remote sensing images, generative adversarial network technology can effectively identify cloud and fog regions; a remote sensing image cloud and fog removal model is designed with this technology to generate cloud-removed remote sensing images, eliminate the remote sensing information deviation of cloud-occluded regions, and improve the application efficiency of remote sensing data. The following is a detailed description.
Fig. 1 is a schematic step diagram of a remote sensing image cloud and fog removing method based on GAN according to an embodiment of the present application, which may include the following steps:
s101: and acquiring cloud remote sensing data, and dividing the visibility level of the cloud remote sensing data.
In one embodiment of the present application, to train the generative adversarial network, a training set is first generated; the network is then trained on that set, and finally trained jointly with the application system so as to generate remote sensing images for different services, as shown in fig. 2.
In one embodiment of the application, before the cloudy remote sensing data are acquired: the remote sensing data RS-Data are multi-channel data formed from multispectral acquisition, and their visible-light channels are combined with the panchromatic image to form a remote sensing image. The remote sensing data are annotated, with cloud and fog area, thickness and visibility level marked; remote sensing data of the same area under different weather conditions can also be annotated. Model training is performed according to the remote sensing data and the annotation results to generate a cloud and fog region detection model. The cloud and fog region detection model CL-Det is responsible for target detection over the cloud-covered region of the remote sensing data, identifying the specific cloud and fog region and determining the thickness and visibility level of the cloud and fog.
The output of the cloud and fog region detection model and cloud-free remote sensing data are input into the cloud cover model for training, so that cloud-free remote sensing data passed through the cloud cover model yields remote sensing data with cloud cover. The cloud cover model CL-Cov adds cloud cover to cloud-free remote sensing data according to a set cloud area and thickness level.
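The patent does not specify how the cloud cover model CL-Cov synthesizes cover; one common, physically motivated choice is the scattering model I = J·t + A·(1 - t), with transmission t and airlight A. The sketch below rests on that assumption; `add_cloud_cover`, the airlight value and the thickness map are illustrative names and values, not the patent's implementation.

```python
import numpy as np

def add_cloud_cover(clear, thickness, airlight=1.0):
    """Overlay synthetic cloud/haze on a clear band (sketch).
    clear:     clear-sky band, values in [0, 1]
    thickness: per-pixel cloud thickness in [0, 1] (0 = no cloud)
    Uses I = J*t + A*(1 - t) with transmission t = 1 - thickness and
    airlight A, standing in for CL-Cov's unspecified internal form."""
    t = 1.0 - thickness
    return clear * t + airlight * (1.0 - t)

clear = np.full((2, 2), 0.3)                     # uniform clear-sky band
thickness = np.array([[0.0, 0.2], [0.5, 0.9]])   # thicker toward bottom-right
cloudy = add_cloud_cover(clear, thickness)       # brightens with thickness
```

A pair of `clear` and `cloudy` arrays produced this way is exactly the kind of (cloud-free, cloudy) training pair the description says CL-Cov is used to manufacture for the training set TD.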
The cloud and fog region detection model CL-Det is used to judge the accuracy of the cloudy remote sensing data generated by the cloud cover model CL-Cov, and CL-Cov is optimized and adjusted accordingly; the training set TD is constructed from the annotated data and the generated data.
S102: According to the training sets of cloudy remote sensing data divided by visibility level, the training set of each visibility level is trained in turn, in order from high visibility to low; during training, the model parameters of the discriminator are fixed, the training set corresponding to each visibility level is input into the generator, and a cloud-removed remote sensing image is generated.
In one embodiment of the present application, the visibility is divided into levels L1 to Ln, with visibility decreasing sequentially from L1 to Ln; the cloudy remote sensing training set corresponding to each visibility level from L1 to Ln is then acquired in turn.
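The division into levels L1 to Ln can be sketched as a simple binning step. The kilometre thresholds below are illustrative assumptions only; the patent leaves the actual level boundaries to the labelling stage.

```python
def visibility_level(visibility_km, thresholds=(10.0, 4.0, 1.0)):
    """Map a visibility estimate to a training level L1..Ln (sketch).
    thresholds are illustrative lower bounds in descending order:
    L1 = clearest (>= 10 km) ... L4 = densest (< 1 km)."""
    for i, bound in enumerate(thresholds, start=1):
        if visibility_km >= bound:
            return f"L{i}"
    return f"L{len(thresholds) + 1}"  # below every bound: densest level

levels = [visibility_level(v) for v in (15, 6, 2, 0.5)]
```

With the assumed thresholds, the four sample visibilities map to L1 through L4 in order, matching the "visibility decreasing from L1 to Ln" convention.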
The basic remote sensing image cloud and fog removal generative adversarial network model RS-M-TD (L1-Ln): the GAN comprises a generator G and a discriminator D, and remote sensing image cloud and fog removal GAN models at multiple visibility levels are formed according to the different cloud and fog thicknesses and visibilities.
The model parameters of the discriminator are fixed, and the training set RSData-TD of each visibility level is input into the generator G for training, generating the cloud-removed remote sensing image RSImg-TD corresponding to each visibility level. The core of the generator G of the cloud and fog removal GAN model is a convolutional neural network (CNN): given remote sensing data under cloudy conditions, it generates a clear remote sensing image with the cloud and fog removed.
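The per-level training order, with each level's generator warm-started from the previous level's parameters (a point the beneficial-effects discussion credits with faster convergence), can be sketched as follows. `train_one_level` is a stub standing in for the actual generator training against the fixed discriminator; none of these names come from the patent.

```python
def train_per_level(levels, training_sets, train_one_level):
    """Curriculum over visibility levels (sketch).
    levels:          ["L1", ..., "Ln"], highest visibility first
    training_sets:   {level: RSData-TD for that level}
    train_one_level: callable(level_data, init_params) -> trained params
    Each level starts from the previous level's parameters."""
    params = None  # L1 starts from a fresh initialisation
    models = {}
    for level in levels:
        params = train_one_level(training_sets[level], params)
        models[level] = params
    return models

# Stub trainer that merely records its warm-start lineage.
trained = train_per_level(
    ["L1", "L2"],
    {"L1": "data1", "L2": "data2"},
    lambda data, init: (data, init),
)
```

The recorded lineage shows L2 beginning exactly where L1 ended, which is the mechanism claimed to speed convergence.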
In one embodiment of the application, after the cloud-removed remote sensing image is generated, (RSData-TD, RSImg-TD) data pairs are formed from the training set RSData-TD and the cloud-removed remote sensing images RSImg-TD, and (RSData-TD, RSReal-TD) data pairs are formed from the training set RSData-TD and the real clear remote sensing images RSReal-TD; the discriminator should output a negative value for (RSData-TD, RSImg-TD) pairs and a positive value for (RSData-TD, RSReal-TD) pairs. By continuously distinguishing real remote sensing images from generated cloud-removed images, the GAN model is continuously optimized.
In one embodiment of the application, the network parameters of the generator are updated by gradient descent during training, and (RSData-TD, RSImg-TD) data pairs are output until the discriminator cannot distinguish (RSData-TD, RSImg-TD) pairs from (RSData-TD, RSReal-TD) pairs.
S103: the real and clear remote sensing image and the generated cloud and fog removing remote sensing image are input into the discriminator, and parameters of the discriminator are trained and updated, so that the discriminator can distinguish the real and clear remote sensing image and the cloud and fog removing remote sensing image.
In one embodiment of the application, the core of the discriminator D of the cloud and fog removal GAN model is a binary classifier used to distinguish the real clear remote sensing image from the cloud-removed remote sensing image generated by the generator G: given a remote sensing image as input, it outputs a discrimination value indicating whether the input is a real remote sensing image or one generated by the generator G.
Inputting the real clear remote sensing image and the generated cloud-removed remote sensing image into the discriminator means inputting the (RSData-TD, RSImg-TD) and (RSData-TD, RSReal-TD) data pairs into the discriminator, fixing the network parameters of the generator, training the discriminator, and obtaining the error between the real clear remote sensing image and the generated cloud-removed remote sensing image according to a loss function. The error is back-propagated and the network parameters of the discriminator D are updated, so that (RSData-TD, RSImg-TD) data pairs output a negative low score (such as -1) through the discriminator D, and (RSData-TD, RSReal-TD) data pairs output a positive high score (such as 1), enabling the discriminator to effectively distinguish (RSData-TD, RSReal-TD) data pairs from (RSData-TD, RSImg-TD) data pairs.
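The negative-low-score / positive-high-score targets can be illustrated with a least-squares-style discriminator loss. The linear discriminator below is a deliberately minimal stand-in (the patent's discriminator D is a CNN-based binary classifier, and its loss function is not specified); the sketch only shows that one gradient-descent step, with the generator's outputs held fixed, reduces this loss.

```python
import numpy as np

def d_loss(w, fake, real):
    """Least-squares-style discriminator loss (sketch): scores of
    (RSData-TD, RSImg-TD) pairs are pushed toward -1 and scores of
    (RSData-TD, RSReal-TD) pairs toward +1."""
    return np.mean((fake @ w + 1.0) ** 2) + np.mean((real @ w - 1.0) ** 2)

def d_step(w, fake, real, lr=0.05):
    """One gradient-descent update of the (linear, for illustration)
    discriminator, with the generator's outputs treated as constants."""
    grad = (2 * fake.T @ (fake @ w + 1.0) / len(fake)
            + 2 * real.T @ (real @ w - 1.0) / len(real))
    return w - lr * grad

rng = np.random.default_rng(1)
fake = rng.normal(-0.5, 0.2, (64, 3))   # toy features of generated pairs
real = rng.normal(+0.5, 0.2, (64, 3))   # toy features of real pairs
w = np.zeros(3)
before = d_loss(w, fake, real)          # exactly 2.0 for the zero weights
w = d_step(w, fake, real)
after = d_loss(w, fake, real)           # strictly smaller after the step
```

Alternating this discriminator step with a generator step (under fixed discriminator parameters) is the training scheme S102-S104 describe.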
S104: the generator and the discriminator are trained alternately to generate a cloud and mist removal GAN model corresponding to each visibility level.
S105: The cloud and fog removal GAN model interacts with the remote sensing image application system to obtain feedback and update the generator parameters, generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
In one embodiment of the application, the visibility level of the cloud and fog region is judged according to a cloud and fog region detection model; a cloud and fog removing GAN model of the corresponding level is selected according to the visibility level; a cloud and fog removing remote sensing image is generated by the selected cloud and fog removing GAN model; and the identified cloud and fog region is intercepted and filled according to the cloud and fog removing remote sensing image, generating the final remote sensing image.
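The inference flow just described (judge level, select model, generate, fill) can be sketched as below. Everything here is a hypothetical stand-in: `detect_level` plays the role of the cloud and fog region detection model, the brightness thresholds are invented, and the per-level "models" simply subtract a haze offset.

```python
import numpy as np

def detect_level(patch, thresholds=(0.3, 0.6)):
    # Hypothetical stand-in for the cloud and fog region detection model:
    # classify the visibility level from the mean brightness of the patch.
    haze = float(np.mean(patch))
    if haze < thresholds[0]:
        return "L1"
    if haze < thresholds[1]:
        return "L2"
    return "L3"

def remove_cloud(image, region, models):
    r0, r1, c0, c1 = region
    patch = image[r0:r1, c0:c1]
    level = detect_level(patch)    # 1) judge the visibility level of the region
    dehaze = models[level]         # 2) select the GAN model of that level
    restored = dehaze(patch)       # 3) generate the dehazed patch
    out = image.copy()
    out[r0:r1, c0:c1] = restored   # 4) fill the region into the final image
    return out, level

# Placeholder "models": each just subtracts a level-dependent haze offset.
models = {"L1": lambda p: p - 0.1, "L2": lambda p: p - 0.4, "L3": lambda p: p - 0.7}
img = np.zeros((8, 8))
img[2:5, 2:5] = 0.8                # a dense cloud patch
result, level = remove_cloud(img, (2, 5, 2, 5), models)
assert level == "L3"
assert result[3, 3] < img[3, 3]    # cloud brightness in the region was reduced
```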
In one embodiment of the application, remote sensing data is continuously collected, and the cloud and fog region detection model and the cloud coverage model are optimized to generate a more accurate data set for training the cloud and fog removal GAN model.
Providing an initial value, through the cloud and fog removing GAN model, to the joint training module CUST-M that interfaces with the application system, so that the CUST-M generates cloud and fog removing remote sensing images conforming to the service and provides them to the remote sensing image application system; subdividing the visibility levels according to feedback of the remote sensing image application system, and continuously optimizing the cloud and fog removing GAN model to generate more reasonable and accurate cloud and fog removing remote sensing images; feeding the generated cloud and fog removing remote sensing image back to the discriminator of the remote sensing image cloud removal generative adversarial network base model in the CUST-M, feeding the result of that discriminator back to the generator G of the base model, adjusting the remote sensing image application system algorithm, and further optimizing the service system based on remote sensing image analysis.
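The feedback-driven fine-tuning above might be sketched numerically. Everything here is a hedged assumption, since the patent does not specify the feedback interface: `application_feedback` stands in for the application system's scoring, and a finite-difference step stands in for the generator parameter update.

```python
def application_feedback(restored, target):
    # Hypothetical stand-in for the application system's feedback: a score
    # that is higher when the restored value better fits the business task.
    return -abs(restored - target)

def finetune_generator(g, cloudy_samples, target, rounds=200, lr=0.05, eps=1e-3):
    # Nudge the generator parameter toward higher feedback scores using a
    # finite-difference gradient estimate of the average score.
    for _ in range(rounds):
        def avg_score(param):
            return sum(application_feedback(c + param, target)
                       for c in cloudy_samples) / len(cloudy_samples)
        grad = (avg_score(g + eps) - avg_score(g - eps)) / (2 * eps)
        g += lr * grad
    return g

cloudy_samples = [0.1, 0.2, 0.3]   # invented toy "cloudy" pixel values
g = finetune_generator(0.0, cloudy_samples, target=1.0)
# The learned offset settles near target - mean(cloudy) = 0.8.
assert abs(g - 0.8) < 0.05
```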
Based on the same inventive concept as the above GAN-based remote sensing image cloud and fog removing method, the embodiment of the present application further provides a corresponding GAN-based remote sensing image cloud and fog removing device, as shown in FIG. 3.
This embodiment provides a GAN-based remote sensing image cloud and fog removing device, including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring cloud remote sensing data, and dividing the visibility level of the cloud remote sensing data;
dividing training sets of cloud and fog remote sensing data according to the visibility levels, training with the training set of each visibility level in order of visibility from high to low, fixing model parameters of a discriminator during training, inputting the training set corresponding to each visibility level into a generator, and generating a cloud and fog removing remote sensing image;
Inputting the real and clear remote sensing image and the generated cloud and fog removing remote sensing image into the discriminator, and training and updating parameters of the discriminator to enable the discriminator to distinguish the real and clear remote sensing image from the cloud and fog removing remote sensing image;
Training the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level;
And interacting the cloud and fog removing GAN model with the remote sensing image application system to obtain feedback and update generator parameters, so as to generate a personalized cloud and fog removing GAN model corresponding to the remote sensing image application system.
Based on the same inventive concept, some embodiments of the present application also provide a medium corresponding to the above method.
Some embodiments of the present application provide a GAN-based remote sensing image cloud and fog removing storage medium, storing computer executable instructions, where the computer executable instructions are configured to:
acquiring cloud remote sensing data, and dividing the visibility level of the cloud remote sensing data;
dividing training sets of cloud and fog remote sensing data according to the visibility levels, training with the training set of each visibility level in order of visibility from high to low, fixing model parameters of a discriminator during training, inputting the training set corresponding to each visibility level into a generator, and generating a cloud and fog removing remote sensing image;
Inputting the real and clear remote sensing image and the generated cloud and fog removing remote sensing image into the discriminator, and training and updating parameters of the discriminator to enable the discriminator to distinguish the real and clear remote sensing image from the cloud and fog removing remote sensing image;
Training the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level;
And interacting the cloud and fog removing GAN model with the remote sensing image application system to obtain feedback and update generator parameters, so as to generate a personalized cloud and fog removing GAN model corresponding to the remote sensing image application system.
The embodiments of the present application are described in a progressive manner; for the same and similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the device and medium embodiments, the description is relatively simple because they are substantially similar to the method embodiment; for relevant parts, reference may be made to the description of the method embodiment.
The device, medium, and method provided by the embodiments of the present application correspond one to one; therefore, the device and medium also have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, those of the device and medium are not repeated here.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.
Claims (8)
1. A remote sensing image cloud and fog removing method based on a generative adversarial network GAN, wherein the GAN comprises a generator and a discriminator, the method comprising:
acquiring cloud remote sensing data, and dividing the visibility level of the cloud remote sensing data;
Dividing training sets of cloud and fog remote sensing data according to the visibility levels, training with the training set of each visibility level in order of visibility from high to low, fixing the model parameters of the discriminator during training, inputting the training set of each visibility level into the generator, and generating a cloud and fog removing remote sensing image;
Inputting a real and clear remote sensing image and the generated cloud and fog removing remote sensing image into the discriminator, and training and updating parameters of the discriminator to enable the discriminator to distinguish the real and clear remote sensing image from the cloud and fog removing remote sensing image;
Training the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level;
interacting the cloud and fog removing GAN model with a remote sensing image application system to obtain feedback and update generator parameters, and generating a personalized cloud and fog removing GAN model corresponding to the remote sensing image application system;
the visibility level is divided into L1 to Ln levels, and the visibility of the L1 to Ln levels is sequentially reduced;
the cloud and fog remote sensing data training sets corresponding to each visibility level in L1 to Ln levels are sequentially obtained, model parameters of the discriminator are fixed, the training sets RSData-TD in each visibility level are input into the generator for training, and cloud and fog removal remote sensing images RSImg-TD corresponding to each visibility level are generated;
Inputting a real and clear remote sensing image and the generated cloud and fog removing remote sensing image into the discriminator, fixing network parameters of the generator, training the discriminator, and obtaining an error between the real and clear remote sensing image and the generated cloud and fog removing remote sensing image according to a loss function;
The error is back-propagated, and the network parameters of the discriminator D are updated such that (RSData-TD, RSImg-TD) data pairs output a negative low score and (RSData-TD, RSreal-TD) data pairs output a positive high score through the discriminator D, so that the discriminator can effectively distinguish between (RSData-TD, RSreal-TD) data pairs and (RSData-TD, RSImg-TD) data pairs.
2. The method of claim 1, wherein, after the cloud and fog removing remote sensing image is generated,
The method further comprises the steps of:
Generating (RSData-TD, RSImg-TD) data pairs according to the training set RSData-TD and the cloud and fog removal remote sensing image RSImg-TD;
generating (RSData-TD, RSreal-TD) data pairs according to the training set RSData-TD and the real and clear remote sensing image RSreal-TD;
And determining that the (RSData-TD, RSImg-TD) data pair outputs a negative value after being input into the discriminator, and that the (RSData-TD, RSreal-TD) data pair outputs a positive value after being input into the discriminator.
3. The method according to claim 2, wherein the method further comprises:
Updating the network parameters of the generator according to a gradient descent method for training, and outputting (RSData-TD, RSImg-TD) data pairs, until the discriminator cannot distinguish between the (RSData-TD, RSImg-TD) data pairs and the (RSData-TD, RSreal-TD) data pairs.
4. The method of claim 1, wherein prior to acquiring the cloud and fog remote sensing data, the method further comprises: collecting remote sensing data, carrying out data annotation on the remote sensing data, and dividing cloud and fog regions, thickness and visibility levels;
According to the remote sensing data and the labeling result, performing model training to generate a cloud and fog region detection model;
And inputting the result output by the cloud and fog region detection model and the cloud-free remote sensing data into a cloud coverage model for training, so that the cloud-free remote sensing data, after passing through the cloud coverage model, outputs cloud and fog covered remote sensing data.
5. The method according to claim 4, wherein the method further comprises:
judging the visibility level of the cloud and fog region according to the cloud and fog region detection model;
Selecting a cloud and fog removing GAN model of the corresponding level according to the visibility level;
generating a cloud and fog removing remote sensing image according to the cloud and fog removing GAN model;
Intercepting a cloud and fog region, filling the identified cloud and fog region according to the cloud and fog removing remote sensing image, and generating a final remote sensing image.
6. The method according to claim 4, wherein the method further comprises:
Continuously collecting remote sensing data, and optimizing the cloud and fog region detection model and the cloud coverage model to generate a more accurate data set for training the cloud and fog removal GAN model;
according to feedback of the remote sensing image application system, subdividing the visibility levels, and continuously optimizing the cloud and fog removal GAN model to generate a more reasonable and accurate cloud and fog removing remote sensing image;
and according to the generated cloud and fog removing remote sensing image, adjusting the remote sensing image application system algorithm, and further optimizing a service system based on remote sensing image analysis.
7. A GAN-based remote sensing image cloud and fog removing device, characterized by comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring cloud remote sensing data, and dividing the visibility level of the cloud remote sensing data;
dividing training sets of cloud and fog remote sensing data according to the visibility levels, training with the training set of each visibility level in order of visibility from high to low, fixing model parameters of a discriminator during training, inputting the training set corresponding to each visibility level into a generator, and generating a cloud and fog removing remote sensing image;
Inputting a real and clear remote sensing image and the generated cloud and fog removing remote sensing image into the discriminator, and training and updating parameters of the discriminator to enable the discriminator to distinguish the real and clear remote sensing image from the cloud and fog removing remote sensing image;
Training the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level;
interacting the cloud and fog removing GAN model with a remote sensing image application system to obtain feedback and update generator parameters, and generating a personalized cloud and fog removing GAN model corresponding to the remote sensing image application system;
the visibility level is divided into L1 to Ln levels, and the visibility of the L1 to Ln levels is sequentially reduced;
the cloud and fog remote sensing data training sets corresponding to each visibility level in L1 to Ln levels are sequentially obtained, model parameters of the discriminator are fixed, the training sets RSData-TD in each visibility level are input into the generator for training, and cloud and fog removal remote sensing images RSImg-TD corresponding to each visibility level are generated;
Inputting a real and clear remote sensing image and the generated cloud and fog removing remote sensing image into the discriminator, fixing network parameters of the generator, training the discriminator, and obtaining an error between the real and clear remote sensing image and the generated cloud and fog removing remote sensing image according to a loss function;
The error is back-propagated, and the network parameters of the discriminator D are updated such that (RSData-TD, RSImg-TD) data pairs output a negative low score and (RSData-TD, RSreal-TD) data pairs output a positive high score through the discriminator D, so that the discriminator can effectively distinguish between (RSData-TD, RSreal-TD) data pairs and (RSData-TD, RSImg-TD) data pairs.
8. A non-volatile storage medium storing computer executable instructions, the computer executable instructions configured to:
acquiring cloud remote sensing data, and dividing the visibility level of the cloud remote sensing data;
dividing training sets of cloud and fog remote sensing data according to the visibility levels, training with the training set of each visibility level in order of visibility from high to low, fixing model parameters of a discriminator during training, inputting the training set corresponding to each visibility level into a generator, and generating a cloud and fog removing remote sensing image;
Inputting a real and clear remote sensing image and the generated cloud and fog removing remote sensing image into the discriminator, and training and updating parameters of the discriminator to enable the discriminator to distinguish the real and clear remote sensing image from the cloud and fog removing remote sensing image;
Training the generator and the discriminator alternately to generate a cloud and fog removal GAN model corresponding to each visibility level;
interacting the cloud and fog removing GAN model with a remote sensing image application system to obtain feedback and update generator parameters, and generating a personalized cloud and fog removing GAN model corresponding to the remote sensing image application system;
the visibility level is divided into L1 to Ln levels, and the visibility of the L1 to Ln levels is sequentially reduced;
the cloud and fog remote sensing data training sets corresponding to each visibility level in L1 to Ln levels are sequentially obtained, model parameters of the discriminator are fixed, the training sets RSData-TD in each visibility level are input into the generator for training, and cloud and fog removal remote sensing images RSImg-TD corresponding to each visibility level are generated;
Inputting a real and clear remote sensing image and the generated cloud and fog removing remote sensing image into the discriminator, fixing network parameters of the generator, training the discriminator, and obtaining an error between the real and clear remote sensing image and the generated cloud and fog removing remote sensing image according to a loss function;
The error is back-propagated, and the network parameters of the discriminator D are updated such that (RSData-TD, RSImg-TD) data pairs output a negative low score and (RSData-TD, RSreal-TD) data pairs output a positive high score through the discriminator D, so that the discriminator can effectively distinguish between (RSData-TD, RSreal-TD) data pairs and (RSData-TD, RSImg-TD) data pairs.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111578218.1A CN114240796B (en) | 2021-12-22 | 2021-12-22 | Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN |
PCT/CN2022/105319 WO2023115915A1 (en) | 2021-12-22 | 2022-07-13 | Gan-based remote sensing image cloud removal method and device, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114240796A CN114240796A (en) | 2022-03-25 |
CN114240796B true CN114240796B (en) | 2024-05-31 |
Family
ID=80761094
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111578218.1A Active CN114240796B (en) | 2021-12-22 | 2021-12-22 | Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114240796B (en) |
WO (1) | WO2023115915A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114240796B (en) * | 2021-12-22 | 2024-05-31 | 山东浪潮科学研究院有限公司 | Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN |
CN117252785B (en) * | 2023-11-16 | 2024-03-12 | 安徽省测绘档案资料馆(安徽省基础测绘信息中心) | Cloud removing method based on combination of multisource SAR and optical image |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109493303A (en) * | 2018-05-30 | 2019-03-19 | 湘潭大学 | A kind of image defogging method based on generation confrontation network |
CN111667431A (en) * | 2020-06-09 | 2020-09-15 | 云南电网有限责任公司电力科学研究院 | Method and device for manufacturing cloud and fog removing training set based on image conversion |
CN113450261A (en) * | 2020-03-25 | 2021-09-28 | 江苏翼视智能科技有限公司 | Single image defogging method based on condition generation countermeasure network |
WO2021248938A1 (en) * | 2020-06-10 | 2021-12-16 | 南京邮电大学 | Image defogging method based on generative adversarial network fused with feature pyramid |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10552714B2 (en) * | 2018-03-16 | 2020-02-04 | Ebay Inc. | Generating a digital image using a generative adversarial network |
CN109191400A (en) * | 2018-08-30 | 2019-01-11 | 中国科学院遥感与数字地球研究所 | A method of network, which is generated, using confrontation type removes thin cloud in remote sensing image |
CN110322419B (en) * | 2019-07-11 | 2022-10-21 | 广东工业大学 | Remote sensing image defogging method and system |
CN111383192B (en) * | 2020-02-18 | 2022-10-18 | 清华大学 | Visible light remote sensing image defogging method fusing SAR |
CN113724149B (en) * | 2021-07-20 | 2023-09-12 | 北京航空航天大学 | Weak-supervision visible light remote sensing image thin cloud removing method |
CN113744159B (en) * | 2021-09-09 | 2023-10-24 | 青海大学 | Defogging method and device for remote sensing image and electronic equipment |
CN114240796B (en) * | 2021-12-22 | 2024-05-31 | 山东浪潮科学研究院有限公司 | Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN |
Also Published As
Publication number | Publication date |
---|---|
WO2023115915A1 (en) | 2023-06-29 |
CN114240796A (en) | 2022-03-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||