CN112561817A - Remote sensing image cloud removing method, device and equipment based on AM-GAN and storage medium - Google Patents


Info

Publication number
CN112561817A
Authority
CN
China
Prior art keywords
image
cloud
attention
network
processed
Prior art date
Legal status
Granted
Application number
CN202011463223.3A
Other languages
Chinese (zh)
Other versions
CN112561817B (en)
Inventor
徐萌
邓芙蓉
贾森
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202011463223.3A priority Critical patent/CN112561817B/en
Publication of CN112561817A publication Critical patent/CN112561817A/en
Application granted granted Critical
Publication of CN112561817B publication Critical patent/CN112561817B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/73 - Deblurring; Sharpening
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08 - Learning methods
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10032 - Satellite or aerial image; Remote sensing
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30181 - Earth observation
    • Y02A 90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an AM-GAN based remote sensing image cloud removal method, device, equipment and storage medium. The invention first trains a network model, in combination with a loss function, on data formed from a cloud image group and a cloud-free image group. An attention map of the image to be processed is then obtained through the attention cycle network in the trained target network model; under the guidance of the attention map, cloud is removed from the image to be processed through the negative residual network in the target network model to obtain a target feature map; the target feature map is reconstructed through the standard residual blocks in the target network model to obtain a cloud-free background image; and the cloud-free background image is discriminated by the discriminator in the target network model to obtain a discrimination result, from which the similarity between the cloud-free background image and a real cloud-free image is determined. The interference of cloud cover on the remote sensing image is thereby avoided, and the accuracy of results obtained by processing the remote sensing image is improved.

Description

Remote sensing image cloud removing method, device and equipment based on AM-GAN and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a remote sensing image cloud removal method, device, equipment and storage medium based on AM-GAN (Attention Mechanism Generative Adversarial Network).
Background
With the rapid development of remote sensing technology, remote sensing images with high spatial resolution and high spectral resolution can now be obtained. Their abundant spectral information is widely used in earth observation applications such as resource exploration, vegetation management and disaster monitoring. However, interference from cloud cover causes part of the information to be lost and greatly reduces the quality of the remote sensing image, so that results obtained by further processing the image have low accuracy.
Disclosure of Invention
The invention mainly aims to provide an AM-GAN based remote sensing image cloud removal method, device, equipment and storage medium, so as to solve the current technical problem that results obtained by further processing remote sensing images have low accuracy.
In order to achieve the above object, an embodiment of the present invention provides an AM-GAN based remote sensing image cloud removing method, where the AM-GAN based remote sensing image cloud removing method includes:
acquiring a cloud image group and a cloud-free image group, registering the data pairs formed by the cloud image group and the cloud-free image group in a preset mode, cutting the first image group obtained by registration into a second image group in a preset format, forming a training set from the second image group, taking the cloud images in the training set as the input of a network model and the cloud-free images in the training set as the training target, and training the network parameters of the network model through a loss function to obtain a target network model, wherein the target network model comprises a generator and a discriminator, and the generator at least comprises an attention cycle network, a negative residual network and standard residual blocks;
acquiring an image to be processed, and acquiring an attention map of the image to be processed based on the attention cycle network;
performing cloud removal processing on the image to be processed according to the attention map in combination with the negative residual network to obtain a target feature map;
performing image reconstruction on the target feature map based on the standard residual blocks to obtain a cloud-free background image;
and discriminating the cloud-free background image to obtain a discrimination result, and determining, according to the discrimination result, whether the cloud removal processing on the image to be processed is finished.
Preferably, the attention cycle network includes ResNet, an LSTM and convolutional layers, and the step of acquiring the attention map of the image to be processed based on the attention cycle network includes:
processing the image to be processed based on the ResNet in the attention cycle network to obtain a first feature map;
processing the first feature map according to the LSTM in the attention cycle network to obtain a second feature map;
and processing the second feature map according to the convolutional layers in the attention cycle network to obtain the attention map of the image to be processed.
Preferably, the step of processing the first feature map according to the LSTM in the attention cycle network to obtain a second feature map includes:
receiving the first feature map based on the LSTM in the attention cycle network;
acquiring an activation function, and activating the gates of the LSTM based on the first feature map and the activation function to obtain activation data;
and updating the memory cell and the hidden state of the LSTM according to the activation data to obtain the second feature map.
Preferably, the step of processing the second feature map according to the convolutional layers in the attention cycle network to obtain the attention map of the image to be processed includes:
processing the second feature map according to the convolutional layers in the attention cycle network to obtain an initial attention map;
and determining the initial attention map as a new image to be processed, and repeating the step of processing the image to be processed based on the ResNet in the attention cycle network to obtain a first feature map, until the number of repetitions reaches a preset number, to obtain the attention map of the image to be processed.
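The iterative refinement described above (ResNet features, then LSTM, then a convolutional layer, with the resulting attention map fed back in, repeated a preset number of times) can be sketched numerically. The feature and recurrent stages below are deliberately crude placeholders for the real ResNet and LSTM; only the loop structure follows the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recurrent_attention(image, n_steps=4, seed=0):
    """Iteratively refine a 2D attention map (hypothetical stand-ins for
    the patent's ResNet / LSTM / convolution stages)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    attention = np.full((h, w), 0.5)          # start from uniform attention
    conv_w = rng.normal(scale=0.1, size=(3, 3))
    for _ in range(n_steps):
        # "ResNet" stage: residual feature of the image, conditioned on attention
        feat = image * attention + image      # placeholder residual features
        # "LSTM" stage: blend new features with the previous attention state
        state = 0.5 * feat + 0.5 * attention  # placeholder recurrent update
        # "conv" stage: 3x3 convolution + sigmoid -> refined attention map
        padded = np.pad(state, 1, mode="edge")
        out = np.zeros_like(state)
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * conv_w)
        attention = sigmoid(out)
    return attention
```

Each pass re-reads the input image under the current attention map, which is what lets later iterations sharpen attention around the cloud regions.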
Preferably, the step of performing cloud removal processing on the image to be processed according to the attention map in combination with the negative residual network to obtain a target feature map includes:
processing the image to be processed according to the negative residual network to obtain a third feature map;
and performing cloud removal processing on the third feature map according to the attention map to obtain the target feature map.
Preferably, the step of performing cloud removal processing on the third feature map according to the attention map to obtain the target feature map includes:
performing a first preset operation on the third feature map and the attention map to obtain a fourth feature map;
acquiring a fifth feature map obtained by standard residual processing of the image to be processed;
and performing a second preset operation on the fourth feature map and the fifth feature map to obtain the target feature map.
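One plausible reading of the two "preset operations" (the patent does not name them) is element-wise multiplication of the negative-residual features by the attention map, followed by element-wise addition of the standard-residual features. A minimal sketch under that assumption:

```python
import numpy as np

def fuse_with_attention(neg_residual_map, attention_map, std_residual_map):
    """Hypothetical fusion of the third, attention and fifth maps into the
    target feature map (element-wise multiply, then element-wise add)."""
    fourth = neg_residual_map * attention_map   # first preset operation
    target = fourth + std_residual_map          # second preset operation
    return target
```

Weighting the negative residual by attention confines the correction to cloud pixels, while the added standard-residual branch carries the unchanged background through.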
Preferably, the step of discriminating the cloud-free background image to obtain a discrimination result includes:
inputting the cloud-free background image into a discriminator composed of convolutional layers and CBR (Convolution-BatchNorm-ReLU) blocks, and discriminating the cloud-free background image through the discriminator to obtain the discrimination result.
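Reading "CBR" as a Convolution-BatchNorm-ReLU block, a stack of stride-2 CBR blocks progressively shrinks the input patch before the final decision layer. The kernel, stride and padding values below are illustrative assumptions, not taken from the patent:

```python
def conv_out_size(size, kernel=4, stride=2, pad=1):
    """Spatial size after one stride-2 convolution (standard formula)."""
    return (size + 2 * pad - kernel) // stride + 1

def discriminator_shapes(input_size=256, n_blocks=4):
    """Trace how stacked CBR (Conv-BatchNorm-ReLU) blocks shrink a
    256 x 256 patch; hyper-parameters are illustrative."""
    sizes = [input_size]
    for _ in range(n_blocks):
        sizes.append(conv_out_size(sizes[-1]))
    return sizes
```

With these choices a 256-pixel patch halves at each block, ending at a small grid on which the real-versus-generated probability is computed.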
In order to achieve the above object, the present invention further provides an AM-GAN based remote sensing image cloud removing device, including:
the training module is used for acquiring a cloud image group and a cloud-free image group, registering the data pairs formed by the cloud image group and the cloud-free image group in a preset mode, cutting the first image group obtained by registration into a second image group in a preset format, forming a training set from the second image group, taking the cloud images in the training set as the input of a network model and the cloud-free images in the training set as the training target, and training the network parameters of the network model through a loss function to obtain a target network model, wherein the target network model comprises a generator and a discriminator, and the generator at least comprises an attention cycle network, a negative residual network and standard residual blocks;
the acquisition module is used for acquiring an image to be processed and acquiring an attention map of the image to be processed based on the attention cycle network;
the cloud removal module is used for performing cloud removal processing on the image to be processed according to the attention map in combination with the negative residual network to obtain a target feature map;
the reconstruction module is used for performing image reconstruction on the target feature map based on the standard residual blocks to obtain a cloud-free background image;
and the discrimination module is used for discriminating the cloud-free background image to obtain a discrimination result, and determining, according to the discrimination result, whether the cloud removal processing on the image to be processed is finished.
Further, in order to achieve the above object, the present invention further provides an AM-GAN based remote sensing image cloud removing device, where the AM-GAN based remote sensing image cloud removing device includes a memory, a processor, and an AM-GAN based remote sensing image cloud removing program stored in the memory and operable on the processor, and when the AM-GAN based remote sensing image cloud removing program is executed by the processor, the steps of the AM-GAN based remote sensing image cloud removing method are implemented.
Further, in order to achieve the above object, the present invention further provides a storage medium, where an AM-GAN based remote sensing image cloud removing program is stored on the storage medium, and when the AM-GAN based remote sensing image cloud removing program is executed by a processor, the steps of the AM-GAN based remote sensing image cloud removing method are implemented.
The embodiment of the invention provides an AM-GAN based remote sensing image cloud removal method, device, equipment and storage medium. The invention first trains a network model, in combination with a loss function, on data formed from a cloud image group and a cloud-free image group. An attention map of the image to be processed is then obtained through the attention cycle network in the trained target network model; under the guidance of the attention map, cloud is removed from the image to be processed through the negative residual network in the target network model to obtain a target feature map; the target feature map is reconstructed through the standard residual blocks in the target network model to obtain a cloud-free background image; and the cloud-free background image is discriminated by the discriminator in the target network model to obtain a discrimination result, from which the similarity between the cloud-free background image and a real cloud-free image is determined. The interference of cloud cover on the remote sensing image is thereby avoided, and the accuracy of results obtained by processing the remote sensing image is improved.
Drawings
FIG. 1 is a schematic structural diagram of a hardware operating environment related to an embodiment of an AM-GAN-based remote sensing image cloud removal method of the invention;
FIG. 2 is a schematic flow chart of a first embodiment of the AM-GAN-based remote sensing image cloud removing method according to the present invention;
FIG. 3 is a schematic flow chart of a second embodiment of the AM-GAN-based remote sensing image cloud removing method according to the present invention;
FIG. 4 is a schematic flow chart of a cloud removing method for remote sensing images based on AM-GAN according to a third embodiment of the present invention;
FIG. 5 is a flowchart of an application scenario of the AM-GAN based remote sensing image cloud removing method according to the present invention;
fig. 6 is a functional module schematic diagram of a preferred embodiment of the AM-GAN-based remote sensing image cloud removing device according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides an AM-GAN based remote sensing image cloud removal method, device, equipment and storage medium. The invention first trains a network model, in combination with a loss function, on data formed from a cloud image group and a cloud-free image group. An attention map of the image to be processed is then obtained through the attention cycle network in the trained target network model; under the guidance of the attention map, cloud is removed from the image to be processed through the negative residual network in the target network model to obtain a target feature map; the target feature map is reconstructed through the standard residual blocks in the target network model to obtain a cloud-free background image; and the cloud-free background image is discriminated by the discriminator in the target network model to obtain a discrimination result, from which the similarity between the cloud-free background image and a real cloud-free image is determined. The interference of cloud cover on the remote sensing image is thereby avoided, and the accuracy of results obtained by processing the remote sensing image is improved.
As shown in fig. 1, fig. 1 is a schematic structural diagram of an AM-GAN-based remote sensing image cloud removing device in a hardware operating environment according to an embodiment of the present invention.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are adopted only to facilitate the description of the invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The AM-GAN based remote sensing image cloud removal equipment may be a PC (personal computer), or a mobile terminal device such as a tablet computer or a portable computer.
As shown in fig. 1, the AM-GAN based remote sensing image cloud removing device may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the AM-GAN based remote sensing image cloud removing device structure shown in fig. 1 does not constitute a limitation of the device; it may include more or fewer components than those shown, combine certain components, or arrange the components differently.
As shown in fig. 1, a memory 1005 as a storage medium may include an operating system, a network communication module, a user interface module, and an AM-GAN-based remote sensing image cloud removing program.
In the device shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call an AM-GAN based remote sensing image cloud removing program stored in the memory 1005, and perform the following operations:
acquiring a cloud image group and a cloud-free image group, registering the data pairs formed by the cloud image group and the cloud-free image group in a preset mode, cutting the first image group obtained by registration into a second image group in a preset format, forming a training set from the second image group, taking the cloud images in the training set as the input of a network model and the cloud-free images in the training set as the training target, and training the network parameters of the network model through a loss function to obtain a target network model, wherein the target network model comprises a generator and a discriminator, and the generator at least comprises an attention cycle network, a negative residual network and standard residual blocks;
acquiring an image to be processed, and acquiring an attention map of the image to be processed based on the attention cycle network;
performing cloud removal processing on the image to be processed according to the attention map in combination with the negative residual network to obtain a target feature map;
performing image reconstruction on the target feature map based on the standard residual blocks to obtain a cloud-free background image;
and discriminating the cloud-free background image to obtain a discrimination result, and determining, according to the discrimination result, whether the cloud removal processing on the image to be processed is finished.
Further, the attention cycle network comprises ResNet, an LSTM and convolutional layers, and the step of acquiring the attention map of the image to be processed based on the attention cycle network comprises:
processing the image to be processed based on the ResNet in the attention cycle network to obtain a first feature map;
processing the first feature map according to the LSTM in the attention cycle network to obtain a second feature map;
and processing the second feature map according to the convolutional layers in the attention cycle network to obtain the attention map of the image to be processed.
Further, the step of processing the first feature map according to the LSTM in the attention cycle network to obtain a second feature map includes:
receiving the first feature map based on the LSTM in the attention cycle network;
acquiring an activation function, and activating the gates of the LSTM based on the first feature map and the activation function to obtain activation data;
and updating the memory cell and the hidden state of the LSTM according to the activation data to obtain the second feature map.
Further, the step of processing the second feature map according to the convolutional layers in the attention cycle network to obtain the attention map of the image to be processed includes:
processing the second feature map according to the convolutional layers in the attention cycle network to obtain an initial attention map;
and determining the initial attention map as a new image to be processed, and repeating the step of processing the image to be processed based on the ResNet in the attention cycle network to obtain a first feature map, until the number of repetitions reaches a preset number, to obtain the attention map of the image to be processed.
Further, the step of performing cloud removal processing on the image to be processed according to the attention map in combination with the negative residual network to obtain a target feature map comprises:
processing the image to be processed according to the negative residual network to obtain a third feature map;
and performing cloud removal processing on the third feature map according to the attention map to obtain the target feature map.
Further, the step of performing cloud removal processing on the third feature map according to the attention map to obtain the target feature map includes:
performing a first preset operation on the third feature map and the attention map to obtain a fourth feature map;
acquiring a fifth feature map obtained by standard residual processing of the image to be processed;
and performing a second preset operation on the fourth feature map and the fifth feature map to obtain the target feature map.
Further, the step of discriminating the cloud-free background image to obtain a discrimination result includes:
inputting the cloud-free background image into a discriminator composed of convolutional layers and CBR (Convolution-BatchNorm-ReLU) blocks, and discriminating the cloud-free background image through the discriminator to obtain the discrimination result.
For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Referring to fig. 2, a first embodiment of the present invention provides a flow diagram of an AM-GAN based remote sensing image cloud removing method. In this embodiment, the AM-GAN based remote sensing image cloud removing method includes the following steps:
step S10, acquiring a cloud image group and a non-cloud image group, registering data pairs formed by the cloud image group and the non-cloud image group in a preset mode, cutting a first image group obtained by registration into a second image group in a preset format, forming a training set according to the second image group, training the cloud image in the training set as the input of a network model, training the network model by taking the non-cloud image in the training set as a training target, and performing network parameter training on the network model through a loss function to obtain a target network model, wherein the target network model comprises a generator and an identifier, and the generator at least comprises an attention cycle network, a negative residual error network and a standard residual error block;
in this embodiment, the AM-GAN based remote sensing image cloud removing method is applied to an AM-GAN based remote sensing image cloud removing system, which is referred to as a system in the present application, and a target network model obtained by network model training and optimization is configured in the system, wherein the target network model may include a generator and an identifier, the generator is configured to process an input image to obtain a cloud-free background image of the input image, the generator is provided with an attention circulation network, a negative residual error network, and a standard residual error block, the attention circulation network is configured to obtain an attention diagram of the input image, the negative residual error network is configured to perform negative residual error processing on the input image, the standard residual error block is configured to perform standard residual error processing on the input data, the identifier is configured to identify the input image to determine whether the generated cloud-free image of the generator is an original cloud-free image, unlike a binary mask, which is a matrix between 0 and 1, where a larger matrix value represents a greater attention, the attention map is a non-binary map, which represents an increase in attention from a non-cloud area to a cloud area, and shows the spatial distribution of the cloud. The method for removing the cloud from the remote sensing image based on the AM-GAN can avoid the interference of the cloud layer to the remote sensing image and improve the accuracy of a result obtained by processing the remote sensing image.
It can be understood that, before image cloud removal is performed through the target network model, an initial network model needs to be trained to form the target network model. Specifically, in this embodiment, multi-temporal data of the remote sensing satellite Landsat 8 may be acquired from the United States Geological Survey (USGS) official website over a wireless network. To reduce the influence of surface feature changes on the multi-temporal data, the time interval used when acquiring the multi-temporal data in this embodiment is one cycle, and the multi-temporal data is a set of data pairs formed from a group of cloud-free images and a group of cloud images with cloud cover below 30%. The data pairs formed by the cloud image group and the cloud-free image group are then registered using ENVI (The Environment for Visualizing Images), and the first image group obtained by registration is cut into a second image group in a preset format, where the preset format is 256 x 256 pixels; the cloud images in the second image group and the cloud-free images at the corresponding positions form the training set. Further, the cloud images in the training set are taken as the input of the network model for training, the cloud-free images are taken as the training target, the network parameters of the network model are trained, and the mapping relation between cloud images and cloud-free images is continuously learned until the network model converges, yielding a target network model representing the current optimal model. The loss function comprises the following three parts:
$$\mathcal{L} = \mathcal{L}_{cGAN}(G, D) + \mathcal{L}_{L1}(G) + \mathcal{L}_{att}$$
wherein G represents a generator and D represents a discriminator.
$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x\sim p_{data}(x),\,y}\left[\log D(x,y)\right]+\mathbb{E}_{x\sim p_{data}(x),\,z\sim p_{z}(z)}\left[\log\left(1-D(x,G(x,z))\right)\right]$$
Wherein the first part is the loss function of the conditional GAN: x represents a cloudy image, p_data(x) represents the distribution of x, y is the true cloud-free image, z is random noise data, p_z(z) is the noise distribution, G(x, z) represents the cloud-free image of x generated with the aid of z, and D(x, y) is the output of the discriminator, representing the probability that y is a true cloud-free image.
$$\mathcal{L}_{1}=\frac{1}{N\cdot H\cdot W}\sum_{c=1}^{N}\lambda_{c}\sum_{u=1}^{H}\sum_{v=1}^{W}\left|I_{output}(u,v,c)-I_{input}(u,v,c)\right|$$
The second part is the standard L1 loss, which measures the accuracy of each reconstructed pixel: N represents the number of bands of the remote sensing image used, H × W represents the size of the image, λ_c is the weight of the c-th band, I_input represents the input image of the training network, I_output represents the output result, and (u, v, c) denotes a pixel point of the c-th band.
$$\mathcal{L}_{att}=\left\|A-M\right\|_{2}^{2}$$
The third part is the attention loss, where matrix A is the attention map generated by the attention cycle network module and matrix M is the binary mask image of the cloud regions.
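The second and third loss terms above can be illustrated with a short NumPy sketch. The function names, the normalisation by N·H·W in the band-weighted L1 term, and the mean-squared form of the attention loss are assumptions drawn from the surrounding description, not the patent's exact implementation.

```python
import numpy as np

def weighted_l1_loss(i_out, i_ref, band_weights):
    """Band-weighted L1 term: mean absolute error per pixel, with a
    weight lambda_c applied to each spectral band. Arrays are laid out
    as (bands, height, width)."""
    n, h, w = i_out.shape
    diff = np.abs(i_out - i_ref)
    return float(np.sum(band_weights[:, None, None] * diff) / (n * h * w))

def attention_loss(att_map, cloud_mask):
    """Attention term: mean squared error between the generated
    attention map A and the binary cloud mask M."""
    return float(np.mean((att_map - cloud_mask) ** 2))
```

Both terms are zero exactly when the reconstruction matches the reference and the attention map matches the cloud mask.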
Step S20, acquiring an image to be processed, and acquiring an attention map of the image to be processed based on an attention cycle network;
Further, when the user has an image cloud removing requirement, the image to be processed can be acquired through the system from the internet or from local storage; in this embodiment, the image to be processed is a remote sensing image. After obtaining the image to be processed, the system inputs it into the attention cycle network of the generator, which processes it to obtain its attention map. Specifically, the attention cycle network comprises a ResNet, an LSTM and a convolutional layer. ResNet is a residual network that can be composed of several RBs (Residual Blocks) and performs residual processing on the input image. LSTM (Long Short-Term Memory) consists of a storage unit, a hidden state and three types of gates (an input gate, a forgetting gate and an output gate); in the present application these are denoted by the letters c (storage unit), h (hidden state), i (input gate), f (forgetting gate) and o (output gate), and the gates control reading from and writing to the storage unit. The convolutional layer is used to generate a 2D attention map.
Step S30, carrying out cloud removing processing on the image to be processed according to the attention map by combining a negative residual error network to obtain a target feature map;
Further, while acquiring the attention map of the image to be processed, the system processes the image to be processed through the negative residual network to obtain a third feature map. After the third feature map is obtained, cloud removal is performed on it under the guidance of the attention map to obtain the target feature map. Specifically, a first preset operation is performed on the third feature map and the attention map to obtain a fourth feature map; a fifth feature map obtained by standard residual processing of the image to be processed is acquired at the same time; and finally a second preset operation is performed on the fourth and fifth feature maps to obtain the target feature map. In this embodiment, the first preset operation is a product operation and the second preset operation is an addition operation.
Step S40, carrying out image reconstruction on the target characteristic diagram based on the standard residual block to obtain a cloud-free background image;
further, the system inputs the obtained target feature map into a standard residual block, and performs standard residual processing on the target feature map through the standard residual block, specifically, in this embodiment, two standard residual blocks are provided, the target feature map is input into a first standard residual block, and the target feature map is processed through the first standard residual block, so as to obtain a first residual result; and inputting the first residual result into a second standard residual block, and processing the first residual result by the second standard residual block to obtain a second residual result. It can be understood that, after the second residual result is obtained, the second residual result is input into the convolutional layer, and the image reconstruction is performed on the second residual result through the convolutional layer, so as to obtain a cloud-free background image after the image reconstruction is completed.
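The standard residual processing used in the reconstruction step can be sketched roughly as follows. This is a didactic sketch in which the two convolutions of a residual block are modelled as plain callables; `residual_block` is a hypothetical helper and not the patent's implementation.

```python
import numpy as np

def residual_block(x, conv1, conv2):
    """Standard residual block sketch: a 'convolution' (any callable),
    a ReLU, a second 'convolution', plus the identity shortcut."""
    out = np.maximum(conv1(x), 0.0)   # conv followed by ReLU
    out = conv2(out)
    return x + out                    # identity skip connection
```

Chaining two such blocks and a final convolution, as the embodiment describes, reconstructs the cloud-free background image from the target feature map.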
And step S50, identifying the cloud-free background image to obtain an identification result, and determining whether to finish cloud removing processing on the image to be processed according to the identification result.
Further, the cloud-free background image is input into a discriminator composed of a convolutional layer and CBRs, and the discriminator discriminates the cloud-free background image to obtain a discrimination result. According to the discrimination result, it is determined whether the input image is a cloud-removed result produced by the generator or a remote sensing image that genuinely contains no cloud, and thereby whether cloud removal of the image to be processed is complete. In CBR, C represents Convolution, B represents Batch Normalization, and R represents ReLU (Rectified Linear Unit). It can be understood that, before the cloud-free background image is input to the discriminator, the discriminator must also be trained on cloud-removed images and real cloud-free images, which facilitates its discrimination of the cloud-free background image once learning is complete.
Further, the step of identifying the cloud-free background image to obtain an identification result includes:
and step S51, inputting the cloud-free background image into a discriminator consisting of a convolutional layer and a CBR, and discriminating the cloud-free background image through the discriminator to obtain a discrimination result.
Further, after the cloud-free background image is obtained, in order to determine the similarity between the cloud-free background image and a real cloud-free image, the system needs to discriminate the cloud-free background image through the discriminator. Specifically, the system inputs the cloud-free background image into a discriminator composed of 5 CBRs and one convolutional layer: the cloud-free background image is processed by the 5 CBRs in sequence, the processed data are then input into the convolutional layer for convolution processing, and after the convolution is finished the discriminator outputs the discrimination result. In this embodiment, the discrimination result is a probability value, according to which the similarity between the cloud-free background image and a real cloud-free image can be determined; the higher the probability value, the higher the similarity.
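The discriminator pipeline just described (a chain of CBR blocks, a final convolution, and a scalar probability output) can be sketched as follows. The CBR blocks and the final convolution are modelled as callables, and the sigmoid squashing of the mean response is an assumption; the patent only states that the output is a probability value.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminate(image, cbr_blocks, final_conv):
    """Discriminator sketch: pass the image through a chain of CBR
    blocks (Convolution + Batch normalization + ReLU, modelled here as
    callables) and a final convolution, then squash the mean response
    to a probability in [0, 1]."""
    feat = image
    for cbr in cbr_blocks:
        feat = cbr(feat)
    return float(sigmoid(np.mean(final_conv(feat))))
```

A higher returned probability would indicate a closer resemblance to a real cloud-free image.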
This embodiment provides an AM-GAN based remote sensing image cloud removing method, device, equipment and storage medium. The invention first trains a network model with a loss function on the data formed by cloudy and cloud-free image groups; then obtains the attention map of the image to be processed through the attention cycle network in the trained target network model; removes cloud from the image to be processed under the guidance of the attention map through the negative residual network in the target network model to obtain the target feature map; reconstructs the target feature map through the standard residual blocks in the target network model to obtain a cloud-free background image; and discriminates the cloud-free background image through the discriminator in the target network model to obtain a discrimination result, according to which the similarity between the cloud-free background image and a real cloud-free image is determined. The interference of cloud layers with the remote sensing image is thereby avoided, and the accuracy of results obtained by processing the remote sensing image is improved.
Further, referring to fig. 3, a second embodiment of the AM-GAN-based remote sensing image cloud removing method according to the present invention is proposed based on the first embodiment of the AM-GAN-based remote sensing image cloud removing method according to the present invention, and in the second embodiment, the step of obtaining the attention map of the image to be processed based on the attention circulation network includes:
step S21, processing the image to be processed based on ResNet in the attention circulation network to obtain a first feature map;
step S22, processing the first characteristic diagram according to the LSTM in the attention cycle network to obtain a second characteristic diagram;
step S23, processing the second feature map according to the convolution layer in the attention circulation network to obtain an attention map of the image to be processed.
Further, after the image to be processed is input into the attention cycle network, the ResNet in the network receives it, performs residual processing on it, and extracts the first feature map. The ResNet then inputs the extracted first feature map into the LSTM connected to its output. After receiving the first feature map, the LSTM also receives the hidden state of the previous LSTM, activates its gates through the first feature map and that hidden state, and updates the storage unit and hidden state of the current-stage LSTM according to the resulting activation data to obtain the second feature map. The LSTM then inputs the updated second feature map into the convolutional layer connected to its output; the convolutional layer processes the second feature map to produce an initial attention map, which is refined further by several attention cycle network blocks. After these blocks complete their processing, the attention map of the image to be processed is generated in the form of a 2D attention map. Generating the attention map through several attention cycle network blocks improves its accuracy and facilitates subsequent processing.
Further, the step of processing the first feature map according to the LSTM in the attention cycle network to obtain a second feature map includes:
step S221, receiving the first feature map based on the LSTM in the attention cycle network;
step S222, acquiring an activation function, and activating the gate of the LSTM based on the first characteristic diagram and the activation function to obtain activation data;
step S223, updating the storage unit and the hidden state of the LSTM according to the activation data, to obtain a second feature map.
Further, when the LSTM in the attention cycle network detects the first feature map input by the ResNet connected to it, it receives the first feature map through its data input terminal. The LSTM then acquires the activation function used to activate its gates; the three types of gates are activated through the activation function together with the first feature map to obtain the activation data, and the storage unit and hidden state of the LSTM are updated according to the activation data to obtain the second feature map. Specifically, for each time step t, the LSTM unit first receives an input x_t and the hidden state h_{t-1} of the previous LSTM, activates the three types of gates, computes their activation data, and updates the storage unit c_t and hidden state h_t of the current-stage LSTM according to the following preset formulas, thereby obtaining the second feature map:
$$i_t=\sigma\left(W_{xi}*x_t+W_{hi}*h_{t-1}+W_{ci}\odot c_{t-1}+b_i\right)$$
$$f_t=\sigma\left(W_{xf}*x_t+W_{hf}*h_{t-1}+W_{cf}\odot c_{t-1}+b_f\right)$$
$$c_t=f_t\odot c_{t-1}+i_t\odot\tanh\left(W_{xc}*x_t+W_{hc}*h_{t-1}+b_c\right)$$
$$o_t=\sigma\left(W_{xo}*x_t+W_{ho}*h_{t-1}+W_{co}\odot c_t+b_o\right)$$
$$h_t=o_t\odot\tanh(c_t)$$
wherein c, h, i, f, o respectively denote the memory cell, hidden state, input gate, forgetting gate and output gate, ⊙ denotes the Hadamard product, σ denotes the sigmoid activation function, * denotes the convolution operation, x_t is the feature map extracted by the residual block, the W terms are the corresponding weights, and b_i represents the bias of the input gate (similarly for the other gates). c_t acts as an accumulator of state information that propagates to the next LSTM unit. h_t represents the output feature of the LSTM unit, which is fed into the convolutional layer to generate the attention map.
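The five gate equations above can be traced with a short NumPy sketch. This is a didactic scalar version in which the convolutions are collapsed to element-wise products, so it illustrates the gate logic rather than the convolutional LSTM itself; the dictionary-based parameter layout is an assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step following the gate equations above, with the
    convolutions simplified to products. W maps each gate name to a
    tuple (W_x, W_h, W_c) and b maps each gate name to its bias."""
    i = sigmoid(W['i'][0] * x_t + W['i'][1] * h_prev + W['i'][2] * c_prev + b['i'])
    f = sigmoid(W['f'][0] * x_t + W['f'][1] * h_prev + W['f'][2] * c_prev + b['f'])
    c = f * c_prev + i * np.tanh(W['c'][0] * x_t + W['c'][1] * h_prev + b['c'])
    o = sigmoid(W['o'][0] * x_t + W['o'][1] * h_prev + W['o'][2] * c + b['o'])
    h = o * np.tanh(c)
    return h, c
```

With all weights and biases at zero, every gate opens halfway (σ(0) = 0.5) and the cell state simply decays, which is a quick sanity check on the equations.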
Further, the step of processing the second feature map according to the convolution layer in the attention cycle network to obtain the attention map of the image to be processed includes:
step S231, processing the second feature map according to the convolutional layer in the attention cycle network to obtain an initial attention map;
step S232, determining the initial attention map as a new image to be processed, and performing processing on the image to be processed based on ResNet in the attention circulation network to obtain a first feature map until the number of execution times reaches a preset number of times, so as to obtain the attention map of the image to be processed.
It can be understood that if the image to be processed is processed by only one attention cycle network block, the accuracy of the obtained attention map is low; if it is processed by too many attention cycle network blocks, the accuracy improves but a lot of time is consumed and efficiency drops. In this embodiment, the number of attention cycle network blocks may therefore preferably be 4: processing the image to be processed through 4 blocks yields high accuracy while ensuring high processing efficiency. Specifically, when the convolutional layer in an attention cycle network block receives the second feature map input by the LSTM, it performs convolution on the second feature map to generate an initial attention map of relatively low accuracy. To improve its accuracy, the initial attention map is determined as a new image to be processed and input into the ResNet of the next attention cycle network block, and the steps "processing the image to be processed based on ResNet in the attention cycle network to obtain a first feature map; processing the first feature map according to the LSTM in the attention cycle network to obtain a second feature map; and processing the second feature map according to the convolutional layer in the attention cycle network" are executed until the number of executions reaches a preset number. Since one pass is completed before the loop begins, the preset number is 3; after the loop has run 3 times (4 passes in total, counting the original one), the attention map of the image to be processed is obtained.
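The recurrent refinement described above (k passes, each block refining the previous block's attention map) can be sketched as follows. The callable `block` stands in for one ResNet + LSTM + convolution unit, and starting from an all-zero prior attention map is an assumption.

```python
import numpy as np

def generate_attention(image, block, k=4):
    """Run k attention cycle network blocks; each block refines the
    attention map produced by the previous one (the first block starts
    from a zero prior). 'block' is a stand-in callable for one
    ResNet + LSTM + conv unit."""
    att = np.zeros_like(image)    # no prior attention before pass 1
    for _ in range(k):
        att = block(image, att)
    return att
```

With k = 4, as preferred in this embodiment, the loop corresponds to the original pass plus three refinement passes.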
This embodiment acquires the attention map of the image to be processed based on the attention cycle network; because the image to be processed is handled by several attention cycle network blocks within that network, the accuracy of the produced attention map can be improved while the efficiency of data processing is ensured.
Further, referring to fig. 4, a third embodiment of the AM-GAN-based remote sensing image cloud removing method according to the present invention is provided based on the first embodiment of the AM-GAN-based remote sensing image cloud removing method according to the present invention, and in the third embodiment, the step of performing cloud removing processing on the image to be processed by combining the attention map with a negative residual error network to obtain a target feature map includes:
step S31, processing the image to be processed according to a negative residual error network to obtain a third feature map;
and step S32, carrying out cloud removing processing on the third feature map according to the attention map to obtain a target feature map.
Further, the system first performs standard residual processing on the image to be processed through the standard residual block in the negative residual network. The processed feature data are input into a first convolutional layer, which processes them in combination with a ReLU function; the result is input into a second convolutional layer, likewise combined with a ReLU function; and finally a third convolutional layer performs convolution on the received feature data to obtain the third feature map. A first preset operation (a multiplication operation in this embodiment) is then performed on the third feature map and the attention map to obtain the fourth feature map. The system further acquires a fifth feature map obtained by standard residual processing of the image to be processed, and performs a second preset operation (an addition operation in this embodiment) on the fourth and fifth feature maps to obtain the target feature map.
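The two preset operations reduce to a simple element-wise fusion, which can be sketched directly; `fuse` is a hypothetical helper name for illustration.

```python
import numpy as np

def fuse(third_fm, attention, fifth_fm):
    """Attention-guided fusion: element-wise product of the
    negative-residual feature map with the attention map (first preset
    operation), then addition of the standard-residual feature map
    (second preset operation)."""
    fourth_fm = third_fm * attention   # first preset operation: product
    return fourth_fm + fifth_fm        # second preset operation: sum
```

The product suppresses responses outside the cloud regions highlighted by the attention map, and the addition restores the standard-residual content, yielding the cloud-removed target feature map.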
Further, the cloud removing processing on the third feature map according to the attention map to obtain a target feature map includes:
step S321, performing a first preset operation on the third feature map and the attention map to obtain a fourth feature map;
step S322, acquiring a fifth feature map for standard residual error processing of the image to be processed;
step S323, performing a second preset operation on the fourth feature map and the fifth feature map to obtain a target feature map.
Further, the system performs a first preset operation on the third characteristic diagram and the attention diagram to obtain a fourth characteristic diagram; specifically, the system multiplies the third feature map by the attention map, and obtains a fourth feature map after the multiplication is completed. Meanwhile, the system calls a standard residual block in the negative residual network, and standard residual processing is carried out on the image to be processed serving as input through the standard residual block to obtain a fifth characteristic diagram; further, after the fourth feature map and the fifth feature map are obtained, the system performs a second preset operation on the fourth feature map and the fifth feature map to obtain a target feature map, specifically, the system performs an addition operation on the fourth feature map and the fifth feature map, and obtains a target feature map after cloud removal after the addition operation is completed.
According to the attention map, the to-be-processed image is subjected to cloud removal by combining a negative residual error network obtained by improving a standard residual error network, and a target feature map subjected to cloud removal of the to-be-processed image is obtained, so that the target feature map is not interfered by a cloud layer existing in shooting of the to-be-processed image, and the accuracy of a result obtained by processing a remote sensing image is improved.
Further, referring to fig. 5, fig. 5 is a flowchart of an application scenario of the AM-GAN based remote sensing image cloud removing method in this embodiment. The image to be processed is taken as input and fed into the generator, where it first passes through an attention cycle network formed by a residual network ResNet, a long short-term memory unit LSTM and convolutional layers convs. The attention cycle network in this embodiment includes k attention cycle network blocks, each formed by ResNet, LSTM and convs; the image to be processed is processed cyclically by the k blocks, and the attention map A_k of the image to be processed is obtained after the processing is completed. It can be understood that after the first attention cycle network block finishes processing the image to be processed, an initial attention map A_1 is obtained; A_1 enters the second attention cycle network block, which produces an initial attention map A_2, and so on, until the cyclic processing of all k attention cycle network blocks is complete and the attention map A_k is obtained.
Further, the attention map A_k is taken as input and, in combination with the negative residual network, used to perform cloud removal on the image to be processed. Specifically, while the attention map is being acquired, the image to be processed is handled by the negative residual network consisting of a standard residual block RB, 3 convolutional layers conv, a multiplication operation and an addition operation: standard residual processing is first performed on the image to be processed through the standard residual block RB; the processed feature data are input into the first convolutional layer, which processes them in combination with a ReLU function; the result is input into the second convolutional layer, likewise combined with a ReLU function; and the third convolutional layer performs convolution on the received feature data to obtain the third feature map. The third feature map is multiplied with the attention map to obtain the fourth feature map; the fifth feature map, obtained by standard residual processing of the input image to be processed by the standard residual block RB, is acquired; and the fourth and fifth feature maps are added to obtain the cloud-removed target feature map. The cloud-removed target feature map is then input into a standard residual network formed by two standard residual blocks RB and one convolutional layer conv combined with a ReLU function, where the target feature map is reconstructed to form the cloud-free background image as output.
Further, the cloud-free background image is input into a discriminator comprising 5 CBRs and one convolutional layer; the discriminator discriminates the cloud-free background image and obtains a discrimination result, which is a scalar (a probability value). The similarity between the cloud-free background image and a real cloud-free image can be determined according to this probability value: the higher the probability value, the higher the similarity, and the lower the probability value, the lower the similarity. It can be understood that, besides discriminating the cloud-free background image obtained by cloud removal, the real cloud-free image (ground truth) can also be input into the discriminator for discrimination, and whether the real cloud-free image contains cloud is determined according to the discrimination result.
Further, the invention also provides an AM-GAN based remote sensing image cloud removing device.
Referring to fig. 6, fig. 6 is a functional module schematic diagram of a cloud removing device for remote sensing images based on AM-GAN according to a first embodiment of the present invention.
The AM-GAN based remote sensing image cloud removing device comprises:
the training module 10 is configured to acquire a cloud image group and a non-cloud image group, register a data pair formed by the cloud image group and the non-cloud image group in a preset manner, cut a first image group obtained by the registration into a second image group in a preset format, form a training set according to the second image group, train a cloud image in the training set as an input of a network model, train a non-cloud image in the training set as a training target, and train a network parameter of the network model through a loss function to obtain a target network model, where the target network model includes a generator and an identifier, and the generator at least includes an attention cycle network, a negative residual error network, and a standard residual error block;
the acquisition module 20 is configured to acquire an image to be processed, and acquire an attention map of the image to be processed based on an attention cycle network;
the cloud removing module 30 is configured to perform cloud removing on the image to be processed according to the attention map in combination with a negative residual error network to obtain a target feature map;
the reconstruction module 40 is configured to perform image reconstruction on the target feature map based on the standard residual block to obtain a cloud-free background image;
and the identification module 50 is configured to identify the cloud-free background image to obtain an identification result, and determine whether to complete cloud removal processing on the image to be processed according to the identification result.
Further, the obtaining module 20 includes:
the first processing unit is used for processing the image to be processed based on ResNet in the attention circulation network to obtain a first characteristic diagram;
the second processing unit is used for processing the first characteristic diagram according to the LSTM in the attention circulation network to obtain a second characteristic diagram;
and the third processing unit is used for processing the second feature map according to the convolution layer in the attention circulation network to obtain the attention map of the image to be processed.
Further, the obtaining module 20 further includes:
a receiving unit, configured to receive the first feature map based on an LSTM in the attention cycle network;
the activation unit is used for acquiring an activation function, and activating the gate of the LSTM based on the first characteristic diagram and the activation function to obtain activation data;
and the updating unit is used for updating the storage unit and the hidden state of the LSTM according to the activation data to obtain a second characteristic diagram.
Further, the obtaining module 20 further includes:
a fourth processing unit, configured to process the second feature map according to the convolutional layer in the attention cycle network to obtain an initial attention map;
and the execution unit is used for determining the initial attention map as a new image to be processed, and executing the step of processing the image to be processed based on ResNet in the attention loop network to obtain a first feature map until the execution times reach a preset number to obtain the attention map of the image to be processed.
Further, the cloud removing module 30 includes:
the fifth processing unit is used for processing the image to be processed according to a negative residual error network to obtain a third feature map;
and the sixth processing unit is used for carrying out cloud removing processing on the third feature map according to the attention map to obtain a target feature map.
Further, the cloud removing module 30 further includes:
the first operation unit is used for carrying out first preset operation on the third characteristic diagram and the attention diagram to obtain a fourth characteristic diagram;
the acquisition unit is used for acquiring a fifth feature map for performing standard residual error processing on the image to be processed;
and the second operation unit is used for performing second preset operation on the fourth characteristic diagram and the fifth characteristic diagram to obtain a target characteristic diagram.
Further, the authentication module 50 includes:
and the identification unit is used for inputting the cloud-free background image into a discriminator consisting of a convolutional layer and CBRs (Convolution + Batch normalization + ReLU blocks), and discriminating the cloud-free background image through the discriminator to obtain a discrimination result.
In addition, the present invention also provides a storage medium, which is preferably a computer readable storage medium, on which an AM-GAN based remote sensing image cloud removing program is stored, and when the AM-GAN based remote sensing image cloud removing program is executed by a processor, the steps of the above AM-GAN based remote sensing image cloud removing method embodiments are implemented.
In the embodiments of the AM-GAN-based remote sensing image cloud removing device and the computer readable medium of the present invention, all technical features of the embodiments of the AM-GAN-based remote sensing image cloud removing method are included, and the description and explanation contents are basically the same as those of the embodiments of the AM-GAN-based remote sensing image cloud removing method, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or a part contributing to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), and includes a plurality of instructions for enabling a terminal device (which may be a fixed terminal, such as an internet of things smart device including smart homes, such as a smart air conditioner, a smart lamp, a smart power supply, a smart router, etc., or a mobile terminal, including a smart phone, a wearable networked AR/VR device, a smart sound box, an autonomous driving automobile, etc.) to execute the method according to each embodiment of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope. All equivalent structural or process modifications made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, fall within the scope of the present invention.

Claims (10)

1. An AM-GAN-based remote sensing image cloud removal method, characterized by comprising the following steps:
acquiring a cloud image group and a cloud-free image group, registering the data pairs formed by the cloud image group and the cloud-free image group in a preset manner, cutting the first image group obtained by registration into a second image group of a preset format, and forming a training set from the second image group; taking the cloud images in the training set as the input of a network model and the cloud-free images in the training set as the training target, and training the network parameters of the network model through a loss function to obtain a target network model, wherein the target network model comprises a generator and a discriminator, and the generator comprises at least an attention cycle network, a negative residual network, and standard residual blocks;
acquiring an image to be processed, and obtaining an attention map of the image to be processed based on the attention cycle network;
performing cloud removal processing on the image to be processed according to the attention map in combination with the negative residual network to obtain a target feature map;
performing image reconstruction on the target feature map based on the standard residual blocks to obtain a cloud-free background image;
and discriminating the cloud-free background image to obtain a discrimination result, and determining, according to the discrimination result, whether the cloud removal processing on the image to be processed is complete.
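The overall flow of claim 1 (attention estimation, attention-guided cloud removal, then reconstruction) can be sketched as a toy NumPy illustration. Everything below is a stand-in, not the patented network: the brightness-based attention and the subtractive cloud-layer estimate are illustrative assumptions.

```python
import numpy as np

def attention_map(cloud_img):
    # Hypothetical stand-in for the attention cycle network:
    # brighter-than-average pixels are treated as likely cloud.
    gray = cloud_img.mean(axis=2)
    rng = gray.max() - gray.min()
    return (gray - gray.min()) / (rng + 1e-8)  # in [0, 1], high where cloud is suspected

def remove_cloud(cloud_img, att):
    # Negative-residual-style step (sketch): subtract an
    # attention-weighted estimate of the cloud layer.
    cloud_layer = cloud_img * att[..., None]
    return np.clip(cloud_img - cloud_layer, 0.0, 1.0)

img = np.random.rand(64, 64, 3)       # toy "cloudy" image in [0, 1]
att = attention_map(img)
background = remove_cloud(img, att)
print(background.shape)               # (64, 64, 3)
```

A real implementation would learn both the attention map and the restoration jointly with adversarial and reconstruction losses; this sketch only mirrors the order of operations in the claim.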
2. The AM-GAN-based remote sensing image cloud removal method according to claim 1, wherein the attention cycle network comprises a ResNet, an LSTM, and convolutional layers, and the step of obtaining an attention map of the image to be processed based on the attention cycle network comprises:
processing the image to be processed based on the ResNet in the attention cycle network to obtain a first feature map;
processing the first feature map according to the LSTM in the attention cycle network to obtain a second feature map;
and processing the second feature map according to the convolutional layers in the attention cycle network to obtain the attention map of the image to be processed.
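The three stages of claim 2 (ResNet, then LSTM, then convolutional layers) can be walked through with drastically simplified stand-ins; only the data flow and shapes are illustrative, and every stage function here is an assumption, not the patented architecture.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D convolution (toy helper)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def resnet_stage(x, k):
    return x + conv2d_same(x, k)                     # residual skip connection

def lstm_stage(x, h):
    gate = 1.0 / (1.0 + np.exp(-x))                  # sigmoid gate (sketch)
    return gate * np.tanh(x) + (1.0 - gate) * h      # blend with previous state

def conv_stage(x, k):
    return 1.0 / (1.0 + np.exp(-conv2d_same(x, k)))  # attention values in (0, 1)

img = np.random.rand(16, 16)                         # single-band toy image
k = np.full((3, 3), 1.0 / 9.0)                       # mean filter as toy kernel
first = resnet_stage(img, k)                         # "first feature map"
second = lstm_stage(first, np.zeros_like(first))     # "second feature map"
attention = conv_stage(second, k)                    # final attention map
print(attention.shape)                               # (16, 16)
```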
3. The AM-GAN-based remote sensing image cloud removal method according to claim 2, wherein the step of processing the first feature map according to the LSTM in the attention cycle network to obtain a second feature map comprises:
receiving the first feature map based on the LSTM in the attention cycle network;
acquiring an activation function, and activating the gates of the LSTM based on the first feature map and the activation function to obtain activation data;
and updating the memory cell and the hidden state of the LSTM according to the activation data to obtain the second feature map.
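The gate activation and state update in claim 3 follow the standard LSTM equations. A minimal per-pixel NumPy sketch, with toy random weights `W` (the actual patent would use convolutional LSTM weights learned during training):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM step: W maps concat(x, h) to the four gate pre-activations."""
    z = np.concatenate([x, h], axis=-1) @ W          # (..., 4*d)
    i, f, g, o = np.split(z, 4, axis=-1)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)     # input/forget/output gates
    g = np.tanh(g)                                   # candidate cell state
    c_new = f * c + i * g                            # update memory cell
    h_new = o * np.tanh(c_new)                       # update hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((4, 4, d))                   # toy "first feature map"
h = np.zeros((4, 4, d))
c = np.zeros((4, 4, d))
W = rng.standard_normal((2 * d, 4 * d)) * 0.1        # toy weights
h, c = lstm_step(x, h, c, W)                         # h is the "second feature map"
print(h.shape)                                       # (4, 4, 8)
```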
4. The AM-GAN-based remote sensing image cloud removal method according to claim 2, wherein the step of processing the second feature map according to the convolutional layers in the attention cycle network to obtain the attention map of the image to be processed comprises:
processing the second feature map according to the convolutional layers in the attention cycle network to obtain an initial attention map;
and taking the initial attention map as a new image to be processed, and repeating the step of processing the image to be processed based on the ResNet in the attention cycle network to obtain a first feature map, until the number of iterations reaches a preset number, thereby obtaining the attention map of the image to be processed.
5. The AM-GAN-based remote sensing image cloud removal method according to claim 1, wherein the step of performing cloud removal processing on the image to be processed according to the attention map in combination with the negative residual network to obtain a target feature map comprises:
processing the image to be processed according to the negative residual network to obtain a third feature map;
and performing cloud removal processing on the third feature map according to the attention map to obtain the target feature map.
6. The AM-GAN-based remote sensing image cloud removal method according to claim 5, wherein the step of performing cloud removal processing on the third feature map according to the attention map to obtain the target feature map comprises:
performing a first preset operation on the third feature map and the attention map to obtain a fourth feature map;
acquiring a fifth feature map obtained by standard residual processing of the image to be processed;
and performing a second preset operation on the fourth feature map and the fifth feature map to obtain the target feature map.
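The claim leaves the two "preset operations" unspecified. One common reading in attention-guided restoration networks, shown here purely as an assumption, takes the first operation as element-wise multiplication with the attention map and the second as element-wise addition (a residual-style merge):

```python
import numpy as np

def fuse(third_fm, attention, fifth_fm):
    # Hedged reading of claim 6: multiply, then add. Both operations
    # are assumptions; the patent only says "preset operation".
    fourth_fm = third_fm * attention[..., None]   # focus features on cloud regions
    target_fm = fourth_fm + fifth_fm              # merge with standard-residual features
    return target_fm

third = np.random.rand(32, 32, 16)                # toy "third feature map"
att = np.random.rand(32, 32)                      # toy attention map
fifth = np.random.rand(32, 32, 16)                # toy "fifth feature map"
out = fuse(third, att, fifth)
print(out.shape)                                  # (32, 32, 16)
```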
7. The AM-GAN-based remote sensing image cloud removal method according to claim 1, wherein the step of discriminating the cloud-free background image to obtain a discrimination result comprises:
inputting the cloud-free background image into a discriminator composed of convolutional layers and CBR (convolution-batch normalization-ReLU) modules, and discriminating the cloud-free background image through the discriminator to obtain the discrimination result.
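A discriminator built from CBR-style blocks can be sketched as a stack of (convolution, batch normalization, ReLU) stages pooled into a real/fake probability. The block below is a toy stand-in: the "convolution" is a 1x1-equivalent channel mixing and the normalization is over the whole tensor, neither of which is claimed by the patent.

```python
import numpy as np

def cbr_block(x, w, b):
    """One toy CBR stage: channel mixing -> normalization -> ReLU."""
    y = x @ w + b                                   # 1x1-conv equivalent on channels
    y = (y - y.mean()) / (y.std() + 1e-5)           # batch-norm-style normalization
    return np.maximum(y, 0.0)                       # ReLU

def discriminate(img, params):
    feat = img
    for w, b in params:                             # stacked CBR blocks
        feat = cbr_block(feat, w, b)
    score = 1.0 / (1.0 + np.exp(-feat.mean()))      # sigmoid on pooled features
    return score                                    # probability the image is "real"

rng = np.random.default_rng(1)
img = rng.random((16, 16, 3))                       # toy cloud-free background image
params = [(rng.standard_normal((3, 8)) * 0.1, np.zeros(8)),
          (rng.standard_normal((8, 4)) * 0.1, np.zeros(4))]
p = discriminate(img, params)
print(0.0 < p < 1.0)                                # True
```

In training, this score would feed the adversarial loss that decides whether cloud removal is complete.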
8. An AM-GAN-based remote sensing image cloud removal device, characterized by comprising:
a training module, configured to acquire a cloud image group and a cloud-free image group, register the data pairs formed by the cloud image group and the cloud-free image group in a preset manner, cut the first image group obtained by registration into a second image group of a preset format, and form a training set from the second image group; and further configured to take the cloud images in the training set as the input of a network model and the cloud-free images as the training target, and train the network parameters of the network model through a loss function to obtain a target network model, wherein the target network model comprises a generator and a discriminator, and the generator comprises at least an attention cycle network, a negative residual network, and standard residual blocks;
an acquisition module, configured to acquire an image to be processed and obtain an attention map of the image to be processed based on the attention cycle network;
a cloud removal module, configured to perform cloud removal processing on the image to be processed according to the attention map in combination with the negative residual network to obtain a target feature map;
a reconstruction module, configured to perform image reconstruction on the target feature map based on the standard residual blocks to obtain a cloud-free background image;
and a discrimination module, configured to discriminate the cloud-free background image to obtain a discrimination result, and determine, according to the discrimination result, whether the cloud removal processing on the image to be processed is complete.
9. An AM-GAN-based remote sensing image cloud removal device, characterized in that the device comprises a memory, a processor, and an AM-GAN-based remote sensing image cloud removal program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the AM-GAN-based remote sensing image cloud removal method according to any one of claims 1 to 7.
10. A storage medium, characterized in that an AM-GAN-based remote sensing image cloud removal program is stored on the storage medium, and the program, when executed by a processor, implements the steps of the AM-GAN-based remote sensing image cloud removal method according to any one of claims 1 to 7.
CN202011463223.3A 2020-12-10 2020-12-10 Remote sensing image cloud removing method, device, equipment and storage medium based on AM-GAN Active CN112561817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011463223.3A CN112561817B (en) 2020-12-10 2020-12-10 Remote sensing image cloud removing method, device, equipment and storage medium based on AM-GAN


Publications (2)

Publication Number Publication Date
CN112561817A true CN112561817A (en) 2021-03-26
CN112561817B CN112561817B (en) 2023-05-12

Family

ID=75062936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011463223.3A Active CN112561817B (en) 2020-12-10 2020-12-10 Remote sensing image cloud removing method, device, equipment and storage medium based on AM-GAN

Country Status (1)

Country Link
CN (1) CN112561817B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022222A (en) * 2017-12-15 2018-05-11 Northwestern Polytechnical University Thin cloud removal method for remote sensing images based on a convolution-deconvolution network
CN108921799A (en) * 2018-06-22 2018-11-30 Northwestern Polytechnical University Thin cloud removal method for remote sensing images based on a multi-scale collaborative learning convolutional neural network
CN109300090A (en) * 2018-08-28 2019-02-01 Harbin Institute of Technology (Weihai) Single-image defogging method based on sub-pixel and conditional adversarial generation network
CN110033410A (en) * 2019-03-28 2019-07-19 Huazhong University of Science and Technology Image reconstruction model training method, image super-resolution reconstruction method and device
US20190294108A1 (en) * 2018-03-21 2019-09-26 The Regents Of The University Of California Method and system for phase recovery and holographic image reconstruction using a neural network
CN111145112A (en) * 2019-12-18 2020-05-12 East China Normal University Two-stage image rain removal method and system based on a residual adversarial refinement network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHU Qing; HOU Enbing: "Cloud Detection Method for Remote Sensing Images Based on Generative Adversarial Networks", Geospatial Information *
JIA Sen et al.: "Superpixel-Level Gabor Feature Fusion Method for Hyperspectral Image Classification", Journal of Nanjing University of Information Science & Technology (Natural Science Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838064A (en) * 2021-09-23 2021-12-24 Harbin Engineering University Cloud removal method based on branch GAN using multi-temporal remote sensing data
CN113838064B (en) * 2021-09-23 2023-12-22 哈尔滨工程大学 Cloud removal method based on branch GAN using multi-temporal remote sensing data

Also Published As

Publication number Publication date
CN112561817B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN110569901A (en) Channel selection-based countermeasure elimination weak supervision target detection method
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
CN110728295B (en) Semi-supervised landform classification model training and landform graph construction method
JP2023533907A (en) Image processing using self-attention-based neural networks
CN110838095B (en) Single image rain removing method and system based on cyclic dense neural network
CN115345866B (en) Building extraction method in remote sensing image, electronic equipment and storage medium
CN116342884B (en) Image segmentation and model training method and server
CN111967598A (en) Neural network compression method, device, equipment and computer readable storage medium
CN113378897A (en) Neural network-based remote sensing image classification method, computing device and storage medium
CN112561817A (en) Remote sensing image cloud removing method, device and equipment based on AM-GAN and storage medium
CN111242183A (en) Image identification and classification method and device based on attention mechanism
CN112069412B (en) Information recommendation method, device, computer equipment and storage medium
CN117422936A (en) Remote sensing image classification method and system
CN111860287A (en) Target detection method and device and storage medium
CN111583417A (en) Method and device for constructing indoor VR scene with combined constraint of image semantics and scene geometry, electronic equipment and medium
CN116629375A (en) Model processing method and system
CN116563748A (en) Height measuring method and system for high-rise construction building
CN116363457A (en) Task processing, image classification and data processing method of task processing model
CN115223181A (en) Text detection-based method and device for recognizing characters of seal of report material
CN111695470A (en) Visible light-near infrared pedestrian re-identification method based on depth feature orthogonal decomposition
CN117273963B (en) Risk identification method and device based on car insurance scene
CN117292120B (en) Light-weight visible light insulator target detection method and system
CN117789222A (en) Lightweight label digital identification method, system, equipment and medium
CN117152618A (en) Method and device for detecting time-sensitive target change in remote sensing image
CN116823385A (en) Commodity recommendation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant