CN113516600B - Remote sensing image thin cloud removing method based on characteristic self-adaptive correction - Google Patents


Info

Publication number
CN113516600B
Authority
CN
China
Prior art keywords
cloud
image
network
remote sensing
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110615563.1A
Other languages
Chinese (zh)
Other versions
CN113516600A (en)
Inventor
刘宇航
王晓宇
杨志
佘玉成
张严
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Dongfanghong Satellite Co Ltd
Original Assignee
Aerospace Dongfanghong Satellite Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Dongfanghong Satellite Co Ltd filed Critical Aerospace Dongfanghong Satellite Co Ltd
Priority to CN202110615563.1A priority Critical patent/CN113516600B/en
Publication of CN113516600A publication Critical patent/CN113516600A/en
Application granted granted Critical
Publication of CN113516600B publication Critical patent/CN113516600B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T5/73 — Deblurring; Sharpening (Image enhancement or restoration)
    • G06F18/253 — Fusion techniques of extracted features (Pattern recognition)
    • G06N3/045 — Combinations of networks (Neural network architectures)
    • G06N3/048 — Activation functions
    • G06N3/08 — Learning methods
    • G06T2207/10032 — Satellite or aerial image; Remote sensing
    • G06T2207/30181 — Earth observation
    • G06T2207/30192 — Weather; Meteorology


Abstract

The invention discloses a method for removing thin cloud from remote sensing images based on feature adaptive correction. A thin cloud removal network is designed with a deep convolutional neural network to remove thin cloud from a single image: the network takes a cloudy image as input and outputs a cloud-free image. A training data set is constructed from cloudy and cloud-free remote sensing images and is used by the network to learn the mapping between cloudy and cloud-free images. The thin cloud removal network is trained on this data set, and the network is optimized with a combination of the mean loss function, the VGG loss function and the least-absolute-deviation loss function; after training, the method removes thin cloud from a single remote sensing image. The method depends on neither prior knowledge nor a physical model, completes end-to-end thin cloud removal from only a single cloud-containing remote sensing image, and has a wide application range, simple implementation, and a realistic, natural cloud-removal effect.

Description

Remote sensing image thin cloud removing method based on characteristic self-adaptive correction
Technical Field
The invention relates to the technical fields of artificial intelligence and image processing, and in particular to a method for removing thin cloud from remote sensing images based on feature adaptive correction.
Background
Optical images captured by remote sensing satellites record ground information and are of great convenience in fields such as weather forecasting and disaster early warning. However, optical sensors are affected by the atmospheric environment, and by cloud most severely: when the captured remote sensing image is occluded by cloud, subsequent uses of the image, such as target detection, target localization and data fusion, are limited. Research on cloud removal therefore has practical significance and value.
Traditional thin cloud removal techniques for optical remote sensing images mainly comprise filtering methods, prior-knowledge methods and the like. Filtering methods typically use homomorphic filtering, which transforms the image from the spatial domain to the frequency domain; because cloud affects only the low-frequency information, a filter function that suppresses low frequencies and enhances high frequencies removes the thin cloud. Prior-knowledge methods analyze the imaging mechanism of the sensor, propose assumed priors from experience and analysis, and infer the cloud-free remote sensing image from those priors; examples include the dark channel prior and dark-target subtraction. The dark channel prior assumes that, in local regions outside the sky, pixels of a cloud-free image have one channel close to zero, and realizes defogging based on an atmospheric scattering model. Dark-target subtraction assumes that the image is composed of surface reflection and atmospheric scattering; the radiance that dark targets such as mountain shadows and dense vegetation contribute at the sensor comes almost entirely from atmospheric scattering, and when cloud is uniformly distributed their radiance comprises both the atmospheric scattering and the cloud, so subtracting the dark-target radiance removes the cloud. These methods have obvious shortcomings: their applicability is limited, and in some cases the prior knowledge fails and the result is unsatisfactory.
In recent years, with the rapid development of artificial intelligence, convolutional neural networks have become a first-choice technique for researchers owing to their strong feature-learning ability. Deep learning has achieved good results in image classification, target detection, face recognition and other tasks, and many researchers have applied deep-learning-based thin cloud removal algorithms to optical remote sensing images with good effect. Deep-learning-based thin cloud removal has become a popular research direction; such algorithms perform better and avoid the drawbacks of hand-designed features and empirical rules.
From the above analysis, traditional thin cloud removal algorithms have obvious limitations, whereas an end-to-end deep-learning thin cloud removal algorithm for remote sensing images can fit the mapping between thin-cloud remote sensing images and clear remote sensing images, so the cloud-removal result is more natural and realistic.
Disclosure of Invention
The technical problem solved by the invention is as follows: a method for removing thin cloud from remote sensing images based on feature adaptive correction, in which multi-layer feature interaction units complete the feature extraction and reconstruction work, and a feature adaptive calibration mechanism recalibrates image features according to their importance to thin cloud removal, realizing self-calibration of the image features. The method depends on neither prior knowledge nor a physical model; by training on and learning the relationship between thin-cloud and cloud-free remote sensing images, it removes thin cloud from a single remote sensing image, as described in detail below:
the technical scheme of the invention is as follows: a method for removing thin cloud of a remote sensing image based on feature adaptive correction, the method comprising:
a thin cloud removal network is designed with a deep convolutional neural network to remove thin cloud from a single image; the network takes a cloudy image as input and outputs a cloud-free image; a training data set is constructed from cloudy and cloud-free remote sensing images and is used by the network to learn the mapping between cloudy and cloud-free images;
the thin cloud removal network consists of a feature extraction sub-network F1 and a feature restoration sub-network F2; F1 is responsible for extracting features related to thin cloud, and F2 is responsible for reconstructing the cloud-free remote sensing image; both F1 and F2 are built from multi-layer feature interaction units (MFU), and information is exchanged between the sub-networks through feature adaptive calibration units (FCU);
the multi-layer feature interaction unit MFU is responsible for extracting thin-cloud features under different receptive fields, and designs several branches to integrate and exchange image features of different levels;
the feature adaptive calibration unit FCU adaptively assigns each image feature a weight according to its importance to thin cloud removal, completing feature recalibration, so that the feature extraction sub-network F1 delivers the features valuable for thin cloud removal to the feature restoration sub-network F2 to complete reconstruction of the cloud-free image;
the thin cloud removal network is trained on the training data set, and the network is optimized with a combination of the mean loss function, the VGG loss function and the least-absolute-deviation loss function; after training, the method removes thin cloud from a single remote sensing image.
Wherein, the training data set specifically comprises:
an end-to-end thin cloud removal method for remote sensing images is designed with a deep convolutional neural network; the algorithm does not depend on a physical model. The thin cloud removal network removes thin cloud from a single image: the network input is a cloudy remote sensing image and the network output is a cloud-free remote sensing image;
p cloud-free remote sensing images are collected and a cloud-adding operation is applied to them to obtain p cloudy remote sensing images; the cloud-free and cloudy images are cut at the same positions into blocks of size N×N; blocks with little information content are removed, and q corresponding pairs of cloudy and cloud-free blocks are selected from the remaining blocks to form the training data set, expressed as {J_i, H_i}, i ∈ {1, 2, …, q}, which the network uses to learn the mapping between cloudy and cloud-free images, where N, p and q are all positive integers.
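The patch-construction step above can be sketched in NumPy as follows. The block size n and the criterion for "blocks with little information content" (here, a standard-deviation threshold on the cloud-free block) are illustrative assumptions, since the patent fixes neither:

```python
import numpy as np

def make_training_pairs(cloud_free, cloudy, n=64, std_thresh=5.0):
    """Cut a paired cloud-free / cloudy image into aligned n x n blocks and
    drop blocks whose cloud-free content carries little information
    (low standard deviation).  Returns a list of (J_i, H_i) pairs, where
    J_i is a cloudy block and H_i the matching cloud-free block."""
    pairs = []
    h, w = cloud_free.shape[:2]
    for y in range(0, h - n + 1, n):
        for x in range(0, w - n + 1, n):
            H_blk = cloud_free[y:y + n, x:x + n]
            J_blk = cloudy[y:y + n, x:x + n]
            if H_blk.std() >= std_thresh:   # keep informative blocks only
                pairs.append((J_blk, H_blk))
    return pairs
```

In practice the cloud-adding operation that produces the cloudy counterpart would be applied before this cropping step, exactly as the patent describes.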
The thin cloud removal network specifically comprises:
the thin cloud removal network consists of a feature extraction sub-network F1 (Feature Extraction Sub-network, FES) and a feature restoration sub-network F2 (Feature Restoration Sub-network, FRS); the two sub-networks F1 and F2 realize the extraction and reconstruction of image features;
the feature extraction sub-network F1 is responsible for extracting features related to thin cloud; strided convolution reduces the feature-map size and increases the number of features, and since thin cloud affects only the low-frequency features of the image, the network extracts more low-frequency features for the subsequent image restoration. F1 consists of r convolution layers whose numbers of convolution kernels are {2^0·k, 2^1·k, …, 2^(r−1)·k}; every kernel is of size e×e and every convolution stride is f, where r, k, e and f are positive integers; the activation function is the LReLU function with slope β, β ∈ (0, 1);
the feature restoration sub-network F2 is responsible for reconstructing the cloud-free remote sensing image; transposed convolution enlarges the feature-map size, and the image features extracted by F1 are integrated to reconstruct and restore the cloud-free image. F2 consists of r convolution layers whose numbers of convolution kernels are {2^(r−1)·k, …, 2^1·k, 2^0·k}; every kernel is of size e×e and every convolution stride is g, where r, k, e and g are positive integers; the activation function is the ReLU function;
both the feature extraction sub-network F1 and the feature restoration sub-network F2 are built from multi-layer feature interaction units (MFU), and information is exchanged between the sub-networks through feature adaptive calibration units (FCU).
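The layer arithmetic described above can be illustrated with two small helpers; the "same"-style padding of (e − 1)//2 is an assumption, since the patent does not state a padding scheme:

```python
def fes_channel_widths(r, k):
    """Channel counts of the r convolution layers in the feature-extraction
    sub-network F1: {2^0*k, 2^1*k, ..., 2^(r-1)*k}.  F2 uses the reverse
    ordering, shrinking back to k channels before the final output."""
    return [2 ** i * k for i in range(r)]

def strided_conv_out(size, e, f):
    """Spatial size after a stride-f convolution with an e x e kernel,
    assuming symmetric padding of (e - 1) // 2 on each side."""
    pad = (e - 1) // 2
    return (size + 2 * pad - e) // f + 1
```

For example, with r = 4, k = 16 the encoder widths are 16, 32, 64, 128, while a 3×3, stride-2 layer halves a 64-pixel feature map to 32 pixels.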
Wherein, the multi-layer characteristic interaction unit specifically comprises:
the multi-layer feature interaction unit (Multi-layer Feature interaction Unit, MFU) is responsible, within the feature extraction sub-network F1, for extracting thin-cloud features at multiple scales, and, within the feature restoration sub-network F2, for integrating image features to complete reconstruction of the cloud-free remote sensing image;
the MFU extracts image features under multiple receptive fields with m dilated convolutions of different dilation rates {1, 3, …, 2m−1}, m ≥ 1; each dilated convolution has a kernel of size d×d and the ReLU activation function; with the number of parameters unchanged, the receptive field is enlarged to obtain features at larger scales;
the MFU designs n branches to integrate and exchange image features of different levels, so that the network fuses image features of multiple scales.
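The receptive-field claim above can be checked numerically: a d×d kernel with dilation rate t covers an extent of d + (d−1)(t−1) pixels while keeping only d·d parameters. A small sketch (function names are illustrative):

```python
def dilated_kernel_extent(d, rate):
    """Effective spatial extent of a d x d convolution kernel with the
    given dilation rate: d + (d - 1) * (rate - 1)."""
    return d + (d - 1) * (rate - 1)

def mfu_branch_extents(d, m):
    """Effective kernel extents of the m MFU branches, whose dilation
    rates are {1, 3, ..., 2m - 1} as stated in the patent."""
    return [dilated_kernel_extent(d, 2 * i + 1) for i in range(m)]
```

With d = 3 and m = 3 the three branches see 3-, 7- and 11-pixel extents from the same nine weights each, which is exactly why stacking rates {1, 3, 5} captures features at several scales for free.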
The characteristic self-adaptive calibration unit specifically comprises:
the feature adaptive calibration unit (Feature adaptive Calibration Unit, FCU) recognizes that the channels of an image feature differ in their importance to thin cloud removal; it therefore optimizes the image features along the channel dimension, adaptively learning a weight coefficient for each feature channel to complete the recalibration of channel features and improve the expressive power of the image features as much as possible;
the FCU uses global average pooling to condense each channel's features into a single value and learns each channel's weight coefficient with two fully connected layers, whose numbers of output nodes are C/σ and C respectively, where C is the number of channels of the input feature map and σ is a positive integer; the activation functions are the ReLU and Sigmoid functions respectively; finally, each obtained weight coefficient is multiplied with its corresponding channel to complete the adaptive calibration of the image features;
the features favourable to thin cloud removal extracted by the feature extraction sub-network F1 are adaptively recalibrated by the FCU and then sent to the feature restoration sub-network F2 for integration and reconstruction of the cloud-free image; this helps the network select the features most valuable for thin cloud removal to reconstruct the image, continuously adjusting the feature maps and optimizing the network's cloud-removal effect.
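The FCU described above follows a squeeze-and-excitation pattern. A minimal NumPy forward pass, with the learned weights passed in as plain matrices (an illustrative simplification of the two fully connected layers), might look like:

```python
import numpy as np

def fcu(feature_map, w1, b1, w2, b2):
    """Feature adaptive calibration over channels (a sketch).
    feature_map: (C, H, W); w1: (C//sigma, C); w2: (C, C//sigma).
    Global-average-pools each channel to one value, passes the vector
    through two fully connected layers (ReLU, then Sigmoid), and rescales
    each channel of the input by its learned weight."""
    z = feature_map.mean(axis=(1, 2))                 # squeeze: (C,)
    hdn = np.maximum(w1 @ z + b1, 0.0)                # FC layer 1 + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ hdn + b2)))        # FC layer 2 + Sigmoid
    return feature_map * s[:, None, None]             # recalibrate channels
```

Because the second activation is a sigmoid, every channel weight lies in (0, 1), so the unit can only attenuate channels relative to one another, which is the recalibration behaviour the patent describes.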
The mean loss function is specifically as follows:
L_M = ||ω(H) − ω(R(J))||_1
where ω(·) denotes the average over a window of b×b, b is a positive integer, R(·) denotes the output of the thin cloud removal network, H denotes the cloud-free remote sensing image, J denotes the cloudy remote sensing image, and R(J) denotes the cloud-removed remote sensing image.
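A minimal NumPy sketch of the mean loss; the use of non-overlapping windows and the cropping of borders not divisible by b are assumptions, since the patent only specifies "a window of b×b":

```python
import numpy as np

def mean_loss(H, RJ, b=8):
    """L_M = || omega(H) - omega(R(J)) ||_1, where omega(.) averages the
    image over non-overlapping b x b windows (window handling assumed)."""
    def omega(img):
        h, w = img.shape
        # crop to a multiple of b, then block-average via reshape
        return img[:h - h % b, :w - w % b].reshape(
            h // b, b, w // b, b).mean(axis=(1, 3))
    return np.abs(omega(H) - omega(RJ)).sum()
```

Averaging before differencing makes this term insensitive to fine texture and sensitive to the smooth brightness shifts that thin cloud introduces.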
The VGG loss function is specifically as follows:
L_V = (1/(w·h·c)) Σ_{i,j,k} (φ(H)_{i,j,k} − φ(R(J))_{i,j,k})^2
where φ(·) denotes the output of layer 2-2 of the VGG16 network, w, h and c denote the width, height and number of the feature maps, and i, j and k index the width, height and number of the feature maps.
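Given feature maps already extracted by φ, the normalization in the VGG loss reduces to a few lines. The sketch below assumes precomputed features (the VGG16 network itself is not reproduced here) and the squared-difference form:

```python
import numpy as np

def vgg_loss(phi_H, phi_RJ):
    """Perceptual loss between precomputed feature maps phi(H) and
    phi(R(J)), each of shape (w, h, c), normalized by w * h * c.
    In the patent, phi is the layer 2-2 output of VGG16."""
    w, h, c = phi_H.shape
    return ((phi_H - phi_RJ) ** 2).sum() / (w * h * c)
```

Comparing layer 2-2 activations rather than raw pixels penalizes perceptual differences (edges, textures) that a pixel-wise loss can miss, which is why this term is combined with the L1 terms below.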
The minimum absolute value deviation loss function is specifically:
L_L = ||H − R(J)||_1
the technical scheme provided by the invention has the beneficial effects that:
1. The method depends on neither prior knowledge nor a physical model, and completes thin cloud removal from remote sensing images without auxiliary conditions;
2. The method realizes end-to-end thin cloud removal for remote sensing images, and recovers a clear image from only a single cloud-containing image;
3. The method has a wide application range, is simple to implement, and its cloud-removal effect is realistic and natural.
Drawings
FIG. 1 is a flow chart of a method for removing thin cloud of a remote sensing image based on feature adaptive correction;
FIG. 2 is a schematic diagram of a thin cloud removal network model structure;
FIG. 3 is a schematic diagram of a multi-layer feature interaction unit structure;
FIG. 4 is a schematic diagram of a feature adaptive calibration unit;
FIG. 5 shows a cloudy remote sensing image and the corresponding cloud-free remote sensing image from the experimental results;
FIG. 6 shows another cloudy remote sensing image and cloud-free remote sensing image from the experimental results.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in further detail below.
Example 1
In order to achieve a natural cloud-removal effect on remote sensing images, an embodiment of the invention provides a method for removing thin cloud from remote sensing images based on feature adaptive correction, described in detail below with reference to FIG. 1:
a method for removing thin cloud of a remote sensing image based on feature adaptive correction, the method comprising:
101: a thin cloud removal network is designed with a deep convolutional neural network to remove thin cloud from a single image; the network takes a cloudy image as input and outputs a cloud-free image; a training data set is constructed from cloudy and cloud-free remote sensing images and is used by the network to learn the mapping between cloudy and cloud-free images;
102: the thin cloud removal network consists of a feature extraction sub-network F1 and a feature restoration sub-network F2; F1 is responsible for extracting features related to thin cloud, and F2 is responsible for reconstructing the cloud-free remote sensing image; both F1 and F2 are built from multi-layer feature interaction units (MFU), and information is exchanged between the sub-networks through feature adaptive calibration units (FCU);
103: the multi-layer feature interaction unit MFU is responsible for extracting thin-cloud features under different receptive fields, and designs several branches to integrate and exchange image features of different levels;
104: the feature adaptive calibration unit FCU adaptively assigns each image feature a weight according to its importance to thin cloud removal, completing feature recalibration, so that the feature extraction sub-network F1 delivers the features valuable for thin cloud removal to the feature restoration sub-network F2 to complete reconstruction of the cloud-free image;
105: the thin cloud removal network is trained on the training data set, and the network is optimized with a combination of the mean loss function, the VGG loss function and the least-absolute-deviation loss function; after training, the method removes thin cloud from a single remote sensing image.
The specific steps implemented in step 101 are as follows:
1) An end-to-end thin cloud removal method for remote sensing images is designed with a deep convolutional neural network; the algorithm does not depend on a physical model, as shown in FIG. 1. The thin cloud removal network removes thin cloud from a single image: the network input is a cloudy remote sensing image and the network output is a cloud-free remote sensing image;
2) p cloud-free remote sensing images are collected and a cloud-adding operation is applied to them to obtain p cloudy remote sensing images; the cloud-free and cloudy images are cut at the same positions into blocks of size N×N; blocks with little information content are removed, and q corresponding pairs of cloudy and cloud-free blocks are selected from the remaining blocks to form the training data set, expressed as {J_i, H_i}, i ∈ {1, 2, …, q}, which the network uses to learn the mapping between cloudy and cloud-free images, where N, p and q are all positive integers.
The specific steps of step 102 are as follows:
1) The thin cloud removal network consists of a feature extraction sub-network F1 (Feature Extraction Sub-network, FES) and a feature restoration sub-network F2 (Feature Restoration Sub-network, FRS); the two sub-networks F1 and F2 realize the extraction and reconstruction of image features; the specific structure is shown in FIG. 2;
2) The feature extraction sub-network F1 is responsible for extracting features related to thin cloud; strided convolution reduces the feature-map size and increases the number of features, and since thin cloud affects only the low-frequency features of the image, the network extracts more low-frequency features for the subsequent image restoration, as shown in FIG. 2. F1 consists of r convolution layers whose numbers of convolution kernels are {2^0·k, 2^1·k, …, 2^(r−1)·k}; every kernel is of size e×e and every convolution stride is f, where r, k, e and f are positive integers; the activation function is the LReLU function with slope β, β ∈ (0, 1);
3) The feature restoration sub-network F2 is responsible for reconstructing the cloud-free remote sensing image; transposed convolution enlarges the feature-map size, and the image features extracted by F1 are integrated to reconstruct and restore the cloud-free image, as shown in FIG. 2. F2 consists of r convolution layers whose numbers of convolution kernels are {2^(r−1)·k, …, 2^1·k, 2^0·k}; every kernel is of size e×e and every convolution stride is g, where r, k, e and g are positive integers; the activation function is the ReLU function;
4) Both the feature extraction sub-network F1 and the feature restoration sub-network F2 are built from multi-layer feature interaction units (MFU), and information is exchanged between the sub-networks through feature adaptive calibration units (FCU).
The specific steps of step 103 are as follows:
1) The multi-layer feature interaction unit (Multi-layer Feature interaction Unit, MFU) is responsible, within the feature extraction sub-network F1, for extracting thin-cloud features at multiple scales, and, within the feature restoration sub-network F2, for integrating image features to complete reconstruction of the cloud-free remote sensing image; the specific structure is shown in FIG. 3;
2) The MFU extracts image features under multiple receptive fields with m dilated convolutions of different dilation rates {1, 3, …, 2m−1}, m ≥ 1; each dilated convolution has a kernel of size d×d and the ReLU activation function; with the number of parameters unchanged, the receptive field is enlarged to obtain features at larger scales;
3) The MFU designs n branches to integrate and exchange image features of different levels, so that the network fuses image features of multiple scales.
The specific steps of step 104 are as follows:
1) The feature adaptive calibration unit (Feature adaptive Calibration Unit, FCU) recognizes that the channels of an image feature differ in their importance to thin cloud removal; it therefore optimizes the image features along the channel dimension, adaptively learning a weight coefficient for each feature channel to complete the recalibration of channel features and improve the expressive power of the image features as much as possible;
2) The FCU uses global average pooling to condense each channel's features into a single value and learns each channel's weight coefficient with two fully connected layers, whose numbers of output nodes are C/σ and C respectively, where C is the number of channels of the input feature map and σ is a positive integer; the activation functions are the ReLU and Sigmoid functions respectively; finally, each obtained weight coefficient is multiplied with its corresponding channel to complete the adaptive calibration of the image features; the specific structure is shown in FIG. 4;
3) The features favourable to thin cloud removal extracted by the feature extraction sub-network F1 are adaptively recalibrated by the FCU and then sent to the feature restoration sub-network F2 for integration and reconstruction of the cloud-free image; this helps the network select the features most valuable for thin cloud removal to reconstruct the image, continuously adjusting the feature maps and optimizing the network's cloud-removal effect.
The specific steps of step 105 are as follows:
1) The thin cloud removal network is trained on the training data set of step 101 and is optimized with a combination of the mean loss function, the VGG loss function and the least-absolute-deviation loss function, whose specific forms are as follows;
2) The specific form of the mean loss function is shown in the formula (1):
L_M = ||ω(H) − ω(R(J))||_1 (1)
where ω(·) denotes the average over a window of b×b, b is a positive integer, R(·) denotes the output of the thin cloud removal network, H denotes the cloud-free remote sensing image, J denotes the cloudy remote sensing image, and R(J) denotes the cloud-removed remote sensing image;
3) The specific form of the VGG loss function is shown in formula (2):
L_V = (1/(w·h·c)) Σ_{i,j,k} (φ(H)_{i,j,k} − φ(R(J))_{i,j,k})^2 (2)
where φ(·) denotes the output of layer 2-2 of the VGG16 network, w, h and c denote the width, height and number of the feature maps, and i, j and k index the width, height and number of the feature maps;
4) The specific form of the minimum absolute value deviation loss function is shown in the formula (3):
L_L = ||H − R(J)||_1 (3)
5) Training the thin cloud removal network using a combination of the loss functions (1), (2) and (3), as shown in equation (4):
L_Total = η·L_M + α·L_V + μ·L_L (4)
where η, α and μ are the weights of L_M, L_V and L_L respectively;
6) After training, the method can be used to remove thin cloud from a single remote sensing image.
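Equations (3) and (4) can be sketched directly. The weight values η, α and μ are not disclosed in the patent, so any values passed below are placeholders:

```python
import numpy as np

def l1_loss(H, RJ):
    """L_L = || H - R(J) ||_1, the least-absolute-deviation term of
    equation (3), between the cloud-free image H and the network
    output R(J)."""
    return np.abs(H - RJ).sum()

def total_loss(l_m, l_v, l_l, eta, alpha, mu):
    """L_Total = eta * L_M + alpha * L_V + mu * L_L, equation (4);
    eta, alpha and mu are the (undisclosed) per-term weights."""
    return eta * l_m + alpha * l_v + mu * l_l
```

During training, L_M, L_V and L_L would be computed per batch and combined this way before back-propagation.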
Example 2
The scheme of embodiment 1 is described in further detail below with reference to the drawings and calculation formulas:
a method for removing thin cloud of a remote sensing image based on feature adaptive correction, the method comprising:
201: a thin cloud removing network is designed by adopting a deep convolutional neural network, so that single image thin cloud removing is realized, the thin cloud removing network inputs a cloud image, outputs a cloud image, a training data set is constructed by utilizing a cloud remote sensing image and a cloud-free remote sensing image, and the method is used for network learning of a mapping relation between the cloud image and the cloud-free image;
202: The thin cloud removal network consists of a feature extraction sub-network F1 and a feature restoration sub-network F2. F1 extracts features related to thin cloud, and F2 reconstructs the cloud-free remote sensing image. Both F1 and F2 are built from multi-layer feature interaction units MFU, and a feature adaptive calibration unit FCU exchanges information between the sub-networks;
203: The multi-layer feature interaction unit MFU extracts thin cloud features under different receptive fields and designs a plurality of branches to integrate and interact image features at different levels;
204: The feature adaptive calibration unit FCU adaptively assigns weights to image features according to their importance for thin cloud removal, completing feature recalibration, so that the feature extraction sub-network F1 delivers the features valuable for thin cloud removal to the feature restoration sub-network F2 to complete reconstruction of the cloud-free image;
205: The thin cloud removal network is trained on the training set and optimized with a combination of the mean loss function, the VGG loss function and the minimum absolute value deviation loss function; after training, the method can remove thin cloud from a single remote sensing image.
The specific steps implemented in step 201 are as follows:
1) An end-to-end remote sensing image thin cloud removal method is designed using a deep convolutional neural network; the algorithm does not depend on a physical model, as shown in figure 1. The thin cloud removal network performs single-image thin cloud removal: its input is a cloudy remote sensing image and its output is a cloud-free remote sensing image;
2) Collect 20 cloud-free remote sensing images and apply a cloud synthesis operation to them to obtain 20 corresponding cloudy remote sensing images. Crop the cloudy and cloud-free images at the same positions into 256×256 image blocks, remove blocks with low information content, and select from the remainder 1000 cloudy blocks and the 1000 corresponding cloud-free blocks to form the training data set, expressed as {J_i, H_i}, i ∈ {1, 2, …, 1000}, used for the network to learn the mapping relationship between cloudy and cloud-free images.
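The paired patch construction above can be sketched as follows. The patent does not state its criterion for "low information content"; the standard-deviation threshold `min_std` below is an assumption of this sketch:

```python
import numpy as np

def make_training_pairs(cloudy, clear, patch=256, min_std=5.0):
    """Cut aligned patch pairs {J_i, H_i} from a cloudy/cloud-free image pair.

    cloudy, clear: H x W x C arrays of the same size, pixel-aligned.
    Patches whose cloud-free half is nearly uniform are treated as
    low-information and discarded (min_std is an assumed threshold).
    """
    pairs = []
    h, w = clear.shape[:2]
    for y in range(0, h - patch + 1, patch):        # non-overlapping grid
        for x in range(0, w - patch + 1, patch):
            j = cloudy[y:y + patch, x:x + patch]
            hh = clear[y:y + patch, x:x + patch]
            if hh.std() >= min_std:                 # drop near-uniform blocks
                pairs.append((j, hh))
    return pairs
```

In practice one would run this over all 20 image pairs and then sample 1000 of the surviving pairs for the training set.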
The specific steps of step 202 are as follows:
1) The thin cloud removal network consists of a feature extraction sub-network F1 (Feature Extraction Sub-network, FES) and a feature restoration sub-network F2 (Feature Restoration Sub-network, FRS). The two sub-networks F1 and F2 perform the extraction and reconstruction of image features, respectively; the specific structure is shown in figure 2;
2) The feature extraction sub-network F1 extracts features related to thin cloud, reducing the feature map size and increasing the number of features through strided convolution. Because thin cloud mainly affects the low-frequency components of an image, the network extracts more low-frequency features for subsequent image restoration, as shown in fig. 2. F1 consists of 3 convolution layers with {64, 128, 256} convolution kernels respectively; all kernels are 4×4 with stride 2, and the activation function is the LReLU function with slope β = 0.2;
3) The feature restoration sub-network F2 reconstructs the cloud-free remote sensing image, enlarging the feature map size through transposed convolution and integrating the image features extracted by F1 to restore the cloud-free image, as shown in fig. 2. F2 consists of 3 convolution layers with {256, 128, 64} convolution kernels respectively; all kernels are 4×4 with a fractional stride of 0.5 (i.e., transposed convolution with stride 2), and the activation function is the ReLU function;
4) The feature extraction sub-network F1 and the feature restoration sub-network F2 are both built from multi-layer feature interaction units, and a feature adaptive calibration unit exchanges information between the sub-networks.
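A minimal PyTorch sketch of the F1/F2 backbone described above, under stated assumptions: the MFU and FCU units are omitted for brevity, "stride 0.5" is realized as a stride-2 transposed convolution, and the final 3-channel output convolution is an assumption (the patent does not specify how the 64-channel output maps back to an image):

```python
import torch
import torch.nn as nn

class FeatureExtraction(nn.Module):
    # F1: three 4x4 stride-2 convolutions with {64, 128, 256} kernels, LReLU(0.2).
    def __init__(self, in_ch=3):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (64, 128, 256):
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
            ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class FeatureRestoration(nn.Module):
    # F2: three 4x4 transposed convolutions (the patent's "stride 0.5")
    # with {256, 128, 64} kernels and ReLU activations.
    def __init__(self, out_ch=3):
        super().__init__()
        layers, ch = [], 256
        for oc in (256, 128, 64):
            layers += [nn.ConvTranspose2d(ch, oc, 4, stride=2, padding=1),
                       nn.ReLU()]
            ch = oc
        layers += [nn.Conv2d(64, out_ch, 3, padding=1)]  # assumed RGB output head
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)
```

With 4×4 kernels, stride 2 and padding 1, each F1 layer exactly halves the spatial size and each F2 layer exactly doubles it, so F2(F1(x)) matches the input resolution.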
The specific steps of step 203 are:
1) The multi-layer feature interaction unit (Multi-layer Feature interaction Unit, MFU) extracts thin cloud features at multiple scales in the feature extraction sub-network F1, and integrates image features to complete reconstruction of the cloud-free remote sensing image in the feature restoration sub-network F2; the specific structure is shown in figure 3;
2) Image features under multiple receptive fields are extracted with 3 dilated convolutions of different dilation rates {1, 3, 5}; the kernel size of the dilated convolutions is 4×4 and the activation function is the ReLU function. The receptive field is enlarged without changing the number of parameters, yielding features at larger scales;
3) The multi-layer feature interaction unit MFU designs 2 branches to integrate and interact image features at different levels, so that the network fuses image features at multiple scales.
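A sketch of the MFU's parallel dilated convolutions, under stated assumptions: a 4×4 kernel with dilation d needs 3d total padding to preserve spatial size, which cannot be split symmetrically, so the asymmetric split below is an implementation choice; the per-branch channel width, the 1×1 fusion convolution, and the residual sum are also assumptions, since the patent does not detail how the branches are fused:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedBranch(nn.Module):
    # One branch: 4x4 dilated convolution + ReLU, spatial size preserved.
    def __init__(self, ch, dilation):
        super().__init__()
        self.d = dilation
        self.conv = nn.Conv2d(ch, ch, 4, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):
        p = 3 * self.d                     # total padding needed for a 4x4 kernel
        x = F.pad(x, (p // 2, p - p // 2, p // 2, p - p // 2))
        return self.act(self.conv(x))

class MFU(nn.Module):
    # Multi-layer Feature interaction Unit: parallel dilated convolutions
    # (rates 1, 3, 5 as in the patent) fused back to the input width.
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(DilatedBranch(ch, d) for d in (1, 3, 5))
        self.fuse = nn.Conv2d(3 * ch, ch, 1)   # assumed 1x1 fusion

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(y)                # assumed residual connection
```

Each branch sees a different receptive field (effective kernel extents 4, 10 and 16) while the parameter count per branch stays identical, which is the property the patent attributes to dilated convolution.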
The specific steps of step 204 are as follows:
1) The feature adaptive calibration unit (Feature adaptive Calibration Unit, FCU) recognizes that each channel of the image features contributes differently to thin cloud removal. It therefore optimizes the features along the channel dimension, adaptively learning a weight coefficient for each feature channel to recalibrate the channel features and improve the expressive power of the image features as much as possible;
2) The FCU integrates the features of each channel into a single value by global average pooling, then learns the weight coefficient of each channel with two fully connected layers whose numbers of output nodes are C/σ and C respectively, where C is the number of channels of the input feature map; the activation functions are the ReLU function and the Sigmoid function, respectively. Finally, the obtained weight coefficients are multiplied with the corresponding channels to complete the adaptive calibration of the image features. In the invention σ = 16; the specific structure is shown in figure 4;
3) The features extracted by the feature extraction sub-network F1 that are beneficial for thin cloud removal are adaptively recalibrated by the FCU and then passed to the feature restoration sub-network F2 for integration and reconstruction of the cloud-free image. This helps the network select the features most valuable for thin cloud removal, continuously adjusts the feature maps, and optimizes the network's thin cloud removal performance.
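The FCU as described (global average pooling, a two-layer bottleneck of C/σ and C nodes with ReLU and Sigmoid, then channel-wise reweighting) can be sketched directly in PyTorch; only the module name is ours:

```python
import torch
import torch.nn as nn

class FCU(nn.Module):
    # Feature adaptive Calibration Unit: channel recalibration with
    # reduction ratio sigma = 16, as given in the patent.
    def __init__(self, ch, sigma=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # one value per channel
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // sigma), nn.ReLU(),    # C/sigma output nodes
            nn.Linear(ch // sigma, ch), nn.Sigmoid()  # C output nodes in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight each channel
```

The Sigmoid keeps every learned weight in (0, 1), so the unit can only attenuate or preserve a channel, which matches its role of suppressing features unhelpful for thin cloud removal.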
The specific steps of step 205 are:
1) The thin cloud removal network is trained with the training data set of step 201 and optimized with a combination of the mean loss function, the VGG loss function and the minimum absolute value deviation loss function, in the specific forms given below;
2) The specific form of the mean loss function is shown in formula (1), where ω(·) denotes the mean within a window of size 11×11; the specific form of the VGG loss function is shown in formula (2); the specific form of the minimum absolute value deviation loss function is shown in formula (3). The thin cloud removal network is trained with the combination of loss functions (1), (2) and (3) shown in formula (4), where η, α and μ are the weights of L_M, L_V and L_L respectively, set to η = 5.0, α = 1.0, μ = 100.0;
3) After training, the network can be used to remove thin cloud from a single remote sensing image.
Example 3
The schemes of embodiments 1 and 2 are validated below with experimental data:
two optical thin cloud remote sensing images are selected, thin cloud removal is carried out by using the method, and the results are shown in fig. 5 and 6. According to the result, the method can realize the thin cloud removal of the remote sensing image with natural and vivid effect.
Those skilled in the art will appreciate that the drawings are schematic representations of a preferred embodiment only, and that the embodiment numbers above are for description only and do not indicate relative merit. The foregoing description of the preferred embodiments is not intended to limit the invention; any modifications, equivalents and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.

Claims (4)

1. A remote sensing image thin cloud removing method based on characteristic self-adaptive correction is characterized by comprising the following steps of:
a thin cloud removal network is designed using a deep convolutional neural network to achieve single-image thin cloud removal; the network takes a cloudy image as input and outputs a cloud-free image; a training data set is constructed from cloudy and cloud-free remote sensing images and used for the network to learn the mapping relationship between cloudy and cloud-free images; the thin cloud removal network is trained with the training data set and the network is optimized with a combination of a mean loss function, a VGG loss function and a minimum absolute value deviation loss function; after training, the thin cloud removal network removes thin cloud from a single remote sensing image;
the thin cloud removal network is composed of a feature extraction sub-network F 1 And feature recovery subnetwork F 2 Composition, wherein the features extract the subnetwork F 1 Responsible for extracting features related to thin clouds, feature recovery subnetwork F 2 The cloud-free remote sensing image reconstruction is completed;
the feature extraction sub-network F1 and feature restoration sub-network F2 are as follows:
the feature extraction sub-network F1 consists of r convolution layers whose numbers of convolution kernels are {2^0·k, 2^1·k, …, 2^(r-1)·k} respectively; all kernels are e×e with stride f, where r, k, e, f are positive integers; the activation function is the LReLU function with slope β, β ∈ (0, 1);
the feature restoration sub-network F2 completes reconstruction of the cloud-free remote sensing image, enlarging the feature map size through transposed convolution and integrating the image features extracted by F1 to restore the cloud-free image; F2 consists of r convolution layers whose numbers of convolution kernels are {2^(r-1)·k, …, 2^1·k, 2^0·k} respectively; all kernels are e×e with stride g, where r, k, e, g are positive integers, and the activation function is the ReLU function;
the feature extraction subnetwork F 1 And feature recovery subnetwork F 2 Based on multi-layer characteristic interaction unit MFU, characteristic self-adaptive calibration unit FCU is adopted to extract sub-network F in characteristics 1 And feature recovery subnetwork F 2 Information exchange is carried out between the two;
the multi-layer feature interaction unit MFU extracts thin cloud features under different receptive fields and designs a plurality of branches to integrate and interact image features at different levels;
the feature adaptive calibration unit FCU adaptively assigns weights according to the importance of image features for thin cloud removal to complete feature recalibration, so that the feature extraction sub-network F1 delivers features valuable for thin cloud removal to the feature restoration sub-network F2 to complete reconstruction of the cloud-free image.
2. The method for removing the thin cloud of the remote sensing image based on the characteristic adaptive correction according to claim 1, wherein the multi-layer characteristic interaction unit specifically comprises:
the multi-layer feature interaction unit extracts thin cloud features at multiple scales in the feature extraction sub-network F1, and integrates image features to complete reconstruction of the cloud-free remote sensing image in the feature restoration sub-network F2;
the multi-layer feature interaction unit MFU extracts image features under multiple receptive fields with m dilated convolutions of different dilation rates {1, 3, …, 2m-1}, m ≥ 1; the kernel size of the dilated convolutions is d×d and the activation function is the ReLU function; the receptive field is enlarged without changing the number of parameters, yielding features at larger scales; the MFU designs n branches to integrate and interact image features at different levels.
3. The method for removing the thin cloud of the remote sensing image based on the characteristic adaptive correction according to claim 2, wherein the characteristic adaptive calibration unit specifically comprises:
the feature adaptive calibration unit FCU integrates the features of each channel into a single value by global average pooling, then learns the weight coefficient of each channel with two fully connected layers whose numbers of output nodes are C/σ and C respectively, where C is the number of channels of the input feature map and σ is a positive integer; the activation functions are the ReLU function and the Sigmoid function, respectively; finally, the obtained weight coefficients are multiplied with the corresponding channels to complete the adaptive calibration of the image features.
4. The method for removing the thin cloud of the remote sensing image based on the characteristic self-adaptive correction according to claim 1, wherein the mean loss function is specifically:
L_M = ||ω(H) - ω(R(J))||_1
where ω(·) denotes the mean within a window of size b×b, b is a positive integer, R(·) denotes the output of the thin cloud removal network, H denotes the cloud-free remote sensing image, J denotes the cloudy remote sensing image, and R(J) denotes the cloud-removed remote sensing image.
CN202110615563.1A 2021-06-02 2021-06-02 Remote sensing image thin cloud removing method based on characteristic self-adaptive correction Active CN113516600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110615563.1A CN113516600B (en) 2021-06-02 2021-06-02 Remote sensing image thin cloud removing method based on characteristic self-adaptive correction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110615563.1A CN113516600B (en) 2021-06-02 2021-06-02 Remote sensing image thin cloud removing method based on characteristic self-adaptive correction

Publications (2)

Publication Number Publication Date
CN113516600A CN113516600A (en) 2021-10-19
CN113516600B true CN113516600B (en) 2024-03-19

Family

ID=78065455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110615563.1A Active CN113516600B (en) 2021-06-02 2021-06-02 Remote sensing image thin cloud removing method based on characteristic self-adaptive correction

Country Status (1)

Country Link
CN (1) CN113516600B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022222A (en) * 2017-12-15 2018-05-11 西北工业大学 A kind of thin cloud in remote sensing image minimizing technology based on convolution-deconvolution network
CN108921799A (en) * 2018-06-22 2018-11-30 西北工业大学 Thin cloud in remote sensing image minimizing technology based on multiple dimensioned Cooperative Study convolutional neural networks
CN109934200A (en) * 2019-03-22 2019-06-25 南京信息工程大学 A kind of RGB color remote sensing images cloud detection method of optic and system based on improvement M-Net
WO2020244261A1 (en) * 2019-06-05 2020-12-10 中国科学院长春光学精密机械与物理研究所 Scene recognition system for high-resolution remote sensing image, and model generation method

Also Published As

Publication number Publication date
CN113516600A (en) 2021-10-19

Similar Documents

Publication Publication Date Title
CN108921799B (en) Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN112288647B (en) Remote sensing image cloud and shadow restoration method based on gating convolution
CN107145846B (en) A kind of insulator recognition methods based on deep learning
CN111582483B (en) Unsupervised learning optical flow estimation method based on space and channel combined attention mechanism
CN113344806A (en) Image defogging method and system based on global feature fusion attention network
CN109523482B (en) Deep neural network-based restoration method for degraded image containing texture
CN110517203B (en) Defogging method based on reference image reconstruction
CN112150521A (en) PSmNet optimization-based image stereo matching method
CN110223234A (en) Depth residual error network image super resolution ratio reconstruction method based on cascade shrinkage expansion
CN112184577A (en) Single image defogging method based on multi-scale self-attention generation countermeasure network
CN114119444A (en) Multi-source remote sensing image fusion method based on deep neural network
CN112419171A (en) Image restoration method for multi-residual-block conditional generation countermeasure network
CN109410144A (en) A kind of end-to-end image defogging processing method based on deep learning
CN113379618A (en) Optical remote sensing image cloud removing method based on residual dense connection and feature fusion
CN112529788A (en) Multispectral remote sensing image thin cloud removing method based on thin cloud thickness map estimation
CN115116054A (en) Insect pest identification method based on multi-scale lightweight network
CN113724149A (en) Weak supervision visible light remote sensing image thin cloud removing method
CN112785539A (en) Multi-focus image fusion method based on image adaptive decomposition and parameter adaptive
CN114387195A (en) Infrared image and visible light image fusion method based on non-global pre-enhancement
Yang et al. UGC-YOLO: underwater environment object detection based on YOLO with a global context block
CN113516600B (en) Remote sensing image thin cloud removing method based on characteristic self-adaptive correction
CN114764752B (en) Night image defogging algorithm based on deep learning
CN112767277B (en) Depth feature sequencing deblurring method based on reference image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant