CN109064430B - Cloud removing method and system for aerial region cloud-containing image - Google Patents

Cloud removing method and system for aerial region cloud-containing image Download PDF

Info

Publication number
CN109064430B
CN109064430B (application CN201810893813.6A)
Authority
CN
China
Prior art keywords
cloud
image
layer
convolution
generation network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810893813.6A
Other languages
Chinese (zh)
Other versions
CN109064430A (en)
Inventor
李从利
薛松
沈延安
张思雨
韦哲
武昕伟
马建华
魏沛杰
刘永峰
李兴山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PLA Army Academy of Artillery and Air Defense
Original Assignee
PLA Army Academy of Artillery and Air Defense
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PLA Army Academy of Artillery and Air Defense filed Critical PLA Army Academy of Artillery and Air Defense
Priority to CN201810893813.6A priority Critical patent/CN109064430B/en
Publication of CN109064430A publication Critical patent/CN109064430A/en
Application granted granted Critical
Publication of CN109064430B publication Critical patent/CN109064430B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06T5/77: Retouching; Inpainting; Scratch removal
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cloud removing method and system for cloud-containing images of an aerial photography area. The method comprises: inputting a plurality of cloud-containing images of the aerial photography area into the generator of an optimized deep convolution countermeasure generation network to obtain a plurality of predicted images; inputting the cloud-free image of the aerial photography area and the predicted images into the discriminator to obtain a plurality of identification results; calculating, for each identification result, the difference value between the true probability and the false probability of the corresponding predicted image; when N consecutive difference values are smaller than a set value, determining any one of the N predicted images corresponding to the N identification results as the semi-finished cloud-removed image of the aerial photography area; and fusing the semi-finished cloud-removed image by a Poisson image editing method to obtain the cloud-removed image of the aerial photography area. The method uses the deep convolution countermeasure generation network to generate the missing cloud-covered region and uses Poisson image editing to handle the difference between the predicted image and the boundary of the surrounding scene, thereby improving the authenticity of the generated cloud-removed aerial photography area image.

Description

Cloud removing method and system for aerial region cloud-containing image
Technical Field
The invention relates to the technical field of image processing, and in particular to a cloud removing method and system for cloud-containing images of an aerial photography area.
Background
The weather phenomena of the earth's atmosphere are very rich, and cloud and fog are among the most common. Affected by cloud layers, many remote sensing images captured by unmanned aerial vehicles contain blind areas occluded by cloud, which greatly degrades the quality of the acquired remote sensing information and hinders the analysis and interpretation of the images. How to remove cloud interference from a remote sensing image is often the first problem faced by users. Over the last decades, experts and scholars have proposed a large number of cloud removing methods. Among them, cloud removal based on a single image is widely applied to color image enhancement and recovers ground-feature information using homomorphic filtering. Although its computational cost is low, the method is sensitive to image noise: when the noise is strong, part of the useful information in the processed remote sensing image may be lost during filtering, and the authenticity of the resulting cloud-removed image is poor.
Disclosure of Invention
The invention aims to provide a cloud removing method and system for cloud-containing images of an aerial photography area, so as to solve the problem that the cloud-removed images obtained with prior-art cloud removing methods have poor authenticity.
A cloud removing method for aerial photography area cloud-containing images comprises the following steps:
acquiring a cloud-free image, a plurality of central mask images and a plurality of simulated cloud mask images in an aerial photographing area; the central mask image is an aerial photography area cloud-free image with a square hollow area; the simulated cloud mask image is an aerial photography area cloud-free image with a hollow shape of an irregular graph;
taking the aerial photography area cloud-free image, the central mask image and the simulated cloud mask image as the input of a depth convolution countermeasure generation network to obtain an optimized depth convolution countermeasure generation network; the deep convolution countermeasure generation network comprises a generator and a discriminator; the generator comprises a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a full-connection layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer and a fifth deconvolution layer in sequence from an input end to an output end; the discriminator is sequentially provided with a sixth convolution layer, a seventh convolution layer, an eighth convolution layer, a ninth convolution layer and a tenth convolution layer from the input end to the output end;
acquiring cloud-containing pictures of a plurality of aerial photographing areas;
inputting the cloud pictures of the plurality of aerial photographing regions into a generator of the optimized deep convolution countermeasure generation network to obtain a plurality of predicted pictures;
inputting the cloud-free image of the aerial photographing region into the discriminator of the optimized depth convolution countermeasure generation network, and inputting a plurality of predicted images into the discriminator of the optimized depth convolution countermeasure generation network one by one to obtain a plurality of discrimination results; the identification result is the true probability of the predicted image;
calculating the difference value between the true probability of the predicted image corresponding to each identification result and the false probability of the predicted image; the false probability of the predicted image is 1 minus its true probability;
when the N continuous difference values are smaller than a set value, determining any one of the N predicted images corresponding to the N difference values as a semi-finished image of the aerial photographing area after cloud removal; wherein N is a positive integer;
and fusing the semi-finished aerial photo area image subjected to cloud removal by adopting a Poisson image editing method to obtain the aerial photo area image subjected to cloud removal.
Optionally, convolution kernels of the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the full-link layer, the first deconvolution layer, the second deconvolution layer, the third deconvolution layer, the fourth deconvolution layer, the fifth deconvolution layer, the sixth convolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer, and the tenth convolution layer are 4 × 4, and a step length is 2.
Optionally, a batch normalization method is adopted to perform output normalization on the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the full-link layer, the first deconvolution layer, the second deconvolution layer, the third deconvolution layer, the fourth deconvolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer, and the tenth convolution layer.
Optionally, the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the full connection layer, the first deconvolution layer, the second deconvolution layer, the third deconvolution layer and the fourth deconvolution layer adopt a linear rectification function; the fifth deconvolution layer adopts a hyperbolic tangent function; the sixth convolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer and the tenth convolution layer adopt a leaky linear rectification function.
Optionally, the joint loss of the deep convolution countermeasure generation network is L:

L = λ_rec · L_rec + λ_adv · L_adv + λ_tv · L_tv

where L_rec denotes the reconstruction loss, λ_rec the coefficient of the reconstruction loss, L_adv denotes the countermeasure loss, λ_adv the coefficient of the countermeasure loss, L_tv denotes the total variation loss, and λ_tv the coefficient of the total variation loss.
Optionally, the reconstruction loss L_rec is specifically:

L_rec = ||M̂ ⊙ (x - F((1 - M̂) ⊙ x))||₂²

where M̂ denotes the binary image of the cloud-region mask, ⊙ is the element-wise product symbol, x denotes the aerial photography area cloud-free image, and F denotes the deep convolution countermeasure generation network.
Optionally, the countermeasure loss L_adv is specifically:

L_adv = max_D E_{x∈χ} [ log D(x) + log(1 - D(F((1 - M̂) ⊙ x))) ]

where χ denotes the aerial photography area cloud-free image library, used to store the aerial photography area cloud-free images; D denotes the discriminator; M̂ denotes the binary image of the cloud-region mask; ⊙ is the element-wise product symbol; x denotes the aerial photography area cloud-free image; and F denotes the deep convolution countermeasure generation network.
Optionally, the total variation loss L_tv is specifically:

L_tv = Σ_{i,j} ( ||x_{i,j+1} - x_{i,j}||₂² + ||x_{i+1,j} - x_{i,j}||₂² )

where x_{i,j+1} denotes the first pixel point on the two sides of the hollow boundary of the mask image, x_{i,j} denotes the second pixel point on the two sides of the hollow boundary of the mask image, x_{i+1,j} denotes the third pixel point on the two sides of the hollow boundary of the mask image, and i and j are natural numbers.
A cloud removal system for aerial photography of a cloud-containing image, comprising:
the first acquisition module is used for acquiring a cloud-free image, a plurality of central mask images and a plurality of simulated cloud mask images in an aerial photographing area; the central mask image is an aerial photography area cloud-free image with a square hollow shape; the simulated cloud mask image is an aerial photography area cloud-free image with a hollow shape of an irregular graph;
the optimization module is used for taking the aerial photography area cloud-free image, the central mask image and the simulated cloud mask image as the input of the depth convolution countermeasure generation network to obtain the optimized depth convolution countermeasure generation network; the deep convolution countermeasure generation network comprises a generator and a discriminator; the generator comprises a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a full-connection layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer and a fifth deconvolution layer in sequence from an input end to an output end; the discriminator is sequentially provided with a sixth convolution layer, a seventh convolution layer, an eighth convolution layer, a ninth convolution layer and a tenth convolution layer from the input end to the output end;
the second acquisition module is used for acquiring cloud pictures of a plurality of aerial photographing areas;
the predicted image generation module is used for inputting the cloud images in the plurality of aerial photographing regions into the generator of the optimized depth convolution countermeasure generation network to obtain a plurality of predicted images;
the identification result calculation module is used for inputting the cloud-free image of the aerial photographing area into the identifier of the optimized depth convolution countermeasure generation network and inputting a plurality of predicted images into the identifier of the optimized depth convolution countermeasure generation network one by one to obtain a plurality of identification results; the identification result is the true probability of the predicted image;
the difference value calculating module is used for calculating the difference value between the true probability of the predicted image corresponding to each identification result and the false probability of the predicted image; the false probability of the predicted image is 1 minus its true probability;
the cloud-removed aerial photography area cloud-free semi-finished product determining module is used for determining any one of N predicted images corresponding to the N difference values as the cloud-removed aerial photography area semi-finished product image when the N continuous difference values are smaller than a set value; wherein N is a positive integer;
and the cloud-removed aerial photography area cloud-free image generation module is used for fusing the cloud-removed aerial photography area semi-finished image by adopting a Poisson image editing method to obtain the cloud-removed aerial photography area image.
Optionally, the joint loss of the deep convolution countermeasure generation network is L:

L = λ_rec · L_rec + λ_adv · L_adv + λ_tv · L_tv

where L_rec denotes the reconstruction loss, λ_rec the coefficient of the reconstruction loss, L_adv denotes the countermeasure loss, λ_adv the coefficient of the countermeasure loss, L_tv denotes the total variation loss, and λ_tv the coefficient of the total variation loss.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the cloud removing method and system for the cloud-containing image in the aerial photography area adopt the generator and the discriminator to build a deep convolution countermeasure generation network; the generator comprises a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a full connection layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer and a fifth deconvolution layer in sequence from an input end to an output end; the discriminator is sequentially provided with a sixth convolution layer, a seventh convolution layer, an eighth convolution layer, a ninth convolution layer and a tenth convolution layer from the input end to the output end; inputting the cloud pictures of the plurality of aerial photographing areas into a generator of the optimized deep convolution countermeasure generation network to obtain a plurality of predicted pictures; inputting the cloud-free image of the aerial photographing area into an optimized discriminator of the depth convolution countermeasure generation network, and inputting a plurality of predicted images into the discriminator of the optimized depth convolution countermeasure generation network one by one to obtain a plurality of discrimination results; calculating the difference value between the true probability of the predicted image corresponding to each identification result and the false probability of the predicted image; when the N continuous difference values are smaller than a set value, determining any one of the N predicted images corresponding to the N identification results as a semi-finished image of the aerial photographing area after cloud removal; and fusing the semi-finished image of the aerial photographing region after cloud removal by adopting a Poisson image editing method to obtain the aerial photographing region after cloud removal. According to the method, the constructed depth convolution countermeasure generation network is used for generating the missing cloud area, and the Poisson image is used for editing and processing the difference between the predicted image and the boundary around the scene, so that the authenticity of the generated cloud-removed aerial photography area image is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flowchart of an embodiment of a cloud removing method for cloud-containing images in an aerial region according to the present invention;
FIG. 2 is a schematic diagram of a center mask provided by the present invention;
FIG. 3 is a schematic diagram of a simulated cloud mask provided by the present invention;
fig. 4 is a schematic structural diagram of a cloud removal system for aerial photography areas including cloud images according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a cloud removing method and system for cloud-containing images of an aerial photography area, so as to solve the problem that the cloud-removed images obtained with prior-art cloud removing methods have poor authenticity.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of an embodiment of a cloud removing method for cloud-containing images in an aerial region according to the present invention.
As shown in fig. 1, the method includes:
step S101: and acquiring a cloud-free image, a plurality of central mask images and a plurality of simulated cloud mask images in the aerial photographing area. FIG. 2 is a schematic diagram of a center mask pattern provided by the present invention. As shown in fig. 2, the central mask image is an aerial photography area cloud-free image with a square hollow area. Fig. 3 is a schematic diagram of a simulated cloud mask provided by the present invention. As shown in fig. 3, the simulated cloud mask image is an aerial photography area cloud-free image with a hollow shape of an irregular pattern. Combining a cloud area on a cloud mask (namely an image with an irregular closed curve) with an aerial photographing area cloud-free image by using Photoshop software to obtain a simulated cloud mask image;
step S102: taking the aerial photography area cloud-free image, the central mask image and the simulated cloud mask image as the input of the depth convolution countermeasure generation network to obtain the optimized depth convolution countermeasure generation network; the deep convolution countermeasure generation network comprises a generator and a discriminator; the output end of the generator is connected with the input end of the discriminator; the input end of the deep convolution countermeasure generation network is the input end of the generator; the generator comprises a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a full connection layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer and a fifth deconvolution layer in sequence from an input end to an output end; the discriminator is provided with a sixth convolution layer, a seventh convolution layer, an eighth convolution layer, a ninth convolution layer and a tenth convolution layer in sequence from the input end to the output end.
Step S103: acquiring cloud pictures of a plurality of aerial photographing areas.
Step S104: and inputting the cloud pictures of the plurality of aerial photographing regions into a generator of the optimized deep convolution countermeasure generation network to obtain a plurality of predicted pictures.
Step S105: inputting the cloud-free image of the aerial photographing area into an optimized discriminator of the depth convolution countermeasure generation network, and inputting a plurality of predicted images into the discriminator of the optimized depth convolution countermeasure generation network one by one to obtain a plurality of discrimination results; the discrimination result is the true probability of the predicted image.
Step S106: calculating the difference value between the true probability of the predicted image corresponding to each identification result and the false probability of the predicted image; the false probability of the predicted image is 1 minus its true probability.
Step S107: when the N continuous difference values are smaller than a set value, determining any one of N predicted images corresponding to the N difference values as a semi-finished image of the aerial photographing area after cloud removal; wherein N is a positive integer. And when the continuous N difference values are smaller than the set value, the network model parameter is considered to reach the optimal state at the moment.
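Since each identification result is the probability p that the predicted image is true, and the false probability is 1 - p, the difference computed in step S106 is p - (1 - p) = 2p - 1; step S107 stops once N consecutive differences stay below the set value. A minimal sketch of this stopping rule (the names are illustrative, not from the patent):

```python
def converged(true_probs, n, threshold):
    """True once the last n differences (2p - 1) all fall below threshold."""
    diffs = [2.0 * p - 1.0 for p in true_probs]
    return len(diffs) >= n and all(d < threshold for d in diffs[-n:])
```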
Step S108: and fusing the semi-finished image of the aerial photographing region after cloud removal by adopting a Poisson image editing method to obtain the aerial photographing region after cloud removal.
In the cloud removing method for the aerial photography area cloud-containing image in the embodiment, the built deep convolution countermeasure generation network is used for generating the missing cloud area, and the Poisson image is used for editing and processing the difference between the predicted image and the boundary around the scene, so that the authenticity of the generated aerial photography area image after cloud removal is improved.
In practical application, step S102 specifically includes:
taking the cloud-free image and the central mask image of the aerial photographing area as the input of a deep convolution countermeasure generation network, training the deep convolution countermeasure generation network, and obtaining a first deep convolution countermeasure generation network; the aerial photography area cloud-free image is a square image, and the ratio of the area of the hollow area of the central mask image to the area of the aerial photography area cloud-free image ranges from 0 to 0.25.
And taking the aerial photography area cloud-free image and the simulated cloud mask image as the input of the first deep convolution countermeasure generation network, training the first deep convolution countermeasure generation network, and obtaining the optimized deep convolution countermeasure generation network.
In the optimization of the deep convolution countermeasure generation network, the deep convolution countermeasure generation network is trained by using the central mask image as an initial model, and in this case, the deep convolution countermeasure generation network tends to learn the low-level image features near the boundary. After the training of the central mask image is completed, the simulated cloud mask image is used for training the deep convolution countermeasure generation network, the generalization capability of the deep convolution countermeasure generation network feature learning is improved, and therefore the capability of the optimized deep convolution countermeasure generation network for generating images with high authenticity is improved.
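A minimal sketch of this two-stage training schedule follows; `train_step` is an assumed helper, not specified in the patent, that performs one optimization step on one batch:

```python
def optimize_network(train_step, center_mask_batches, cloud_mask_batches):
    """Two-stage training schedule described above.

    train_step: assumed helper performing one optimization step of the
    deep convolution countermeasure generation network on one batch of
    (masked image, mask, cloud-free image).
    """
    # Stage 1: square center masks; the network tends to learn
    # low-level image features near the hole boundary.
    for batch in center_mask_batches:
        train_step(batch)
    # Stage 2: irregular simulated cloud masks; improves the
    # generalization of the learned features.
    for batch in cloud_mask_batches:
        train_step(batch)
```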
In practical application, the deep convolution countermeasure generation network removes the pooling layers from the network structure. The generator network is a process of down-sampling followed by up-sampling; when encoding is completed, a full connection layer with an output of 1000 neurons is added to map the learned features to the label sample space. The inputs of the discriminator are the predicted image and the aerial photography area cloud-free image; a convolutional neural network (CNN) is used for down-sampling, and the output of a full connection layer discriminates whether the predicted image is true or false. The convolution kernels of the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the full connection layer, the first deconvolution layer, the second deconvolution layer, the third deconvolution layer, the fourth deconvolution layer, the fifth deconvolution layer, the sixth convolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer and the tenth convolution layer of the deep convolution countermeasure generation network are 4 × 4, and the step length is 2.
The first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the full connection layer, the first deconvolution layer, the second deconvolution layer, the third deconvolution layer and the fourth deconvolution layer adopt a linear rectification function (ReLU); the fifth deconvolution layer adopts a hyperbolic tangent function (tanh); the sixth, seventh, eighth, ninth, and tenth convolutional layers employ leaky linear rectification functions (LeakyReLU).
Each of the layers except the output layer of the generator and the input layer of the discriminator is subjected to a batch normalization (batch normalization) process, that is, a batch normalization method is used to output-normalize the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the full-link layer, the first deconvolution layer, the second deconvolution layer, the third deconvolution layer, the fourth deconvolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer, and the tenth convolution layer.
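As a hedged sketch only, the generator and discriminator described above can be written in PyTorch as below. The patent does not state the input resolution or the channel widths, so the 128 × 128 × 3 input, the channel counts, and the mapping from the 1000-neuron full connection layer back to the decoder input are illustrative assumptions; the 4 × 4 kernels, stride 2, activation functions, and batch normalization placement follow the description:

```python
import torch
import torch.nn as nn

def conv(c_in, c_out, bn=True, act=nn.ReLU(True)):
    """4x4 stride-2 convolution block, optionally batch-normalized."""
    layers = [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1)]
    if bn:
        layers.append(nn.BatchNorm2d(c_out))
    layers.append(act)
    return layers

def deconv(c_in, c_out):
    """4x4 stride-2 transposed convolution block with BN and ReLU."""
    return [nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(True)]

class Generator(nn.Module):
    """Five stride-2 convolutions (down-sampling), a 1000-neuron full
    connection bottleneck, five transposed convolutions (up-sampling);
    tanh on the fifth deconvolution, no batch norm on the output layer."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # 128 -> 4
            *conv(3, 64), *conv(64, 128), *conv(128, 256),
            *conv(256, 512), *conv(512, 512))
        self.fc = nn.Sequential(                            # bottleneck (assumed shape)
            nn.Flatten(), nn.Linear(512 * 4 * 4, 1000),
            nn.BatchNorm1d(1000), nn.ReLU(True),
            nn.Linear(1000, 512 * 4 * 4), nn.Unflatten(1, (512, 4, 4)))
        self.decoder = nn.Sequential(                       # 4 -> 128
            *deconv(512, 512), *deconv(512, 256), *deconv(256, 128),
            *deconv(128, 64),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        return self.decoder(self.fc(self.encoder(x)))

class Discriminator(nn.Module):
    """Five stride-2 convolutions with Leaky ReLU; no batch norm on the
    input layer; a final full connection layer outputs the probability."""
    def __init__(self):
        super().__init__()
        lrelu = nn.LeakyReLU(0.2, True)
        self.net = nn.Sequential(
            *conv(3, 64, bn=False, act=lrelu), *conv(64, 128, act=lrelu),
            *conv(128, 256, act=lrelu), *conv(256, 512, act=lrelu),
            *conv(512, 512, act=lrelu),
            nn.Flatten(), nn.Linear(512 * 4 * 4, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)
```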
In practical application, step S104 specifically includes:
and inputting the cloud-containing images of the plurality of aerial photographing areas into a first convolution layer of a generator of the optimized depth convolution countermeasure generation network, and performing convolution operation on the cloud-containing images of the aerial photographing areas to obtain a first predicted image semi-finished product.
And inputting the first predicted image semi-finished product into a second convolution layer of the generator, and performing convolution operation on the first predicted image semi-finished product to obtain a second predicted image semi-finished product.
And inputting the second predicted image semi-finished product into a third convolution layer of the generator, and performing convolution operation on the second predicted image semi-finished product to obtain a third predicted image semi-finished product.
And inputting the third predicted image semi-finished product into a fourth convolution layer of the generator, and performing convolution operation on the third predicted image semi-finished product image to obtain a fourth predicted image semi-finished product.
And inputting the fourth predicted image semi-finished product into a fifth convolution layer of the generator, and performing convolution operation on the fourth predicted image semi-finished product to obtain a fifth predicted image semi-finished product.
And inputting the fifth semi-finished prediction image into a full-connection layer of the generator, and mapping the fifth semi-finished prediction image to a marked sample space to obtain a sixth semi-finished prediction image.
And inputting the sixth prediction image semi-finished product into the first deconvolution layer of the generator, and performing deconvolution operation on the sixth prediction image semi-finished product to obtain a seventh prediction image semi-finished product.
And inputting the seventh predicted image semi-finished product into a second deconvolution layer of the generator, and carrying out deconvolution operation on the seventh predicted image semi-finished product to obtain an eighth predicted image semi-finished product.
And inputting the eighth predicted image semi-finished product into a third deconvolution layer of the generator, and carrying out deconvolution operation on the eighth predicted image semi-finished product to obtain a ninth predicted image semi-finished product.
And inputting the ninth prediction image semi-finished product into a fourth deconvolution layer of the generator, and performing deconvolution operation on the ninth prediction image semi-finished product to obtain a tenth prediction image semi-finished product.
And inputting the tenth prediction image semi-finished product into a fifth deconvolution layer of the generator, and performing deconvolution operation on the tenth prediction image semi-finished product to obtain the prediction image.
In practical applications, step S105 specifically includes:
and inputting the aerial photographing region cloud-free image into a sixth convolution layer of the optimized depth convolution countermeasure generation network discriminator, inputting a plurality of predicted images into the sixth convolution layer one by one, and performing convolution operation on the aerial photographing region cloud-free image and the predicted images to obtain a first judgment result.
And inputting the first judgment result into a seventh convolution layer of the discriminator, and performing convolution operation on the first judgment result to obtain a second judgment result.
And inputting the second judgment result into an eighth convolution layer of the discriminator, and performing convolution operation on the second judgment result to obtain a third judgment result.
And inputting the third discrimination result into a ninth convolution layer of the discriminator, and performing convolution operation on the third discrimination result to obtain a fourth discrimination result.
And inputting the fourth discrimination result into a tenth convolution layer of the discriminator, and performing convolution operation on the fourth discrimination result to obtain the discrimination result.
This embodiment avoids the loss of image detail information in the feature maps by eliminating all pooling layers from the deep convolution countermeasure generation network. Normalizing the outputs of the convolution layers with batch normalization (BN) accelerates and stabilizes the training of the network; batch normalization is not applied at the output layer of the generator, so as to avoid sample oscillation and model instability. A leaky linear rectification function (Leaky Rectified Linear Unit, Leaky ReLU) is used in the discriminator to prevent the gradient sparseness problem. Using strided convolution in the discriminator and transposed convolution in the generator increases the stability of training the deep convolution countermeasure generation network.
In practical applications, the joint loss of the deep convolution countermeasure generation network is L:

L = λ_rec · L_rec + λ_adv · L_adv + λ_tv · L_tv

where L_rec denotes the reconstruction loss, λ_rec the coefficient of the reconstruction loss, L_adv denotes the countermeasure loss, λ_adv the coefficient of the countermeasure loss, L_tv denotes the total variation loss, and λ_tv the coefficient of the total variation loss.
In practical application, the distance between the predicted image generated by the generator and the aerial photography area cloud-free image (namely, the reconstruction loss L_rec) is used as an overall content constraint to capture the overall structure of the missing part and its consistency with the background. The reconstruction loss L_rec is specifically:

L_rec = ||M̂ ⊙ (x - F((1 - M̂) ⊙ x))||₂²

where M̂ denotes the binary image of the cloud-region mask, ⊙ is the element-wise product symbol, x denotes the aerial photography area cloud-free image, and F denotes the deep convolution countermeasure generation network.
In practical applications, L_rec can predict the approximate contours of objects in the cloud-covered region of the aerial photography area cloud-containing image, but it is difficult to recover details, because the reconstruction loss tends to predict the mean of the distribution. The countermeasure loss is therefore used to make the network produce as many image features as possible. The countermeasure loss L_adv is specifically:

L_adv = max_D E_{x∈χ} [ log D(x) + log(1 - D(F((1 - M̂) ⊙ x))) ]

where χ denotes the aerial photography area cloud-free image library, used to store the aerial photography area cloud-free images; D denotes the discriminator; M̂ denotes the binary image of the cloud-region mask; ⊙ is the element-wise product symbol; x denotes the aerial photography area cloud-free image; and F denotes the deep convolution countermeasure generation network. In training, the generator and the discriminator are each trained 5 times in each step so that the network generates as many image features as possible, and both the generator and the discriminator use adaptive moment estimation (Adam) with a learning rate of 0.0002 as the optimization function.
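A hedged sketch of the update schedule just described, reading "trained 5 times in each step" as five discriminator updates followed by five generator updates per batch; `generator_loss` and `discriminator_loss` are assumed helpers (the joint loss is detailed below), and `loader` is an assumed batch iterator:

```python
import torch

# generator, discriminator: the networks sketched above (assumed built)
g_opt = torch.optim.Adam(generator.parameters(), lr=0.0002)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=0.0002)

for masked, mask, real in loader:
    for _ in range(5):                     # discriminator updates
        d_opt.zero_grad()
        d_loss = discriminator_loss(discriminator(real),
                                    discriminator(generator(masked).detach()))
        d_loss.backward()
        d_opt.step()
    for _ in range(5):                     # generator updates
        g_opt.zero_grad()
        pred = generator(masked)
        g_loss = generator_loss(pred, real, mask, discriminator(pred))
        g_loss.backward()
        g_opt.step()
```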
In practical application, directly using only the reconstruction loss function and the countermeasure loss function as the joint loss function may introduce unnecessary noise into the predicted image. The total variation loss L_tv is therefore introduced:

L_tv = Σ_{i,j} ( ||x_{i,j+1} - x_{i,j}||₂² + ||x_{i+1,j} - x_{i,j}||₂² )

where x_{i,j+1} denotes the first pixel point on the two sides of the hollow boundary of the mask image, x_{i,j} denotes the second pixel point on the two sides of the hollow boundary of the mask image, x_{i+1,j} denotes the third pixel point on the two sides of the hollow boundary of the mask image, and i and j are natural numbers.
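A minimal sketch of the joint loss under the three definitions above, assuming PyTorch tensors of shape (batch, channels, height, width); the λ weights are not specified in the patent, so the defaults here are placeholders:

```python
import torch

def joint_loss(pred, real, mask, d_fake,
               lam_rec=0.999, lam_adv=0.001, lam_tv=1e-4):
    """L = lam_rec*L_rec + lam_adv*L_adv + lam_tv*L_tv (weights are placeholders)."""
    # reconstruction loss: L2 distance inside the masked (cloud) region
    l_rec = torch.sum(mask * (real - pred) ** 2)
    # countermeasure loss (generator side): push D's output toward "true"
    l_adv = -torch.log(d_fake + 1e-8).mean()
    # total variation loss: penalize differences between neighboring pixels
    l_tv = (torch.sum((pred[:, :, :, 1:] - pred[:, :, :, :-1]) ** 2) +
            torch.sum((pred[:, :, 1:, :] - pred[:, :, :-1, :]) ** 2))
    return lam_rec * l_rec + lam_adv * l_adv + lam_tv * l_tv
```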
This embodiment provides the joint loss function of the deep convolution countermeasure generation network and introduces the total variation loss as one part of it, improving the ability of the network to generate smooth images. Cloud-region simulation experiments on images of four types of real-shot ground-feature scenes (vegetation, water body, artificial ground features and soil) show that the cloud-region completion of the proposed cloud removing method is superior to conventional algorithms.
In practical application, step S108 specifically includes:
and inputting the semi-finished product image of the aerial photographing region after cloud removal and the non-cloud image of the aerial photographing region by using a seamless _ cloning function in opencv3.0, and fusing the semi-finished product image of the aerial photographing region after cloud removal to obtain the aerial photographing region image after cloud removal.
According to the embodiment, the color difference or the boundary artifact between the generation area and the background in the aerial photo area semi-finished image after cloud removal is removed through the Poisson algorithm, so that the authenticity of the aerial photo area image after cloud removal is improved.
Fig. 4 is a schematic structural diagram of a cloud removal system for aerial photography areas including cloud images according to the present invention. As shown in fig. 4, the system includes:
the first acquisition module 1 is used for acquiring a cloud-free image, a plurality of central mask images and a plurality of simulated cloud mask images in an aerial photographing area; the central mask image is an aerial photography area cloud-free image with a hollow square shape; the simulated cloud mask image is an aerial photography area cloud-free image with a hollow shape of an irregular graph;
the optimization module 2 is used for taking the aerial photography area cloud-free image, the central mask image and the simulated cloud mask image as the input of the depth convolution countermeasure generation network to obtain the optimized depth convolution countermeasure generation network; the deep convolution countermeasure generation network comprises a generator and a discriminator; the generator comprises a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a full connection layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer and a fifth deconvolution layer in sequence from an input end to an output end; the discriminator is sequentially provided with a sixth convolution layer, a seventh convolution layer, an eighth convolution layer, a ninth convolution layer and a tenth convolution layer from the input end to the output end;
the second acquisition module 3 is used for acquiring cloud pictures of a plurality of aerial photographing areas;
the predicted image generation module 4 is used for inputting the cloud images in the plurality of aerial photographing regions into a generator of the optimized deep convolution countermeasure generation network to obtain a plurality of predicted images;
the identification result calculation module 5 is used for inputting the cloud-free image of the aerial photographing area into the identifier of the optimized depth convolution countermeasure generation network, and inputting a plurality of predicted images into the identifier of the optimized depth convolution countermeasure generation network one by one to obtain a plurality of identification results; the identification result is the true probability of the predicted image;
a difference value calculating module 6, configured to calculate the difference value between the true probability of the predicted image corresponding to each identification result and the false probability of the predicted image; the false probability of the predicted image is 1 minus its true probability;
the cloud-removed aerial photography area cloud-free image semi-finished product determining module 7 is used for determining any one of N predicted images corresponding to the N difference values as the cloud-removed aerial photography area semi-finished product image when the N continuous difference values are smaller than a set value; wherein N is a positive integer;
and the cloud-removed aerial photography area cloud-free image generation module 8 is used for performing fusion processing on the cloud-removed aerial photography area semi-finished image by adopting a Poisson image editing method to obtain the cloud-removed aerial photography area image.
In the cloud removal system for the aerial photography area including the cloud image in the embodiment, the predicted image generation module 4 with the built deep convolution countermeasure generation network is used for generating the missing cloud area, and the cloud-removed aerial photography area cloud-free image generation module 8 is used for editing and processing the difference between the predicted image and the boundary around the scene through the Poisson image, so that the reality of the generated aerial photography area image after cloud removal is improved.
In practical applications, the joint loss of the depth convolution countermeasure generation network in the optimization module 2 and the predicted image generation module 4 is L:

L = λ_rec · L_rec + λ_adv · L_adv + λ_tv · L_tv

where L_rec denotes the reconstruction loss, λ_rec the coefficient of the reconstruction loss, L_adv denotes the countermeasure loss, λ_adv the coefficient of the countermeasure loss, L_tv denotes the total variation loss, and λ_tv the coefficient of the total variation loss.

The reconstruction loss L_rec is specifically:

L_rec = ||M̂ ⊙ (x - F((1 - M̂) ⊙ x))||₂²

where M̂ denotes the binary image of the cloud-region mask, ⊙ is the element-wise product symbol, x denotes the aerial photography area cloud-free image, and F denotes the deep convolution countermeasure generation network.

The countermeasure loss L_adv is specifically:

L_adv = max_D E_{x∈χ} [ log D(x) + log(1 - D(F((1 - M̂) ⊙ x))) ]

where χ denotes the aerial photography area cloud-free image library, used to store the aerial photography area cloud-free images, and D denotes the discriminator.

The total variation loss L_tv is specifically:

L_tv = Σ_{i,j} ( ||x_{i,j+1} - x_{i,j}||₂² + ||x_{i+1,j} - x_{i,j}||₂² )
the embodiment provides the joint loss function of the deep convolution countermeasure generation network, and the total variation loss is introduced to be a part of the joint loss function, so that the capability of the deep convolution countermeasure generation network for smoothly generating the image is improved. The cloud area simulation experiment on the four types of real-shot ground feature scene information (vegetation, water body, artificial ground features and soil) images shows that the experimental result shows that the cloud area completion work of the cloud removing method for the aerial image containing the cloud image in the aerial image area is superior to that of the conventional algorithm.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In summary, this description should not be construed as limiting the invention.

Claims (10)

1. A cloud removing method for a cloud-containing image in an aerial photographing area is characterized by comprising the following steps:
acquiring a non-cloud image, a plurality of central mask images and a plurality of simulated cloud mask images of an aerial photographing area which is not subjected to mask processing; the central mask image is an aerial photography area cloud-free image with a square hollow area; the simulated cloud mask image is an aerial photography area cloud-free image with a hollow shape of an irregular graph;
taking the aerial photography area non-cloud image which is not subjected to mask processing, the central mask image and the simulated cloud mask image as the input of the depth convolution countermeasure generation network to obtain the optimized depth convolution countermeasure generation network; the deep convolution countermeasure generation network comprises a generator and a discriminator; the generator comprises a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a full-connection layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer and a fifth deconvolution layer in sequence from an input end to an output end; the discriminator is sequentially provided with a sixth convolution layer, a seventh convolution layer, an eighth convolution layer, a ninth convolution layer and a tenth convolution layer from the input end to the output end; taking an aerial photography area cloudless image which is not subjected to mask processing and a central mask image as the input of a deep convolution countermeasure generation network, training the deep convolution countermeasure generation network to obtain a first deep convolution countermeasure generation network; taking an aerial photographing region cloud-free image and a simulated cloud mask image which are not subjected to mask processing as input of a first deep convolution countermeasure generation network, training the first deep convolution countermeasure generation network to obtain an optimized deep convolution countermeasure generation network, specifically, firstly, using a central mask image as an initial model to train the deep convolution countermeasure generation network, and enabling the deep convolution countermeasure generation network to tend to learn low-layer image characteristics near a boundary; after the training of the central mask image is completed, the simulated cloud mask image is used for training the deep convolution countermeasure generation network so as to improve the generalization capability of deep convolution countermeasure generation network feature learning;
acquiring cloud-containing pictures of a plurality of aerial photographing areas;
inputting the cloud pictures of the plurality of aerial photographing regions into a generator of the optimized deep convolution countermeasure generation network to obtain a plurality of predicted pictures;
inputting an aerial photographing region cloudless image which is not subjected to mask processing into the discriminator of the optimized depth convolution countermeasure generation network, and inputting a plurality of predicted images into the discriminator of the optimized depth convolution countermeasure generation network one by one to obtain a plurality of discrimination results; the identification result is the true probability of the predicted image;
calculating the difference value between the true probability of the predicted image corresponding to each identification result and the false probability of the predicted image, wherein the false probability of the predicted image is 1 minus its true probability;
when the N continuous difference values are smaller than a set value, determining any one of the N predicted images corresponding to the N difference values as a semi-finished image of the aerial photographing area after cloud removal; wherein N is a positive integer;
and fusing the semi-finished aerial photo area image subjected to cloud removal by adopting a Poisson image editing method to obtain the aerial photo area image subjected to cloud removal.
2. The cloud removal method for aerial photography area cloud-containing images according to claim 1,
the convolution kernels of the first, second, third, fourth, fifth, fully-connected, first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth convolution layers are 4 × 4, and the step size is 2.
3. The cloud removal method for aerial photography area cloud-containing images according to claim 1,
performing output normalization on the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the fully-connected layer, the first deconvolution layer, the second deconvolution layer, the third deconvolution layer, the fourth deconvolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer, and the tenth convolution layer by using a batch normalization method.
4. The cloud removal method for aerial photography area cloud-containing images according to claim 1,
the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the full connection layer, the first deconvolution layer, the second deconvolution layer, the third deconvolution layer and the fourth deconvolution layer adopt a linear rectification function; the fifth deconvolution layer adopts a hyperbolic tangent function; the sixth convolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer and the tenth convolution layer adopt a leaky linear rectification function.
5. The cloud removal method for aerial photography area cloud-containing images according to claim 1,
the joint loss of the deep convolution countermeasure generation network is L:

L = λ_rec · L_rec + λ_adv · L_adv + λ_tv · L_tv

wherein L_rec denotes the reconstruction loss, λ_rec the coefficient of the reconstruction loss, L_adv denotes the countermeasure loss, λ_adv the coefficient of the countermeasure loss, L_tv denotes the total variation loss, and λ_tv the coefficient of the total variation loss.
6. The cloud removing method for aerial photography area cloud-containing images according to claim 5, wherein the reconstruction loss L_rec is specifically:

L_rec = ||M̂ ⊙ (x - F((1 - M̂) ⊙ x))||₂²

wherein M̂ denotes the binary image of the cloud-region mask, ⊙ is the element-wise product symbol, x denotes the aerial photography area cloud-free image that has not been subjected to mask processing, and F denotes the deep convolution countermeasure generation network.
7. The cloud removing method for aerial photography area cloud-containing images according to claim 5, wherein the countermeasure loss L_adv is specifically:

L_adv = max_D E_{x∈χ} [ log D(x) + log(1 - D(F((1 - M̂) ⊙ x))) ]

wherein χ denotes the aerial photography area cloud-free image library, used to store the aerial photography area cloud-free images that have not been subjected to mask processing; D denotes the discriminator; M̂ denotes the binary image of the cloud-region mask; ⊙ is the element-wise product symbol; x denotes the aerial photography area cloud-free image that has not been subjected to mask processing; and F denotes the deep convolution countermeasure generation network.
8. The cloud removing method for aerial photography area cloud-containing images according to claim 5, wherein the total variation loss L_tv is specifically:

L_tv = Σ_{i,j} ( ||x_{i,j+1} - x_{i,j}||₂² + ||x_{i+1,j} - x_{i,j}||₂² )

wherein x_{i,j+1} denotes the first pixel point on the two sides of the hollow boundary of the mask image, x_{i,j} denotes the second pixel point on the two sides of the hollow boundary of the mask image, x_{i+1,j} denotes the third pixel point on the two sides of the hollow boundary of the mask image, and i and j are natural numbers.
9. A cloud removal system for aerial photography of a cloud-containing image, comprising:
the first acquisition module is used for acquiring a non-cloud image, a plurality of central mask images and a plurality of simulated cloud mask images of an aerial photographing area which is not subjected to mask processing; the central mask image is an aerial photography area cloud-free image with a square hollow shape; the simulated cloud mask image is an aerial photography area cloud-free image with a hollow shape of an irregular graph;
the optimization module is used for obtaining an optimized deep convolution countermeasure generation network by taking the aerial photographing region cloud-free image which is not subjected to mask processing, the central mask image and the simulated cloud mask image as the input of the deep convolution countermeasure generation network; the deep convolution countermeasure generation network comprises a generator and a discriminator; the generator comprises a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a full-connection layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer and a fifth deconvolution layer in sequence from an input end to an output end; the discriminator is sequentially provided with a sixth convolution layer, a seventh convolution layer, an eighth convolution layer, a ninth convolution layer and a tenth convolution layer from the input end to the output end; taking an aerial photography area cloudless image which is not subjected to mask processing and a central mask image as the input of a deep convolution countermeasure generation network, training the deep convolution countermeasure generation network to obtain a first deep convolution countermeasure generation network; taking an aerial photographing region cloud-free image and a simulated cloud mask image which are not subjected to mask processing as input of a first deep convolution countermeasure generation network, training the first deep convolution countermeasure generation network to obtain an optimized deep convolution countermeasure generation network, specifically, firstly, using a central mask image as an initial model to train the deep convolution countermeasure generation network, and enabling the deep convolution countermeasure generation network to tend to learn low-layer image characteristics near a boundary; after the training of the central mask image is completed, the simulated cloud mask image is used for training the deep convolution countermeasure generation network so as to improve the generalization capability of deep convolution countermeasure generation network feature learning;
the second acquisition module is used for acquiring cloud pictures of a plurality of aerial photographing areas;
the predicted image generation module is used for inputting the cloud images in the plurality of aerial photographing regions into the generator of the optimized depth convolution countermeasure generation network to obtain a plurality of predicted images;
the identification result calculation module is used for inputting the aerial photographing region cloud-free image which is not subjected to mask processing into the identifier of the optimized depth convolution countermeasure generation network, and inputting the prediction images into the identifier of the optimized depth convolution countermeasure generation network one by one to obtain a plurality of identification results; the identification result is the true probability of the predicted image;
the difference value calculating module is used for calculating, for each identification result, the difference value between the true probability of the corresponding predicted image and the false probability of that predicted image; the false probability of a predicted image is 1 minus its true probability;
the cloud-removed aerial photography area semi-finished image determining module is used for determining, when N consecutive difference values are smaller than a set value, any one of the N predicted images corresponding to those difference values as the cloud-removed aerial photography area semi-finished image, wherein N is a positive integer (a sketch of this stopping test follows this claim);
and the cloud-removed aerial photography area image generation module is used for fusing the cloud-removed semi-finished image with the surrounding scene by a Poisson image editing method to obtain the cloud-removed aerial photography area image (see the fusion sketch after this claim).
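As a concrete illustration of the two mask types named in the first acquisition module, here is a minimal sketch, assuming 64×64 images, a 32×32 central hole, and a random-walk cloud outline; the helper names and every size are illustrative assumptions, not values fixed by the claim.

```python
# Hedged sketch of the two mask types (all sizes are assumptions).
import numpy as np

def central_mask(size=64, hole=32):
    """Central mask: a square region hollowed out of the image center."""
    m = np.zeros((size, size), dtype=np.uint8)
    s = (size - hole) // 2
    m[s:s + hole, s:s + hole] = 1          # 1 marks the hollowed-out pixels
    return m

def simulated_cloud_mask(size=64, steps=400, seed=None):
    """Simulated cloud mask: an irregular hole grown by a random walk."""
    rng = np.random.default_rng(seed)
    m = np.zeros((size, size), dtype=np.uint8)
    y = x = size // 2
    for _ in range(steps):
        m[y - 1:y + 2, x - 1:x + 2] = 1    # stamp a 3x3 blob at (y, x)
        y = int(np.clip(y + rng.integers(-1, 2), 1, size - 2))
        x = int(np.clip(x + rng.integers(-1, 2), 1, size - 2))
    return m

def apply_mask(img, mask, fill=1.0):
    """Hollow out a cloud-free H x W x 3 image where the mask is set."""
    out = img.copy()
    out[mask.astype(bool)] = fill
    return out
```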
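The claim fixes only the order and type of the generator and discriminator layers. The following PyTorch sketch is consistent with that layout; the channel widths, kernel sizes, strides, activations, and the 64×64 working resolution are assumptions the claim does not specify.

```python
# Hedged sketch of the generator/discriminator layout named in the claim.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Five convolution layers encode the masked image (64 -> 2 spatially).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),     # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),   # 32 -> 16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),  # 16 -> 8
            nn.Conv2d(256, 512, 4, 2, 1), nn.LeakyReLU(0.2),  # 8 -> 4
            nn.Conv2d(512, 512, 4, 2, 1), nn.LeakyReLU(0.2),  # 4 -> 2
        )
        # Full-connection layer between encoder and decoder.
        self.fc = nn.Linear(512 * 2 * 2, 512 * 2 * 2)
        # Five deconvolution (transposed-conv) layers decode the prediction.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 512, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),    # 2 -> 64
        )

    def forward(self, x):
        h = self.encoder(x)
        h = self.fc(h.flatten(1)).view(-1, 512, 2, 2)
        return self.decoder(h)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Five convolution layers ending in a single real/fake probability.
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(512, 1, 2, 1, 0), nn.Sigmoid(),         # 2x2 -> 1x1
        )

    def forward(self, x):
        return self.net(x).view(-1)
```

The full-connection layer acts as a bottleneck between encoder and decoder, forcing the network to summarize the visible scene before synthesizing the hollowed-out region.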
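The two-stage curriculum described in the optimization module, first central masks, then simulated cloud masks, could be driven by a loop along these lines; the optimizer settings, epoch count, loss weights, and the normalization of images to [-1, 1] are all assumptions for illustration.

```python
# Hedged sketch of the two-stage training; hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def train_stage(gen, dis, clean_images, masks, epochs=10, lr=2e-4):
    """One curriculum stage: clean_images are 3x64x64 tensors in [-1, 1]."""
    g_opt = torch.optim.Adam(gen.parameters(), lr=lr, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(dis.parameters(), lr=lr, betas=(0.5, 0.999))
    for _ in range(epochs):
        for img, mask in zip(clean_images, masks):
            img = img.unsqueeze(0)
            m = torch.from_numpy(mask).float()
            masked = img * (1 - m)                 # hollow out the masked region
            fake = gen(masked)
            # Discriminator step: real clean image vs generated prediction.
            d_loss = F.binary_cross_entropy(dis(img), torch.ones(1)) + \
                     F.binary_cross_entropy(dis(fake.detach()), torch.zeros(1))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
            # Generator step: reconstruction + adversarial (cf. claim 10).
            g_loss = F.mse_loss(fake, img) + \
                     0.001 * F.binary_cross_entropy(dis(fake), torch.ones(1))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Stage 1: central masks teach low-level features near the hole boundary.
# train_stage(gen, dis, clean_images, central_masks)
# Stage 2: simulated cloud masks improve the generalization of the features.
# train_stage(gen, dis, clean_images, cloud_masks)
```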
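The stopping test and the Poisson fusion in the last three modules might be realized as follows; the 0.05 threshold and both helper names are hypothetical, and OpenCV's `cv2.seamlessClone`, an off-the-shelf Poisson image editing routine, stands in for whatever implementation the patent intends.

```python
# Hedged sketch of the stopping test and Poisson fusion; names/threshold assumed.
import cv2
import numpy as np
import torch

def pick_semi_finished(predictions, discriminator, n, set_value=0.05):
    """Return a prediction once n consecutive difference values are small.

    The difference value is |true probability - false probability|
    = |2*D(pred) - 1|, since the false probability is 1 minus the true one.
    """
    below = 0
    for pred in predictions:
        with torch.no_grad():
            p_true = discriminator(pred.unsqueeze(0)).item()
        if abs(2.0 * p_true - 1.0) < set_value:
            below += 1
            if below >= n:
                return pred        # any of the last n predictions qualifies
        else:
            below = 0
    return None

def poisson_fuse(generated_bgr, original_bgr, cloud_mask_u8):
    """Blend the generated region into the scene via Poisson image editing."""
    ys, xs = np.nonzero(cloud_mask_u8)
    center = (int(xs.mean()), int(ys.mean()))   # (x, y) center of the hole
    return cv2.seamlessClone(generated_bgr, original_bgr,
                             cloud_mask_u8 * 255, center, cv2.NORMAL_CLONE)
```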
10. The cloud removal system for aerial photography region cloud-containing images of claim 9, wherein
the joint loss of the deep convolution countermeasure generation network is L:
L = λ_rec · L_rec + λ_adv · L_adv + λ_tv · L_tv

wherein L_rec denotes the reconstruction loss, λ_rec the coefficient of the reconstruction loss, L_adv denotes the adversarial loss, λ_adv the coefficient of the adversarial loss, L_tv denotes the total variation loss, and λ_tv the coefficient of the total variation loss.
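Claim 10 names the three loss terms and their coefficients but not their forms. Below is a minimal sketch under common assumptions, L2 reconstruction, a non-saturating adversarial term, anisotropic total variation, and Context-Encoders-style weights; every one of these choices is an assumption, not the patent's specification.

```python
# Hedged sketch of L = λ_rec·L_rec + λ_adv·L_adv + λ_tv·L_tv; forms assumed.
import torch
import torch.nn.functional as F

def joint_loss(pred, target, d_pred,
               lam_rec=0.999, lam_adv=0.001, lam_tv=1e-4):
    """pred/target: (N,3,H,W); d_pred: discriminator output on pred."""
    l_rec = F.mse_loss(pred, target)              # reconstruction loss
    l_adv = -torch.log(d_pred + 1e-8).mean()      # adversarial loss
    l_tv = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean() + \
           (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()  # total variation
    return lam_rec * l_rec + lam_adv * l_adv + lam_tv * l_tv
```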
CN201810893813.6A 2018-08-07 2018-08-07 Cloud removing method and system for aerial region cloud-containing image Active CN109064430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810893813.6A CN109064430B (en) 2018-08-07 2018-08-07 Cloud removing method and system for aerial region cloud-containing image


Publications (2)

Publication Number Publication Date
CN109064430A CN109064430A (en) 2018-12-21
CN109064430B true CN109064430B (en) 2020-10-09

Family

ID=64678087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810893813.6A Active CN109064430B (en) 2018-08-07 2018-08-07 Cloud removing method and system for aerial region cloud-containing image

Country Status (1)

Country Link
CN (1) CN109064430B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816612A (en) * 2019-02-18 2019-05-28 京东方科技集团股份有限公司 Image enchancing method and device, computer readable storage medium
CN111062479B (en) * 2019-12-19 2024-01-23 北京迈格威科技有限公司 Neural network-based rapid model upgrading method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105303526A (en) * 2015-09-17 2016-02-03 哈尔滨工业大学 Ship target detection method based on coastline data and spectral analysis
CN106127702A (en) * 2016-06-17 2016-11-16 兰州理工大学 A kind of image mist elimination algorithm based on degree of depth study
CN107330954A (en) * 2017-07-14 2017-11-07 深圳市唯特视科技有限公司 A kind of method based on attenuation network by sliding attribute manipulation image
CN108334904A (en) * 2018-02-07 2018-07-27 深圳市唯特视科技有限公司 A kind of multiple domain image conversion techniques based on unified generation confrontation network


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Automatic Cloud Removal from Multitemporal Satellite Images; S R Surya, Philomina Simon; Journal of the Indian Society of Remote Sensing; 2015-03-31; Vol. 43; pp. 57-68 *
Context Encoders: Feature Learning by Inpainting; Deepak Pathak et al.; 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016-12-12; pp. 2536-2544 *
From Eyes to Face Synthesis: a New Approach for Human-Centered Smart Surveillance; Xiang Chen et al.; IEEE Access; 2018-01-08; Vol. 6; pp. 14567-14575 *
Image Recognition Method Based on Conditional Deep Convolutional Generative Adversarial Networks; Tang Xianlun et al.; Acta Automatica Sinica; 2018-05-31; Vol. 44, No. 5; pp. 855-864 *

Also Published As

Publication number Publication date
CN109064430A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN110378844B (en) Image blind motion blur removing method based on cyclic multi-scale generation countermeasure network
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
CN101714262B (en) Method for reconstructing three-dimensional scene of single image
CN108765344A (en) A method of the single image rain line removal based on depth convolutional neural networks
CN110853026A (en) Remote sensing image change detection method integrating deep learning and region segmentation
CN108510504A (en) Image partition method and device
CN109410135B (en) Anti-learning image defogging and fogging method
CN110443775B (en) Discrete wavelet transform domain multi-focus image fusion method based on convolutional neural network
CN109447897B (en) Real scene image synthesis method and system
CN107944437B (en) A kind of Face detection method based on neural network and integral image
CN113077554A (en) Three-dimensional structured model reconstruction method based on any visual angle picture
CN112560865B (en) Semantic segmentation method for point cloud under outdoor large scene
CN114943893B (en) Feature enhancement method for land coverage classification
CN116822382B (en) Sea surface temperature prediction method and network based on space-time multiple characteristic diagram convolution
CN112508991A (en) Panda photo cartoon method with separated foreground and background
CN109064430B (en) Cloud removing method and system for aerial region cloud-containing image
CN106845343A (en) A kind of remote sensing image offshore platform automatic testing method
CN114677479A (en) Natural landscape multi-view three-dimensional reconstruction method based on deep learning
CN110570375B (en) Image processing method, device, electronic device and storage medium
CN116778165A (en) Remote sensing image disaster detection method based on multi-scale self-adaptive semantic segmentation
CN116977674A (en) Image matching method, related device, storage medium and program product
CN114926734A (en) Solid waste detection device and method based on feature aggregation and attention fusion
CN109598771B (en) Terrain synthesis method of multi-landform feature constraint
CN109658508B (en) Multi-scale detail fusion terrain synthesis method
CN110675311A (en) Sketch generation method and device under sketch order constraint and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant