CN111709888A - Aerial image defogging method based on improved generation countermeasure network - Google Patents

Aerial image defogging method based on improved generation countermeasure network

Info

Publication number
CN111709888A
Authority
CN
China
Prior art keywords
fog
image
network
defogging
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010496560.6A
Other languages
Chinese (zh)
Other versions
CN111709888B (en)
Inventor
庄子尤
魏育成
徐成华
徐永强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Jiudu Beijing Spatial Information Technology Co ltd
Original Assignee
Zhongke Jiudu Beijing Spatial Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Jiudu Beijing Spatial Information Technology Co ltd filed Critical Zhongke Jiudu Beijing Spatial Information Technology Co ltd
Priority to CN202010496560.6A
Publication of CN111709888A
Application granted
Publication of CN111709888B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an aerial image defogging method based on an improved generative adversarial network. The method establishes a data set; inputs the foggy sample images into a generative network for defogging; inputs the defogged sample images and the corresponding fog-free sample images into an adversarial network, which judges them against a threshold as real or fake and calculates the loss-function model parameters; feeds the loss-function model parameters back to the generative network to update the generative network model; repeats these steps to obtain a trained model; and finally inputs foggy pictures into the trained model to obtain fog-free pictures. The invention thereby addresses the shortcomings of existing image defogging methods.

Description

Aerial image defogging method based on improved generation countermeasure network
Technical Field
The invention relates to aerial image defogging methods, and in particular to an aerial image defogging method based on an improved generative adversarial network.
Background
With the rapid development of the internet and of information-processing technology, the demands placed on image clarity keep rising. Because of the limits of physical imaging conditions and of the acquisition environment, however, captured images are often disturbed by fog of varying density. In natural fog and haze, the atmosphere carries large numbers of small water droplets and fine dust particles that increase the scattering of light. As a result, images taken by outdoor acquisition sensors show reduced contrast, a narrowed dynamic range, lower sharpness and duller colors; part of the detail information is masked and colors may even be distorted. The added noise degrades the visual effect and makes it difficult to extract and analyze image detail information afterwards, so the images cannot meet application requirements.
To make captured images more effective and practical, to lessen the degree to which airborne fog keeps image sensors from working normally, and to improve the visual effect and quality of images, clarifying images and removing natural fog and haze is an urgent problem. An aerial image defogging method therefore has high application value: it raises image contrast under foggy and hazy weather conditions, widens the dynamic range, recovers edge detail information and keeps vision-processing systems running normally and stably; in particular, it guarantees that aerial imaging systems work effectively and correctly in severe weather such as fog and haze.
Current image defogging methods fall into two main groups: traditional model-based defogging methods and deep-learning-based image defogging methods. The traditional model-based methods can be further divided into defogging algorithms based on scene-depth knowledge, defogging methods that solve for scene-depth information from auxiliary information, and defogging methods based on prior conditions.
Traditional model-based defogging starts from the foggy-day imaging model and considers how to achieve defogging from the imaging mechanism. In principle it divides into the following categories, whose technical characteristics are as follows:
(1) defogging method based on scene depth knowledge
This approach assumes that depth-of-field information about the observed scene has been acquired, estimates the transmittance at each pixel by constructing an image-degradation model, and then performs the defogging operation by combining the foggy-day imaging model. Because processing color images requires expensive third-party equipment to measure the scene geometry, the approach is impractical.
(2) Defogging method for solving scene depth information based on auxiliary information
When defogging an image, this approach first estimates the scene-depth information from a fog-free image of the same scene and then defogs the image by modeling on that basis. Since it needs a clear image of the very scene being defogged, and it is difficult in practice to obtain the foggy image and the clear image at the same time, its practicability is low.
(3) Defogging method based on prior condition
Many prior-based defogging methods have been proposed in the literature. Some researchers exploit the property that light propagation is locally uncorrelated with shading; this method generalizes poorly, suits only thinly fogged images and performs badly once the fog thickens. Other researchers observed that a fog-free image has stronger contrast, with higher values, than a foggy one, and on that assumption proposed defogging by maximizing the local contrast of the image; this improves the processing of dense-fog images but sometimes produces halo artifacts. He Kaiming (Kaiming He) et al. proposed a defogging algorithm based on the pixel dark channel prior: in a fog-free picture, every pixel outside the sky and other high-brightness regions has at least one color channel whose value is close to 0, and higher values in such a channel are attributed to the thin-cloud (haze) effect of that region; during processing, a window of fixed size is slid over the image to find the per-region pixel minima, which are taken as the haze component. The method assumes the transmittance is constant within each local region, whereas real transmittance is not constant within a region; the estimated transmittance is therefore inaccurate and block artifacts appear, the whole image can become dark, and sky regions suffer color distortion and degradation with many color patches. Because prior information for the image defogging problem is scarce and an accurate prior model is hard to derive, each prior-assumption defogging method solves one class of problems while introducing new ones.
Deep-learning-based image defogging methods can be grouped as follows. One family represents the atmospheric scattering model with a generative adversarial network (GAN) or another deep network and estimates the corresponding transmittance T and atmospheric light value A through training. This usually requires the scene depth of the picture to be defogged to be known during training, yet in practical applications it is difficult to obtain the scene depth of a picture, which limits the practicability of the approach.
Another line of work, such as that of Su et al., first applies prior knowledge to defog the image preliminarily and then feeds the result into a generative adversarial network or another deep network for further defogging. Prior knowledge is used to guide the encoding network, but the approach still faces the shortage of prior information, which restricts its practicability.
A further group of researchers, such as Tian et al., first preprocesses the image to be defogged, normalizing it to a grayscale image and computing gradients, and then feeds HOG features into a generative adversarial network. The pipeline has many steps, complex operation and poor efficiency in practical application.
Still other researchers, such as Tang et al., feed images directly into a generative adversarial network, but the network structure is complicated: it uses two generators and two discriminators, which greatly increases the amount of computation during defogging and slows processing.
Disclosure of Invention
To remedy the above technical shortcomings, the invention provides an aerial image defogging method based on an improved generative adversarial network.
The technical solution adopted to solve this technical problem is an aerial image defogging method based on an improved generative adversarial network, comprising the following steps:
I. collecting foggy and fog-free sample images, establishing the data set required by the training model, and classifying the samples as foggy or fog-free;
II. inputting the foggy sample images into the generative network, which performs the defogging treatment on them; the generative network consists of a generator, in which defogging is completed by mutually corresponding encoders and decoders, and each feature map in the decoder is fused along the channel dimension with the feature map of the corresponding encoder, so that the decoder acquires effective feature-expression capability during the adversarial-learning stage; a PReLU activation operation is applied to the fused features;
III. inputting the sample image defogged by the generative network and the corresponding fog-free sample image into the adversarial network, judging the two images against a threshold as real or fake, and calculating the loss-function model parameters;
Because information is shared between the foggy image and the generated fog-free image, training is prone to overfitting, and the fog in an image is concentrated in the low-frequency components; the loss-function model parameters must therefore be calculated to guarantee the similarity between the generated fog-free image and the original image. The total loss function Loss is given by formula ①, where L_1 denotes the adversarial loss function, L_2 the smoothing loss function, W_1 the adversarial-loss weight and W_2 the smoothing-loss weight:

Loss = W_1·L_1 + W_2·L_2    formula ①

The adversarial loss function L_1 is defined by formula ②:

L_1 = E_(x,y)[log(D(x,y))] + E_(x,z)[log(1 - D(x, G(x,z)))]    formula ②

The smoothing loss function L_2 is defined by formula ③:

L_2 = E_(x,y,z)[||y - G(x,z)||_1]    formula ③

where G is the generator, D is the discriminator, x is the input foggy image, y is the fog-free image corresponding to x, and z is random noise; E_(x,y) denotes averaging the loss over all foggy images and corresponding fog-free image samples input to the discriminator, and E_(x,y,z) denotes averaging the loss between the real fog-free image and the fog-free image generated by the generative network;
IV. feeding the loss-function model parameters back to the generative network, which adjusts its parameters and updates the generative network model;
V. repeating steps I to IV until training is finished, obtaining the trained model;
VI. inputting the foggy image to be defogged into the trained model to obtain the fog-free, defogged image.
Further, the encoder performs feature extraction on the foggy sample images and carries out the downsampling operations; each convolution layer is followed by a BN layer and a PReLU activation layer;
the decoder performs upsampling operations on the features passed on by the encoder in sequence; the upsampling enlarges the previously reduced feature maps back to their original size so as to guarantee end-to-end output. After each upsampling step, the decoder enriches the feature information with convolution operations, so that information lost during encoding can be recovered by learning in the decoder; the features of every convolution layer are normalized and Dropout operations are performed to prevent overfitting.
Further, the generative network adopts a U-Net structure; U-Net is a fully convolutional architecture whose skip connections combine low-level feature maps with high-level feature maps and retain pixel-level detail information at different resolutions; on the basis of the U-Net structure, convolution layers of the foggy sample images are concatenated onto the corresponding convolution layers of the generated fog-free images so as to enrich the image information.
The invention has the following beneficial effects: aiming at the shortcomings of existing image defogging methods, it provides an aerial image defogging method based on an improved generative adversarial network that yields brighter, more vivid defogged images with clearer edge and detail information.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the discriminator.
Fig. 3 is a fog-bearing image a to be processed by the present invention.
FIG. 4 shows image A after the defogging process of the present invention.
Fig. 5 is a fog-bearing image B to be processed by the present invention.
FIG. 6 shows image B after the defogging process according to the present invention.
Fig. 7 is a fog-bearing image C to be processed by the present invention.
Fig. 8 is an image C after the defogging process of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 shows the flow of the aerial image defogging method based on an improved generative adversarial network, which comprises the following steps:
I. Collecting foggy and fog-free sample images, establishing the data set required by the training model, and classifying the samples as foggy or fog-free.
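As a concrete illustration of step I, the sketch below assembles paired foggy/fog-free samples from two directories with matching file names. PyTorch and torchvision, the directory names "hazy" and "clear", and the 256-pixel size are illustrative assumptions, not details given by the patent.

```python
# A minimal paired-data-set sketch, assuming foggy and fog-free images are
# stored under root/hazy and root/clear with identical file names.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms.functional import to_tensor

class PairedFogDataset(Dataset):
    def __init__(self, root, size=256):
        self.hazy_dir = os.path.join(root, "hazy")    # foggy samples
        self.clear_dir = os.path.join(root, "clear")  # fog-free counterparts
        self.names = sorted(os.listdir(self.hazy_dir))
        self.size = size

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        name = self.names[i]
        hazy = Image.open(os.path.join(self.hazy_dir, name)).convert("RGB")
        clear = Image.open(os.path.join(self.clear_dir, name)).convert("RGB")
        # Resize both images of the pair identically, then map to [0, 1].
        hazy = hazy.resize((self.size, self.size))
        clear = clear.resize((self.size, self.size))
        return to_tensor(hazy), to_tensor(clear)
```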
II. Inputting the foggy sample images into the generative network, which performs the defogging treatment on them. The generative network consists of a generator, in which defogging is completed by mutually corresponding encoders and decoders, and each feature map in the decoder is fused along the channel dimension with the feature map of the corresponding encoder, so that the decoder acquires effective feature-expression capability during the adversarial-learning stage; a PReLU activation operation is applied to the fused features.
The encoder performs feature extraction on the foggy sample images and carries out the downsampling operations; each convolution layer is followed by a BN layer and a PReLU activation layer.

The decoder performs upsampling operations on the features passed on by the encoder in sequence; the upsampling enlarges the previously reduced feature maps back to their original size so as to guarantee end-to-end output. After each upsampling step, the decoder enriches the feature information with convolution operations, so that information lost during encoding can be recovered by learning in the decoder; the features of every convolution layer are normalized and Dropout operations are performed to prevent overfitting.
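The encoder and decoder steps above can be sketched as building blocks. This is one plausible reading (strided 4x4 convolutions in PyTorch); the kernel sizes, strides and Dropout rate are assumptions rather than values specified by the patent.

```python
import torch.nn as nn

def encoder_block(in_ch, out_ch):
    # Downsampling step: convolution followed by a BN layer and PReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.PReLU(),
    )

def decoder_block(in_ch, out_ch, p_drop=0.5):
    # Upsampling step: a transposed convolution enlarges the feature map back
    # toward the input size; normalization and Dropout guard against
    # overfitting, as described above.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.PReLU(),
        nn.Dropout2d(p_drop),
    )
```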
III. Inputting the sample image defogged by the generative network and the corresponding fog-free sample image into the adversarial network, where the discriminator judges the two images against a threshold as real or fake, and calculating the loss-function model parameters. The adversarial network compares the defogged image produced by the generative network with the original fog-free image block by block to judge authenticity; after judging, it propagates the parameters back to the generative network, helping it generate fog-free images closer to real ones. A block scoring above the threshold outputs 1; one scoring below it outputs 0. The discriminator principle is shown in Fig. 2.
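A block-by-block ("patch") discriminator of the kind described here might look like the following sketch; the channel widths and the 0.5 threshold are illustrative assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    # Scores an image block by block: the output is a grid of per-patch
    # probabilities of being real rather than a single scalar.
    def __init__(self, in_ch=6):  # foggy input and candidate, concatenated
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.PReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.PReLU(),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, foggy, candidate):
        return self.net(torch.cat([foggy, candidate], dim=1))

def judge(scores, threshold=0.5):
    # Above the threshold -> 1 (real), below -> 0 (fake), as described above.
    return (scores > threshold).float()
```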
Because information is shared between the foggy image and the generated fog-free image, training is prone to overfitting, and the fog in an image is concentrated in the low-frequency components; the loss-function model parameters must therefore be calculated to guarantee the similarity between the generated fog-free image and the original image. The total loss function Loss is given by formula ①, where L_1 denotes the adversarial loss function, L_2 the smoothing loss function, W_1 the adversarial-loss weight and W_2 the smoothing-loss weight:

Loss = W_1·L_1 + W_2·L_2    formula ①

The adversarial loss function L_1 is defined by formula ②:

L_1 = E_(x,y)[log(D(x,y))] + E_(x,z)[log(1 - D(x, G(x,z)))]    formula ②

The smoothing loss function L_2 is defined by formula ③:

L_2 = E_(x,y,z)[||y - G(x,z)||_1]    formula ③

where G is the generator, D is the discriminator, x is the input foggy image, y is the fog-free image corresponding to x, and z is random noise; E_(x,y) denotes averaging the loss over all foggy images and corresponding fog-free image samples input to the discriminator, and E_(x,y,z) denotes averaging the loss between the real fog-free image and the fog-free image generated by the generative network.
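Formulas ①-③ translate almost directly into code. A minimal sketch, assuming PyTorch, a discriminator D that outputs probabilities, and illustrative weights W1 = 1 and W2 = 100 (the patent does not state the weight values):

```python
import torch

def total_loss(D, G, x, y, z, w1=1.0, w2=100.0):
    # Formula (1): Loss = W1*L1 + W2*L2.
    g = G(x, z)                                # generated fog-free image
    eps = 1e-8                                 # numerical stability
    # Formula (2): adversarial loss, each expectation taken as a batch mean.
    l1 = (torch.log(D(x, y) + eps).mean()
          + torch.log(1.0 - D(x, g) + eps).mean())
    # Formula (3): L1 smoothing loss between real and generated images.
    l2 = torch.abs(y - g).mean()
    return w1 * l1 + w2 * l2
```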
IV. Feeding the loss-function model parameters back to the generative network, which adjusts its parameters and updates the generative network model.
V. Repeating steps I to IV until training is finished, obtaining the trained model.
VI. Inputting the foggy image to be defogged into the trained model to obtain the fog-free, defogged image.
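Steps II-V amount to the usual alternating GAN update. The sketch below is schematic, assuming a generator G(x, z) that takes a foggy image and a noise map, a discriminator D(x, image) that outputs patch probabilities, and pix2pix-style optimizer settings; none of these hyperparameters come from the patent.

```python
import torch

def train(G, D, loader, epochs=100, lr=2e-4, device="cuda"):
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    bce = torch.nn.BCELoss()
    for _ in range(epochs):
        for x, y in loader:                    # foggy / fog-free pairs (step I)
            x, y = x.to(device), y.to(device)
            z = torch.randn_like(x)            # random noise input
            fake = G(x, z)                     # step II: defogging pass

            # Step III: the discriminator judges real vs. generated pairs.
            opt_d.zero_grad()
            d_real, d_fake = D(x, y), D(x, fake.detach())
            loss_d = (bce(d_real, torch.ones_like(d_real))
                      + bce(d_fake, torch.zeros_like(d_fake)))
            loss_d.backward()
            opt_d.step()

            # Step IV: feed the loss back and update the generator, with the
            # L1 smoothing term weighted as in formula (1).
            opt_g.zero_grad()
            d_fake = D(x, fake)
            loss_g = (bce(d_fake, torch.ones_like(d_fake))
                      + 100.0 * torch.abs(y - fake).mean())
            loss_g.backward()
            opt_g.step()
    return G   # step VI: G(foggy_image, noise) yields the defogged output
```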
The generative network adopts a U-Net structure. Compared with an ordinary encoder-decoder network that first downsamples to a low dimension and then upsamples back to the original resolution, U-Net adds skip connections that combine low-level feature maps with high-level feature maps, so pixel-level detail information at different resolutions is well preserved. On the basis of the U-Net structure, convolution layers of the foggy image are concatenated onto the corresponding convolution layers of the generated fog-free image so as to enrich the image information.
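The skip connections can be made concrete with a two-level toy U-Net, reusing the encoder_block/decoder_block helpers sketched earlier; the depth and channel counts are illustrative, and the noise input z is omitted for brevity.

```python
import torch
import torch.nn as nn

class TinyUNetGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = encoder_block(3, 64)
        self.enc2 = encoder_block(64, 128)
        self.dec2 = decoder_block(128, 64)
        self.dec1 = decoder_block(64 + 64, 3)  # channels doubled by the skip

    def forward(self, x):
        e1 = self.enc1(x)                      # low-level features
        e2 = self.enc2(e1)                     # high-level features
        d2 = self.dec2(e2)
        # Skip connection: fuse the decoder feature map with the matching
        # encoder feature map along the channel dimension.
        d1 = self.dec1(torch.cat([d2, e1], dim=1))
        return torch.sigmoid(d1)               # fog-free image in [0, 1]
```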
After defogging is finished, the defogging effect is measured with two indexes: structural similarity and peak signal-to-noise ratio. The structural similarity index (SSIM) represents a picture as a vector and expresses the similarity of two pictures through the cosine distance between the vectors. Given two images x and y, their structural similarity is computed with formula ④:

SSIM(x, y) = [(2·μ_x·μ_y + c_1)(2·σ_xy + c_2)] / [(μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)]    formula ④

where μ_x is the mean of x, μ_y is the mean of y, σ_x^2 is the variance of x, σ_y^2 is the variance of y, σ_xy is the covariance of x and y, and c_1 and c_2 are constants used to maintain numerical stability. SSIM ranges from 0 to 1, and SSIM equals 1 when the two images are identical.
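Formula ④ can be checked with a few lines of code. The sketch below evaluates SSIM with global image statistics rather than the sliding window used by standard SSIM implementations, and assumes images scaled to [0, 1] with the conventional constants c1 = 0.01^2 and c2 = 0.03^2:

```python
import torch

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Formula (4) with whole-image means, variances and covariance.
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```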
The peak signal-to-noise ratio (PSNR) between the image before and after restoration expresses the ratio of the maximum possible signal power to the power of the corrupting noise; generally speaking, the higher the PSNR, the better the image reconstruction quality:

PSNR = 10·log10[(2^n − 1)^2 / MSE]

where MSE is the mean square error between the original image and the processed image, and n is the number of bits per sample value.
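A direct implementation of the PSNR formula, assuming integer images with n bits per sample (n = 8 gives the usual peak value of 255):

```python
import torch

def psnr(x, y, n_bits=8):
    peak = float(2 ** n_bits - 1)               # (2^n - 1), e.g. 255 for 8 bits
    mse = torch.mean((x.float() - y.float()) ** 2)
    return 10.0 * torch.log10(peak ** 2 / mse)  # higher PSNR = better quality
```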
150 foggy images from the data set were selected as the test set, and the classic dark channel prior (DCP) defogging method was chosen for comparison with the proposed method. The evaluation indexes SSIM and PSNR are listed in Table 1, and the defogging results are shown in Figs. 3-8. Compared with DCP, the defogged images obtained by the proposed algorithm are brighter and more vivid, with clearer edge and detail information; owing to the feature-extraction ability of the neural network and the adversarial learning driven by the discriminator, the method also achieves higher SSIM and PSNR scores.
TABLE 1 Defogged image quality comparison of the two methods

Index    DCP      Proposed method
SSIM     0.660    0.759
PSNR     13.89    20.32
The above embodiments do not limit the present invention in any form. The invention is not restricted to the examples given above; variations, modifications, additions or substitutions made by those skilled in the art within the scope of the technical solution of the invention also belong to its protection scope.

Claims (3)

1. An aerial image defogging method based on an improved generative adversarial network, characterized in that the method comprises the following steps:
I. collecting foggy and fog-free sample images, establishing the data set required by the training model, and classifying the samples as foggy or fog-free;
II. inputting the foggy sample images into the generative network, which performs the defogging treatment on them; the generative network consists of a generator, in which defogging is completed by mutually corresponding encoders and decoders, and each feature map in the decoder is fused along the channel dimension with the feature map of the corresponding encoder, so that the decoder acquires effective feature-expression capability during the adversarial-learning stage; a PReLU activation operation is applied to the fused features;
III. inputting the sample image defogged by the generative network and the corresponding fog-free sample image into the adversarial network, judging the two images against a threshold as real or fake, and calculating the loss-function model parameters;
because information is shared between the foggy image and the generated fog-free image, training is prone to overfitting, and the fog in an image is concentrated in the low-frequency components; the loss-function model parameters must therefore be calculated to guarantee the similarity between the generated fog-free image and the original image; the total loss function Loss is given by formula ①, where L_1 denotes the adversarial loss function, L_2 the smoothing loss function, W_1 the adversarial-loss weight and W_2 the smoothing-loss weight:

Loss = W_1·L_1 + W_2·L_2    formula ①

wherein the adversarial loss function L_1 is defined by formula ②:

L_1 = E_(x,y)[log(D(x,y))] + E_(x,z)[log(1 - D(x, G(x,z)))]    formula ②

and the smoothing loss function L_2 is defined by formula ③:

L_2 = E_(x,y,z)[||y - G(x,z)||_1]    formula ③

where G is the generator, D is the discriminator, x is the input foggy image, y is the fog-free image corresponding to x, and z is random noise; E_(x,y) denotes averaging the loss over all foggy images and corresponding fog-free image samples input to the discriminator, and E_(x,y,z) denotes averaging the loss between the real fog-free image and the fog-free image generated by the generative network;
IV. feeding the loss-function model parameters back to the generative network, which adjusts its parameters and updates the generative network model;
V. repeating steps I to IV until training is finished, obtaining the trained model;
VI. inputting the foggy image to be defogged into the trained model to obtain the fog-free, defogged image.
2. The aerial image defogging method based on an improved generative adversarial network according to claim 1, characterized in that: the encoder performs feature extraction on the foggy sample images and carries out the downsampling operations; each convolution layer is followed by a BN layer and a PReLU activation layer;
the decoder performs upsampling operations on the features passed on by the encoder in sequence; the upsampling enlarges the previously reduced feature maps back to their original size so as to guarantee end-to-end output; after each upsampling step, the decoder enriches the feature information with convolution operations, so that information lost during encoding can be recovered by learning in the decoder; the features of every convolution layer are normalized and Dropout operations are performed to prevent overfitting.
3. The aerial image defogging method based on an improved generative adversarial network according to claim 2, characterized in that: the generative network adopts a U-Net structure; U-Net is a fully convolutional architecture whose skip connections combine low-level feature maps with high-level feature maps and retain pixel-level detail information at different resolutions; on the basis of the U-Net structure, convolution layers of the foggy sample images are concatenated onto the corresponding convolution layers of the generated fog-free images so as to enrich the image information.
CN202010496560.6A 2020-06-03 2020-06-03 Aerial image defogging method based on improved generation countermeasure network Active CN111709888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010496560.6A CN111709888B (en) 2020-06-03 2020-06-03 Aerial image defogging method based on improved generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010496560.6A CN111709888B (en) 2020-06-03 2020-06-03 Aerial image defogging method based on improved generation countermeasure network

Publications (2)

Publication Number Publication Date
CN111709888A true CN111709888A (en) 2020-09-25
CN111709888B CN111709888B (en) 2023-12-08

Family

ID=72538823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010496560.6A Active CN111709888B (en) 2020-06-03 2020-06-03 Aerial image defogging method based on improved generation countermeasure network

Country Status (1)

Country Link
CN (1) CN111709888B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362251A * 2021-06-27 2021-09-07 东南大学 Adversarial-network image defogging method based on dual discriminators and an improved loss function
CN113658051A * 2021-06-25 2021-11-16 南京邮电大学 Image defogging method and system based on a cycle generative adversarial network
CN116645298A (en) * 2023-07-26 2023-08-25 广东电网有限责任公司珠海供电局 Defogging method and device for video monitoring image of overhead transmission line
CN116721403A (en) * 2023-06-19 2023-09-08 山东高速集团有限公司 Road traffic sign detection method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665432A * 2018-05-18 2018-10-16 百年金海科技有限公司 Single image defogging method based on a generative adversarial network
CN109272455A * 2018-05-17 2019-01-25 西安电子科技大学 Weakly supervised image defogging method based on a generative adversarial network
CN109300090A * 2018-08-28 2019-02-01 哈尔滨工业大学(威海) Single image defogging method based on sub-pixel and conditional generative adversarial networks
CN109493303A * 2018-05-30 2019-03-19 湘潭大学 Image defogging method based on a generative adversarial network
CN109949242A * 2019-03-19 2019-06-28 内蒙古工业大学 Image defogging model generation method and device, and image defogging method and device
CN109993804A * 2019-03-22 2019-07-09 上海工程技术大学 Road scene defogging method based on a conditional generative adversarial network
CN110288550A * 2019-06-28 2019-09-27 中国人民解放军火箭军工程大学 Single image defogging method based on a prior-knowledge-guided conditional generative adversarial network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109272455A * 2018-05-17 2019-01-25 西安电子科技大学 Weakly supervised image defogging method based on a generative adversarial network
CN108665432A * 2018-05-18 2018-10-16 百年金海科技有限公司 Single image defogging method based on a generative adversarial network
CN109493303A * 2018-05-30 2019-03-19 湘潭大学 Image defogging method based on a generative adversarial network
CN109300090A * 2018-08-28 2019-02-01 哈尔滨工业大学(威海) Single image defogging method based on sub-pixel and conditional generative adversarial networks
CN109949242A * 2019-03-19 2019-06-28 内蒙古工业大学 Image defogging model generation method and device, and image defogging method and device
CN109993804A * 2019-03-22 2019-07-09 上海工程技术大学 Road scene defogging method based on a conditional generative adversarial network
CN110288550A * 2019-06-28 2019-09-27 中国人民解放军火箭军工程大学 Single image defogging method based on a prior-knowledge-guided conditional generative adversarial network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
贾绪仲, 文志强: "A defogging method based on a conditional generative adversarial network" (一种基于条件生成对抗网络的去雾方法), 信息与电脑, no. 9, pages 60-62 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658051A * 2021-06-25 2021-11-16 南京邮电大学 Image defogging method and system based on a cycle generative adversarial network
CN113658051B * 2021-06-25 2023-10-13 南京邮电大学 Image defogging method and system based on a cycle generative adversarial network
CN113362251A * 2021-06-27 2021-09-07 东南大学 Adversarial-network image defogging method based on dual discriminators and an improved loss function
CN113362251B * 2021-06-27 2024-03-26 东南大学 Adversarial-network image defogging method based on dual discriminators and an improved loss function
CN116721403A (en) * 2023-06-19 2023-09-08 山东高速集团有限公司 Road traffic sign detection method
CN116645298A (en) * 2023-07-26 2023-08-25 广东电网有限责任公司珠海供电局 Defogging method and device for video monitoring image of overhead transmission line
CN116645298B (en) * 2023-07-26 2024-01-26 广东电网有限责任公司珠海供电局 Defogging method and device for video monitoring image of overhead transmission line

Also Published As

Publication number Publication date
CN111709888B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN111709888A (en) Aerial image defogging method based on improved generation countermeasure network
CN108961198B (en) Underwater image synthesis method of multi-grid generation countermeasure network and application thereof
CN110992275A (en) Refined single image rain removing method based on generation countermeasure network
CN109447917B (en) Remote sensing image haze eliminating method based on content, characteristics and multi-scale model
CN109993804A (en) A kind of road scene defogging method generating confrontation network based on condition
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
Zhao et al. An attention encoder-decoder network based on generative adversarial network for remote sensing image dehazing
CN114266977B (en) Multi-AUV underwater target identification method based on super-resolution selectable network
CN111986108A (en) Complex sea-air scene image defogging method based on generation countermeasure network
CN111242868B (en) Image enhancement method based on convolutional neural network in scotopic vision environment
CN111861896A (en) UUV-oriented underwater image color compensation and recovery method
CN112070688A (en) Single image defogging method for generating countermeasure network based on context guidance
CN112419163B (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN116596792A (en) Inland river foggy scene recovery method, system and equipment for intelligent ship
Babu et al. An efficient image dahazing using Googlenet based convolution neural networks
CN115272072A (en) Underwater image super-resolution method based on multi-feature image fusion
CN113487509B (en) Remote sensing image fog removal method based on pixel clustering and transmissivity fusion
CN113920476A (en) Image identification method and system based on combination of segmentation and color
CN114119694A (en) Improved U-Net based self-supervision monocular depth estimation algorithm
CN117451716A (en) Industrial product surface defect detection method
CN117152642A (en) Ecological environment supervision system and method based on unmanned aerial vehicle
CN117036182A (en) Defogging method and system for single image
CN116452450A (en) Polarized image defogging method based on 3D convolution
CN115641271A (en) Lightweight image defogging method based on cross-stage local connection
CN113724156A (en) Generation countermeasure network defogging method and system combined with atmospheric scattering model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant