CN115330623A - Image defogging model construction method and system based on generative adversarial network

Image defogging model construction method and system based on generative adversarial network

Info

Publication number
CN115330623A
CN115330623A
Authority
CN
China
Prior art keywords
image
defogging
defogged
network
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210977715.7A
Other languages
Chinese (zh)
Inventor
马瑞强 (Ma Ruiqiang)
邢红梅 (Xing Hongmei)
关玉欣 (Guan Yuxin)
王拴乐 (Wang Shuanle)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia University of Technology
Original Assignee
Inner Mongolia University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University of Technology
Priority to CN202210977715.7A
Publication of CN115330623A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a system and method for constructing an image defogging model based on a generative adversarial network (GAN). The system comprises at least a second defogging module and a GAN training module. The second defogging module is configured to: perform prior defogging of a foggy image to obtain a verification defogged image and send it to the GAN training module. The GAN training module is configured to: obtain, via a generative adversarial mapping network, the correlated distribution law of the feature values of the prior defogged image and the haze-free image by having the RGB histogram of the prior defogged image learn the RGB histogram of the haze-free image, and convert the distribution law into a pre-verified defogging model. Compared with the color-shift phenomenon of the prior art, the invention trains the RGB information of the DCP-defogged image adversarially against the RGB information of the haze-free image, effectively solving the bright-region defogging distortion that DCP alone cannot avoid.

Description

Image defogging model construction method and system based on generative adversarial network
This application is a divisional application of the invention patent application No. 201911422944.7, filed December 31, 2019, entitled "Single image defogging enhancement method and system based on generative adversarial network".
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and a system for constructing an image defogging model based on a generative adversarial network.
Background
Fog is a natural phenomenon that blurs scenes, reduces visibility, and alters color. Owing to the many factors that degrade the imaging quality of vision systems in real life, images acquired by imaging devices suffer a certain degree of degradation. Notably, the haze weather that has occurred frequently across the country in recent years seriously affects social activities and the normal operation of industrial production: it poses safety problems for systems such as the widely deployed video surveillance used to lock onto and track criminals, outdoor computer vision monitoring, remote sensing, and flight navigation, and creates major safety hazards for applications that depend on clear imagery, especially autonomous driving. Image blurring attenuates scene features to varying degrees, and the robustness and reliability of electronic equipment systems working in such environments drop sharply. Improving high-quality defogging algorithms for fog-degraded images, together with scientific evaluation systems for them, therefore remains a key topic and research focus of current image processing and computer vision research. Image defogging (dehazing) removes the interference of haze information in an image by some means and recovers the image's color, contrast, and scene detail, so as to obtain a high-quality image with a satisfactory visual effect and more effective image information. It has great engineering value for reducing the dependence of outdoor imaging equipment in transportation, video surveillance, and navigation systems on favorable weather, and for improving the reliability and stability of the related systems.
Traditional defogging algorithms fall into two main classes: image restoration algorithms based on the atmospheric scattering model, and algorithms based on image enhancement theory. The mainstream defogging algorithms today are built on the atmospheric scattering model, among which the most widely used is the dark channel prior (DCP) defogging algorithm. The DCP algorithm meets the requirements of many fields, but because a foggy image lacks effective prior knowledge, the optimal transmittance cannot be obtained, and a color shift appears during image restoration. To meet the needs of fields demanding high image quality, such as video surveillance for locking onto and tracking criminals, outdoor computer vision monitoring, remote sensing, flight navigation, and autonomous driving, how to further improve the quality of defogged images is a technical problem to be solved in the field.
Chinese patent publication No. CN106127702B discloses an image defogging method based on deep learning, used to remove fog interference in a foggy image and reduce the influence of fog on image quality. It includes: acquiring a training sample set and a test sample set; applying an HSL (hue, saturation, lightness) space transformation to the foggy images in the sample set, extracting local low-brightness features, and scaling and normalizing all feature components; finding the discrimination transmission map so that a deep discrimination neural network realizes adversarial training; training the feature components with a deep generative adversarial neural network, learning and establishing a mapping network between the foggy image and the transmittance; and running a defogging test on the test sample set with the trained deep generative network. The method addresses the insufficient prior information of earlier defogging algorithms.
However, that patent does not perceive the fog-attenuated features in the defogged image, which can lose some details and introduce color shifts, making it difficult to extract the original scene information from a fog image whose scene information has been weakened.
Furthermore, on the one hand there are differences in understanding among those skilled in the art; on the other hand, the inventor studied a large number of documents and patents in making the present invention, and space does not permit listing all of their details and contents. This by no means implies that the present invention lacks these prior art features; on the contrary, the present invention may be provided with all of the features of the prior art, and the applicant reserves the right to add related prior art to the background.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a single image defogging enhancement system based on a generative adversarial network (GAN), which comprises a sample data acquisition module, a GAN training module, and a first defogging module. The sample data acquisition module acquires, through a public image library and/or web crawler technology, a number of images suitable for the GAN training module to construct the defogging model, as sample data. The GAN training module extracts the feature values of the images and, via a generative adversarial mapping network, converts them into a defogging model that the first defogging module can use to restore the image to be defogged into an effective defogged image. The sample data acquisition module at least acquires verification defogged images of a number of foggy images verified beforehand by the second defogging module, together with a number of haze-free images, so that the GAN training module can obtain, based on the generative adversarial mapping network, the correlated distribution law of the feature values of the prior defogged image and the haze-free image by having the feature-value histogram of the prior defogged image learn that of the haze-free image, and then convert the distribution law into a pre-verified defogging model; the first defogging module embeds the pre-verified defogging model to restore the image to be defogged into an effective defogged image. For example, the feature value may be RGB.
According to a preferred embodiment, the second defogging module is configured to acquire the verification defogged image by prior defogging of the foggy image as follows. As in computer vision and computer graphics, a mapping model is configured for converting the foggy image a priori into the verification defogged image; the mapping model comprises at least a transmittance value and an atmospheric light value describing the mapping relation between the foggy image and the verification defogged image. Solving the atmospheric light value: select the minimum channel map, usable for obtaining a dark channel map, from the three RGB channels of the foggy image, and obtain the atmospheric light value from the dark channel map. Obtaining the transmittance value: determine the transmittance based on the dark channel prior theory. Then perform prior defogging of the foggy image based on the mapping model, the atmospheric light value, and the transmittance value to obtain the verification defogged image.
According to a preferred embodiment, the GAN training module compares the effective defogged image, restored from the image to be defogged via the pre-verified defogging model in the first defogging module, with the verification defogged image verified beforehand via the second defogging module, so as to obtain the correlated distribution law of the feature values of the effective defogged image and the verification defogged image; the GAN training module can thereby adaptively correct the defogging model based on the pre-verified effective defogged image and the pre-verified verification defogged image.
According to a preferred embodiment, the GAN training module comprises a generator and a discriminator. The generator constructs a generation network in which the prior defogged image mimics the haze-free image to generate an intermediate defogged image. The discriminator constructs a discrimination network, which computes a cost function based on the intermediate defogged image. If the computed cost is below a preset defogging threshold, the generation network is taken as the defogging model; otherwise the intermediate defogged image is fed back as the generator's input to continue learning, updating the generation network until the cost falls below the threshold, as sketched below.
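The following is a minimal PyTorch sketch of that loop, assuming RGB histogram inputs of shape (N, 3, 64) and a discriminator ending in a sigmoid; the optimizer settings, the threshold value, and the externally supplied G and D networks are illustrative assumptions rather than details from the patent.

```python
import torch
import torch.nn as nn

def train_gan(prior_hists, clear_hists, G, D, threshold=0.05, max_steps=10_000):
    """prior_hists / clear_hists: RGB histograms, tensors of shape (N, 3, 64).

    G maps histograms to histograms; D maps a histogram batch to (N, 1)
    probabilities (assumed to end in a sigmoid).
    """
    bce = nn.BCELoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    n = len(prior_hists)
    real, fake = torch.ones(n, 1), torch.zeros(n, 1)
    x = prior_hists
    for _ in range(max_steps):
        inter = G(x)  # intermediate defogged histogram
        # Discriminator: separate verified haze-free histograms from generated ones.
        loss_d = bce(D(clear_hists), real) + bce(D(inter.detach()), fake)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator cost, computed by the discrimination network.
        cost = bce(D(inter), real)
        opt_g.zero_grad(); cost.backward(); opt_g.step()
        if cost.item() < threshold:   # generation network becomes the defogging model
            break
        x = inter.detach()            # feed the intermediate image back as the next input
    return G
```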
According to a preferred embodiment, the first defogging module divides the image to be defogged into a number of small tiles, counts the pixel-value distribution of each tile to obtain its histogram statistics, inputs the statistics into the generation network to output the histogram corresponding to each tile's defogging result, accumulates and stitches the per-tile results back into the full image, and filters the stitched image with a guided filter to remove unnatural transitions at tile corners, thereby obtaining the effective defogged image.
According to a preferred embodiment, the generation network learns to establish a mapping network from the fog image to the transmission map and obtains a generated transmission map, and the discrimination network discriminates between the generated transmission map and the optimal transmission map. The generation network and the discrimination network satisfy

$$\max_{D}\; \mathbb{E}_{t^{*}\sim p}\big[\log D(t^{*})\big] + \mathbb{E}_{f}\big[\log\big(1-D(G(f))\big)\big]$$

$$\min_{G}\; \mathbb{E}_{f}\big[\log\big(1-D(G(f))\big)\big]$$

where t* is the optimal transmission map obtained from the training samples, used as the discrimination target during discrimination and satisfying the data distribution p; t = G(f) is the generated transmission map; and f is the input feature, i.e., the training feature extracted from the fog image sample. The discrimination network D serves to distinguish the discrimination transmission map from the generated one; the generation network G is trained so that D cannot correctly distinguish the generated map, placing G in adversarial training against D.
According to a preferred embodiment, the defogging enhancement system includes an evaluation module configured to: in response to receiving the effective defogged image output by the first defogging module, send an evaluation signal to the second defogging module, so that the second defogging module inputs a verification defogged image to the evaluation module; and evaluate the relative quality improvement of the effective defogged image with respect to the verification defogged image based on at least one preset evaluation index. When the relative quality improvement is greater than or equal to a set relative quality value, the first defogging module outputs the effective defogged image, and the GAN training module can adaptively update the defogging model based on the pre-verified effective defogged image and the pre-verified verification defogged image. When the relative quality improvement is below the set relative quality value, the GAN training module corrects the defogging model using at least one Gaussian regression model.
The invention also discloses a single image defogging enhancement method based on a generative adversarial network, applied to the above system.
Compared with the prior art, the invention has the following advantages. The RGB information of the DCP-defogged image is trained adversarially against the RGB information of the haze-free image, effectively solving the bright-region defogging distortion that DCP alone cannot avoid. To address the brightness distortion caused by DCP, an adversarial network is generated and, at the same time, a discrimination network evaluates the output to ensure that it looks like a real image. Meanwhile, the generation network first produces a target map. The target map is the most important part of this network, as it guides the generation network to focus on the hazy areas; it is produced by a recurrent network. The generation network then takes the input image and the target map as input to a designed auto-encoder. To obtain broader context information, multi-scale losses are employed on the decoder side of the auto-encoder; each loss compares the difference between a convolutional layer's output and the corresponding ground truth, the input to that convolutional layer containing the features of the decoder layer. In addition to these losses, a perceptual loss is used so that the final output of the auto-encoder attains a more comprehensive similarity to the ground truth. The final output is also the output of the generation network. After the generated image is obtained, the discrimination network checks whether it is authentic. In fact, the target hazy area is not given during the testing phase, so there is no local-area information for the discrimination network to attend to; to solve this problem, a guided discrimination network is used to point at the local target area. In general, introducing the target map into both the generation network and the discrimination network is a brand-new approach that can effectively realize image defogging.
The invention also provides a system for constructing an image defogging model based on a generative adversarial network, the system comprising at least a second defogging module and a GAN training module. The second defogging module is configured to: perform prior defogging of the foggy image to obtain a verification defogged image and send it to the GAN training module. The GAN training module is configured to: obtain, based on the generative adversarial mapping network, the correlated distribution law of the feature values of the prior defogged image and the haze-free image by having the RGB histogram of the prior defogged image learn the RGB histogram of the haze-free image, and convert the distribution law into a pre-verified defogging model.
Preferably, the GAN training module comprises a generator and a discriminator. The generator is used to construct a generation network in which the prior defogged image mimics the haze-free image to generate an intermediate defogged image. The discriminator is used to construct a discrimination network, which computes a cost function based on the intermediate defogged image. If the computed cost is below a preset defogging threshold, the generation network is taken as the defogging model; otherwise the intermediate defogged image is fed back as the generator's input to continue learning until the cost falls below the threshold, so as to update the generation network.
Preferably, the system further comprises a first defogging module which embeds the defogging model so as to restore the image to be defogged into an effective defogged image. The first defogging module inputs the histogram statistics of each small tile into the generation network to output the histogram corresponding to each tile's defogging result, accumulates and stitches the per-tile results into the full image, and filters the stitched image with a guided filter to remove unnatural transitions at tile corners, obtaining the effective defogged image; a schematic sketch of the guided filtering step follows.
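Below is a compact numpy sketch of a classical guided filter, applied here per channel with the original foggy image as the grayscale guide; the radius and regularization eps are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide: np.ndarray, src: np.ndarray, radius: int = 8,
                  eps: float = 1e-3) -> np.ndarray:
    """Edge-preserving smoothing of `src` steered by `guide` (2-D arrays in [0, 1])."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i ** 2
    a = cov_ip / (var_i + eps)          # local linear model: q = a * guide + b
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```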
Preferably, the GAN training module is further configured to: compare the pre-verified effective defogged image with the pre-verified verification defogged image, obtain the correlated distribution law of their feature values, and adaptively correct the defogging model based on the pre-verified effective defogged image and the pre-verified verification defogged image.
Preferably, the second defogging module is configured to acquire the verification defogged image by prior defogging of the foggy image as follows. As in computer vision and computer graphics, a mapping model is configured that can be used to convert the foggy image a priori into the verification defogged image; the mapping model comprises at least a transmittance value and an atmospheric light value describing the mapping relation between the foggy image and the verification defogged image. Obtaining the atmospheric light value: select the minimum channel map, usable for obtaining a dark channel map, from the three RGB channels of the foggy image, and obtain the atmospheric light value from the dark channel map. Obtaining the transmittance value: determine the transmittance based on the dark channel prior theory. Then perform prior defogging of the foggy image based on the mapping model, the atmospheric light value, and the transmittance value to obtain the verification defogged image.
Preferably, the system further comprises an evaluation module which, in response to receiving the effective defogged image output by the first defogging module, sends an evaluation signal to the second defogging module, so that the second defogging module inputs a verification defogged image to the evaluation module; and which evaluates the relative quality improvement of the effective defogged image with respect to the verification defogged image based on at least one preset evaluation index. When the relative quality improvement is greater than or equal to a set relative quality value, the first defogging module outputs the effective defogged image, and the GAN training module can adaptively update the defogging model based on the pre-verified effective defogged image and the pre-verified verification defogged image; when the relative quality improvement is below the set relative quality value, the GAN training module corrects the defogging model using at least one Gaussian regression model.
The invention also provides a method for constructing an image defogging model based on a generative adversarial network, comprising at least the following steps: performing prior defogging of the foggy image to obtain a verification defogged image; and obtaining, based on the generative adversarial mapping network, the correlated distribution law of the feature values of the prior defogged image and the haze-free image by having the RGB histogram of the prior defogged image learn the RGB histogram of the haze-free image, so as to convert the distribution law into a pre-verified defogging model.
Preferably, the method further comprises: constructing a generation network and a discrimination network, the generation network being used for the prior defogged image to mimic the haze-free image so as to generate an intermediate defogged image, and the discrimination network computing a cost function based on the intermediate defogged image; if the computed cost is below a preset defogging threshold, taking the generation network as the defogging model; otherwise feeding the intermediate defogged image back as the generator's input to continue learning until the cost falls below the threshold, so as to update the generation network.
Preferably, the method further comprises: inputting the histogram statistics into the generation network to output the histogram corresponding to each tile's defogging result, accumulating and stitching the per-tile results into the full image, and filtering the stitched image with a guided filter to remove unnatural transitions at tile corners to obtain the effective defogged image; the per-tile histogram matching is sketched below.
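A schematic numpy sketch of the per-tile histogram matching step: both histograms are normalized to unit sum, integrated into cumulative distributions, and a monotone lookup maps input bins to output bins. The 64-bin, 4-pixel-wide layout follows the embodiment; the function name and nearest-mass lookup are illustrative assumptions.

```python
import numpy as np

def match_tile_channel(tile_channel: np.ndarray, out_hist: np.ndarray) -> np.ndarray:
    """tile_channel: uint8 pixels of one tile and one channel;
    out_hist: the 64-bin histogram predicted by the generation network."""
    in_hist, _ = np.histogram(tile_channel, bins=64, range=(0, 256))
    cdf_in = np.cumsum(in_hist / in_hist.sum())        # cumulative distribution, sums to 1
    cdf_out = np.cumsum(out_hist / out_hist.sum())
    # For each input bin, pick the output bin with the nearest cumulative mass.
    mapping = np.searchsorted(cdf_out, cdf_in).clip(0, 63)
    bins = (tile_channel // 4).astype(int)             # 4-pixel-wide bins -> 64 bins
    return (mapping[bins] * 4).astype(np.uint8)        # back to pixel values
```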
Preferably, the method further comprises: comparing the pre-verified effective defogged image with the pre-verified verification defogged image, obtaining the correlated distribution law of their feature values, and adaptively correcting the defogging model based on the pre-verified effective defogged image and the pre-verified verification defogged image.
Drawings
Fig. 1 is a schematic diagram of the single image defogging enhancement system based on a generative adversarial network provided by the invention.
List of reference numerals
F: foggy image; 100: sample data acquisition module
DF1: prior defogged image; 200: generative adversarial network training module
NF: haze-free image; 300: first defogging module
DF2: effective defogged image; 400: second defogging module
WF: image to be defogged; 500: evaluation module
DF3: verification defogged image
Detailed Description
The invention is explained in detail below with reference to Fig. 1.
In order to facilitate understanding of the present invention, terms used in the art of the present invention are explained as follows.
Image defogging (dehazing): repairing an image containing haze so as to restore the original visibility and color of the scene to the greatest extent.
Generative adversarial network: a generative adversarial network comprises a generation network and a discrimination network.
Fog image: i.e., a haze image, an image taken in haze weather. Turbid media in the atmosphere, such as water droplets, smoke, fog, and dust particles, absorb and scatter the ambient (atmospheric) light and attenuate the intensity of the transmitted light, changing the light intensity received by the optical sensor. As a result, image contrast and dynamic range decrease, the image becomes unclear and blurred, detail information is lost, scene features are obscured, color fidelity drops, and a satisfactory visual effect cannot be achieved.
Dark channel prior algorithm: based on a statistical observation. For an image J, the dark channel is

$$J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r,g,b\}} J^{c}(y) \Big)$$

and the dark channel prior observation is that if $J^{\mathrm{dark}}(x) \to 0$, then J is a haze-free outdoor image. Here Ω(x) is a local window centered at x, and J^c is one color channel of J.
Adversarial network: a generative adversarial network (GAN) is a deep learning model, one of the most promising approaches of recent years for unsupervised learning on complex distributions.
RGB: the three primary colors; R stands for red, G for green, and B for blue.
In the present invention, a "module" is a processor with hardware, software, or a combination of hardware and software having the corresponding function. The main equipment used for training and learning in the invention is configured as: an Intel(R) Core(TM) i5-7300HQ CPU @ 2.5 GHz and an NVIDIA GeForce GTX 1050 Ti.
Example 1
This embodiment discloses a single image defogging enhancement system based on a generative adversarial network, comprising a sample data acquisition module 100, a GAN training module 200, and a first defogging module 300.
The sample data acquisition module 100 is configured to obtain sample data. It mainly acquires a number of images through a public image library and/or web crawler technology. The images include foggy images and haze-free images suitable for the GAN training module 200 to construct the defogging model. In this embodiment, the sample data acquisition module 100 obtains a plurality of foggy images F and haze-free images NF; the foggy images F are first verified by the second defogging module 400 to obtain the corresponding verification defogged images DF1.
The GAN training module 200 extracts the feature values of the images and, via the generative adversarial mapping network, converts the learned feature values into the defogging model. The defogging model can be used by the first defogging module 300 to restore the image to be defogged WF into the effective defogged image DF2. The GAN training module 200 obtains, based on the generative adversarial mapping network, the correlated distribution law of the feature values of the prior defogged image DF1 and the haze-free image NF by having the RGB histogram of DF1 learn the RGB histogram of NF, and then converts the distribution law into a pre-verified defogging model. The first defogging module 300 embeds the pre-verified defogging model to restore the image to be defogged WF into the effective defogged image DF2. The RGB information of the DCP-defogged image is trained adversarially against the RGB information of the haze-free image, effectively solving the bright-region defogging distortion that DCP alone cannot avoid. To address the brightness distortion caused by DCP, an adversarial network is generated and, at the same time, a discrimination network evaluates the output to ensure that it looks like a real image. Meanwhile, the generation network first produces a target map. The target map is the most important part of this network, as it guides the generation network toward the hazy area of interest; it is produced by a recurrent network. The generation network then takes the input image and the target map as input to a designed auto-encoder. To obtain broader context information, multi-scale losses are employed on the decoder side of the auto-encoder; each loss compares the difference between a convolutional layer's output and the corresponding ground truth, the input to that convolutional layer containing the features of the decoder layer. In addition to these losses, a perceptual loss is used so that the final output of the auto-encoder attains a more comprehensive similarity to the ground truth. The final output is also the output of the generation network. After the generated image is obtained, the discrimination network checks whether it is authentic. In fact, the target hazy area is not given during the testing phase, so there is no local-area information for the discrimination network to attend to. To address this problem, the target map is used to guide the discrimination network to point at the local target area. In general, introducing the target map into the generation network and the discrimination network can effectively realize image defogging.
Preferably, the GAN training module 200 includes a generator and a discriminator. The generator constructs a generation network in which the prior defogged image DF1 mimics the haze-free image NF to generate an intermediate defogged image. The discriminator constructs a discrimination network, which computes a cost function based on the intermediate defogged image; if the computed cost is below a preset defogging threshold, the generation network is taken as the defogging model, otherwise the intermediate defogged image is fed back as the generator's input to continue learning until the cost falls below the threshold, so as to update the generation network.

Preferably, the first defogging module 300 is configured as follows. The image is divided into a series of small tiles so that within each tile the fog concentration varies little. The pixel-value distribution within each tile is counted as a gray-level histogram for each of the R, G, and B channels. With histogram bins spaced 4 pixel values apart, there are 256 / 4 = 64 bins; with three channels, the statistics form a 64 x 1 x 3 matrix. The matrix is divided by its maximum value so that all input data lie in the range 0 to 1, which facilitates model learning. The histogram statistics of each tile are input into the trained network, and after a series of one-dimensional convolutions the network outputs the histogram of the corresponding tile in the predicted defogging result. The output histogram and the input histogram are each divided by the sum of their data so that their cumulative sums equal 1, and are converted by integration into cumulative distribution histograms for histogram matching. After histogram matching, the defogging result of each tile is obtained and the tiles are stitched into the full image. Because prediction is tile-wise, unnatural transitions remain between tiles at this point; they are removed by a guided filter, with the original foggy image serving as the guide map. Preferably, the tile size is 8 x 8 pixels.

Preferably, the network structure is divided into three parts: a feature extraction layer, a mapping layer, and a compression layer. The feature extraction layer consists of three layers: the first has 64 convolution kernels of size 3 x 1 x 3, the second 64 kernels of 3 x 1 x 64, and the third 32 kernels of 3 x 1 x 64; all kernels perform one-dimensional convolution. The input features are zero-padded at each layer so that all output features keep 64 in the first dimension. The mapping layer has 16 layers, each containing 32 convolution kernels of 3 x 1 x 32, and adopts a one-dimensional residual network structure: the output of each mapping layer is the sum of its input and its convolution result. The compression layer compresses the mapping layer's output to a size of 64 x 1 x 3 for the subsequent histogram matching of the three channels. A sketch of this network follows.
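A PyTorch sketch of the three-part one-dimensional convolutional network just described; the layer shapes follow the text, while the ReLU activations in the extraction layers are an assumption (the text does not name an activation).

```python
import torch
import torch.nn as nn

class HistogramDehazeNet(nn.Module):
    """Feature extraction -> 16 residual mapping layers -> compression.

    Input and output: RGB histograms of shape (N, 3, 64)."""

    def __init__(self):
        super().__init__()
        self.extract = nn.Sequential(                       # feature extraction layer
            nn.Conv1d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.mapping = nn.ModuleList(                       # 16 residual mapping layers
            nn.Conv1d(32, 32, kernel_size=3, padding=1) for _ in range(16)
        )
        self.compress = nn.Conv1d(32, 3, kernel_size=3, padding=1)  # -> (N, 3, 64)

    def forward(self, hist: torch.Tensor) -> torch.Tensor:
        x = self.extract(hist)
        for conv in self.mapping:
            x = x + conv(x)     # output of each mapping layer = input + convolution result
        return self.compress(x)

# e.g. HistogramDehazeNet()(torch.rand(1, 3, 64)) has shape (1, 3, 64)
```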
Preferably, the generation network learns to establish a mapping network from the fog image to the transmission map, obtaining a generated transmission map, and the discrimination network discriminates between the generated transmission map and the optimal transmission map. The generation network and the discrimination network satisfy

$$\max_{D}\; \mathbb{E}_{t^{*}\sim p}\big[\log D(t^{*})\big] + \mathbb{E}_{f}\big[\log\big(1-D(G(f))\big)\big]$$

$$\min_{G}\; \mathbb{E}_{f}\big[\log\big(1-D(G(f))\big)\big]$$

where t* is the optimal transmission map obtained from the training samples, used as the discrimination target during discrimination and satisfying the data distribution p; t = G(f) is the generated transmission map; and f is the input feature, i.e., the training feature extracted from the fog image sample.

The discrimination network D serves to distinguish the discrimination transmission map from the generated one; the generation network G is trained so that D cannot correctly distinguish the generated map, placing G in adversarial training against the discrimination network D.
The discrimination network mainly classifies the input transmission maps to obtain a correct discrimination probability. Since the transmittance has a certain correlation with the depth information of the image, and depth information varies gradually in a real scene, the variation of the transmittance in a real scene follows a certain law. For the generated transmission map, however, defogging may deviate, so the generated result may fail to satisfy the variation law of a real scene and cannot accurately reflect the light-transmission state of the fog scene. In view of this, a convolutional neural network is used to extract the features of the transmission map, obtaining features that represent its variation law; these serve as the criterion for judging a transmission map and thus determine the probability of distinguishing the two transmission maps. In convolutional-layer training, the choice of initial parameters directly affects training accuracy: parameter updating is related to the gradient change at each input node, and a saturated nonlinear model is easily obtained after training. To reduce the influence of the input and the initialization on parameter updating and to improve the training learning rate, a batch normalization layer, comprising normalization and the ReLU activation function, is introduced into the discrimination network. A convolutional network perceives the input information and extracts, layer by layer, the fog-related features in the transmission map, and the two transmission maps are distinguished through a fully connected layer. The transmittance in the fog scene of the generation network structure is related to the saturation, color difference, vector contrast, dark channel, and halo of the fog image, so these features are likewise extracted layer by layer with a convolutional network; since the extracted highly abstract features carry transmission-map information, the desired transmission map can be obtained through mapping and restoration.
The abstract features are decomposed with a strided convolutional network and mapped through a sigmoid activation function to obtain the transmission map of the scene. To counter the problem of the initial gradient and the loss function descending by too small an amplitude, a batch normalization layer is added to the network; and so that the transmission information reflects the actual scene more reliably, a spatial pooling layer is added to the generation network to extract the coarse transmission information and sparsify it, ensuring that the generation network yields the best transmission map.
Preferably, the GAN training module 200 compares the effective defogged image DF2, restored from the image to be defogged WF via the pre-verified defogging model in the first defogging module 300, with the verification defogged image DF3 verified beforehand via the second defogging module 400, to obtain the correlated distribution law of the feature values of DF2 and DF3. The GAN training module 200 can adaptively correct the defogging model based on the pre-verified effective defogged image DF2 and the pre-verified verification defogged image DF3. Preferably, the defogging enhancement system includes an evaluation module 500. The defogging model is corrected from two aspects: model accuracy and training-set screening. A two-layer combined Gaussian process regression model is provided to reduce the model error, and the data-driven model is optimized by screening the data set, the data-driven model used by the defogging algorithm being designed specifically. The preset evaluation index may be:
$$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}\right)$$
in the equation, PSNR reflects the ratio of the maximum possible power of a signal and the power of destructive noise that affects its accuracy of representation. The evaluation module 500 is configured as follows: in response to receiving the effective defogged image DF output by the first defogging module 300 2 In case of (2) sending an evaluation signal to the second defogging module 400, so that the second defogging module 400 inputs the verification defogged image DF to the evaluation module 500 3 . Use of PSNR for effective defogging of images DF 2 Relative to the verification defogged image DF 3 The quality of (2) improves the relative value. The quality improvement relative value may be PSNR DF2 And PSNR DF3 The difference of (a). In the case where the quality improvement relative value is greater than or equal to the set quality relative value, the first defogging module 300 outputs the effective defogged image DF 2 And generate effective defogged image DF that confrontation network training module 200 can adaptively base on prior verification 2 And a verified defogged image DF verified in advance 3 And updating the defogging model. And under the condition that the quality improvement relative value is smaller than the set quality relative value, the generation countermeasure network training module 200 corrects the defogging model by adopting at least one Gaussian regression model.
Example 2
This embodiment discloses a prior defogging system based on the dark channel. It may be a further improvement and/or supplement to embodiment 1, and repeated content is not described again. Preferred implementations of other embodiments may supplement this embodiment in whole and/or in part, as long as no conflict or contradiction results.
The second defogging module 400 is configured to perform prior defogging of the foggy image F to obtain the verification defogged image DF1 as follows. As in computer vision and computer graphics, a mapping model is configured that can be used to convert the foggy image F a priori into the verification defogged image DF1; the mapping model comprises at least a transmittance value and an atmospheric light value to describe the mapping relation between F and DF1. Calculating the atmospheric light value: select the minimum channel map, usable for acquiring a dark channel map, from the three RGB channels of the foggy image F, and obtain the atmospheric light value from the dark channel map. Obtaining the transmittance value: determine the transmittance based on the dark channel prior theory. Then perform prior defogging of the foggy image F based on the mapping model, the atmospheric light value, and the transmittance value to obtain the verification defogged image DF1.
Preferably, the second defogging module 400 is a dark-channel-prior-based defogging module that performs prior defogging according to at least the following steps:
S1: take the minimum of the R, G, and B channels at each pixel of the foggy image, then apply minimum-value filtering to obtain the dark channel map of the foggy image, specifically:

$$I^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r,g,b\}} I^{c}(y) \Big)$$

where I denotes the hazy image, i.e., F; I^c is one of the three color channels R, G, B of I; I^c(y) is the value of that color channel at pixel y; Ω(x) is the filtering region centered on pixel x; and I^dark is the dark channel map of I.
Determine the transmittance based on the dark channel prior theory. Applying the per-channel minimum and the minimum filter to the atmospheric scattering model normalized by the atmospheric light A gives

$$\min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}} = t(x) \cdot \min_{y \in \Omega(x)} \min_{c} \frac{J^{c}(y)}{A^{c}} + \big(1 - t(x)\big)$$

By the dark channel prior theory, the dark channel of the haze-free scene radiance J tends to zero:

$$\min_{y \in \Omega(x)} \min_{c} \frac{J^{c}(y)}{A^{c}} \to 0$$

so that

$$t(x) = 1 - \min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}}$$

which is the transmittance value. Introducing a defogging factor ω, taken between 0 and 1, so that a small amount of haze is retained for a natural appearance:

$$t(x) = 1 - \omega \cdot \min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}}$$
S2: take the positions of the top 0.1% of pixels by value in the dark channel map, then average the pixel values at the corresponding positions in the foggy image as the atmospheric light value A of the foggy image;
S3: based on the atmospheric scattering model, i.e., the mapping model

$$I(x) = J(x)\,t(x) + A\big(1 - t(x)\big)$$

substitute the atmospheric light value and the transmittance value into the mapping model and solve for J(x) to obtain the prior defogged image DF1.
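Steps S1 to S3 can be sketched end to end in numpy as below; the window size, the transmittance lower bound t0 (a common safeguard against division by near-zero values), and the output clipping are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_defog(I: np.ndarray, omega: float = 0.95, window: int = 15,
              t0: float = 0.1) -> np.ndarray:
    """I: foggy image in [0, 1], shape (H, W, 3); returns the prior defogged J."""
    dark = minimum_filter(I.min(axis=2), size=window)           # S1: dark channel map
    k = max(1, int(dark.size * 0.001))                          # S2: top 0.1% of pixels
    idx = np.unravel_index(np.argsort(dark, axis=None)[-k:], dark.shape)
    A = I[idx].mean(axis=0)                                     # atmospheric light, per channel
    norm_dark = minimum_filter((I / A).min(axis=2), size=window)
    t = np.clip(1.0 - omega * norm_dark, t0, 1.0)[..., None]    # transmittance with factor omega
    return np.clip((I - A) / t + A, 0.0, 1.0)                   # S3: invert the mapping model
```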
The prior defogged image obtained from the dark channel prior has at least the following defects: visual artifacts such as halos and blockiness arise because the transmittance does not everywhere agree with the depth information. For unreliable transmission estimates, filtering-based refinements such as soft matting and guided filtering are usually adopted so that the edge information of the transmission map comes closer to the original image; however, such filtering-based refinement has shortcomings of its own, for instance the transmittance may still violate the variation law of the depth information, causing estimation deviation and weakening the contrast of the haze-free image.
Example 3
This embodiment may be a further improvement and/or supplement to embodiment 1, and repeated content is not described again. Preferred implementations of other embodiments may supplement this embodiment in whole and/or in part, as long as no conflict or contradiction results.
This embodiment discloses a single image defogging enhancement method based on a generative adversarial network. The method comprises the following steps:
the sample data acquisition module 100 acquires, through a public image library and/or web crawler technology, a number of images suitable for the GAN training module 200 to construct the defogging model, as sample data;
the generate confrontation network training module 200 extracts feature values of several images and maps the feature values by generating the confrontation mapping networkThe image WF converted into the effective defogged image DF can be used by the first defogging module 300 to restore the image WF to be defogged into the effective defogged image 2 The defogging model of (1);
the generate confrontation network training module 200 can thus generate the confrontation mapping network to prior defogging image DF based on the generation of the confrontation mapping network 1 The prior defogging image DF is obtained in a way that the RGB histogram of the haze-free image NF is learned by the RGB histogram 1 And the characteristic value of the defogged image NF, so that the distribution rule is converted into a defogging model which is verified in advance, and the first defogging module 300 embeds the defogging model which is verified in advance into the image WF to be defogged and recovers the defogged image WF into an effective defogging image DF 2
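Putting these steps together, a schematic sketch of the flow, building on the sketches in the earlier embodiments (`dcp_defog`, `train_gan`, `HistogramDehazeNet`); `to_histograms` and `make_discriminator` are hypothetical stand-ins, not names from the patent:

```python
def build_defogging_model(foggy_images, haze_free_images):
    # Second defogging module 400: prior (DCP) defogging yields verification images.
    prior = [dcp_defog(f) for f in foggy_images]
    # GAN training module 200: RGB histograms of the prior defogged images
    # adversarially learn those of the haze-free samples.
    G = train_gan(to_histograms(prior), to_histograms(haze_free_images),
                  HistogramDehazeNet(), make_discriminator())
    return G  # the pre-verified defogging model, embedded by first defogging module 300
```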
Preferably, the second defogging module 400 is configured to acquire the verification defogged image DF1 by prior defogging of the foggy image F as follows:

as in computer vision and computer graphics, a mapping model is configured that can be used to convert the foggy image F a priori into the verification defogged image DF1, the mapping model comprising at least a transmittance value and an atmospheric light value describing the mapping relation between the foggy image F and the verification defogged image DF1;

calculating the atmospheric light value: select the minimum channel map, usable for obtaining a dark channel map, from the three RGB channels of the foggy image F, and obtain the atmospheric light value from the dark channel map;

obtaining the transmittance value: determine the transmittance based on the dark channel prior theory;

and perform prior defogging of the foggy image F based on the mapping model, the atmospheric light value, and the transmittance value to obtain the verification defogged image DF1.
Preferably, the GAN training module 200 compares the effective defogged image DF2, restored from the image to be defogged WF via the pre-verified defogging model in the first defogging module 300, with the verification defogged image DF3 verified beforehand via the second defogging module 400, to obtain the correlated distribution law of the feature values of DF2 and DF3, so that the GAN training module 200 can adaptively correct the defogging model based on the pre-verified effective defogged image DF2 and the pre-verified verification defogged image DF3.
It should be noted that the above-mentioned embodiments are exemplary, and that those skilled in the art, having the benefit of this disclosure, may devise various solutions that fall within the scope of this disclosure and of the invention. It should be understood that the specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (10)

1. A system for constructing an image defogging model based on a generative adversarial network, characterized by comprising at least a second defogging module and a generative adversarial network training module, wherein
the second defogging module is configured to: perform prior defogging of the foggy image to obtain a verification defogged image and send it to the generative adversarial network training module;
the generative adversarial network training module is configured to: obtain, based on the generative adversarial mapping network, the correlated distribution law of the feature values of the prior defogged image and the haze-free image by having the RGB histogram of the prior defogged image learn the RGB histogram of the haze-free image, so as to convert the distribution law into a pre-verified defogging model.
2. The system for constructing an image defogging model based on a generative adversarial network according to claim 1, wherein the generative adversarial network training module comprises a generator and a discriminator,
the generator being configured to construct a generation network in which the prior defogged image mimics the haze-free image to generate an intermediate defogged image;
the discriminator being configured to construct a discrimination network, which computes a cost function based on the intermediate defogged image; if the computed cost is below a preset defogging threshold, the generation network is taken as the defogging model; otherwise the intermediate defogged image is fed back as the generator's input to continue learning until the cost falls below the threshold, so as to update the generation network.
3. The system for constructing an image defogging model based on a generation countermeasure network according to claim 2, further comprising a first defogging module, wherein
the defogging model is embedded in the first defogging module so as to restore an image to be defogged into an effective defogged image;
the first defogging module inputs the histogram statistics into the generation network to output a histogram corresponding to the defogging result of each sub-image, accumulates and splices the histograms corresponding to the sub-images into a large image, and filters the large image with a guided filter so as to remove unnatural transitions at the corners, thereby obtaining the effective defogged image.
4. The system according to claim 3, wherein the generation countermeasure network training module is further configured to:
compare the previously verified effective defogged image with the previously verified defogged image, acquire a correlation distribution rule of the feature values of the effective defogged image and the verification defogged image, and adaptively correct the defogging model based on the previously verified effective defogged image and the previously verified defogged image.
5. The system according to any one of claims 1 to 4, wherein the second defogging module is configured to acquire the verification defogged image by performing prior defogging on the foggy image in the following manner:
a mapping model, as used in computer vision and computer graphics, is configured to convert the foggy image a priori into the verification defogged image, the mapping model comprising at least a transmittance value and an atmospheric light value that describe the mapping relationship between the foggy image and the verification defogged image;
calculating the atmospheric light value: the minimum channel map is selected from the RGB three channels of the foggy image to obtain a dark channel map, and the atmospheric light value is obtained based on the dark channel map;
obtaining the transmittance value: the transmittance value is determined based on the dark channel prior theory;
and performing prior defogging on the foggy image based on the mapping model, the atmospheric light value and the transmittance value to obtain the verification defogged image.
6. The system for constructing an image defogging model according to claim 5, further comprising an evaluation module, wherein
the evaluation module sends an evaluation signal to the second defogging module upon receiving the effective defogged image output by the first defogging module, so that the second defogging module inputs the verification defogged image to the evaluation module;
the evaluation module evaluates a quality improvement relative value of the effective defogged image with respect to the verification defogged image based on at least one preset evaluation index;
when the quality improvement relative value is greater than or equal to a set quality relative value, the first defogging module outputs the effective defogged image, and the generation countermeasure network training module adaptively updates the defogging model based on the previously verified effective defogged image and the previously verified defogged image;
and when the quality improvement relative value is smaller than the set quality relative value, the generation countermeasure network training module corrects the defogging model using at least one Gaussian regression model.
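As one possible reading of the evaluation step in claim 6, the sketch below scores both images with two common no-reference indices (information entropy and mean gradient) and flags the Gaussian-regression correction branch when the relative gain falls short; the choice of indices, the weighting, and the threshold are illustrative assumptions, since the claim does not fix its "preset evaluation index":

```python
# Hedged sketch of the evaluation module: a no-reference quality score and a
# relative-improvement test. Indices, weights, and threshold are assumptions.
import numpy as np
import cv2

def quality_score(img_bgr_uint8):
    gray = cv2.cvtColor(img_bgr_uint8, cv2.COLOR_BGR2GRAY)
    hist = np.histogram(gray, bins=256, range=(0, 256))[0] / gray.size
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    mean_grad = np.mean(np.sqrt(gx**2 + gy**2))
    return entropy + 0.01 * mean_grad   # arbitrary weighting for illustration

def needs_correction(effective, verification, min_relative_gain=0.05):
    """True when the effective image improves on the verification image by
    less than the set relative value, triggering the correction branch."""
    base = quality_score(verification)
    gain = (quality_score(effective) - base) / max(base, 1e-6)
    return gain < min_relative_gain
```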
7. A method for constructing an image defogging model based on a generation countermeasure network, the method at least comprising:
performing prior defogging on a foggy image to obtain a verification defogged image;
acquiring a correlation distribution rule of the feature values of the prior defogged image and the haze-free image by having a generation countermeasure mapping network learn the RGB histogram of the haze-free image from the RGB histogram of the prior defogged image, so as to convert the distribution rule into a previously verified defogging model.
8. The method for constructing an image defogging model based on a generation countermeasure network according to claim 7, further comprising:
constructing a generation network and a discrimination network, wherein
the generation network is used for the prior defogged image to imitate the haze-free image so as to generate an intermediate defogged image;
the discrimination network calculates a cost function based on the intermediate defogged image; if the cost function result is smaller than a preset defogging threshold, the generation network is taken as the defogging model; otherwise, the intermediate defogged image is taken as the input image of the generator for continued learning until the cost function result is smaller than the preset defogging threshold, so as to update the generation network.
9. The method for constructing an image defogging model based on a generation countermeasure network according to claim 8, further comprising:
inputting the histogram statistics into the generation network to output a histogram corresponding to the defogging result of each sub-image, accumulating and splicing the histograms corresponding to the sub-images into a large image, and filtering the large image with a guided filter so as to remove unnatural transitions at the corners, thereby obtaining the effective defogged image.
10. The method for constructing an image defogging model based on a generation countermeasure network according to any one of claims 7 to 9, further comprising:
comparing the previously verified effective defogged image with the previously verified defogged image, acquiring a correlation distribution rule of the feature values of the effective defogged image and the verification defogged image, and adaptively correcting the defogging model based on the previously verified effective defogged image and the previously verified defogged image.
CN202210977715.7A; priority date: 2019-12-31; filing date: 2019-12-31; title: Image defogging model construction method and system based on generation countermeasure network; status: Pending; publication: CN115330623A (en)

Priority Applications (1)

CN202210977715.7A (published as CN115330623A); priority date: 2019-12-31; filing date: 2019-12-31; title: Image defogging model construction method and system based on generation countermeasure network

Applications Claiming Priority (2)

CN201911422944.7A (published as CN111179202B); priority date: 2019-12-31; filing date: 2019-12-31; title: Single image defogging enhancement method and system based on generation countermeasure network
CN202210977715.7A (published as CN115330623A); priority date: 2019-12-31; filing date: 2019-12-31; title: Image defogging model construction method and system based on generation countermeasure network

Related Parent Applications (1)

CN201911422944.7A (parent, published as CN111179202B); priority date: 2019-12-31; filing date: 2019-12-31; title: Single image defogging enhancement method and system based on generation countermeasure network

Publications (1)

CN115330623A; publication date: 2022-11-11

Family

ID=70650662

Family Applications (2)

CN202210977715.7A (CN115330623A, pending); priority date: 2019-12-31; filing date: 2019-12-31; title: Image defogging model construction method and system based on generation countermeasure network
CN201911422944.7A (CN111179202B, active); priority date: 2019-12-31; filing date: 2019-12-31; title: Single image defogging enhancement method and system based on generation countermeasure network

Family Applications After (1)

CN201911422944.7A (CN111179202B, active); priority date: 2019-12-31; filing date: 2019-12-31; title: Single image defogging enhancement method and system based on generation countermeasure network

Country Status (1)

Country Link
CN (2) CN115330623A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102214B (en) * 2020-09-14 2023-11-14 山东浪潮科学研究院有限公司 Image defogging method based on histogram and neural network
WO2022067668A1 (en) * 2020-09-30 2022-04-07 中国科学院深圳先进技术研究院 Fire detection method and system based on video image target detection, and terminal and storage medium
CN112614070B (en) * 2020-12-28 2023-05-30 南京信息工程大学 defogNet-based single image defogging method
CN113627287B (en) * 2021-07-27 2023-10-27 上海交通大学 Water surface target detection method and system under foggy condition based on diffusion information
CN113744159B (en) * 2021-09-09 2023-10-24 青海大学 Defogging method and device for remote sensing image and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7242185B2 (en) * 2018-01-10 2023-03-20 キヤノン株式会社 Image processing method, image processing apparatus, image processing program, and storage medium
CN108665432A (en) * 2018-05-18 2018-10-16 百年金海科技有限公司 A kind of single image to the fog method based on generation confrontation network
CN109993804A (en) * 2019-03-22 2019-07-09 上海工程技术大学 A kind of road scene defogging method generating confrontation network based on condition
CN110288550B * 2019-06-28 2020-04-24 中国人民解放军火箭军工程大学 Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition

Also Published As

CN111179202A; publication date: 2020-05-19
CN111179202B; publication date: 2022-09-02


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination