CN110738622A - Lightweight neural network single image defogging method based on multi-scale convolution - Google Patents

Lightweight neural network single image defogging method based on multi-scale convolution

Info

Publication number
CN110738622A
Authority
CN
China
Prior art keywords
block
sub
convolution
defogging
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910987873.9A
Other languages
Chinese (zh)
Inventor
张笑钦
唐贵英
赵丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN201910987873.9A priority Critical patent/CN110738622A/en
Publication of CN110738622A publication Critical patent/CN110738622A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lightweight neural network single-image defogging method based on multi-scale convolution, which comprises the following steps: (1) dividing a data set into a training set, a verification set and a test set; (2) training the proposed network model by using the training set and the verification set; (3) testing the trained model by using the test set; and (4) evaluating the model by adopting measurement standards.

Description

Lightweight neural network single image defogging method based on multi-scale convolution
Technical Field
The invention relates to the technical field of image processing, in particular to a lightweight neural network single-image defogging method based on multi-scale convolution.
Background
Fog is a natural phenomenon that not only degrades people's visual experience but also hinders further processing of captured images, seriously affecting the development of other applications in the image processing field, such as image classification and target tracking. Pictures captured in a foggy environment generally suffer from low contrast, vignetting, color shift and similar problems, and these phenomena limit the visibility of the pictures.
Research on defogging has a long history; an imaging model of the fog map was proposed as early as 1924:
I = J·t + A·(1 - t)
t(x) = e^(-β·d(x))
where I is the observed fog map, J is the clear image to be restored, t and A are the air transmittance and the global atmospheric light, respectively, β is the atmospheric scattering coefficient, and d is the scene depth.
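For concreteness, once the transmittance t and the atmospheric light A are known, the clear image can be recovered by inverting this model. The following NumPy sketch illustrates that inversion; the function name, the lower bound on t and the assumption of values in [0, 1] are illustrative choices and are not part of the patent.

import numpy as np

def recover_clear_image(I, t, A, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to estimate the clear image J.

    I is an H x W x 3 hazy image in [0, 1], t is an H x W transmission map,
    and A is a length-3 global atmospheric light vector.
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # clamp t to avoid division by very small values
    J = (I - A * (1.0 - t)) / t             # solve the scattering model for J
    return np.clip(J, 0.0, 1.0)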
However, the model is a severely ill-posed problem. In order to obtain enough information to solve it, defogging research before 2008 concentrated on defogging from multiple pictures, until Fattal and Tan each proposed a single-picture defogging method in 2008. After these milestone works, single-picture defogging became the dominant line of research. In 2009, He et al. proposed a defogging method based on the Dark Channel Prior (DCP). The prior is obtained from a statistical observation: in most non-sky local regions, at least one color channel always contains some pixels with very low values. This simple prior proved very effective and triggered a series of DCP-based refinements, such as refining the transmittance t with median filtering, bilateral filtering or guided filtering. New priors have also been continuously proposed, such as the color attenuation prior and the haze-line prior.
Some of the above methods have achieved good results, but they suffer to varying degrees from over-enhancement, over-saturation, edge artifacts, blocking artifacts and the like, so better and more efficient methods are still needed to supplement the existing defogging methods.
Since 2016, methods based on deep learning have emerged. The earliest method of defogging with a neural network structure was a multi-scale convolutional neural network: a coarse-scale module first obtains a preliminary air transmittance t, a fine-scale module then refines it, and the defogged picture is finally obtained from the estimated t through the imaging model of the fog map. DehazeNet was proposed later; it fuses traditional prior models into convolutional layers of different forms and obtains better experimental results. The subsequent end-to-end AOD-Net (All-in-One Dehazing Network) was the first to combine the air transmittance t and the global atmospheric light into a single parameter and to obtain the defogged result directly through a variant of the fog-map imaging model; estimating one parameter instead of two separately avoids the accumulation of estimation errors. More recent work has introduced larger fusion- and filtering-based network structures to further improve the restored images.
The difficulty of the above defogging methods lies mostly in accurately estimating the transmittance t and the global atmospheric light A. Moreover, the size of the network models has grown steadily along with the improvement of the defogging effect, which makes the training and application of the models computationally demanding.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a lightweight neural network single-image defogging method based on multi-scale convolution, which achieves a defogging effect with a higher peak signal-to-noise ratio while defogging a single image without estimating the transmittance or the global atmospheric light.
In order to achieve this purpose, the invention provides the following technical scheme. A lightweight neural network single-image defogging method based on multi-scale convolution comprises the following steps:
(1) dividing a data set into a training set, a verification set and a test set;
(2) training the proposed network model by utilizing a training set and a verification set;
(3) testing the trained model by using the test set;
(4) the model is evaluated using metrics.
Preferably, in step (1), part1 of the data set RESIDE-beta is selected as the data set used in the training process, wherein 90% of part1 is used as the training set and 10% as the verification set; the HSTS of the standard version of the RESIDE data set is selected as the synthetic-data test set; and a data set formed of real scene pictures is taken as the test set for qualitative comparison.
Preferably, in step (2), pictures in the training set are input into the network model to be trained; a superposed combination of four groups of feature maps is first obtained through the first sub-block of the multi-scale block; the feature maps produced by the four branches of the first sub-block are concatenated and used as the input of the second sub-block of the multi-scale block; the feature maps output by the second sub-block successively pass through three convolutional layers to obtain three-channel features; a global skip connection is set to retain the original picture information; and finally a clear picture corresponding to the fog picture is obtained through an activation function.
Preferably, the first sub-block comprises four branches, which are arranged as follows:
branch one: the input picture passes through a 1 × 1 convolution;
branch two: the input picture is first processed by 3 × 3 average pooling and then by a 1 × 1 convolution;
branch three: the input picture passes through a 1 × 1 convolution followed by a 3 × 3 convolution;
branch four: the input picture passes successively through 1 × 1, 3 × 3 and 3 × 3 convolutions.
Preferably, the second sub-block comprises four branches, which are arranged as follows:
branch one: a 1 × 1 convolution;
branch two: average pooling followed by a 1 × 1 convolution;
branch three: a 1 × 1 convolution followed by a pair of 1 × 7 and 7 × 1 convolutions;
branch four: a 1 × 1 convolution followed by two pairs of 1 × 7 and 7 × 1 convolutions.
Preferably, all convolutional layers are followed by a regularization operation and an activation function; during training, the parameters of the convolutional layers are updated by back-propagating a loss function with the Adam algorithm; the loss function is the Euclidean distance between the clear image and the output image of the network, and the variation of the loss function on the verification set is monitored.
Preferably, in step (3), the test-set pictures to be tested are input one by one into the trained network model, and the corresponding defogged pictures are obtained directly.
Preferably, in step (4), the peak signal-to-noise ratio and the structural similarity between each obtained defogged picture and the clear picture used to synthesize its fog picture are computed on several data sets to evaluate the defogging effect.
Compared with the prior art, the invention has the following advantages. A multi-scale design is adopted to improve the defogging performance; at the same time, the efficiency problem caused by the model size is fully considered and the model is compressed as much as possible. A fog image can be mapped directly to the corresponding clear image through the network model of the invention. In order to fully retain the information of the original image, specially designed multi-scale blocks are used, which preserve the contour and detail information of the image at each scale. Because the multi-scale design introduces more convolutional layers, large convolutional layers are replaced by several small convolutional layers to reduce the model parameters and the model size, and this operation has a remarkable effect on reducing the number of parameters. In addition, in order to further preserve original-image information that might be lost inside the multi-scale blocks, a global skip connection is added between the head and the tail of the model, and this adjustment effectively improves the effect of model training. The invention therefore provides a novel lightweight multi-scale neural network that directly restores a clear image with a higher peak signal-to-noise ratio from a fog image, without estimating the transmittance or the global atmospheric light and with a reduced model size.
The invention is further described with reference to the figures and the specific embodiments of the specification.
Drawings
FIG. 1 is an overall framework of a defogging system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a network model according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the specific configuration of the first sub-block of the multi-scale block of the network model according to the embodiment of the present invention;
FIG. 4 is a diagram illustrating a specific configuration of a second sub-block of a network model multi-scale block according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the variation of the loss function during the first epoch with and without the global skip connection, according to the embodiment of the present invention;
FIG. 6 is a diagram illustrating the variation of the average loss function over each of the first 20 epochs with and without the global skip connection, according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only for the purpose of explaining the present invention and are not intended to limit it.
Referring to fig. 1 to 6, the invention discloses a lightweight neural network single-image defogging method based on multi-scale convolution, which comprises the following steps:
(1) dividing a data set into a training set, a verification set and a test set;
(2) training the proposed network model by utilizing a training set and a verification set;
(3) testing the trained model by using the test set;
(4) the model is evaluated using metrics.
Wherein, step (1) specifically includes:
Part1 of the data set RESIDE-beta is selected as the data set used in the training process, wherein 90% of part1 is used as the training set and 10% as the verification set. The HSTS (Hybrid Subjective Testing Set) of the standard version of the RESIDE data set is selected as the synthetic-data test set. In addition, a data set formed of real scene pictures proposed by Fattal, commonly used in the defogging field, is used as the test set for qualitative comparison.
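A minimal sketch of this split is shown below; the directory layout, file extension and random seed are assumptions, since only the 90%/10% ratio is fixed by the description.

import random
from pathlib import Path

def split_reside_part1(image_dir, val_ratio=0.1, seed=0):
    """Randomly split the RESIDE-beta part1 hazy images into training and verification lists."""
    paths = sorted(Path(image_dir).glob("*.png"))    # assumed file layout and extension
    random.Random(seed).shuffle(paths)
    n_val = int(len(paths) * val_ratio)
    return paths[n_val:], paths[:n_val]              # 90% for training, 10% for verification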
The step (2) specifically comprises:
Pictures in the training set are input into the network model to be trained. First, a superposed combination of four groups of feature maps is obtained through the first sub-block of the multi-scale block. The first sub-block comprises four branches, which are arranged as follows (a sketch of this sub-block is given after the branch list):
branch one: the input picture passes through a 1 × 1 convolution;
branch two: the input picture is first processed by 3 × 3 average pooling and then by a 1 × 1 convolution;
branch three: the input picture passes through a 1 × 1 convolution followed by a 3 × 3 convolution;
branch four: the input picture passes successively through 1 × 1, 3 × 3 and 3 × 3 convolutions.
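The following PyTorch sketch shows one possible realization of the first sub-block. The channel widths, the use of batch normalization with ReLU as the regularization operation and activation function, and the padding choices that keep the spatial size unchanged are assumptions; the description fixes only the branch structure.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel, padding):
    """A convolution followed by a regularization operation and an activation function."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, padding=padding),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class FirstSubBlock(nn.Module):
    """Four parallel branches whose output feature maps are concatenated along the channel axis."""
    def __init__(self, in_ch=3, ch=16):
        super().__init__()
        self.branch1 = conv_block(in_ch, ch, 1, 0)     # branch one: 1x1 convolution
        self.branch2 = nn.Sequential(                  # branch two: 3x3 average pooling, then 1x1
            nn.AvgPool2d(3, stride=1, padding=1),
            conv_block(in_ch, ch, 1, 0),
        )
        self.branch3 = nn.Sequential(                  # branch three: 1x1, then 3x3
            conv_block(in_ch, ch, 1, 0),
            conv_block(ch, ch, 3, 1),
        )
        self.branch4 = nn.Sequential(                  # branch four: 1x1, 3x3, 3x3
            conv_block(in_ch, ch, 1, 0),
            conv_block(ch, ch, 3, 1),
            conv_block(ch, ch, 3, 1),
        )

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch2(x),
                          self.branch3(x), self.branch4(x)], dim=1)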
The feature maps produced by the four branches of the first sub-block are then concatenated and used as the input of the second sub-block of the multi-scale block.
The second sub-block comprises four branches, which are arranged as follows (a sketch of this sub-block follows the branch list):
branch one: a 1 × 1 convolution;
branch two: average pooling followed by a 1 × 1 convolution;
branch three: a 1 × 1 convolution followed by a pair of 1 × 7 and 7 × 1 convolutions;
branch four: a 1 × 1 convolution followed by two pairs of 1 × 7 and 7 × 1 convolutions.
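A corresponding sketch of the second sub-block is given below, reusing the conv_block helper from the previous sketch; again, the channel widths and padding values are assumptions, while the factorized 1 × 7 and 7 × 1 convolutions follow the branch description.

class SecondSubBlock(nn.Module):
    """Four parallel branches with factorized 1x7 / 7x1 convolutions; outputs are concatenated."""
    def __init__(self, in_ch=64, ch=16):
        super().__init__()
        self.branch1 = conv_block(in_ch, ch, 1, 0)     # branch one: 1x1 convolution
        self.branch2 = nn.Sequential(                  # branch two: average pooling, then 1x1
            nn.AvgPool2d(3, stride=1, padding=1),
            conv_block(in_ch, ch, 1, 0),
        )
        self.branch3 = nn.Sequential(                  # branch three: 1x1, then one 1x7/7x1 pair
            conv_block(in_ch, ch, 1, 0),
            conv_block(ch, ch, (1, 7), (0, 3)),
            conv_block(ch, ch, (7, 1), (3, 0)),
        )
        self.branch4 = nn.Sequential(                  # branch four: 1x1, then two 1x7/7x1 pairs
            conv_block(in_ch, ch, 1, 0),
            conv_block(ch, ch, (1, 7), (0, 3)),
            conv_block(ch, ch, (7, 1), (3, 0)),
            conv_block(ch, ch, (1, 7), (0, 3)),
            conv_block(ch, ch, (7, 1), (3, 0)),
        )

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch2(x),
                          self.branch3(x), self.branch4(x)], dim=1)

Factorizing a 7 × 7 receptive field into 1 × 7 and 7 × 1 convolutions is what keeps the parameter count of this sub-block small.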
The output feature maps of the four branches of the second sub-block are concatenated in the trunk path.
In order to reduce the number of channels of the feature maps, the feature maps output by the second sub-block pass successively through three convolutional layers to obtain three-channel features.
To further retain the information of the original picture, a global skip connection is set.
Finally, a clear picture corresponding to the fog picture is obtained through an activation function (a sketch of the complete forward pass is given below).
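Putting the pieces together, one possible end-to-end forward pass is sketched below, reusing the modules from the two previous sketches. The intermediate channel widths and the choice of a sigmoid as the final activation are assumptions; the description fixes only the order: multi-scale block, three convolutional layers, global skip connection, activation.

class MultiScaleDehazeNet(nn.Module):
    """Multi-scale block -> three convolutions down to 3 channels -> global skip connection -> activation."""
    def __init__(self):
        super().__init__()
        self.sub1 = FirstSubBlock(in_ch=3, ch=16)      # four branches of 16 channels -> 64 channels
        self.sub2 = SecondSubBlock(in_ch=64, ch=16)    # four branches of 16 channels -> 64 channels
        self.reduce = nn.Sequential(                   # three convolutional layers reducing to 3 channels
            conv_block(64, 32, 3, 1),
            conv_block(32, 16, 3, 1),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, hazy):
        features = self.sub2(self.sub1(hazy))
        residual = self.reduce(features)
        return torch.sigmoid(residual + hazy)          # global skip connection followed by the activation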
All of the above mentioned convolutional layers are followed by a regularization operation and an activation function.
During training, the parameters of the convolutional layers are updated by back-propagating the Mean Square Error (MSE) loss with the Adam algorithm, where the MSE loss is the Euclidean distance between the clear image and the output image of the network. The variation of the loss function on the verification set is monitored at the same time (a minimal training-loop sketch is given below).
This training process is repeated until the loss function stabilizes.
At this point, the training of the model ends.
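A minimal training loop consistent with this description is sketched below, reusing the torch imports from the sketches above. The learning rate, batch handling, number of epochs and device choice are assumptions; only the use of the Adam optimizer, the MSE loss and the monitoring of the verification loss come from the description.

def train(model, train_loader, val_loader, epochs=20, lr=1e-3, device="cuda"):
    """Update the convolutional-layer parameters with Adam on the MSE loss and monitor the verification loss."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for epoch in range(epochs):
        model.train()
        for hazy, clear in train_loader:
            hazy, clear = hazy.to(device), clear.to(device)
            loss = criterion(model(hazy), clear)       # Euclidean distance between output and clear image
            optimizer.zero_grad()
            loss.backward()                            # back-propagation
            optimizer.step()
        model.eval()
        with torch.no_grad():                          # monitor the loss on the verification set
            val_loss = sum(criterion(model(h.to(device)), c.to(device)).item()
                           for h, c in val_loader) / max(len(val_loader), 1)
        print(f"epoch {epoch + 1}: verification MSE {val_loss:.4f}")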
The step (3) specifically comprises:
The test-set pictures to be tested are input one by one into the trained network model, and the corresponding defogged pictures are obtained directly.
The step (4) specifically comprises:
The peak signal-to-noise ratio and the structural similarity between each obtained defogged picture and the clear picture used to synthesize its fog picture are computed on several data sets to evaluate the defogging effect, as sketched below.
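A sketch of this evaluation is shown below, assuming scikit-image (version 0.19 or later, for the channel_axis argument) is available and that both pictures are arrays with values in [0, 1].

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed, clear):
    """Peak signal-to-noise ratio and structural similarity between a defogged picture and its clear counterpart."""
    psnr = peak_signal_noise_ratio(clear, dehazed, data_range=1.0)
    ssim = structural_similarity(clear, dehazed, data_range=1.0, channel_axis=-1)
    return psnr, ssim

def evaluate_dataset(pairs):
    """Average the two metrics over a list of (dehazed, clear) picture pairs from one data set."""
    scores = [evaluate_pair(d, c) for d, c in pairs]
    return tuple(np.mean(scores, axis=0))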
In addition, the method fully considers the efficiency problem caused by the model size and compresses the model as much as possible; a fog image can be mapped directly to the corresponding clear image through the network model of the invention. In order to fully retain the information of the original image, specially designed multi-scale blocks are used, which preserve the contour and detail information of the image at each scale. In order to reduce the model parameters and the model size, part of the GoogLeNet design is introduced and several small convolutional layers replace large convolutional layers, which has a remarkable effect on reducing the number of parameters. Furthermore, in order to further preserve original-image information that might be lost inside the multi-scale blocks, a global skip connection is added between the head and the tail of the model, and this adjustment effectively improves the effect of model training. The invention thus provides a novel lightweight multi-scale neural network that directly restores a clear image with a higher peak signal-to-noise ratio from a fog image, without estimating the transmittance or the global atmospheric light and with a reduced model size.
The above embodiments are described in detail for the purpose of illustrating the present invention and should not be construed as limiting its scope; insubstantial modifications and adaptations made by those skilled in the art on the basis of the above disclosure still fall within the scope of the present invention.

Claims (8)

1. A lightweight neural network single-image defogging method based on multi-scale convolution, characterized by comprising the following steps:
(1) dividing a data set into a training set, a verification set and a test set;
(2) training the proposed network model by utilizing a training set and a verification set;
(3) testing the trained model by using the test set;
(4) the model is evaluated using metrics.
2. The lightweight neural network single-image defogging method based on multi-scale convolution according to claim 1, wherein in step (1), part1 of the data set RESIDE-beta is selected as the data set used in the training process, 90% of part1 is used as the training set and 10% as the verification set, the Hybrid Subjective Testing Set of the standard version of the RESIDE data set is selected as the synthetic-data test set, and a data set formed of real scene pictures is selected as the test set for qualitative comparison.
3. The lightweight neural network single-image defogging method based on multi-scale convolution according to claim 1, wherein step (2) comprises: inputting pictures in the training set into the network model to be trained; first obtaining a superposed combination of four groups of feature maps through the first sub-block of the multi-scale block; then concatenating the feature maps of the four branches of the first sub-block and using them as the input of the second sub-block of the multi-scale block; successively passing the feature maps output by the second sub-block through three convolutional layers to obtain three-channel features; setting a global skip connection for retaining the original picture information; and finally obtaining a clear picture corresponding to the fog map through an activation function.
4. The method according to claim 3, wherein the first sub-block comprises four branches, which are arranged as follows:
branch one: the input picture passes through a 1 × 1 convolution;
branch two: the input picture is first processed by 3 × 3 average pooling and then by a 1 × 1 convolution;
branch three: the input picture passes through a 1 × 1 convolution followed by a 3 × 3 convolution;
branch four: the input picture passes successively through 1 × 1, 3 × 3 and 3 × 3 convolutions.
5. The method according to claim 4, wherein the second sub-block comprises four branches, which are arranged as follows:
branch one: a 1 × 1 convolution;
branch two: average pooling followed by a 1 × 1 convolution;
branch three: a 1 × 1 convolution followed by a pair of 1 × 7 and 7 × 1 convolutions;
branch four: a 1 × 1 convolution followed by two pairs of 1 × 7 and 7 × 1 convolutions.
6. The lightweight neural network single-image defogging method based on multi-scale convolution, wherein all convolutional layers are followed by a regularization operation and an activation function; during training, the parameters of the convolutional layers are updated by back-propagating a loss function with the Adam algorithm, the loss function being the Euclidean distance between the clear image and the output image of the network; and the variation of the loss function on the verification set is monitored.
7. The method according to claim 1, wherein in step (3), the test-set pictures to be tested are input one by one into the trained network model to directly obtain the corresponding defogged pictures.
8. The lightweight neural network single-image defogging method based on multi-scale convolution according to claim 1, wherein in step (4), the peak signal-to-noise ratio and the structural similarity between each obtained defogged picture and the clear picture used to synthesize its fog picture are computed on several data sets to evaluate the defogging effect.
CN201910987873.9A 2019-10-17 2019-10-17 Lightweight neural network single image defogging method based on multi-scale convolution Pending CN110738622A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910987873.9A CN110738622A (en) 2019-10-17 2019-10-17 Lightweight neural network single image defogging method based on multi-scale convolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910987873.9A CN110738622A (en) 2019-10-17 2019-10-17 Lightweight neural network single image defogging method based on multi-scale convolution

Publications (1)

Publication Number Publication Date
CN110738622A true CN110738622A (en) 2020-01-31

Family

ID=69269113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910987873.9A Pending CN110738622A (en) 2019-10-17 2019-10-17 Lightweight neural network single image defogging method based on multi-scale convolution

Country Status (1)

Country Link
CN (1) CN110738622A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489301A (en) * 2020-03-19 2020-08-04 山西大学 Image defogging method based on image depth information guide for migration learning
CN111539891A (en) * 2020-04-27 2020-08-14 高小翎 Wave band self-adaptive demisting optimization processing method for single remote sensing image
CN111915530A (en) * 2020-08-06 2020-11-10 温州大学 End-to-end-based haze concentration self-adaptive neural network image defogging method
CN111950635A (en) * 2020-08-12 2020-11-17 温州大学 Robust feature learning method based on hierarchical feature alignment
CN114972076A (en) * 2022-05-06 2022-08-30 华中科技大学 Image defogging method based on layered multi-block convolutional neural network
CN117151990A (en) * 2023-06-28 2023-12-01 西南石油大学 Image defogging method based on self-attention coding and decoding
WO2024040973A1 (en) * 2022-08-22 2024-02-29 南京邮电大学 Multi-scale fused dehazing method based on stacked hourglass network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805839A (en) * 2018-06-08 2018-11-13 西安电子科技大学 Combined estimator image defogging method based on convolutional neural networks
CN108885713A (en) * 2016-02-18 2018-11-23 谷歌有限责任公司 image classification neural network
CN109801232A (en) * 2018-12-27 2019-05-24 北京交通大学 A kind of single image to the fog method based on deep learning
CN110097522A (en) * 2019-05-14 2019-08-06 燕山大学 A kind of single width Method of defogging image of outdoor scenes based on multiple dimensioned convolutional neural networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108885713A (en) * 2016-02-18 2018-11-23 谷歌有限责任公司 image classification neural network
CN108805839A (en) * 2018-06-08 2018-11-13 西安电子科技大学 Combined estimator image defogging method based on convolutional neural networks
CN109801232A (en) * 2018-12-27 2019-05-24 北京交通大学 A kind of single image to the fog method based on deep learning
CN110097522A (en) * 2019-05-14 2019-08-06 燕山大学 A kind of single width Method of defogging image of outdoor scenes based on multiple dimensioned convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JING DING ET AL.: "Light-weight residual learning for single image dehazing", Journal of Electronic Imaging *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489301A (en) * 2020-03-19 2020-08-04 山西大学 Image defogging method based on image depth information guide for migration learning
CN111489301B (en) * 2020-03-19 2022-05-31 山西大学 Image defogging method based on image depth information guide for migration learning
CN111539891A (en) * 2020-04-27 2020-08-14 高小翎 Wave band self-adaptive demisting optimization processing method for single remote sensing image
CN111915530A (en) * 2020-08-06 2020-11-10 温州大学 End-to-end-based haze concentration self-adaptive neural network image defogging method
CN111950635A (en) * 2020-08-12 2020-11-17 温州大学 Robust feature learning method based on hierarchical feature alignment
CN111950635B (en) * 2020-08-12 2023-08-25 温州大学 Robust feature learning method based on layered feature alignment
CN114972076A (en) * 2022-05-06 2022-08-30 华中科技大学 Image defogging method based on layered multi-block convolutional neural network
CN114972076B (en) * 2022-05-06 2024-04-26 华中科技大学 Image defogging method based on layered multi-block convolutional neural network
WO2024040973A1 (en) * 2022-08-22 2024-02-29 南京邮电大学 Multi-scale fused dehazing method based on stacked hourglass network
CN117151990A (en) * 2023-06-28 2023-12-01 西南石油大学 Image defogging method based on self-attention coding and decoding
CN117151990B (en) * 2023-06-28 2024-03-22 西南石油大学 Image defogging method based on self-attention coding and decoding

Similar Documents

Publication Publication Date Title
CN110738622A (en) Lightweight neural network single image defogging method based on multi-scale convolution
Cao et al. Underwater image restoration using deep networks to estimate background light and scene depth
CN109712083B (en) Single image defogging method based on convolutional neural network
CN108269244B (en) Image defogging system based on deep learning and prior constraint
CN109584170B (en) Underwater image restoration method based on convolutional neural network
CN110009580B (en) Single-picture bidirectional rain removing method based on picture block rain drop concentration
Hu et al. Underwater image restoration based on convolutional neural network
CN106910175A (en) A kind of single image defogging algorithm based on deep learning
CN110223251B (en) Convolution neural network underwater image restoration method suitable for artificial and natural light sources
CN110544213A (en) Image defogging method based on global and local feature fusion
CN111445418A (en) Image defogging method and device and computer equipment
CN110570363A (en) Image defogging method based on Cycle-GAN with pyramid pooling and multi-scale discriminator
CN107292830B (en) Low-illumination image enhancement and evaluation method
CN109584188B (en) Image defogging method based on convolutional neural network
CN109859120A (en) Image defogging method based on multiple dimensioned residual error network
CN109801232A (en) A kind of single image to the fog method based on deep learning
CN113284061B (en) Underwater image enhancement method based on gradient network
CN110443759A (en) A kind of image defogging method based on deep learning
CN113284070A (en) Non-uniform fog image defogging algorithm based on attention transfer mechanism
Qian et al. FAOD‐Net: a fast AOD‐Net for dehazing single image
Qian et al. CIASM-Net: a novel convolutional neural network for dehazing image
CN110807743B (en) Image defogging method based on convolutional neural network
CN114764752B (en) Night image defogging algorithm based on deep learning
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
CN111651954B (en) Method for reconstructing SMT electronic component in three dimensions based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200131