CN111340718A - Image defogging method based on progressive guiding strong supervision neural network - Google Patents

Image defogging method based on progressive guiding strong supervision neural network

Info

Publication number
CN111340718A
Authority
CN
China
Prior art keywords
neural network
image
defogging
output
rgb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010075090.6A
Other languages
Chinese (zh)
Other versions
CN111340718B (en)
Inventor
徐向民
赵银湖
邢晓芬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202010075090.6A priority Critical patent/CN111340718B/en
Publication of CN111340718A publication Critical patent/CN111340718A/en
Application granted granted Critical
Publication of CN111340718B publication Critical patent/CN111340718B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image defogging method based on a progressive guiding strong supervision neural network, which comprises the following steps: constructing an end-to-end convolutional neural network, wherein the network input is an original hazy RGB image and the network output is a clear RGB image; the convolutional neural network is formed by connecting 3 defogging modules with the same structure end to end, and the output of the previous defogging module is used as the input of the next defogging module; each defogging module consists of 1 neural network block and 1 guiding filter layer. In each defogging module, the neural network block performs defogging processing on the image to obtain a 3-channel RGB output, and the guiding filter layer, taking the original foggy RGB image as a guide, sharpens the image edges of the 3-channel RGB output of the neural network block in the current defogging module. After the constructed convolutional neural network learns to reconstruct clear images from training data, a foggy RGB image can be fed directly into the network in practical application to obtain a fog-free image with better definition and higher quality.

Description

Image defogging method based on progressive guiding strong supervision neural network
Technical Field
The invention relates to the field of deep learning and computer vision, in particular to an image defogging method based on a progressive guiding strong supervision neural network.
Background
Fog is a common atmospheric phenomenon in which water droplets, dust, fine sand or other particles suspended in the air degrade image quality. When imaging in foggy weather, light reflected from distant objects cannot fully penetrate the dense atmosphere to reach the camera, and atmospheric scattering causes a loss of image contrast and saturation. Foggy images seriously hinder high-level computer vision tasks such as autonomous driving and semantic segmentation of satellite images, so image defogging is a research focus and hot spot in the fields of deep learning and computer vision.
Over the last decade, computer vision technology has developed rapidly and a variety of image defogging methods have appeared. These methods fall into two main categories: traditional methods based on hand-crafted priors and methods based on deep learning. Traditional prior-based methods analyze and summarize prior knowledge from foggy and fog-free image data sets and construct hand-crafted features, mainly estimating a transmission map under an atmospheric light model, or using image histograms, contrast, saturation and the like. With the development of deep learning, many defogging methods based on deep learning have appeared; these mainly use convolutional neural networks to replace the hand-crafted feature extraction process and, by virtue of strong machine computing power, achieve a defogging effect that is greatly improved over the earlier traditional methods.
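For context, prior-based defogging methods are generally built on the standard atmospheric scattering model from the dehazing literature (stated here for reference; it is not quoted from this patent):

```latex
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
```

where I(x) is the observed hazy image, J(x) is the haze-free scene radiance, A is the global atmospheric light, t(x) is the transmission map, β is the scattering coefficient and d(x) is the scene depth; such methods estimate t(x) and A from hand-crafted priors and invert the model to recover J(x).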
At present, most image defogging methods have obvious limitations and the defogging quality still needs to be improved; moreover, because the fog concentration differs from one region of an image to another, different regions call for different degrees of defogging treatment. Research into a defogging method that overcomes these deficiencies therefore has important research significance and practical value.
Disclosure of Invention
In order to overcome the defects of the existing image defogging method, the invention provides an image defogging method based on a progressive guiding strong supervision neural network.
The purpose of the invention is realized by the following technical scheme: an image defogging method based on a progressive guiding strong supervision neural network comprises the following steps:
constructing an end-to-end convolutional neural network, wherein the network input is an original hazy RGB image and the network output is a clear RGB image; the convolutional neural network is formed by connecting 3 defogging modules with the same structure end to end, and the output of the previous defogging module is used as the input of the next defogging module; each defogging module consists of 1 neural network block and 1 guiding filter layer;
in each defogging module, the image input into the current defogging module is defogged by the neural network block to obtain 3-channel RGB output, the input of the guide filter layer is an original fogging RGB image and the RGB output of the neural network block in the current defogging module, and the guide filter layer takes the original fogging RGB image as guidance to sharpen the edge of the image of the 3-channel RGB output of the neural network block in the current defogging module.
After the constructed convolutional neural network learns to reconstruct clear images from training data, a foggy RGB image can be fed directly into the network in practical application to obtain a fog-free image with better definition and higher quality.
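As a concrete illustration of how the three defogging modules are chained, the following is a minimal PyTorch sketch, not the patented implementation: NeuralNetworkBlock and GuidedFilterLayer here are simplified stand-ins (a small convolution stack and a learned 1 × 1 fusion) for the components detailed below, and all class, function and parameter names are hypothetical.

```python
# Minimal sketch of the progressive end-to-end network described above (assumptions:
# PyTorch; the two submodules are simplified placeholders, detailed versions are
# sketched later in this description).
import torch
import torch.nn as nn


class NeuralNetworkBlock(nn.Module):
    """Placeholder defogging block: 3-channel RGB in, 3-channel RGB out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


class GuidedFilterLayer(nn.Module):
    """Placeholder guided filter: fuses the hazy guide with the block output."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Conv2d(6, 3, kernel_size=1)  # learned fusion of guide + input

    def forward(self, guide, p):
        return self.fuse(torch.cat([guide, p], dim=1))


class ProgressiveDefogNet(nn.Module):
    """Three defogging modules with the same structure, connected end to end."""
    def __init__(self, num_modules=3):
        super().__init__()
        self.blocks = nn.ModuleList([NeuralNetworkBlock() for _ in range(num_modules)])
        self.filters = nn.ModuleList([GuidedFilterLayer() for _ in range(num_modules)])

    def forward(self, hazy):
        outputs = []                      # (block output, filtered output) per module
        x = hazy
        for block, gf in zip(self.blocks, self.filters):
            b = block(x)                  # defogged estimate from the neural network block
            f = gf(hazy, b)               # edge sharpening guided by the original hazy image
            outputs.extend([b, f])
            x = f                         # output of this module is the input of the next
        return outputs                    # outputs[-1] is the final clear RGB image
```

Feeding a hazy tensor of shape (N, 3, H, W) to ProgressiveDefogNet yields six outputs, of which the last (the third guiding filter layer's output) is taken as the final clear image, while all six can be supervised during training as described below.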
Preferably, the neural network block performs defogging processing on the image input to the current defogging module, and the method includes:
the method comprises the steps of preprocessing input by using 2 convolutional layers of 3 × 3 to obtain 64 feature maps, performing 1/2 down-sampling by using a convolutional layer of 3 × 3 to obtain 64 feature maps with the size half of the original input, building by using a dense network (denseNet) architecture, wherein each dense block uses convolutional layers of 1 × 1 and 3 × 3, the convolutional layer of 1 × 1 plays a role of bottomenck, the input feature maps are converted into 64, input of each dense block comes from all previous feature maps, the number of feature maps of the dense block is 128, 192, 256 and 320 respectively, then the convolutional layer of 3 × 3 is used to obtain 64 feature maps, then up-sampling is performed by 2 times by using a 3 × 3 deconvolution layer to obtain 64 feature maps with the size same as the original input, and finally, the convolutional layers of 3 × 3 and 1 × 1 are used to obtain 3-channel RGB output.
Preferably, the guiding filter layer takes the original hazy RGB image as a guide, and performs image edge sharpening on the 3-channel RGB output of the neural network block in the current defogging module, so that the edge of the output image of the neural network block is consistent with the edge of the original hazy RGB image, and the method is as follows:
and taking the original foggy RGB image as a guide image I, taking the output of the neural network block as an image P to be filtered, and taking the filtered output image as Q, wherein the simple definition formula of the guide filtering layer is as follows:
Q=∑W(I)*P
wherein, W is a weight value determined according to the guide image I, and the weight value is a parameter learned by the guide filter layer in the model training process.
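As an illustration of what such a layer can look like in practice, below is a sketch of a differentiable guided filter that realizes Q = ∑W(I)*P through the classical local linear model Q = a·I + b. Treating the regularization term eps as a learnable parameter is an assumption made here to echo the statement that the weights are learned during training; the window radius r and the whole formulation are illustrative, not the patented layer.

```python
# Differentiable guided filter sketch: the output is a local linear transform of the
# guide image, Q = a*I + b, whose coefficients come from local window statistics.
import torch
import torch.nn as nn
import torch.nn.functional as F


def box_filter(x, r):
    """Mean filter with a (2r+1)x(2r+1) window, applied channel-wise."""
    k = 2 * r + 1
    weight = torch.ones(x.size(1), 1, k, k, device=x.device, dtype=x.dtype) / (k * k)
    return F.conv2d(x, weight, padding=r, groups=x.size(1))


class GuidedFilterLayer(nn.Module):
    def __init__(self, r=4):
        super().__init__()
        self.r = r
        self.eps = nn.Parameter(torch.tensor(1e-2))   # assumed learnable regularizer

    def forward(self, guide, p):
        mean_i = box_filter(guide, self.r)
        mean_p = box_filter(p, self.r)
        cov_ip = box_filter(guide * p, self.r) - mean_i * mean_p
        var_i = box_filter(guide * guide, self.r) - mean_i * mean_i
        a = cov_ip / (var_i + self.eps)               # local linear coefficients
        b = mean_p - a * mean_i
        mean_a = box_filter(a, self.r)
        mean_b = box_filter(b, self.r)
        return mean_a * guide + mean_b                # filtered output Q follows the guide's edges
```

Because the coefficients a and b are driven by the guide's local variance, edges present in the original hazy image are transferred to the filtered output, which is exactly the edge-sharpening role the guiding filter layer plays here.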
Preferably, a mean square error (MSE) loss against the ground truth is calculated both for the feature maps before the guided filter layer and for those after it, with the loss weights before and after the guided filter layer in a ratio of 1:10; the supervision before the guiding filter plays the role of strong supervision, so that the neural network block can learn more features.
Specifically, the convolutional neural network comprises 3 defogging modules connected end to end, and a loss function of the whole end-to-end convolutional neural network is as follows:
L=αL1+βL2+αL3+βL4+αL5+βL6
wherein the ratio of α to β is 1:10; L1 represents the mean square error (MSE) loss between the output of the first neural network block and the ground truth, L2 represents the MSE loss between the output of the first guiding filter layer and the ground truth, L3 represents the MSE loss between the output of the second neural network block and the ground truth, L4 represents the MSE loss between the output of the second guiding filter layer and the ground truth, L5 represents the MSE loss between the output of the third neural network block and the ground truth, and L6 represents the MSE loss between the output of the third guiding filter layer and the ground truth.
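A minimal sketch of this composite loss follows, assuming the six supervised outputs are collected in the order L1…L6 (neural network block output, then guiding filter layer output, for modules 1 to 3) and that α:β = 1:10 as stated; the function and argument names are illustrative.

```python
# Composite strong-supervision loss: L = α(L1 + L3 + L5) + β(L2 + L4 + L6), α:β = 1:10.
import torch.nn.functional as F


def progressive_loss(outputs, ground_truth, alpha=1.0, beta=10.0):
    """outputs: [block1, filter1, block2, filter2, block3, filter3]."""
    total = 0.0
    for idx, out in enumerate(outputs):
        weight = alpha if idx % 2 == 0 else beta   # α before each guided filter, β after it
        total = total + weight * F.mse_loss(out, ground_truth)
    return total
```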
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) the invention uses a progressive strategy to remove the fog in the image in multiple stages, obtaining a clearer, higher-quality result;
(2) the invention performs edge-preserving post-processing with a guided filter, so that the final output image has sharper edges and a better visual effect;
(3) the invention adds strong supervision in front of the guided filter, so that the neural network block focuses on extracting features and reconstructing the image while the guided filter acts as a post-processing operation, which better matches the logic of image reconstruction;
(4) the invention is an end-to-end image processing technique that avoids manual feature engineering, obtains a clear fog-free image directly from the foggy input image, and avoids introducing noise interference.
Drawings
FIG. 1 is a general flow chart of an image defogging method based on a progressive-oriented strong supervision neural network;
FIG. 2 is a technical implementation diagram of a defogging module;
fig. 3 is a technical implementation diagram of a strong supervision method.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
As shown in fig. 1 to fig. 3, the present embodiment discloses an image defogging method based on a progressive-oriented strong supervised neural network, including the following steps:
s1, constructing an end-to-end convolutional neural network, directly inputting the foggy RGB image into a network model, and outputting to obtain a clear RGB image, wherein the convolutional neural network comprises 3 defogging modules based on model effect and model complexity, and the 3 defogging modules have the same structure and are connected end to end. Each defogging module consists of 1 neural network block and 1 guiding filter layer.
S2, in each defogging module, the neural network block defogs the input image to obtain a 3-channel RGB output. The guiding filter layer has two inputs: the original foggy RGB image and the RGB output of the neural network block in the current defogging module; taking the original foggy RGB image as a guide, the guiding filter layer sharpens the edges of the 3-channel RGB output of the neural network block in the current defogging module.
S4, controlling the image to pass through 3 defogging modules by adopting a progressive training strategy, wherein the output of the previous defogging module is used as the input of the next defogging module;
S5, calculating the loss between the ground truth and the feature maps both before and after the guided filter layer; the supervision before the guided filter plays the role of strong supervision, so that the neural network block can learn more features.
As shown in fig. 1, the end-to-end convolutional neural network constructed in this embodiment takes the hazy RGB image directly as input. The network model is composed of three defogging modules: the hazy RGB image first passes through defogging module 1, giving a three-channel output with a preliminary defogging effect; the output of defogging module 1 is then fed into defogging module 2, giving a three-channel output with a better (second-best) defogging effect; finally, the output of defogging module 2 is fed into defogging module 3, which produces the RGB image with the optimal defogging effect. The three defogging modules have the same structure, and each comprises 1 neural network block and 1 guiding filter layer, so there are 3 neural network blocks and 3 guiding filter layers in total. The foggy RGB image is input directly into the network, which performs learning-based reconstruction on the training data and outputs a clear RGB image as the result.
Referring to fig. 2, each neural network block performs defogging processing on an input image, specifically:
Firstly, the input is preprocessed with 2 convolutional layers of 3 × 3 to obtain 64 feature maps, which mainly learn simple low-level information of the image. A 3 × 3 convolutional layer (with stride set to 2) then performs 1/2 downsampling to obtain 64 feature maps at half the size of the original input. The core of the block is built with a dense network (DenseNet) architecture, which makes the most of texture information across features at different scales: each dense block uses 1 × 1 and 3 × 3 convolutional layers, the 1 × 1 convolution acts as a bottleneck that converts the input feature maps to 64, the input of each dense block comes from all preceding feature maps, and the numbers of feature maps of the dense blocks are 128, 192, 256 and 320 respectively. A 3 × 3 convolutional layer is then used to obtain 64 feature maps, a 3 × 3 deconvolution layer performs 2× upsampling to obtain 64 feature maps of the same size as the original input, and finally 3 × 3 and 1 × 1 convolutional layers produce the 3-channel RGB output.
Each guiding filter layer takes the original foggy RGB image as a guide to sharpen the edges of the 3-channel RGB output of the neural network block.
In this embodiment, the inputs to the guided filter layer are the original hazy RGB image and the output of the neural network block. Because the output image of the neural network block has blurred edges while the original foggy RGB image carries sharper edge information, the guided filter layer uses the original foggy RGB image as a guide to sharpen the edges of the neural network block's output, so that the edges of the output image are consistent with the edges of the original foggy RGB image. Specifically:
and taking the original foggy RGB image as a guide image I, taking the output of the neural network block as an image P to be filtered, and taking the filtered output image as Q, wherein the simple definition formula of the guide filtering layer is as follows:
Q=∑W(I)*P
wherein, W is a weight value determined according to the guide image I, and the weight value is a parameter learned by the guide filter layer in the model training process.
In this embodiment, a progressive training strategy controls the image to pass through the 3 defogging modules, with the output of the previous defogging module used as the input of the next one; that is, the foggy image passes through defogging module 1, defogging module 2 and defogging module 3 in sequence, and a clear image is finally obtained. When a foggy image is captured, the fog is thinner close to the camera (shallow depth of field) and denser far from the camera (deep depth of field). As the network model processes the image, defogging module 1 handles regions with shallow depth of field, defogging module 2 handles regions with deeper depth of field, and defogging module 3 handles the deepest regions; the 3 defogging modules work progressively, defogging the image region by region from shallow to deep depth of field to obtain the final clear RGB image.
Referring to fig. 3, a mean square error against the ground truth is calculated for the feature maps both before and after the guided filter layer, with the loss weights in a ratio of 1:10. The supervision before the guided filter plays the role of strong supervision, so that the neural network block can learn more features. With this supervision method, the whole end-to-end convolutional neural network is trained on a training data set and finally used for testing.
The loss function of the whole end-to-end convolutional neural network is as follows:
L=αL1+βL2+αL3+βL4+αL5+βL6
wherein the ratio of α to β is 1:10; L1 represents the mean square error (MSE) loss between the output of neural network block 1 and the ground truth, L2 represents the MSE loss between the output of guiding filter layer 1 and the ground truth, L3 represents the MSE loss between the output of neural network block 2 and the ground truth, L4 represents the MSE loss between the output of guiding filter layer 2 and the ground truth, L5 represents the MSE loss between the output of neural network block 3 and the ground truth, and L6 represents the MSE loss between the output of guiding filter layer 3 and the ground truth.
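To show how these six supervised outputs and the weighted loss come together during training, here is a short sketch of one optimization step. It assumes PyTorch, a network that returns the six outputs in the order above (such as the ProgressiveDefogNet sketch given earlier), the progressive_loss helper from the previous sketch, and paired hazy/clear RGB tensors; none of these names come from the patent.

```python
# One training step of the end-to-end network under the strong-supervision loss.
import torch


def train_step(net, optimizer, hazy, clear, alpha=1.0, beta=10.0):
    optimizer.zero_grad()
    outputs = net(hazy)                                  # six supervised tensors
    loss = progressive_loss(outputs, clear, alpha, beta)
    loss.backward()                                      # backpropagation through all modules
    optimizer.step()
    return loss.item()


# Illustrative wiring:
# net = ProgressiveDefogNet()
# optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
# for hazy, clear in train_loader:
#     train_step(net, optimizer, hazy, clear)
```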
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. An image defogging method based on a progressive guiding strong supervision neural network is characterized by comprising the following steps:
constructing an end-to-end convolutional neural network, wherein the network input is an original hazy RGB image and the network output is a clear RGB image; the convolutional neural network is formed by connecting 3 defogging modules with the same structure end to end, and the output of the previous defogging module is used as the input of the next defogging module; each defogging module consists of 1 neural network block and 1 guiding filter layer;
in each defogging module, the image input into the current defogging module is defogged by the neural network block to obtain 3-channel RGB output, the input of the guide filter layer is an original fogging RGB image and the RGB output of the neural network block in the current defogging module, and the guide filter layer takes the original fogging RGB image as guidance to sharpen the edge of the image of the 3-channel RGB output of the neural network block in the current defogging module.
2. The image defogging method based on the progressive-oriented strong supervision neural network as claimed in claim 1, wherein the neural network block performs defogging processing on the image input into the current defogging module by:
the method comprises the steps of preprocessing the input by using 2 convolutional layers of 3 × 3 to obtain 64 feature maps, conducting 1/2 downsampling by using a convolutional layer of 3 × 3 to obtain 64 feature maps with half the size of the original input, building with a dense network architecture in which each dense block uses convolutional layers of 1 × 1 and 3 × 3, the convolutional layer of 1 × 1 functioning as a bottleneck that converts the input feature maps to 64, the input of each dense block coming from all the previous feature maps, and the numbers of feature maps of the dense blocks being 128, 192, 256 and 320, obtaining 64 feature maps by using a convolutional layer of 3 × 3, conducting 2-time upsampling by using a deconvolution layer of 3 × 3 to obtain 64 feature maps with the same size as the original input, and finally obtaining a 3-channel RGB output by using convolutional layers of 3 × 3 and 1 × 1.
3. The image defogging method according to claim 1, wherein the guiding filter layer takes the original foggy RGB image as a guide to perform image edge sharpening on the 3-channel RGB output of the neural network block in the current defogging module, so that the edge of the output image of the neural network block is consistent with the edge of the original foggy RGB image, and the method comprises the following steps:
and taking the original foggy RGB image as a guide image I, taking the output of the neural network block as an image P to be filtered, and taking the filtered output image as Q, wherein the simple definition formula of the guide filtering layer is as follows:
Q=∑W(I)*P
wherein, W is a weight value determined according to the guide image I, and the weight value is a parameter learned by the guide filter layer in the model training process.
4. The image defogging method based on the progressive-oriented strong supervision neural network according to claim 1, wherein a mean square error (MSE) loss against a ground truth is calculated for the feature maps both before and after the guided filter layer, the loss weights before and after the guided filter layer being in a ratio of 1:10; the supervision before the guided filter plays a role of strong supervision, so that the neural network block can learn more features.
5. The image defogging method according to claim 1, wherein the convolutional neural network comprises 3 defogging modules connected end to end, and the loss function of the whole end-to-end convolutional neural network is as follows:
L=αL1+βL2+αL3+βL4+αL5+βL6
wherein the ratio of α to β is 1:10; L1 represents the MSE loss between the output of the first neural network block and the ground truth, L2 represents the MSE loss between the output of the first guided filter layer and the ground truth, L3 represents the MSE loss between the output of the second neural network block and the ground truth, L4 represents the MSE loss between the output of the second guided filter layer and the ground truth, L5 represents the MSE loss between the output of the third neural network block and the ground truth, and L6 represents the MSE loss between the output of the third guided filter layer and the ground truth.
CN202010075090.6A 2020-01-22 2020-01-22 Image defogging method based on progressive guiding strong supervision neural network Active CN111340718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010075090.6A CN111340718B (en) 2020-01-22 2020-01-22 Image defogging method based on progressive guiding strong supervision neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010075090.6A CN111340718B (en) 2020-01-22 2020-01-22 Image defogging method based on progressive guiding strong supervision neural network

Publications (2)

Publication Number Publication Date
CN111340718A true CN111340718A (en) 2020-06-26
CN111340718B CN111340718B (en) 2023-06-20

Family

ID=71183363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010075090.6A Active CN111340718B (en) 2020-01-22 2020-01-22 Image defogging method based on progressive guiding strong supervision neural network

Country Status (1)

Country Link
CN (1) CN111340718B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903232A (en) * 2018-12-20 2019-06-18 江南大学 A kind of image defogging method based on convolutional neural networks
CN109934779A (en) * 2019-01-30 2019-06-25 南京邮电大学 A kind of defogging method based on Steerable filter optimization
CN110097519A (en) * 2019-04-28 2019-08-06 暨南大学 Double supervision image defogging methods, system, medium and equipment based on deep learning
CN110599534A (en) * 2019-09-12 2019-12-20 清华大学深圳国际研究生院 Learnable guided filtering module and method suitable for 2D convolutional neural network

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861936A (en) * 2020-07-29 2020-10-30 北京字节跳动网络技术有限公司 Image defogging method and device, electronic equipment and computer readable storage medium
CN111861936B (en) * 2020-07-29 2023-03-24 抖音视界有限公司 Image defogging method and device, electronic equipment and computer readable storage medium
CN111932365A (en) * 2020-08-11 2020-11-13 武汉谦屹达管理咨询有限公司 Financial credit investigation system and method based on block chain
CN111932365B (en) * 2020-08-11 2021-09-10 上海华瑞银行股份有限公司 Financial credit investigation system and method based on block chain
CN112070701A (en) * 2020-09-08 2020-12-11 北京字节跳动网络技术有限公司 Image generation method, device, equipment and computer readable medium
CN114049274A (en) * 2021-11-13 2022-02-15 哈尔滨理工大学 Defogging method for single image

Also Published As

Publication number Publication date
CN111340718B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN111340718A (en) Image defogging method based on progressive guiding strong supervision neural network
CN111062880B (en) Underwater image real-time enhancement method based on condition generation countermeasure network
CN110210551B (en) Visual target tracking method based on adaptive subject sensitivity
CN108986050B (en) Image and video enhancement method based on multi-branch convolutional neural network
CN109859147B (en) Real image denoising method based on generation of antagonistic network noise modeling
Tran et al. GAN-based noise model for denoising real images
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN109360155A (en) Single-frame images rain removing method based on multi-scale feature fusion
CN110310241B (en) Method for defogging traffic image with large air-light value by fusing depth region segmentation
Zhang et al. Single image dehazing via dual-path recurrent network
CN115393396B (en) Unmanned aerial vehicle target tracking method based on mask pre-training
CN113392711A (en) Smoke semantic segmentation method and system based on high-level semantics and noise suppression
CN116188325A (en) Image denoising method based on deep learning and image color space characteristics
CN115063434B (en) Low-low-light image instance segmentation method and system based on feature denoising
CN113066025A (en) Image defogging method based on incremental learning and feature and attention transfer
CN111652231B (en) Casting defect semantic segmentation method based on feature self-adaptive selection
CN115731138A (en) Image restoration method based on Transformer and convolutional neural network
CN115439738A (en) Underwater target detection method based on self-supervision cooperative reconstruction
Han et al. UIEGAN: Adversarial learning-based photorealistic image enhancement for intelligent underwater environment perception
CN113436101B (en) Method for removing rain by Dragon lattice tower module based on efficient channel attention mechanism
CN113280820B (en) Orchard visual navigation path extraction method and system based on neural network
CN112419163A (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN111612803B (en) Vehicle image semantic segmentation method based on image definition
CN112634289A (en) Rapid feasible domain segmentation method based on asymmetric void convolution
Niu et al. Underwater Waste Recognition and Localization Based on Improved YOLOv5.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant