CN114972097A - Image deblurring method based on a cycle-consistency generative adversarial network

Image deblurring method based on a cycle-consistency generative adversarial network

Info

Publication number
CN114972097A
Authority
CN
China
Prior art keywords
image
loss
network
discriminator
countermeasure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210611043.8A
Other languages
Chinese (zh)
Inventor
邓立为
李允发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN202210611043.8A
Publication of CN114972097A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image deblurring method based on a cycle-consistency generative adversarial network, belonging to the technical fields of deep learning and image processing. The method mainly comprises the following steps: S1, optimizing the model structure and loss function of the generative adversarial network to address the information loss caused by down-sampling feature extraction and the poor correlation of features acquired at a single scale; S2, proposing a generative adversarial network model that can be trained on an unpaired training set, addressing the difficulty of obtaining paired data sets and the poor generalization of synthetic data sets; S3, designing the generative adversarial network model on the unpaired training set; S4, composing the network loss function from multiple losses, including the mean square error, the adversarial loss and the cycle-consistency loss; S5, evaluating the deblurring effect by measuring the mean square error, the peak signal-to-noise ratio and the structural similarity. By optimizing the blurred-image restoration algorithm, the method alleviates the motion blur produced by handheld camera equipment and moving subjects, comprehensively improves image quality, and improves the accuracy of downstream artificial intelligence algorithms.

Description

Image deblurring method based on a cycle-consistency generative adversarial network
Technical Field
The invention belongs to the technical fields of deep learning and image processing, and particularly relates to an image deblurring method based on a cycle-consistency generative adversarial network.
Background
During image capture, relative motion between the capture device and the photographed object degrades image quality and produces motion blur. Blurred images generally cannot meet everyday needs and are poorly suited as input data for artificial intelligence algorithms, so restoring motion-blurred images and improving image quality has become a focus of current image research. In the field of image restoration, data acquisition is a persistent difficulty: pixel-aligned pairs of sharp and blurred images cannot be captured by conventional photography, and most image restoration studies are therefore trained on synthetically generated image pairs.
In practice, the acquisition, transmission and storage of images are affected by many uncontrollable factors, such as relative motion between the imaging device and the object, diffraction, turbulence effects and electronic circuit noise, all of which reduce image quality. Image quality degradation takes many forms, including blurring, local information loss and reduced brightness, and is collectively referred to as image degradation.
In recent years, with the growing demand for image sharpness, research on blurred-image restoration algorithms has attracted the attention of many researchers. Applications abound: Photoshop includes an image-processing module for removing blur, and some mobile phones and cameras incorporate anti-shake functions that reduce motion blur to a certain extent. Blurred-image restoration is therefore of practical significance and potential commercial value.
Disclosure of Invention
In view of the problems identified above, the present invention aims to alleviate the motion blur produced by handheld camera equipment and moving subjects by optimizing the blurred-image restoration algorithm, thereby comprehensively improving image quality and improving the accuracy of downstream artificial intelligence algorithms.
In order to solve the above technical problems, the technical scheme adopted by the invention is as follows:
An image deblurring method based on a cycle-consistency generative adversarial network comprises the following steps:
S1: To address the information loss caused by down-sampling feature extraction and the poor correlation of features acquired at a single scale, the model structure and loss function of the generative adversarial network are optimized.
Further, the specific steps of S1 are:
S1.1: Multi-scale features of the input data are acquired with a multi-scale residual structure, and an attention mechanism is used to remove the extra noise that feature information may introduce when passed between layers (an illustrative sketch follows step S1.3).
S1.2: The generated data are judged by a combination of a global discriminator and a local discriminator.
S1.3: The network model, designed and trained on paired data sets, further improves the quality of blurred-image restoration while preserving the overall restoration effect, so that the restored image is sharper and more natural.
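As an illustration of S1.1, the following is a minimal PyTorch sketch of a multi-scale residual block with channel attention; the module names, branch kernel sizes (3/5/7), channel counts and the squeeze-and-excitation style attention are illustrative assumptions rather than the patented architecture.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate that down-weights noisy feature
    channels before they are passed on to the next layer."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class MultiScaleResidualBlock(nn.Module):
    """Extracts features at three kernel scales, fuses them, applies channel
    attention, and adds the result back to the input (residual connection)."""

    def __init__(self, channels=64):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, 7, padding=3)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.attn = ChannelAttention(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat(
            [self.act(self.branch3(x)),
             self.act(self.branch5(x)),
             self.act(self.branch7(x))], dim=1)
        return x + self.attn(self.fuse(feats))


if __name__ == "__main__":
    block = MultiScaleResidualBlock(64)
    print(block(torch.randn(1, 64, 128, 128)).shape)  # torch.Size([1, 64, 128, 128])
```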
S2: To address the difficulty of obtaining paired data sets and the poor generalization of synthetic data sets, a generative adversarial network model that can be trained on an unpaired training set is proposed.
Further, the specific steps of S2 are:
S2.1: Two generative adversarial networks are arranged in a ring-shaped structure, converting the image restoration problem into an image translation problem between the blurred domain and the sharp domain.
S2.2: Contrastive learning networks are combined with deep residual networks to form the generators of the two networks, which learn the information of each image domain to transform between the blurred and sharp domains; Markovian discriminators serve as the discriminators of the two networks, providing the feedback used to update the parameters of both generative adversarial networks (a wiring sketch follows this step).
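The ring structure of S2.1 and S2.2 can be illustrated with a minimal wiring sketch. The tiny generator and discriminator definitions below are placeholders only (the method's actual generators combine contrastive learning with deep residual blocks, and its discriminators are Markovian); the point of the sketch is the forward and backward cycles between the blurred domain FI and the sharp domain CL.

```python
import torch
import torch.nn as nn

# Placeholder networks; they stand in for the contrastive-learning/residual
# generators and the Markovian discriminators described in S2.2.
def make_generator():
    return nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

def make_discriminator():
    return nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1),
                         nn.LeakyReLU(0.2, inplace=True),
                         nn.Conv2d(64, 1, 4, stride=2, padding=1))

G_FI2CL = make_generator()   # blurred (FI) -> sharp (CL)
G_CL2FI = make_generator()   # sharp (CL) -> blurred (FI)
D_CL = make_discriminator()  # judges sharp-domain images
D_FI = make_discriminator()  # judges blurred-domain images

blurred = torch.rand(1, 3, 256, 256)
sharp = torch.rand(1, 3, 256, 256)

# Forward cycle: blurred -> pseudo-sharp -> cyclic blurred image.
pseudo_sharp = G_FI2CL(blurred)
cyclic_blurred = G_CL2FI(pseudo_sharp)

# Backward cycle: sharp -> pseudo-blurred -> cyclic sharp image.
pseudo_blurred = G_CL2FI(sharp)
cyclic_sharp = G_FI2CL(pseudo_blurred)

# Each discriminator provides feedback on real and generated samples of its domain.
real_cl, fake_cl = D_CL(sharp), D_CL(pseudo_sharp)
real_fi, fake_fi = D_FI(blurred), D_FI(pseudo_blurred)
```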
S3: Design the generative adversarial network model trained on the unpaired training set described in S2.
Further, the specific step of S3 is:
S3.1: The adversarial loss and the cycle-consistency loss together form the loss function that controls the learning direction of the network.
S4: The loss function of the network is formed from multiple losses, including the mean square error and the adversarial loss.
Further, the specific steps of S4 are:
S4.1: The blurred image is input to the generator $G_{FI2CL}$ to generate a pseudo-sharp image, and the sharp image is input to the generator $G_{CL2FI}$ to generate a pseudo-blurred image; feeding the pseudo-blurred image back through $G_{FI2CL}$ yields a cyclic sharp image, and feeding the pseudo-sharp image back through $G_{CL2FI}$ yields a cyclic blurred image.
S4.2: The real sharp image is input to the discriminator $D_{CL}$ and the blurred image is input to the discriminator $D_{FI}$; likewise, the pseudo-sharp image is input to $D_{CL}$ and the pseudo-blurred image to $D_{FI}$.
S4.3: The adversarial losses of the cycle-consistency generative adversarial network are calculated. Taking the blurred-to-sharp direction as an example, the adversarial loss of the generator is:
$\mathcal{L}_{adv}(G_{FI2CL}) = \mathbb{E}_{f\sim F}\big[(D_{CL}(G_{FI2CL}(f)) - 1)^2\big]$, where $F$ and $C$ denote the blurred-image and sharp-image domains;
the adversarial loss of the discriminator is:
$\mathcal{L}_{adv}(D_{CL}) = \mathbb{E}_{c\sim C}\big[(D_{CL}(c) - 1)^2\big] + \mathbb{E}_{f\sim F}\big[D_{CL}(G_{FI2CL}(f))^2\big]$
and the cycle-consistency loss is:
$\mathcal{L}_{cyc}(G_{FI2CL}, G_{CL2FI}) = \mathbb{E}_{f\sim F}\big[\lVert G_{CL2FI}(G_{FI2CL}(f)) - f\rVert_1\big] + \mathbb{E}_{c\sim C}\big[\lVert G_{FI2CL}(G_{CL2FI}(c)) - c\rVert_1\big]$
where $\lambda_1$, the weight coefficient of the cycle-consistency loss, is set to 10.
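As a worked summary, and assuming the standard cycle-consistency formulation rather than quoting the patent's figures, the overall generator objective combines the terms above as:

$\mathcal{L}_{G} = \mathcal{L}_{adv}(G_{FI2CL}) + \mathcal{L}_{adv}(G_{CL2FI}) + \lambda_1\,\mathcal{L}_{cyc}(G_{FI2CL}, G_{CL2FI}), \qquad \lambda_1 = 10.$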
S4.4: The adversarial loss and the cycle-consistency loss of the generators are passed to the momentum optimizer Adam, which optimizes the generators $G_{FI2CL}$ and $G_{CL2FI}$.
S4.5: The adversarial losses of the discriminators are passed to the momentum optimizer Adam, which optimizes the discriminators $D_{CL}$ and $D_{FI}$.
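A minimal sketch of the optimization in S4.3-S4.5, assuming a least-squares adversarial loss, an L1 cycle-consistency loss weighted by lambda_1 = 10, and typical Adam hyperparameters (learning rate 2e-4, betas (0.5, 0.999)); the tiny stand-in networks replace the actual generators and discriminators.

```python
import itertools
import torch
import torch.nn as nn

# Stand-in networks (see the wiring sketch under S2.2 for the full cycle).
G_FI2CL = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
G_CL2FI = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
D_CL = nn.Sequential(nn.Conv2d(3, 1, 4, stride=2, padding=1))
D_FI = nn.Sequential(nn.Conv2d(3, 1, 4, stride=2, padding=1))

# S4.4 / S4.5: momentum optimizer Adam for the generators and discriminators.
opt_G = torch.optim.Adam(itertools.chain(G_FI2CL.parameters(), G_CL2FI.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(itertools.chain(D_CL.parameters(), D_FI.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))

mse, l1 = nn.MSELoss(), nn.L1Loss()
lambda_1 = 10.0  # weight of the cycle-consistency loss

def train_step(blurred, sharp):
    # Generator update: adversarial loss plus weighted cycle-consistency loss.
    pseudo_sharp, pseudo_blurred = G_FI2CL(blurred), G_CL2FI(sharp)
    cyc_blurred, cyc_sharp = G_CL2FI(pseudo_sharp), G_FI2CL(pseudo_blurred)

    pred_fake_cl, pred_fake_fi = D_CL(pseudo_sharp), D_FI(pseudo_blurred)
    adv_g = mse(pred_fake_cl, torch.ones_like(pred_fake_cl)) + \
            mse(pred_fake_fi, torch.ones_like(pred_fake_fi))
    cyc = l1(cyc_blurred, blurred) + l1(cyc_sharp, sharp)
    loss_g = adv_g + lambda_1 * cyc
    opt_G.zero_grad(); loss_g.backward(); opt_G.step()

    # Discriminator update: real samples pushed toward 1, generated toward 0.
    pred_real_cl, pred_real_fi = D_CL(sharp), D_FI(blurred)
    pred_fake_cl = D_CL(pseudo_sharp.detach())
    pred_fake_fi = D_FI(pseudo_blurred.detach())
    loss_d = mse(pred_real_cl, torch.ones_like(pred_real_cl)) + \
             mse(pred_fake_cl, torch.zeros_like(pred_fake_cl)) + \
             mse(pred_real_fi, torch.ones_like(pred_real_fi)) + \
             mse(pred_fake_fi, torch.zeros_like(pred_fake_fi))
    opt_D.zero_grad(); loss_d.backward(); opt_D.step()
    return loss_g.item(), loss_d.item()

print(train_step(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)))
```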
S5: The deblurring effect is evaluated by measuring the mean square error, the peak signal-to-noise ratio and the structural similarity.
Further, the specific steps of S5 are:
S5.1: The MSE measures the degree of degradation between the image under test and the original image by computing the mean squared error over all corresponding pixel points of the two images; the calculation formula is:
$MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I(i,j) - K(i,j)\big)^2$, where $I$ and $K$ are the original image and the image under test, both of size $M \times N$.
S5.2: The PSNR effectively reflects the fidelity of the image under test and is the most widely used metric in the field of image processing. It compares the maximum possible signal value with the mean squared error and applies a logarithm and a constant factor to the ratio; the calculation formula is:
$PSNR = 10 \cdot \log_{10}\!\left(\frac{MAX_I^2}{MSE}\right)$, where $MAX_I$ is the maximum possible pixel value (255 for 8-bit images).
S5.3: The SSIM is an evaluation index that judges image similarity from the combined attributes of the images, namely the joint effect of three factors: luminance, contrast and structure. The luminance term is obtained from the mean of the image pixels, the contrast term from their variance, and the structure term from the covariance between the pixels of the two images; mathematically:
$SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$, where $\mu$, $\sigma^2$ and $\sigma_{xy}$ denote the mean, variance and covariance, and $c_1$, $c_2$ are small stabilizing constants.
compared with the prior art, the invention has the beneficial effects that: the algorithm is utilized to effectively evaluate the deblurring image effect and comprehensively improve the image quality.
Drawings
Fig. 1 is an overall flowchart of image deblurring with the cycle-consistency generative adversarial network in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the cycle-consistency generative adversarial network in an embodiment of the present invention.
Fig. 3 is a schematic diagram of the generator in an embodiment of the invention.
Fig. 4 is a schematic diagram of the discriminator in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, an embodiment is provided below and the invention is described in further detail.
An image deblurring method based on a cycle-consistency generative adversarial network comprises the following steps:
S1: To address the information loss caused by down-sampling feature extraction and the poor correlation of features acquired at a single scale, the model structure and loss function of the generative adversarial network are optimized.
Further, the specific steps of S1 are:
S1.1: Multi-scale features of the input data are acquired with a multi-scale residual structure, and an attention mechanism is used to remove the extra noise that feature information may introduce when passed between layers.
S1.2: The generated data are judged by a combination of a global discriminator and a local discriminator (an illustrative sketch follows step S1.3).
S1.3: The network model, designed and trained on paired data sets, further improves the quality of blurred-image restoration while preserving the overall restoration effect, so that the restored image is sharper and more natural.
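A minimal sketch of the global/local combination in S1.2, assuming the global branch scores the whole image, the local branch scores a random crop, and the two scores are averaged; the layer sizes, crop size and averaging scheme are illustrative assumptions, not the patented design.

```python
import torch
import torch.nn as nn

def conv_body(in_ch=3):
    """Shared convolutional scoring body used by both branches."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(128, 1, 4, stride=1, padding=1),
    )

class GlobalLocalDiscriminator(nn.Module):
    """Judges generated data both globally (whole image) and locally (random patch)."""

    def __init__(self, patch_size=64):
        super().__init__()
        self.global_d = conv_body()
        self.local_d = conv_body()
        self.patch_size = patch_size

    def forward(self, x):
        global_score = self.global_d(x).mean()
        # Random local crop; a real implementation might target the blurriest regions.
        _, _, h, w = x.shape
        top = torch.randint(0, h - self.patch_size + 1, (1,)).item()
        left = torch.randint(0, w - self.patch_size + 1, (1,)).item()
        patch = x[:, :, top:top + self.patch_size, left:left + self.patch_size]
        local_score = self.local_d(patch).mean()
        return 0.5 * (global_score + local_score)

if __name__ == "__main__":
    d = GlobalLocalDiscriminator()
    print(d(torch.rand(1, 3, 256, 256)))  # single combined real/fake score
```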
S2: To address the difficulty of obtaining paired data sets and the poor generalization of synthetic data sets, a generative adversarial network model that can be trained on an unpaired training set is proposed.
Further, the specific steps of S2 are:
S2.1: Two generative adversarial networks are arranged in a ring-shaped structure, converting the image restoration problem into an image translation problem between the blurred domain and the sharp domain.
S2.2: Contrastive learning networks are combined with deep residual networks to form the generators of the two networks, which learn the information of each image domain to transform between the blurred and sharp domains; Markovian discriminators (sketched below) serve as the discriminators of the two networks, providing the feedback used to update the parameters of both generative adversarial networks.
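For reference, a Markovian (patch-based) discriminator of the kind mentioned in S2.2 outputs a grid of real/fake scores, each covering only a local patch of the input; the layer widths, depth and use of instance normalization below are assumptions.

```python
import torch
import torch.nn as nn

class MarkovianDiscriminator(nn.Module):
    """Patch-based (Markovian) discriminator: returns a grid of scores, each
    with a limited receptive field, rather than one score for the whole image."""

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
                  nn.LeakyReLU(0.2, inplace=True)]
        ch = base
        for _ in range(2):
            layers += [nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(ch * 2),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch *= 2
        layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]  # per-patch scores
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    d = MarkovianDiscriminator()
    print(d(torch.rand(1, 3, 256, 256)).shape)  # e.g. torch.Size([1, 1, 31, 31])
```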
S3: Design the generative adversarial network model trained on the unpaired training set described in S2.
Further, the specific step of S3 is:
S3.1: The adversarial loss and the cycle-consistency loss together form the loss function that controls the learning direction of the network.
S4: The cropped blurred images and sharp images are input to the cycle-consistency generative adversarial network for training (a data-loading sketch follows step S4.4).
Further, the specific steps of S4 are:
S4.1: Construct the cycle-consistency generative adversarial network model.
S4.2: The real sharp image is input to the discriminator $D_{CL}$ and the blurred image is input to the discriminator $D_{FI}$; likewise, the pseudo-sharp image is input to $D_{CL}$ and the pseudo-blurred image to $D_{FI}$.
S4.3: The images of S4.2 are fed to the generators and discriminators, and both are trained as follows.
The adversarial losses of the cycle-consistency generative adversarial network are calculated. Taking the blurred-to-sharp direction as an example, the adversarial loss of the generator is:
$\mathcal{L}_{adv}(G_{FI2CL}) = \mathbb{E}_{f\sim F}\big[(D_{CL}(G_{FI2CL}(f)) - 1)^2\big]$, where $F$ and $C$ denote the blurred-image and sharp-image domains;
the adversarial loss of the discriminator is:
$\mathcal{L}_{adv}(D_{CL}) = \mathbb{E}_{c\sim C}\big[(D_{CL}(c) - 1)^2\big] + \mathbb{E}_{f\sim F}\big[D_{CL}(G_{FI2CL}(f))^2\big]$
and the cycle-consistency loss is:
$\mathcal{L}_{cyc}(G_{FI2CL}, G_{CL2FI}) = \mathbb{E}_{f\sim F}\big[\lVert G_{CL2FI}(G_{FI2CL}(f)) - f\rVert_1\big] + \mathbb{E}_{c\sim C}\big[\lVert G_{FI2CL}(G_{CL2FI}(c)) - c\rVert_1\big]$
where $\lambda_1$, the weight coefficient of the cycle-consistency loss, is set to 10.
S4.4: The adversarial losses of the discriminators are passed to the momentum optimizer Adam, which optimizes the discriminators $D_{CL}$ and $D_{FI}$.
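Because training uses an unpaired set, blurred and sharp crops are sampled independently before being fed to the network, as stated in S4. A minimal data-loading sketch follows; the directory layout, PNG file type, 256-pixel random crop and normalization are assumptions, and images are assumed to be at least as large as the crop.

```python
import random
from pathlib import Path

from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms

class UnpairedDeblurDataset(Dataset):
    """Yields independently sampled (blurred, sharp) crops from two folders;
    the two images in an item are NOT pixel-aligned, matching the unpaired setting."""

    def __init__(self, blurred_dir, sharp_dir, crop=256):
        self.blurred = sorted(Path(blurred_dir).glob("*.png"))
        self.sharp = sorted(Path(sharp_dir).glob("*.png"))
        self.tf = transforms.Compose([
            transforms.RandomCrop(crop),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
        ])

    def __len__(self):
        return max(len(self.blurred), len(self.sharp))

    def __getitem__(self, idx):
        b = Image.open(self.blurred[idx % len(self.blurred)]).convert("RGB")
        s = Image.open(random.choice(self.sharp)).convert("RGB")  # unpaired sampling
        return self.tf(b), self.tf(s)

# Usage sketch (the paths are hypothetical):
# loader = DataLoader(UnpairedDeblurDataset("train/blurred", "train/sharp"),
#                     batch_size=4, shuffle=True)
```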
S5: The deblurring effect is evaluated by measuring the mean square error, the peak signal-to-noise ratio and the structural similarity.
Further, the specific steps of S5 are:
S5.1: The MSE measures the degree of degradation between the image under test and the original image by computing the mean squared error over all corresponding pixel points of the two images; the calculation formula is:
$MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I(i,j) - K(i,j)\big)^2$, where $I$ and $K$ are the original image and the image under test, both of size $M \times N$.
S5.2: The PSNR effectively reflects the fidelity of the image under test and is the most widely used metric in the field of image processing. It compares the maximum possible signal value with the mean squared error and applies a logarithm and a constant factor to the ratio; the calculation formula is:
$PSNR = 10 \cdot \log_{10}\!\left(\frac{MAX_I^2}{MSE}\right)$, where $MAX_I$ is the maximum possible pixel value (255 for 8-bit images).
S5.3: The SSIM is an evaluation index that judges image similarity from the combined attributes of the images, namely the joint effect of three factors: luminance, contrast and structure. The luminance term is obtained from the mean of the image pixels, the contrast term from their variance, and the structure term from the covariance between the pixels of the two images; mathematically:
$SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$, where $\mu$, $\sigma^2$ and $\sigma_{xy}$ denote the mean, variance and covariance, and $c_1$, $c_2$ are small stabilizing constants.
the mathematical expressions for calculating the mean, variance and covariance are as follows:
$\mu_x = \frac{1}{N}\sum_{i=1}^{N} x_i$
$\sigma_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu_x)^2$
$\sigma_{xy} = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu_x)(y_i - \mu_y)$
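The three metrics of S5 can be computed as follows; this is a minimal NumPy sketch that uses the single-window (global) form of SSIM with the usual stabilizing constants, whereas practical SSIM implementations typically use a sliding Gaussian window.

```python
import numpy as np

def mse(ref, test):
    """Mean squared error over all corresponding pixels (S5.1)."""
    ref, test = ref.astype(np.float64), test.astype(np.float64)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB (S5.2)."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

def ssim_global(x, y, max_val=255.0):
    """Single-window SSIM built from the mean, variance and covariance (S5.3)."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)
    cov_xy = np.sum((x - mu_x) * (y - mu_y)) / (x.size - 1)
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

if __name__ == "__main__":
    a = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
    b = np.clip(a + np.random.normal(0, 5, a.shape), 0, 255).astype(np.uint8)
    print(mse(a, b), psnr(a, b), ssim_global(a, b))
```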
the above description is only one embodiment of the present invention, and is not intended to limit the present invention, and it is apparent to those skilled in the art that various modifications and variations can be made in the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. An image deblurring method based on a cycle-consistency generative adversarial network, characterized by comprising the following steps:
S1: to address the information loss caused by down-sampling feature extraction and the poor correlation of features acquired at a single scale, optimizing the model structure and loss function of the generative adversarial network;
S2: to address the difficulty of obtaining paired data sets and the poor generalization of synthetic data sets, providing a generative adversarial network model that can be trained on an unpaired training set;
S3: designing the generative adversarial network model trained on the unpaired training set;
S4: constructing the loss function of the network from multiple losses, including the mean square error, the adversarial loss and a perceptual loss used as the content loss;
S5: evaluating the deblurring effect by measuring the mean square error, the peak signal-to-noise ratio and the structural similarity.
2. The image deblurring method based on a cycle-consistency generative adversarial network according to claim 1, characterized in that the specific steps of step S1 are:
S1.1: acquiring multi-scale features of the input data with a multi-scale residual structure, and using an attention mechanism to remove the extra noise that feature information may introduce when passed between layers;
S1.2: judging the generated data by combining a global discriminator with a local discriminator;
S1.3: the network model, designed and trained on paired data sets, further improves the quality of blurred-image restoration while preserving the overall restoration effect, so that the restored image is sharper and more natural.
3. The image deblurring method based on a cycle-consistency generative adversarial network according to claim 1, characterized in that the specific steps of step S2 are:
S2.1: arranging two generative adversarial networks in a ring-shaped structure, so that the image restoration problem is converted into an image translation problem between the blurred domain and the sharp domain;
S2.2: combining contrastive learning networks with deep residual networks to form the generators of the two networks, which learn the information of each image domain to transform between the blurred and sharp domains, with Markovian discriminators serving as the discriminators of the two networks to provide the feedback used to update the parameters of both generative adversarial networks.
4. The image deblurring method based on a cycle-consistency generative adversarial network according to claim 1, characterized in that the specific step of step S3 is:
S3.1: using the adversarial loss and the cycle-consistency loss to form the loss function that controls the learning direction of the network.
5. The image deblurring method based on a cycle-consistency generative adversarial network according to claim 1, characterized in that the specific steps of step S4 are:
S4.1: inputting the blurred image into the generator to generate a pseudo-sharp image, and inputting the pseudo-blurred image into the generator to generate a cyclic sharp image;
S4.2: inputting the real sharp image and the pseudo-sharp image into the sharp-domain discriminator, and inputting the blurred image and the pseudo-blurred image into the blurred-domain discriminator;
S4.3: calculating the adversarial losses of the cycle-consistency generative adversarial network, wherein, taking the blurred-to-sharp direction as an example, the adversarial loss of the generator is:
$\mathcal{L}_{adv}(G_{FI2CL}) = \mathbb{E}_{f\sim F}\big[(D_{CL}(G_{FI2CL}(f)) - 1)^2\big]$, where $F$ and $C$ denote the blurred-image and sharp-image domains;
the adversarial loss of the discriminator is:
$\mathcal{L}_{adv}(D_{CL}) = \mathbb{E}_{c\sim C}\big[(D_{CL}(c) - 1)^2\big] + \mathbb{E}_{f\sim F}\big[D_{CL}(G_{FI2CL}(f))^2\big]$
and the cycle-consistency loss is:
$\mathcal{L}_{cyc}(G_{FI2CL}, G_{CL2FI}) = \mathbb{E}_{f\sim F}\big[\lVert G_{CL2FI}(G_{FI2CL}(f)) - f\rVert_1\big] + \mathbb{E}_{c\sim C}\big[\lVert G_{FI2CL}(G_{CL2FI}(c)) - c\rVert_1\big]$
where $\lambda_1$, the weight coefficient of the cycle-consistency loss, is set to 10;
S4.4: putting the adversarial losses of the discriminators into the momentum optimizer Adam and optimizing the two discriminators.
6. The image deblurring method based on a cycle-consistency generative adversarial network according to claim 1, characterized in that the specific steps of step S5 are:
S5.1: the MSE measures the degree of degradation between the image under test and the original image by computing the mean squared error over all corresponding pixel points of the two images; the calculation formula is:
$MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I(i,j) - K(i,j)\big)^2$, where $I$ and $K$ are the original image and the image under test, both of size $M \times N$;
S5.2: the PSNR effectively reflects the fidelity of the image under test and is the most widely used metric in the field of image processing; it compares the maximum possible signal value with the mean squared error and applies a logarithm and a constant factor to the ratio, and the calculation formula is:
$PSNR = 10 \cdot \log_{10}\!\left(\frac{MAX_I^2}{MSE}\right)$, where $MAX_I$ is the maximum possible pixel value (255 for 8-bit images);
S5.3: the SSIM is an evaluation index that judges image similarity from the combined attributes of the images, namely the joint effect of three factors: luminance, contrast and structure; the luminance term is obtained from the mean of the image pixels, the contrast term from their variance, and the structure term from the covariance between the pixels of the two images, expressed mathematically as:
$SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$, where $\mu$, $\sigma^2$ and $\sigma_{xy}$ denote the mean, variance and covariance, and $c_1$, $c_2$ are small stabilizing constants.
the mathematical expressions for calculating the mean, variance and covariance are as follows:
$\mu_x = \frac{1}{N}\sum_{i=1}^{N} x_i$
$\sigma_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu_x)^2$
$\sigma_{xy} = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu_x)(y_i - \mu_y)$
compared with the prior art, the invention has the beneficial effects that: the algorithm is utilized to effectively evaluate the deblurring image effect and comprehensively improve the image quality.
CN202210611043.8A 2022-05-31 2022-05-31 Image deblurring method for generating countermeasure network based on cycle consistency Pending CN114972097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210611043.8A CN114972097A (en) 2022-05-31 2022-05-31 Image deblurring method for generating countermeasure network based on cycle consistency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210611043.8A CN114972097A (en) 2022-05-31 2022-05-31 Image deblurring method for generating countermeasure network based on cycle consistency

Publications (1)

Publication Number Publication Date
CN114972097A true CN114972097A (en) 2022-08-30

Family

ID=82957878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210611043.8A Pending CN114972097A (en) 2022-05-31 2022-05-31 Image deblurring method for generating countermeasure network based on cycle consistency

Country Status (1)

Country Link
CN (1) CN114972097A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291252A (en) * 2023-11-27 2023-12-26 浙江华创视讯科技有限公司 Stable video generation model training method, generation method, equipment and storage medium
CN117291252B (en) * 2023-11-27 2024-02-20 浙江华创视讯科技有限公司 Stable video generation model training method, generation method, equipment and storage medium
CN118196423A (en) * 2024-05-17 2024-06-14 山东巍然智能科技有限公司 Water removal method for unmanned aerial vehicle coastal zone image and model building method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination