CN113542772A - Compressed image deblocking method based on enhanced wide-activation residual error network - Google Patents

Compressed image deblocking method based on enhanced wide-activation residual error network

Info

Publication number
CN113542772A
CN113542772A (application CN202010317399.1A)
Authority
CN
China
Prior art keywords
image
compressed image
network
deblocking
activation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010317399.1A
Other languages
Chinese (zh)
Other versions
CN113542772B (en)
Inventor
何小海
陈正鑫
任超
陈洪刚
熊淑华
卿粼波
滕奇志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202010317399.1A
Publication of CN113542772A
Application granted
Publication of CN113542772B
Status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object
    • H04N19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a pixel

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a compressed image deblocking method based on an enhanced wide-activation residual error network. The method mainly comprises the following steps: taking the enhanced wide-activation residual module as the main building unit, while introducing overlapping image block extraction and reconstruction in fully convolutional form, so as to construct a network model for removing compressed-image blocking artifacts; using the convolutional neural network constructed in the previous step to train compressed image deblocking models for different quality factors; and taking the compressed image as input to obtain the final deblocked image with the trained compressed image deblocking model. The method allows more information to flow from the shallow layers to the deep layers of the convolutional neural network, has a larger receptive field, and can exploit effective information from wider image regions. The method therefore achieves good subjective and objective results and is an effective compressed image deblocking method.

Description

Compressed image deblocking method based on enhanced wide-activation residual error network
Technical Field
The invention relates to a compressed image deblocking technology, in particular to a compressed image deblocking method based on an enhanced wide-activation residual error network, and belongs to the field of digital image processing.
Background
In practical applications, in order to use bandwidth resources more effectively, save storage space, and improve data transmission efficiency, images are often compressed lossily. However, this inevitably causes distortion of the image. As a simple and effective image compression method, the JPEG standard is widely used in various imaging devices, for example digital cameras and smart phones. In lossy compression, JPEG applies a discrete cosine transform to non-overlapping image blocks and coarsely quantizes the transform coefficients of each block, which leads to various compression artifacts, especially blocking artifacts.
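For illustration only (this fragment is not part of the claimed invention), the following Python sketch mimics the block-wise DCT and coarse quantization described above; the flat quantization step q and the helper name jpeg_like_compress are assumptions made for the example rather than the actual JPEG quantization tables.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_like_compress(img, q=40.0):
    """img: 2-D float array whose height and width are divisible by 8."""
    out = np.empty_like(img)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            block = img[i:i + 8, j:j + 8]
            coeffs = dctn(block, norm='ortho')        # 8x8 block DCT, computed per block
            coeffs = np.round(coeffs / q) * q         # coarse quantization of the coefficients
            out[i:i + 8, j:j + 8] = idctn(coeffs, norm='ortho')
    return out                                        # block boundaries become visibly discontinuous
```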
In order to remove blocking artifacts from compressed images, a number of methods have been proposed, which fall into three main categories: enhancement-based methods, reconstruction-based methods, and learning-based methods. Among them, learning-based methods have attracted more attention and deeper research because of their faster processing speed and the better quality of the processed images. In particular, in recent years, with the rapid development of deep learning and its successful application in the field of computer vision, compressed image deblocking methods based on deep convolutional neural networks have made remarkable progress. Relying on the strong learning and representation capability of deep convolutional neural networks, the invention constructs an enhanced wide-activation residual network and further improves the performance of removing blocking artifacts from compressed images.
Disclosure of Invention
The invention further develops the advantages of the wide-activation residual module, designs an enhanced wide-activation residual module, and at the same time introduces the overlapping image block extraction and reconstruction strategies commonly used in reconstruction-based methods into the deep convolutional neural network in fully convolutional form, thereby constructing an effective compressed image deblocking method.
The invention provides a compressed image deblocking method based on an enhanced wide-activation residual error network, which mainly comprises the following operation steps:
(1) taking an enhanced wide-activation residual error module as a main construction unit, and simultaneously introducing overlapped image block extraction and reconstruction in a full convolution mode, thereby constructing a network model for removing a compressed image blocking effect;
(2) respectively training compressed image deblocking models under different quality factors by using the convolutional neural network in the step one;
(3) and taking the compressed image as input to obtain a final deblocking image on the basis of the trained compressed image deblocking model.
Drawings
FIG. 1 is a schematic block diagram of the method for removing the blocking effect of the compressed image based on the enhanced wide-activation residual error network
FIG. 2 is an enhanced wide activation residual block
FIG. 3 is a block diagram of a full-convolution implementation of an overlapping image block reconstruction network
Fig. 4 shows the predefined weights of the overlapping image block extraction and reconstruction networks: (a) the weights of the convolutional layer in the overlapping image block extraction network; (b) the weights of deconvolution layer Deconv 1 in the overlapping image block reconstruction network; (c) the weights of deconvolution layer Deconv 2 in the overlapping image block reconstruction network
FIG. 5 is a comparison of the deblocking results of the present invention and six other methods on the compressed image "29030" (quality factor 10): (a) the original image; (b) the JPEG compressed image; (c)-(i) the deblocking results of method 1, method 2, method 3, method 4, method 5, method 6, and the present invention, respectively
FIG. 6 is a comparison of the deblocking results of the present invention and six other methods on the compressed image "Barbara" (quality factor 10): (a) the original image; (b) the JPEG compressed image; (c)-(i) the deblocking results of method 1, method 2, method 3, method 4, method 5, method 6, and the present invention, respectively
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
in fig. 1, the method for removing the blocking effect of the compressed image based on the enhanced wide-activation residual network may be specifically divided into the following steps:
(1) taking an enhanced wide-activation residual error module as a main construction unit, and simultaneously introducing overlapped image block extraction and reconstruction in a full convolution mode, thereby constructing a network model for removing a compressed image blocking effect;
(2) respectively training compressed image deblocking models under different quality factors by using the convolutional neural network in the step one;
(3) and taking the compressed image as input to obtain a final deblocking image on the basis of the trained compressed image deblocking model.
Specifically, in step (1), the constructed compressed image deblocking network model based on the enhanced wide-activation residual module mainly includes: an Overlapping patch extraction network (OPENet), a Feature encoding network (FENet), N Enhanced wide-activation residual blocks (EWARB), a Feature decoding network (FDNet), and an Overlapping patch reconstruction network (OPCNet).
The overlapping image block extraction network contains only one convolutional layer, whose weights are predefined as the filters of Fig. 4(a) and are not updated during training. Each filter has size p × p, and the input compressed image is convolved with stride s, so that every p × p image block in the compressed image is extracted as a p²-dimensional vector. In the present invention, s and p are set to 2 and 8, respectively. The overlapping image block extraction network not only improves processing efficiency but also enlarges the receptive field, which helps improve the performance of the convolutional neural network in removing compressed-image blocking artifacts.
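A minimal PyTorch sketch of such a fixed-weight patch extraction layer under the settings above (p = 8, s = 2, grayscale input) is given below; the one-hot ordering of the filters is an assumption about Fig. 4(a), and the class name is illustrative.

```python
import torch
import torch.nn as nn

class OverlappingPatchExtraction(nn.Module):
    """Single convolution with fixed one-hot p x p filters and stride s: each
    spatial position of the output holds one p x p image block flattened into
    a p*p-dimensional vector."""
    def __init__(self, p=8, s=2):
        super().__init__()
        self.conv = nn.Conv2d(1, p * p, kernel_size=p, stride=s, bias=False)
        weight = torch.zeros(p * p, 1, p, p)
        for k in range(p * p):
            weight[k, 0, k // p, k % p] = 1.0      # filter k copies pixel (k//p, k%p) of the block
        self.conv.weight.data.copy_(weight)
        self.conv.weight.requires_grad = False     # predefined weights, not updated during training

    def forward(self, x):                          # x: (B, 1, H, W) compressed image
        return self.conv(x)                        # (B, p*p, H', W'), one flattened patch per position
```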
The feature encoding network comprises only one convolutional layer with a 3 × 3 kernel and 64 input and output channels. It converts the pixel values coming from the overlapping image block extraction network into feature coefficients, which helps improve the robustness of the convolutional neural network.
The N enhanced wide-activation residual modules are the main part of the convolutional neural network and greatly improve the nonlinear mapping and learning capability of the model. The present invention sets N to 19. Fig. 2 shows the structure of each enhanced wide-activation residual module, which includes two convolutional layers (3 × 3 Conv) with 3 × 3 kernels, two convolutional layers (1 × 1 Conv) with 1 × 1 kernels, and one ReLU activation function. The mathematical expression of the ReLU activation function is:
y=max(x,0)
where x and y are the input and output values, respectively. The function passes non-negative inputs through unchanged and maps negative inputs to 0, enhancing the nonlinear capability of the convolutional neural network. However, this also causes a certain loss of information and impedes the flow of information in the convolutional neural network. To alleviate this problem, the present invention introduces two 1 × 1 convolutional layers into a common residual module, yielding the enhanced wide-activation residual module. In the enhanced wide-activation residual module, the first 1 × 1 convolutional layer increases the number of channels of the tensor coming from the previous convolutional layer, so that more information can pass through the ReLU, and the second 1 × 1 convolutional layer reduces the number of channels of the tensor activated by the ReLU function back to the original number. Unlike the usual wide-activation residual module, which directly increases the number of output channels of the first 3 × 3 convolutional layer in the residual module to enlarge the activation width (the number of channels of the tensor passing through the ReLU activation function), the enhanced wide-activation residual module of the present invention uses 1 × 1 convolutional layers to achieve this goal. Because a 1 × 1 convolutional layer has far fewer parameters than a 3 × 3 convolutional layer, the enhanced wide-activation residual module can obtain an activation width much larger than that of an ordinary wide-activation residual module without increasing the number of parameters or the computational complexity, greatly improving the information utilization efficiency of the convolutional neural network and the compressed image deblocking performance. The invention sets the number of input and output channels of the 3 × 3 convolutional layers in the enhanced wide-activation residual module to 64 and the activation width to 512.
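A minimal PyTorch sketch of one enhanced wide-activation residual module under the stated settings (64 input/output channels for the 3 × 3 layers, activation width 512) is shown below; the exact ordering of the four convolutional layers is an assumption based on the description of Fig. 2.

```python
import torch.nn as nn

class EnhancedWideActivationResidualBlock(nn.Module):
    """3x3 conv -> 1x1 expansion -> ReLU (wide activation) -> 1x1 reduction
    -> 3x3 conv, wrapped by an identity skip connection."""
    def __init__(self, channels=64, width=512):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),   # 3x3 conv, 64 -> 64
            nn.Conv2d(channels, width, 1),                 # 1x1 conv expands 64 -> 512 channels
            nn.ReLU(inplace=True),                         # more information passes through the ReLU
            nn.Conv2d(width, channels, 1),                 # 1x1 conv reduces 512 -> 64 channels
            nn.Conv2d(channels, channels, 3, padding=1),   # 3x3 conv, 64 -> 64
        )

    def forward(self, x):
        return x + self.body(x)                            # residual (identity skip) connection
```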
The feature decoding network comprises only one convolutional layer with a 3 × 3 kernel and 64 input and output channels; it converts the feature coefficients, nonlinearly mapped by the 19 enhanced wide-activation residual modules, back into pixel values. To prevent vanishing gradients and accelerate the convergence of the convolutional neural network, the invention uses the idea of residual learning and adds the output of the overlapping image block extraction network to the result of the feature decoding network.
Fig. 3 shows the structure of the overlapping image block reconstruction network, where T denotes the input tensor, MGOnes denotes a matrix generator, Deconv 1 and Deconv 2 denote deconvolution layers, ./ denotes element-wise division, and I denotes the output image. If the size of the tensor T is (H, W, C), where H, W, and C denote the height, width, and number of channels of the tensor, respectively, then MGOnes produces an H × W all-one matrix. The weights of Deconv 1 and Deconv 2 are predefined as the filters of Fig. 4(b) and Fig. 4(c), respectively, and are not updated during training. These filters all have size p × p, and their inputs are deconvolved with stride s. In the present invention, s and p are set to 2 and 8, respectively. The overlapping image block reconstruction network restores the p²-dimensional vectors in the tensor to p × p image blocks and puts these blocks back to their original positions in the image, averaging the regions where image blocks overlap. Specifically, in each sliding window of the deconvolution, Deconv 1 sums the corresponding pixel values in the tensor T, which can be regarded as multiple estimates of a certain pixel in the compressed image, and places the summed result at the corresponding position in the image. Then MGOnes generates an all-one matrix with the same spatial size as the tensor T. Next, Deconv 2 takes this all-one matrix as input and generates a weight matrix of the same size as the compressed image, each element of which represents the number of p × p image blocks spanned by the pixel at the corresponding position in the compressed image. Finally, the final deblocked image I is obtained by element-wise division of the output of Deconv 1 by this weight matrix. The above process can be expressed succinctly by the following formula:
I=Deconv1(T)./Deconv2(MGOnes(T))
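One way to realize this formula with fixed-weight transposed convolutions is sketched below; treating MGOnes as a tensor of ones with the same shape as T and reusing one-hot p × p filters in both deconvolution layers are assumptions consistent with the description rather than the exact filters of Fig. 4(b) and 4(c).

```python
import torch
import torch.nn as nn

class OverlappingPatchReconstruction(nn.Module):
    """I = Deconv1(T) ./ Deconv2(MGOnes(T)): Deconv 1 scatters and sums the
    per-patch pixel estimates back to image positions, Deconv 2 counts how
    many patches cover each pixel, and element-wise division averages the
    overlapping estimates."""
    def __init__(self, p=8, s=2):
        super().__init__()
        weight = torch.zeros(p * p, 1, p, p)
        for k in range(p * p):
            weight[k, 0, k // p, k % p] = 1.0               # one-hot scatter filter
        self.deconv1 = nn.ConvTranspose2d(p * p, 1, kernel_size=p, stride=s, bias=False)
        self.deconv2 = nn.ConvTranspose2d(p * p, 1, kernel_size=p, stride=s, bias=False)
        for deconv in (self.deconv1, self.deconv2):
            deconv.weight.data.copy_(weight)
            deconv.weight.requires_grad = False             # predefined weights, not trained

    def forward(self, T):                                   # T: (B, p*p, H', W')
        summed = self.deconv1(T)                            # sum of overlapping patch estimates
        counts = self.deconv2(torch.ones_like(T))           # MGOnes + Deconv 2: per-pixel patch count
        return summed / counts                              # element-wise averaging
```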
It should be noted that the overlapping image block reconstruction network of the present invention is not only a pixel reconstruction operation similar to the sub-pixel convolution layer, i.e. reconstructing the tensor into a matrix, but also a low-pass filter that averages the pixel values in the overlapping regions between image blocks. Blocking artifacts typically manifest as discontinuities at the boundaries between different image blocks, which correspond to high-frequency components in the frequency domain. Therefore, the overlapping image block reconstruction network of the present invention is advantageous for removing blocking artifacts. Furthermore, the weights of the overlapping image block reconstruction network are predefined and sparse, so this sub-network places no additional training burden on the entire compressed image deblocking network and can be executed efficiently.
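Putting the pieces together, the sketch below assembles the complete deblocking network from the component sketches above (overlapping patch extraction, feature encoding, 19 enhanced wide-activation residual blocks, feature decoding with a global residual connection, and overlapping patch reconstruction); the class name and the single-channel input are illustrative assumptions.

```python
import torch.nn as nn

class EWARNDeblocker(nn.Module):
    def __init__(self, p=8, s=2, channels=64, width=512, num_blocks=19):
        super().__init__()
        self.extract = OverlappingPatchExtraction(p, s)          # fixed weights (OPENet)
        self.encode = nn.Conv2d(p * p, channels, 3, padding=1)   # feature encoding (FENet): 64 -> 64 when p = 8
        self.blocks = nn.Sequential(*[
            EnhancedWideActivationResidualBlock(channels, width)
            for _ in range(num_blocks)
        ])
        self.decode = nn.Conv2d(channels, p * p, 3, padding=1)   # feature decoding (FDNet): 64 -> 64 when p = 8
        self.reconstruct = OverlappingPatchReconstruction(p, s)  # fixed weights (OPCNet)

    def forward(self, x):                  # x: (B, 1, H, W) JPEG-compressed image
        patches = self.extract(x)          # (B, p*p, H', W')
        features = self.decode(self.blocks(self.encode(patches)))
        T = patches + features             # global residual: OPENet output added to FDNet output
        return self.reconstruct(T)         # deblocked image
```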
The invention adopts the mean-squared-error criterion as the loss function for training the enhanced wide-activation residual network model, expressed by the following formula:
L(Θ) = (1/M) Σ_{i=1}^{M} ||H(X_i; Θ) - Y_i||²
where Θ denotes the trainable network parameters, H(·) denotes the proposed enhanced wide-activation residual network model, X_i and Y_i denote the i-th compressed image and the corresponding original image in the training samples, respectively, and M is the number of training samples in each batch, set to 16 in the present invention.
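A minimal training-step sketch under this loss is given below; the Adam optimizer and its learning rate are assumptions for the example, since the text specifies only the mean-squared-error loss and the batch size M = 16.

```python
import torch.nn as nn
import torch.optim as optim

model = EWARNDeblocker()                               # sketch model defined above
criterion = nn.MSELoss()                               # mean-squared error; differs from (1/M)*sum||.||^2 only by a constant scale
optimizer = optim.Adam(model.parameters(), lr=1e-4)    # assumed optimizer and learning rate

def train_step(compressed, original):
    """compressed, original: tensors of shape (16, 1, H, W) for one training batch."""
    optimizer.zero_grad()
    restored = model(compressed)                       # H(X_i; Theta)
    loss = criterion(restored, original)
    loss.backward()
    optimizer.step()
    return loss.item()
```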
To verify the effectiveness of the present invention, a number of comparative experiments were performed on the commonly used test sets Classic5 (containing 5 classical images) and BSDS500 (containing 200 training images, 100 validation images, and 200 test images). In the experiments, the present invention is compared with 6 typical compressed image deblocking methods, all of which are based on convolutional neural networks. The 6 compared compressed image deblocking methods are as follows:
the method comprises the following steps: the method proposed by Dong et al, reference "Dong C, Deng Y, Change Long C, et al. compression efficiencies reduction by a default connected network [ C ]. IEEE International Conference on Computer Vision,2015: 576-.
Method 2: the method proposed by Zhang et al.; reference: "Zhang K, Zuo W, Chen Y, et al. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising [J]. IEEE Transactions on Image Processing, 2017, 26(7): 3142-3155."
Method 3: the method proposed by Tai et al.; reference: "Tai Y, Yang J, Liu X, et al. MemNet: A persistent memory network for image restoration [C]. IEEE International Conference on Computer Vision, 2017: 4539-4547."
Method 4: the method proposed by Chen et al.; reference: "Chen H, He X, Qing L, et al. DPW-SDNet: Dual pixel-wavelet domain deep CNNs for soft decoding of JPEG-compressed images [C]. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018: 711-."
Method 5: the method proposed by Liu et al.; reference: "Liu P, Zhang H, Zhang K, et al. Multi-level wavelet-CNN for image restoration [C]. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018: 773-."
Method 6: the method proposed by Zhang et al.; reference: "Zhang Y, Li K, Li K, et al."
The contents of the comparative experiment are as follows:
in experiment 1, 200 test images of a data set BSDS500 were compressed by a MATLAB JPEG encoder with Quality Factors (QF) of 10, 20, 30, and 40, respectively, and then the compressed images were deblock processed by methods 1 to 6 and the present invention, respectively. Table one shows the average of the evaluation indices obtained on 200 test images of the data set BSDS500 using the comparative method and the present invention. The authors of method 3 provided only training models with QF of 10 and 20, so the table one only lists the test results of this method in both cases. The objective evaluation indexes include PSNR (Peak Signal to Noise ratio), SSIM (Structure Similarity index), and PSNR-B. PSNR-B is an objective evaluation index designed for JPEG compressed images. Higher values of these three indicators indicate better deblocking. In addition, for subjective visual comparison, FIG. 5 shows the deblocking results for image "29030" at QF of 10 in 200 test images of data set BSDS 500. The original image, the JPEG compressed image, method 1, method 2, method 3, method 4, method 5, method 6 and the deblocking result of the present invention are shown in fig. 5(a), fig. 5(b), fig. 5(c), fig. 5(d), fig. 5(e), fig. 5(f), fig. 5(g), fig. 5(h) and fig. 5(i), respectively.
In experiment 2, the test set Classic5 was compressed with the MATLAB JPEG encoder at QF 10, 20, 30, and 40, and the compressed images were then deblocked with methods 1 to 6 and with the present invention. Table 2 shows the average PSNR, SSIM, and PSNR-B values obtained by the comparison methods and by the present invention on Classic5. Likewise, for subjective visual comparison, FIG. 6 shows the deblocking results for the image "Barbara" in Classic5 at QF 10. The original image, the JPEG compressed image, and the deblocking results of method 1, method 2, method 3, method 4, method 5, method 6, and the present invention are shown in FIG. 6(a) through FIG. 6(i), respectively.
Table 1: average PSNR, SSIM, and PSNR-B on the 200 BSDS500 test images (table provided as an image in the original publication)
Table 2: average PSNR, SSIM, and PSNR-B on the Classic5 test set (table provided as an image in the original publication)
As can be seen from Tables 1 and 2, compared with the other 6 compressed image deblocking methods, the present invention achieves the highest average PSNR, SSIM, and PSNR-B values at all QFs. As can be seen from FIG. 5, method 1 and method 2 tend to produce severe blurring in the edge and texture regions of the image, while method 3, method 4, method 5, and method 6 alleviate this problem to some extent. The enlarged region in FIG. 6 contains striped textures in two different orientations, one resembling a slash and the other a backslash, which interlace to form a diamond-like pattern. Method 1, method 2, method 3, and method 4 hardly recover the backslash-oriented striped texture, so the diamond pattern can barely be recognized in the images they produce. Methods 5 and 6 recover the backslash-oriented striped texture to some extent, but tend to produce a distorted diamond pattern. In contrast, the present invention obtains a relatively regular diamond pattern and produces a visual effect closer to the original image.
In summary, compared with the other 6 methods, the compressed image deblocking results of the invention have great advantages in subjective and objective evaluation. Therefore, the invention is an effective method for removing the block effect of the compressed image.

Claims (3)

1. The compressed image deblocking method based on the enhanced wide activation residual error network is characterized by comprising the following steps of:
the method comprises the following steps: taking an enhanced wide-activation residual error module as a main construction unit, and simultaneously introducing overlapped image block extraction and reconstruction in a full convolution mode, thereby constructing a network model for removing a compressed image blocking effect;
step two: respectively training compressed image deblocking models under different quality factors by using the convolutional neural network in the step one;
step three: and taking the compressed image as input to obtain a final deblocking image on the basis of the trained compressed image deblocking model.
2. The compressed image deblocking method based on the enhanced wide-activation residual error network according to claim 1, characterized in that the enhanced wide-activation residual module in step one mainly comprises two convolutional layers with 3 × 3 kernels, two convolutional layers with 1 × 1 kernels, and a ReLU activation function, wherein the first 1 × 1 convolutional layer increases the number of channels of the tensor from the previous convolutional layer, so that more information can pass through the ReLU, and the second 1 × 1 convolutional layer reduces the number of channels of the tensor activated by the ReLU function back to the original number; because a 1 × 1 convolutional layer has fewer parameters than a 3 × 3 convolutional layer, the enhanced wide-activation residual module can obtain an activation width much larger than that of an ordinary wide-activation residual module without increasing the number of parameters or the computational complexity, greatly improving the information utilization efficiency of the convolutional neural network and the compressed image deblocking performance.
3. The compressed image deblocking method based on the enhanced wide-activation residual error network according to claim 1, characterized in that the reconstruction of overlapping image blocks in fully convolutional form in step one, i.e. the overlapping image block reconstruction network, mainly comprises two deconvolution layers and a matrix generator; the overlapping image block reconstruction network is not only a pixel reconstruction operation, which restores the p²-dimensional vectors in the tensor into p × p image blocks and puts these blocks back to their original positions in the image, but also a low-pass filter, which averages the pixel values in the overlapping regions between image blocks and thereby effectively removes blocking artifacts in the compressed image; furthermore, the weights of the overlapping image block reconstruction network are predefined and sparse, so this sub-network places no additional training burden on the entire compressed image deblocking network and can be executed efficiently.
CN202010317399.1A 2020-04-21 2020-04-21 Compressed image deblocking method based on enhanced wide-activation residual error network Active CN113542772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010317399.1A CN113542772B (en) 2020-04-21 2020-04-21 Compressed image deblocking method based on enhanced wide-activation residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010317399.1A CN113542772B (en) 2020-04-21 2020-04-21 Compressed image deblocking method based on enhanced wide-activation residual error network

Publications (2)

Publication Number Publication Date
CN113542772A (en) 2021-10-22
CN113542772B CN113542772B (en) 2023-03-24

Family

ID=78093834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010317399.1A Active CN113542772B (en) 2020-04-21 2020-04-21 Compressed image deblocking method based on enhanced wide-activation residual error network

Country Status (1)

Country Link
CN (1) CN113542772B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730453A (en) * 2017-11-13 2018-02-23 携程计算机技术(上海)有限公司 Picture quality method for improving
US20180137603A1 (en) * 2016-11-07 2018-05-17 Umbo Cv Inc. Method and system for providing high resolution image through super-resolution reconstruction
CN110084862A (en) * 2019-04-04 2019-08-02 湖北工业大学 Compression of images perception algorithm based on multi-scale wavelet transformation and deep learning
CN110197468A (en) * 2019-06-06 2019-09-03 天津工业大学 A kind of single image Super-resolution Reconstruction algorithm based on multiple dimensioned residual error learning network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180137603A1 (en) * 2016-11-07 2018-05-17 Umbo Cv Inc. Method and system for providing high resolution image through super-resolution reconstruction
CN107730453A (en) * 2017-11-13 2018-02-23 携程计算机技术(上海)有限公司 Picture quality method for improving
CN110084862A (en) * 2019-04-04 2019-08-02 湖北工业大学 Compression of images perception algorithm based on multi-scale wavelet transformation and deep learning
CN110197468A (en) * 2019-06-06 2019-09-03 天津工业大学 A kind of single image Super-resolution Reconstruction algorithm based on multiple dimensioned residual error learning network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Xinhuan, Ren Chao, et al.: "Dual-domain learning based algorithm for removing compression artifacts from JPEG compressed images", 《智能算法》 (Intelligent Algorithms) *

Also Published As

Publication number Publication date
CN113542772B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
Yu et al. Deep convolution networks for compression artifacts reduction
Fu et al. Jpeg artifacts reduction via deep convolutional sparse coding
Dong et al. Compression artifacts reduction by a deep convolutional network
Chen et al. DPW-SDNet: Dual pixel-wavelet domain deep CNNs for soft decoding of JPEG-compressed images
Anwar et al. Real image denoising with feature attention
Li et al. An efficient deep convolutional neural networks model for compressed image deblocking
Li et al. Multi-channel and multi-model-based autoencoding prior for grayscale image restoration
Wu et al. Deep multi-level wavelet-cnn denoiser prior for restoring blurred image with cauchy noise
Qiao et al. Learning non-local image diffusion for image denoising
Huang et al. Two-step approach for the restoration of images corrupted by multiplicative noise
CN109978772A (en) Based on the deep learning compression image recovery method complementary with dual domain
Zini et al. Deep residual autoencoder for blind universal jpeg restoration
Chen et al. A feature-enriched deep convolutional neural network for JPEG image compression artifacts reduction and its applications
Anwar et al. Attention-based real image restoration
Jia et al. Pixel-attention CNN with color correlation loss for color image denoising
Amaranageswarao et al. Joint restoration convolutional neural network for low-quality image super resolution
CN113542772B (en) Compressed image deblocking method based on enhanced wide-activation residual error network
Peng et al. Lightweight Adaptive Feature De-drifting for Compressed Image Classification
Shin et al. Deep orthogonal transform feature for image denoising
Yeganli et al. Super-resolution using multiple structured dictionaries based on the gradient operator and bicubic interpolation
Zhang et al. CFPNet: A denoising network for complex frequency band signal processing
Amaranageswarao et al. Blind compression artifact reduction using dense parallel convolutional neural network
Zhao et al. Fast blind decontouring network
Yu et al. Local excitation network for restoring a jpeg-compressed image
Albluwi et al. Artifacts reduction in jpeg-compressed images using cnns

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant