CN112188217B - JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning - Google Patents


Info

Publication number
CN112188217B
CN112188217B (application CN201910584104.4A)
Authority
CN
China
Prior art keywords
image
convolutional neural
jpeg compressed
domain
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910584104.4A
Other languages
Chinese (zh)
Other versions
CN112188217A (en)
Inventor
何小海
李兴龙
任超
孙梦笛
熊淑华
普拉迪普卡恩
滕奇志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201910584104.4A
Publication of CN112188217A
Application granted
Publication of CN112188217B
Legal status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/002Image coding using neural networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The invention discloses a JPEG compressed image decompression-effect removal method combining DCT-domain and pixel-domain learning. The method mainly comprises the following steps: constructing a DCT-domain network and a pixel-domain network structure model based on convolutional neural networks for JPEG compressed images; training the two constructed convolutional neural network models with a training image set; predicting outputs for the JPEG compressed image with the two trained convolutional neural networks; fusing the prediction results of the two networks by weighted averaging; and reassembling the fused image tensors into an image to obtain the final decompression-effect processing result. The method can reduce the compression artifacts present in JPEG compressed images and, to a certain extent, recover detail information lost during compression. It can be applied to fields such as image and video compression and digital multimedia communication.

Description

JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning
Technical Field
The invention relates to a quality improvement technology of a compressed image, in particular to a JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning, belonging to the field of digital image processing.
Background
Image compression is an image processing technique that reduces the amount of data by reducing the redundancy among the original image data, saving storage space and bandwidth to some extent. However, when the coding bits are limited, i.e., at high compression ratios, the compressed image exhibits obvious distortion and compression artifacts, which seriously reduce its subjective and objective quality and limit its further application.
The distortion produced by JPEG compression arises because the DCT coefficients are quantized before entropy coding, introducing rounding errors. Decompression-effect removal is a post-processing method independent of the encoder and decoder; it reduces the rounding errors introduced during encoding, markedly improving the subjective and objective quality of the compressed image without changing its bit rate, which makes it highly practical. In recent years, with the rapid development of machine learning and deep learning, compressed-image decompression-effect methods based on convolutional neural networks have come into public view. Compared with traditional post-processing techniques, convolutional-neural-network-based methods obtain higher-quality images at higher processing speed. However, current convolutional-neural-network-based decompression-effect algorithms still have room for improvement in prediction performance, network structure effectiveness, and other aspects.
Disclosure of Invention
The invention aims to improve the quality of JPEG compressed images widely existing in daily life.
The invention provides a JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning, which mainly comprises the following operation steps:
(1) respectively constructing a DCT domain network and a pixel domain network structure model based on a convolutional neural network aiming at a JPEG compressed image;
(2) respectively training the two convolutional neural network models constructed in the step (1) by utilizing a training image set;
(3) respectively predicting and outputting the JPEG compressed image by utilizing the two convolutional neural networks obtained by training in the step (2);
(4) fusing the two prediction results in the step (3) in a weighted average mode;
(5) reassembling the fused image tensors into an image to obtain the final decompression-effect processing result.
Drawings
FIG. 1 is a block diagram of the JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning according to the invention
FIG. 2 is a filter bank of block pixel extraction layers in a convolutional neural network structure
Fig. 3 is a schematic diagram of image-block extraction and an analysis of its receptive field in the convolutional neural network structure: (a) is a diagram of the image-block extraction operation in the network, and (b) is the receptive-field diagram of the proposed network structure
FIG. 4 is a schematic diagram of a wide activation residual block structure for use in a convolutional neural network structure
FIG. 5 is a comparison graph of the decompression effect processing results of the test image 'Barbara' according to the present invention and eight methods (JPEG compression quality factor of 10): wherein, (a) is the original uncompressed image, (b) is the JPEG compressed image, (c) to (j) are the experimental results of the comparison methods 1 to 8, and (k) is the processing result of the present invention
FIG. 6 is a comparison graph of the decompression effect processing results of the test image 'Lighthouse 3' according to the present invention and eight methods (JPEG compression quality factor of 10): wherein, (a) is the original uncompressed image, (b) is the JPEG compressed image, (c) to (j) are the experimental results of the comparison methods 1 to 8, and (k) is the processing result of the present invention
FIG. 7 is a comparison graph of the decompression effect processing results of the test image 'Buildings' according to the invention and eight methods (JPEG compression quality factor is 10): wherein, (a) is the original uncompressed image, (b) is the JPEG compressed image, (c) to (j) are the experimental results of the comparison methods 1 to 8, and (k) is the processing result of the present invention
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
in fig. 1, the JPEG compressed image decompression effect method combining DCT domain and pixel domain learning specifically includes the following five steps:
(1) respectively constructing a DCT domain network and a pixel domain network structure model based on a convolutional neural network aiming at a JPEG compressed image;
(2) respectively training the two convolutional neural network models constructed in the step (1) by utilizing a training image set;
(3) respectively predicting and outputting the JPEG compressed image by utilizing the two convolutional neural networks obtained by training in the step (2);
(4) fusing the two prediction results in the step (3) in a weighted average mode;
(5) reassembling the fused image tensors into an image to obtain the final decompression-effect processing result.
Specifically, in step (1), the constructed convolutional neural network model based on dual-domain (DCT-domain and pixel-domain) learning is shown in FIG. 1. The model mainly comprises a block-extraction convolutional layer, DCT and IDCT transform convolutional layers, and wide-activation residual blocks. First, the DCT-domain branch and the pixel-domain branch share a block-extraction convolutional layer, which converts each image block of size m × m into an image tensor of size m² × 1. The block-extraction operation E can be formulated as:

Y = W_e ⊗ X

where ⊗ denotes the convolution operation and the weight W_e is a filter bank containing m² filters of size m × m. Specifically, E convolves the image with these m² filters; each filter extracts one pixel of an m × m image block, so after convolution every m × m block of the original image becomes an m² × 1 block tensor. As shown in FIG. 2, to extract the top-left pixel of an image block, the top-left element of the corresponding m × m filter is set to 1 and all other elements to zero; moreover, the gradient update of the block-extraction layer is disabled during network training, i.e., its learning rate is set to zero. Likewise, when the element in the first row and second column of the m × m filter is set to 1 and the remaining elements stay zero, the filter extracts the pixel in the first row and second column of each image block. Since all m² filters of the block-extraction layer are sparse, the block-extraction operation E can be performed efficiently.
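As a minimal numpy sketch (not the patent's implementation), the sparse filter bank W_e and the block-extraction operation E described above can be written as follows; the stride used for overlapped extraction is an assumption here, since the text fixes it only implicitly through the convolutional layer:

```python
import numpy as np

def block_extraction_filters(m=8):
    """Build the m*m sparse filters of the block-extraction layer E.

    Filter k is all zeros except a single 1 at position (k // m, k % m),
    so convolving with it copies one pixel of each m x m block into
    channel k of the output tensor.
    """
    W = np.zeros((m * m, m, m))
    for k in range(m * m):
        W[k, k // m, k % m] = 1.0
    return W

def extract_blocks(img, m=8, stride=4):
    """Apply E: slide an m x m window with the given stride and store each
    block as an m^2-channel column of the output tensor."""
    W = block_extraction_filters(m)
    h, w = img.shape
    out_h = (h - m) // stride + 1
    out_w = (w - m) // stride + 1
    out = np.zeros((m * m, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = img[i * stride:i * stride + m, j * stride:j * stride + m]
            # Each filter picks out exactly one pixel of the patch.
            out[:, i, j] = np.tensordot(W, patch, axes=([1, 2], [0, 1]))
    return out
```

With stride = m the blocks do not overlap and each m² × 1 column is simply the flattened block; a smaller stride gives the overlapped extraction used by the network.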
The larger a neuron's receptive field, the more original-image information each pixel of the output result can draw on, i.e., the more features with high-level semantic information can be obtained; conversely, a smaller receptive field is more likely to capture locally informative features. The overlapped image-block extraction indirectly enlarges the receptive field of the network model of the invention; the parameter m in the schematic is set to 8. Taking two convolutions with 3 × 3 kernels (padding 0, stride 1) as an example: without the image-block extraction operation, the receptive field of the two 3 × 3 convolutions is 5 × 5; with the proposed overlapped block extraction, the receptive field of the network grows to 16 × 16. Note that although the image-block extraction is itself a convolution operation, its result is still pixel-domain information, i.e., an image tensor, as shown in FIG. 3(a); the receptive field is therefore computed only over the following two convolution operations, convolution 1 and convolution 2 in FIG. 3(b). On the one hand, the overlapped extraction of image blocks enlarges the receptive field, so the network's predicted output can exploit more of the input image; on the other hand, it predicts each pixel multiple times to a certain extent, which improves network robustness while effectively suppressing blocking artifacts.
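The receptive-field figures quoted above follow from the standard receptive-field recurrence; the small helper below illustrates the arithmetic. Modelling the overlapped block extraction as an 8 × 8 convolution with an assumed stride of 2 (the stride is not stated at this point in the text) reproduces the 16 × 16 value:

```python
def receptive_field(layers):
    """Compute the receptive field of a stack of conv layers.

    `layers` is a list of (kernel_size, stride) pairs, applied in order.
    Standard recurrence: rf grows by (k - 1) * jump, and the jump
    (input-pixel distance between adjacent outputs) multiplies by stride.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Two 3x3 convs alone: receptive field 5.
# Preceded by 8x8 block extraction at an assumed stride of 2: 16.
print(receptive_field([(3, 1), (3, 1)]))          # 5
print(receptive_field([(8, 2), (3, 1), (3, 1)]))  # 16
```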
As shown in fig. 4, the DCT-domain and pixel-domain convolutional neural network structures of the invention both use wide-activation residual blocks. In a wide-activation residual block, the feature channels are expanded before the ReLU activation and restored after it, which effectively improves the network's prediction performance without introducing many additional parameters or much additional computation. In the invention, the convolutional layer before the ReLU has 128 filters and the convolutional layer after it has 64 filters.
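A wide-activation residual block can be sketched in numpy as below; for brevity the two convolutions are reduced to 1 × 1 (pure channel mixing), whereas the patent's blocks use 3 × 3 filters, so this is an illustrative simplification rather than the exact layer:

```python
import numpy as np

def wide_activation_block(x, w_expand, w_reduce):
    """One wide-activation residual block on a (C, H, W) feature map.

    Channels are expanded (64 -> 128) before the ReLU and reduced
    (128 -> 64) after it; the input is added back via the skip path.
    """
    h = np.einsum('oc,chw->ohw', w_expand, x)   # 1x1 conv: expand channels
    h = np.maximum(h, 0.0)                      # ReLU on the wide features
    h = np.einsum('oc,chw->ohw', w_reduce, h)   # 1x1 conv: restore channels
    return x + h                                # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 4, 4))
w_up = rng.standard_normal((128, 64)) * 0.01   # 128 filters before ReLU
w_dn = rng.standard_normal((64, 128)) * 0.01   # 64 filters after ReLU
y = wide_activation_block(x, w_up, w_dn)
```

The block preserves the 64-channel shape, so 18 of them can be chained in series as the text describes.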
The output of the block-extraction layer contains m² channels; each m² × 1 feature of the result represents the pixel values of one image block of the original image (in this specification, multi-dimensional data are written as number × channel × row × column; e.g., m² × 1 × 1 denotes m² channels with row and column sizes of 1). After block extraction, each m × m image block of the original image has been stretched into an m²-dimensional vector, which is then transformed into the DCT domain with an m² × m² DCT transform matrix. In the invention, the DCT transform is realized inside the convolutional neural network as a convolution operation, to increase training speed and reduce the complexity of network training. The DCT transform designed in the invention is expressed as operation D:

F = W_D ⊗ Y

where ⊗ denotes the convolution operation, Y denotes the image-block tensor after the block-extraction operation, and the weight W_D is a filter bank containing m² filters of size m² × 1 × 1. Specifically, W_D is initialized with the DCT transform matrix, and no gradient update is performed on it while training the network. D convolves the tensor with the m² DCT filters; each filter, applied to the block-extraction result, yields one DCT coefficient of the image block, and once all filters have been applied, the m² DCT coefficients of the m × m block are obtained, thereby realizing the DCT transform of the image. The output of the DCT transform layer still has m² channels. This output is then fed into a network structure containing 18 wide-activation residual blocks to learn the residual between the DCT coefficients of the compressed image and the true DCT coefficients. The IDCT transform convolutional layer is designed analogously to the DCT transform convolutional layer, and its weights are likewise not updated during training.
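The fixed weights of such a DCT transform layer can be built from the orthonormal 1-D DCT-II matrix; the sketch below (the exact basis is an assumption, since the patent does not print the matrix) constructs the m² × m² filter bank and shows that the matching IDCT layer is simply its transpose:

```python
import numpy as np

def dct_matrix(m=8):
    """Orthonormal 1-D DCT-II matrix of size m x m."""
    C = np.zeros((m, m))
    for k in range(m):
        for n in range(m):
            C[k, n] = np.cos(np.pi * (2 * n + 1) * k / (2 * m))
    C *= np.sqrt(2.0 / m)
    C[0, :] /= np.sqrt(2.0)   # DC row gets the 1/sqrt(2) normalization
    return C

def dct2d_filter_bank(m=8):
    """m^2 x m^2 matrix W_D whose row k is the k-th 2-D DCT basis, flattened.

    Each row acts as one m^2 x 1 x 1 filter of the DCT transform layer:
    for a row-major flattened block y, W_D @ y gives all m^2 coefficients.
    """
    C = dct_matrix(m)
    return np.kron(C, C)   # separable 2-D DCT = Kronecker product
```

Because W_D is orthonormal, initializing the IDCT layer with W_D.T inverts the transform exactly, which is consistent with freezing both layers during training.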
The DCT coefficients mainly contain global information of the image, so DCT-domain information does not represent the spatial continuity of the image well, and it is difficult to achieve optimal performance by performing the decompression-effect task from the DCT domain alone. To fully utilize the residual redundant information among the pixels of the compressed image, the invention proposes a similar convolutional neural network structure that learns the mapping between the compressed image and the original uncompressed image in the pixel domain. The pixel-domain branch is similar to the DCT-domain branch: it also adopts the effective wide-activation residual structure, with the blocks connected in series to form a feed-forward network. First, image blocks are stretched into tensors by the block-extraction layer E; in the proposed dual-domain decompression-effect network structure, the pixel domain and the DCT domain share this image-block extraction layer. Then, to exploit the redundant information in the pixel domain, the output of the block-extraction layer is fed directly into a structure containing 18 wide-activation residual blocks, which represents and extracts the redundant information nonlinearly. Finally, the information is mapped through a single convolutional layer to output the pixel-domain prediction result.
In the convolutional neural network model of the invention, 18 wide-activation residual blocks are used in each of the DCT-domain and pixel-domain network structures. The image-block extraction layer contains 64 filters of size 8 × 8, i.e., the parameter m is set to 8; the DCT transform layer and the IDCT transform layer each contain 64 filters of size 64 × 1 × 1 to implement the DCT and IDCT transforms inside the network; in all other convolutional layers the filter size is set to 3 × 3.
In step (2), the original uncompressed images in the training set are first compressed under different quality factors (QFs) using the JPEG compression standard; compressed images under the same QF are then paired with their corresponding original uncompressed images to form training samples, which are used to train the proposed dual-domain decompression-effect convolutional neural network. During training, the networks of the DCT-domain branch and the pixel-domain branch are trained separately, and their prediction results are fused at test time.
In step (3), the compressed image is predicted with the pixel-domain and DCT-domain network models trained in step (2). The pixel-domain branch and the DCT-domain branch predict in parallel; since their predictions are generated in different spaces, they have different characteristics.
In step (4), to fuse the results of the two branches well and achieve effective complementarity of their information, the invention combines the prediction outputs of the pixel domain and the DCT domain by weighted averaging, that is:

X̂ = λ · X̂_D + (1 − λ) · X̂_P

where X̂ denotes the final predicted decompression-effect result, λ denotes a weight parameter, and X̂_D and X̂_P denote the prediction outputs of the DCT domain and the pixel domain, respectively. The optimal weight λ was selected through extensive experiments, and the final results show that this weighted averaging further improves decompression performance. In the invention, λ is set to 0.489 when fusing the two domains' prediction results.
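The fusion step itself is a one-liner; note that which branch receives λ versus 1 − λ is an assumption here, since the text only states the value λ = 0.489:

```python
import numpy as np

def fuse_predictions(x_dct, x_pix, lam=0.489):
    """Weighted average of the DCT-domain and pixel-domain outputs.

    Assigning lam to the DCT-domain branch (and 1 - lam to the
    pixel-domain branch) is an assumption for this sketch.
    """
    return lam * x_dct + (1.0 - lam) * x_pix
```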
In step (5), the fused image tensors are reassembled into an image, that is, the m² × 1 × 1 tensors are converted back into m × m image blocks. Since overlapped blocks are used during image-block extraction, the prediction result in overlapped areas is obtained by averaging during reassembly.
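The overlap-averaging reassembly can be sketched as follows; block positions and stride are assumptions consistent with the overlapped extraction described earlier. Splitting an image into overlapping blocks and reassembling them reproduces the image exactly, which confirms the averaging logic:

```python
import numpy as np

def reassemble(blocks, positions, out_shape, m=8):
    """Reassemble overlapping m x m blocks into an image, averaging the
    predicted values wherever blocks overlap."""
    acc = np.zeros(out_shape)   # sum of block predictions per pixel
    cnt = np.zeros(out_shape)   # number of blocks covering each pixel
    for blk, (i, j) in zip(blocks, positions):
        acc[i:i + m, j:j + m] += blk
        cnt[i:i + m, j:j + m] += 1.0
    return acc / np.maximum(cnt, 1.0)
```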
The comparison decompression-effect methods are:
Method 1: the method proposed by Zhang et al., see "Zhang J, Xiong R, Chen Z, et al. CONCOLOR: Constrained non-convex low-rank model for image deblocking [J]. IEEE Transactions on Image Processing, 2016, 25(3): 1246-1259."
Method 2: the method proposed by Liu et al., see "Liu X, Wu X, Zhou J, et al."
Method 3: the method proposed by Zhao et al., see "Zhao C, Zhang J, Ma S, et al. Reducing image compression artifacts by structural sparse representation and quantization constraint prior [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2017, 27(10): 2057-2071."
Method 4: the method proposed by Dong et al., see "Dong C, Deng Y, Change Loy C, et al. Compression artifacts reduction by a deep convolutional network [C]. IEEE International Conference on Computer Vision, 2015: 576-584."
Method 5: the method proposed by Chen et al., see "Chen Y, Pock T. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1256-1272."
Method 6: the method proposed by Zhang et al., see "Zhang K, Zuo W, Chen Y, et al. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising [J]. IEEE Transactions on Image Processing, 2017, 26(7): 3142-3155."
Method 7: the method proposed by Tai et al., see "Tai Y, Yang J, Liu X, et al. MemNet: A persistent memory network for image restoration [C]. IEEE International Conference on Computer Vision, 2017: 4539-4547."
Method 8: the method proposed by Chen et al., see "Chen H, He X, Qing L, et al. DPW-SDNet: Dual pixel-wavelet domain deep CNNs for soft decoding of JPEG-compressed images [C]. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018: 824-833."
The contents of the comparative experiment are as follows:
experiment 1, the results of the decompression effect of compressed images 'Barbara' were tested using methods 1 to 8 and the method of the present invention, respectively. In this experiment, the quality factor of JPEG compression was set to 10. The original uncompressed image of 'barbarbara', the JPEG compressed image, and the decompression effect processing results of the respective methods are shown in fig. 5(a) to 5(k), respectively. The table shows the PSNR (Peak Signal to Noise ratio), SSIM (Structure Similarity index) and PSNR-B parameter values of the processing results of the methods in the experiment, wherein the unit of PSNR and PSNR-B is dB.
Table 1
(table values are provided as an image in the source document)
Experiment 2: the decompression-effect results for the compressed image 'Lighthouse 3' were tested with methods 1 to 8 and the method of the invention. In this experiment, the JPEG quality factor was set to 10. The original uncompressed 'Lighthouse 3' image, the JPEG compressed image, and each method's decompression-effect result are shown in FIGS. 6(a) to 6(k), respectively. Table 2 lists the PSNR, SSIM and PSNR-B values of each method's result in this experiment; PSNR and PSNR-B are in dB.
Table 2
(table values are provided as an image in the source document)
Experiment 3: the decompression-effect results for the compressed image 'Buildings' were tested with methods 1 to 8 and the method of the invention. In this experiment, the JPEG quality factor was set to 10. The original uncompressed 'Buildings' image, the JPEG compressed image, and each method's decompression-effect result are shown in FIGS. 7(a) to 7(k), respectively. Table 3 lists the PSNR, SSIM and PSNR-B values of each method's result in this experiment; PSNR and PSNR-B are in dB.
Table 3
(table values are provided as an image in the source document)
Experiment 4: the images in the Classic5 data set were first compressed with JPEG under different QFs, and the compressed images were then processed with methods 1 to 8 and the method of the invention. In this experiment, the JPEG quality factors were set to 10, 20, 30 and 40. Table 4 lists each method's average PSNR, SSIM and PSNR-B values over the data set; the objective values in the table are ordered as PSNR/SSIM/PSNR-B.
Table 4
(table values are provided as an image in the source document)
As can be seen from the experimental results in FIGS. 5, 6 and 7, a JPEG image at a high compression ratio suffers relatively serious distortion, with obvious compression artifacts such as blocking and ringing. Although the comparison methods can reduce the compression artifacts in the compressed image to a certain extent, their results exhibit over-smoothing, and artifacts in regions with rich texture detail are difficult to remove. In contrast, the results of the invention show no obvious compression noise; the images are clearer, edges are better preserved, and the visual effect is better.
From the PSNR, SSIM and PSNR-B values given in Tables 1 to 4, the invention achieves the highest values on all three indices, a clear improvement over the comparison methods, and can effectively process JPEG compressed images under each QF.
Combining the subjective visual effect and the objective evaluation parameters of the decompression-effect results, the method of the invention processes JPEG compressed images better: it effectively removes the compression artifacts in JPEG compressed images while recovering image information lost during compression. The invention is therefore an effective decompression-effect removal method for JPEG compressed images.

Claims (2)

1. The JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning is characterized by comprising the following steps of:
the method comprises the following steps: respectively constructing a DCT domain network and a pixel domain network structure model based on a convolutional neural network aiming at a JPEG compressed image;
step two: respectively training two convolutional neural network models constructed in the first step by using a training image set;
step three: respectively predicting and outputting the JPEG compressed image by utilizing the two convolutional neural networks obtained by training in the step two;
step four: fusing the two prediction results in the third step by using a weighted average mode;
step five: reassembling the fused image tensors into an image to obtain the final decompression-effect processing result.
2. The method according to claim 1, characterized in that the overlapped image-block extraction in the convolutional neural network structure constructed in step one and the image tensor reassembly strategy in step five are implemented as follows: a predefined filter bank transforms each m × m image block into a tensor of size m² × 1, and the size of the overlap between extracted image blocks is specified by the sliding stride of the convolutional layer; after the network's decompression mapping, the image tensor reassembly strategy in step five converts each m² × 1 × 1 image-block tensor back into an m × m image block and assigns its gray values to the prediction-result image according to the block's position in the image; if overlapped blocks were used in the block partition, the gray value of an overlapped area during reassembly is the average of the corresponding predictions.
CN201910584104.4A 2019-07-01 2019-07-01 JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning Active CN112188217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910584104.4A CN112188217B (en) 2019-07-01 2019-07-01 JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910584104.4A CN112188217B (en) 2019-07-01 2019-07-01 JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning

Publications (2)

Publication Number Publication Date
CN112188217A CN112188217A (en) 2021-01-05
CN112188217B true CN112188217B (en) 2022-03-04

Family

ID=73915299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910584104.4A Active CN112188217B (en) 2019-07-01 2019-07-01 JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning

Country Status (1)

Country Link
CN (1) CN112188217B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096019B (en) * 2021-04-28 2023-04-18 中国第一汽车股份有限公司 Image reconstruction method, image reconstruction device, image processing equipment and storage medium
CN113225590B (en) * 2021-05-06 2023-04-14 深圳思谋信息科技有限公司 Video super-resolution enhancement method and device, computer equipment and storage medium
CN114615507B (en) * 2022-05-11 2022-09-13 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Image coding method, decoding method and related device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002097758A9 (en) * 2001-05-25 2004-02-26 Univ Nanyang Drowning early warning system
FR2906433A1 (en) * 2006-09-22 2008-03-28 Canon Kk Methods and devices for encoding and decoding images, computer program using them, and information medium for implementing them
CN102026000A (en) * 2011-01-06 2011-04-20 Xidian University Distributed video coding system combining the pixel domain and the transform domain
CN103117818A (en) * 2013-01-16 2013-05-22 Nanjing University of Posts and Telecommunications Broadband spectrum sensing method based on space-frequency joint compressed sensing
US8897586B2 (en) * 2012-06-15 2014-11-25 Comcast Cable Communications, Llc Dynamic generation of a quantization matrix for compression of a digital object
CN107563965A (en) * 2017-09-04 2018-01-09 Sichuan University JPEG compressed image super-resolution reconstruction method based on convolutional neural networks
CN109255317A (en) * 2018-08-31 2019-01-22 Northwestern Polytechnical University Aerial image difference detection method based on dual networks


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image deblocking via joint domain learning; Wenshu Zhan, Xiaohai He, Shuhua Xiong, Chao Ren, Honggang Chen; Journal of Electronic Imaging; 2018-05-31; Section 3 *
Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition; Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinq; 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018-06-23; Section 3.2 *
Fast DCT algorithm for arbitrary length based on neural networks; Zhu Youlian, Huang Cheng; Microelectronics & Computer; 2007-02-28; Sections 3-5 *

Also Published As

Publication number Publication date
CN112188217A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
Zhang et al. DMCNN: Dual-domain multi-scale convolutional neural network for compression artifacts removal
CN112188217B (en) JPEG compressed image decompression effect removing method combining DCT domain and pixel domain learning
Liu et al. A comprehensive benchmark for single image compression artifact reduction
CN108900848B (en) Video quality enhancement method based on adaptive separable convolution
CN107197260A (en) Video coding post-filter method based on convolutional neural networks
CN110351568A (en) Video loop filtering apparatus based on a deep convolutional network
CN112801877B (en) Super-resolution reconstruction method of video frame
CN109978772B (en) Compressed image restoration method based on deep learning and double-domain complementation
CN110602494A (en) Image coding and decoding system and method based on deep learning
Sun et al. Reduction of JPEG compression artifacts based on DCT coefficients prediction
CN113766249B (en) Loop filtering method, device, equipment and storage medium in video coding and decoding
CN113554720A (en) Multispectral image compression method and system based on multidirectional convolutional neural network
CN105791877A (en) Adaptive loop filter method in video coding and decoding
CN112218094A (en) JPEG image decompression effect removing method based on DCT coefficient prediction
Luo et al. Lattice network for lightweight image restoration
CN112509094A (en) JPEG image compression artifact elimination algorithm based on cascade residual error coding and decoding network
CN111031315A (en) Compressed video quality enhancement method based on attention mechanism and time dependency
CN113962882B (en) JPEG image compression artifact eliminating method based on controllable pyramid wavelet network
CN112150356A (en) Single compressed image super-resolution reconstruction method based on cascade framework
CN110650339A (en) Video compression method and device and terminal equipment
CN110677644B (en) Video coding and decoding method and video coding intra-frame predictor
CN116418990A (en) Method for enhancing compressed video quality based on neural network
CN110728726A (en) Image compression method based on user interaction and deep neural network
Hu et al. Combine traditional compression method with convolutional neural networks
CN115131254A (en) Constant bit rate compressed video quality enhancement method based on two-domain learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant