CN107251053A - Method and device for reducing compression artifacts in lossy compressed images - Google Patents
Method and device for reducing compression artifacts in lossy compressed images
- Publication number: CN107251053A
- Application number: CN201580075726.4A
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/247—Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids
Abstract
Disclosed is a device for reducing compression artifacts in a lossy compressed image. The device may include: a feature extraction device comprising a first set of filters configured to extract patches from the lossy compressed image and map the extracted patches to a first set of high-dimensional feature vectors; a feature enhancement device in electrical communication with the feature extraction device, comprising a second set of filters configured to denoise each high-dimensional feature vector in the first set and map the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors; a mapping device coupled to the feature enhancement device, comprising a third set of filters configured to non-linearly map each high-dimensional vector in the second set to a restored patch representation; and an aggregation device in electrical communication with the mapping device, configured to aggregate the patch representations to generate a restored clear image.
Description
Technical field
The present invention relates generally to the field of image processing, and more particularly to a method and device for reducing compression artifacts in lossy compressed images.
Background
Lossy compression is a class of data-encoding methods that use inexact approximations, or discard part of the data, to represent the encoded content. Such compression techniques are used to reduce the amount of data that must be stored, handled, and/or transmitted to represent the content. There are many lossy image-compression formats, for example JPEG, WebP, JPEG XR, and HEVC-MSP, among which JPEG remains the most widely adopted.
Lossy compression introduces compression artifacts, particularly when it is used at low bit rates or coarse quantization levels. JPEG compression distortion, for example, is a complex combination of several kinds of distortion, including blocking, ringing, and blurring. Blocking artifacts arise when each block is encoded without considering its correlation with adjacent blocks, causing discontinuities at block boundaries. Ringing artifacts appear along edges because of the coarse quantization of high-frequency components, and blurring results from the loss of those high-frequency components.
Existing algorithms for removing these distortions can be divided into deblocking-based methods and restoration-based methods. Deblocking-based methods focus on removing blocking and ringing artifacts; however, most of them cannot reproduce sharp edges and tend to over-smooth texture regions. Restoration-based methods treat the compression operation as a distortion and propose restoration algorithms. Because restoration-based methods tend to reconstruct the original image directly, their sharpened outputs are usually accompanied by ringing artifacts along edges and abrupt transitions in smooth regions.
Summary of the invention
A brief overview of the disclosure is given below to provide a basic understanding of some aspects of a device for reducing compression artifacts in a lossy compressed image. This summary is not an extensive overview of the disclosure. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of any particular embodiment or of any claim. Its sole purpose is to present some concepts of the disclosure in a simplified form as a prelude to the more detailed description presented later.
According to embodiments of the present application, a device for reducing compression artifacts in a lossy compressed image is disclosed. The device may include: a feature extraction device comprising a first set of filters configured to extract patches from the lossy compressed image and map the extracted patches to a first set of high-dimensional feature vectors; and a feature enhancement device in electrical communication with the feature extraction device, comprising a second set of filters configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors to a second set of high-dimensional feature vectors. The device further includes: a mapping device electrically coupled to the feature enhancement device, comprising a third set of filters configured to non-linearly map each high-dimensional feature vector in the second set to a restored patch representation; and an aggregation device in electrical communication with the mapping device, configured to aggregate the patch representations mapped from all the high-dimensional feature vectors in the second set to generate a restored clear image.
In one aspect, the first set of filters may be configured to extract patches from the lossy compressed image and non-linearly map each extracted patch to a high-dimensional feature vector, the mapped vectors of all patches forming the first set of high-dimensional feature vectors.
In another aspect, the second set of filters may be configured to denoise each high-dimensional feature vector in the first set and non-linearly map the denoised vectors to the second set of high-dimensional feature vectors.
In one aspect, the first, second, and third sets of filters may map the vectors based respectively on a predetermined first parameter, second parameter, and third parameter, and the aggregation device may aggregate the patch representations based on a fourth parameter.
In another aspect, the device further includes a comparison device, which may be coupled to the aggregation device and configured to obtain, by sampling from a predetermined training set, a true uncompressed image corresponding to the lossy compressed image, and to compare the dissimilarity between the restored clear image obtained by aggregation and the corresponding true uncompressed image to generate a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
According to embodiments of the present application, the device may further include a training-set preparation device electrically coupled to the comparison device, wherein the training-set preparation device includes: a cropping device configured to randomly crop multiple sub-images from randomly selected training images to generate a set of true uncompressed sub-images; a lossy-compressed sub-image generator in electrical communication with the cropping device and configured to generate a set of lossy compressed sub-images based on the set of true uncompressed sub-images received from the cropping device; a pairing device in electrical communication with the cropping device and the lossy-compressed sub-image generator and configured to pair each true uncompressed sub-image with its corresponding lossy compressed sub-image; and a collection device in electrical communication with the pairing device and configured to collect the paired true uncompressed and lossy compressed sub-images to generate the predetermined training set.
In one aspect, the lossy-compressed sub-image generator further includes a compression device in electrical communication with the cropping device and configured to encode and decode the true uncompressed sub-images through a compression encoder and decoder to generate the lossy compressed sub-images.
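The training-set preparation just described, random cropping of sub-images followed by a lossy encode/decode round trip, can be sketched as follows. This is an illustrative sketch only: the function name, sub-image size, and quantization step are assumptions, and coarse pixel quantization merely stands in for a real compression encoder/decoder such as JPEG.

```python
import numpy as np

def make_training_pairs(image, sub_size, n_pairs, q_step=32, seed=0):
    """Randomly crop sub-images from a training image and pair each true
    (uncompressed) crop with a degraded copy. A real system would run a
    compression encode/decode round trip (e.g. JPEG); coarse quantization
    here is only a stand-in degradation for illustration."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    pairs = []
    for _ in range(n_pairs):
        i = rng.integers(0, h - sub_size + 1)
        j = rng.integers(0, w - sub_size + 1)
        clean = image[i:i + sub_size, j:j + sub_size].copy()
        lossy = np.round(clean / q_step) * q_step  # stand-in for JPEG
        pairs.append((lossy, clean))
    return pairs

img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64) % 256
pairs = make_training_pairs(img, sub_size=16, n_pairs=4)
print(len(pairs), pairs[0][0].shape)  # 4 (16, 16)
```

Each (lossy, clean) pair plays the role of one (lossy compressed sub-image, true uncompressed sub-image) pair in the predetermined training set.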
In one aspect, the reconstruction error includes a mean squared error.
According to an embodiment of the present application, a method for reducing compression artifacts in a lossy compressed image is disclosed. The method may include: extracting, by a feature extraction device comprising a first set of filters, patches from the lossy compressed image and mapping the extracted patches to a first set of high-dimensional feature vectors; denoising, by a feature enhancement device in electrical communication with the feature extraction device and comprising a second set of filters, each high-dimensional feature vector in the first set, and mapping the denoised vectors to a second set of high-dimensional feature vectors; non-linearly mapping, by a mapping device electrically coupled to the feature enhancement device and comprising a third set of filters, each high-dimensional feature vector in the second set to a restored patch representation; and aggregating, by an aggregation device in electrical communication with the mapping device, the patch representations mapped from all the high-dimensional feature vectors in the second set to generate a restored clear image.
According to an embodiment of the present application, a device for reducing compression artifacts in a lossy compressed image is disclosed. The device may include: a reconstruction unit configured to reconstruct the lossy compressed image into a restored clear image based on predetermined parameters, and a training unit configured to train a convolutional neural network system using a predetermined training set to determine the parameters used by the reconstruction unit. The reconstruction unit may include: a feature extraction device comprising a first set of filters configured to extract patches from the lossy compressed image and map the extracted patches to a first set of high-dimensional feature vectors; a feature enhancement device in electrical communication with the feature extraction device, comprising a second set of filters configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors to a second set of high-dimensional feature vectors; a mapping device electrically coupled to the feature enhancement device, comprising a third set of filters configured to non-linearly map each high-dimensional vector in the second set to a restored patch representation; and an aggregation device in electrical communication with the mapping device, configured to aggregate the patch representations mapped from all the high-dimensional vectors in the second set to generate the restored clear image. The feature extraction device, the feature enhancement device, the mapping device, and the aggregation device each comprise at least one convolutional layer, and the convolutional layers are interconnected in sequence to form the convolutional neural network system.
According to an embodiment of the present application, a system for reducing compression artifacts in a lossy compressed image is disclosed. The system may include a memory for storing executable components and a processor for executing the executable components to perform the operations of the system. The executable components include: a feature extraction component configured to extract patches from the lossy compressed image and map the extracted patches to a first set of high-dimensional feature vectors; a feature enhancement component configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors to a second set of high-dimensional feature vectors; a mapping component configured to non-linearly map each high-dimensional vector in the second set to a restored patch representation; and an aggregation component configured to aggregate the patch representations mapped from all the high-dimensional vectors in the second set to generate a restored clear image.
The following description and accompanying drawings set forth certain illustrative aspects of the disclosure. These aspects, however, indicate only some of the various ways in which the principles of the present invention may be employed. Other aspects of the present invention will become apparent from the following detailed description when considered in conjunction with the drawings.
Brief description of the drawings
Exemplary, non-limiting embodiments of the present invention are described with reference to the accompanying drawings. The drawings are illustrative and are generally not drawn to exact scale. The same or similar elements in different figures are denoted by the same reference numerals.
Fig. 1 is a schematic diagram showing a device for reducing compression artifacts in a lossy compressed image consistent with an embodiment of the present application.
Fig. 2 is a schematic diagram showing a device for reducing compression artifacts in a lossy compressed image consistent with another embodiment of the present application.
Fig. 3 is a schematic diagram showing a convolutional neural network system consistent with some disclosed embodiments.
Fig. 4 is a schematic diagram showing a training unit of the device consistent with some disclosed embodiments.
Fig. 5 is a schematic diagram showing a training-set preparation device of the training unit consistent with some disclosed embodiments.
Fig. 6 is a schematic flowchart showing a method for reducing compression artifacts in a lossy compressed image consistent with some disclosed embodiments.
Fig. 7 is a schematic flowchart showing a method for training a convolutional neural network system used to reduce compression artifacts in a lossy compressed image consistent with some disclosed embodiments.
Fig. 8 is a schematic diagram showing a system for reducing compression artifacts in a lossy compressed image consistent with an embodiment of the present application.
Detailed description
Reference will now be made in detail to some specific embodiments of the invention, including the best modes contemplated by the inventors for carrying out the invention. Examples of these specific embodiments are illustrated in the accompanying drawings. While the invention is described in conjunction with these specific embodiments, it will be understood that this is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well-known process operations have not been described in detail so as not to unnecessarily obscure the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.
Referring to Fig. 1, a device 1000 may include a feature extraction device 100, a feature enhancement device 200, a mapping device 300, and an aggregation device 400. Hereinafter, the feature extraction device 100, the feature enhancement device 200, the mapping device 300, and the aggregation device 400 are described in detail. For ease of description, Y denotes the lossy compressed image and F(Y) denotes the restored clear image, which should be as similar as possible to the true uncompressed image X.
According to one embodiment, the feature extraction device 100 includes the first set of filters, which are configured to extract patches from the lossy compressed image and map the extracted patches to the first set of high-dimensional feature vectors. For example, the first set of filters maps the extracted patches to the first set of high-dimensional feature vectors through a function F' and a first parameter, where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the first parameter is determined by a predetermined parameter associated with the lossy compressed image.
In one embodiment, the first set of high-dimensional feature vectors may comprise a set of feature maps, the number of which equals the dimensionality of the vectors. A popular strategy in image restoration is to densely extract patches and then represent them by a set of pre-trained bases, such as PCA (principal component analysis), DCT (discrete cosine transform), or Haar bases.
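Dense patch extraction as mentioned above can be sketched in a few lines of NumPy; the function name and the stride parameter are illustrative assumptions, not from the patent.

```python
import numpy as np

def extract_patches(image, f, stride=1):
    """Densely extract f x f patches from a 2-D image and flatten each
    into a row vector (one f*f-dimensional vector per patch)."""
    h, w = image.shape
    patches = []
    for i in range(0, h - f + 1, stride):
        for j in range(0, w - f + 1, stride):
            patches.append(image[i:i + f, j:j + f].ravel())
    return np.array(patches)

# A 5x5 image with 3x3 patches and stride 1 yields 3*3 = 9 patches,
# each a 9-dimensional vector.
img = np.arange(25, dtype=np.float64).reshape(5, 5)
P = extract_patches(img, f=3)
print(P.shape)  # (9, 9)
```

With stride 1, interior pixels are covered by several overlapping patches, which is what makes the later aggregation of restored patch representations meaningful.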
According to one embodiment, the operation of the feature extraction device 100 can be formulated as:
F1(Y) = F'(W1 * Y + B1)     (1)
where W1 and B1 denote the filters and the biases respectively, and F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)). Here, W1 is of size c × f1 × f1 × n1, where c is the number of channels of the input image, f1 is the spatial size of a filter, and n1 is the number of filters. Intuitively, W1 applies n1 convolutions to the image, each with a kernel of size c × f1 × f1. The output is composed of n1 feature maps. B1 is an n1-dimensional vector, each element of which is associated with a filter.
The feature enhancement device 200 may be in electrical communication with the feature extraction device 100 and may include the second set of filters, configured to denoise each high-dimensional feature vector in the first set and map the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors, i.e., a set of comparatively cleaner feature vectors.
According to one embodiment, the feature enhancement device 200 is configured to denoise each high-dimensional feature vector in the first set and map the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors through the function F' and a second parameter, where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the second parameter is determined by a predetermined parameter associated with the first set of high-dimensional feature vectors.
In this embodiment, the feature extraction device 100 extracts an n1-dimensional feature for each patch. The second set of filters maps these n1-dimensional vectors to a set of n2-dimensional vectors. Each mapped vector is conceptually a comparatively cleaner feature vector. These vectors comprise another set of feature maps.
According to one embodiment, the feature enhancement can be formulated as:
F2(Y) = F'(W2 * F1(Y) + B2)     (2)
where W2 is of size n1 × f2 × f2 × n2 and B2 is an n2-dimensional vector.
As illustrated, the device 1000 may also include the mapping device 300. The mapping device 300 may be coupled to the feature enhancement device 200 and includes the third set of filters, configured to non-linearly map each high-dimensional vector in the second set to a restored patch representation.
According to one embodiment, the mapping device 300 is configured to non-linearly map each high-dimensional vector to a patch representation through the function F' and a third parameter, where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the third parameter is determined by a predetermined parameter associated with the second set of high-dimensional feature vectors (i.e., the cleaner high-dimensional feature vectors).
In one embodiment, the feature enhancement device 200 generates a set of n2-dimensional feature vectors. The mapping device 300 maps each of these n2-dimensional vectors to an n3-dimensional vector. Each mapped vector is conceptually the representation of a restored patch. These vectors comprise another set of feature maps.
According to one embodiment, the mapping can be formulated as:
F3(Y) = F'(W3 * F2(Y) + B3)     (3)
where W3 is of size n2 × f3 × f3 × n3 and B3 is an n3-dimensional vector. Each of the output n3-dimensional vectors is conceptually the representation of a restored patch to be used for reconstruction.
As illustrated, the device 1000 may also include the aggregation device 400. The aggregation device 400 may be in electrical communication with the mapping device 300 and is configured to aggregate the patch representations to generate the restored clear image. The aggregation can be formulated as:
F(Y) = W4 * F3(Y) + B4     (4)
where W4 is of size n3 × f4 × f4 × c and B4 is a c-dimensional vector.
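Composing Eqs. (1) through (4) gives the complete four-stage forward pass. The sketch below uses one convolutional layer per stage with toy channel counts and filter sizes (all assumed for illustration, not taken from the patent); the first three layers apply F'(x) = max(0, x), while the aggregation layer is linear, matching Eq. (4).

```python
import numpy as np

def conv_layer(x, W, B, relu=True):
    """x: (c_in, H, W) feature maps; W: (c_out, c_in, f, f); B: (c_out,).
    'Valid' multi-channel convolution followed by an optional ReLU."""
    c_out, c_in, f, _ = W.shape
    H, Wd = x.shape[1] - f + 1, x.shape[2] - f + 1
    out = np.empty((c_out, H, Wd))
    for k in range(c_out):
        acc = np.zeros((H, Wd))
        for c in range(c_in):
            for i in range(H):
                for j in range(Wd):
                    acc[i, j] += np.sum(x[c, i:i + f, j:j + f] * W[k, c])
        out[k] = acc + B[k]
    return np.maximum(0.0, out) if relu else out

rng = np.random.default_rng(1)
c, n1, n2, n3 = 1, 8, 6, 6          # toy channel counts (assumed)
f1, f2, f3, f4 = 3, 1, 3, 3         # toy filter sizes (assumed)
Y = rng.standard_normal((c, 12, 12)) * 0.1

W1, B1 = rng.standard_normal((n1, c, f1, f1)) * 0.1, np.zeros(n1)
W2, B2 = rng.standard_normal((n2, n1, f2, f2)) * 0.1, np.zeros(n2)
W3, B3 = rng.standard_normal((n3, n2, f3, f3)) * 0.1, np.zeros(n3)
W4, B4 = rng.standard_normal((c, n3, f4, f4)) * 0.1, np.zeros(c)

F1 = conv_layer(Y, W1, B1)               # Eq. (1): feature extraction
F2 = conv_layer(F1, W2, B2)              # Eq. (2): feature enhancement
F3 = conv_layer(F2, W3, B3)              # Eq. (3): non-linear mapping
F = conv_layer(F3, W4, B4, relu=False)   # Eq. (4): aggregation (linear)
print(F.shape)  # (1, 6, 6): restored image, shrunk by the 'valid' convolutions
```

The spatial shrinkage (12 to 6 here) comes from using "valid" convolutions without padding; a padded implementation would keep the output the same size as the input.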
According to this embodiment, the device 1000 may also include a comparison device (not shown), which is coupled to the aggregation device 400 and configured to obtain, by sampling from the predetermined training set, true uncompressed sub-images corresponding to the lossy compressed sub-images, and to compare the dissimilarity between the restored clear sub-images received from the aggregation device 400 and the sampled true uncompressed sub-images to generate a reconstruction error. For example, the reconstruction error includes a mean squared error. The reconstruction error is back-propagated to determine parameters such as W1, W2, W3, W4, B1, B2, B3, and B4.
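The mean-squared reconstruction error between the restored image F(Y) and the ground truth X can be computed as below; in training it would be back-propagated to update W1 through W4 and B1 through B4, which this sketch does not attempt to show.

```python
import numpy as np

def reconstruction_error(restored, ground_truth):
    """Mean squared error between the restored clear image F(Y) and the
    true uncompressed image X."""
    return float(np.mean((restored - ground_truth) ** 2))

X = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy ground truth
F = np.array([[1.0, 2.0], [3.0, 6.0]])   # toy restoration, one pixel off by 2
print(reconstruction_error(F, X))        # 1.0  (= 2**2 / 4 pixels)
```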
Fig. 2 is a schematic diagram showing a device 1000' for reducing compression artifacts in a lossy compressed image consistent with another embodiment of the present application. As shown in Fig. 2, the device 1000' may include a reconstruction unit 100' and a training unit 200'. The reconstruction unit 100' is configured to reconstruct the lossy compressed image into a restored clear image based on predetermined parameters.
Embodiment according to Fig. 2, reconfiguration unit 100' may also include feature extracting device 110', feature enhancing equipment
120', mapped device 130' and polymerization unit 140'.In one embodiment, feature extracting device 110', feature enhancing equipment
120', mapped device 130' and polymerization unit 140' can include at least one convolutional layer respectively, and convolutional layer is mutually interconnected successively
Connect, to form convolutional neural networks system.
Fig. 3 shows the layer structure of the convolutional neural network system in a mathematical simulation model. In one embodiment, each of the feature extraction device 110', the feature enhancement device 120', the mapping device 130', and the aggregation device 140' may be modeled as at least one convolutional layer, with a different operation carried out in each convolutional layer.
In this embodiment, the feature extraction device 110' is configured to extract blocks from the lossy compressed image and map the extracted blocks to a first set of high-dimensional feature vectors. This is equivalent to convolving the image with the first set of filters described above.
The feature enhancement device 120' is configured to be in electrical communication with the feature extraction device 110', to denoise each high-dimensional feature vector in the first set, and to map the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors, i.e., a set of relatively cleaner feature vectors. This is equivalent to applying the second set of filters described above.
The mapping device 130' is configured to be coupled to the feature enhancement device 120' and to non-linearly map each high-dimensional vector in the second set to a recovered patch-wise representation. This is equivalent to applying the third set of filters described above.
The aggregation device 140' is configured to be in electrical communication with the mapping device 130' and to aggregate the patch-wise representations mapped from all of the high-dimensional vectors in the second set, so as to generate the recovered clear image.
In one embodiment, the feature extraction device 110', the feature enhancement device 120', the mapping device 130', and the aggregation device 140' each comprise at least one convolutional layer, and the convolutional layers are connected to one another in sequence to form a convolutional neural network system. Convolutional neural networks date back several decades and have recently shown explosive popularity, partly owing to their success in image classification. They are also commonly used for natural-image denoising and for removing noise patterns (dirt/rain).
Alternatively, more convolutional layers may be added to increase the non-linearity. However, this would significantly increase the complexity of the convolutional neural network system and thus require more training data and time.
The training unit 200' is configured to train the convolutional neural network system using a predetermined training set, so as to optimize the parameters used by the reconstruction unit, such as W1, W2, W3, W4, B1, B2, B3, and B4. According to the embodiment shown in Fig. 4, the training unit 200' may further comprise a sampling device 210', a comparison device 220', and a back-propagation device 230'.
The sampling device 210' may be configured to sample the predetermined training set to obtain lossy compressed sub-images and their corresponding ground-truth uncompressed sub-images, and to input the lossy compressed sub-images to the convolutional neural network system. Here, "sub-image" indicates that these samples are treated as small "images" rather than "blocks", in the sense that "blocks" overlap and require some averaging as post-processing, whereas "sub-images" need no such post-processing.
The comparison device 220' may be configured to compare the dissimilarity between the clear sub-images reconstructed by the convolutional neural network system from the input lossy compressed sub-images and the corresponding ground-truth uncompressed sub-images, so as to generate a reconstruction error. For example, the reconstruction error may comprise a mean squared error, which is minimized using stochastic gradient descent with standard back-propagation.
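The gradient-descent minimization of a mean-squared error can be illustrated on a deliberately tiny problem: fitting a single scalar weight w so that y ≈ w * x. This toy sketch is my illustration only; the actual system back-propagates the same kind of error through all convolutional layers to update W1-W4 and B1-B4:

```python
def fit_scale(xs, ys, lr=0.01, steps=500):
    """Minimize the mean squared error L = (1/n) * sum((w*x - y)^2)
    over a single scalar weight w by gradient descent."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # dL/dw = (2/n) * sum((w*x - y) * x)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w
```

Because the loss is quadratic in w, each update contracts the distance to the optimum by a fixed factor, so the iteration converges geometrically for a small enough learning rate.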
The back-propagation device 230' is configured to propagate the reconstruction error backwards through the convolutional neural network system, so as to adjust the weights of the connections between the neurons of the entire convolutional neural network system.
It should be noted that, as long as the reconstruction error is derivable, the convolutional neural network system does not exclude the use of other kinds of reconstruction errors. If a better perceptually motivated metric is given during training, the convolutional neural network system can flexibly adapt to that metric.
In one embodiment, the devices 1000 and 1000' may further comprise a training set preparation device coupled to the comparison device and configured to prepare the predetermined training set used to train the convolutional neural network system. Fig. 5 is a schematic diagram of the training set preparation device. As shown, the training set preparation device may comprise a cropper 241', a lossy compressed sub-image generator 242', a pairing device 243', and a collector 244'.
The cropper 241' may be configured to randomly crop a plurality of sub-images from randomly selected training images to generate a set of ground-truth uncompressed sub-images. For example, the cropper 241' may crop out n sub-images of m × m pixels. The lossy compressed sub-image generator 242' may be in electrical communication with the cropper 241' and configured to generate a set of lossy compressed sub-images based on the ground-truth uncompressed sub-images received from the cropper 241'. The pairing device 243' may be in electrical communication with the cropper 241' and the lossy compressed sub-image generator 242' and configured to pair each ground-truth uncompressed sub-image with the corresponding lossy compressed sub-image. The collector 244' may be in electrical communication with the pairing device 243' and configured to collect all paired sub-images to form the predetermined training set.
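The crop / compress / pair / collect pipeline can be sketched as follows. This is an illustration under stated assumptions: images are nested lists of integer pixels, and `quantize` is a deliberately simple stand-in for the real compression encoder/decoder round trip described in the application:

```python
import random

def quantize(subimg, step=16):
    """Stand-in for a lossy encode/decode round trip: uniform
    quantization of pixel values, which introduces compression-like
    distortion. The application instead uses a real codec."""
    return [[(v // step) * step + step // 2 for v in row] for row in subimg]

def prepare_training_set(images, m, n, seed=0):
    """Randomly crop n sub-images of m x m pixels from the given
    training images, generate a lossy version of each, and collect
    the (lossy, ground-truth) pairs."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        img = rng.choice(images)
        H, W = len(img), len(img[0])
        y, x = rng.randrange(H - m + 1), rng.randrange(W - m + 1)
        truth = [row[x:x + m] for row in img[y:y + m]]
        pairs.append((quantize(truth), truth))
    return pairs
```

Each element of the returned list is one (lossy compressed sub-image, ground-truth sub-image) pair, matching the pairing and collection roles of the devices 243' and 244'.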
According to one embodiment, the lossy compressed sub-image generator 242' may comprise a compression device in electrical communication with the cropper 241' and configured to encode and decode the ground-truth sub-images through a compression encoder and decoder, so as to generate the set of lossy compressed sub-images.
Fig. 6 is a schematic flowchart of a method 2000 for reducing compression artifacts of a lossy compressed image, consistent with some disclosed embodiments. The method 2000 is described in detail below with reference to Fig. 6.
In step S210, a feature extraction device comprising a first set of filters extracts blocks from the lossy compressed image and maps each extracted block to a high-dimensional feature vector, forming a first set of high-dimensional feature vectors. In one embodiment, these vectors comprise a set of feature maps, the number of which equals the dimensionality of the vectors. A popular strategy in image restoration is to densely extract blocks and then represent them by a set of pre-trained bases, such as PCA, DCT, or Haar.
In step S220, a feature enhancement device, which is in electrical communication with the feature extraction device and comprises a second set of filters, denoises each high-dimensional feature vector in the first set and maps the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors. In this embodiment, the feature extraction device extracts an n1-dimensional feature for each block. The second set of filters maps these n1-dimensional vectors to a set of n2-dimensional vectors. Each mapped vector is conceptually a relatively cleaner feature vector. These vectors comprise another set of feature maps.
In step S230, a mapping device, which is coupled to the feature enhancement device and comprises a third set of filters, non-linearly maps each high-dimensional vector in the second set to a recovered patch-wise representation. In this embodiment, the feature enhancement device generates a set of n2-dimensional feature vectors. The mapping device maps each of these n2-dimensional vectors to an n3-dimensional vector. Each mapped vector is conceptually a recovered patch-wise representation. These vectors comprise another set of feature maps.
In step S240, an aggregation device in electrical communication with the mapping device aggregates the patch-wise representations obtained by mapping all of the high-dimensional vectors in the second set, so as to generate the recovered clear image. In one embodiment, steps S210-S230 can be simulated by the formulas (1)-(3) above.
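Structurally, steps S210-S240 are a straight composition of four stages, each consuming the previous stage's feature maps. The following toy sketch (my illustration, not the application's implementation) shows only that composition, using simple element-wise functions as stand-ins for the four convolutional stages:

```python
def reconstruct(Y, layers):
    """Chain the four stages: F(Y) = L4(L3(L2(L1(Y)))).
    Each element of `layers` maps one stage's output to the next,
    mirroring the composed formulas (1)-(4)."""
    out = Y
    for layer in layers:
        out = layer(out)
    return out
```

The real stages are convolutional layers rather than element-wise maps, but the data flow, each stage's output feeding the next with no side channels, is exactly this.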
According to one embodiment, blocks may be extracted from the lossy compressed image, and each extracted block may be mapped to a high-dimensional feature vector by a function F' with first parameters, where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the first parameters are determined from the predetermined parameters associated with the lossy compressed image.
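The three candidate non-linearities F'(x) named above, the rectifier, the logistic sigmoid, and the hyperbolic tangent, can be written down directly; this is a plain illustration of those formulas applied element-wise to a feature vector:

```python
import math

NONLINEARITIES = {
    "relu":    lambda x: max(0.0, x),                 # max(0, x)
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),  # 1/(1 + exp(-x))
    "tanh":    lambda x: math.tanh(x),                # tanh(x)
}

def apply_F(name, vector):
    """Apply the chosen F' element-wise to a feature vector."""
    f = NONLINEARITIES[name]
    return [f(v) for v in vector]
```

All three are differentiable almost everywhere, which is what allows the reconstruction error to be back-propagated through whichever F' is chosen.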
According to one embodiment, the first set of high-dimensional feature vectors may be denoised, and the denoised high-dimensional feature vectors may be non-linearly mapped to the second set of high-dimensional feature vectors, i.e., a set of relatively cleaner feature vectors, by a function F' with second parameters, where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the second parameters are determined from the predetermined parameters associated with the first set of high-dimensional feature vectors.
According to one embodiment, each high-dimensional vector in the second set may be non-linearly mapped to a recovered patch-wise representation by a function F' with third parameters, where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the third parameters are determined from the predetermined parameters associated with the second set of high-dimensional vectors.
According to one embodiment, after the patch-wise representations have been aggregated to generate the recovered clear image, the method 2000 may further comprise the steps of sampling the predetermined training set to obtain ground-truth uncompressed sub-images corresponding to the lossy compressed sub-images, and comparing the dissimilarity between the aggregated recovered clear sub-images and the corresponding ground-truth uncompressed sub-images to generate a reconstruction error. The reconstruction error is back-propagated to optimize the parameters, for example W1, W2, W3, W4, B1, B2, B3, and B4.
According to one embodiment, before the ground-truth uncompressed sub-images corresponding to the lossy compressed sub-images are obtained by sampling the predetermined training set, the method 2000 further comprises the step of preparing the predetermined training set. Specifically, a plurality of sub-images are first cropped from randomly selected training images to generate a set of ground-truth uncompressed sub-images; for example, n sub-images of m × m pixels may be cropped out. Next, a set of lossy compressed sub-images is generated based on the set of ground-truth uncompressed sub-images. Then, each ground-truth uncompressed sub-image is paired with the corresponding lossy compressed sub-image. Finally, all paired sub-images are collected to form the predetermined training set.
According to one embodiment, a method 3000 for training the convolutional neural network system for reducing compression artifacts of a lossy compressed image is shown. The training method 3000 is described in detail below with reference to Fig. 7.
As shown in Fig. 7, in step S310, lossy compressed sub-images and their corresponding ground-truth uncompressed sub-images are obtained by sampling the predetermined training set. In step S320, the lossy compressed sub-images are reconstructed into recovered clear sub-images by the convolutional neural network system. In step S330, a reconstruction error is generated by comparing the dissimilarity between the reconstructed clear sub-images and the ground-truth uncompressed sub-images. In step S340, the reconstruction error is back-propagated to the convolutional neural network system to adjust the weights of the connections between the neurons of the convolutional neural network system. Steps S310-S340 are repeated until the average reconstruction error falls below a predetermined threshold, for example, a threshold equal to half of the mean squared error between the lossy compressed sub-images and the ground-truth uncompressed sub-images in the predetermined training set.
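The outer loop of steps S310-S340, iterate until the average reconstruction error drops below a threshold, can be sketched as follows. The `reconstruct` and `update` callables are placeholders for the network's forward pass and back-propagation step; everything here is an illustrative sketch, not the application's implementation:

```python
def train_until_threshold(pairs, reconstruct, update, error_fn,
                          threshold, max_epochs=100):
    """Repeat S310-S340: for each (lossy, ground-truth) pair, run the
    forward pass, measure the reconstruction error, apply an update
    (standing in for back-propagation), and stop once the average
    error over the training set falls below the threshold.
    Returns the number of epochs run."""
    for epoch in range(max_epochs):
        errors = []
        for lossy, truth in pairs:
            recovered = reconstruct(lossy)
            err = error_fn(recovered, truth)
            update(err)
            errors.append(err)
        if sum(errors) / len(errors) < threshold:
            return epoch + 1
    return max_epochs
```

The stopping rule compares the per-epoch average error against the fixed threshold, as in the paragraph above; any differentiable error function could be substituted for `error_fn`.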
Referring to Fig. 8, a system 4000 is shown. The system 4000 comprises a memory 402 that stores executable components and a processor 404 coupled to the memory 402, the processor 404 executing the executable components to perform operations of the system 4000. The executable components may comprise: a feature extraction component 410 configured to extract blocks from the lossy compressed image and map the extracted blocks to a first set of high-dimensional feature vectors; and a feature enhancement component 420 configured to denoise each high-dimensional feature vector in the first set and map the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors. In addition, the executable components may further comprise: a mapping component 430 configured to non-linearly map each high-dimensional vector in the second set to a recovered patch-wise representation; and an aggregation component 440 configured to aggregate the patch-wise representations mapped from all of the high-dimensional vectors in the second set, so as to generate the recovered clear image.
In one aspect, the feature extraction component 410 is configured to extract blocks from the lossy compressed image and non-linearly map each of the extracted blocks to a high-dimensional feature vector, the mapped vectors of all the blocks forming the first set of high-dimensional feature vectors.
In one embodiment, the feature enhancement component 420 is configured to denoise each high-dimensional feature vector in the first set and non-linearly map the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors.
In one embodiment, the feature extraction component 410, the feature enhancement component 420, and the mapping component 430 map the vectors based on predetermined first, second, and third parameters, respectively.
According to another embodiment, the executable components further comprise a comparison component coupled to the aggregation component and configured to obtain, by sampling the predetermined training set, ground-truth uncompressed images corresponding to the lossy compressed sub-images, and to compare the dissimilarity between the aggregated recovered clean images received from the aggregation component and the corresponding ground-truth uncompressed images, so as to generate a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
In one embodiment, the executable components further comprise a training set preparation component coupled to the comparison component. The training set preparation component further comprises: a cropper configured to randomly crop a plurality of sub-images from randomly selected training images to generate a set of ground-truth uncompressed sub-images; a lossy compressed sub-image generator in electrical communication with the cropper and configured to generate a set of lossy compressed sub-images based on the ground-truth uncompressed sub-images received from the cropper; a pairing module in electrical communication with the cropper and the generator and configured to pair each ground-truth uncompressed sub-image with the corresponding lossy compressed sub-image; and a collector in electrical communication with the pairing module and configured to collect the paired ground-truth uncompressed sub-images and lossy compressed sub-images to form the predetermined training set.
In one embodiment, the lossy compressed sub-image generator further comprises a compression module in electrical communication with the cropper and the generator and configured to encode and decode the ground-truth sub-images through a compression encoder and decoder, so as to generate the set of lossy compressed sub-images.
In contrast to existing methods, the method of the present application does not explicitly learn dictionaries or manifolds for modeling the block domain; these are implicitly realized inside the convolutional layers. Moreover, feature extraction, feature enhancement, and aggregation are also formulated as convolutional layers and are therefore involved in the optimization. The method and device of the present application address different types of compression artifacts and provide effective reduction of various compression artifacts in different image regions. In the method and device of the present application, the entire convolutional neural network is obtained fully by training, without any pre-processing or post-processing. Owing to its lightweight structure, the device and method of the present application achieve performance superior to the prior art.
Embodiments within the scope of the invention may be realized in digital electronic circuitry, or in computer hardware, firmware, software, or combinations thereof. Devices within the scope of the invention may be realized in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps within the scope of the invention may be performed by a programmable processor executing a program of instructions, so as to perform functions of the invention by operating on input data and generating output.
Embodiments within the scope of the invention may advantageously be realized by one or more computer programs executed on a programmable system comprising at least one programmable processor, at least one input device, and at least one output device, the programmable processor being coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system. Each computer program may be implemented in a high-level procedural or object-oriented programming language, or, if desired, in assembly or machine language; in any case, the language may be a compiled or interpreted language. Suitable processors include, for example, general-purpose and special-purpose microprocessors. Generally, a processor will receive instructions and data from read-only memory and/or random-access memory. Generally, a computer will include one or more mass storage devices for storing data files.
Embodiment within the scope of the invention includes can for carrying or being stored with computer executable instructions, computer
The computer-readable medium of reading instruction or data structure.Such computer-readable medium can be calculated by universal or special
Any usable medium that machine system is accessed.The example of computer-readable medium can include physical storage medium, such as RAM,
ROM, EPROM, CD-ROM or other disk storages, magnetic disk storage or other magnetic storage apparatus, or available for carrying or deposit
Any other medium of desired program code is stored up, program code is with computer executable instructions, computer-readable instruction or number
Represented according to the form of structure, and can be by universal or special computer system accesses.Any of above content can be (special by ASIC
Integrated circuit) supplement or be incorporated in ASIC.Although the particular embodiment of the present invention has been shown and described, do not departing from
In the case of the true scope of the present invention, it can change these embodiments and change.
Although preferred embodiments of the invention have been described, those skilled in the art, once apprised of the basic inventive concept, may make variations or modifications to these embodiments. The appended claims are intended to be construed to include the preferred embodiments and all variations and modifications that fall within the scope of the invention.
Obviously, those skilled in the art may make variations and modifications to the invention without departing from the spirit and scope of the invention. Accordingly, if these variations and modifications fall within the scope of the claims and their technical equivalents, they also fall within the scope of the invention.
Claims (20)
1. A device for reducing compression artifacts of a lossy compressed image, comprising:
a feature extraction device comprising a first set of filters, the first set of filters being configured to extract blocks from the lossy compressed image and map the extracted blocks to a first set of high-dimensional feature vectors;
a feature enhancement device in electrical communication with the feature extraction device and comprising a second set of filters, the second set of filters being configured to denoise each high-dimensional feature vector in the first set and map the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors;
a mapping device electrically coupled to the feature enhancement device and comprising a third set of filters, the third set of filters being configured to non-linearly map each high-dimensional feature vector in the second set to a recovered patch-wise representation; and
an aggregation device in electrical communication with the mapping device and configured to aggregate the patch-wise representations obtained by mapping all of the high-dimensional feature vectors in the second set, so as to generate a recovered clear image.
2. The device according to claim 1, wherein the first set of filters is configured to extract blocks from the lossy compressed image and non-linearly map each extracted block to a high-dimensional feature vector, the mapped vectors of all the blocks forming the first set of high-dimensional feature vectors.
3. The device according to claim 1, wherein the second set of filters is configured to denoise each high-dimensional feature vector in the first set and non-linearly map the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors.
4. The device according to any one of claims 1-3, wherein the first set of filters, the second set of filters, and the third set of filters map the vectors based on predetermined first, second, and third parameters, respectively.
5. The device according to claim 4, further comprising:
a comparison device electrically coupled to the aggregation device and configured to obtain, by sampling a predetermined training set, ground-truth uncompressed images corresponding to the lossy compressed image, and to compare the dissimilarity between the aggregated recovered clear images received from the aggregation device and the corresponding ground-truth uncompressed images, so as to generate a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
6. The device according to claim 5, further comprising a training set preparation device electrically coupled to the comparison device, wherein the training set preparation device further comprises:
a cropper configured to randomly crop a plurality of sub-images from randomly selected training images to generate a set of ground-truth uncompressed sub-images;
a lossy compressed sub-image generator in electrical communication with the cropper and configured to generate a set of lossy compressed sub-images based on the set of ground-truth uncompressed sub-images received from the cropper;
a pairing unit in electrical communication with the cropper and the lossy compressed sub-image generator and configured to pair the ground-truth uncompressed sub-images with the corresponding lossy compressed sub-images; and
a collector in electrical communication with the pairing unit and configured to collect the paired ground-truth uncompressed sub-images and lossy compressed sub-images to generate the predetermined training set.
7. The device according to claim 6, wherein the lossy compressed sub-image generator further comprises: a compression device in electrical communication with the cropper and configured to encode and decode the ground-truth uncompressed sub-images through a compression encoder and decoder, so as to generate the lossy compressed sub-images.
8. The device according to claim 5, wherein the reconstruction error comprises a mean squared error.
9. A method for reducing compression artifacts of a lossy compressed image, comprising:
extracting, by a feature extraction device comprising a first set of filters, blocks from the lossy compressed image, and mapping the extracted blocks to a first set of high-dimensional feature vectors;
denoising, by a feature enhancement device in electrical communication with the feature extraction device and comprising a second set of filters, each high-dimensional feature vector in the first set, and mapping the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors;
non-linearly mapping, by a mapping device electrically coupled to the feature enhancement device and comprising a third set of filters, each high-dimensional feature vector in the second set to a recovered patch-wise representation; and
aggregating, by an aggregation device in electrical communication with the mapping device, the patch-wise representations obtained by mapping all of the high-dimensional feature vectors in the second set, so as to generate a recovered clear image.
10. The method according to claim 9, wherein extracting blocks from the lossy compressed image and mapping the extracted blocks to the first set of high-dimensional feature vectors further comprises:
extracting blocks from the lossy compressed image, and non-linearly mapping each extracted block to a high-dimensional feature vector, wherein the vectors obtained by mapping all the blocks form the first set of high-dimensional feature vectors.
11. The method according to claim 9, wherein denoising each high-dimensional feature vector in the first set and mapping the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors further comprises:
denoising each high-dimensional feature vector in the first set, and non-linearly mapping the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors.
12. The method according to any one of claims 9-11, wherein the first set of filters, the second set of filters, and the third set of filters map the vectors based on predetermined first, second, and third parameters, respectively.
13. The method according to claim 12, further comprising, after the aggregating:
obtaining, by sampling a predetermined training set, ground-truth uncompressed images corresponding to the lossy compressed image; and
comparing the dissimilarity between the aggregated recovered clear images and the corresponding ground-truth uncompressed images to generate a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
14. The method according to claim 13, further comprising, before obtaining the ground-truth uncompressed images corresponding to the lossy compressed image by sampling the predetermined training set:
randomly cropping a plurality of sub-images from randomly selected training images to generate a set of ground-truth uncompressed sub-images;
generating a set of lossy compressed sub-images based on the set of ground-truth uncompressed sub-images;
pairing each ground-truth uncompressed sub-image with the corresponding lossy compressed sub-image; and
collecting the paired ground-truth uncompressed sub-images and lossy compressed sub-images to generate the predetermined training set.
15. The method according to claim 14, wherein generating the set of lossy compressed sub-images based on the set of ground-truth uncompressed sub-images further comprises:
encoding and decoding the ground-truth uncompressed sub-images using a compression encoder and decoder to generate the set of lossy compressed sub-images.
16. The method according to claim 13, wherein the reconstruction error comprises a mean squared error.
17. A device for reducing compression artifacts of a lossy compressed image, the device comprising:
a reconstruction unit configured to reconstruct the lossy compressed image into a recovered clear image based on predetermined parameters, wherein the reconstruction unit comprises:
a feature extraction device comprising a first set of filters, the first set of filters being configured to extract blocks from the lossy compressed image and map the extracted blocks to a first set of high-dimensional feature vectors;
a feature enhancement device in electrical communication with the feature extraction device and comprising a second set of filters, the second set of filters being configured to denoise each high-dimensional feature vector in the first set and map the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors;
a mapping device electrically coupled to the feature enhancement device and comprising a third set of filters, the third set of filters being configured to non-linearly map each high-dimensional vector in the second set to a recovered patch-wise representation; and
an aggregation device in electrical communication with the mapping device and configured to aggregate the patch-wise representations obtained by mapping all of the high-dimensional vectors in the second set, so as to generate the recovered clear image;
wherein the feature extraction device, the feature enhancement device, the mapping device, and the aggregation device each comprise at least one convolutional layer, and the convolutional layers are connected to one another in sequence to form a convolutional neural network system; and
a training unit in electrical communication with the reconstruction unit and configured to train the convolutional neural network system with a predetermined training set, so as to modify the predetermined parameters used by the reconstruction unit.
18. A system for reducing compression artifacts of a lossy-compressed image, comprising:
a memory storing executable components; and
a processor electrically coupled to the memory and configured to execute the executable components to perform operations of the system, wherein the executable components comprise:
a feature extraction component configured to extract blocks from the lossy-compressed image and map the extracted blocks into a first group of high-dimensional feature vectors;
a feature enhancement component configured to denoise each high-dimensional feature vector in the first group and map the denoised high-dimensional feature vectors into a second group of high-dimensional feature vectors;
a mapping component configured to non-linearly map each high-dimensional vector in the second group into a partitioned representation of the restored image; and
an aggregation component configured to aggregate the partitioned representations obtained by mapping all the high-dimensional vectors in the second group, to generate the restored clear image.
19. The system of claim 18, wherein the feature extraction component is configured to extract blocks from the lossy-compressed image and non-linearly map each of the extracted blocks into a high-dimensional feature vector, the vectors obtained by mapping all the blocks forming the first group of high-dimensional feature vectors.
20. The system of claim 18, wherein the feature enhancement component is configured to denoise each high-dimensional feature vector in the first group and non-linearly map the denoised high-dimensional feature vectors into the second group of high-dimensional feature vectors.
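For illustration only, outside the claims themselves: the four-stage pipeline of claims 17-20 (feature extraction, feature enhancement, non-linear mapping, aggregation, each realized as at least one convolutional layer) can be sketched as a chain of small convolutions in plain Python. The kernel sizes, channel counts, random weights and input are assumptions for the sketch, not values from the patent.

```python
import random

random.seed(0)

def conv2d_same(img, kern):
    """Single-channel 2-D convolution with zero padding (output keeps input size)."""
    h, w = len(img), len(img[0])
    kh, kw = len(kern), len(kern[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - ph, x + j - pw
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx] * kern[i][j]
            out[y][x] = s
    return out

def conv_layer(maps, filters, relu=True):
    """One convolutional layer: each output map sums one kernel per input map."""
    outs = []
    for bank in filters:  # one filter bank per output feature map
        acc = [[0.0] * len(maps[0][0]) for _ in range(len(maps[0]))]
        for fmap, kern in zip(maps, bank):
            conv = conv2d_same(fmap, kern)
            acc = [[a + c for a, c in zip(ra, rc)] for ra, rc in zip(acc, conv)]
        if relu:
            acc = [[v if v > 0.0 else 0.0 for v in row] for row in acc]
        outs.append(acc)
    return outs

def rand_filters(n_out, n_in, k=3):
    return [[[[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(k)]
             for _ in range(n_in)] for _ in range(n_out)]

def reconstruct(image, params):
    """The four stages of claim 17, each modeled as one convolutional layer."""
    f1, f2, f3, f4 = params
    feats = conv_layer([image], f1)     # feature extraction -> 1st group of vectors
    feats = conv_layer(feats, f2)       # feature enhancement -> denoised 2nd group
    patches = conv_layer(feats, f3)     # non-linear mapping -> partitioned reprs
    return conv_layer(patches, f4, relu=False)[0]  # aggregation -> restored image

compressed = [[random.random() for _ in range(8)] for _ in range(8)]
params = (rand_filters(4, 1), rand_filters(4, 4),
          rand_filters(4, 4), rand_filters(1, 4))
restored = reconstruct(compressed, params)
print(len(restored), len(restored[0]))  # 8 8
```

Because all four layers use "same" padding, the restored image keeps the spatial size of the compressed input; training (the claimed training unit) would fit the filter weights instead of drawing them at random.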
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/000093 WO2016127271A1 (en) | 2015-02-13 | 2015-02-13 | An apparatus and a method for reducing compression artifacts of a lossy-compressed image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107251053A true CN107251053A (en) | 2017-10-13 |
CN107251053B CN107251053B (en) | 2018-08-28 |
Family
ID=56614081
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201580075726.4A Active CN107251053B (en) | Method and device for reducing compression artifacts of a lossy-compressed image | 2015-02-13 | 2015-02-13
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107251053B (en) |
WO (1) | WO2016127271A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107871306B (en) * | 2016-09-26 | 2021-07-06 | 北京眼神科技有限公司 | Method and device for denoising picture |
CN109120937B (en) * | 2017-06-26 | 2020-03-27 | 杭州海康威视数字技术股份有限公司 | Video encoding method, decoding method, device and electronic equipment |
CN109151475B (en) * | 2017-06-27 | 2020-03-27 | 杭州海康威视数字技术股份有限公司 | Video encoding method, decoding method, device and electronic equipment |
US20230188759A1 (en) * | 2021-12-14 | 2023-06-15 | Spectrum Optix Inc. | Neural Network Assisted Removal of Video Compression Artifacts |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7227980B2 (en) * | 2002-12-19 | 2007-06-05 | Agilent Technologies, Inc. | Systems and methods for tomographic reconstruction of images in compressed format |
US7747082B2 (en) * | 2005-10-03 | 2010-06-29 | Xerox Corporation | JPEG detectors and JPEG image history estimators |
US8223837B2 (en) * | 2007-09-07 | 2012-07-17 | Microsoft Corporation | Learning-based image compression |
US8238675B2 (en) * | 2008-03-24 | 2012-08-07 | Microsoft Corporation | Spectral information recovery for compressed image restoration with nonlinear partial differential equation regularization |
GB201119206D0 (en) * | 2011-11-07 | 2011-12-21 | Canon Kk | Method and device for providing compensation offsets for a set of reconstructed samples of an image |
CN103517022B (en) * | 2012-06-29 | 2017-06-20 | 华为技术有限公司 | Image data compression and decompression method and device |
CN103475876B (en) * | 2013-08-27 | 2016-06-22 | 北京工业大学 | Learning-based super-resolution reconstruction method for low-bit-rate compressed images |
- 2015-02-13 WO PCT/CN2015/000093 patent/WO2016127271A1/en active Application Filing
- 2015-02-13 CN CN201580075726.4A patent/CN107251053B/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765338A (en) * | 2018-05-28 | 2018-11-06 | 西华大学 | Space target image restoration method based on a convolutional auto-encoding convolutional neural network |
CN109801218A (en) * | 2019-01-08 | 2019-05-24 | 南京理工大学 | Multi-spectral remote sensing image Pan-sharpening method based on multi-layer-coupled convolutional neural networks |
CN109801218B (en) * | 2019-01-08 | 2022-09-20 | 南京理工大学 | Multispectral remote sensing image Pan-sharpening method based on multilayer coupling convolutional neural network |
CN111986278A (en) * | 2019-05-22 | 2020-11-24 | 富士通株式会社 | Image encoding device, probability model generation device, and image compression system |
CN111986278B (en) * | 2019-05-22 | 2024-02-06 | 富士通株式会社 | Image encoding device, probability model generating device, and image compression system |
Also Published As
Publication number | Publication date |
---|---|
WO2016127271A1 (en) | 2016-08-18 |
CN107251053B (en) | 2018-08-28 |
Similar Documents
Publication | Title
---|---
CN107251053B (en) | Method and device for reducing compression artifacts of a lossy-compressed image
CN108805840A (en) | Image denoising method, apparatus, terminal and computer-readable storage medium
CN113450288B (en) | Single-image rain removal method and system based on a deep convolutional neural network, and storage medium
CN107301662B (en) | Depth-image compression recovery method, device, equipment and storage medium
CN110276726A (en) | Image deblurring method based on multi-channel network prior-information guidance
CN107862687B (en) | Early warning system for monitoring agricultural diseases and insect pests
CN110009656B (en) | Target object determination method and device, storage medium and electronic device
CN103037212A (en) | Adaptive block compressed-sensing image coding method based on visual perception
CN106658004B (en) | Compression method and device based on image flat-region features
CN102148986B (en) | Progressive image encoding method based on adaptive block compressed sensing
CN103208097A (en) | Principal component analysis collaborative filtering method for multi-directional morphological structure grouping in images
CN111583152A (en) | Image artifact detection and automatic removal method based on a U-net structure
CN112308085B (en) | Light-field image denoising method based on a convolutional neural network
CN105160667A (en) | Blind image quality evaluation method combining gradient signal and Laplacian-of-Gaussian (LoG) signal
CN107909558A (en) | Non-local means image denoising method based on unsupervised learning
CN115063326B (en) | Efficient infrared night-vision image communication method based on image compression
CN109859131A (en) | Image restoration method based on multi-scale self-similarity and conformal constraints
CN116485741A (en) | No-reference image quality evaluation method, system, electronic equipment and storage medium
CN105338219A (en) | Video image denoising processing method and apparatus
CN109344860A (en) | No-reference image quality assessment method based on LBP
CN103903271B (en) | DWT-based image forensics method for natural images and compressed tampered images
CN103841583B (en) | Compressed-sensing-based acquisition method for massive signaling data in radio network optimization
CN112150360A (en) | IVUS image super-resolution reconstruction method based on a dense residual network
CN106780398B (en) | Image denoising method based on noise prediction
CN113658282A (en) | Image compression and decompression method and device
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||