CN107251053B - Method and device for reducing compression artifacts of a lossy compressed image - Google Patents

Method and device for reducing compression artifacts of a lossy compressed image

Info

Publication number
CN107251053B
CN107251053B (application CN201580075726.4A)
Authority
CN
China
Prior art keywords
group
high-dimensional feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201580075726.4A
Other languages
Chinese (zh)
Other versions
CN107251053A (en)
Inventor
Tang Xiaoou (汤晓鸥)
Dong Chao (董超)
Lyu Jianqin (吕健勤)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Publication of CN107251053A
Application granted
Publication of CN107251053B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G06V10/247 - Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed is a device for reducing compression artifacts of a lossy compressed image. The device may include: a feature extraction device comprising a first set of filters configured to extract patches from the lossy compressed image and map the extracted patches into a first set of high-dimensional feature vectors; a feature enhancement device in electrical communication with the feature extraction device, comprising a second set of filters configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors into a second set of high-dimensional feature vectors; a mapping device coupled to the feature enhancement device, comprising a third set of filters configured to nonlinearly map each high-dimensional vector in the second set into a restored patch representation; and an aggregation device in electrical communication with the mapping device, configured to aggregate the patch representations to produce a restored clear image.

Description

Method and device for reducing compression artifacts of a lossy compressed image
Technical field
The present invention relates generally to the field of image processing, and more particularly to a method and device for reducing compression artifacts of a lossy compressed image.
Background
Lossy compression is a class of data-encoding methods that represents the encoded content using inexact approximations or by discarding part of the data. Such compression techniques are used to reduce the amount of data needed to store, process, and/or transmit the represented content. There are many lossy image compression formats, such as JPEG, WebP, JPEG XR, and HEVC-MSP; among these alternatives, JPEG remains the most widely used.
Lossy compression introduces compression artifacts, especially when used at low bit rates or coarse quantization levels. JPEG compression artifacts, for example, are a complex combination of several distortions, including blocking artifacts, ringing effects, and blurring. Blocking artifacts arise when each block is encoded without regard to its correlation with adjacent blocks, producing discontinuities at block boundaries. Ringing effects appear along edges because of the coarse quantization of high-frequency components, and blurring results from the loss of high-frequency components.
Existing artifact-removal algorithms can be divided into deblocking-based methods and restoration-based methods. Deblocking-based methods focus on removing blocking artifacts and ringing effects; however, most of them cannot reproduce sharp edges and tend to over-smooth texture regions. Restoration-based methods regard the compression operation as a distortion and propose restoration algorithms. Because restoration-based methods tend to reconstruct the original image directly, the sharpened output is usually accompanied by ringing along edges and abrupt transitions in smooth regions.
Summary
A brief summary of the disclosure is given below to provide a basic understanding of some aspects of the disclosure, namely a device for reducing compression artifacts of a lossy compressed image. This summary is not an extensive overview of the disclosure. It is not intended to identify key or critical elements of the disclosure, nor to delineate any scope of particular embodiments or of the claims. Its sole purpose is to present some concepts of the disclosure in simplified form as a prelude to the more detailed description presented later.
According to an embodiment of the present application, a device for reducing compression artifacts of a lossy compressed image is disclosed. The device may include: a feature extraction device comprising a first set of filters configured to extract patches from the lossy compressed image and map the extracted patches into a first set of high-dimensional feature vectors; and a feature enhancement device in electrical communication with the feature extraction device, comprising a second set of filters configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors into a second set of high-dimensional feature vectors. The device further includes: a mapping device electrically coupled to the feature enhancement device, comprising a third set of filters configured to nonlinearly map each high-dimensional feature vector in the second set into a restored patch representation; and an aggregation device in electrical communication with the mapping device, configured to aggregate the restored patch representations mapped from all the high-dimensional feature vectors in the second set to produce a restored clear image, wherein the size of the restored clear image is identical to that of the lossy compressed image.
In one aspect, the first set of filters may be configured to extract patches from the lossy compressed image and to nonlinearly map each extracted patch into a high-dimensional feature vector, with the vectors mapped from all patches forming the first set of high-dimensional feature vectors.
In another aspect, the second set of filters may be configured to denoise each high-dimensional feature vector in the first set and to nonlinearly map the denoised vectors into the second set of high-dimensional feature vectors.
In one aspect, the first, second, and third sets of filters may perform their mappings based on predetermined first, second, and third parameters, respectively, and the aggregation device may aggregate the patch representations based on a fourth parameter.
In another aspect, the device further includes a comparison device, which may be coupled to the aggregation device and configured to sample a predetermined training set to obtain a ground-truth uncompressed image corresponding to the lossy compressed image, and to compare the restored clear image received from the aggregation device against the corresponding ground-truth uncompressed image to measure their dissimilarity and produce a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
According to an embodiment of the present application, the device may also include a training-set preparation device electrically coupled to the comparison device. The training-set preparation device further includes: a cropping device configured to randomly crop multiple sub-images from randomly selected training images to produce a set of ground-truth uncompressed sub-images; a lossy-compressed-sub-image generator in electrical communication with the cropping device, configured to generate a set of lossy compressed sub-images from the set of ground-truth uncompressed sub-images received from the cropping device; a pairing device in electrical communication with the cropping device and the generator, configured to pair each ground-truth uncompressed sub-image with its corresponding lossy compressed sub-image; and a collection device in electrical communication with the pairing device, configured to collect the paired ground-truth uncompressed and lossy compressed sub-images to produce the predetermined training set.
In one aspect, the lossy-compressed-sub-image generator further includes a compression device in electrical communication with the cropping device, configured to encode and decode the ground-truth uncompressed sub-images with a compression encoder and decoder to generate the lossy compressed sub-images.
In one aspect, the reconstruction error includes a mean squared error.
According to an embodiment of the present application, a method for reducing compression artifacts of a lossy compressed image is disclosed. The method may include: extracting patches from the lossy compressed image by a feature extraction device comprising a first set of filters, and mapping the extracted patches into a first set of high-dimensional feature vectors; denoising each high-dimensional feature vector in the first set by a feature enhancement device that is in electrical communication with the feature extraction device and comprises a second set of filters, and mapping the denoised vectors into a second set of high-dimensional feature vectors; nonlinearly mapping each high-dimensional feature vector in the second set into a restored patch representation by a mapping device that is electrically coupled to the feature enhancement device and comprises a third set of filters; and aggregating, by an aggregation device in electrical communication with the mapping device, the restored patch representations mapped from all the high-dimensional feature vectors in the second set to produce a restored clear image, wherein the size of the restored clear image is identical to that of the lossy compressed image.
Optionally, extracting patches from the lossy compressed image and mapping them into the first set of high-dimensional feature vectors further comprises: extracting patches from the lossy compressed image and nonlinearly mapping each extracted patch into a high-dimensional feature vector, wherein the vectors mapped from all patches form the first set of high-dimensional feature vectors.
Optionally, denoising each high-dimensional feature vector in the first set and mapping the denoised vectors into the second set of high-dimensional feature vectors further comprises: denoising each high-dimensional feature vector in the first set and nonlinearly mapping the denoised vectors into the second set of high-dimensional feature vectors.
Optionally, the first, second, and third sets of filters perform their mappings based on predetermined first, second, and third parameters, respectively.
Optionally, the method further includes, after the aggregation: sampling a predetermined training set to obtain a ground-truth uncompressed image corresponding to the lossy compressed image; and comparing the aggregated restored clear image against the corresponding ground-truth uncompressed image to measure their dissimilarity and produce a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
Optionally, before sampling the predetermined training set to obtain the ground-truth uncompressed image corresponding to the lossy compressed image, the method further includes: randomly cropping multiple sub-images from randomly selected training images to produce a set of ground-truth uncompressed sub-images; generating a set of lossy compressed sub-images from the set of ground-truth uncompressed sub-images; pairing each ground-truth uncompressed sub-image with its corresponding lossy compressed sub-image; and collecting the paired ground-truth uncompressed and lossy compressed sub-images to produce the predetermined training set.
Optionally, generating the set of lossy compressed sub-images from the set of ground-truth uncompressed sub-images further comprises: encoding and decoding the ground-truth uncompressed sub-images with a compression encoder and decoder to generate the set of lossy compressed sub-images.
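The training-set preparation steps above (crop, compress, pair, collect) can be sketched as follows. The uniform-quantization codec below is a stand-in assumption purely for illustration; the patent's compression device would run a real encoder/decoder such as JPEG instead.

```python
import numpy as np

def random_crops(image, size, count, rng):
    """Randomly crop `count` square sub-images of side `size` from a 2-D image."""
    H, W = image.shape
    crops = []
    for _ in range(count):
        i = rng.integers(0, H - size + 1)
        j = rng.integers(0, W - size + 1)
        crops.append(image[i:i + size, j:j + size].copy())
    return crops

def lossy_compress(sub, levels=8):
    """Stand-in codec: coarse uniform quantization of pixel values in [0, 1].
    A real pipeline would encode and decode with JPEG (or another lossy codec)."""
    return np.round(sub * (levels - 1)) / (levels - 1)

rng = np.random.default_rng(2)
img = rng.random((64, 64))                       # a randomly selected "training image"
clean = random_crops(img, 16, 4, rng)            # ground-truth uncompressed sub-images
pairs = [(lossy_compress(s), s) for s in clean]  # (compressed, ground truth) pairs
print(len(pairs), pairs[0][0].shape)             # 4 (16, 16)
```

Collecting such pairs over many training images yields the predetermined training set the comparison device later samples from.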
Optionally, the reconstruction error includes a mean squared error.
According to an embodiment of the present application, a device for reducing compression artifacts of a lossy compressed image is disclosed. The device may include: a reconstruction unit configured to reconstruct the lossy compressed image into a restored clear image based on predetermined parameters; and a training unit configured to train a convolutional neural network system with a predetermined training set to determine the parameters used by the reconstruction unit. The reconstruction unit may include: a feature extraction device comprising a first set of filters configured to extract patches from the lossy compressed image and map the extracted patches into a first set of high-dimensional feature vectors; a feature enhancement device in electrical communication with the feature extraction device, comprising a second set of filters configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors into a second set of high-dimensional feature vectors; a mapping device electrically coupled to the feature enhancement device, comprising a third set of filters configured to nonlinearly map each high-dimensional vector in the second set into a restored patch representation; and an aggregation device in electrical communication with the mapping device, configured to aggregate the restored patch representations mapped from all the high-dimensional vectors in the second set to produce a restored clear image whose size is identical to that of the lossy compressed image. The feature extraction device, the feature enhancement device, the mapping device, and the aggregation device each include at least one convolutional layer, and the convolutional layers are interconnected in sequence to form the convolutional neural network system.
According to an embodiment of the present application, a system for reducing compression artifacts of a lossy compressed image is disclosed. The system may include a memory storing executable components and a processor that executes the executable components to perform the operations of the system. The executable components include: a feature extraction component configured to extract patches from the lossy compressed image and map the extracted patches into a first set of high-dimensional feature vectors; a feature enhancement component configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors into a second set of high-dimensional feature vectors; a mapping component configured to nonlinearly map each high-dimensional vector in the second set into a restored patch representation; and an aggregation component configured to aggregate the restored patch representations mapped from all the high-dimensional vectors in the second set to produce a restored clear image whose size is identical to that of the lossy compressed image.
Optionally, the feature extraction component is configured to extract patches from the lossy compressed image and nonlinearly map each extracted patch into a high-dimensional feature vector, the vectors mapped from all patches forming the first set of high-dimensional feature vectors.
Optionally, the feature enhancement component is configured to denoise each high-dimensional feature vector in the first set and nonlinearly map the denoised vectors into the second set of high-dimensional feature vectors.
The following description and the accompanying drawings set forth certain illustrative aspects of the disclosure. These aspects, however, indicate only some of the various ways in which the principles of the invention may be employed. Other aspects of the invention will become apparent from the following detailed description when considered in conjunction with the drawings.
Description of the drawings
Exemplary, non-limiting embodiments of the present invention are described below with reference to the accompanying drawings. The drawings are illustrative and are generally not drawn to exact scale. The same or similar elements in different figures are denoted by the same reference numerals.
Fig. 1 is a schematic diagram showing a device for reducing compression artifacts of a lossy compressed image consistent with an embodiment of the present application.
Fig. 2 is a schematic diagram showing a device for reducing compression artifacts of a lossy compressed image consistent with another embodiment of the present application.
Fig. 3 is a schematic diagram showing a convolutional neural network system consistent with some disclosed embodiments.
Fig. 4 is a schematic diagram showing a training unit of the device consistent with some disclosed embodiments.
Fig. 5 is a schematic diagram showing a training-set preparation device of the training unit consistent with some disclosed embodiments.
Fig. 6 is a schematic flowchart showing a method for reducing compression artifacts of a lossy compressed image consistent with some disclosed embodiments.
Fig. 7 is a schematic flowchart showing a method for training a convolutional neural network system for reducing compression artifacts of a lossy compressed image, consistent with some disclosed embodiments.
Fig. 8 is a schematic diagram showing a system for reducing compression artifacts of a lossy compressed image consistent with an embodiment of the present application.
Detailed description
Reference will now be made in detail to some specific embodiments of the invention, including the best modes contemplated by the inventors for carrying it out. Examples of these specific embodiments are illustrated in the accompanying drawings. While the invention is described in conjunction with these specific embodiments, it will be understood that they are not intended to limit the invention to the described embodiments. On the contrary, they are intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well-known process operations have not been described in detail so as not to unnecessarily obscure the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprise" and/or "include", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.
Referring to Fig. 1, a device 1000 may include a feature extraction device 100, a feature enhancement device 200, a mapping device 300, and an aggregation device 400, each of which is described in detail below. For ease of description, the lossy compressed image is denoted by Y and the restored clear image by F(Y); the restored clear image should be as similar as possible to the ground-truth uncompressed image X.
According to one embodiment, the feature extraction device 100 includes a first set of filters. The first set of filters is configured to extract patches from the lossy compressed image and map the extracted patches into a first set of high-dimensional feature vectors. For example, the first set of filters maps the extracted patches into the first set of high-dimensional feature vectors through a function F'(first parameters), where F'(x) is a nonlinear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the first parameters are determined from predetermined parameters associated with the lossy compressed image.
In one embodiment, the first set of high-dimensional feature vectors may comprise a set of feature maps, the number of which equals the dimensionality of the vectors. A popular strategy in image restoration is to densely extract patches and then represent them by a set of pre-trained bases such as PCA (principal component analysis), DCT (discrete cosine transform), or Haar.
According to one embodiment, the operation of the feature extraction device 100 can be formulated as:
F1(Y) = F'(W1 * Y + B1)    (1)
where W1 and B1 represent the filters and biases, respectively, and F'(x) is a nonlinear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)). Here, the size of W1 is c × f1 × f1 × n1, where c is the number of channels of the input image, f1 is the spatial size of a filter, and n1 is the number of filters. Intuitively, W1 applies n1 convolutions to the image, each with a kernel of size c × f1 × f1. The output consists of n1 feature maps. B1 is an n1-dimensional vector, each element of which is associated with one filter.
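Equation (1) is an ordinary convolutional layer. As a minimal sketch (with illustrative sizes and random weights, not the patent's trained parameters), the feature extraction step with F'(x) = max(0, x) might look like:

```python
import numpy as np

def conv_layer(x, W, b):
    """Valid 2-D convolution of a c-channel image with a bank of filters,
    followed by the nonlinearity F'(x) = max(0, x).
    x: (c, H, W) input; W: (n, c, f, f) filter bank; b: (n,) biases."""
    n, c, f, _ = W.shape
    _, H, Wd = x.shape
    out = np.zeros((n, H - f + 1, Wd - f + 1))
    for k in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(W[k] * x[:, i:i + f, j:j + f]) + b[k]
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
Y = rng.random((1, 16, 16))             # single-channel compressed image, c = 1
W1 = rng.standard_normal((4, 1, 5, 5))  # n1 = 4 filters of size 1 x 5 x 5
B1 = np.zeros(4)
F1 = conv_layer(Y, W1, B1)              # n1 feature maps
print(F1.shape)                         # (4, 12, 12)
```

Each of the n1 output maps is one coordinate of the high-dimensional feature vector extracted at every patch location.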
The feature enhancement device 200 may be in electrical communication with the feature extraction device 100 and may include a second set of filters. The second set of filters is configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors into a second set of high-dimensional feature vectors, e.g., a set of relatively cleaner feature vectors.
According to one embodiment, the feature enhancement device 200 is configured to denoise each high-dimensional feature vector in the first set and map the denoised vectors into the second set of high-dimensional feature vectors through a function F'(second parameters), where F'(x) is a nonlinear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the second parameters are determined from predetermined parameters associated with the first set of high-dimensional feature vectors.
In this embodiment, the feature extraction device 100 extracts an n1-dimensional feature for each patch. The second set of filters maps these n1-dimensional vectors into a set of n2-dimensional vectors. Each mapped vector is, conceptually, a relatively cleaner feature vector. These vectors comprise another set of feature maps.
According to one embodiment, the feature enhancement can be formulated as:
F2(Y) = F'(W2 * F1(Y) + B2)    (2)
where the size of W2 is n1 × f2 × f2 × n2 and B2 is an n2-dimensional vector.
As shown, device 1000 may also include mapped device 300.Mapped device 300 can be coupled to feature enhancing equipment 200, and include third group filter, it is configured as non-linearly being mapped as restoring by each high dimension vector in second group Partitioning Expression of A.
According to one embodiment, the mapping device 300 is configured to nonlinearly map each high-dimensional vector into a patch representation through a function F'(third parameters), where F'(x) is a nonlinear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the third parameters are determined from predetermined parameters associated with the second set of high-dimensional feature vectors (i.e., the cleaner high-dimensional feature vectors).
In one embodiment, the feature enhancement device 200 produces a set of n2-dimensional feature vectors. The mapping device 300 maps each of these n2-dimensional vectors into an n3-dimensional vector. Each mapped vector is, conceptually, the representation of a restored patch. These vectors comprise another set of feature maps.
According to one embodiment, the mapping can be formulated as:
F3(Y) = F'(W3 * F2(Y) + B3)    (3)
where the size of W3 is n2 × f3 × f3 × n3 and B3 is an n3-dimensional vector. Each of the output n3-dimensional vectors is conceptually the representation of a restored patch that will be used for reconstruction.
As shown, device 1000 may also include polymerization unit 400.Polymerization unit 400 can execute electricity with mapped device 300 Communication, and be configured as polymerizeing Partitioning Expression of A, to generate the clear image restored.
Polymerization unit 400 polymerize the Partitioning Expression of A of recovery, to generate the clear image restored.Polymerization can be by formula It turns to:
F (Y)=W4*F3(Y)+B4 (4)
Wherein, W4Size be n3×f4×f4× c, B4It is c dimensional vectors.
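A minimal sketch of the four-stage pipeline of formulas (1)–(4), using plain Python matrix-vector products in place of real convolutions; all sizes and weights below are illustrative toys, not values from the patent:

```python
def relu_vec(v):
    """Apply F'(x) = max(0, x) element-wise."""
    return [max(0.0, x) for x in v]

def affine(W, b, v):
    """Compute W v + b for a weight matrix W (list of rows) and bias b."""
    return [sum(W[i][j] * v[j] for j in range(len(v))) + b[i]
            for i in range(len(W))]

def restore_block(y, layers):
    """Chain the four stages: non-linearity after the first three layers;
    the final aggregation of formula (4) is linear."""
    h = y
    for i, (W, b) in enumerate(layers):
        h = affine(W, b, h)
        if i < len(layers) - 1:
            h = relu_vec(h)
    return h

# Toy 2 -> 3 -> 3 -> 2 -> 1 network standing in for the four filter banks.
layers = [
    ([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [0.0, 0.0, 0.0]),          # extraction (1)
    ([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
     [0.0, 0.0, 0.0]),                                                 # enhancement (2)
    ([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [0.0, 0.0]),                  # mapping (3)
    ([[0.5, 0.5]], [0.0]),                                             # aggregation (4)
]
out = restore_block([1.0, 2.0], layers)
```

In the real apparatus each stage is a convolutional layer applied over the whole image, so the same weights slide over every block position; the per-vector view above only illustrates the data flow of one block.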
According to this embodiment, the apparatus 1000 may further include a comparison device (not shown), coupled to the aggregation device 400 and configured to sample, from a predetermined training set, a ground-truth uncompressed sub-image corresponding to the lossy-compressed sub-image, and to compare the dissimilarity between the aggregated restored clear sub-image received from the aggregation device 400 and the sampled ground-truth uncompressed sub-image, thereby producing a reconstruction error. For example, the reconstruction error includes a mean squared error. The reconstruction error is back-propagated to determine parameters such as W1, W2, W3, W4, B1, B2, B3, and B4.
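The mean-squared-error reconstruction loss mentioned here can be written as follows; the variable names are ours:

```python
def mse(restored, ground_truth):
    """Mean squared error between a restored sub-image and the ground-truth
    uncompressed one, both given as flat lists of pixel values."""
    assert len(restored) == len(ground_truth)
    n = len(restored)
    return sum((r - g) ** 2 for r, g in zip(restored, ground_truth)) / n

err = mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # (0 + 0 + 4) / 3
```

Because MSE is differentiable in the network output, its gradient can be propagated backwards through formulas (1)-(4) to adjust the filter weights and biases.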
Fig. 2 is a schematic diagram showing an apparatus 1000' for reducing the compression artifacts of a lossy-compressed image, consistent with another embodiment of the present application. As shown in Fig. 2, the apparatus 1000' may include a reconstruction unit 100' and a training unit 200'. The reconstruction unit 100' is configured to reconstruct the lossy-compressed image into a restored clear image based on predetermined parameters.
According to the embodiment shown in Fig. 2, the reconstruction unit 100' may further include a feature extraction device 110', a feature enhancement device 120', a mapping device 130', and an aggregation device 140'. In one embodiment, each of the feature extraction device 110', the feature enhancement device 120', the mapping device 130', and the aggregation device 140' may include at least one convolutional layer, and the convolutional layers are connected one after another to form a convolutional neural network system.
Fig. 3 shows the layer structure of the convolutional neural network system as a mathematical model. In one embodiment, each of the feature extraction device 110', the feature enhancement device 120', the mapping device 130', and the aggregation device 140' can be modeled as at least one convolutional layer, with different operations carried out in different convolutional layers.
In this embodiment, the feature extraction device 110' is configured to extract blocks from the lossy-compressed image and map the extracted blocks to a first set of high-dimensional feature vectors. This is equivalent to convolving the image with the first set of filters described above.
The feature enhancement device 120' is configured to be in electrical communication with the feature extraction device 110', to denoise each high-dimensional feature vector in the first set, and to map the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors, i.e., a set of relatively cleaner feature vectors. This is equivalent to applying the second set of filters described above.
The mapping device 130' is configured to be coupled to the feature enhancement device 120' and to non-linearly map each high-dimensional vector in the second set to a restored block representation. This is equivalent to applying the third set of filters described above.
The aggregation device 140' is configured to be in electrical communication with the mapping device 130' and to aggregate the block representations mapped from all high-dimensional vectors in the second set, so as to produce the restored clear image.
In one embodiment, the feature extraction device 110', the feature enhancement device 120', the mapping device 130', and the aggregation device 140' each include at least one convolutional layer, and the convolutional layers are connected one after another to form a convolutional neural network system. Convolutional neural networks date back decades and have recently seen explosive popularity, owing in part to their success in image classification. Convolutional neural network systems are also commonly used for natural-image denoising and for removing noise patterns (dirt/rain).
Alternatively, more convolutional layers can be added to increase non-linearity. However, this significantly increases the complexity of the convolutional neural network system and therefore demands more training data and time.
The training unit 200' is configured to train the convolutional neural network system on a predetermined training set, in order to optimize the parameters used by the reconstruction unit, such as W1, W2, W3, W4, B1, B2, B3, and B4. According to the embodiment shown in Fig. 4, the training unit 200' may further include a sampling device 210', a comparison device 220', and a back-propagation device 230'.
The sampling device 210' can be configured to sample the predetermined training set, obtain a lossy-compressed sub-image and its corresponding ground-truth uncompressed sub-image, and input the lossy-compressed sub-image to the convolutional neural network system. Here, "sub-image" means that these samples are treated as small "images" rather than "patches": the "patches" are overlapping in some sense and require averaging as post-processing, whereas "sub-images" need no such post-processing.
The comparison device 220' can be configured to compare the dissimilarity between the clear sub-image reconstructed by the convolutional neural network system from the input lossy-compressed sub-image and the corresponding ground-truth uncompressed sub-image, so as to produce a reconstruction error. For example, the reconstruction error may include a mean squared error, and the error is minimized using stochastic gradient descent with standard back-propagation.
The back-propagation device 230' is configured to propagate the reconstruction error backwards through the convolutional neural network system, in order to adjust the weights of the connections between the neurons of the entire convolutional neural network system.
It should be noted that the convolutional neural network system does not exclude other types of reconstruction error, as long as the error is differentiable. If a better perceptually motivated metric is given during training, the network can flexibly adapt to that metric.
In one embodiment, the apparatuses 1000 and 1000' may further include a training set preparation device coupled to the comparison device and configured to prepare the predetermined training set used for training the convolutional neural network system. Fig. 5 is a schematic diagram showing the training set preparation device. As shown, the training set preparation device may include a cropper 241', a lossy-compressed sub-image generator 242', a pairing device 243', and a collector 244'.
The cropper 241' can be configured to randomly crop multiple sub-images from randomly selected training images, to generate a set of ground-truth uncompressed sub-images. For example, the cropper 241' may crop n sub-images of m × m pixels. The lossy-compressed sub-image generator 242' can be in electrical communication with the cropper 241' and is configured to generate a set of lossy-compressed sub-images based on the ground-truth uncompressed sub-images received from the cropper 241'. The pairing device 243' can be in electrical communication with the cropper 241' and the lossy-compressed sub-image generator 242' and is configured to pair each ground-truth uncompressed sub-image with its corresponding lossy-compressed sub-image. The collector 244' can be in electrical communication with the pairing device 243' and is configured to collect all paired sub-images to form the predetermined training set.
According to one embodiment, the lossy-compressed sub-image generator 242' may include a compression device in electrical communication with the cropper 241' and configured to encode and decode the ground-truth sub-images with a lossy encoder and decoder, so as to generate the set of lossy-compressed sub-images.
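Under stated assumptions, the cropper/compressor/pairing/collector pipeline might be sketched as below. The coarse-quantization "compressor" is a placeholder standing in for a real lossy encoder/decoder such as JPEG, and every name and size is illustrative:

```python
import random

def crop_subimages(image, m, n, rng):
    """Randomly crop n sub-images of m x m pixels from a 2-D pixel grid."""
    h, w = len(image), len(image[0])
    subs = []
    for _ in range(n):
        top = rng.randrange(h - m + 1)
        left = rng.randrange(w - m + 1)
        subs.append([row[left:left + m] for row in image[top:top + m]])
    return subs

def lossy_compress(sub, step=32):
    """Placeholder for a real codec: coarse quantization discards information."""
    return [[(p // step) * step for p in row] for row in sub]

def build_training_set(image, m, n, seed=0):
    """Crop ground-truth sub-images, compress each, and collect the pairs."""
    rng = random.Random(seed)
    clean = crop_subimages(image, m, n, rng)
    return [(lossy_compress(s), s) for s in clean]  # (input, ground truth)

image = [[(x * 7 + y * 13) % 256 for x in range(16)] for y in range(16)]
pairs = build_training_set(image, m=4, n=8)
```

Each pair supplies the network input (the degraded sub-image) and the target it should reconstruct (the ground-truth sub-image).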
Fig. 6 is a schematic flow chart showing a method 2000 for reducing the compression artifacts of a lossy-compressed image, consistent with some disclosed embodiments. The method 2000 is described in detail below with reference to Fig. 6.
In step S210, blocks are extracted from the lossy-compressed image by a feature extraction device comprising a first set of filters, and each extracted block is mapped to a high-dimensional feature vector, forming a first set of high-dimensional feature vectors. In one embodiment, these vectors comprise a set of feature maps, and the number of feature maps equals the dimensionality of the vectors. A popular strategy in image restoration is to densely extract blocks and then represent them by a set of pre-trained bases, such as PCA, DCT, or Haar.
In step S220, each high-dimensional feature vector in the first set is denoised by a feature enhancement device that is in electrical communication with the feature extraction device and comprises a second set of filters, and the denoised high-dimensional feature vectors are mapped to the high-dimensional feature vectors in a second set. In this embodiment, the feature extraction device extracts an n1-dimensional feature for each block, and the second set of filters maps these n1-dimensional vectors to a set of n2-dimensional vectors. Each mapped vector is conceptually a relatively cleaner feature vector. These vectors comprise another set of feature maps.
In step S230, each high-dimensional vector in the second set is non-linearly mapped to a restored block representation by a mapping device that is coupled to the feature enhancement device and comprises a third set of filters. In this embodiment, the feature enhancement device produces a set of n2-dimensional feature vectors, and the mapping device maps each of these n2-dimensional vectors to an n3-dimensional vector. Each mapped vector is conceptually a restored block representation. These vectors comprise another set of feature maps.
In step S240, the block representations obtained by mapping all high-dimensional vectors in the second set are aggregated by an aggregation device in electrical communication with the mapping device, so as to produce the restored clear image. In one embodiment, steps S210-S230 can be modeled by formulas (1)-(3) above.
According to one embodiment, blocks can be extracted from the lossy-compressed image, and each extracted block can be mapped to a high-dimensional feature vector via the function F'(first parameters), where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the first parameters are determined from predetermined parameters associated with the lossy-compressed image.
According to one embodiment, the first set of high-dimensional feature vectors can be denoised, and the denoised high-dimensional feature vectors can be non-linearly mapped to the second set of high-dimensional feature vectors, i.e., a set of relatively cleaner feature vectors, via the function F'(second parameters), where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the second parameters are determined from predetermined parameters associated with the first set of high-dimensional feature vectors.
According to one embodiment, each high-dimensional vector in the second set can be non-linearly mapped to a restored block representation via the function F'(third parameters), where F'(x) is a non-linear function (for example, max(0, x), 1/(1 + exp(-x)), or tanh(x)) and the third parameters are determined from predetermined parameters associated with the second set of high-dimensional vectors.
According to one embodiment, after the block representations are aggregated to produce the restored clear image, the method 2000 may further include the steps of sampling, from a predetermined training set, a ground-truth uncompressed sub-image corresponding to the lossy-compressed sub-image, and comparing the dissimilarity between the aggregated restored clear sub-image and the corresponding ground-truth uncompressed sub-image, so as to produce a reconstruction error. The reconstruction error is back-propagated to optimize the parameters, for example W1, W2, W3, W4, B1, B2, B3, and B4.
According to one embodiment, before sampling the ground-truth uncompressed sub-image corresponding to the lossy-compressed sub-image from the predetermined training set, the method 2000 further includes the step of preparing the predetermined training set. Specifically, multiple sub-images are first cropped from randomly selected training images to generate a set of ground-truth uncompressed sub-images; for example, n sub-images of m × m pixels may be cropped. Next, a set of lossy-compressed sub-images is generated based on the set of ground-truth uncompressed sub-images. Then, each ground-truth uncompressed sub-image is paired with its corresponding lossy-compressed sub-image. Finally, all paired sub-images are collected to form the predetermined training set.
According to one embodiment, a method 3000 for training the convolutional neural network system for reducing the compression artifacts of a lossy-compressed image is shown. The training method 3000 is described in detail below with reference to Fig. 7.
As shown in Fig. 7, in step S310, a lossy-compressed sub-image and its corresponding ground-truth uncompressed sub-image are sampled from the predetermined training set. In step S320, the lossy-compressed sub-image is reconstructed into a restored clear sub-image by the convolutional neural network system. In step S330, a reconstruction error is produced by comparing the dissimilarity between the reconstructed clear sub-image and the ground-truth uncompressed sub-image. In step S340, the reconstruction error is back-propagated to the convolutional neural network system to adjust the weights of the connections between the neurons of the convolutional neural network system. Steps S310-S340 are repeated until the average reconstruction error falls below a predetermined threshold, for example, half of the mean squared error between the lossy-compressed sub-images and the ground-truth uncompressed sub-images in the predetermined training set.
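The S310-S340 loop — sample, reconstruct, compare, back-propagate, repeat until the average error falls below a threshold — can be sketched on a toy one-parameter "network" (gradient descent on a single scalar gain standing in for the weights W1-W4 and biases B1-B4; everything here is illustrative, not the patent's network):

```python
def train(pairs, lr=0.1, threshold=1e-4, max_iters=1000):
    """Fit y ~ w * x by repeating: reconstruct, measure squared error,
    back-propagate the gradient d(err^2)/dw = 2 * err * x."""
    w = 0.0  # stand-in for the network weights
    for _ in range(max_iters):
        total = 0.0
        for x, y in pairs:                  # S310: sample from the training set
            pred = w * x                    # S320: reconstruct
            err = pred - y
            total += err * err              # S330: reconstruction error
            w -= lr * 2 * err * x           # S340: back-propagate and update
        if total / len(pairs) < threshold:  # stop once the mean error is small
            return w
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

With a real convolutional network the update in S340 is computed layer by layer via the chain rule, but the control flow — sample, forward pass, error, backward pass, threshold check — is the same as above.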
Referring to Fig. 8, a system 4000 is shown. The system 4000 includes a memory 402 storing executable components and a processor 404 coupled to the memory 402; the processor 404 executes the executable components to perform the operations of the system 4000. The executable components may include: a feature extraction component 410 configured to extract blocks from the lossy-compressed image and map the extracted blocks to a first set of high-dimensional feature vectors; and a feature enhancement component 420 configured to denoise each high-dimensional feature vector in the first set and map the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors. In addition, the executable components may further include: a mapping component 430 configured to non-linearly map each high-dimensional vector in the second set to a restored block representation; and an aggregation component 440 configured to aggregate the block representations mapped from all high-dimensional vectors in the second set, so as to produce the restored clear image.
In one aspect, the feature extraction component 410 is configured to extract blocks from the lossy-compressed image, non-linearly map each of the extracted blocks to a high-dimensional feature vector, and form the first set of high-dimensional feature vectors from the vectors mapped from all blocks.
In one embodiment, the feature enhancement component 420 is configured to denoise each high-dimensional feature vector in the first set and non-linearly map the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors.
In one embodiment, the feature extraction component 410, the feature enhancement component 420, and the mapping component 430 map vectors based on predetermined first, second, and third parameters, respectively.
According to another embodiment, the executable components further include a comparison component coupled to the aggregation component and configured to sample, from the predetermined training set, a ground-truth uncompressed image corresponding to the lossy-compressed sub-image, and to compare the dissimilarity between the aggregated restored clean image received from the aggregation component and the corresponding ground-truth uncompressed image, so as to produce a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
In one embodiment, the executable components further include a training set preparation component coupled to the comparison component. The training set preparation component further includes: a cropper configured to randomly crop multiple sub-images from randomly selected training images to generate a set of ground-truth uncompressed sub-images; a lossy-compressed sub-image generator in electrical communication with the cropper and configured to generate a set of lossy-compressed sub-images based on the ground-truth uncompressed sub-images received from the cropper; a pairing module in electrical communication with the cropper and the generator and configured to pair each ground-truth uncompressed sub-image with its corresponding lossy-compressed sub-image; and a collector in electrical communication with the pairing module and configured to collect the paired ground-truth uncompressed and lossy-compressed sub-images to form the predetermined training set.
In one embodiment, the lossy-compressed sub-image generator further includes a compression module in electrical communication with the cropper and the generator and configured to encode and decode the ground-truth sub-images with a lossy encoder and decoder, so as to generate the set of lossy-compressed sub-images.
In contrast to existing methods, the method of the present application does not explicitly learn dictionaries or manifolds for modeling the block domain; these are implicitly realized inside the convolutional layers. Moreover, feature extraction, feature enhancement, and aggregation are also formulated as convolutional layers and are therefore involved in the optimization. The method and apparatus of the present application address different types of compression artifacts and provide effective reduction of various compression artifacts across different image regions. In the method and apparatus of the present application, the entire convolutional neural network is obtained purely through training, with no pre-processing or post-processing required. Owing to its lightweight structure, the apparatus and method of the present application achieve performance superior to the prior art.
Embodiments within the scope of the present invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or combinations thereof. Apparatuses within the scope of the present invention can be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps within the scope of the present invention can be performed by a programmable processor executing a program of instructions, so as to perform the functions of the present invention by operating on input data and generating output.
Embodiments within the scope of the present invention can advantageously be implemented as one or more computer programs executing on a programmable system that includes at least one programmable processor, at least one input device, and at least one output device, the programmable processor being coupled to receive data and instructions from a data storage system and to transmit data and instructions to the data storage system. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or, if desired, in assembly or machine language; in any case, the language can be a compiled or interpreted language. Suitable processors include, for example, general-purpose and special-purpose microprocessors. In general, a processor receives instructions and data from read-only memory and/or random-access memory, and a computer includes one or more mass storage devices for storing data files.
Embodiments within the scope of the present invention include computer-readable media for carrying or storing computer-executable instructions, computer-readable instructions, or data structures. Such computer-readable media can be any available media accessible by a general-purpose or special-purpose computer system. Examples of computer-readable media may include physical storage media such as RAM, ROM, EPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of computer-executable instructions, computer-readable instructions, or data structures and that can be accessed by a general-purpose or special-purpose computer system. Any of the above can be supplemented by, or incorporated in, an ASIC (application-specific integrated circuit). Although particular embodiments of the present invention have been shown and described, these embodiments can be changed and modified without departing from the true scope of the present invention.
Although preferred embodiments of the present invention have been described, those skilled in the art, once apprised of the basic inventive concept, may make variations or modifications to these embodiments. The appended claims are intended to be construed as covering the preferred embodiments and all variations or modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make variations or modifications to the present invention without departing from its spirit and scope. Accordingly, if such variations or modifications fall within the scope of the claims and their equivalents, they also fall within the scope of the present invention.

Claims (20)

1. An apparatus for reducing the compression artifacts of a lossy-compressed image, comprising:
a feature extraction device comprising a first set of filters, the first set of filters being configured to extract blocks from the lossy-compressed image and map the extracted blocks to a first set of high-dimensional feature vectors;
a feature enhancement device in electrical communication with the feature extraction device and comprising a second set of filters, the second set of filters being configured to denoise each high-dimensional feature vector in the first set of high-dimensional feature vectors and map the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors;
a mapping device electrically coupled to the feature enhancement device and comprising a third set of filters, the third set of filters being configured to non-linearly map each high-dimensional feature vector in the second set of high-dimensional feature vectors to a restored block representation; and
an aggregation device in electrical communication with the mapping device and configured to aggregate the restored block representations obtained by mapping all high-dimensional feature vectors in the second set of high-dimensional feature vectors, so as to produce a restored clear image, wherein the size of the restored clear image is the same as the size of the lossy-compressed image.
2. The apparatus according to claim 1, wherein the first set of filters is configured to extract blocks from the lossy-compressed image and non-linearly map each extracted block to a high-dimensional feature vector, the vectors mapped from all blocks forming the first set of high-dimensional feature vectors.
3. The apparatus according to claim 1, wherein the second set of filters is configured to denoise each high-dimensional feature vector in the first set of high-dimensional feature vectors and non-linearly map the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors.
4. The apparatus according to any one of claims 1-3, wherein the first, second, and third sets of filters perform their mappings based on predetermined first, second, and third parameters, respectively.
5. The apparatus according to claim 4, further comprising:
a comparison device electrically coupled to the aggregation device and configured to sample, from a predetermined training set, a ground-truth uncompressed image corresponding to the lossy-compressed image, and to compare the dissimilarity between the aggregated restored clear image received from the aggregation device and the corresponding ground-truth uncompressed image, so as to produce a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
6. The apparatus according to claim 5, further comprising a training set preparation device electrically coupled to the comparison device, wherein the training set preparation device further comprises:
a cropper configured to randomly crop multiple sub-images from randomly selected training images, to generate a set of ground-truth uncompressed sub-images;
a lossy-compressed sub-image generator in electrical communication with the cropper and configured to generate a set of lossy-compressed sub-images based on the set of ground-truth uncompressed sub-images received from the cropper;
a pairing unit in electrical communication with the cropper and the lossy-compressed sub-image generator and configured to pair the ground-truth uncompressed sub-images with the corresponding lossy-compressed sub-images; and
a collector in electrical communication with the pairing unit and configured to collect the paired ground-truth uncompressed and lossy-compressed sub-images, so as to generate the predetermined training set.
7. The apparatus according to claim 6, wherein the lossy-compressed sub-image generator further comprises a compression device in electrical communication with the cropper and configured to encode and decode the ground-truth uncompressed sub-images with a lossy encoder and decoder, so as to generate the lossy-compressed sub-images.
8. The apparatus according to claim 5, wherein the reconstruction error comprises a mean squared error.
9. A method for reducing the compression artifacts of a lossy-compressed image, comprising:
extracting blocks from the lossy-compressed image by a feature extraction device comprising a first set of filters, and mapping the extracted blocks to a first set of high-dimensional feature vectors;
denoising each high-dimensional feature vector in the first set of high-dimensional feature vectors by a feature enhancement device in electrical communication with the feature extraction device and comprising a second set of filters, and mapping the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors;
non-linearly mapping each high-dimensional feature vector in the second set of high-dimensional feature vectors to a restored block representation by a mapping device electrically coupled to the feature enhancement device and comprising a third set of filters; and
aggregating, by an aggregation device in electrical communication with the mapping device, the restored block representations obtained by mapping all high-dimensional feature vectors in the second set of high-dimensional feature vectors, so as to produce a restored clear image, wherein the size of the restored clear image is the same as the size of the lossy-compressed image.
10. The method according to claim 9, wherein extracting blocks from the lossy-compressed image and mapping the extracted blocks to a first set of high-dimensional feature vectors further comprises:
extracting blocks from the lossy-compressed image and non-linearly mapping each extracted block to a high-dimensional feature vector, wherein the vectors obtained by mapping all blocks form the first set of high-dimensional feature vectors.
11. The method according to claim 9, wherein denoising each high-dimensional feature vector in the first set of high-dimensional feature vectors and mapping the denoised high-dimensional feature vectors to a second set of high-dimensional feature vectors further comprises:
denoising each high-dimensional feature vector in the first set of high-dimensional feature vectors and non-linearly mapping the denoised high-dimensional feature vectors to the second set of high-dimensional feature vectors.
12. The method according to any one of claims 9-11, wherein the first, second, and third sets of filters perform their mappings based on predetermined first, second, and third parameters, respectively.
13. The method according to claim 12, further comprising, after the aggregating:
sampling, from a predetermined training set, a ground-truth uncompressed image corresponding to the lossy-compressed image; and
comparing the dissimilarity between the aggregated restored clear image and the corresponding ground-truth uncompressed image to produce a reconstruction error, wherein the reconstruction error is back-propagated to optimize the first, second, and third parameters.
14. The method according to claim 13, further comprising, before sampling the ground-truth uncompressed image corresponding to the lossy-compressed image from the predetermined training set:
randomly cropping multiple sub-images from randomly selected training images, to generate a set of ground-truth uncompressed sub-images;
generating a set of lossy-compressed sub-images based on the set of ground-truth uncompressed sub-images;
pairing each ground-truth uncompressed sub-image with its corresponding lossy-compressed sub-image; and
collecting the paired ground-truth uncompressed and lossy-compressed sub-images, so as to generate the predetermined training set.
15. The method according to claim 14, wherein generating a set of lossy-compressed sub-images based on the set of ground-truth uncompressed sub-images further comprises:
encoding and decoding the ground-truth uncompressed sub-images with a lossy encoder and decoder, so as to generate the set of lossy-compressed sub-images.
16. The method according to claim 13, wherein the reconstruction error comprises a mean squared error.
17. An apparatus for reducing compression artifacts of a lossy-compressed image, the apparatus comprising:
a reconstruction unit configured to reconstruct the lossy-compressed image into a recovered clear image based on predetermined parameters, wherein the reconstruction unit comprises:
a feature extraction device comprising a first set of filters, the first set of filters being configured to extract patches from the lossy-compressed image and map the extracted patches into a first set of high-dimensional feature vectors;
a feature enhancement device in electrical communication with the feature extraction device and comprising a second set of filters, the second set of filters being configured to denoise each high-dimensional feature vector in the first set of high-dimensional feature vectors and map the denoised high-dimensional feature vectors into a second set of high-dimensional feature vectors;
a mapping device electrically coupled to the feature enhancement device and comprising a third set of filters, the third set of filters being configured to non-linearly map each high-dimensional vector in the second set of high-dimensional feature vectors into a recovered patch representation; and
an aggregation device in electrical communication with the mapping device and configured to aggregate the recovered patch representations mapped from all high-dimensional vectors in the second set of high-dimensional feature vectors to generate the recovered clear image, wherein a size of the recovered clear image is identical to a size of the lossy-compressed image;
wherein the feature extraction device, the feature enhancement device, the mapping device and the aggregation device each comprise at least one convolutional layer, and the convolutional layers are connected in sequence to form a convolutional neural network system; and
a training unit in electrical communication with the reconstruction unit and configured to train the convolutional neural network system with a predetermined training set to update the predetermined parameters used by the reconstruction unit.
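The four-stage pipeline of claim 17 (extraction, enhancement, mapping, aggregation, each built from convolutional layers) can be sketched as a toy forward pass. This is an illustration only, not the patented network: it uses a single 3x3 filter per stage, whereas the claimed apparatus uses a *set* of filters (many channels) at each stage; zero padding keeps the restored image the same size as the input, as the claim requires. All names and sizes are hypothetical:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Single-channel 2D convolution with zero padding, so the output
    keeps the spatial size of the input."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linearity between the convolutional stages."""
    return np.maximum(x, 0.0)

rng = np.random.default_rng(1)
# One illustrative 3x3 filter per stage (untrained random weights).
extraction, enhancement, mapping, aggregation = (
    rng.standard_normal((3, 3)) * 0.1 for _ in range(4))

lossy = rng.standard_normal((12, 12))          # stand-in lossy-compressed image
f1 = relu(conv2d_same(lossy, extraction))      # feature extraction device
f2 = relu(conv2d_same(f1, enhancement))        # feature enhancement / denoising
f3 = relu(conv2d_same(f2, mapping))            # non-linear mapping device
restored = conv2d_same(f3, aggregation)        # aggregation into the clear image
```

The training unit of claim 17 would fit the four filter banks by back-propagating the reconstruction error of claim 13 through this chain.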
18. A system for reducing compression artifacts of a lossy-compressed image, comprising:
a memory storing executable components; and
a processor electrically coupled to the memory and configured to execute the executable components to perform operations of the system, wherein the executable components comprise:
a feature extraction component configured to extract patches from the lossy-compressed image and map the extracted patches into a first set of high-dimensional feature vectors;
a feature enhancement component configured to denoise each high-dimensional feature vector in the first set of high-dimensional feature vectors and map the denoised high-dimensional feature vectors into a second set of high-dimensional feature vectors;
a mapping component configured to non-linearly map each high-dimensional vector in the second set of high-dimensional feature vectors into a recovered patch representation; and
an aggregation component configured to aggregate the recovered patch representations mapped from all high-dimensional vectors in the second set of high-dimensional feature vectors to generate a recovered clear image, wherein a size of the recovered clear image is identical to a size of the lossy-compressed image.
19. The system according to claim 18, wherein the feature extraction component is configured to extract patches from the lossy-compressed image and non-linearly map each of the extracted patches into a high-dimensional feature vector, the vectors mapped from all the patches forming the first set of high-dimensional feature vectors.
20. The system according to claim 18, wherein the feature enhancement component is configured to denoise each high-dimensional feature vector in the first set of high-dimensional feature vectors and non-linearly map the denoised high-dimensional feature vectors into the second set of high-dimensional feature vectors.
CN201580075726.4A 2015-02-13 2015-02-13 Method and apparatus for reducing compression artifacts of a lossy-compressed image Active CN107251053B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/000093 WO2016127271A1 (en) 2015-02-13 2015-02-13 An apparatus and a method for reducing compression artifacts of a lossy-compressed image

Publications (2)

Publication Number Publication Date
CN107251053A CN107251053A (en) 2017-10-13
CN107251053B true CN107251053B (en) 2018-08-28

Family

ID=56614081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580075726.4A 2015-02-13 2015-02-13 Active CN107251053B (en) Method and apparatus for reducing compression artifacts of a lossy-compressed image

Country Status (2)

Country Link
CN (1) CN107251053B (en)
WO (1) WO2016127271A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871306B (en) * 2016-09-26 2021-07-06 Beijing Eyecool Technology Co., Ltd. Method and device for denoising a picture
CN109120937B (en) * 2017-06-26 2020-03-27 Hangzhou Hikvision Digital Technology Co., Ltd. Video encoding method, decoding method, device and electronic equipment
CN109151475B (en) * 2017-06-27 2020-03-27 Hangzhou Hikvision Digital Technology Co., Ltd. Video encoding method, decoding method, device and electronic equipment
CN108765338A (en) * 2018-05-28 2018-11-06 Xihua University Space target image restoration method based on convolutional auto-encoder convolutional neural network
CN109801218B (en) * 2019-01-08 2022-09-20 Nanjing University of Science and Technology Multispectral remote sensing image pan-sharpening method based on multilayer coupled convolutional neural network
CN111986278B (en) * 2019-05-22 2024-02-06 Fujitsu Ltd. Image encoding device, probability model generating device, and image compression system
US20230188759A1 (en) * 2021-12-14 2023-06-15 Spectrum Optix Inc. Neural Network Assisted Removal of Video Compression Artifacts

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103475876A (en) * 2013-08-27 2013-12-25 Beijing University of Technology Learning-based low-bit-rate compression image super-resolution reconstruction method
CN103517022A (en) * 2012-06-29 2014-01-15 Huawei Technologies Co., Ltd. Image data compression and decompression method and device
CN103975587A (en) * 2011-11-07 2014-08-06 Canon Inc. Method and device for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7227980B2 (en) * 2002-12-19 2007-06-05 Agilent Technologies, Inc. Systems and methods for tomographic reconstruction of images in compressed format
US7747082B2 (en) * 2005-10-03 2010-06-29 Xerox Corporation JPEG detectors and JPEG image history estimators
US8223837B2 (en) * 2007-09-07 2012-07-17 Microsoft Corporation Learning-based image compression
US8238675B2 (en) * 2008-03-24 2012-08-07 Microsoft Corporation Spectral information recovery for compressed image restoration with nonlinear partial differential equation regularization


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A Contrast Enhancement Framework with JPEG Artifacts Suppression";Yu Li ET AL;《ECCV》;20141231;全文 *
"Image Hallucination with Feature Enhancement";Zhiwei Xiong ET AL;《IEEE》;20091231;第2078页 *
"ImageNet Classification with Deep Convolutional Neural Networks";Alex Krizhevsky ET AL;《In: NIPS》;20121231;全文 *
"Learning a Deep Convolutional Network for Image Super-Resolution";Chao Dong ET AL;《In ECCV》;20141231;第187-189页 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant