CN102726045A - Spatial prediction technique for video coding - Google Patents

Spatial prediction technique for video coding

Info

Publication number
CN102726045A
CN102726045A CN2011800070278A CN201180007027A
Authority
CN
China
Prior art keywords
pixel
threshold value
current block
threshold
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011800070278A
Other languages
Chinese (zh)
Other versions
CN102726045B (en)
Inventor
D. Thoreau
A. Martin
E. Francois
J. Vieron
P. Bordes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
International Digital Madison Patent Holding SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Publication of CN102726045A publication Critical patent/CN102726045A/en
Application granted granted Critical
Publication of CN102726045B publication Critical patent/CN102726045B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — Electricity; H04 — Electric communication technique; H04N — Pictorial communication, e.g. television; H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/105 — Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/11 — Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/176 — the coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/18 — the coding unit being a set of transform coefficients
    • H04N19/182 — the coding unit being a pixel
    • H04N19/192 — the adaptation method, adaptation tool or adaptation type being iterative or recursive
    • H04N19/48 — compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
    • H04N19/50 — predictive coding
    • H04N19/52 — Processing of motion vectors by predictive encoding
    • H04N19/593 — predictive coding involving spatial prediction techniques
    • H04N19/60 — transform coding
    • H04N19/61 — transform coding in combination with predictive coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to a method for coding a block of pixels comprising the following steps: determining (10), for each pixel of the current block, a prediction pixel by thresholding, with a current threshold value, of coefficients resulting from a transformation applied on a window covering at least the pixel of the current block and by inverse transformation applied to the thresholded coefficients, - extracting (12) from the current block a prediction block formed of prediction pixels to generate a residue block, and - coding (14) said residue block. According to the invention, the current threshold value is determined or coded from neighbouring reconstructed pixels of the current block.

Description

Spatial prediction technique for video coding
Technical field
The present invention relates to the general field of image coding.
More specifically, it relates to a method for coding an image block and a method for decoding an image block.
Background art
It is well known in the prior art that, to code an image of a video sequence, the image is divided into blocks of pixels and each block is then coded by spatial prediction (INTRA mode) or temporal prediction (INTER mode). Coding a current block generally comprises transforming the difference between the pixels of the current block and the pixels of a prediction block into a block of coefficients, for example by a DCT (discrete cosine transform). The coding further comprises quantizing the coefficients and then entropy coding the quantized coefficients.
It is also well known in the prior art to code a current block in INTRA mode, i.e. by spatial prediction, by predicting it from the image data of previously coded spatially neighbouring blocks. For example, in the video coding standard H.264, the current block is predicted from the pixels situated above it or to its left. More precisely, the pixels of the current block are predicted by a linear combination of the pixels neighbouring the current block, according to a preferred prediction direction (e.g. horizontal, vertical, etc.). The prediction pixels, i.e. the pixels obtained by linear combination of the neighbouring pixels of the current block, form the prediction block. This prediction method is particularly effective when the current block to be predicted contains a contour: if the edge of an object corresponds to one of the prediction directions defined by H.264, the contour is simply extended into the block to be predicted in a unidirectional way. However, this prediction method loses its efficiency in the presence of two-dimensional structures.
Summary of the invention
The object of the invention is to overcome at least one drawback of the prior art. To this end, the invention relates to a method for coding, by spatial prediction, a block of pixels referred to as the current block. The coding method according to the invention comprises the following steps:
-determining, for each pixel of the current block, a prediction pixel by thresholding, with a current threshold value, the coefficients resulting from a transform applied on a window covering at least that pixel of the current block, and by applying the inverse transform to the thresholded coefficients;
-extracting from the current block the prediction block formed by the prediction pixels so as to generate a residue block; and
-coding the residue block.
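Read as data flow, the last two steps reduce to a subtraction at the coder and an addition at the decoder around the prediction block. The following is a minimal sketch in Python under stated assumptions: the prediction block of the first step is supplied as an already-computed input, actual residue coding (transform, quantization, entropy coding) is left as a placeholder, and all names are illustrative rather than from the patent.

```python
def code_block(current_block, prediction_block):
    # "Extract" step: subtract the prediction block from the current block,
    # pixel by pixel, to form the residue block.
    residue = [[c - p for c, p in zip(cur_row, pred_row)]
               for cur_row, pred_row in zip(current_block, prediction_block)]
    # "Code" step (placeholder): a real coder would transform, quantize
    # and entropy code the residue here.
    return residue

def reconstruct_block(residue, prediction_block):
    # Decoder side: merge the decoded residue block with the prediction block.
    return [[r + p for r, p in zip(res_row, pred_row)]
            for res_row, pred_row in zip(residue, prediction_block)]
```

Because the decoder rebuilds the same prediction block from already-reconstructed pixels, `reconstruct_block(code_block(cur, pred), pred)` returns the original block when the residue coding is lossless.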
Advantageously, the current threshold value is determined or coded from reconstructed pixels neighbouring the current block. Determining or coding the current threshold value from the reconstructed pixels neighbouring the current block improves coding efficiency.
According to a specific embodiment, the step of determining a prediction pixel is repeated for each pixel of the current block with each threshold value of a plurality of threshold values. The method also comprises selecting, among the plurality of threshold values, as the current threshold value the threshold value that minimizes the prediction error calculated between the current block and the prediction block.
According to a particular aspect of the invention, the current threshold value is coded differentially with respect to a prediction threshold value, this prediction value depending on the reconstructed pixels neighbouring the current block.
According to a particular characteristic, the prediction threshold value is equal to the mean of the threshold values used in the blocks neighbouring the current block.
According to a first variant, the prediction threshold value is equal to the median of the threshold values used in the blocks neighbouring the current block.
According to a second variant, the prediction threshold value is determined according to the following steps:
-determining, for each reconstructed pixel in a neighbourhood of the current block, a prediction pixel by thresholding, with a threshold value, the coefficients resulting from a transform applied on a window covering at least that reconstructed pixel, and by applying the inverse transform to the thresholded coefficients;
-repeating the step of determining a prediction pixel for each pixel of the neighbourhood with each threshold value of a plurality of threshold values; and
-selecting, among the plurality of threshold values, as the prediction threshold value the threshold value that minimizes the prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding prediction pixels.
According to another embodiment, the current threshold value is determined according to the following steps:
-determining, for each reconstructed pixel in a neighbourhood of the current block, a prediction pixel by thresholding, with a threshold value, the coefficients resulting from a transform applied on a window covering at least that reconstructed pixel, and by applying the inverse transform to the thresholded coefficients;
-repeating the step of determining a prediction pixel for each pixel of the neighbourhood with each threshold value of a plurality of threshold values; and
-selecting, among the plurality of threshold values, as the current threshold value the threshold value that minimizes the prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding prediction pixels.
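The selection loop above — try every candidate threshold on the reconstructed neighbourhood and keep the one with the smallest prediction error — can be sketched as follows. This is a hedged illustration in Python, not the patent's implementation: `predict_with(th)` stands for the transform/threshold/inverse-transform predictor applied to the neighbourhood pixels, the prediction error is taken as a sum of squared differences, and all names are assumptions.

```python
def select_threshold(candidates, neighbourhood, predict_with):
    # For each candidate threshold, predict the reconstructed neighbourhood
    # and measure the prediction error; keep the minimizing threshold.
    def error(th):
        predicted = predict_with(th)
        return sum((y - p) ** 2 for y, p in zip(neighbourhood, predicted))
    return min(candidates, key=error)

# Toy predictor: larger thresholds pull every pixel towards the mean,
# mimicking the smoothing effect of discarding more coefficients.
pixels = [10.0, 12.0, 11.0]
def toy_predictor(th):
    mean = sum(pixels) / len(pixels)
    return [x + min(th, 1.0) * (mean - x) for x in pixels]
```

Here `select_threshold([0.0, 0.5, 1.0], pixels, toy_predictor)` picks 0.0, since the toy predictor is exact at a zero threshold; on real content the minimum typically sits at an intermediate value.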
Advantageously, for each reconstructed pixel of the neighbourhood of the current block, the prediction error calculated between that reconstructed pixel and the corresponding prediction pixel takes into account its distance with respect to the edge of the current block.
According to another aspect of the invention, when a current threshold value has been selected for a current block of size 8x8, a current threshold value is calculated for each 4x4 block of the current block by multiplying the selected current threshold value by a factor alpha strictly less than 1.
Advantageously, the size of the window depends on the position, within the current block, of the pixel to be predicted.
The invention also relates to a method for decoding, by spatial prediction, a current block of pixels, which comprises the following steps:
-decoding a residue block;
-determining, for each pixel of the current block, a prediction pixel by thresholding, with a current threshold value, the coefficients resulting from a transform applied on a window covering at least that pixel of the current block, and by applying the inverse transform to the thresholded coefficients; and
-reconstructing the current block by merging the decoded residue block and the prediction block formed by the prediction pixels.
Advantageously, the current threshold value is determined from reconstructed pixels neighbouring the current block.
According to a specific embodiment, the current threshold value is determined according to the following steps:
-determining, for each reconstructed pixel in a neighbourhood of the current block, a prediction pixel by thresholding, with a threshold value, the coefficients resulting from a transform applied on a window covering at least that reconstructed pixel, and by applying the inverse transform to the thresholded coefficients;
-repeating the step of determining a prediction pixel for each pixel of the neighbourhood with each threshold value of a plurality of threshold values; and
-selecting, among the plurality of threshold values, as the current threshold value the threshold value that minimizes the prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding prediction pixels.
According to another specific embodiment, the decoding method according to the invention also comprises the following steps:
-decoding a threshold value difference;
-determining a prediction threshold value from reconstructed pixels neighbouring the current block; and
-calculating the sum of this difference and this prediction threshold value, this sum being the current threshold value.
Advantageously, the prediction threshold value is determined according to the following steps:
-determining, for each reconstructed pixel in a neighbourhood of the current block, a prediction pixel by thresholding, with a threshold value, the coefficients resulting from a transform applied on a window covering at least that reconstructed pixel, and by applying the inverse transform to the thresholded coefficients;
-repeating the step of determining a prediction pixel for each pixel of the neighbourhood with each threshold value of a plurality of threshold values; and
-selecting, among the plurality of threshold values, as the prediction threshold value the threshold value that minimizes the prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding prediction pixels.
Description of drawings
The invention will be better understood and illustrated by means of embodiments and advantageous implementations, by no means restrictive, with reference to the accompanying figures, in which:
-Figure 1 shows a coding method according to the invention;
-Figure 2 shows a part of an image comprising a block to be predicted and a window used to predict this block;
-Figures 3 and 4 show in detail a step of the coding method according to the invention;
-Figures 5 and 6 show a part of an image comprising a block to be predicted and different windows used to predict this block;
-Figure 7 shows a part of an image comprising a block to be predicted, a causal band Zc adjacent to this block, and a window used to predict the pixels of this causal band;
-Figure 8 shows a decoding method according to the invention;
-Figure 9 shows a coding device according to the invention; and
-Figure 10 shows a decoding device according to the invention.
Embodiment
An image comprises pixels or image points, each of which is associated with at least one item of image data. An item of image data is, for example, an item of luminance data or an item of chrominance data.
The term "residue" designates the data obtained after extraction of other data. The extraction generally consists in subtracting the prediction pixels from the source pixels. However, the extraction is more general and notably comprises weighted subtraction.
The term "reconstruct" designates the data (e.g. pixels, blocks) obtained after merging residues with prediction data. The merging generally consists in adding the residue to the prediction pixels. However, the merging is more general and notably comprises weighted summation. A reconstructed block is a block of reconstructed pixels.
With regard to image decoding, the terms "reconstruct" and "decode" are very often used as synonyms. Thus, a "reconstructed block" is also designated by the term "decoded block".
The invention relates to a method for coding, by spatial prediction, a block of pixels referred to as the current block. It applies to the coding of an image or of a sequence of images. The coding method according to the invention is based on a signal-extrapolation method described in the following document: Guleryuz, O.G., "Nonlinear approximation based image recovery using adaptive sparse reconstructions and iterated denoising", IEEE Transactions on Image Processing, vol. 15, no. 3, March 2006, pages 539-571. This extrapolation method was initially used for error-masking purposes.
A method for coding a current block of an image according to the invention is described below with reference to Figure 1.
During step 10, a prediction pixel is determined for each pixel of the current block. The prediction pixels form the prediction block of the current block. A prediction pixel is obtained by thresholding, with a current threshold value, the coefficients resulting from a transform applied on a window covering at least the pixel of the current block to be predicted. The window corresponds to the support of the transform. The transform used is, for example, a DCT, but the invention is not limited to the latter: other transforms, such as the DFT, can also be used. According to the invention, the current threshold value is determined or coded from reconstructed pixels neighbouring the current block, which improves coding efficiency.
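The transform/threshold/inverse-transform operator at the heart of step 10 can be sketched as follows. This is a hedged illustration under stated assumptions — a square window, an orthonormal 2-D DCT built by matrix multiplication, and hard thresholding — using NumPy for brevity; the function names are illustrative, not from the patent.

```python
import numpy as np

def dct_matrix(n):
    # Rows are the orthonormal DCT-II basis vectors of length n.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

def threshold_predict(window, th):
    # Separable 2-D DCT of the window, hard thresholding of the
    # coefficients, then the inverse transform back to the pixel domain.
    d = dct_matrix(window.shape[0])
    coeffs = d @ window @ d.T
    coeffs[np.abs(coeffs) < th] = 0.0   # keep only significant coefficients
    return d.T @ coeffs @ d             # inverse of an orthonormal transform
```

With `th = 0` the operator is the identity (the DCT is orthonormal); in the patent's scheme, the value this operator yields at the position of the pixel to be predicted becomes its prediction pixel.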
During step 12, the prediction block formed by the prediction pixels is extracted from the current block so as to generate a residue block.
During step 14, the residue block is coded in a stream S. For example, the residue block is transformed into a block of coefficients, e.g. by a DCT or a wavelet transform, which is then quantized and coded by entropy coding. According to a variant, the residue block is only quantized and then coded by entropy coding.
The step 10 of determining the prediction pixels is described more precisely below with reference to Figures 2 and 3. In Figure 2, the prediction pixel p0,0 corresponds to the top-left pixel of the block B to be predicted. The pixels marked with a cross in Figure 2 are known pixels, i.e. reconstructed pixels. The prediction pixel p0,0 is to be given a value representative of its environment. The window F, at its initial position F0,0, covers at least the pixel p0,0 to be predicted. It is to this window that the transform is applied in step 10.
During step 100, an initial value is assigned to the pixel p0,0. As a simple example, the mean of the neighbouring pixels, denoted pav0,0, is assigned to p0,0: pav0,0 = (a+b+c)/3. According to a variant, the median of the pixels a, b and c is assigned to p0,0. According to another variant, one of the values a, b or c is assigned to p0,0. According to other variants, other pixels situated in the causal neighbourhood of the pixel p0,0 are taken into account to determine the initial value of p0,0. The causal neighbourhood of a current pixel comprises the group of pixels of the current image that are already reconstructed at the time the current pixel is coded (respectively decoded).
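The initialisation variants above (mean, median, or a single neighbour) amount to a one-line choice each. The sketch below, in Python with illustrative names, takes `a`, `b` and `c` to be the reconstructed left, top and diagonal neighbours of the pixel, as in Figure 2; the `mode` labels are assumptions for the sake of the example.

```python
def initial_value(a, b, c, mode="mean"):
    # a, b, c: reconstructed neighbours of the pixel to be predicted.
    if mode == "mean":
        return (a + b + c) / 3.0
    if mode == "median":
        return sorted((a, b, c))[1]
    if mode in ("a", "b", "c"):        # take one neighbour directly
        return {"a": a, "b": b, "c": c}[mode]
    raise ValueError("unknown mode: %s" % mode)
```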
During step 110, the transform is applied to the pixels of the window F. These pixels are thus transformed into coefficients.
During step 120, the coefficients are thresholded in the transform domain with a threshold value th_opt. This thresholding has the effect of eliminating noise so as to retain only the significant coefficients.
During step 130, the inverse of the transform applied in step 110 is applied so as to return to the pixel domain and recover the new predicted pixel value, denoted p_tr.0,0^(0,0), where the superscript indices (0,0) correspond to the zero offset of the rows and columns of the window F with respect to its initial position.
With reference to Figures 4 and 5, the same method as the one described with reference to Figure 3 is applied with an offset window F, so as to determine prediction pixels for the other pixels of the current block B. The prediction pixels p0,0 to pn-1,m-1 are predicted iteratively. With respect to the position F0,0 of the window F in Figure 3, the window is shifted one pixel to the right, to position F0,1, so as to determine the prediction pixel p0,1 corresponding to the pixel of the current block situated just to the right of the pixel p0,0. In Figure 5, a, b and c are the reconstructed pixels neighbouring the pixel p0,1 to be predicted, situated respectively to its left, above it and diagonally from it; more particularly, in the present case, a is equal to the value interpolated for the pixel p0,0 in the preceding iteration, i.e. p_tr.0,0^(0,0).
According to a variant, the window at position F0,0 is shifted two pixels to the right, i.e. moved to position F0,2. In this case, the pixels p0,1 and p0,2 are predicted during the second iteration. More generally, the window F can be shifted by m pixels. It is advantageous for the value of m to remain small so that the prediction is not degraded too much.
During step 100, the mean of the neighbouring pixels, denoted pav0,1, is assigned to p0,1, for example pav0,1 = (a+b+c)/3. The variants described above for the pixel p0,0 in step 100 can also be applied.
During step 110, the transform is applied to the pixels of the window F0,1. These pixels are thus transformed into coefficients.
During step 120, the coefficients are thresholded in the transform domain with the threshold value th_opt. This thresholding has the effect of eliminating noise so as to retain only the significant coefficients.
During step 130, the inverse of the transform applied in step 110 is applied so as to return to the pixel domain and recover the new predicted pixel value, denoted p_tr.0,1^(0,1), where the superscript indices (0,1) indicate that the window F has been shifted by 0 rows and 1 column. As shown in Figure 5, the pixel p0,0 is included in the window F at position F0,1. Consequently, while the prediction pixel p0,1 is being calculated, a new value is also calculated for the pixel p0,0: during the inverse transform, the value p_tr.0,0^(0,1) is assigned to the pixel p0,0. This value p_tr.0,0^(0,1) can differ from the value p_tr.0,0^(0,0) calculated in the preceding iteration, when the window F was not shifted (window at position F0,0). In order to take into account the two values calculated for the pixel p0,0, i.e. p_tr.0,0^(0,0) obtained in the preceding iteration with zero offset of the window F and p_tr.0,0^(0,1) obtained in the current iteration with an offset of 0 rows and 1 column, a new value is assigned to the prediction pixel p0,0. This new value, denoted p_Int.0,0^(0,1), is equal to, for example, the mean of the two values p_tr.0,0^(0,0) and p_tr.0,0^(0,1), i.e. p_Int.0,0^(0,1) = (p_tr.0,0^(0,0) + p_tr.0,0^(0,1)) / 2.
This method is repeated until all the pixels of the block B have been predicted. To this end, during step 140, it is verified whether the current pixel is the last pixel to be predicted. If this is the case, the step of determining the prediction block terminates. Otherwise, the window F is shifted one column to the right if there remain pixels to be predicted in the current row, or shifted down one row and placed again at the start of a row. However, the way the window F is shifted at each iteration is not fixed: it depends on the scanning order defined for the block to be predicted. With reference to the preceding figures, the pixels are scanned pixel by pixel from left to right, then row by row. This scan is not exclusive: a zigzag scan or, for example, a scan of the type 1st row then 1st column, then 2nd row then 2nd column, and so on, is also possible.
Steps 100 to 140 are applied again for the new position of the window. A value p_tr.sk,sl^(sk,sl) is thus determined for the new pixel p_sk,sl to be predicted. New predicted values are also calculated for the pixels of the current block that are included in the window F and for which one or more predicted values were already calculated during previous iterations. For these pixels, as described above with reference to the pixel p0,0, the new predicted value is determined as follows:
p_Int.k,l^(sk,sl) = ( Σ_{p=k..sk} Σ_{q=l..sl} p_tr.k,l^(p,q) ) / ( (sk-k+1)(sl-l+1) ),
where:
- p_tr.k,l^(sk,sl) is the value predicted for the pixel in the k-th row and l-th column of the block to be predicted during the iteration corresponding to the position F_sk,sl of the window F;
- sk and sl are respectively the row and column offsets of the window F;
- p_Int.k,l^(sk,sl) is the value of the prediction pixel at position (k,l), predicted recursively by shifting the window F successively to the positions F_sk,sl.
According to a variant, the weighted sum is replaced by a function of the median or histogram-peak type.
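The combination of the successive predictions of one pixel can be sketched as follows. This is an illustrative reading of the formula above (the "mean" mode) and of the median variant, not code from the patent:

```python
import statistics

def combine_predictions(preds, mode="mean"):
    """Combine the predicted values p_tr obtained for one pixel (k, l)
    at the successive window positions that covered it.
    'mean' corresponds to the normalized double sum of the formula above
    (each covering window contributes one value); 'median' is the stated
    variant replacing the weighted sum."""
    if mode == "mean":
        return sum(preds) / len(preds)
    if mode == "median":
        return statistics.median(preds)
    raise ValueError("unknown mode: " + mode)
```

For example, a pixel covered by three successive windows with predicted values 10, 20 and 30 receives the final value 20 in both modes.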
According to a first embodiment, the threshold th_opt is determined from a causal band Zc, that is, a band comprising reconstructed pixels in the neighbourhood of the current block B, but not necessarily adjacent to this block. This embodiment is described with reference to Figure 7. In this Figure 7, the crosses represent reconstructed pixels. The crosses on a grey background represent the pixels belonging to the causal band Zc. This band Zc is used to determine a threshold for the current block to be predicted. For this purpose, the method described with reference to Figures 4 and 5 is applied to the pixels of this band Zc, so as to determine a predicted pixel for each of them, this predicted pixel being determined for several threshold values th_i. Thus, for each threshold th_i, an energy level is calculated on the band Zc. As a simple illustration, this energy is calculated according to the following formula:
SSE_i = Σ_{p∈Zc} (Y(p) − p_int(p))²,
where:
- p represents the position of a pixel included in the band Zc;
- Y is the value of an item of image data (for example, luma and/or chroma) of the pixel;
- p_int is the predicted value determined for the threshold th_i.
For the band Zc, the threshold th_Zc is determined as the one that generates the minimum prediction energy SSE_i.
According to a variant, the energy is calculated as follows:
SAD_i = Σ_{p∈Bloc_NxM} |Y(p) − p_int(p)|
According to another variant, the energy is calculated as follows:
E_i = Max_{p∈Bloc_NxM} |Y(p) − p_int(p)|
According to another variant, a weighting function is introduced which makes it possible to relativize the prediction errors of the pixels of Zc according to their distance with respect to the block to be predicted. The value of this weighting function thus varies, for example, according to the distance of the pixel with respect to the centre of the block to be predicted, such that:
w_8×8(i, j) = c × ρ^(√((i − 11.5)² + (j − 11.5)²)),
w_4×4(i, j) = c × ρ^(2 × √((i − 5.5)² + (j − 5.5)²)),
where:
- c is a normalization coefficient;
- ρ = 0.8;
- i and j correspond to the coordinates of the weighting coefficient within the weighting window, the centre of the block to be predicted being located at (5.5, 5.5) and (11.5, 11.5) for blocks of dimension 4 × 4 and 8 × 8 respectively;
- the origin (0, 0) is at the top left.
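A minimal sketch of these weighting windows follows. The extents of the windows (12 × 12 and 24 × 24, chosen so that the stated centres (5.5, 5.5) and (11.5, 11.5) fall at the middle of the window) and the choice of c as the coefficient normalizing the weights to sum to 1 are assumptions for illustration; the patent only fixes ρ = 0.8, the centres, and the exponents:

```python
import math

def weight_window(n, centre, factor, rho=0.8):
    """n x n weighting window with origin (0, 0) at the top left:
       w(i, j) = c * rho ** (factor * sqrt((i-centre)^2 + (j-centre)^2)).
    The normalization c = 1 / sum(raw weights) is an assumed choice."""
    raw = [[rho ** (factor * math.hypot(i - centre, j - centre))
            for j in range(n)] for i in range(n)]
    c = 1.0 / sum(sum(row) for row in raw)
    return [[c * v for v in row] for row in raw]

w8 = weight_window(24, 11.5, 1.0)  # 8x8 blocks: exponent factor 1
w4 = weight_window(12, 5.5, 2.0)   # 4x4 blocks: exponent factor 2
```

Since ρ < 1, the weights decay with distance from the block centre, so prediction errors far from the block weigh less.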
The threshold value th of current block to be predicted OptEqual th ZcBand Zc can have especially multi-form according to the availability of neighborhood pixels.Equally, the thickness of band can surpass 1 pixel.
According to a second embodiment, the threshold th_opt is determined for the current block by repeating the method described with reference to Figures 4 and 5 with different thresholds th_i and by determining the threshold that minimizes the prediction error calculated between the prediction block and the current block.
Thus, for each threshold th_i, an energy level is calculated. As a simple example, this energy is calculated according to the following formula:
SSE_i = Σ_{p∈Bloc_NxM} (Y(p) − p_int(p))²,
where:
- p represents the position of a pixel included in the block;
- Y is the value of an item of image data (for example, luma and/or chroma) of the pixel of the current block to be predicted;
- p_int is the predicted value determined for the threshold th_i.
Threshold value th OptBe to generate minimum prediction energy SSE iThat.
According to a variant, the energy is calculated as follows:
SAD_i = Σ_{p∈Bloc_NxM} |Y(p) − p_int(p)|.
According to another variant, the energy is calculated as follows:
E_i = Max_{p∈Bloc_NxM} |Y(p) − p_int(p)|
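The selection of the threshold minimizing the energy can be sketched as follows; the helper `predict` (mapping a candidate threshold to a predicted block) and the flat-list representation of blocks are assumptions for illustration:

```python
def select_threshold(block, predict, thresholds, metric="SSE"):
    """Return the candidate threshold th_i whose prediction p_int
    minimizes the energy with respect to the original values Y,
    computed as SSE, SAD or Max (the three stated criteria).
    'block' and predict(th) are flat lists of pixel values."""
    def energy(th):
        diffs = [y - p for y, p in zip(block, predict(th))]
        if metric == "SSE":
            return sum(d * d for d in diffs)
        if metric == "SAD":
            return sum(abs(d) for d in diffs)
        if metric == "Max":
            return max(abs(d) for d in diffs)
        raise ValueError("unknown metric: " + metric)
    return min(thresholds, key=energy)

# Toy predictor for illustration: the prediction is exact when th == 5.
toy_predict = lambda th: [v + (th - 5) for v in [10, 12, 14, 16]]
best = select_threshold([10, 12, 14, 16], toy_predict, [1, 5, 9])
```

The same routine applies to both embodiments: summed over the band Zc it yields th_Zc, summed over the current block it yields th_opt.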
The value of the current threshold th_opt determined according to this second embodiment is either coded directly into the stream S or, advantageously, coded into the stream S as a difference with a prediction threshold th_pred, so as to reduce its coding cost.
For example, the prediction threshold th_pred is equal to the mean value of the thresholds th_opt determined for the blocks that are neighbours of the current block and have already been coded. The block to the left, the block above and the block above-left are considered. According to a variant, the block above-right is also considered.
According to another variant, the prediction threshold th_pred is equal to the median value of the thresholds th_opt determined for the blocks that are neighbours of the current block and have already been coded.
According to another variant, the prediction threshold th_pred is equal to th_Zc. It should be noted that if th_opt is determined according to the first embodiment, then th_opt = th_Zc; in this case the threshold is not coded, since it can be determined on the decoder side from the pixels of Zc in the same way as on the coder side.
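The differential coding of the threshold described above can be sketched as follows (a hypothetical illustration; in the actual codec the difference would be entropy-coded into the stream S):

```python
import statistics

def predict_threshold(neighbour_thresholds, mode="mean"):
    """th_pred from the thresholds th_opt of the already-coded
    neighbouring blocks (left, above, above-left, and optionally
    above-right): mean value, or median value in the variant."""
    if mode == "mean":
        return statistics.mean(neighbour_thresholds)
    return statistics.median(neighbour_thresholds)

def code_threshold(th_opt, th_pred):
    """Coder side: the difference written to the stream S."""
    return th_opt - th_pred

def decode_threshold(diff, th_pred):
    """Decoder side: current threshold = decoded difference + th_pred."""
    return diff + th_pred
```

The decoder recomputes the same th_pred from its own reconstructed neighbours, so only the (small) difference needs to be transmitted.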
According to a specific embodiment, a current block of size 8 × 8 can be predicted in one of the following ways: either a current threshold, denoted th8×8, is determined for the 8 × 8 block and the method described with reference to Figures 4 and 5 is applied with this current threshold; or the 8 × 8 block is divided into four 4 × 4 blocks and the method described with reference to Figures 4 and 5 is applied independently on each 4 × 4 block with a current threshold, denoted th4×4, that is identical for each block, th4×4 being derived from th8×8 according to the following equation: th4×4 = α × th8×8, where α is strictly less than 1.
The invention also relates to a decoding method described with reference to Figure 8.
During step 20, a residual block is decoded for the current block. For example, a part of the stream S is decoded into coefficients. The coefficients are dequantized then, if necessary, transformed by the inverse of the transform used on the coder side. The residual block is thus obtained.
During step 22, a predicted pixel is determined for each pixel of the current block. The predicted pixels form the prediction block of the current block. A predicted pixel is obtained by thresholding, with a current threshold, the coefficients of a transform applied on a window covering at least the pixel of the current block to be predicted. This window corresponds to the support of the transform. The transform used is, for example, a DCT. However, the invention is not limited to the latter; other transforms, such as the DFT, can also be used.
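The core operation, thresholding the transform coefficients of the window and inverting the transform, can be sketched as follows. This is a minimal illustration assuming an orthonormal 2-D DCT and hard thresholding; the initialization of the not-yet-predicted pixels of the window and the pixel-by-pixel iteration are not shown:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix C, so that C @ x @ C.T is the 2-D DCT."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def threshold_predict(window, th):
    """Transform the window, zero the coefficients whose magnitude is
    below the current threshold th, and invert the transform.  The
    predicted pixel is then read at its position in the result."""
    C = dct_matrix(window.shape[0])
    coeffs = C @ window @ C.T           # forward 2-D DCT
    coeffs[np.abs(coeffs) < th] = 0.0   # thresholding
    return C.T @ coeffs @ C             # inverse 2-D DCT
```

On a constant window, for example, only the DC coefficient is non-zero, so any moderate threshold leaves the window unchanged, while a very large threshold zeroes everything.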
During step 24, the current block is reconstructed by merging the prediction block with the decoded residual block.
The step 22 of determining the predicted pixels is identical to step 10 of the coding method. However, on the decoder side, the threshold th_opt is either decoded from the stream S, in the case where it was coded into this stream on the coder side, or determined directly from the reconstructed pixels in the neighbourhood of the current block.
According to a first embodiment, th_opt is decoded from the stream, either directly or, in the case where a difference (differential) with a prediction value th_pred was coded, by adding the decoded value to this prediction value. For example, the prediction threshold th_pred is equal to the mean value of the thresholds th_opt determined for the blocks that are neighbours of the current block and have already been coded. For example, the block to the left, the block above and the block above-left are considered. According to a variant, the block above-right is also considered.
According to another variant, the prediction threshold th_pred is equal to the median value of the thresholds th_opt determined for the blocks that are neighbours of the current block and have already been coded.
According to another variant, the prediction threshold th_pred is equal to th_Zc, where th_Zc is determined as described on the coder side with reference to Figure 7.
According to a second embodiment, the threshold th_opt is determined from the reconstructed pixels of the band Zc, in the same way as described on the coder side with reference to Figure 7. In this case, th_opt is equal to th_Zc.
According to a specific embodiment that can be applied to both the coding and decoding methods, the size of the window F depends on the position of the pixel to be predicted in the current block, as shown in Figure 6. In this figure, the window at position F_{0,0} has a smaller size than the window at position F_{N-1,M-1}. This has the advantage of improving the relevance of the prediction block. For example, when the current block has a size of 4 × 4, the size of the window is 4 × 4 for the pixels located on the top and left edges of the current block, that is, the pixels of the 1st row and the 1st column, while the window has a size of 8 × 8 for the other pixels of the current block. The dimensions of the window or windows used are not limited to powers of 2. Indeed, the invention is not limited to the use of so-called "fast" transforms applied to samples in multiples of 2^N. Moreover, the transform used is not necessarily separable.
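The position-dependent window size of this example can be sketched as a simple rule (an illustrative reading of the 4 × 4 example above, not a general definition from the patent):

```python
def window_size(k, l):
    """Window size for a pixel (k, l) of a 4x4 current block:
    4x4 on the top row or first column (k == 0 or l == 0),
    8x8 for the other pixels of the block."""
    return 4 if k == 0 or l == 0 else 8
```

The edge pixels thus use a smaller transform support, since fewer reconstructed neighbours are available around them.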
The invention also relates to a coding device 12 described with reference to Figure 9 and to a decoding device 13 described with reference to Figure 10. In Figures 9 and 10, the modules shown are functional units that may or may not correspond to physically distinguishable units. For example, some of these modules can be grouped together in a single component or constitute functions of the same software. Conversely, some modules can be composed of separate physical entities.
With reference to Figure 9, the coding device 12 receives one or more images on input. The coding device 12 is capable of implementing the coding method according to the invention described with reference to Figure 1. Each image is divided into blocks of pixels, each pixel being associated with at least one item of image data. The coding device 12 notably implements coding with spatial prediction. Only the modules of the coding device 12 relating to coding by spatial prediction, or INTRA coding, are shown in Figure 9. Other modules, not shown and known to those skilled in the art of video coders, implement coding with temporal prediction (for example, motion estimation and motion compensation). The coding device 12 notably comprises a calculation module 1200 capable of subtracting, for example pixel by pixel, a prediction block Pr from a current block B to generate a residual block Bres. The calculation module 1200 is capable of implementing step 12 of the coding method according to the invention. It further comprises a module 1202 capable of transforming the residual block Bres and then quantizing it into quantized data. The transform T is, for example, a discrete cosine transform (or DCT). The coding device 12 further comprises an entropy coding module 1204 capable of coding the quantized data into a coded data stream S. It also comprises a module 1206 performing the inverse operations of module 1202. The module 1206 performs an inverse quantization Q⁻¹ followed by an inverse transform T⁻¹. The module 1206 is connected to a calculation module 1208 capable of merging, for example by pixel-by-pixel addition, the block of data from module 1206 and the prediction block Pr to generate a reconstructed block that is stored in a memory 1210.
A prediction module 1216 determines the prediction block Pr. The prediction module 1216 is capable of implementing step 10 of the coding method according to the invention. Step 14 of this coding method is implemented in the modules 1202 and 1204.
With reference to Figure 10, the decoding device 13 receives on input a coded data stream S representative of an image. The stream S is, for example, transmitted by a coding device 12 via a channel. The decoding device 13 is capable of implementing the decoding method according to the invention described with reference to Figure 8. The decoding device 13 comprises an entropy decoding module 1300 capable of generating decoded data. The decoded data are then transmitted to a module 1302 capable of performing an inverse quantization followed by an inverse transform. The module 1302 is identical to the module 1206 of the coding device 12 that generated the stream S. The module 1302 is connected to a calculation module 1304 capable of merging, for example by pixel-by-pixel addition, the block from module 1302 and a prediction block Pr to generate a reconstructed current block Bc that is stored in a memory 1306. The calculation module 1304 is capable of implementing step 24 of the decoding method. The decoding device 13 also comprises a prediction module 1308. The prediction module 1308 determines the prediction block Pr. The prediction module 1308 is capable of implementing step 22 of the decoding method according to the invention. Step 20 of this decoding method is implemented in the modules 1300 and 1302.
Obviously, the invention is not limited to the embodiments described above. In particular, those skilled in the art can apply any variant to the described embodiments and combine them to benefit from their various advantages. Thus, the invention is not limited to the type of transform used (for example, DCT, wavelet transform, DFT, etc.). Likewise, the scanning order of the pixels can vary (for example, raster scan, zigzag, etc.). Furthermore, the invention is in no way limited by the way in which the energy level is calculated (for example, SSE, SAD, Max, etc.).
The invention applies to the coding of still images or of image sequences.

Claims (14)

1. A method for coding a block of pixels, called the current block, by spatial prediction, comprising the following steps:
- determining (10) a predicted pixel for each pixel of the current block, by thresholding with a current threshold the coefficients of a transform applied on a window covering at least said pixel of the current block, and by applying the inverse transform to the thresholded coefficients;
- extracting (12) from the current block a prediction block formed from the predicted pixels, so as to generate a residual block; and
- coding (14) said residual block,
the method being characterized in that the current threshold is determined or coded from reconstructed pixels neighbouring the current block.
2. The method according to claim 1, wherein the step of determining a predicted pixel for each pixel of the current block is repeated with each threshold of a plurality of thresholds, the method further comprising selecting, from among the plurality of thresholds, as the current threshold the threshold that minimizes a prediction error calculated between the current block and the prediction block.
3. The method according to claim 2, wherein the current threshold is coded by a difference with a prediction threshold, the prediction threshold depending on the reconstructed pixels neighbouring the current block.
4. The method according to claim 3, wherein the prediction threshold is equal to the mean value of the thresholds used in the blocks neighbouring the current block.
5. The method according to claim 3, wherein the prediction threshold is equal to the median value of the thresholds used in the blocks neighbouring the current block.
6. The method according to claim 3, wherein the prediction threshold is determined according to the following steps:
- determining a predicted pixel for each reconstructed pixel in a neighbourhood of the current block, by thresholding with a threshold the coefficients of a transform applied on a window covering at least said reconstructed pixel, and by applying the inverse transform to the thresholded coefficients;
- repeating the step of determining a predicted pixel for each pixel of said neighbourhood with each threshold of a plurality of thresholds; and
- selecting, from among the plurality of thresholds, as the prediction threshold the threshold that minimizes a prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding predicted pixels.
7. The method according to claim 1, wherein the current threshold is determined according to the following steps:
- determining a predicted pixel for each reconstructed pixel in a neighbourhood of the current block, by thresholding with a threshold the coefficients of a transform applied on a window covering at least said reconstructed pixel, and by applying the inverse transform to the thresholded coefficients;
- repeating the step of determining a predicted pixel for each pixel of said neighbourhood with each threshold of a plurality of thresholds; and
- selecting, from among the plurality of thresholds, as the current threshold the threshold that minimizes a prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding predicted pixels.
8. The method according to claim 6 or 7, wherein, for each reconstructed pixel of the neighbourhood of the current block, the prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding predicted pixels takes into account their distance with respect to the edge of the current block.
9. The method according to one of claims 2 to 8, wherein the current threshold is selected for a current block of size 8 × 8, and a current threshold is calculated for each block of size 4 × 4 of the current block by multiplying the selected current threshold by a coefficient α strictly less than 1.
10. The method according to one of claims 1 to 5, wherein the size of the window depends on the position in the current block of the pixel to be predicted.
11. A method for decoding a current block of pixels by spatial prediction, comprising the following steps:
- decoding (20) a residual block;
- determining (22) a predicted pixel for each pixel of the current block, by thresholding with a current threshold the coefficients of a transform applied on a window covering at least said pixel of the current block, and by applying the inverse transform to the thresholded coefficients; and
- reconstructing (24) the current block by merging the decoded residual block and a prediction block formed from the predicted pixels,
the method being characterized in that the current threshold is determined from reconstructed pixels neighbouring the current block.
12. The method according to claim 11, wherein the current threshold is determined according to the following steps:
- determining a predicted pixel for each reconstructed pixel in a neighbourhood of the current block, by thresholding with a threshold the coefficients of a transform applied on a window covering at least said reconstructed pixel, and by applying the inverse transform to the thresholded coefficients;
- repeating the step of determining a predicted pixel for each pixel of said neighbourhood with each threshold of a plurality of thresholds; and
- selecting, from among the plurality of thresholds, as the current threshold the threshold that minimizes a prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding predicted pixels.
13. The method according to claim 11, further comprising the following steps:
- decoding a threshold difference;
- determining a prediction threshold from reconstructed pixels neighbouring the current block; and
- calculating the sum of said difference and said prediction threshold, said sum being the current threshold.
14. The method according to claim 13, wherein the prediction threshold is determined according to the following steps:
- determining a predicted pixel for each reconstructed pixel in a neighbourhood of the current block, by thresholding with a threshold the coefficients of a transform applied on a window covering at least said reconstructed pixel, and by applying the inverse transform to the thresholded coefficients;
- repeating the step of determining a predicted pixel for each pixel of said neighbourhood with each threshold of a plurality of thresholds; and
- selecting, from among the plurality of thresholds, as the prediction threshold the threshold that minimizes a prediction error calculated between the reconstructed pixels of the neighbourhood of the current block and the corresponding predicted pixels.
CN201180007027.8A 2010-01-25 2011-01-19 Method for coding and decoding an image block Expired - Fee Related CN102726045B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR1050466 2010-01-25
FR1050466A FR2955730A1 (en) 2010-01-25 2010-01-25 CODING AND DECODING METHODS
PCT/EP2011/050693 WO2011089158A1 (en) 2010-01-25 2011-01-19 Spatial prediction technique for video coding

Publications (2)

Publication Number Publication Date
CN102726045A true CN102726045A (en) 2012-10-10
CN102726045B CN102726045B (en) 2016-05-04

Family

ID=43415367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201180007027.8A Expired - Fee Related CN102726045B (en) Method for coding and decoding an image block

Country Status (8)

Country Link
US (1) US9363514B2 (en)
EP (1) EP2529552A1 (en)
JP (1) JP5715647B2 (en)
KR (1) KR101819762B1 (en)
CN (1) CN102726045B (en)
BR (1) BR112012017865A2 (en)
FR (1) FR2955730A1 (en)
WO (1) WO2011089158A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110225356A (en) * 2013-04-08 2019-09-10 Ge视频压缩有限责任公司 Multiple view decoder
US11677966B2 (en) 2013-01-04 2023-06-13 Ge Video Compression, Llc Efficient scalable coding concept

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2955730A1 (en) * 2010-01-25 2011-07-29 Thomson Licensing CODING AND DECODING METHODS
US11172215B2 (en) * 2018-10-08 2021-11-09 Qualcomm Incorporated Quantization artifact suppression and signal recovery by the transform domain filtering
JP7125330B2 (en) 2018-11-08 2022-08-24 松本システムエンジニアリング株式会社 Self-propelled logging machine

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090238276A1 (en) * 2006-10-18 2009-09-24 Shay Har-Noy Method and apparatus for video coding using prediction data refinement

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW366669B (en) 1996-10-30 1999-08-11 Matsushita Electric Ind Co Ltd Picture encoding device and picture encoding method, picture decoding device and picture decoding method, and data recording media
US6680974B1 (en) * 1999-12-02 2004-01-20 Lucent Technologies Inc. Methods and apparatus for context selection of block transform coefficients
JPWO2008012918A1 (en) 2006-07-28 2009-12-17 株式会社東芝 Image encoding and decoding method and apparatus
JP4786623B2 (en) * 2007-09-25 2011-10-05 Kddi株式会社 Moving picture encoding apparatus and moving picture decoding apparatus
EP2223527A1 (en) * 2007-12-21 2010-09-01 Telefonaktiebolaget LM Ericsson (publ) Adaptive intra mode selection
PL2288163T3 (en) * 2008-05-07 2015-11-30 Lg Electronics Inc Method and apparatus for decoding video signal
US8446949B2 (en) * 2008-06-23 2013-05-21 Sungkyunkwan University Foundation For Corporate Collaboration Distributed coded video decoding apparatus and method capable of successively improving side information on the basis of reliability of reconstructed data
KR101458471B1 (en) * 2008-10-01 2014-11-10 에스케이텔레콤 주식회사 Method and Apparatus for Encoding and Decoding Vedio
CN102210153A (en) * 2008-10-06 2011-10-05 Lg电子株式会社 A method and an apparatus for processing a video signal
TWI566586B (en) * 2009-10-20 2017-01-11 湯姆生特許公司 Method for coding a block of a sequence of images and method for reconstructing said block
FR2955730A1 (en) * 2010-01-25 2011-07-29 Thomson Licensing CODING AND DECODING METHODS


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GULERYUZ O G: "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising - Part I: Theory", IEEE Transactions on Image Processing, vol. 15, no. 3, March 2006, pages 539-554, XP002525251, IEEE, Piscataway, NJ, US *
GULERYUZ O G: "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising - Part II: Adaptive Algorithms", IEEE Transactions on Image Processing, vol. 15, no. 3, March 2006, pages 555-571, XP002525252, IEEE, Piscataway, NJ, US *
MARTIN A ET AL: "Atomic decomposition dedicated to AVC and spatial SVC prediction", Proceedings of the 15th International Conference on Image Processing (ICIP 2008), 12 October 2008, pages 2492-2495, XP031374546, IEEE, Piscataway, NJ, US *
MARTIN A ET AL: "Phase refinement for image prediction based on sparse representation", Proceedings of the SPIE, vol. 7543, 19 January 2010, pages 1-8, XP002596730, SPIE, Bellingham, VA, US *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11677966B2 (en) 2013-01-04 2023-06-13 Ge Video Compression, Llc Efficient scalable coding concept
CN110225356A (en) * 2013-04-08 2019-09-10 Ge视频压缩有限责任公司 Multiple view decoder
US11582473B2 (en) 2013-04-08 2023-02-14 Ge Video Compression, Llc Coding concept allowing efficient multi-view/layer coding
CN110225356B (en) * 2013-04-08 2024-02-13 Ge视频压缩有限责任公司 multi-view decoder

Also Published As

Publication number Publication date
CN102726045B (en) 2016-05-04
WO2011089158A1 (en) 2011-07-28
JP2013518456A (en) 2013-05-20
KR20120118466A (en) 2012-10-26
US9363514B2 (en) 2016-06-07
US20130195181A1 (en) 2013-08-01
EP2529552A1 (en) 2012-12-05
BR112012017865A2 (en) 2016-04-19
JP5715647B2 (en) 2015-05-13
KR101819762B1 (en) 2018-01-17
FR2955730A1 (en) 2011-07-29

Similar Documents

Publication Publication Date Title
RU2551794C2 (en) Method and apparatus for image encoding and decoding using large transformation unit
KR101354151B1 (en) Method and apparatus for transforming and inverse-transforming image
CN102763410B (en) To the method that the bit stream using oriented conversion to generate is decoded
CN104822063A (en) Compressed sensing video reconstruction method based on dictionary learning residual-error reconstruction
CN101268475B (en) classified filtering for temporal prediction
US8285064B2 (en) Method for processing images and the corresponding electronic device
CN105187829A (en) Apparatus And Method For Decoding Or Encoding Transform Coefficient Blocks
CN103329522A (en) Method for coding videos using dictionaries
CN105284111A (en) Dynamic-image coding device, dynamic-image decoding device, dynamic-image coding method, dynamic-image decoding method, and program
CN102714721A (en) Method for coding and method for reconstruction of a block of an image
CN102726045A (en) Spatial prediction technique for video coding
CN102752596A (en) Rate distortion optimization method
CN101268477B (en) Multi-staged linked process for adaptive motion vector sampling in video compression
CN102918838B (en) The coding method of a block of image sequence and reconstructing method
KR101845622B1 (en) Adaptive rdpcm method for video coding, video encoding method based on adaptive rdpcm and video decoding method based on adaptive rdpcm
CN103139563A (en) Method for coding and reconstructing a pixel block and corresponding devices
JP5931747B2 (en) Method for coding and decompressing blocks of an image sequence
Naqvi et al. Sparse representation of image and video using easy path wavelet transform
Naqvi et al. Adaptive geometric wavelet transform based two dimensional data compression
CN103002279B (en) Encode the method and its corresponding device of simultaneously reconstructed pixel block
JP2013017128A (en) Intra-prediction mode estimation device, image encoder, image decoder and program
RU2575868C2 (en) Method and apparatus for image encoding and decoding using large transformation unit
Medvedeva et al. Motion compensation method for video encoding
Hussien et al. DWT based-video compression using (4SS) matching algorithm
JP2023117786A (en) Encoding device, program, and model generation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20190131

Address after: Paris France

Patentee after: International Digital Madison Patent Holding Co.

Address before: Issy-les-Moulineaux, France

Patentee before: THOMSON LICENSING

Effective date of registration: 20190131

Address after: Issy-les-Moulineaux, France

Patentee after: THOMSON LICENSING

Address before: Issy-les-Moulineaux, France

Patentee before: THOMSON LICENSING

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160504

Termination date: 20200119