GB2503658A - Method of harmonising the IPCM and Transquant modes in HEVC standard by harmonising the signalling of the two coding modes. - Google Patents


Info

Publication number
GB2503658A
GB2503658A (application GB1211616.6A; also published as GB201211616D0 and GB2503658B)
Authority
GB
United Kingdom
Prior art keywords
modes
syntax element
loop filtering
pixels
encoded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1211616.6A
Other versions
GB201211616D0 (en)
GB2503658B (en)
Inventor
Edouard Francois
Guillaume Laroche
Patrice Onno
Shimo Masato
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1211616.6A priority Critical patent/GB2503658B/en
Publication of GB201211616D0 publication Critical patent/GB201211616D0/en
Publication of GB2503658A publication Critical patent/GB2503658A/en
Application granted granted Critical
Publication of GB2503658B publication Critical patent/GB2503658B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/127Prioritisation of hardware or computational resources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method for improving the coding efficiency and reducing the complexity of a video encoder and a video decoder by harmonising two coding modes (such as the Intra Pulse Code Modulation/IPCM mode and the Transquant/lossless mode) of coding units resulting from the partitioning of an area of pixels. The coding mode is selected as one of at least two modes that do not use quantisation and transform coding, with one of the modes (such as the Transquant/lossless mode) allowing pixel prediction. An associated syntax element (such as a flag) is encoded in the video bitstream which represents the coding unit (CU) sizes for which the two modes are selectable. Alternative methods are also disclosed in which a syntax element indicates whether loop filtering is enabled or whether an area of pixels is encoded according to one of the coding modes.

Description

METHOD AND DEVICE FOR ENCODING AND DECODING DATA IN A
VIDEO ENCODER AND A VIDEO DECODER
The invention relates to video coding and decoding. More precisely, the present invention relates to the coding and decoding of a set of parameters in the upcoming ITU-T/MPEG video standard, HEVC (High Efficiency Video Coding).
The HEVC standard defines two specific modes, named the IPCM (Intra Pulse Code Modulation) mode and the Lossless mode, also called the "Transquant" mode.
IPCM mode is a first specific INTRA mode consisting of skipping the prediction, quantization, transform and entropy coding steps. This mode is relevant for blocks for which compression turns out to be more costly in rate than applying no compression at all. In IPCM mode, the loop filtering process (i.e. the deblocking filter, the Sample Adaptive Offset (SAO) and the Adaptive Loop Filter) can be applied, depending on the value of a sequence-level syntax element.
The Lossless (Transquant) mode is a second specific encoding mode that skips the quantization and transform steps and the loop filtering. For the sake of simplicity, we will refer to this lossless mode as the Transquant mode in the following.
In this invention, we propose to harmonize the use of these two modes in order to reduce the decoding and coding complexity and to improve the coding efficiency. As will be seen later, the modifications of the decoding process will induce modifications of the encoding process in order to allow an encoder to generate a video bitstream consistent with the decoding process. In addition, the modifications of the decoding process will also induce modifications of the syntax of the bitstream.
Figure 1 shows the coding structure used in HEVC. According to HEVC, the original video sequence 101 is a succession of digital images. As is known per se, a digital image is represented by one or more matrices, the coefficients of the matrices being pixels.
The images 102 are divided into slices 103. A slice is a part of the image or the entire image. In HEVC these slices are divided into non-overlapping Largest Coding Units (LCUs), also called Coding Tree Blocks (CTB) 104, generally areas of pixels of size 64 pixels x 64 pixels. Each CTB may in its turn be iteratively divided into smaller variable-size Coding Units (CUs) 105 using a quadtree decomposition. Coding units are the elementary coding elements and are constituted of two sub-units: the Prediction Units (PU) and the Transform Units (TU), of maximum size equal to the CU's size. A Prediction Unit corresponds to the partition of the CU for the prediction of pixel values. Each CU can be further partitioned into a maximum of 4 square Prediction Units or 2 symmetric rectangular Prediction Units 106, or into asymmetric partitions (not represented).
Transform units are used to represent the elementary units that are spatially transformed with the DCT. A CU can be partitioned into TUs using a quadtree representation.
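The recursive quadtree decomposition of a CTB into CUs described above can be sketched in a few lines. This is an illustrative Python sketch only; the split_decision callback is a hypothetical stand-in for the encoder's rate-distortion test, which is not detailed in the text.

```python
def split_ctb(x, y, size, split_decision, min_size=8):
    """Recursively partition a CTB into CUs using a quadtree.

    split_decision(x, y, size) -> bool stands in for the encoder's
    rate-distortion criterion.  Returns a list of (x, y, size) leaf CUs.
    """
    if size > min_size and split_decision(x, y, size):
        half = size // 2
        cus = []
        for dy in (0, half):          # visit the four quadrants
            for dx in (0, half):
                cus.extend(split_ctb(x + dx, y + dy, half,
                                     split_decision, min_size))
        return cus
    return [(x, y, size)]             # leaf CU: no further split
```

For example, a decision function that splits only at the 64x64 level yields four 32x32 CUs.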
Each slice is embedded in one NAL (Network Abstraction Layer) unit, which is a kind of container for transporting video data on a communication network. In addition, the coding parameters of the video sequence are stored in dedicated NAL units called parameter sets. In HEVC and H.264/AVC, three kinds of parameter set NAL units dedicated to the coding parameters are employed. First, the Sequence Parameter Set (SPS) NAL unit gathers all parameters that are unchanged during the whole video sequence. Typically, it handles the coding profile, the size of the video frames and other parameters.
Second, the Picture Parameter Set (PPS) codes the different values that may change from one frame to another. HEVC also includes Adaptation Parameter Sets (APS), which contain parameters that may change from one slice to another.
Figure 2 shows a diagram of a classical HEVC video encoder 20 that can be considered as a superset of one of its predecessors (H.264/AVC).
Each frame of the original video sequence 101 is first divided into a grid of CTBs during stage 201. This stage also controls the definition of slices. In general, two methods define slice boundaries: either a fixed number of CUs per slice or a fixed number of bytes is used.
The subdivision of the CTB into CUs and the partitioning of the CUs into TUs and PUs are determined based on a rate-distortion criterion. Each PU of the CU is spatially predicted by an "Intra" predictor 217, or temporally by an "Inter" predictor 218. Each predictor is a block of pixels generated from data of the same image or from another image, from which a difference block (or "residual") is derived. Thanks to the identification of the predictor area and the coding of the residual, it is possible to reduce the quantity of information actually to be encoded.
The encoded frames are of two types: temporally predicted frames (either predicted from one reference frame (P-frames) or predicted from two reference frames (B-frames)) and non-temporally predicted frames (called Intra frames or I-frames). In I-frames, only Intra prediction is considered for coding CUs/PUs. In P-frames and B-frames, Intra and Inter prediction are considered for coding CUs/PUs.
In the "Intra" prediction processing module 217, the current PU is predicted by means of an "Intra" predictor, a block of pixels constructed from the information already encoded in the current image. The module 202 determines an angular prediction direction that is used to predict pixels from the neighboring reconstructed pixels. In HEVC, up to 34 directions are considered. A residual block is obtained by computing the difference between the intra predictor block and the current block of pixels to encode. An intra-predicted block is therefore composed of a prediction direction and a residual. The coding of the intra prediction direction is inferred from the prediction directions of the neighboring prediction units. This inferring process (203) of the prediction direction makes it possible to reduce the coding rate of the intra prediction direction mode. The intra prediction processing module thus uses the spatial dependencies of the frame both to predict the pixels and to infer the intra prediction direction of the prediction unit.
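The computation of the intra residual from neighboring reconstructed pixels can be illustrated with a minimal sketch. This is illustrative Python only: a single DC mode is shown, the 34 angular directions being omitted for brevity.

```python
import numpy as np

def intra_dc_predict(top, left):
    """DC intra prediction: fill the predictor block with the mean of
    the reconstructed neighbor pixels (one of HEVC's intra modes;
    angular modes are omitted in this sketch)."""
    dc = int(round((top.sum() + left.sum()) / (len(top) + len(left))))
    n = len(top)
    return np.full((n, n), dc, dtype=np.int64)

def intra_residual(block, top, left):
    """Residual transmitted for an intra-coded block: the difference
    between the current block and its intra predictor."""
    return block - intra_dc_predict(top, left)
```

At the decoder, the same predictor is rebuilt from the already-decoded boundary pixels and the residual is added back.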
With regard to the second processing module 218, that is "Inter" coding, two prediction types are possible: mono-prediction (P-type) consists in predicting a block by referring to one reference block from one reference picture; bi-prediction (B-type) consists in predicting a block by referring to two reference blocks from one or two reference pictures. An estimation of motion 204 between a current PU to encode and the reference images 215 is made in order to identify, in one or several of these reference images, one (P-type) or several (B-type) blocks of pixels to be used as predictors of this current block.
In the case where several block predictors are used (B-type), they are merged to generate one single prediction block. The reference images are images in the video sequence that have already been coded and then reconstructed (by decoding).
The reference block is identified in the reference frame by a motion vector (MV) that is equal to the displacement between the PU to encode in the current frame and the reference block. The next stage (205) of the inter prediction process consists in computing the difference between the predictor block and the current block to encode. This block of differences is the residual of the inter-predicted block. At the end of the inter prediction process the current PU is composed of at least one motion vector and a residual.
In order to benefit from the spatial dependencies of movement between neighboring PUs, HEVC provides a method to predict the motion vectors of each PU. Several motion vector predictors are employed: typically, the motion vectors of the PUs located above, to the left and at the top-left corner of the current PU form a first set of spatial predictors. A temporal motion vector candidate is also used. One of the predictors is selected based on a criterion that minimizes the difference between the MV predictor and that of the current PU to encode.
In HEVC, this process is referred to as AMVP (Advanced Motion Vector Prediction).
Finally, the current PU's motion vector is coded (206) with an index that identifies the predictor within the set of candidates and the MV difference (MVD) between the PU's MV and the selected MV candidate.
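The AMVP predictor selection and MVD coding just described can be sketched as follows. This is an illustrative Python sketch: the L1 cost used to pick the candidate is an assumption of this sketch, the precise selection criterion being left open in the text.

```python
def amvp_encode(mv, candidates):
    """Pick the candidate closest to the PU's motion vector and signal
    (index, MVD).  The candidate list mirrors the spatial/temporal
    predictors described above; the L1 cost is illustrative."""
    def cost(c):
        return abs(mv[0] - c[0]) + abs(mv[1] - c[1])
    idx = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    mvd = (mv[0] - candidates[idx][0], mv[1] - candidates[idx][1])
    return idx, mvd

def amvp_decode(idx, mvd, candidates):
    """Rebuild the motion vector from the signalled index and MVD."""
    px, py = candidates[idx]
    return (px + mvd[0], py + mvd[1])
```

Encoding then decoding with the same candidate list recovers the original motion vector exactly.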
The results of these two types of coding are compared in a module 216 for selecting the best coding mode.
The residual obtained at the end of the inter or intra prediction process is then transformed (207). The transform applies to a Transform Unit (TU) that is included in a CU. A TU can be further split into smaller TUs using a so-called Residual QuadTree (RQT) decomposition. In HEVC, generally 2 or 3 levels of decomposition are used and the authorized transform sizes are 32x32, 16x16, 8x8 and 4x4. The transform basis is derived from the discrete cosine transform (DCT).
The transformed residual coefficients are then quantized (208). The coefficients of the quantized transformed residual are then coded by means of entropy coding (209) and inserted in the compressed bitstream 210.
Coding syntax elements are also coded with the help of stage 209. This processing module uses spatial dependencies between syntax elements to increase the coding efficiency. Note that, as described above, the compressed video bitstream also comprises parameter sets such as SPS, PPS and APS.
In order to calculate the "Intra" predictors or to make an estimation of the motion for the "Inter" predictors, the encoder performs a decoding of the blocks already encoded by means of a so-called "decoding" loop (211, 212, 213, 214, 220, 215). This decoding loop makes it possible to reconstruct the blocks and images from the quantized transformed residuals.
Thus the quantized transformed residual is inverse quantized (211) by applying the inverse of the quantization performed at step 208, and reconstructed (212) by applying the inverse of the transform of step 207.
If the residual comes from an "Intra" coding 217, the used "Intra" predictor is added to this residual in order to recover a reconstructed block corresponding to the original block modified by the losses resulting from the quantization operation.
If the residual on the other hand comes from an "Inter" coding 218, the blocks pointed to by the current motion vectors (these blocks belong to the reference images 215 referred to by the current reference image indices) are merged and then added to this decoded residual. In this way a reconstructed block is recovered, corresponding to the original block modified by the losses resulting from the quantization operations.
A final loop filtering processing module 219 is applied to the reconstructed signal in order to reduce the effects created by the heavy quantization of the residuals obtained and to improve the signal quality. In the current HEVC standard, 3 types of loop filters are used: the deblocking filter, the sample adaptive offset (SAO) and the adaptive loop filter (ALF). The parameters of the filters are coded in the video bitstream. In the current version of HEVC (HM7.0), which is still subject to evolution, the deblocking filter parameters are encoded in slice headers, the SAO parameters associated with a CTB are encoded along with this CTB, and the ALF parameters are encoded in Adaptation Parameter Sets (APS).
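One of these loop filters, SAO, can be illustrated with a minimal band-offset sketch. This is illustrative Python only: the band classification and the signaling of the offsets are heavily simplified with respect to the standard.

```python
import numpy as np

def sao_band_offset(samples, offsets, bit_depth=8):
    """Minimal sketch of SAO band-offset filtering: the sample range
    is split into 32 equal bands and a per-band offset (signalled in
    the bitstream) is added to every sample in the band."""
    shift = bit_depth - 5                       # 2^5 = 32 bands
    bands = samples >> shift                    # band index per sample
    out = samples + np.take(offsets, bands)     # add the band's offset
    return np.clip(out, 0, (1 << bit_depth) - 1)
```

With all offsets zero the filter is the identity; a non-zero offset shifts only the samples falling in its band.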
The filtered images, also called reconstructed images, are then stored as reference images 215 in order to allow the subsequent "Inter" predictions to take place during the compression of the following images of the current video sequence.
Figure 3 shows a global diagram of a classical video decoder 30 of HEVC type. The decoder 30 receives as an input a video bitstream 210 corresponding to a video sequence 101 compressed by an encoder of the HEVC type, like the one in figure 2.
During the decoding process, the video bitstream 210 is first of all parsed with the help of the entropy decoding module 301. This processing module uses the previously entropy-decoded elements to decode the encoded data. It decodes in particular the parameter sets of the video sequence in order to initialize the decoder. Each NAL unit that corresponds to a slice is then decoded, this NAL unit comprising the CTBs of a video frame.
The partition of the CTB is parsed and the CU, PU and TU subdivisions are identified. The processing of each CU is then performed with the help of the intra (307) and inter (306) processing modules, the inverse quantization (211) and inverse transform (212) modules and finally the loop filter processing module (219).
The "Inter" or "Intra" coding mode for the current block is parsed from the bitstream 210 with the help of the parsing process module 301. Depending on the coding mode, either the intra prediction processing module 307 or the inter prediction processing module 306 is employed. If the coding mode of the current block is of "Intra" type, the prediction direction is extracted from the bitstream and decoded with the help of the neighbors' prediction directions during stage 302 of the intra prediction processing module 307. The intra predictor block is then computed (303) with the decoded prediction direction and the already decoded pixels at the boundaries of the current PU.
If the coding mode of the current block indicates that this block is of "Inter" type, the motion information is extracted from the bitstream and decoded (304). The AMVP process is performed during step 304. The obtained motion vector predictor is used in the reverse motion compensation module 305 in order to determine the "Inter" predictor block contained in the reference images 215 of the decoder 30. In a similar manner to the encoder, these reference images 215 are composed of images that precede the image currently being decoded and that have been reconstructed from the bitstream (and therefore decoded previously).
The next decoding step consists in decoding the residual block that has been transmitted in the bitstream. The parsing module 301 extracts the residual coefficients from the bitstream and performs successively the inverse quantization (211) and the inverse transform (212) to obtain the residual block. This residual block is added to the predictor block obtained at the output of the intra or inter processing module.
At the end of the decoding of all the blocks of the current image, the loop filter processing module 219 is used to eliminate the block effects and improve the signal quality in order to obtain the reference images 215. As done at the encoder, this processing module employs the deblocking filter 213, then the SAO filter 220 and finally the ALF 214.
The images thus decoded constitute the output video signal 308 of the decoder, which can then be displayed and used.
In the current HEVC decoder specification, two particular modes are added to what can be considered as the normal modes, i.e. modes involving entropy decoding, inverse quantization, inverse transform, prediction and loop filtering.
Figure 4 compares these two particular modes to the normal mode. The diagram 400 depicts the normal case for the reconstructed samples of a CU.
After having decoded the bitstream through the entropy decoder, two types of data are reconstructed: the prediction data and the residual data. In the Inter mode presented in module 218 of Figure 2, the prediction data traditionally represent the motion vectors that are used in the "Pred" box to create a prediction of the samples for the current CU by motion compensation. For the Intra mode corresponding to module 217 of Figure 2, the prediction corresponds to the prediction of the samples based on the neighboring pixels of the top and left previously decoded CUs. The residual data is generated by applying an inverse quantization ("Inv Q") followed by an inverse transform ("Inv T"). The residual data and the prediction data are added to produce the reconstructed data. Loop filtering is then applied to correct some artefacts or enhance the video quality of the video sequence. The loop filtering process corresponds to the Sample Adaptive Offset (SAO) method, the Adaptive Loop Filter (ALF) method or the deblocking filter method.
The diagram 410 depicts the Transquant mode. In that mode the inverse quantization and the inverse transform are skipped. The principle of this mode is similar to the normal mode where the data coming from the bitstream are decoded to produce prediction data and the residual data. The residual data do not go through the inverse quantization and the inverse transform steps. The residual data is then added to the prediction data to produce the final data. In the current specification, no loop filtering is applied for the Transquant mode.
This mode is enabled at the picture level in the PPS syntax element. If a CU is coded with the Transquant mode, a flag named "cu_transquant_bypass_flag" is set to "1" in the bitstream portion corresponding to the CU. Otherwise it is set to "0". The decoding of this flag is represented by the reference 601 of Figure 6, which represents the pseudo code corresponding to the decoding of a Coding Unit.
The diagram 420 illustrates the IPCM mode, where the data encoded in the bitstream correspond to the pixel sample values of the Coding Unit. This mode is generally used when the pixel samples of the CU correspond to some noise or unpredictable texture and it is preferable to directly encode the raw data instead of performing the transform and quantization steps. If a 2Nx2N PU is coded with the "IPCM" mode, a flag named "pcm_flag" is set to "1" in the bitstream portion corresponding to the prediction unit. Otherwise it is set to "0". The decoding of this flag is represented by the reference 501 of Figure 5, which represents the pseudo code corresponding to the decoding of the syntax of a Prediction Unit. Loop filtering can be applied to blocks using the IPCM mode depending on a flag. This loop filter control is enabled at the sequence level, more precisely in the SPS syntax element, by setting the flag "pcm_loop_filter_disable_flag" to "0".
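The three reconstruction paths of diagrams 400, 410 and 420 can be contrasted in a short sketch. This is illustrative Python only: the inverse transform is modelled as the identity and quantization as a single scalar step, a strong simplification of the HEVC integer transforms.

```python
import numpy as np

def reconstruct(mode, payload, predictor=None, qstep=1):
    """Sketch of the three reconstruction paths of Figure 4."""
    if mode == "normal":        # diagram 400: Inv Q -> Inv T -> + pred
        residual = payload * qstep          # scalar inverse quantization
        return predictor + residual         # (identity inverse transform)
    if mode == "transquant":    # diagram 410: Inv Q / Inv T skipped
        return predictor + payload          # lossless residual
    if mode == "ipcm":          # diagram 420: raw samples in bitstream
        return payload.copy()
    raise ValueError(mode)
```

Note that the Transquant path still uses prediction and is lossless, while the IPCM path bypasses prediction entirely and copies the raw samples.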
The following Table 1 summarizes the main differences between the IPCM mode and the Transquant mode. In this table, the acronym "SE" stands for "Syntax Element".
Type              IPCM mode                         Transquant mode
Activation        SPS                               PPS
Granularity       2Nx2N PU                          CU
Size restriction  min/max size in SPS               No restriction
Location          first SE of intra PU              first SE of CU
Loop filter       On/Off flag in SPS                Bypassed
Transform         Bypassed                          Bypassed
Quantization      Bypassed                          Bypassed
Prediction        No                                Intra / Inter
Entropy coding    Skipped: samples directly coded   CABAC
                  on x bits, x being the bit
                  depth of the samples
Table 1: Table summarizing the differences between the two modes.
Figures 5 and 6 show, in the form of tables, pseudo code allowing a decoder to decode the syntax of a Prediction Unit and a Coding Unit as provided in the current HEVC specification. Syntax elements to be decoded from the bitstream are represented in bold characters. The right part of the table describes how a given syntax element has to be decoded from the bitstream. Here are the definitions of the different descriptors, which are well known to a person skilled in the art:
-ae(v): context-adaptive arithmetic entropy-coded (CABAC) syntax element.
-se(v): signed integer Exp-Golomb-coded syntax element with the left bit first.
-tu(v): truncated unary coded syntax element with left bit first.
-u(n): unsigned integer using n bits. When n is "v" in the syntax table, the number of bits varies in a manner dependent on the value of other syntax elements. The parsing process for this descriptor is specified by the return value of the function read_bits(n) interpreted as a binary representation of an unsigned integer with the most significant bit written first.
-ue(v): unsigned integer Exp-Golomb-coded syntax element with the left bit first.
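The ue(v) and se(v) descriptors can be illustrated with a small Exp-Golomb decoder. This is a Python sketch for illustration; the string-of-bits interface is a simplification of the actual bitstream reading process.

```python
def read_ue(bits, pos=0):
    """Decode an unsigned Exp-Golomb code ue(v) from a string of '0'/'1'.

    n leading zeros, a '1' marker, then n info bits:
    codeNum = 2**n - 1 + info.  Returns (value, next position)."""
    n = 0
    while bits[pos + n] == "0":
        n += 1
    pos += n + 1                      # skip the zeros and the '1' marker
    suffix = int(bits[pos:pos + n], 2) if n else 0
    return (1 << n) - 1 + suffix, pos + n

def read_se(bits, pos=0):
    """Decode a signed Exp-Golomb code se(v): the ue(v) codeNum k is
    mapped to the sequence 0, 1, -1, 2, -2, ..."""
    k, pos = read_ue(bits, pos)
    value = (k + 1) // 2 if k % 2 else -(k // 2)
    return value, pos
```

For example, the codewords "1", "010" and "00100" decode under ue(v) to 0, 1 and 3 respectively.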
Figure 14 gives a simplified flowchart of the Prediction Unit syntax decoding, also represented in pseudo code in figure 5. The value of skip_flag is checked in (1401). If skip_flag is true, the syntax element merge_idx is decoded (1402). Otherwise, an equality check of the prediction mode predMode to INTRA is done (1403). If predMode is not INTRA, the inter data are decoded (1404). Otherwise, a new check is performed to verify that the PU is 2Nx2N (the PU and the CU have the same size), that pcm_enabled_flag is true and that the CU size is in the authorized range of sizes for IPCM CUs (1405). If this check is true, pcm_flag is decoded (1406). Otherwise pcm_flag is inferred to be 0 (false). The final process is the decoding of the remaining intra data of the PU (1407).
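The flow of Figure 14 can be transcribed as follows. This is an illustrative Python sketch: StubReader and the ctx dictionary are hypothetical stand-ins for the entropy decoder and the decoding context, not HEVC APIs.

```python
class StubReader:
    """Hypothetical bitstream reader: decode(name) returns a canned value."""
    def __init__(self, values):
        self.values = dict(values)
    def decode(self, name):
        return self.values[name]

def decode_prediction_unit(bs, ctx):
    """Flow of Figure 14; ctx carries the already-known decoding state."""
    if ctx["skip_flag"]:                                  # 1401
        return {"merge_idx": bs.decode("merge_idx")}      # 1402
    if ctx["predMode"] != "INTRA":                        # 1403
        return {"inter_data": bs.decode("inter_data")}    # 1404
    pu = {"pcm_flag": 0}                                  # inferred default
    if (ctx["part_is_2Nx2N"] and ctx["pcm_enabled_flag"]  # 1405
            and ctx["min_ipcm_size"] <= ctx["cu_size"] <= ctx["max_ipcm_size"]):
        pu["pcm_flag"] = bs.decode("pcm_flag")            # 1406
    pu["intra_data"] = bs.decode("intra_data")            # 1407
    return pu
```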
Figure 15 gives a simplified flowchart of the Coding Unit syntax decoding, also represented in pseudo code in figure 6. First the CU address is computed (1501). Then the value of the flag transquant_bypass_enable_flag is checked (1502). If the flag is true, the flag cu_transquant_bypass_flag is decoded (1503). Otherwise cu_transquant_bypass_flag is inferred to be 0 (false). The next process checks if the slice type is INTRA (1504). If the slice type is not INTRA, skip_flag is decoded (1505). Otherwise skip_flag is inferred to be equal to 0 (false). Then the value of skip_flag is checked (1506). If skip_flag is true, the call to the prediction unit decoding function is made (1507).
Otherwise, pred_mode_flag or part_mode is decoded (1508). Then it is checked whether part_mode is 2Nx2N (1509) (i.e. we check here whether the CU has been split into several parts or not). If this is true, there is only one PU in the CU and therefore one call to the prediction unit decoding function is made (1510). Otherwise, there is more than one PU in the CU and therefore as many calls to the prediction unit decoding function as there are PUs are made (1511). Finally the remaining CU data are decoded (1512).
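Likewise, the flow of Figure 15 can be transcribed in a few lines. This is an illustrative Python sketch: StubReader, the ctx dictionary and the decode_pu callable are hypothetical stand-ins, not HEVC APIs.

```python
class StubReader:
    """Hypothetical bitstream reader: decode(name) returns a canned value."""
    def __init__(self, values):
        self.values = dict(values)
    def decode(self, name):
        return self.values[name]

def decode_coding_unit(bs, ctx, decode_pu):
    """Flow of Figure 15; decode_pu abstracts the prediction unit
    decoding function called in steps 1507/1510/1511 (the CU address
    computation of step 1501 is omitted)."""
    cu = {"cu_transquant_bypass_flag": 0, "skip_flag": 0}  # inferred defaults
    if ctx["transquant_bypass_enable_flag"]:               # 1502
        cu["cu_transquant_bypass_flag"] = bs.decode("cu_transquant_bypass_flag")  # 1503
    if ctx["slice_type"] != "INTRA":                       # 1504
        cu["skip_flag"] = bs.decode("skip_flag")           # 1505
    if cu["skip_flag"]:                                    # 1506
        cu["pus"] = [decode_pu(bs)]                        # 1507
        return cu
    bs.decode("pred_mode_flag_or_part_mode")               # 1508
    if ctx["part_mode"] == "2Nx2N":                        # 1509
        cu["pus"] = [decode_pu(bs)]                        # 1510
    else:
        cu["pus"] = [decode_pu(bs) for _ in range(ctx["num_pus"])]  # 1511
    bs.decode("remaining_cu_data")                         # 1512
    return cu
```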
The present invention recognizes three issues in the current design of these two specific modes.
A first issue is the lack of consistency between the two modes even though these modes have very similar behavior in terms of decoding process. These two modes have several commonalities, but each of them has a specific syntax. The main drawback of this duplication of syntax elements is the increase of the coding and decoding complexity.
A second issue concerns the compression cost of the signaling of the IPCM mode and the Transquant mode. In the current version of HEVC, the IPCM mode is signaled by a flag (pcm_flag) for each Prediction Unit of the same size as the Coding Unit (2Nx2N Prediction Unit). The compression cost of this syntax element has been limited by adding a restriction on the CU sizes to which this mode is applied. The Transquant mode is signaled at the CU level by a flag (cu_transquant_bypass_flag), but no attempt to reduce the compression cost of this syntax element has been made.
A third issue relates to the necessity to check for each CU whether the loop filtering must be applied or not. Here the concerned loop filtering methods are the Sample Adaptive Offset (SAO) method and the Adaptive Loop Filtering (ALF) method. Again, the multiplication of checks increases the decoder complexity.
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention there is provided a method for encoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode being selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, wherein for all cases of one of the at least two modes being selected, an associated syntax element is encoded in the video bitstream, the syntax element being representative of the coding unit sizes for which the at least two modes are selectable.
In another embodiment said syntax element is common to said at least two modes.
In another embodiment the syntax element is representative of a minimum size of a coding unit and maximum size of a coding unit.
In another embodiment the syntax element comprises two sub-syntax elements, a first sub-syntax element being representative of a minimum size of a coding unit, a second sub-syntax element being representative of the maximum size of a coding unit.
In another embodiment the syntax element is representative of a minimum size of a coding unit or of a maximum size of a coding unit.
In another embodiment the syntax element is representative of a range of sizes of a coding unit.
In another embodiment the syntax element is encoded in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
In another embodiment when the syntax element is encoded in the Parameter Set, a further syntax element indicating the usage of the Transquant mode is encoded in said Parameter Set.
In another embodiment a second syntax element common to the at least two modes is encoded in the video bitstream, the second syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
In another embodiment the loop filtering comprises at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
In another embodiment the second syntax element is encoded in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment when the second syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering is applied to the pixels of areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
In another embodiment when the second syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering parameters are encoded for areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
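The effect of the common second syntax element can be sketched as follows; this is an assumption-laden illustration, with the mode labels `"IPCM"` and `"TRANSQUANT"` chosen for readability rather than taken from the standard.

```python
# Illustrative sketch: the common second syntax element gates loop filtering
# (e.g. SAO or ALF) for areas containing CUs coded in either bypass mode.
# Mode labels are illustrative, not normative identifiers.

def loop_filtering_allowed(filter_disabled_for_modes, area_cu_modes):
    """Decide whether loop filtering may run on an area of pixels.

    filter_disabled_for_modes: value of the common second syntax element,
        True when loop filtering is disabled for CUs coded in one of the
        two modes.
    area_cu_modes: per-CU coding mode labels for the area.
    """
    uses_special_mode = any(m in ("IPCM", "TRANSQUANT") for m in area_cu_modes)
    # When the flag disables filtering for the special modes, the whole
    # area containing at least one such CU is left unfiltered, and no
    # loop filtering parameters need to be coded for it.
    if filter_disabled_for_modes and uses_special_mode:
        return False
    return True
```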
In another embodiment at least one third syntax element is encoded in the video bitstream, said at least one third syntax element indicating whether at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
In another embodiment the at least one third syntax element is common to the at least two modes, only one third syntax element being encoded in the video bitstream for the area of pixels.
In another embodiment one third syntax element is encoded in the video bitstream for each of the at least two modes.
In another embodiment the at least one third syntax element is encoded in a bitstream portion corresponding to the area of pixels.
In another embodiment the at least one third syntax element is encoded in the video bitstream if at least one of the at least two modes is authorized.
In another embodiment the at least one third syntax element is encoded in the video bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one third syntax element is encoded in the video bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one third syntax element is encoded in the video bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes.
In another embodiment the at least one third syntax element is encoded in the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one third syntax element is encoded in the bitstream if a loop filtering is disabled for the area of pixels.
In another embodiment loop filtering parameters corresponding to the area of pixels are encoded in the bitstream at a location following the location of the encoded at least one third syntax element if loop filtering is enabled for the area of pixels.
In another embodiment the at least one third syntax element is encoded in a slice header.
In another embodiment when at least one of the at least two modes is selected for a coding unit, a mode representative syntax element representing the selected coding mode is encoded in a bitstream portion of the video bitstream corresponding to the coding unit.
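The coding-unit-level signalling above might be sketched as follows, assuming a hypothetical bitstream representation (a list of named fields) and a hypothetical element name `cu_bypass_mode`; the sketch shows only that a mode-representative element is written in the portion of the bitstream corresponding to the coding unit, and only when one of the two modes is selected.

```python
# Illustrative sketch: a mode-representative syntax element is emitted in the
# coding unit's bitstream portion only when one of the two modes (which both
# prevent quantization and transform) is selected. Names are hypothetical.

def encode_cu_header(bits, cu_mode):
    """Append the mode-representative element for a single coding unit."""
    if cu_mode in ("IPCM", "TRANSQUANT"):
        bits.append(("cu_bypass_mode", cu_mode))  # signal the selected mode
    return bits
```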
According to a second aspect of the invention there is provided a method for decoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode having been selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, wherein for all cases of one of the at least two modes being selected, an associated syntax element is decoded from the video bitstream, the syntax element being representative of coding units sizes for which the at least two modes are selectable.
In another embodiment said syntax element is common to said at least two modes.
In another embodiment the syntax element is representative of a minimum size of a coding unit and/or maximum size of a coding unit.
In another embodiment the syntax element comprises two sub-syntax elements, a first sub-syntax element being representative of a minimum size of a coding unit, a second sub-syntax element being representative of the maximum size of a coding unit.
In another embodiment the syntax element is representative of a range of sizes of a coding unit.
In another embodiment the syntax element is decoded from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
In another embodiment a further syntax element indicating the usage of the Transquant mode is decoded from said Parameter Set.
In another embodiment a second syntax element common to the at least two modes is decoded from the video bitstream, the second syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
In another embodiment the loop filtering comprises at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
In another embodiment the second syntax element is decoded from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment when the second syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering is applied to the pixels of areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
In another embodiment when the second syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering parameters are decoded for areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
In another embodiment at least one third syntax element is decoded from the video bitstream, said at least one third syntax element indicating whether at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
In another embodiment the at least one third syntax element is common to the at least two modes, only one third syntax element being decoded from the video bitstream for the area of pixels.
In another embodiment one third syntax element is decoded from the video bitstream for each of the at least two modes.
In another embodiment the at least one third syntax element is decoded from a bitstream portion corresponding to the area of pixels.
In another embodiment the at least one third syntax element is decoded from the video bitstream if at least one of the at least two modes is authorized.
In another embodiment the at least one third syntax element is decoded from the video bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one third syntax element is decoded from the video bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes.
In another embodiment the at least one third syntax element is decoded from the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one third syntax element is decoded from the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes.
In another embodiment the at least one third syntax element is decoded from the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one third syntax element is decoded from the bitstream if a loop filtering is disabled for the area of pixels.
In another embodiment loop filtering parameters corresponding to the area of pixels are decoded from the bitstream after the decoding of the at least one third syntax element if loop filtering is enabled for the area of pixels.
In another embodiment the at least one third syntax element is decoded from a slice header.
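The decoder-side parsing order implied by these embodiments (third syntax element first, loop filtering parameters after it, and only when filtering remains enabled) can be sketched as follows. The reader abstraction and field names are assumptions for illustration, not the actual slice header syntax.

```python
# Illustrative sketch of decoder-side parsing order for an area of pixels
# (e.g. a slice): the third syntax element is read first, and loop filtering
# parameters are read at a following location only when filtering is enabled.
# The "reader" is modelled as a simple list of decoded values.

def parse_area_header(reader, modes_authorized, filter_disabled_flag):
    header = {}
    # Third syntax element: present only if at least one of the two modes
    # is authorized; it indicates whether the area contains at least one
    # CU encoded according to one of the at least two modes.
    if modes_authorized:
        header["area_has_bypass_cu"] = reader.pop(0)
    # Loop filtering parameters follow the third syntax element, and are
    # decoded only when loop filtering is enabled for the area.
    loop_filter_enabled = not (filter_disabled_flag
                               and header.get("area_has_bypass_cu", False))
    if loop_filter_enabled:
        header["loop_filter_params"] = reader.pop(0)
    return header
```

When the area contains a bypass-coded CU and the flag disables filtering for those modes, no loop filtering parameters are consumed from the bitstream at all.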
According to a third aspect of the invention there is provided a device for encoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode being selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, the device for encoding comprising a means for encoding an associated syntax element in the video bitstream for all cases of one of the at least two modes being selected, the syntax element being representative of coding units sizes for which the at least two modes are selectable.
In another embodiment one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
In another embodiment the device for encoding comprises a means for encoding a further syntax element indicating the usage of the Transquant mode in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream when the syntax element is encoded in the Parameter Set.
In another embodiment the device for encoding comprises a means for encoding a second syntax element common to the at least two modes in the video bitstream, the second syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes, the loop filtering comprising at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
In another embodiment the device for encoding comprises a means for encoding the second syntax element in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment the device for encoding comprises a means for applying a loop filtering to the pixels of an area of pixels wherein the activation of the means for loop filtering depends on a value indicated by the second syntax element.
In another embodiment the device for encoding comprises a means for encoding loop filtering parameters in the video bitstream wherein the activation of the means for encoding loop filtering parameters depends on a value indicated by the second syntax element.
In another embodiment the device for encoding comprises a means for encoding at least one third syntax element in the video bitstream, said at least one third syntax element indicating whether at least one coding unit of the area of pixels was encoded according to one of the at least two modes.
In another embodiment of the device for encoding, the means for encoding loop filtering parameters encodes the loop filtering parameters corresponding to the area of pixels in the bitstream at a location following the location of the encoded at least one third syntax element if loop filtering is enabled for the area of pixels.
In another embodiment of the device for encoding the means for encoding loop filtering parameters encodes the loop filtering parameters in a slice header.
In another embodiment the device for encoding comprises a means for encoding a mode representative syntax element representing the selected coding mode in a bitstream portion of the video bitstream corresponding to the coding unit when the coding unit was encoded according to one of the at least two modes.
According to a fourth aspect of the invention there is provided a device for decoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode having been selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, the device for decoding comprising means for decoding an associated syntax element from the video bitstream for all cases of one of the at least two modes being selected, the syntax element being representative of coding units sizes for which the at least two modes are selectable.
In another embodiment of the device for decoding one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
In another embodiment the device for decoding comprises a means for decoding a further syntax element indicating the usage of the Transquant mode from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream when the syntax element is encoded in the Parameter Set.
In another embodiment the device for decoding comprises a means for decoding a second syntax element common to the at least two modes from the video bitstream, the second syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes, the loop filtering comprising at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
In another embodiment the device for decoding comprises a means for decoding the second syntax element in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment the device for decoding comprises a means for applying a loop filtering to the pixels of an area of pixels wherein the activation of the means for loop filtering depends on a value indicated by the second syntax element.
In another embodiment the device for decoding comprises a means for decoding loop filtering parameters in the video bitstream wherein the activation of the means for decoding loop filtering parameters depends on a value indicated by the second syntax element.
In another embodiment the device for decoding comprises a means for decoding at least one third syntax element in the video bitstream, said at least one third syntax element indicating whether at least one coding unit of the area of pixels was encoded according to one of the at least two modes.
In another embodiment of the device for decoding the means for decoding loop filtering parameters decodes the loop filtering parameters corresponding to the area of pixels from the bitstream at a location following the location of the at least one third syntax element if loop filtering is enabled for the area of pixels.
In another embodiment of the device for decoding the means for decoding loop filtering parameters decodes the loop filtering parameters from a slice header.
In another embodiment the device for decoding comprises a means for decoding a mode representative syntax element representing the selected coding mode from a bitstream portion of the video bitstream corresponding to the coding unit when the coding unit was encoded according to one of the at least two modes.
According to a fifth aspect of the invention there is provided a signal carrying an information dataset compliant with the method for decoding as described in relation with the second aspect of the invention.
According to a sixth aspect of the invention there is provided a method for encoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode being selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, wherein a common syntax element common to the at least two modes is encoded in the video bitstream, the common syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
In another embodiment one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
In another embodiment the loop filtering comprises at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
In another embodiment the common syntax element is encoded in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment when the common syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering is applied to any pixels of areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
In another embodiment when the common syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering parameters are encoded for areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
In another embodiment at least one additional syntax element is encoded in the video bitstream, said at least one additional syntax element indicating whether at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
In another embodiment the at least one additional syntax element is common to the at least two modes, only one additional syntax element being encoded in the video bitstream.
In another embodiment one additional syntax element is encoded in the bitstream for each of the at least two modes.
In another embodiment the at least one additional syntax element is encoded in a bitstream portion corresponding to the area of pixels.
In another embodiment the at least one additional syntax element is encoded in the bitstream if at least one of the at least two modes is authorized.
In another embodiment the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes.
In another embodiment the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes.
In another embodiment the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for the area of pixels.
In another embodiment loop filtering parameters corresponding to the area of pixels are encoded in the bitstream at a location following the location of the encoded at least one additional syntax element if loop filtering is enabled for the area of pixels.
In another embodiment the at least one additional syntax element is encoded in a slice header.
In another embodiment when at least one of the at least two modes is selected for a coding unit, a mode representative syntax element representing the selected coding mode is encoded in a bitstream portion of the video bitstream corresponding to the coding unit.
According to a seventh aspect of the invention there is provided a method for decoding from a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode having been selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, wherein a common syntax element common to the at least two modes is decoded from the video bitstream, the common syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
In another embodiment one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
In another embodiment the loop filtering comprises at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
In another embodiment the common syntax element is decoded from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment when the common syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering is applied to any pixels of areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
In another embodiment when the common syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering parameters are decoded for areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
In another embodiment during the decoding of an area of pixels a counter counts the number of coding units encoded according to one of the at least two modes, no loop filtering being applied to the area of pixels when the counter value is above a given threshold.
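The counter-based embodiment above can be sketched directly; the threshold value and mode labels are illustrative assumptions.

```python
# Illustrative sketch: during decoding of an area of pixels, a counter tracks
# how many coding units were encoded in one of the two bypass modes; loop
# filtering is skipped for the area once the count exceeds a given threshold.

def should_disable_loop_filtering(cu_modes, threshold):
    """Return True when the area's bypass-coded CU count exceeds the threshold."""
    count = sum(1 for m in cu_modes if m in ("IPCM", "TRANSQUANT"))
    return count > threshold
```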
In another embodiment at least one additional syntax element is decoded from the video bitstream, said at least one additional syntax element indicating whether at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
In another embodiment the at least one additional syntax element is common to the at least two modes, only one additional syntax element being decoded from the video bitstream.
In another embodiment one additional syntax element is decoded from the bitstream for each of the at least two modes.
In another embodiment the at least one additional syntax element is decoded from a bitstream portion corresponding to the area of pixels.
In another embodiment the at least one additional syntax element is decoded from the bitstream if at least one of the at least two modes is authorized.
In another embodiment the at least one additional syntax element is decoded from the bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one additional syntax element is decoded from the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one additional syntax element is decoded from the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment loop filtering parameters corresponding to the area of pixels are decoded from the bitstream after the decoding of the at least one additional syntax element if loop filtering is enabled for the area of pixels.
According to an eighth aspect of the invention there is provided a device for encoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode being selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, the device for encoding comprising means for encoding a common syntax element common to the at least two modes in the video bitstream, the common syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
In another embodiment the device for encoding comprises loop filtering means capable of loop filtering according to at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
In another embodiment the device for encoding comprises a means for encoding the common syntax element in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment of the device for encoding the activation of the loop filtering means depends on a value indicated by the common syntax element.
In another embodiment the device for encoding comprises a means for encoding loop filtering parameters in the video bitstream wherein the activation of the means for encoding loop filtering parameters depends on a value indicated by the common syntax element.
In another embodiment the device for encoding comprises a means for encoding at least one additional syntax element in the video bitstream, said at least one additional syntax element indicating whether at least one coding unit of the area of pixels was encoded according to one of the at least two modes.
In another embodiment of the device for encoding the means for encoding loop filtering parameters encodes the loop filtering parameters corresponding to the area of pixels in the video bitstream at a location following the location of the encoded at least one additional syntax element if loop filtering is enabled for the area of pixels.
In another embodiment of the device for encoding the means for encoding loop filtering parameters encodes the loop filtering parameters in a slice header.
In another embodiment the device for encoding comprises a means for encoding a mode representative syntax element representing the selected coding mode in a bitstream portion of the video bitstream corresponding to the coding unit when the coding unit was encoded according to one of the at least two modes.
According to a ninth aspect of the invention there is provided a device for decoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode having been selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, the device for decoding comprising means for decoding a common syntax element common to the at least two modes from the video bitstream, the common syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
In another embodiment the device for decoding comprises loop filtering means capable of loop filtering according to at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
In another embodiment the device for decoding comprises a means for decoding the common syntax element from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
In another embodiment of the device for decoding the activation of the means for loop filtering depends on a value indicated by the common syntax element.
In another embodiment the device for decoding comprises a means for decoding loop filtering parameters in the video bitstream wherein the activation of the means for decoding loop filtering parameters depends on a value indicated by the common syntax element.
In another embodiment the device for decoding comprises a means for decoding at least one additional syntax element in the video bitstream, said at least one additional syntax element indicating whether at least one coding unit of the area of pixels was encoded according to one of the at least two modes.
In another embodiment of the device for decoding the means for decoding loop filtering parameters decodes the loop filtering parameters corresponding to the area of pixels from the bitstream at a location following the location of the at least one additional syntax element if loop filtering is enabled for the area of pixels.
In another embodiment of the device for decoding the means for decoding loop filtering parameters decodes the loop filtering parameters from a slice header.
In another embodiment the device for decoding comprises a means for decoding a mode representative syntax element representing the selected coding mode from a bitstream portion of the video bitstream corresponding to the coding unit when the coding unit was encoded according to one of the at least two modes.
According to a tenth aspect of the invention there is provided a signal carrying an information dataset compliant with the method for decoding as described in relation with the seventh aspect of the invention.
According to an eleventh aspect of the invention there is provided a method for controlling loop filtering of pixels of a block of pixels of an image, said block of pixels being partitioned in at least one coding unit, each coding unit being encoded according to a coding mode, said coding mode being one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, wherein the method comprises a first determination step of determining whether at least one coding unit of the block of pixels is encoded according to one of the at least two modes, and wherein a loop filtering process is disabled for all pixels of the block of pixels according to the result of said first determination step.
In another embodiment said method further comprises a second determination step of determining whether or not a loop filter disable flag present in the video bitstream enables the loop filtering for at least one of the at least two modes, and wherein said loop filtering process is disabled for all pixels of the block according to the result of said second determination.
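The two determination steps of this eleventh aspect can be sketched together; the mode labels are illustrative, and the flag stands in for the loop filter disable flag described above.

```python
# Illustrative sketch of the eleventh aspect: loop filtering is disabled for
# all pixels of a block only when (step 1) the block contains at least one CU
# encoded in one of the two modes AND (step 2) the loop filter disable flag
# in the bitstream disables filtering for those modes.

def loop_filter_enabled_for_block(block_cu_modes, loop_filter_disable_flag):
    # First determination step: is any CU of the block bypass-coded?
    has_bypass_cu = any(m in ("IPCM", "TRANSQUANT") for m in block_cu_modes)
    # Second determination step: does the flag disable filtering for them?
    return not (has_bypass_cu and loop_filter_disable_flag)
```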
In another embodiment a method for encoding a video bitstream comprises the method of controlling a loop filtering.
In another embodiment the method for encoding further comprises including in said video bitstream a syntax element indicating whether at least one coding unit of the block of pixels is encoded according to one of the at least two modes.
In another embodiment a method for decoding a video bitstream comprises the method of controlling a loop filtering.
In another embodiment the method for decoding further comprises decoding from said video bitstream a syntax element indicating whether at least one coding unit of the block of pixels is encoded according to one of the at least two modes.
In another embodiment of the method for decoding, when at least one coding unit of the area of pixels is encoded according to one of the at least two modes, no loop filtering parameters are decoded for the area of pixels.
In another embodiment of the method for decoding, during the decoding of an area of pixels a counter counts the number of coding units encoded according to one of the at least two modes, no loop filtering parameters being decoded for the area of pixels when the counter value is above a given threshold.
According to a twelfth aspect of the invention there is provided a device for encoding video data comprising:
* A means for partitioning into coding units an area of pixels of an image;
* A means for encoding coding units according to a coding mode, said coding mode being selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction;
* A means for selectively applying a loop filter to reconstructed pixels of a coding unit, wherein the loop filtering is disabled for the area of pixels when at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
According to a thirteenth aspect of the invention there is provided a device for decoding video data comprising:
* A means for decoding coding units according to a coding mode, said coding units corresponding to partitions of an area of pixels of an image, said coding mode having been selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction;
* A means for selectively applying a loop filter to reconstructed pixels of a coding unit, wherein the loop filtering is disabled for an area of pixels when at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
According to a fourteenth aspect of the invention there is provided a method for encoding in a video bitstream characteristics of an area of pixels, said area of pixels being partitioned in coding units, the coding units being encoded according to a coding mode selected in a plurality of coding modes comprising at least 2 modes preventing the usage of quantization and transform, one first mode of the at least 2 modes allowing pixel prediction, wherein at least one syntax element is encoded in the video bitstream, said at least one syntax element indicating if the area of pixels comprises at least one coding unit encoded according to one of the at least two modes.
In another embodiment the syntax element is common to the at least two modes, only one syntax element being encoded in the video bitstream.
In another embodiment one syntax element is encoded in the bitstream for each of the at least two modes.
In another embodiment the at least one syntax element is encoded in a bitstream portion corresponding to the area of pixels.
In another embodiment the at least one syntax element is encoded in the bitstream if at least one of the at least two modes is authorized.
In another embodiment the at least one syntax element is encoded in the bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one syntax element is encoded in the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one syntax element is encoded in the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment loop filtering parameters corresponding to the area of pixels are encoded in the bitstream at a location following the location of the encoded at least one syntax element if loop filtering is enabled for the area of pixels.
In another embodiment the encoded at least one syntax element is encoded in a slice header.
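The conditional encoding of the syntax element described in the embodiments above can be sketched as follows. All names are hypothetical illustrations; the bit list stands in for the actual entropy-coded bitstream:

```python
def encode_area_syntax(area_cu_modes, modes_authorized,
                       lf_disabled_for_bypass, lf_enabled_for_other_modes):
    """The syntax element signalling 'this area contains at least one IPCM
    or Transquant CU' is written only when at least one of the two modes is
    authorized, loop filtering is disabled for those modes, and loop
    filtering is enabled for at least one other mode.  Loop filtering
    parameters, when present, follow the syntax element."""
    bits = []
    has_bypass_cu = any(m in {"IPCM", "TRANSQUANT"} for m in area_cu_modes)
    if modes_authorized and lf_disabled_for_bypass and lf_enabled_for_other_modes:
        bits.append(1 if has_bypass_cu else 0)   # the at least one syntax element
    if not has_bypass_cu and lf_enabled_for_other_modes:
        bits.append("loop_filter_params")        # parameters follow the element
    return bits
```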
According to a fifteenth aspect of the invention there is provided a method for decoding in a video bitstream characteristics of an area of pixels, said area of pixels having been partitioned in coding units, the coding units being encoded according to a coding mode selected in a plurality of coding modes comprising at least 2 modes preventing the usage of quantization and transform, one first mode of the at least 2 modes allowing pixel prediction, wherein at least one syntax element is decoded from the video bitstream, said at least one syntax element indicating if the area of pixels comprises at least one coding unit encoded according to one of the at least two modes.
In another embodiment the syntax element is common to the at least two modes, only one syntax element being decoded from the video bitstream.
In another embodiment one syntax element is decoded from the bitstream for each of the at least two modes.
In another embodiment the at least one syntax element is decoded from a bitstream portion corresponding to the area of pixels.
In another embodiment the at least one syntax element is decoded from the bitstream if at least one of the at least two modes is authorized.
In another embodiment the at least one syntax element is decoded from the bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one syntax element is decoded from the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment the at least one syntax element is decoded from the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
In another embodiment loop filtering parameters corresponding to the area of pixels are decoded from the bitstream at a location following the location of the at least one syntax element if loop filtering is enabled for the area of pixels.
According to a sixteenth aspect of the invention there is provided a device for encoding in a video bitstream characteristics of an area of pixels, said device for encoding comprising:
* Means for partitioning the area of pixels in coding units;
* Means for encoding coding units according to a coding mode selected in a plurality of coding modes comprising at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction;
* Means for encoding at least one syntax element in the video bitstream, said at least one syntax element indicating if the area of pixels comprises at least one coding unit encoded according to one of the at least two modes.
According to a seventeenth aspect of the invention there is provided a device for decoding in a video bitstream characteristics of an area of pixels partitioned in coding units, said device for decoding comprising:
* Means for decoding coding units according to a coding mode having been selected in a plurality of coding modes comprising at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction;
* Means for decoding at least one syntax element in the video bitstream, said at least one syntax element indicating if the area of pixels comprises at least one coding unit encoded according to one of the at least two modes.
In another embodiment the device for decoding comprises a means for decoding loop filtering parameters corresponding to the area of pixels from the bitstream at a location following the location of the at least one syntax element if loop filtering is enabled for the area of pixels.
According to an eighteenth aspect of the invention there is provided a signal carrying an information dataset compliant with the method for decoding as described in relation with the fifteenth aspect of the invention.
According to a nineteenth aspect of the invention there is provided a method for encoding in a video bitstream characteristics of a coding mode applied to pixels of an area of pixels, the area of pixels being partitioned in coding units, comprising for each coding unit, specifying a coding mode selected from one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, and encoding in the video bitstream, at the coding unit level, one or more syntax elements indicating the specified mode.
According to a twentieth aspect of the invention there is provided a method for decoding from a video bitstream characteristics of a coding mode applied to pixels of a block of pixels, the block of pixels being partitioned in coding units, comprising for each coding unit, specifying a coding mode selected from one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, and decoding from the video bitstream, at the coding unit level, one or more syntax elements indicating the specified mode.
According to a 21st aspect of the invention there is provided a computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing a method according to any one of the first, second, sixth, seventh, eleventh, fourteenth, fifteenth, nineteenth and twentieth aspects of the invention when loaded into and executed by the programmable apparatus.
According to a 22nd aspect of the invention there is provided a computer-readable storage medium storing instructions of a computer program for implementing a method according to any one of the first, second, sixth, seventh, eleventh, fourteenth, fifteenth, nineteenth and twentieth aspects of the invention.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 graphically illustrates a video compression method of the prior art;
Figure 2 is a flow chart illustrating a video encoder of the prior art;
Figure 3 is a flow chart illustrating a video decoder of the prior art;
Figure 4 is a flow chart illustrating three types of coding modes of the prior art;
Figure 5 illustrates in the form of a pseudo code the processes performed by a decoder of the prior art for decoding a Prediction Unit;
Figure 6 illustrates in the form of a pseudo code the processes performed by a decoder of the prior art for decoding a Coding Unit;
Figure 7 illustrates in the form of a pseudo code the processes performed by a decoder of the invention for decoding a Coding Unit;
Figure 8 illustrates in the form of a pseudo code the processes performed by a decoder of the prior art for decoding a Sequence Parameter Set;
Figure 9 illustrates in the form of a pseudo code the processes performed by a decoder of the invention for decoding a Sequence Parameter Set;
Figure 10 illustrates in the form of a pseudo code the processes performed by a decoder of the invention for decoding a Coded Tree Block;
Figure 11 illustrates in the form of a pseudo code the processes performed by a decoder of the prior art for decoding a slice;
Figure 12 illustrates in the form of a pseudo code a first embodiment of the processes performed by a decoder of the invention for decoding a slice;
Figure 13 illustrates in the form of a pseudo code a second embodiment of the processes performed by a decoder of the invention for decoding a slice;
Figure 14 is a flow chart illustrating a part of the processes performed by a decoder of the prior art for decoding a Prediction Unit;
Figure 15 is a flow chart illustrating a part of the processes performed by a decoder of the prior art for decoding a Coding Unit;
Figure 16 is a flow chart illustrating a part of the processes performed by a decoder of the invention for decoding a Coding Unit;
Figure 17 is a flow chart illustrating a part of the processes performed by a decoder of the invention for decoding a Coded Tree Block;
Figure 18 is a flow chart illustrating a part of the processes performed by a decoder of the prior art for decoding a slice;
Figure 19 is a flow chart illustrating a first embodiment of the processes performed by a decoder of the invention for decoding a slice;
Figure 20 is a flow chart illustrating a second embodiment of the processes performed by a decoder of the invention for decoding a slice;
Figure 21 is a block diagram illustrating components of a processing device in which one or more embodiments of the invention may be implemented;
Figure 22 is a block diagram schematically illustrating a data communication system in which one or more embodiments of the invention may be implemented.
Figure 22 illustrates a data communication system in which one or more embodiments of the invention may be implemented. The data communication system comprises a transmission device, in this case a server 2201, which is operable to transmit data packets of a data stream to a receiving device, in this case a client terminal 2202, via a data communication network 2200. The data communication network 2200 may be a Wide Area Network (WAN) or a Local Area Network (LAN). Such a network may be for example a wireless network (WiFi / 802.11a or b or g), an Ethernet network, an Internet network or a mixed network composed of several different networks. In a particular embodiment of the invention the data communication system may be a digital television broadcast system in which the server 2201 sends the same data content to multiple clients.
The data stream (bitstream) 2204 provided by the server 2201 may be composed of multimedia data representing video and audio data. Audio and video data streams may, in some embodiments of the invention, be captured by the server 2201 using a microphone and a camera respectively. In some embodiments data streams may be stored on the server 2201 or received by the server 2201 from another data provider, or generated at the server 2201.
The server 2201 is provided with an encoder for encoding video and audio streams in particular to provide a compressed bitstream for transmission that is a more compact representation of the data presented as input to the encoder.
In order to obtain a better ratio of the quality of transmitted data to quantity of transmitted data, the compression of the video data may be for example in accordance with the HEVC format modified according to the invention.
The client 2202 receives the transmitted bitstream and decodes the reconstructed bitstream to reproduce video images on a display device and the audio data on a loudspeaker.
Although a streaming scenario is considered in the example of Figure 22, it will be appreciated that in some embodiments of the invention the data communication between an encoder and a decoder may be performed using for example a media storage device such as an optical disc.
Figure 21 schematically illustrates a processing device 2100 configured to implement at least one embodiment of the present invention. The processing device 2100 may be a device such as a micro-computer, a workstation or a light portable device. The device 2100 comprises a communication bus 2113 connected to:
- a central processing unit 2111, such as a microprocessor, denoted CPU;
- a read only memory 2107, denoted ROM, for storing computer programs for implementing the invention;
- a random access memory 2112, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method of encoding a sequence of digital images and/or the method of decoding a bitstream according to embodiments of the invention; and
- a communication interface 2102 connected to a communication network 2103 over which digital data to be processed are transmitted or received.
Optionally, the apparatus 2100 may also include the following components:
- a data storage means 2104 such as a hard disk, for storing computer programs for implementing methods of one or more embodiments of the invention and data used or produced during the implementation of one or more embodiments of the invention;
- a disk drive 2105 for a disk 2106, the disk drive being adapted to read data from the disk 2106 or to write data onto said disk;
- a screen 2109 for displaying data and/or serving as a graphical interface with the user, by means of a keyboard 2110 or any other pointing means.
The apparatus 2100 can be connected to various peripherals, such as for example a digital camera 2120 or a microphone 2108, each being connected to an input/output card (not shown) so as to supply multimedia data to the apparatus 2100.
The communication bus provides communication and interoperability between the various elements included in the apparatus 2100 or connected to it. The representation of the bus is not limiting and in particular the central processing unit is operable to communicate instructions to any element of the apparatus 2100 directly or by means of another element of the apparatus 2100.
The disk 2106 can be replaced by any information medium such as for example a compact disk (CD-ROM), rewritable or not, a ZIP disk or a memory card and, in general terms, by an information storage means that can be read by a microcomputer or by a microprocessor, integrated or not into the apparatus, possibly removable and adapted to store one or more programs whose execution enables the method of encoding a sequence of digital images according to the invention and/or the method of decoding a bitstream according to the invention to be implemented.
The executable code may be stored either in read only memory 2107, on the hard disk 2104 or on a removable digital medium such as for example a disk 2106 as described previously. According to a variant, the executable code of the programs can be received by means of the communication network 2103, via the interface 2102, in order to be stored in one of the storage means of the apparatus 2100 before being executed, such as the hard disk 2104.
The central processing unit 2111 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, instructions that are stored in one of the aforementioned storage means. On powering up, the program or programs that are stored in a non-volatile memory, for example on the hard disk 2104 or in the read only memory 2107, are transferred into the random access memory 2112, which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
In this embodiment, the apparatus is a programmable apparatus which uses software to implement the invention. However, alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).
As already seen above, the lack of harmonization between the IPCM and Transquant modes, even though these modes have similar features, induces encoding and decoding complexity issues as well as compression issues. The present invention aims at solving these issues by harmonizing the signaling of these two coding modes. Three aspects of this harmonization are considered in the following.
In a first aspect of the invention several commonalities between these two modes are identified and simplifications of the signaling are proposed accordingly:
o Currently the IPCM mode is signaled at the PU level while the Transquant mode is signaled at the CU level. One first embodiment of the first aspect of the invention relates to signaling the IPCM and the Transquant modes at the CU level.
o Currently, in order to avoid spending too many bits to signal the IPCM mode in CUs, a range of CU sizes on which the IPCM mode can be applied is defined in the Sequence Parameter Set. For the Transquant mode, there is no limitation on the size of the CU.
One second embodiment of the first aspect of the invention relates to applying a similar restriction to the Transquant mode.
o Regarding the loop filtering of these two modes, no loop filtering is authorized for the Transquant mode. For the IPCM mode, the loop filtering is applied only if the state of a flag located in the Sequence Parameter Set (SPS) indicates that the loop filter is activated. One third embodiment of the first aspect of the invention relates to mapping the behavior for the IPCM mode onto the Transquant mode.
The implications on the decoding and encoding processes of the different embodiments related to the first aspect will be described further in the following.
One advantage of this embodiment is to reduce the complexity of the decoding process and the encoding process.
In embodiments of a second aspect of the invention, we want to limit the number of checks performed when a loop filtering technique is enabled in order to reduce the complexity of the decoder. In the current version of HEVC (HEVC HM7.0), when the loop filtering is enabled for the current sequence, we need to check for each CU whether this CU is IPCM. If the mode of the CU is IPCM then no loop filtering is applied. In this invention, we introduce a new constraint. The condition is to not apply SAO (or ALF) to a CTB as soon as this CTB contains at least one IPCM and/or Transquant block, even if SAO (or ALF) is enabled in the Sequence Parameter Set. This condition can be checked once the syntax of all the CUs of the CTB has been decoded. Alternatively, this condition can be easily checked thanks to a global flag introduced through the third aspect of the invention. The implications on the decoding and encoding processes of an embodiment of the second aspect will be described further in the following.
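The per-CTB constraint above can be sketched as follows, with hypothetical names standing in for the decoder's internal state and SAO filtering routine:

```python
def apply_sao_to_ctb(ctb_cu_modes, sao_enabled_in_sps, sao_filter_cu):
    """Even when SAO is enabled in the Sequence Parameter Set, the whole CTB
    is skipped as soon as it contains at least one IPCM or Transquant CU,
    so a single per-CTB check replaces one check per CU."""
    if not sao_enabled_in_sps:
        return False
    if any(mode in {"IPCM", "TRANSQUANT"} for mode in ctb_cu_modes):
        return False                 # SAO disabled for the whole CTB
    for mode in ctb_cu_modes:
        sao_filter_cu(mode)          # SAO applied to every CU of the CTB
    return True
```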
In embodiments of a third aspect of the invention a global flag per Coded Tree Block (CTB) is used to signal the presence, inside this Coded Tree Block, of at least one CU which is coded using the IPCM or the Transquant mode.
This will avoid the coding of an individual flag for each CU when the global flag indicates that no CU is coded according to the IPCM and/or Transquant mode.
This will also ease the individual checking for each CU when applying the loop filtering process. The implications on the decoding and encoding processes of an embodiment of the third aspect will be described further in the following.
Note that the main advantage of the second and third aspects is again to reduce the decoding and encoding process complexity.
In embodiments of a fourth aspect of the invention, some compression gains can be obtained in the signaling of the SAO (or ALF) parameters when the considered CTB (or LCU) contains one IPCM or one Transquant CU. The implications on the decoding and encoding processes of an embodiment of the fourth aspect will be described further in the following. The main advantage of an embodiment of the fourth aspect is to improve the compression efficiency of an encoder according to the invention compared to an encoder of the prior art.
The following Table 2 summarizes the main modifications introduced by the invention compared to Table 1. In this table, modifications are shown in bold, italic and underlined format.
                  IPCM                                Transquant
Activation        SPS                                 SPS
Granularity       CU, min/max size in SPS             CU, min/max size in SPS
Location          First SE of CU                      First SE of CU
Loop filter       On/Off flag in SPS                  On/Off flag in SPS
Transform         Bypassed                            Bypassed
Quantization      Bypassed                            Bypassed
Prediction        No                                  Intra / Inter
Entropy coding    Skipped, samples directly coded     CABAC
                  on x bits, x being the bit depth
                  of the samples
Table 2: Table summarizing the differences between the two modes according to the invention
In relation with the first embodiment of the first aspect of the invention the following modifications of the signalization and of the decoding process of the CU are proposed. The signalization of the two modes is done at the CU level as depicted in the pseudo code of Figure 7 representing the decoding process of a coding unit. Figure 16 gives a simplified flowchart of the new Coding Unit syntax decoding. First the CU address is computed (1501). Then it is checked if the flag transquant_bypass_enable_flag is true (1602). This last test is associated with a second test, related to the second embodiment of the first aspect, checking if the CU size is in the authorized range of sizes of Transquant CUs. The second embodiment of the first aspect of the invention will be described in detail in the following. If this is true, the flag cu_transquant_bypass_flag is decoded (1503). Otherwise cu_transquant_bypass_flag is inferred to be 0 (false). The next process checks if the slice type is INTRA (1504). If the slice type is not INTRA, the skip_flag is decoded (1505). Otherwise skip_flag is inferred to be equal to 0 (false). Then the value of skip_flag is checked (1506). If skip_flag is true, the call to the prediction unit decoding function is made (1507). Otherwise, pred_mode_flag or part_mode is decoded (1508). Then it is checked if part_mode is 2Nx2N (1509).
If this is false, there is more than one PU in the CU and therefore as many calls to the prediction unit decoding function as there are PUs are made (1511).
Otherwise, there is a check if the CU is INTRA. This last check is associated with a second check, related to the first embodiment of the first aspect, checking if the pcm_enable_flag is true, and a third check checking if the CU size is in the authorized range of sizes of IPCM CUs (1603). If this is true, the pcm_flag is decoded (1406). Otherwise it is inferred to be 0 (false). Then the 2Nx2N PU is decoded (1510). Finally the remaining CU data are decoded (1512).
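The mode-flag part of the decoding described above can be sketched as follows. The `read_flag` callable and the `ctx` dictionary are hypothetical stand-ins for the CABAC decoder and the decoder state; the flag and variable names are taken from the text:

```python
def decode_cu_mode_flags(read_flag, ctx):
    """Both cu_transquant_bypass_flag and pcm_flag are read at the CU level,
    each gated by its enable flag and its authorized CU size range; an
    absent flag is inferred to be 0 (false)."""
    size = ctx["log2_cu_size"]
    tq_bypass = 0                                          # inferred when absent
    if ctx["transquant_bypass_enable_flag"] and \
       ctx["Log2MinTSQTCUSize"] <= size <= ctx["Log2MaxTSQTCUSize"]:
        tq_bypass = read_flag("cu_transquant_bypass_flag")
    pcm = 0                                                # inferred when absent
    if ctx["is_intra"] and ctx["pcm_enable_flag"] and \
       ctx["Log2MinIPCMCUSize"] <= size <= ctx["Log2MaxIPCMCUSize"]:
        pcm = read_flag("pcm_flag")
    return tq_bypass, pcm
```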
As can be seen, we do not change the signaling of the Transquant mode in the CU, which is called "cu_transquant_bypass_flag". This syntax element still uses the same coding descriptor, which is illustrated in block 601 of Figure 6.
However, we add some additional conditions in relation with the second embodiment of the first aspect.
Regarding the IPCM flag, we move the flag called "pcm_flag" from the Prediction Unit syntax (surrounded by the block 501 in Figure 5) to the Coding Unit syntax. The surrounded block 702 in Figure 7 shows the additional decoding process added to the Coding Unit decoding process where the "pcm_flag" is introduced in the coding unit.
Obviously an encoder according to the invention is modified accordingly in order to generate video bitstreams consistent with the modifications of the decoding process according to the first and second embodiment of the first aspect of the invention.
As already seen above, the second embodiment of the first aspect of the invention consists in limiting the usage of the "cu_transquant_bypass_flag" to a limited range of CU sizes. New variables similar to Log2MinIPCMCUSize and Log2MaxIPCMCUSize are used to represent this size limitation. These new variables will determine for which sizes of CU the "cu_transquant_bypass_flag" will be present. For this purpose, we have defined Log2MinTSQTCUSize and Log2MaxTSQTCUSize to respectively define the minimum size and the maximum size of the CU for which the "Transquant" mode will be considered. The induced modification of the decoding process is represented by the reference 701 of Figure 7, to be compared to the original description in block 601 of Figure 6. The comparison of these two syntax elements with the CU size is performed in step 1602 of Figure 16 already described. An example of range of authorized coding unit sizes could be from (8 pixels x 8 pixels) to (32 pixels x 32 pixels).
In the proposed second embodiment of the first aspect, the computation of these two variables requires the definition of two new syntax elements: "log2_min_TSQT_coding_block_size_minus3" and "log2_diff_max_min_TSQT_coding_block_size".
These new syntax elements are coded in the Sequence Parameter Set (SPS) and correspond to the additional decoding process represented by reference 902 in Figure 9. The SPS decoding process of the prior art is partially represented in Figure 8. The modified SPS decoding process is partially represented in Figure 9. The two added variables Log2MinTSQTCUSize and Log2MaxTSQTCUSize, added as a condition in reference 701, are computed from these two syntax elements as follows:
* The first variable Log2MinTSQTCUSize is set equal to "log2_min_TSQT_coding_block_size_minus3" + 3. The variable Log2MinTSQTCUSize shall be equal to or less than Min(Log2CtbSize, 5), where Min(x, y) computes the minimum of x and y and Log2CtbSize represents the size of the current Coding Tree Block. In addition the size of the CU is given by Log2CbSize and the exact size of the Coding Unit is NxN where N = 2^Log2CbSize.
* The second variable Log2MaxTSQTCUSize is set equal to "log2_min_TSQT_coding_block_size_minus3" + 3 + "log2_diff_max_min_TSQT_coding_block_size". The variable Log2MaxTSQTCUSize shall be equal to or less than Min(Log2CtbSize, 5).
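The derivation above can be sketched as follows, as a minimal illustration only (the function name is hypothetical; the constraint check is written as an assertion for clarity):

```python
def tsqt_cu_size_range(log2_min_minus3, log2_diff_max_min, log2_ctb_size):
    """Derive the Transquant CU size range from the two new SPS syntax
    elements, enforcing the Min(Log2CtbSize, 5) upper bound."""
    log2_min = log2_min_minus3 + 3                      # Log2MinTSQTCUSize
    log2_max = log2_min + log2_diff_max_min             # Log2MaxTSQTCUSize
    bound = min(log2_ctb_size, 5)
    assert log2_min <= bound and log2_max <= bound, "non-conforming SPS"
    return log2_min, log2_max    # a CU of log2 size s is N x N with N = 2**s
```

For example, log2_min_TSQT_coding_block_size_minus3 = 0 and log2_diff_max_min_TSQT_coding_block_size = 2 yield the (8x8) to (32x32) range mentioned earlier.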
As these new sizes are only taken into account if the "transquant_bypass_enable_flag" is equal to "1", this flag, which is present in the Picture Parameter Set (PPS) in the prior art, is moved to the Sequence Parameter Set (SPS). The corresponding modification of the SPS decoding process is illustrated by the additional reference 901 in Figure 9, to be compared to the original syntax of the SPS presented in Figure 8.
It is advantageous to be able to control the IPCM and Transquant modes for each picture and even slice. Therefore two additional syntax elements, related to these two modes, are in addition added in the PPS.
pps_pcm_enable_flag relates to the IPCM mode. It enables control of the IPCM mode for the slices referring to the PPS. When pps_pcm_enable_flag is equal to 1, the IPCM mode is enabled for slices referring to this PPS. When pps_pcm_enable_flag is equal to 0, the IPCM mode is disabled for slices referring to this PPS. A constraint is set to enforce pps_pcm_enable_flag to be equal to 0 when the SPS flag pcm_enable_flag is equal to 0.
Similarly, pps_transquant_bypass_enable_flag relates to the Transquant mode. It enables control of the Transquant mode for the slices referring to the PPS. When pps_transquant_bypass_enable_flag is equal to 1, the Transquant mode is enabled for slices referring to this PPS.
When pps_transquant_bypass_enable_flag is equal to 0, the Transquant mode is disabled for slices referring to this PPS. A constraint is set to enforce pps_transquant_bypass_enable_flag to be equal to 0 when the SPS flag transquant_bypass_enable_flag is equal to 0.
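The PPS-level control and its conformance constraints can be sketched as follows, using dictionaries whose keys mirror the flag names in the text (the function itself is a hypothetical illustration):

```python
def pps_modes_enabled(sps, pps):
    """Each PPS enable flag must be 0 when the corresponding SPS flag is 0;
    a mode is enabled for slices referring to the PPS only when its PPS
    flag is equal to 1."""
    if sps["pcm_enable_flag"] == 0:
        assert pps["pps_pcm_enable_flag"] == 0
    if sps["transquant_bypass_enable_flag"] == 0:
        assert pps["pps_transquant_bypass_enable_flag"] == 0
    return {"ipcm": pps["pps_pcm_enable_flag"] == 1,
            "transquant": pps["pps_transquant_bypass_enable_flag"] == 1}
```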
It is important to note that common syntax elements could also be used to jointly signal the minimum and the maximum size of CU for the two modes (IPCM and Transquant) instead of having independent syntax elements for each mode. This feature allows further decreasing the decoding and encoding complexity.
In addition, any alternative could also be applied, for example by defining just a minimum size or, similarly, just a maximum size of coding units.
The missing boundary of the range could then be inferred from the boundary obtained from the syntax element encoded in the bitstream.
Again, a person skilled in the art would obviously derive an encoder according to the invention in view of the decoder modifications in order to generate video bitstreams consistent with the modifications of the decoding process according to the second embodiment of the first aspect of the invention.
Considering the third embodiment of the first aspect of the invention, it is recalled that one objective of the first aspect is to allow having a similar behavior for the modes IPCM and Transquant with regard to the loop filtering.
Accordingly, we propose to define a common flag enabling or disabling the loop filtering for both modes. The loop filtering process includes the SAO and the ALF filtering processes in embodiments.
Note that in a further embodiment the deblocking filter can also be considered.
In the current version of the video decoder according to the prior art (HEVC HM7.0), only the IPCM CUs can be processed by the loop filtering process. In the third embodiment of the first aspect of the invention we propose to modify the name and the meaning of the current flag "pcm_loop_filter_disable_flag" (see reference 801 of Figure 8), signaled in the Sequence Parameter Set, in order to allow a similar behavior on IPCM CUs and Transquant CUs. This flag is renamed "pcm_transquant_loop_filter_disable_flag" and is illustrated by reference 903 in Figure 9. Similarly, the meaning of the flag cu_transquant_bypass_flag is modified. In the prior art, when this flag is equal to 1, the inverse transform, inverse quantization and loop filtering are bypassed. In the new meaning, only the inverse transform and inverse quantization are bypassed. The control of the loop filtering for this mode is made by the flag pcm_transquant_loop_filter_disable_flag.
In the current version of the decoder of the prior art (HEVC HM7.0), if one or more of the following conditions are true, the SAO loop filtering is not applied.
-pcm_loop_filter_disable_flag and pcm_flag are both equal to 1.
-cu_transquant_bypass_flag is equal to 1.
We propose a modification of the decoding process according to the invention taking into account the new flag "pcm_transquant_loop_filter_disable_flag". In this new process, if one or more of the following conditions are true, the SAO loop filtering is not applied to CUs processed by the IPCM or Transquant mode.
-pcm_transquant_loop_filter_disable_flag and pcm_flag are both equal to 1.
-pcm_transquant_loop_filter_disable_flag and cu_transquant_bypass_flag are both equal to 1.
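The two conditions above amount to the following per-CU decision (an illustrative sketch; the function name is ours, the logic follows the listed conditions):

```python
def sao_skipped_for_cu(pcm_transquant_loop_filter_disable_flag,
                       pcm_flag, cu_transquant_bypass_flag):
    """SAO loop filtering is not applied to a CU coded in IPCM or Transquant
    mode when the common disable flag is set."""
    return pcm_transquant_loop_filter_disable_flag == 1 and (
        pcm_flag == 1 or cu_transquant_bypass_flag == 1)
```

For instance, with the disable flag set, both an IPCM CU (pcm_flag = 1) and a Transquant CU (cu_transquant_bypass_flag = 1) skip SAO, which is exactly the harmonized behavior sought here.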
Note that a similar principle may be applied to the ALF process and to the deblocking filter process.
Again, a person skilled in the art would obviously derive an encoder according to the invention in view of the decoder modifications in order to generate video bitstreams consistent with the modifications of the decoding process according to the third embodiment of the first aspect of the invention.
As can be seen above, we have introduced in the previous section some harmonizations, and now both modes (IPCM and Transquant) can be processed by the loop filtering process when the new flag "pcm_transquant_loop_filter_disable_flag" is deactivated.
According to embodiments of the second aspect of the invention, we propose to go further in the harmonization of these two modes. Accordingly, in order to have the same behavior regarding the loop filtering process, in one embodiment of the second aspect we propose to modify the application of the loop filtering process.
In the current version of the decoder of the prior art (HEVC HM7.0), the flag "pcm_loop_filter_disable_flag" only has a relationship to the IPCM CUs. In the embodiment of the second aspect of the invention, we propose to disable the loop filtering process for all CUs of the considered CTB when the CTB contains at least one CU encoded in IPCM or Transquant mode.
Consequently, the loop filtering process is disabled for an entire CTB if it satisfies the two following conditions: * the "pcm_transquant_loop_filter_disable_flag" is equal to 1 * the Coded Tree Block contains at least one IPCM CU or one Transquant CU. In that case, we need to detect IPCM or Transquant CUs when decoding each CU.
The loop filtering is enabled for an entire CTB if the following condition is met: * the "pcm_transquant_loop_filter_disable_flag" is equal to 0. If this flag is not present in the bitstream it is inferred to be "0".
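The resulting CTB-level rule can be summarized in a short sketch (illustrative names; the inference of an absent flag to 0 follows the text above):

```python
def loop_filtering_disabled_for_ctb(pcm_transquant_loop_filter_disable_flag,
                                    ctb_contains_ipcm_or_transquant_cu):
    """Loop filtering is disabled for the entire CTB only when the common
    disable flag (inferred to be 0 if absent from the bitstream) is 1 AND
    the CTB contains at least one IPCM or Transquant CU."""
    return (pcm_transquant_loop_filter_disable_flag == 1
            and ctb_contains_ipcm_or_transquant_cu)
```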
Of course, it is also required that the loop filters are enabled for the sequence, picture and slice to which the CTB belongs. This is controlled by high-level flags. For instance, SAO is enabled for pictures of the sequence when the flag sample_adaptive_offset_enabled_flag located in the SPS is equal to 1. A slice-level flag slice_sample_adaptive_offset_flag also controls the SAO per slice and per color component. Similarly, the control of the ALF is done by the flag adaptive_loop_filter_enabled_flag in the SPS.
This embodiment of the second aspect of the invention allows improving the compression efficiency of the encoder by avoiding encoding the syntax elements related to the SAO parameters (reference 1101 in Figure 11), transmitted before the CTB data. Indeed, if it is detected that the CTB contains at least one IPCM or Transquant CU, these parameters are not signaled. A similar principle can be applied to the ALF parameters.
Another consequence of this embodiment is to reduce the decoder complexity. Indeed, the presence of the SAO parameters doesn't need to be tested by the decoder if it is detected that the CTB contains at least one IPCM or Transquant CU. This can be achieved by using a counter, updated during the decoding process of the CTB, and incremented by one each time a CU of the CTB is coded in IPCM or Transquant mode. As soon as the counter value is above 0, the decoder can conclude that no SAO parameters are present.
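The counter-based detection described above could be sketched as follows (the class and method names are purely illustrative):

```python
class CtbDecoderState:
    """Hypothetical per-CTB decoder state implementing the counter described
    above."""

    def __init__(self):
        self.ipcm_or_transquant_count = 0

    def on_cu_decoded(self, pcm_flag, cu_transquant_bypass_flag):
        # Incremented by one each time a CU of the CTB is coded in IPCM
        # or Transquant mode.
        if pcm_flag == 1 or cu_transquant_bypass_flag == 1:
            self.ipcm_or_transquant_count += 1

    def sao_parameters_present(self):
        # As soon as the counter value is above 0, the decoder can conclude
        # that no SAO parameters are present for this CTB.
        return self.ipcm_or_transquant_count == 0
```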
Note that in the previous embodiments, we considered until now that the loop filter is not applied on a CTB as soon as this CTB contains at least one IPCM or Transquant CU. In another embodiment, other conditions are possible; for instance, loop filtering is not applied as soon as the number of CUs encoded in IPCM or Transquant mode is above a given threshold.
Again, a person skilled in the art would obviously derive an encoder according to the invention in view of the decoder modifications in order to generate video bitstreams consistent with the modifications of the decoding process according to the second aspect of the invention.
According to embodiments of the second aspect of the invention, a detection means is required to detect whether or not a CTB contains IPCM or Transquant CUs.
This detection means is used to control the loop filtering process at the CTB level. According to embodiments of the third aspect of the invention, we propose a new syntax element (or flag) at the Coded Tree Block (CTB) level in order to simplify the control of the application of the loop filtering process and obtain a reduction of the decoder complexity.
As already described above, in the current version of the decoder of the prior art (HEVC HM7.0), the decoder checks for each CU if it is coded in IPCM or Transquant mode. In addition, for a Transquant CU, no loop filter is applied, while for the IPCM mode, the loop filtering is applied depending on a syntax element (or flag) "pcm_loop_filter_disable_flag" which is signaled in the Sequence Parameter Set. If the flag "pcm_loop_filter_disable_flag" is equal to "0", the loop filtering process can be applied for an IPCM CU; otherwise the loop filtering is skipped.
According to embodiments of the second aspect of the invention, we have proposed a common flag "pcm_transquant_loop_filter_disable_flag" allowing exactly the same behavior for the IPCM and the Transquant CUs with regard to the loop filtering. However, it is still necessary to check whether at least one CU in a CTB is either an IPCM or a Transquant CU.
In order to avoid the checking of each CU, we propose in this third aspect of the invention a new syntax element (or flag) at the Coded Tree Block (CTB) level. This flag indicates the presence or the absence of at least one of either an IPCM CU or a Transquant CU inside the CTB. If this flag is equal to "1", then it means that there is at least one IPCM CU or one Transquant CU in the Coded Tree Block. Figure 10 represents with reference 1001 the modification of the pseudo code corresponding to the decoding process of a Coded Tree Block induced by the usage of the new syntax element (or flag) pcm_or_transquant_present_flag.
Figure 17 gives a simplified flowchart of the initial part of the Coding Tree Block decoding, also represented in pseudo-code in Figure 10. First it is checked if the flag transquant_bypass_enable_flag is true or the flag pcm_enabled_flag is true, and also if the flag pcm_transquant_loop_filter_disable_flag is true (1701). If this check is verified, the flag pcm_or_transquant_present_flag is decoded (1702). Otherwise pcm_or_transquant_present_flag is inferred to be 0 (false). Finally the remaining CTB data are decoded (1703).
This flag "pcm_or_transquant_present_flag" is present at the CTB level only if the following conditions are satisfied: * at least one of the two modes is authorized, which means that the pcm_enabled_flag or the transquant_bypass_enable_flag is equal to "1" * the "pcm_transquant_loop_filter_disable_flag" is equal to "1". The loop filtering process is disabled for the entire CTB if: * the "pcm_or_transquant_present_flag" is equal to "1", which means that at least one IPCM or one Transquant CU is present in the current Coded Tree Block (CTB).
The loop filtering process is enabled for the entire CTB: * if the "pcm_or_transquant_present_flag" is equal to "0", which means that no IPCM nor Transquant CU is present in the current Coded Tree Block.
If this flag is not present in the bitstream it is inferred to be "0".
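The presence and inference rules above (steps 1701-1702 of Figure 17) can be sketched as follows; `read_flag` stands for the entropy decoding of one flag from the bitstream and is an assumed callable:

```python
def decode_pcm_or_transquant_present_flag(read_flag,
                                          transquant_bypass_enable_flag,
                                          pcm_enabled_flag,
                                          pcm_transquant_loop_filter_disable_flag):
    """The flag is present only if one of the two modes is authorized and
    the common loop-filter disable flag is set; otherwise it is inferred
    to be 0 (false)."""
    if ((transquant_bypass_enable_flag or pcm_enabled_flag)
            and pcm_transquant_loop_filter_disable_flag):
        return read_flag()  # step 1702: decode the flag from the bitstream
    return 0                # flag absent: inferred to be 0
```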
Of course, it is also required that the loop filters are enabled by the high-level flags previously mentioned (sample_adaptive_offset_enabled_flag, slice_sample_adaptive_offset_flag, adaptive_loop_filter_enabled_flag).
It is important to note that the proposed syntax element (or flag) pcm_or_transquant_present_flag is common for the two modes. However in another embodiment of the third aspect of the invention we could have one flag for each mode to independently notify the presence of IPCM and Transquant CUs in a CTB.
Again, a person skilled in the art would obviously derive an encoder according to the invention in view of the decoder modifications in order to generate video bitstreams consistent with the modifications of the decoding process according to the third aspect of the invention.
As mentioned in the description of the third aspect of the invention, the flag "pcm_or_transquant_present_flag" allows skipping the loop filtering process for an entire CTB, while in the decoder of the prior art (HEVC HM7.0) the loop filtering can be skipped only for a single IPCM CU.
With the new decoding rules introduced in Figures 9 and 10, it is now possible to turn off the SAO filtering for an entire CTB containing several CUs with at least one IPCM CU or one Transquant CU. In that case, the SAO parameters which are associated with each CTB can be saved if the SAO filtering process is disabled for this entire CTB. As illustrated by reference 1101 of Figure 11, the SAO parameters for a CTB (corresponding to the sao_param() syntax) are currently defined before the Coded Tree Block corresponding to the coding_tree() syntax in that block 1101.
Figure 16 gives a simplified flowchart of the Slice data syntax decoding as implemented in the current version of the decoder of the prior art (HEVC HM7.0), also represented in pseudo-code in Figure 11. First the CTB address is computed (1801). Then it is checked if the flags adaptive_loop_filter_flag and alf_cu_control_flag are true (1802). If this is true, the variable AlfCuFlagIdx is set to -1 (1803). The next step is the decoding of the SAO parameters (1804).
Then the call to the coding tree decoding is done (1805). This call returns a variable more-data, which indicates if remaining data have to be decoded for the CTB. An RBSP (Raw Byte Sequence Payload) byte-alignment is then achieved (1806). Finally, a check of more-data is made (1807). If it is true, the process goes back to step (1804). Otherwise the Slice data syntax decoding is finished.
In Figure 12, according to the fourth aspect of the invention, we propose to modify the organization of the slice decoding process so that the coding_tree() syntax is processed before the definition of the SAO parameters.
The modifications correspond to reference 1201 in Figure 12. In addition, a test is performed on the flag "pcm_or_transquant_present_flag". Consequently, if there is no IPCM CU and no Transquant CU in the current CTB, this means that the SAO loop filtering can be processed and the SAO parameters for this CTB are decoded.
Figure 19 gives a modified flowchart of the simplified Slice data syntax decoding, also represented in pseudo-code in Figure 12. First the CTB address is computed (1801). Then it is checked if the flags adaptive_loop_filter_flag and alf_cu_control_flag are true (1802). If this is true, the variable AlfCuFlagIdx is set to -1 (1803). The next step is the call to the coding tree decoding (1805).
Then the flag pcm_or_transquant_present_flag is checked (1901). If it is false, the SAO parameters are decoded (1804). The RBSP byte-alignment is then achieved (1806). Finally, a check of more-data is made (1807). If it is true, the process goes back to step (1805). Otherwise the Slice decoding is finished.
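The reorganized slice decoding of Figure 19 (coding tree decoded before the SAO parameters, which are then decoded only when no IPCM or Transquant CU is present) can be sketched as follows; `decode_coding_tree` and `decode_sao` are assumed stand-ins for the corresponding decoding steps:

```python
def decode_slice_data_modified(ctbs, decode_sao, decode_coding_tree):
    """Illustrative sketch of the modified slice data decoding (Figure 19).
    ctbs: iterable of CTB indices; decode_coding_tree(ctb) returns the
    pcm_or_transquant_present_flag for that CTB; decode_sao(ctb) decodes
    the SAO parameters. Returns the CTBs for which SAO was decoded."""
    sao_decoded = []
    for ctb in ctbs:
        present = decode_coding_tree(ctb)  # 1805: coding tree decoded first
        if not present:                    # 1901: no IPCM/Transquant CU
            decode_sao(ctb)                # 1804: SAO parameters decoded
            sao_decoded.append(ctb)
        # 1806: RBSP byte-alignment (omitted in this sketch)
    return sao_decoded
```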
Figures 12 and 19 illustrate an example of compression and complexity improvement regarding the SAO loop filter when the SAO loop filtering is not enabled on one CTB. Similarly, still in the scope of the fourth aspect, the same syntax saving could be made for the parameters related to the Adaptive Loop Filter (ALF), thanks to the flag "pcm_or_transquant_present_flag".
One can note in addition that the SAO parameter prediction scheme, consisting of predicting the SAO parameters of a CTB from the SAO parameters of neighboring CTBs, implemented in the current version of HEVC (HEVC HM7.0), is not modified by the embodiment of the fourth aspect of the invention.
Again, a person skilled in the art would obviously derive an encoder according to the invention in view of the decoder modifications in order to generate video bitstreams consistent with the modifications of the decoding process according to the fourth aspect of the invention.
As an alternative to Figure 12, where the goal is to save some bits dedicated to the SAO parameters signaling, the flag "pcm_or_transquant_present_flag" could be coded in the slice_data() syntax as presented by reference 1301 of Figure 13, representing a second embodiment of the fourth aspect of the invention. This makes it possible not to modify the current order of the two syntax elements sao_param() and coding_tree() as originally designed in Figure 11.
Figure 20 gives another modified flowchart of the simplified Slice data syntax decoding, also represented in pseudo-code in Figure 13. First the CTB address is computed (1801). Then it is checked if the flags adaptive_loop_filter_flag and alf_cu_control_flag are true (1802). When this is true, the variable AlfCuFlagIdx is set to -1 (1803). In the next step, it is checked if, on one side, the flag transquant_bypass_enable_flag is true or the flag pcm_enabled_flag is true, and, on the other side, if the flag pcm_transquant_loop_filter_disable_flag is true (1701). If this check is verified, the flag pcm_or_transquant_present_flag is decoded (1702). Otherwise pcm_or_transquant_present_flag is inferred to be 0 (false). Next, the value of pcm_or_transquant_present_flag is checked (1901). When it is false, the SAO parameters are decoded (1804). The next step is the call to the coding tree decoding (1805). The RBSP byte-alignment is then achieved (1806). Finally, a check of more-data is made (1807). If it is true, the process goes back to step (1701). Otherwise the slice decoding is finished.
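One iteration of this alternative variant, where the flag is coded in slice_data() before the SAO parameters so that the sao_param()/coding_tree() order of Figure 11 is preserved, could be sketched as follows (the callables are assumed stand-ins for the entropy decoder):

```python
def decode_ctb_figure20(read_flag, decode_sao, decode_coding_tree,
                        transquant_bypass_enable_flag, pcm_enabled_flag,
                        pcm_transquant_loop_filter_disable_flag):
    """Illustrative sketch of one CTB iteration of Figure 20."""
    if ((transquant_bypass_enable_flag or pcm_enabled_flag)
            and pcm_transquant_loop_filter_disable_flag):   # 1701
        present = read_flag()                               # 1702
    else:
        present = 0                                         # inferred to 0
    if not present:                                         # 1901
        decode_sao()                                        # 1804
    return decode_coding_tree()                             # 1805 -> more-data
```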
Note that it is preferably proposed in the invention to code the flag pcm_or_transquant_present_flag once per CTB. In an embodiment, the flag can be inserted per block of the CTB. For instance, if the CTB is of size 64x64 and the minimum CU size is 8x8, as specified in the current HEVC specification, one flag pcm_or_transquant_present_flag can be coded for each 32x32 block of the CTB, this block being potentially divided into more than one CU.
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications will be apparent to a skilled person in the art which lie within the scope of the present invention. Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, features of method aspects may be applied to apparatus aspects, and vice versa. As explained above for example, the second and third described embodiments can be combined particularly advantageously. Other advantageous combinations can equally be envisaged.
For example, while some of the previous embodiments were described independently, all embodiments could be combined in a global strategy for reducing the complexity of the decoder and the encoder or for improving the compression efficiency with regard to the IPCM and the Transquant modes.
Other combinations are possible; for example, a combination of the three embodiments of the first aspect may be particularly advantageous.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
Although the invention has been described particularly with reference to IPCM and Transquant modes (these being known modes in the art) it will be apparent that these are examples of a broader range of possible similar or equivalent modes, which modes prevent the usage of quantization and transform. One of such modes should typically allow pixel prediction.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.

Claims (99)

  1. A method for encoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode being selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, wherein for all cases of one of the at least two modes being selected, an associated syntax element is encoded in the video bitstream, the syntax element being representative of coding unit sizes for which the at least two modes are selectable.
  2. A method according to claim 1 wherein said syntax element is common to said at least two modes.
  3. A method according to claim 1 or 2 wherein the syntax element is representative of a minimum size of a coding unit and maximum size of a coding unit.
  4. A method according to claim 1 or 2 wherein the syntax element comprises two sub-syntax elements, a first sub-syntax element being representative of a minimum size of a coding unit, a second sub-syntax element being representative of the maximum size of a coding unit.
  5. A method according to claim 1 or 2 wherein the syntax element is representative of a minimum size of a coding unit or of a maximum size of a coding unit.
  6. A method according to claim 1 or 2 wherein the syntax element is representative of a range of sizes of a coding unit.
  7. A method according to any previous claim wherein the syntax element is encoded in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
  8. A method according to any previous claim wherein one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
  9. A method according to claims 7 and 8 wherein when the syntax element is encoded in the Parameter Set, a further syntax element indicating the usage of the Transquant mode is encoded in said Parameter Set.
  10. A method according to any previous claim wherein a second syntax element common to the at least two modes is encoded in the video bitstream, the second syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
  11. A method according to the previous claim wherein the loop filtering comprises at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
  12. A method according to any previous claim from claim 10 to claim 11 wherein the second syntax element is encoded in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
  13. A method according to any previous claim from claim 10 to claim 12 wherein when the second syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering is applied to the pixels of areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
  14. A method according to any previous claim from claim 10 to claim 13 wherein when the second syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering parameters are encoded for areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
  15. A method according to any previous claim wherein at least one third syntax element is encoded in the video bitstream, said at least one third syntax element indicating whether at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
  16. A method according to claim 15 wherein the at least one third syntax element is common to the at least two modes, only one third syntax element being encoded in the video bitstream for the block.
  17. A method according to claim 15 wherein one third syntax element is encoded in the video bitstream for each of the at least two modes.
  18. A method according to any previous claim from claim 15 to claim 17 wherein the at least one third syntax element is encoded in a bitstream portion corresponding to the area of pixels.
  19. A method according to any previous claim from claim 15 to claim 18 wherein the at least one third syntax element is encoded in the video bitstream if at least one of the at least two modes is authorized.
  20. A method according to any previous claim from claim 15 to claim 19 wherein the at least one third syntax element is encoded in the video bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
  21. A method according to any previous claim from claim 15 to claim 19 wherein the at least one third syntax element is encoded in the video bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
  22. A method according to any previous claim from claim 15 to claim 19 wherein the at least one third syntax element is encoded in the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
  23. A method according to any previous claim from claim 15 to 22 wherein loop filtering parameters corresponding to the area of pixels are encoded in the bitstream at a location following the location of the encoded at least one third syntax element if loop filtering is enabled for the area of pixels.
  24. A method according to any previous claim from claim 15 to 23 wherein the at least one third syntax element is encoded in a slice header.
  25. A method according to any previous claim wherein when at least one of the at least two modes is selected for a coding unit, a mode representative syntax element representing the selected coding mode is encoded in a bitstream portion of the video bitstream corresponding to the coding unit.
  26. A method for decoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode having been selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, wherein for all cases of one of the at least two modes being selected, an associated syntax element is decoded from the video bitstream, the syntax element being representative of coding unit sizes for which the at least two modes are selectable.
  27. A method according to claim 26 wherein said syntax element is common to said at least two modes.
  28. A method according to claim 26 or 27 wherein the syntax element is representative of a minimum size of a coding unit and/or maximum size of a coding unit.
  29. A method according to claim 26 or 27 wherein the syntax element comprises two sub-syntax elements, a first sub-syntax element being representative of a minimum size of a coding unit, a second sub-syntax element being representative of the maximum size of a coding unit.
  30. A method according to claim 26 or 27 wherein the syntax element is representative of a range of sizes of a coding unit.
  31. A method according to any previous claim from claim 26 to 30 wherein the syntax element is decoded from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
  32. A method according to any previous claim from claim 26 to 30 wherein one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
  33. A method according to claims 31 and 32 wherein a further syntax element indicating the usage of the Transquant mode is decoded from said Parameter Set.
  34. A method according to any previous claim from claim 26 to claim 33 wherein a second syntax element common to the at least two modes is decoded from the video bitstream, the second syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
  35. A method according to the previous claim wherein the loop filtering comprises at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
  36. A method according to any previous claim from claim 34 to claim 35 wherein the second syntax element is decoded from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
  37. A method according to any previous claim from claim 34 to claim 36 wherein when the second syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering is applied to the pixels of areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
  38. A method according to any previous claim from claim 34 to 37 wherein when the second syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering parameters are decoded for areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
  39. A method according to any previous claim from claim 26 to claim 38 wherein at least one third syntax element is decoded from the video bitstream, said at least one third syntax element indicating whether at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
  40. A method according to claim 39 wherein the at least one third syntax element is common to the at least two modes, only one third syntax element being decoded from the video bitstream.
  41. A method according to claim 39 wherein one third syntax element is decoded from the video bitstream for each of the at least two modes.
  42. A method according to any previous claim from claim 39 to claim 41 wherein the at least one third syntax element is decoded from a bitstream portion corresponding to the area of pixels.
  43. A method according to any previous claim from claim 39 to claim 42 wherein the at least one third syntax element is decoded from the video bitstream if at least one of the at least two modes is authorized.
  44. A method according to any previous claim from claim 39 to claim 43 wherein the at least one third syntax element is decoded from the video bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
  45. A method according to any previous claim from claim 39 to claim 43 wherein the at least one third syntax element is decoded from the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
  46. A method according to any previous claim from claim 39 to claim 43 wherein the at least one third syntax element is decoded from the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
  47. A method according to any previous claim from claim 39 to 46 wherein loop filtering parameters corresponding to the area of pixels are decoded from the bitstream after the decoding of the at least one third syntax element if loop filtering is enabled for the area of pixels.
  48. A method according to any previous claim from claim 39 to 47 wherein the at least one third syntax element is decoded from a slice header.
49. A device for encoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode being selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, the device for encoding comprising a means for encoding an associated syntax element in the video bitstream for all cases of one of the at least two modes being selected, the syntax element being representative of coding unit sizes for which the at least two modes are selectable.
50. A device for encoding according to claim 49 wherein the syntax element is characterized according to any one of claims 2 to 7.
51. A device for encoding according to claim 49 or 50 wherein one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
52. A device for encoding according to any previous claim from claim 49 to 51 comprising a means for encoding a further syntax element indicating the usage of the Transquant mode in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream when the syntax element is encoded in the Parameter Set.
53. A device for encoding according to any previous claim from claim 49 to 52 comprising a means for encoding a second syntax element common to the at least 2 modes in the video bitstream, the second syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes, the loop filtering comprising at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
54. A device for encoding according to claim 53 comprising a means for encoding the second syntax element in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
55. A device for encoding according to claim 53 or 54 comprising a means for applying a loop filtering to the pixels of an area of pixels wherein the activation of the means for loop filtering depends on a value indicated by the second syntax element.
56. A device for encoding according to claim 53 or 55 comprising a means for encoding loop filtering parameters in the video bitstream wherein the activation of the means for encoding loop filtering parameters depends on a value indicated by the second syntax element.
57. A device for encoding according to any previous claim from claim 49 to claim 56 comprising a means for encoding at least one third syntax element in the video bitstream, said at least one third syntax element indicating whether at least one coding unit of the area of pixels was encoded according to one of the at least two modes.
58. A device for encoding according to claim 57 wherein the at least one third syntax element is characterized according to any one of claims 16 to 22.
59. A device for encoding according to claim 58 wherein the means for encoding loop filtering parameters encodes the loop filtering parameters corresponding to the area of pixels in the bitstream at a location following the location of the encoded at least one third syntax element if loop filtering is enabled for the area of pixels.
60. A device for encoding according to claim 58 wherein the means for encoding loop filtering parameters encodes the loop filtering parameters in a slice header.
61. A device for encoding according to any previous claim from claim 49 to claim 60 comprising a means for encoding a mode representative syntax element representing the selected coding mode in a bitstream portion of the video bitstream corresponding to the coding unit when the coding unit was encoded according to one of the at least two modes.
62. A device for decoding from a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode having been selected as one of at least two modes preventing the usage of quantization and transform, one first mode of the at least two modes allowing pixel prediction, the device for decoding comprising means for decoding an associated syntax element from the video bitstream for all cases of one of the at least two modes being selected, the syntax element being representative of coding unit sizes for which the at least two modes are selectable.
63. A device for decoding according to claim 62 wherein the syntax element is characterized according to any one of claims 27 to 31.
64. A device for decoding according to claim 62 or 63 wherein one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
65. A device for decoding according to any previous claim from claim 62 to 64 comprising a means for decoding a further syntax element indicating the usage of the Transquant mode from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream when the syntax element is encoded in the Parameter Set.
66. A device for decoding according to any previous claim from claim 62 to 65 comprising a means for decoding a second syntax element common to the at least 2 modes from the video bitstream, the second syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes, the loop filtering comprising at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
67. A device for decoding according to claim 66 comprising a means for decoding the second syntax element from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
68. A device for decoding according to claim 66 or 67 comprising a means for applying a loop filtering to the pixels of an area of pixels wherein the activation of the means for loop filtering depends on a value indicated by the second syntax element.
69. A device for decoding according to any previous claim from claim 66 to 68 comprising a means for decoding loop filtering parameters from the video bitstream wherein the activation of the means for decoding loop filtering parameters depends on a value indicated by the second syntax element.
70. A device for decoding according to any previous claim from claim 62 to claim 69 comprising a means for decoding at least one third syntax element from the video bitstream, said at least one third syntax element indicating whether at least one coding unit of the area of pixels was encoded according to one of the at least two modes.
71. A device for decoding according to claim 70 wherein the at least one third syntax element is characterized according to any one of claims 40 to 46.
72. A device for decoding according to claim 71 wherein the means for decoding loop filtering parameters decodes the loop filtering parameters corresponding to the area of pixels from the bitstream at a location following the location of the at least one third syntax element if loop filtering is enabled for the area of pixels.
73. A device for decoding according to claim 72 wherein the means for decoding loop filtering parameters decodes the loop filtering parameters from a slice header.
74. A device for decoding according to any previous claim from claim 62 to claim 73 comprising a means for decoding a mode representative syntax element representing the selected coding mode from a bitstream portion of the video bitstream corresponding to the coding unit when the coding unit was encoded according to one of the at least two modes.
75. A signal carrying an information dataset compliant with the method for decoding according to claim 26 to claim 48.
76. A method for encoding in a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode being selected as one of at least 2 modes preventing the usage of quantization and transform, one first mode of the at least 2 modes allowing pixel prediction, wherein a common syntax element common to the at least 2 modes is encoded in the video bitstream, the common syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
77. A method according to claim 51 wherein one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
78. A method according to any previous claim from claim 76 to claim 77 wherein the loop filtering comprises at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
79. A method according to any previous claim from claim 76 to claim 78 wherein the common syntax element is encoded in a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
80. A method according to any previous claim from claim 76 to claim 79 wherein when the common syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering is applied to any pixels of areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
81. A method according to any previous claim from claim 76 to claim 80 wherein when the common syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering parameters are encoded for areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
82. A method according to any previous claim from claim 76 to claim 81 wherein at least one additional syntax element is encoded in the video bitstream, said at least one additional syntax element indicating whether at least one coding unit of the area of pixels is encoded according to one of the at least two modes.
83. A method according to claim 82 wherein the at least one additional syntax element is common to the at least two modes, only one additional syntax element being encoded in the video bitstream.
84. A method according to claim 82 wherein one additional syntax element is encoded in the bitstream for each of the at least two modes.
85. A method according to any previous claim from claim 82 to claim 84 wherein the at least one additional syntax element is encoded in a bitstream portion corresponding to the area of pixels.
86. A method according to any previous claim from claim 82 to claim 85 wherein the at least one additional syntax element is encoded in the bitstream if at least one of the at least two modes is authorized.
87. A method according to any previous claim from claim 82 to claim 86 wherein the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
88. A method according to any previous claim from claim 82 to claim 86 wherein the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for areas of pixels comprising coding units encoded according to one of the at least two modes and a loop filtering is enabled for at least one mode different from the at least two modes.
89. A method according to any previous claim from claim 82 to claim 86 wherein the at least one additional syntax element is encoded in the bitstream if a loop filtering is disabled for the area of pixels and a loop filtering is enabled for at least one mode different from the at least two modes.
90. A method according to any previous claim from claim 82 to 89 wherein loop filtering parameters corresponding to the area of pixels are encoded in the bitstream at a location following the location of the encoded at least one additional syntax element if loop filtering is enabled for the area of pixels.
91. A method according to any previous claim from claim 82 to 90 wherein the at least one additional syntax element is encoded in a slice header.
92. A method according to any previous claim from claim 76 to 91 wherein when at least one of the at least two modes is selected for a coding unit, a mode representative syntax element representing the selected coding mode is encoded in a bitstream portion of the video bitstream corresponding to the coding unit.
93. A method for decoding from a video bitstream characteristics of a coding mode applied to pixels of a coding unit resulting from the partitioning of an area of pixels, said coding mode having been selected as one of at least 2 modes preventing the usage of quantization and transform, one first mode of the at least 2 modes allowing pixel prediction, wherein a common syntax element common to the at least 2 modes is decoded from the video bitstream, the common syntax element indicating whether loop filtering is enabled for all coding units encoded according to any one of the at least two modes.
94. A method according to claim 93 wherein one first mode of the at least two modes is an IPCM mode and one second mode of the at least two modes is a Transquant mode.
95. A method according to any previous claim from claim 93 to claim 94 wherein the loop filtering comprises at least one of a Sample Adaptive Offset (SAO) method or an Adaptive Loop Filtering (ALF) method.
96. A method according to any previous claim from claim 93 to claim 95 wherein the common syntax element is decoded from a Parameter Set representative of characteristics of at least a part of the video sequence represented by the video bitstream.
97. A method according to any previous claim from claim 93 to claim 96 wherein when the common syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering is applied to any pixels of areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
98. A method according to any previous claim from claim 93 to 97 wherein when the common syntax element indicates that the loop filtering is disabled for reconstructed pixels of coding units encoded according to at least one of the two modes, no loop filtering parameters are decoded for areas of pixels comprising at least one coding unit encoded according to one of the at least two modes.
99. A method according to claim 98 wherein during the decoding of an area of pixels a counter counts the number of coding units encoded according to one of the at least two modes, no loop filtering being applied to the area of pixels when the counter value is above a given threshold.
    Claims are truncated...
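The decoder-side behaviour of claims 93 to 99 can be sketched as follows: a single syntax element common to the IPCM and Transquant modes controls whether loop filtering applies to coding units encoded in either mode, and (claim 99) a counter of such coding units within an area of pixels can disable filtering for the whole area when it exceeds a threshold. This is a minimal illustrative sketch, not actual HEVC syntax or decoding logic; all identifiers are hypothetical.

```python
# Illustrative sketch of the harmonised loop-filter decision described in
# claims 93-99. Mode names and the function signature are hypothetical.

LOSSLESS_MODES = {"IPCM", "TRANSQUANT_BYPASS"}


def should_apply_loop_filter(cu_modes, common_flag_disables_filter, threshold):
    """Decide whether loop filtering (e.g. SAO/ALF) applies to an area.

    cu_modes: coding-mode name for each coding unit in the area of pixels.
    common_flag_disables_filter: value of the common syntax element
        (True means filtering is disabled for the two lossless modes).
    threshold: the claim-99 counter threshold.
    """
    # Claim 99: count the coding units encoded in one of the two modes.
    lossless_count = sum(1 for m in cu_modes if m in LOSSLESS_MODES)

    # Claims 97/98: when the common flag disables filtering and the area
    # contains at least one such coding unit, no filtering is applied to
    # the area (and no loop filtering parameters are decoded for it).
    if common_flag_disables_filter and lossless_count > 0:
        return False

    # Claim 99: no filtering when the counter exceeds the given threshold.
    if lossless_count > threshold:
        return False

    return True
```

The point of the common flag is that one element signals the filtering behaviour for both modes at once, instead of separate IPCM and Transquant signalling.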
GB1211616.6A 2012-06-29 2012-06-29 Method and device for encoding and decoding data in a video encoder and a video decoder Expired - Fee Related GB2503658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1211616.6A GB2503658B (en) 2012-06-29 2012-06-29 Method and device for encoding and decoding data in a video encoder and a video decoder

Publications (3)

Publication Number Publication Date
GB201211616D0 GB201211616D0 (en) 2012-08-15
GB2503658A true GB2503658A (en) 2014-01-08
GB2503658B GB2503658B (en) 2015-07-01

Family

ID=46721666

Country Status (1)

Country Link
GB (1) GB2503658B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120128067A1 (en) * 2010-11-22 2012-05-24 Mediatek Singapore Pte. Ltd. Apparatus and Method of Constrained Partition Size for High Efficiency Video Coding
US20120213274A1 (en) * 2011-02-22 2012-08-23 Chong Soon Lim Filtering method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130336395A1 (en) * 2012-06-18 2013-12-19 Qualcomm Incorporated Unification of signaling lossless coding mode and pulse code modulation (pcm) mode in video coding
US9706200B2 (en) * 2012-06-18 2017-07-11 Qualcomm Incorporated Unification of signaling lossless coding mode and pulse code modulation (PCM) mode in video coding

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20220629