EP2044773A1 - Methods and systems for combining layers in a multi-layer bitstream - Google Patents

Methods and systems for combining layers in a multi-layer bitstream

Info

Publication number
EP2044773A1
Authority
EP
European Patent Office
Prior art keywords
layer
transform
spatial
combined
transform coefficients
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP07768418A
Other languages
German (de)
French (fr)
Other versions
EP2044773A4 (en)
Inventor
Christopher Andrew Segall
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Sharp Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/694,955 (US8130822B2)
Priority claimed from US11/694,956 (US8059714B2)
Priority claimed from US11/694,959 (US8422548B2)
Priority claimed from US11/694,954 (US8532176B2)
Priority claimed from US11/694,958 (US7885471B2)
Priority claimed from US11/694,957 (US7840078B2)
Application filed by Sharp Corp
Publication of EP2044773A1
Publication of EP2044773A4

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122 - Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • H04N19/124 - Quantisation
    • H04N19/126 - Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • H04N19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • Embodiments of the present invention comprise methods and systems for processing and process management in a multi-layer bitstream.
  • the present invention relates particularly to 1) methods and systems for combining layers in a multi-layer bitstream, 2) methods and systems for conditional transform-domain residual accumulation, 3) methods and systems for residual layer scaling, 4) methods and systems for image processing control based on adjacent block characteristics, 5) methods and systems for maintenance and use of coded block pattern information, and 6) methods and systems for transform selection and management.
  • a scalable bit-stream may comprise a form of inter-layer prediction.
  • Exemplary systems comprise inter-layer prediction within the scalable video extensions of the AVC|H.264 video coding standard, commonly known as SVC; the SVC system is described in T. Wiegand, G. Sullivan, J. Reichel, H. Schwarz and M. Wien, "Joint Draft 9 of SVC amendment (revision 2)", JVT-V201, Marrakech, Morocco, January 13-19, 2007.
  • inter-layer prediction is realized by projecting motion and mode information from an enumerated lower layer to an enumerated higher layer.
  • prediction residual is projected from an enumerated lower layer to an enumerated higher layer.
  • the higher layer bit-stream may then contain additional residual to improve the quality of the decoded output.
  • a method for combining layers in a multi-layer bitstream comprising: a) inverse quantizing a first-layer quantized transform coefficient thereby creating a first-layer transform coefficient; b) scaling the first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; c) inverse quantizing a second-layer quantized transform coefficient thereby creating a second-layer transform coefficient; and d) combining the scaled, first-layer transform coefficient with the second-layer transform coefficient to form a combined coefficient.
  • the first-layer may be a base layer.
  • the inverse quantizing a first-layer quantized transform coefficient may comprise using a first quantization parameter and the inverse quantizing a second-layer quantized transform coefficient may comprise using a second quantization parameter.
  • the first layer and the second layer may have different spatial resolutions.
  • the second-layer may be an enhancement layer.
  • the method may further comprise inverse transforming the combined coefficient thereby generating a spatial-domain residual value.
  • the method may further comprise combining the spatial-domain residual value with a spatial-domain prediction value.
  • the method may further comprise: a) inverse quantizing a third-layer quantized transform coefficient thereby creating a third-layer transform coefficient; b) scaling the combined coefficient to match a characteristic of a third layer thereby creating a scaled-combined coefficient; and c) combining the scaled-combined coefficient with the third-layer transform coefficient.
  • the method may further comprise generating a combined bitstream comprising the combined coefficient.
  • the combined bitstream may further comprise an intra-prediction mode.
  • the combined bitstream may further comprise a motion vector.
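The transform-domain combination described above can be sketched in a few lines of Python. This is a minimal illustration, assuming an H.264-style quantizer whose step size doubles every 6 QP increments (k = 6); the function names and the simplified scalar dequantization are hypothetical, not taken from the patent or the SVC standard.

```python
# Minimal sketch of steps a)-d): inverse quantize each layer, scale the
# first-layer coefficient to the second layer's quantizer scale, and sum
# in the transform domain. Assumes a simplified H.264-style model in
# which the quantizer step size doubles every 6 QP increments (k = 6).

def inverse_quantize(level, qp, k=6):
    # steps a) and c): simplified scalar dequantization
    return level * (2 ** (qp / float(k)))

def scale_to_layer(coeff, qp_first, qp_second, k=6):
    # step b): re-express a first-layer value at the second layer's scale
    return coeff * (2 ** ((qp_first - qp_second) / float(k)))

def combine(levels_first, qp_first, levels_second, qp_second):
    combined = []
    for l1, l2 in zip(levels_first, levels_second):
        t1 = inverse_quantize(l1, qp_first)    # first-layer coefficient
        t2 = inverse_quantize(l2, qp_second)   # second-layer coefficient
        combined.append(scale_to_layer(t1, qp_first, qp_second) + t2)  # d)
    return combined
```

Under this model, a first-layer level of 1 at QP 6 combined with a second-layer level of 1 at QP 0 yields a combined coefficient of 5.0.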
  • a system for combining layers in a multi-layer bitstream comprising: a) a first inverse quantizer for inverse quantizing a first-layer quantized transform coefficient thereby creating a first-layer transform coefficient; b) a scaler for scaling the first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; c) a second inverse quantizer for inverse quantizing a second-layer quantized transform coefficient thereby creating a second-layer transform coefficient; and d) a coefficient combiner for combining the scaled, first-layer transform coefficient with the second-layer transform coefficient to form a combined coefficient.
  • the system may further comprise a bitstream generator for generating a combined bitstream comprising the combined coefficient.
  • the system may further comprise an inverse transformer for inverse transforming the combined coefficient thereby generating a spatial-domain residual value and a second combiner for combining the spatial-domain residual value with the spatial-domain prediction value.
  • a method for converting an SVC-compliant bitstream to AVC-compliant data comprising: a) receiving an SVC-compliant bitstream comprising prediction data, base-layer residual data and enhancement-layer residual data; b) inverse quantizing the base-layer residual data thereby creating base-layer transform coefficients; c) inverse quantizing the enhancement-layer residual data thereby creating enhancement-layer transform coefficients; d) scaling the base-layer transform coefficients to match a quantization characteristic of the enhancement-layer thereby creating scaled base-layer transform coefficients; and e) combining the scaled base-layer transform coefficients with the enhancement-layer transform coefficients to form combined coefficients.
  • the method may further comprise combining the combined coefficients with the prediction data to form an AVC-compliant bitstream.
  • the prediction data may comprise an intra-prediction mode indicator.
  • the prediction data may comprise a motion vector.
  • the method may further comprise inverse transforming the combined coefficients thereby creating spatial-domain residual values.
  • the method may further comprise obtaining spatial-domain prediction values and combining the spatial-domain prediction values with the spatial-domain residual values to form a decoded image.
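The SVC-to-AVC rewrite path above can be sketched as a single block-level function. The dict layout, field names, and the 2**(qp/6) dequantization model are illustrative assumptions, not actual SVC/AVC bitstream syntax; the point is that residuals are merged while prediction data passes through untouched, avoiding a full decode and re-encode.

```python
# Hypothetical sketch of the SVC-to-AVC rewrite, steps a)-e): residual
# data from the base and enhancement layers is merged into one set of
# transform coefficients, while prediction data (intra modes, motion
# vectors) is copied through unchanged.

def rewrite_block(block, k=6):
    qp_b, qp_e = block["base_qp"], block["enh_qp"]
    base = [l * 2 ** (qp_b / float(k)) for l in block["base_levels"]]  # b)
    enh = [l * 2 ** (qp_e / float(k)) for l in block["enh_levels"]]    # c)
    scale = 2 ** ((qp_b - qp_e) / float(k))                            # d)
    coeffs = [scale * b + e for b, e in zip(base, enh)]                # e)
    # prediction data is carried into the single-layer output unchanged
    return {"coeffs": coeffs, "prediction": block["prediction"]}
```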
  • a method for combining layers in a multi-layer bitstream comprising: a) receiving a first-layer quantized transform coefficient; b) scaling the first-layer quantized transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer quantized transform coefficient; c) receiving a second-layer quantized transform coefficient; and d) combining the scaled, first-layer quantized transform coefficient with the second-layer quantized transform coefficient to form a combined quantized coefficient.
  • the method may further comprise inverse quantizing the combined quantized coefficient to produce a combined coefficient.
  • the method may further comprise inverse transforming the combined coefficient thereby producing a spatial-domain residual value.
  • the method may further comprise combining the spatial-domain residual value with a spatial-domain prediction value.
  • the method may further comprise generating a combined bitstream comprising the combined quantized coefficient.
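The quantized-domain variant above can be sketched as follows. Under the assumed 2**(qp/6) quantizer model (an H.264-style simplification, not the patent's exact arithmetic), scaling and combining the quantized levels first, then dequantizing once, gives the same result as dequantizing each layer separately and summing.

```python
# Sketch of the quantized-domain variant: the first-layer quantized
# level is rescaled to the second layer's quantizer, added to the
# second-layer level, and the combined level is inverse quantized once.

def combine_quantized(l1, qp1, l2, qp2, k=6):
    scaled = l1 * 2 ** ((qp1 - qp2) / float(k))    # b) rescale first layer
    combined_level = scaled + l2                   # d) combine quantized levels
    return combined_level * 2 ** (qp2 / float(k))  # one dequant for both layers
```

For example, combine_quantized(1, 6, 1, 0) gives 3.0, matching the sum of the two separately dequantized coefficients (2.0 + 1.0).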
  • a method for conditionally combining layers in a multi-layer bitstream comprising: a) receiving a first-layer quantized transform coefficient; b) receiving a second-layer quantized transform coefficient; c) receiving a layer combination indicator; d) scaling the first-layer quantized transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer quantized transform coefficient, when the layer combination indicator indicates transform domain accumulation; and e) combining the scaled, first-layer quantized transform coefficient with the second-layer quantized transform coefficient to form a combined quantized coefficient, when the layer combination indicator indicates transform domain accumulation.
  • the layer combination indicator may be derived from data in a second-layer bitstream.
  • the method may further comprise disabling smoothed reference prediction, when the layer combination indicator indicates transform domain accumulation.
  • a method for reconstructing an enhancement layer from a multi-layer bitstream comprising: a) receiving a first-layer intra-prediction mode; b) receiving a second-layer bitstream prediction indicator, the indicator indicating that the first-layer prediction mode is to be used for prediction of the second layer; c) using the first-layer prediction mode to construct a second-layer prediction based on adjacent block data in the second layer; and d) combining the second-layer prediction with residual information thereby creating a reconstructed second layer.
  • a method for combining layers in a multi-layer bitstream comprising: a) determining a first spatial resolution of a first layer of a multi-layer image; b) determining a second spatial resolution of a second layer of the multi-layer image; c) comparing the first spatial resolution with the second spatial resolution; d) performing steps e) through f) when the first spatial resolution is substantially equal to the second spatial resolution; e) scaling a first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; f) combining the scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; g) performing steps h) through k) when the first-layer spatial resolution is not substantially equal to the second-layer spatial resolution; h) inverse transforming the first-layer transform coefficient thereby producing a first-layer spatial domain value; i) inverse transforming the second-layer transform coefficient
  • the characteristic may comprise a quantization parameter.
  • the transform coefficients may be de-quantized transform coefficients.
  • the transform coefficients may be quantized transform coefficients and the transform coefficients may be inverse quantized prior to the inverse transforming.
  • the scaling may comprise: a) determining a first-layer quantization parameter; b) determining a second-layer quantization parameter; and c) scaling the first-layer transform coefficient based on the first-layer quantization parameter and the second-layer quantization parameter.
  • the method may further comprise inverse transforming the combined coefficient when the first spatial resolution is substantially equal to the second spatial resolution, thereby generating a spatial-domain residual value.
  • the method may further comprise combining the spatial-domain residual value with a spatial-domain prediction value when the first spatial resolution is substantially equal to the second spatial resolution.
  • the method may further comprise combining the combined spatial-domain residual value with a spatial-domain prediction value when the first spatial resolution is not substantially equal to the second spatial resolution, thereby creating a combined spatial-domain value.
  • the method may further comprise generating a combined bitstream comprising the combined coefficient when the first spatial resolution is substantially equal to the second spatial resolution.
  • the combined bitstream may further comprise an intra-prediction mode when the first spatial resolution is substantially equal to the second spatial resolution.
  • the combined bitstream may further comprise a motion vector when the first spatial resolution is substantially equal to the second spatial resolution.
  • the method may further comprise transforming the combined spatial-domain value when the first spatial resolution is not substantially equal to the second spatial resolution, thereby creating a combined transform-domain coefficient.
  • the method may further comprise generating a combined bitstream comprising the combined transform-domain coefficient when the first spatial resolution is not substantially equal to the second spatial resolution.
  • the combined bitstream may further comprise an intra-prediction mode when the first spatial resolution is not substantially equal to the second spatial resolution.
  • the combined bitstream may further comprise a motion vector when the first spatial resolution is not substantially equal to the second spatial resolution.
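The resolution-dependent dispatch described in this claim can be sketched as a small function. The identity inverse transform and nearest-neighbour upsampler below are runnable stand-ins that I am assuming for illustration; a real codec would use its inverse DCT and resampling filters.

```python
# Sketch of the dispatch: when the two layers share a spatial
# resolution, coefficients are merged in the transform domain;
# otherwise each layer is inverse transformed and merged as spatial
# residuals after resampling.

def inverse_transform(coeffs):
    return list(coeffs)  # identity stand-in for a real inverse transform

def upsample(samples, factor):
    return [s for s in samples for _ in range(factor)]  # nearest neighbour

def combine_layers(first, second, res1, res2, qp1, qp2, k=6):
    if res1 == res2:  # steps e)-f): transform-domain accumulation
        scale = 2 ** ((qp1 - qp2) / float(k))
        return [scale * a + b for a, b in zip(first, second)]
    # steps h) onward: spatial-domain combination across resolutions
    s1 = upsample(inverse_transform(first), res2 // res1)
    s2 = inverse_transform(second)
    return [a + b for a, b in zip(s1, s2)]
```

The transform-domain branch avoids the inverse transform and resampling entirely, which is the efficiency argument for restricting it to the equal-resolution case.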
  • a system for combining layers in a multi-layer bitstream comprising: a) a resolution determiner for determining a first spatial resolution of a first layer of a multi-layer image and for determining a second spatial resolution of a second layer of the multi-layer image; b) a comparator for comparing the first spatial resolution with the second spatial resolution; c) a controller for selectively performing steps d) through e) when the first spatial resolution is substantially equal to the second spatial resolution; d) a coefficient scaler for scaling a first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; e) a coefficient combiner for combining the scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; f) the controller selectively performing steps g) through i) when the first-layer spatial resolution is not substantially equal to the second-layer spatial resolution; g) an inverse transformer for
  • the inverse transformer may further inverse transform the combined coefficient when the first spatial resolution is substantially equal to the second spatial resolution, thereby generating a spatial-domain residual value.
  • the spatial-domain combiner may further combine the spatial-domain residual value with a spatial-domain prediction value when the first spatial resolution is substantially equal to the second spatial resolution.
  • the spatial-domain combiner may further combine the combined spatial-domain residual value with a spatial-domain prediction value when the first spatial resolution is not substantially equal to the second spatial resolution, thereby creating a combined spatial-domain value.
  • the system may further comprise a bitstream generator for generating a combined bitstream comprising the combined coefficient when the first spatial resolution is substantially equal to the second spatial resolution.
  • the combined bitstream may further comprise an intra-prediction mode when the first spatial resolution is substantially equal to the second spatial resolution.
  • the combined bitstream may further comprise a motion vector when the first spatial resolution is substantially equal to the second spatial resolution.
  • the system may further comprise a transformer for transforming the combined spatial-domain value when the first spatial resolution is not substantially equal to the second spatial resolution, thereby creating a combined transform-domain coefficient.
  • the system may further comprise a bitstream generator for generating a combined bitstream comprising the combined transform-domain coefficient when the first spatial resolution is not substantially equal to the second spatial resolution.
  • the combined bitstream may further comprise an intra-prediction mode when the first spatial resolution is not substantially equal to the second spatial resolution.
  • the combined bitstream may further comprise a motion vector when the first spatial resolution is not substantially equal to the second spatial resolution.
  • a method for combining layers in a multi-layer bitstream comprising: a) receiving de-quantized transform coefficients for a first layer of a first spatial resolution; b) receiving de-quantized transform coefficients for a second layer of the first spatial resolution; c) scaling the first-layer transform coefficients, thereby creating scaled first-layer transform coefficients; d) combining the scaled first-layer transform coefficients with the second-layer transform coefficients thereby creating combined transform coefficients; e) inverse transforming the combined transform coefficients thereby creating combined residual spatial-domain values; f) receiving de-quantized transform coefficients for a third layer of a second spatial resolution; g) resampling the combined residual spatial-domain values to the second spatial resolution, thereby creating resampled combined spatial-domain values; h) inverse transforming the third layer transform coefficients, thereby creating third-layer spatial-domain values; and i) combining the resampled combined spatial-domain values with the third-layer spatial-domain values.
  • a method for combining layers in a multi-layer bitstream comprising: a) receiving quantized transform coefficients for a first layer of a first spatial resolution; b) receiving quantized transform coefficients for a second layer of the first spatial resolution; c) scaling the quantized first-layer transform coefficients, thereby creating scaled quantized first-layer transform coefficients; d) combining the scaled quantized first-layer transform coefficients with the second-layer quantized transform coefficients thereby creating combined quantized transform coefficients; e) inverse quantizing the combined quantized transform coefficients thereby creating combined transform coefficients; f) inverse transforming the combined transform coefficients thereby creating combined residual spatial-domain values; g) receiving quantized transform coefficients for a third layer of a second spatial resolution; h) resampling the combined residual spatial-domain values to the second spatial resolution, thereby creating resampled combined spatial-domain values; i) inverse quantizing the third-layer quantized transform coefficients
  • a method for combining layers in a multi-layer bitstream comprising: a) receiving de-quantized transform coefficients for a first layer of a first spatial resolution; b) inverse transforming the de-quantized first-layer transform coefficients thereby producing first-layer spatial domain values; c) receiving de-quantized transform coefficients for a second layer of a second spatial resolution that is higher than the first spatial resolution; d) receiving de-quantized transform coefficients for a third layer of the second spatial resolution; e) upsampling the first-layer spatial domain values to the second spatial resolution thereby producing upsampled first-layer spatial domain values; f) combining the second-layer de-quantized transform coefficients with the third-layer de-quantized transform coefficients thereby creating combined transform coefficients; g) inverse transforming the combined transform coefficients thereby creating first combined residual spatial-domain values; and h) combining the upsampled first-layer spatial domain values with the first combined residual spatial-domain values.
  • a method for combining layers in a multi-layer bitstream comprising: a) receiving quantized transform coefficients for a first layer of a first spatial resolution; b) receiving quantized transform coefficients for a second layer of the first spatial resolution; c) receiving quantized transform coefficients for a third layer of the first spatial resolution; d) scaling the quantized first-layer transform coefficients to match properties of the second-layer, thereby creating scaled quantized first-layer transform coefficients; e) combining the scaled quantized first-layer transform coefficients with the second-layer quantized transform coefficients thereby creating combined quantized transform coefficients; f) inverse quantizing the combined quantized transform coefficients thereby creating combined transform coefficients; g) inverse quantizing the third-layer quantized transform coefficients thereby creating third-layer de-quantized transform coefficients; h) combining the combined transform coefficients with the third-layer de-quantized transform coefficients thereby creating three-layer combined transform coefficients
  • a method for combining layers in a multi-layer bitstream comprising: i) determining whether a second layer of a multi-layer image employs residual prediction; ii) performing the following steps only if the second layer employs residual prediction; iii) determining a first spatial resolution of a first layer of a multi-layer image; iv) determining a second spatial resolution of the second layer; v) comparing the first spatial resolution with the second spatial resolution; vi) performing steps vii) through viii) when the first spatial resolution is substantially equal to the second spatial resolution; vii) scaling a first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; viii) combining the scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; ix) performing steps x) through xiii) when the first-layer spatial resolution is not substantially equal to the second-layer spatial resolution
  • according to a fourteenth aspect of the invention there is provided a method for scaling transform coefficients in a multi-layer bitstream, the method comprising: determining a first-layer quantization parameter based on the multi-layer bitstream; determining a second-layer quantization parameter based on the multi-layer bitstream; and scaling a first-layer transform coefficient based on the first-layer quantization parameter and the second-layer quantization parameter.
  • the scaling may be performed according to the following relationship: TSecondLayer = TFirstLayer * 2^((Qp_FirstLayer - Qp_SecondLayer) / k)
  • where TSecondLayer and TFirstLayer denote the transform coefficients at the second layer and first layer, respectively;
  • k is an integer; and
  • Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
  • the k may be equal to 6.
  • the scaling may be performed according to the following relationship: Qp_Diff = Qp_FirstLayer - Qp_SecondLayer; TSecondLayer = ((TFirstLayer << (Qp_Diff // 6)) * ScaleMatrix[Qp_Diff % 6] + M / 2) / M
  • where // denotes integer division; % denotes the modulo operation; M and ScaleMatrix are constants; TSecondLayer and TFirstLayer denote the transform coefficients at the second layer and first layer, respectively; k is an integer; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
  • Qp_Diff may be reset to zero when Qp_Diff is found to be less than zero.
  • the ScaleMatrix may be equal to [512 573 642 719 806 902].
  • the M may be equal to 512.
  • the ScaleMatrix may be equal to [8 9 10 11 13 14] and the M may be equal to 8.
  • the scaling may be performed according to the following relationship: TSecondLayer = ((TFirstLayer << (Qp_Diff // 6)) * ScaleMatrix[Qp_Diff % 6 + 5] + M / 2) / M
  • where TSecondLayer and TFirstLayer denote the transform coefficients at the second layer and first layer, respectively; k is an integer; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
  • Qp_Diff may be reset to zero when Qp_Diff is found to be less than zero.
  • the ScaleMatrix may be equal to [291 325 364 408 457 512 573 642 719 806 902] and the M may be equal to 512.
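The integer relationships above can be checked against the ideal 2^(Qp_Diff/6) scaling: each ScaleMatrix entry divided by M approximates 2^(i/6) for one residue class of Qp_Diff, while the left shift supplies whole factors of two. Below is a sketch of the six-entry, M = 512 variant, assuming (as stated above) that Qp_Diff is reset to zero when negative; the function name is illustrative.

```python
# Integer approximation of T_Second = T_First * 2**(Qp_Diff / 6):
# ScaleMatrix[i] / M ~= 2**(i / 6), and the left shift covers whole
# doublings (Qp_Diff // 6). Six-entry, M = 512 variant.

SCALE_MATRIX = [512, 573, 642, 719, 806, 902]
M = 512

def scale_int(t_first, qp_first, qp_second):
    qp_diff = max(qp_first - qp_second, 0)  # reset to zero when negative
    shifted = t_first << (qp_diff // 6)     # whole-octave part of the scale
    return (shifted * SCALE_MATRIX[qp_diff % 6] + M // 2) // M  # rounded
```

For Qp_Diff = 3 this maps 100 to 140, close to the ideal 100 * 2^(1/2), about 141.4, so the table-driven form stays within one unit of the floating-point relationship.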
  • the method may further comprise combining the scaled first-layer transform coefficient with a second-layer transform coefficient thereby creating a combined coefficient.
  • the method may further comprise generating a combined bitstream comprising the combined coefficient.
  • the method may further comprise determining a first-layer transform-coefficient-dependent weighting factor, SF, and a second-layer transform-coefficient-dependent weighting factor, Ss, wherein the scaling is performed according to the following relationship: TSecondLayer = (SF / Ss) * TFirstLayer * 2^((Qp_FirstLayer - Qp_SecondLayer) / k)
  • where TSecondLayer and TFirstLayer denote a transform coefficient at the second layer and first layer, respectively;
  • k is an integer, and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
  • the method may further comprise: determining a first- layer transform-coefficient-dependent weighting factor, S F ; determining a second-layer transform-coefficient-dependent weighting factor, Ss; and wherein the scaling is performed according to the following relationship:
  • T s econd r ((T FirstLayer « QP _ Diff Il 6) * ScaleMatri x[QP _ Diff %6] + M 12) » M
  • // denotes integer division, % denotes the modulo operation; M and ScaleMatrix are constants; T_SecondLayer and T_FirstLayer denote the transform coefficients at the second layer and first layer, respectively; k is an integer; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
  • S_F and S_S may be explicitly present in the multi-layer bitstream.
  • S_F and S_S may be derived from the multi-layer bitstream.
  • the method may further comprise: determining a multiplicative transform-coefficient-dependent weighting factor, W1; determining an additive transform-coefficient-dependent weighting factor, W2; and wherein the scaling is performed according to the following relationship:
  • Qp_Diff = W1 * (Qp_FirstLayer - Qp_SecondLayer) + W2
  • T_SecondLayer = ((T_FirstLayer << (Qp_Diff // 6)) * ScaleMatrix[Qp_Diff % 6] + M/2) // M
  • T_SecondLayer and T_FirstLayer denote the transform coefficients at the second layer and first layer, respectively; k is an integer; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
  • W1 and W2 may be explicitly present in the multi-layer bitstream.
  • W1 and W2 may be derived from the multi-layer bitstream.
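The weighted variant can be sketched in the same way: W1 and W2 adjust the quantization-parameter difference before the same integer scaling is applied. The defaults, parameter names, and reset option are illustrative assumptions, not a normative implementation.

```python
def scale_coefficient_weighted(t_first_layer, qp_first_layer, qp_second_layer,
                               w1=1, w2=0,
                               scale_matrix=(512, 573, 642, 719, 806, 902),
                               m=512):
    """Qp_Diff = W1 * (Qp_FirstLayer - Qp_SecondLayer) + W2, followed by
    the ScaleMatrix/M relationship from the text (non-normative sketch)."""
    qp_diff = w1 * (qp_first_layer - qp_second_layer) + w2
    if qp_diff < 0:
        qp_diff = 0  # reset option mentioned in the text
    return ((t_first_layer << (qp_diff // 6))
            * scale_matrix[qp_diff % 6] + m // 2) // m
```

Per the bullets above, W1 and W2 may either be carried explicitly in the multi-layer bitstream or derived from it.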
  • the method may further comprise inverse quantizing the combined coefficient thereby generating a de-quantized combined transform coefficient.
  • the method may further comprise inverse transforming the de-quantized combined transform coefficient thereby generating a spatial-domain residual value .
  • the method may further comprise combining the spatial- domain residual value with a spatial-domain prediction value.
  • a system for scaling transform coefficients in a multi-layer bitstream comprising: a first parameter determiner for determining a first-layer quantization parameter based on the multi-layer bitstream; a second parameter determiner for determining a second-layer quantization parameter based on the multi-layer bitstream; and a scaler for scaling a first-layer transform coefficient based on the first-layer quantization parameter and the second-layer quantization parameter.
  • a method for controlling entropy coding processes comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to the target macroblock; c) determining a first macroblock indicator indicating whether the first adjacent macroblock is coded with reference to another layer; d) determining a second macroblock indicator indicating whether the second adjacent macroblock is coded with reference to another layer; and e) determining an entropy coding control value based on the first macroblock indicator and the second macroblock indicator.
  • the method may further comprise using the entropy coding control value to encode an intra-prediction mode.
  • the method may further comprise using the entropy coding control value to decode an intra-prediction mode.
  • the method may further comprise using the entropy coding control value to encode the target macroblock.
  • the method may further comprise using the entropy coding control value to decode the target macroblock.
  • the target macroblock may be a chroma macroblock.
  • a macroblock may be determined to be coded with reference to another layer when the macroblock is of type IntraBL.
  • the entropy coding control value may comprise a context.
  • the context may be based on cumulative macroblock information.
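The control-value derivation above can be sketched as a CABAC-style context increment. Treating the value as the sum of the two neighbour indicators is an assumption of this sketch, as are the dictionary representation of a macroblock and the `'IntraBL'` type check (the text states that type IntraBL marks coding with reference to another layer).

```python
def entropy_coding_control_value(first_mb, second_mb):
    """Derive an entropy-coding control value (context) from whether each
    adjacent macroblock is coded with reference to another layer."""
    def coded_from_other_layer(mb):
        # An unavailable neighbour (None) contributes 0, an assumption.
        return mb is not None and mb.get('type') == 'IntraBL'
    return (int(coded_from_other_layer(first_mb))
            + int(coded_from_other_layer(second_mb)))
```

The resulting 0/1/2 value can then select among contexts when encoding or decoding the target macroblock or its intra-prediction mode, mirroring how AVC's CABAC derives ctxIdxInc from neighbouring blocks.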
  • a method for controlling entropy coding processes comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to the target macroblock; c) determining whether the first adjacent macroblock is available; d) determining whether the first adjacent macroblock is coded in inter prediction mode; e) determining whether the first adjacent macroblock is encoded in the spatial domain; f) determining whether the first adjacent macroblock is intra predicted with a DC prediction mode; g) determining whether the first adjacent macroblock is coded with reference to another layer; h) setting a first adjacent block flag to one when any of steps c) through g) are true; i) determining whether the second adjacent macroblock is available; j) determining whether the second adjacent macroblock is coded in inter prediction mode;
  • the method may further comprise using the entropy coding control value to encode the target macroblock.
  • the method may further comprise using the entropy coding control value to decode the target macroblock.
  • the target macroblock may be a chroma macroblock.
  • a macroblock may be determined to be coded with reference to another layer when the macroblock is of type IntraBL.
  • a macroblock may be determined to be coded in the spatial domain when the macroblock is of type I_PCM.
  • the entropy coding control value may comprise a context.
  • the context may be based on cumulative macroblock information.
  • a method for prediction mode determination comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to the target macroblock; and c) setting a target block estimated prediction mode to a predetermined mode when any of conditions i) through vi) are true: i) the first adjacent macroblock is not available; ii) the first adjacent macroblock is coded in inter prediction mode; iii) the first adjacent macroblock is coded with reference to another layer; iv) the second adjacent macroblock is not available; v) the second adjacent macroblock is coded in inter prediction mode; vi) the second adjacent macroblock is coded with reference to another layer.
  • the predetermined mode may be a DC prediction mode.
  • the method may further comprise determining an actual prediction mode for the target block based on target block content.
  • the method may further comprise comparing the estimated prediction mode with the actual prediction mode.
  • the method may further comprise encoding a message, which instructs a decoder to use the estimated prediction mode to predict the target block when the actual prediction mode is the same as the estimated prediction mode.
  • the method may further comprise decoding a message, which instructs a decoder to use the estimated prediction mode to predict the target block when the actual prediction mode is the same as the estimated prediction mode.
  • the method may further comprise decoding a message, which instructs a decoder to use the actual prediction mode to predict the target block when the actual prediction mode is not the same as the estimated prediction mode.
  • the method may further comprise encoding a message, which instructs a decoder to use the actual prediction mode to predict the target block when the actual prediction mode is not the same as the estimated prediction mode.
  • the target block estimated prediction mode may be a luma prediction mode.
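The estimated-mode determination can be sketched as below. Conditions i) and iv) are read here as neighbour *unavailability*, matching AVC's most-probable-mode convention; that reading, the DC mode number 2, the dictionary representation, and the `normal_estimate` stand-in for the usual most-probable-mode rule are all assumptions of this sketch.

```python
DC_PRED = 2  # DC prediction is mode 2 in AVC intra coding

def estimated_prediction_mode(mb_a, mb_b, normal_estimate=DC_PRED):
    """Estimate the target block's intra prediction mode from neighbours
    A and B, falling back to DC when either neighbour is unusable:
    unavailable, inter coded, or coded with reference to another
    layer (type IntraBL, per the text)."""
    def unusable(mb):
        return (mb is None
                or mb.get('inter', False)
                or mb.get('type') == 'IntraBL')
    if unusable(mb_a) or unusable(mb_b):
        return DC_PRED
    return normal_estimate
```

An encoder then compares this estimate with the actual mode chosen from the block content and signals either "use the estimate" or the actual mode, as the surrounding bullets describe.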
  • a system for controlling entropy coding processes comprising: a) a first identifier for identifying a first adjacent macroblock that is adjacent to a target macroblock; b) a second identifier for identifying a second adjacent macroblock that is adjacent to the target macroblock; c) a first indicator determiner for determining a first macroblock indicator indicating whether the first adjacent macroblock is coded with reference to another layer; d) a second indicator determiner for determining a second macroblock indicator indicating whether the second adjacent macroblock is coded with reference to another layer; and e) a value determiner for determining an entropy coding control value based on the first macroblock indicator and the second macroblock indicator.
  • a method for combining layers in a multi-layer bitstream comprising: a) receiving a bitstream comprising encoded image coefficients and coded block pattern (Cbp) information wherein the Cbp information identifies regions in the bitstream that comprise transform coefficients; b) decoding the Cbp information; c) parsing the bitstream by using the Cbp information to identify bitstream regions comprising transform coefficients; d) scaling first-layer transform coefficients in the bitstream to match a characteristic of a second-layer in the bitstream; e) adding the scaled, first-layer transform coefficients to second-layer transform coefficients to form combined coefficients in a combined layer; and f) calculating combined Cbp information for the combined layer wherein the combined Cbp information identifies regions in the combined layer that comprise transform coefficients.
  • Cbp: coded block pattern
  • the calculating may be performed only when coefficients in the second layer are predicted from the first layer.
  • the method may further comprise inverse transforming the combined coefficients thereby creating spatial domain values.
  • the method may further comprise selectively filtering regions of the spatial domain values based on the combined Cbp information.
  • the first layer and the second layer may have different spatial resolutions.
  • the first layer and the second layer may have different bit-depths.
  • the first layer may be a base layer.
  • the second layer may be an enhancement layer.
  • the calculating combined Cbp information may comprise testing the combined coefficients.
  • the calculating combined Cbp information may comprise computing the binary-OR of the first layer and the second layer.
  • the calculating combined Cbp information may comprise scanning coefficient lists to identify regions with residual information.
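Two of the Cbp-calculation options above can be sketched as follows. The binary-OR variant is conservative: a bit stays set whenever either layer carried coefficients, even if the combined coefficients happen to cancel to zero; testing the combined coefficients directly is exact. The one-bit-per-region bitmask layout is an assumption of this sketch.

```python
def combined_cbp_by_or(cbp_first_layer, cbp_second_layer):
    """Binary-OR of the two layers' coded block patterns."""
    return cbp_first_layer | cbp_second_layer

def combined_cbp_by_testing(combined_blocks):
    """Recompute the Cbp by testing the combined coefficients: bit i is
    set when region i contains any non-zero transform coefficient."""
    cbp = 0
    for i, coefficients in enumerate(combined_blocks):
        if any(c != 0 for c in coefficients):
            cbp |= 1 << i
    return cbp
```

The combined Cbp can then drive the selective filtering of spatial-domain regions mentioned above, so only regions that actually carry residual are processed.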
  • a system for combining layers in a multi-layer bitstream comprising: g) a receiver for receiving a bitstream comprising encoded image coefficients and coded block pattern (Cbp) information wherein the Cbp information identifies regions in the bitstream that comprise transform coefficients; h) a decoder for decoding the Cbp information; i) a parser for parsing the bitstream by using the Cbp information to identify bitstream regions comprising transform coefficients; j) a scaler for scaling first-layer transform coefficients in the bitstream to match a characteristic of a second-layer in the bitstream; k) an adder for adding the scaled, first-layer transform coefficients to second-layer transform coefficients to form combined coefficients in a combined layer; and l) a calculator for calculating combined Cbp information for the combined layer wherein the combined Cbp information identifies regions in the combined layer that comprise transform coefficients.
  • Cbp: coded block pattern
  • the calculating may be performed only when coefficients in the second layer are predicted from the first layer.
  • the system may further comprise an inverse transformer for inverse transforming the combined coefficients thereby creating spatial domain values.
  • the system may further comprise a filter for selectively filtering regions of the spatial domain values based on the combined Cbp information.
  • the first layer and the second layer may have different spatial resolutions.
  • the first layer and the second layer may have different bit-depths.
  • the first layer may be a base layer.
  • the second layer may be an enhancement layer.
  • the calculating combined Cbp information may comprise testing the combined coefficients.
  • the calculating combined Cbp information may comprise computing the binary-OR of the first layer and the second layer.
  • the calculating combined Cbp information may comprise scanning coefficient lists to identify regions with residual information.
  • a method for selecting a reconstruction transform size when a transform size is not indicated in an enhancement layer comprising: a) determining a lower-layer transform size; b) determining if the lower-layer transform size is substantially similar to a predefined transform size; c) selecting an inverse transform of the predefined transform size as a reconstruction transform when the lower-layer transform size is substantially similar to the predefined transform size; and d) selecting an inverse transform of a default transform size as a reconstruction transform when the lower-layer transform size is not substantially similar to the predefined transform size.
  • the default transform size may be used for parsing a bitstream regardless of the selection for a reconstruction transform.
  • the method may further comprise inverse transforming enhancement layer coefficients with the reconstruction transform.
  • the predefined transform size may be 8x8.
  • the predefined transform size may be 16x16.
  • the method may further comprise determining a prediction mode for the enhancement layer and performing steps a) through d) only when the enhancement layer prediction mode indicates that the enhancement layer is predicted from the lower layer.
  • the method may further comprise extracting a plurality of enhancement-layer coefficients formatted for the default transform size.
  • the method may further comprise reformatting for the predefined transform size the plurality of extracted enhancement-layer coefficients.
  • the method may further comprise extracting a plurality of quantized enhancement-layer coefficients formatted for the default transform size.
  • the method may further comprise reformatting for the predefined transform size the plurality of extracted quantized enhancement-layer coefficients thereby creating reformatted quantized enhancement-layer coefficients.
  • the method may further comprise: i) inverse quantizing a plurality of lower-layer quantized transform coefficients thereby creating lower-layer transform coefficients; ii) scaling the lower-layer transform coefficients to match a characteristic of the enhancement-layer thereby creating scaled, lower-layer transform coefficients; iii) inverse quantizing the reformatted quantized enhancement-layer coefficients thereby creating enhancement-layer transform coefficients; and iv) combining the scaled, lower-layer transform coefficients with the enhancement-layer transform coefficients to form combined coefficients .
  • the method may further comprise generating a combined bitstream comprising the combined coefficients.
  • the combined bitstream may further comprise an intra-prediction mode.
  • the combined bitstream may further comprise a motion vector.
  • the method may further comprise inverse transforming the combined coefficients using the reconstruction transform thereby generating a spatial-domain residual value .
  • the method may further comprise combining the spatial-domain residual value with the spatial-domain prediction value.
  • the method may further comprise: i) scaling a plurality of lower-layer quantized transform coefficients to match a characteristic of the enhancement-layer thereby creating scaled, lower-layer quantized transform coefficients; and ii) combining the scaled, lower-layer quantized transform coefficients with the reformatted quantized enhancement- layer coefficients to form combined quantized coefficients.
  • the method may further comprise inverse quantizing the combined quantized coefficients, thereby creating combined coefficients.
  • the method may further comprise generating a combined bitstream comprising the combined coefficients.
  • the combined bitstream may further comprise an intra-prediction mode.
  • the combined bitstream may further comprise a motion vector.
  • the method may further comprise inverse transforming the combined coefficients using the reconstruction transform thereby generating a spatial-domain residual value .
  • the method may further comprise combining the spatial- domain residual value with the spatial-domain prediction value.
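The transform-size selection described above (steps a) through d)) can be sketched as follows. Representing sizes as tuples and using a 4x4 default, AVC's baseline transform, are assumptions of this sketch; note the text's point that parsing always uses the default size regardless of what is selected for reconstruction.

```python
def select_reconstruction_transform_size(lower_layer_size,
                                         predicted_from_lower_layer,
                                         predefined_size=(8, 8),
                                         default_size=(4, 4)):
    """Choose the inverse-transform size for an enhancement layer that
    does not signal one. Per the text, the selection only applies when
    the enhancement layer is predicted from the lower layer."""
    if predicted_from_lower_layer and lower_layer_size == predefined_size:
        return predefined_size
    return default_size
```

Coefficients extracted in the default 4x4 format would then be reformatted for the predefined size before the inverse transform is applied, as the surrounding bullets describe.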
  • a system for selecting a reconstruction transform size when a transform size is not indicated in an enhancement layer comprising: a) a size determiner for determining a lower-layer transform size; b) a determiner for determining if the lower-layer transform size is substantially similar to a predefined transform size; c) a first selector for selecting an inverse transform of the predefined transform size as a reconstruction transform when the lower- layer transform size is substantially similar to the predefined transform size; and d) a second selector for selecting an inverse transform of a default transform size as a reconstruction transform when the lower-layer transform size is not substantially similar to the predefined transform size.
  • Some embodiments of the present invention comprise methods and systems for processing and process management in a multi-layer bitstream.
  • Fig. 1A is a diagram showing embodiments of the present invention comprising scaling of transform domain coefficients
  • Fig. 1B is a diagram showing embodiments of the present invention comprising accumulation of quantized transform coefficients and scaling of quantized transform domain coefficients
  • Fig. 2A is a diagram showing embodiments of the present invention comprising scaling of transform domain coefficients and bitstream rewriting without reconstruction
  • Fig. 2B is a diagram showing embodiments of the present invention comprising accumulation of quantized transform coefficients or indices and bitstream rewriting without reconstruction;
  • Fig. 3 is a diagram showing embodiments of the present invention comprising transform size selection
  • Fig. 4 is a diagram showing embodiments of the present invention comprising conditional transform size indication and selection
  • Fig. 5 is a diagram showing embodiments of the present invention comprising coefficient scaling based on quantization parameters
  • Fig. 6 is a diagram showing embodiments of the present invention comprising calculation of an entropy encoder control value based on adjacent macroblock data
  • Fig. 7 is a diagram showing embodiments of the present invention comprising determination of an entropy encoder control value based on a combination of adjacent macroblock conditions
  • Fig. 8 is a diagram showing embodiments of the present invention comprising a determination of an estimated prediction mode and prediction mode signaling based on adjacent macroblock data;
  • Fig. 9 is a diagram showing embodiments of the present invention comprising calculation of a combined layer coded block pattern
  • Fig. 10 is a diagram showing embodiments of the present invention comprising selective transform accumulation based on layer spatial resolutions
  • Fig. 11 is a block diagram showing embodiments of the present invention comprising transform size selection
  • Fig. 12 is a block diagram showing embodiments of the present invention comprising coefficient scaling based on quantization parameters
  • Fig. 13 is a diagram showing embodiments of the present invention comprising calculation of an entropy encoder control value based on adjacent macroblock data
  • Fig. 14 is a diagram showing embodiments of the present invention comprising calculation of a combined layer coded block pattern
  • Fig. 15 is a block diagram showing embodiments of the present invention comprising selective transform accumulation based on layer spatial resolutions .
  • Some embodiments of the present invention comprise methods and systems for residual accumulation for scalable video coding. Some embodiments comprise methods and systems for decoding a scalable bit-stream.
  • the bit-stream may be generated by an encoder and subsequently stored and/or transmitted to a decoder.
  • the decoder may parse the bit-stream and convert the parsed symbols into a sequence of decoded images.
  • a scalable bit-stream may contain different representations of an original image sequence .
  • a first layer in the bit-stream contains a low quality version of the image sequence
  • a second layer in the bit- stream contains a higher quality version of the image sequence .
  • a first layer in the bit-stream contains a low resolution version of the image sequence
  • a second layer in the bit-stream contains a higher resolution version of the image sequence .
  • More sophisticated examples will be readily apparent to those skilled in the art, and these more sophisticated examples may include a plurality of representations of an image sequence and/or a bit-stream that contains a combination of different qualities and resolutions.
  • a scalable bit-stream may comprise a form of inter-layer prediction.
  • exemplary embodiments may comprise inter-layer prediction within the scalable video extensions for the AVC | H.264 video coding standards. These extensions are commonly known as SVC, and the SVC system, described in T.
  • inter-layer prediction is realized by projecting motion and mode information from an enumerated lower layer to an enumerated higher layer.
  • prediction residual is projected from an enumerated lower layer to an enumerated higher layer.
  • the higher layer bit-stream may then contain additional residual to improve the quality of the decoded output.
  • Embodiments of the present invention comprise changes to the syntax and semantics of the coarse grain scalable layer to enable the fast rewriting of an SVC bit-stream into an AVC compliant bit-stream.
  • a network device can rewrite the SVC data into an AVC bit-stream without drift and without needing to reconstruct the sequence. In some embodiments, this may be accomplished by merging multiple coarse grain scalable layers.
  • Some embodiments of the present invention comprise SVC to AVC bit-stream rewriting.
  • This process may comprise taking an SVC bit-stream as input and producing an AVC bit-stream as output.
  • this is similar to transcoding.
  • some embodiments exploit the single loop structure of SVC and enable the direct mapping of an SVC bit-stream onto AVC syntax elements .
  • Some embodiments may perform this function without introducing drift and without reconstructing the video sequence.
  • rewriting to an AVC bit-stream obviates the need to carry the additional overhead introduced by SVC end-to-end. Thus, the SVC overhead can be discarded when the scalable functionality is no longer needed.
  • These embodiments can greatly expand the application space for SVC .
  • as a first non-limiting example of bit-stream rewriting, consider a case where the final transmission link is rate constrained. This could be a wireless link to a portable device or, alternatively, a wireless link to a high-resolution display. In either case, the scalability features of SVC may be employed to intelligently adapt the rate at the transmitter.
  • the receiving device since the receiving device has no need for the SVC functionality, it is advantageous to remove the SVC component from the bit- stream. This improves the visual quality of the transmitted video, as fewer bits are devoted to overhead and more bits are available for the visual data.
  • As a second non-limiting example of bit-stream rewriting, consider a system that supports a large number of heterogeneous devices. Devices connected via slow transmission links receive the AVC base layer that is part of the SVC bit-stream; devices connected via faster transmission links receive the AVC base layer plus additional SVC enhancement. To view this enhancement data, these receivers must be able to decode and reconstruct the SVC sequence. For applications with a large number of these devices, this introduces a large expense for deploying SVC: set-top boxes (or other decoding hardware) must be deployed at each receiver. As a more cost-effective solution, the process of bit-stream rewriting from SVC to AVC within the network could be employed to deliver AVC data to all devices. This reduces the deployment cost of SVC.
  • As a third non-limiting example of bit-stream rewriting, consider an application that utilizes SVC for storing content on a media server for eventual delivery to a client device.
  • the SVC format is very appealing as it requires less storage space compared to archiving multiple AVC bit-streams at the server.
  • however, it also requires either a transcoding operation in the server to support AVC clients or SVC capabilities at the client. Enabling SVC-to-AVC bit-stream rewriting allows the media server to utilize SVC for coding efficiency without requiring computationally demanding transcoding and/or SVC capabilities throughout the network.
  • As a fourth non-limiting example, the process of SVC-to-AVC bit-stream rewriting simplifies the design of SVC decoder hardware.
  • an SVC decoder requires modifications throughout the AVC decoding and reconstruction logic.
  • the differences between AVC and SVC are localized to the entropy decoder and coefficient scaling operations. This simplifies the design of the SVC decoding process, as the final reconstruction loop is identical to the AVC reconstruction process.
  • the SVC reconstruction step is guaranteed to contain only one prediction operation and one inverse transform operation per block. This is different than current SVC operations, which require multiple inverse transform operations and variable reference data for intra prediction.
  • Some embodiments of the present invention comprise changes to the SVC coarse grain scalability layer to enable the direct mapping of an SVC bit-stream to an AVC bit-stream.
  • These changes comprise a modified IntraBL mode and restrictions on the transform for BLSkip blocks in inter-coded enhancement layers.
  • these changes may be implemented by a flag sent on a sequence basis and, optionally, on a slice basis.
  • Some embodiments comprise changes for inter-coded blocks. These changes comprise: Blocks that are inferred from base layer blocks must utilize the same transform as the base layer block. For example, if a block in the coarse grain scalable layer has base_mode_flag equal to one and the co-located base layer block utilizes the 4x4 transform, then the enhancement layer block must also utilize a 4x4 transform.
  • the reconstruction of a block that is inferred from base layer blocks and utilizes residual prediction shall occur in the transform domain.
  • the base layer block would be reconstructed in the spatial domain and then the residual transmitted in the enhancement layer.
  • the transform coefficients of the base layer block are scaled at the decoder, refined by information in the enhancement layer and then inverse transformed.
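The transform-domain reconstruction just described can be sketched as below. The `scale` and `inverse_transform` callables are hypothetical stand-ins for the coefficient scaling (e.g. for the quantization-parameter gap between layers) and the block inverse transform; the text does not prescribe their implementations.

```python
def reconstruct_in_transform_domain(base_coeffs, enh_coeffs,
                                    scale, inverse_transform):
    """Scale the base-layer transform coefficients, refine them with the
    enhancement-layer coefficients, then apply a single inverse
    transform, as required for blocks inferred from the base layer."""
    refined = [scale(b) + e for b, e in zip(base_coeffs, enh_coeffs)]
    return inverse_transform(refined)
```

This ordering is the point of the restriction: because accumulation happens before the (single) inverse transform, both layers must use the same transform size, and the rewriter can emit the refined coefficients directly into an AVC bit-stream without reconstructing pixels.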
  • the smoothed_reference_flag shall be zero when the avc_rewrite flag is one.
  • Intra-coded blocks provide additional barriers to the SVC-to-AVC rewriting problem.
  • a block in the enhancement layer may be coded with the IntraBL mode. This mode signals that the intra-coded block in the base layer should be decoded and used for prediction. Then, additional residual may be signaled in the enhancement layer. Within the SVC-to-AVC rewriting system, this creates difficulties since the reconstructed intra-coded block cannot be described as a spatial prediction of its neighbors plus a signaled residual. Thus, the intra-coded block must be transcoded from SVC to AVC. This requires added computational complexity; it also introduces coding errors that may propagate via motion compensation.
  • a decoder or rewriter (system) comprises a first inverse quantizer 5, a scaler 6, a second inverse quantizer 11, a first adder (coefficient combiner) 7, an inverse transformer 10, an intra-prediction unit 8 and a second adder 9.
  • a base layer residual (base layer quantized transform coefficients) 1, prediction mode data 2 and enhancement layer residual (enhancement layer quantized transform coefficients) 3 are received at the decoder or rewriter. Neighboring block data 4 is also known at the decoder/rewriter.
  • the base layer residual data 1 may be inverse quantized by the first inverse quantizer 5 thereby creating base-layer transform coefficients, and the transform coefficients may be scaled by the scaler 6 to match a characteristic of the enhancement layer thereby creating scaled base-layer transform coefficients.
  • the matched characteristic may comprise a quantization parameter characteristic.
  • the enhancement layer residual 3 may also be inverse quantized by the second inverse quantizer 11 and added by the first adder 7 to the scaled base layer residual coefficients (scaled base-layer transform coefficients) thereby forming combined coefficients.
  • the combined coefficients are then inverse transformed by the inverse transformer 10 to produce spatial domain intensity values.
  • the enhancement layer information may be ignored when it is not needed.
  • Prediction mode data 2 and neighboring block data 4 are used to determine a prediction block by intra-prediction 8.
  • the prediction block is then added by the second adder 9 to the spatial domain intensity values from the base and enhancement layers to produce a decoded block 12.
  • a base layer residual 1, prediction mode 2 and enhancement layer residual 3 are received at a decoder or rewriter. Neighboring block data 135 is also known at the decoder/rewriter and may be used for prediction 134.
  • the base layer quantized transform coefficients 1 may be scaled 130 to match a characteristic of the enhancement layer thereby creating scaled base-layer transform coefficients.
  • the matched characteristic may comprise a quantization parameter characteristic.
  • the enhancement-layer quantized transform coefficients 3 may be added 131 to the scaled base-layer quantized transform coefficients to create combined quantized coefficients.
  • the combined quantized coefficients may then be inverse quantized 132 to produce de-quantized combined coefficients, which may then be inverse transformed 133 to produce combined spatial domain values. These spatial domain values may then be combined 136 with prediction data to form a reconstructed image 137.
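The Fig. 1B path can be sketched end-to-end. The numbered operations (scaling 130, addition 131, inverse quantization 132, inverse transform 133, prediction combination 136) are passed in as caller-supplied callables, since the text does not fix their implementations; all names here are illustrative.

```python
def decode_block_quantized_domain(base_indices, enh_indices,
                                  scale, inverse_quantize,
                                  inverse_transform, prediction):
    """Quantized-domain accumulation per Fig. 1B: scale base-layer
    indices (130), add enhancement indices (131), inverse quantize
    (132), inverse transform (133), then combine with the spatial
    prediction (136) to form the reconstructed block (137)."""
    combined = [scale(b) + e for b, e in zip(base_indices, enh_indices)]
    dequantized = [inverse_quantize(c) for c in combined]
    residual = inverse_transform(dequantized)
    return [r + p for r, p in zip(residual, prediction)]
```

Accumulating before inverse quantization distinguishes this path from Fig. 1A, where each layer is inverse quantized separately and the accumulation happens on de-quantized transform coefficients.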
  • bitstream is re-encoded without complete reconstruction of the image.
  • base layer (BL) residual data 1 may be received at a decoder, transcoder, decoder portion of an encoder or another device or module.
  • Enhancement layer (EL) residual data 3 may also be received at the device or module.
  • the BL residual 1 may be inverse quantized by a first inverse quantizer 5 to produce BL transform coefficients. These BL transform coefficients may then be scaled by a scaler 6 to match a characteristic of the enhancement layer thereby creating scaled BL transform coefficients.
  • this enhancement layer characteristic may be a quantization parameter, a resolution parameter or some other parameter that relates the base layer to the enhancement layer.
  • the enhancement layer data 3 may also be inverse quantized by a second inverse quantizer 11 to produce enhancement layer coefficients 18.
  • the enhancement layer coefficients 18 may then be combined with the scaled BL coefficients 16 by a coefficient combiner 19 to produce combined coefficients 17. These combined coefficients may then be rewritten to a reduced-layer or single-layer bitstream with a bitstream encoder (bitstream generator) 13.
  • the bitstream encoder 13 may also write prediction data 2 into the bitstream.
  • the functions of bitstream encoder 13 may also comprise quantization, entropy coding and other functions.
  • bitstream is re-encoded without complete reconstruction of the image and without inverse quantization.
  • base layer (BL) residual data 36 may be received at a decoder, transcoder, decoder portion of an encoder or another device or module.
  • Enhancement layer (EL) data 37 may also be received at the device or module .
  • the BL signal 36 and enhancement layer signal 37 may be entropy decoded to produce quantized coefficients or indices 21 and 23.
  • the BL quantization indices 21 may then be scaled 20 to match a characteristic of the enhancement layer thereby creating scaled BL indices.
  • this enhancement layer characteristic may be a quantization parameter, a resolution parameter or some other parameter that relates the base layer to the enhancement layer.
  • the scaled BL indices 26 may then be combined 24 with the EL indices 23 to produce combined indices 27. These combined indices may then be rewritten to a reduced-layer or single-layer bitstream 28 with a bitstream encoder 25.
  • the bitstream encoder 25 may also write prediction data 35 into the bitstream.
  • the functions of bitstream encoder 25 may also comprise quantization, entropy coding and other functions.
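The index-domain variant above, which avoids inverse quantization entirely, can be sketched as follows. The rounding policy is an assumption for illustration; a normative rewriter would follow the standard's scaling rules.

```python
def scale_indices(bl_indices, qp_bl, qp_el):
    # Map BL quantization indices (21) toward the EL quantizer without
    # de-quantizing: each 6 Qp steps doubles the quantizer step size.
    factor = 2.0 ** ((qp_bl - qp_el) / 6.0)
    return [int(round(i * factor)) for i in bl_indices]

def combine_indices(bl_indices, el_indices, qp_bl, qp_el):
    scaled = scale_indices(bl_indices, qp_bl, qp_el)    # scaled BL indices (26)
    return [s + e for s, e in zip(scaled, el_indices)]  # combined indices (27)

print(combine_indices([2, 0, 1], [1, 3, -1], qp_bl=36, qp_el=30))
# → [5, 3, 1]
```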
  • the base layer block does not need to be completely reconstructed. Instead, the intra-prediction mode and residual data are both mapped to the enhancement layer. Then, additional residual data is added from the enhancement layer. Finally, the block is reconstructed.
  • the advantage of this approach is that the enhancement block may be written into a single layer bit- stream without loss and without requiring the base layer to be completely decoded.
  • Some embodiments of the present invention comprise propagation of motion data between layers in a CGS system without the use of a residual prediction flag.
  • These embodiments comprise a modified IntraBL method that propagates the intra prediction mode from the base layer to the enhancement layer. Intra prediction is then performed at the enhancement layer.
  • the transform type for IntraBL blocks must be the same as the co-located base layer block. For example, if the base layer block employs the 8x8 transform, then the enhancement layer block must also utilize the 8x8 transform.
  • an 8x8 transform flag may still be transmitted in an enhancement layer.
  • blocks coded by the 16x16 transform in the base layer are also coded by the 16x16 transform in the enhancement layer.
  • the enhancement layer blocks are transmitted with the 4x4 scan pattern and method. That is, in some embodiments, the DC and AC coefficients of the 16x16 blocks are not sent separately.
  • Some embodiments of the present invention may be described with reference to Figure 3 and Figure 11.
  • a system according to these embodiments comprises a size determiner 201, a determiner 202, a first selector 203, and a second selector 204.
  • intra-prediction modes and transform data may be inferred from one layer to another.
  • a first-layer transform size may be determined by the size determiner 201 (30).
  • the first layer may be a base layer or a layer from which another layer is predicted.
  • a predetermined transform size is established.
  • the first-layer transform size is then compared to the predetermined (predefined) transform size. That is, the determiner 202 determines if said lower-layer transform size is the same as (substantially similar to) a predetermined transform size. If the first-layer transform size is the same as the predetermined transform size, the predetermined transform size is selected by the first selector 203 (33) for inverse transformation operations. If the first-layer transform size is not the same (31) as the predetermined transform size, a default transform size is selected by the second selector 204.
  • the predetermined transform size may be 8x8 and the default transform size may be 4x4.
  • the predetermined transform size may also be related to a special scan pattern and method. In these embodiments the relationship between the first-layer transform size and the predetermined transform size may also trigger special encoding methods and patterns.
  • the predetermined transform size may be 16x16 and a match between the predetermined 16x16 size and the actual lower-layer size may indicate that the 16x16 transform is to be used, but that the data is to be encoded with a 4x4 scan pattern and method wherein AC and DC coefficients are transmitted together.
  • Some embodiments of the present invention may be described with reference to Figure 4. In these embodiments, a multi-layer bitstream is parsed 40 and processed to determine a base-layer transform size and to produce BL coefficient values.
  • the enhancement layer of the bitstream is also parsed 41 to determine whether a transform indicator is present. If the enhancement layer transform indicator is present in the bitstream 42, the indicated transform size may be used for inverse transformation of the EL coefficients. If the enhancement layer transform indicator is not present in the bitstream 42, it is determined whether the base layer transform size is 8x8 44. If the base layer transform size is 8x8, the 8x8 transform size is used to inverse transform the enhancement layer 46. If the base layer transform size is not 8x8, a default transform size, such as 4x4, may be used to inverse transform the enhancement layer 45.
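The decision flow of Figure 4 can be sketched as a small function. The function name and signature are illustrative only; sizes are given as integers (4, 8, 16) for brevity.

```python
def infer_el_transform_size(el_indicator, bl_transform_size, default=4):
    # Decision flow of Figure 4 (sketch): a signaled EL indicator wins;
    # otherwise inherit 8x8 from the base layer; else use the default.
    if el_indicator is not None:
        return el_indicator      # transform size signaled in the EL (42)
    if bl_transform_size == 8:
        return 8                 # inherit the 8x8 transform (46)
    return default               # fall back to 4x4 (45)

print(infer_el_transform_size(None, 8))   # → 8
print(infer_el_transform_size(None, 4))   # → 4
print(infer_el_transform_size(16, 8))     # → 16
```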
  • the intra-predicted mode can be directly copied from the base layer by inferring the intra-prediction mode from the base layer in an IntraBL block. In some alternative embodiments, it can be differentially coded relative to the base layer mode. In some embodiments, the current method for signaling intra prediction modes in AVC may be used. However, in these embodiments, the predicted mode (or most probable mode) is set equal to the base layer mode. In some embodiments, the 8x8 transform flag may be omitted from the enhancement layer bit-stream, and the transform may be inferred from the base layer mode.
  • the 16x16 transform coefficients may be signaled in the same manner in both the base and enhancement layers.
  • the presence of the 16x16 transform can be signaled with an additional flag in the enhancement layer or inferred from the base layer bit-stream.
  • Some embodiments of the present invention comprise a residual prediction flag for IntraBL blocks. These embodiments enable the adaptive use of the base layer residual for refining the enhancement-layer intra-predicted block.
  • all modes in the SVC bit-stream that cannot be directly mapped to an AVC bit-stream may be disabled by the encoder. Signaling for these embodiments may be done in the SVC bit-streams. In some exemplary embodiments, this signaling may occur in the sequence header, sequence parameter set, picture parameter set, slice header or elsewhere. In some embodiments, this signaling may occur in an SEI message. In an exemplary embodiment, this signaling may occur in a spatial scalability
  • this signaling may occur by other out-of-band methods and, in some cases, will not require normative changes to the SVC decoding operation.
  • a decoder may assume that the encoder is generating a bit-stream that can be translated to AVC. In some exemplary embodiments, the encoder may not utilize the IntraBL block mode or the smoothed reference tools when operating in this mode. Also, in these embodiments, the encoder may ensure that the residual data can be incorporated by scaling the base layer transform coefficients and then adding the transmitted residual. These embodiments may require the encoder to utilize the same transform method in the base and enhancement layers.
  • nal_unit_extension_flag equal to 0 specifies that the parameters that specify the mapping of simple_priority_id to (dependency_id, temporal_level, quality_level) follow next in the sequence parameter set.
  • nal_unit_extension_flag equal to 1 specifies that the parameters that specify the mapping of simple_priority_id to (dependency_id, temporal_level, quality_level) are not present.
  • when nal_unit_extension_flag is not present, it shall be inferred to be equal to 1.
  • the NAL unit syntax element extension_flag of all NAL units with nal_unit_type equal to 20 and 21 that reference the current sequence parameter set shall be equal to nal_unit_extension_flag.
  • the syntax element extension_flag of all NAL units with nal_unit_type equal to 20 and 21 that reference the current sequence parameter set shall be equal to 1.
  • number_of_simple_priority_id_values_minus1 plus 1 specifies the number of values for simple_priority_id, for which a mapping to (dependency_id, temporal_level, quality_level) is specified by the parameters that follow next in the sequence parameter set.
  • the value of number_of_simple_priority_id_values_minus1 shall be in the range of 0 to 63, inclusive.
  • priority_id, dependency_id_list[ priority_id ], temporal_level_list[ priority_id ], quality_level_list[ priority_id ] specify the inferring process for the syntax elements dependency_id, temporal_level, and quality_level as specified in subclause F.7.4.1.
  • when dependency_list[ priority_id ], temporal_level_list[ priority_id ], and quality_level_list[ priority_id ] are not present, dependency_list[ priority_id ], temporal_level_list[ priority_id ], and quality_level_list[ priority_id ] shall be inferred to be equal to 0.
  • extended_spatial_scalability specifies the presence of syntax elements related to geometrical parameters for the base layer upsampling. When extended_spatial_scalability is equal to 0, no geometrical parameter is present in the bitstream. When extended_spatial_scalability is equal to 1, geometrical parameters are present in the sequence parameter set. When extended_spatial_scalability is equal to 2, geometrical parameters are present in slice_data_in_scalable_extension. The value of 3 is reserved for extended_spatial_scalability. When extended_spatial_scalability is not present, it shall be inferred to be equal to 0.
  • scaled_base_left_offset specifies the horizontal offset between the upper-left pixel of an upsampled base layer picture and the upper-left pixel of a picture of the current layer in units of two luma samples.
  • When scaled_base_left_offset is not present, it shall be inferred to be equal to 0.
  • ScaledBaseLeftOffset = 2 * scaled_base_left_offset (F-40)
  • ScaledBaseLeftOffsetC is defined as follows:
  • scaled_base_top_offset specifies the vertical offset between the upper-left pixel of an upsampled base layer picture and the upper-left pixel of a picture of the current layer in units of two luma samples.
  • When scaled_base_top_offset is not present, it shall be inferred to be equal to 0.
  • ScaledBaseTopOffset = 2 * scaled_base_top_offset (F-42)
  • scaled_base_right_offset specifies the horizontal offset between the bottom-right pixel of an upsampled base layer picture and the bottom-right pixel of a picture of the current layer in units of two luma samples.
  • When scaled_base_right_offset is not present, it shall be inferred to be equal to 0.
  • ScaledBaseRightOffset = 2 * scaled_base_right_offset (F-44)
  • ScaledBaseWidth is defined as follows:
  • ScaledBaseWidth = PicWidthInMbs * 16 - ScaledBaseLeftOffset - ScaledBaseRightOffset (F-45)
  • ScaledBaseWidthC = ScaledBaseWidth / SubWidthC (F-46)
  • scaled_base_bottom_offset specifies the vertical offset between the bottom-right pixel of an upsampled base layer picture and the bottom-right pixel of a picture of the current layer in units of two luma samples.
  • When scaled_base_bottom_offset is not present, it shall be inferred to be equal to 0.
  • ScaledBaseBottomOffset is defined as follows:
  • ScaledBaseBottomOffset = 2 * scaled_base_bottom_offset (F-47)
  • ScaledBaseHeight is defined as follows:
  • ScaledBaseHeight = PicHeightInMbs * 16 - ScaledBaseTopOffset - ScaledBaseBottomOffset (F-48)
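The offset and dimension derivations in equations F-40 through F-48 can be checked with a small sketch. The function name and argument names are illustrative; only the arithmetic follows the equations above (offsets are transmitted in units of two luma samples, hence the doubling).

```python
def scaled_base_geometry(pic_width_in_mbs, pic_height_in_mbs,
                         left=0, top=0, right=0, bottom=0):
    # Double each transmitted offset (F-40, F-42, F-44, F-47), then
    # derive the scaled base dimensions (F-45, F-48).
    l, t = 2 * left, 2 * top
    r, b = 2 * right, 2 * bottom
    width = pic_width_in_mbs * 16 - l - r
    height = pic_height_in_mbs * 16 - t - b
    return width, height

print(scaled_base_geometry(11, 9))                   # → (176, 144)
print(scaled_base_geometry(11, 9, left=1, right=1))  # → (172, 144)
```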
  • chroma_phase_x_plus1 specifies the horizontal phase shift of the chroma components in units of quarter sampling space in the horizontal direction of a picture of the current layer. When chroma_phase_x_plus1 is not present, it shall be inferred to be equal to 0. chroma_phase_x_plus1 is in the range 0..1; the values of 2 and 3 are reserved. chroma_phase_y_plus1 specifies the vertical phase shift of the chroma components in units of quarter sampling space in the vertical direction of a picture of the current layer. When chroma_phase_y_plus1 is not present, it shall be inferred to be equal to 1.
  • chroma_phase_y_plus1 is in the range 0..2; the value of 3 is reserved. Note: The chroma type specified in the vui_parameters should be consistent with the chroma phase parameters chroma_phase_x_plus1 and chroma_phase_y_plus1 in the same sequence_parameter_set.
  • avc_rewrite_flag specifies that the transmitted sequence can be rewritten without degradation as an AVC bit-stream by only decoding and coding entropy codes and scaling transform coefficients.
  • An alternative method for the IntraBL block is employed and restrictions are placed on transform size selection by the encoder.
  • avc_adaptive_rewrite_flag specifies that the avc_rewrite_flag will be sent in the slice header.
  • Some embodiments of the present invention comprise a scaling process that maps quantized transform coefficients to either a "de-quantized” version or an alternative quantization domain.
  • the decoded transform coefficients in all layers may be "de-quantized” according to the process defined in the current H.264/AVC video coding standard.
  • when the avc_rewrite_flag signals that these embodiments are enabled, the decoded, quantized transform coefficients or indices are not "de-quantized" in layers preceding the desired enhancement layer.
  • the quantized coefficients or indices are mapped from a lower layer (specifically, a layer on which a desired enhancement layer depends) to the next higher layer (specifically, a layer closer to the desired enhancement layer, in order of dependency, that depends explicitly on the previously-mentioned lower layer) .
  • a system according to these embodiments comprises a first parameter determiner 211, a second parameter determiner 212, and a scaler 213.
  • the mapping process may operate as follows. First, the quantization parameter, or Qp value, in the lower layer bit-stream is determined by the first parameter determiner 211 (50).
  • the quantization parameter, or Qp value, in the higher layer is determined by the second parameter determiner 212 (51).
  • the lower-layer coefficients (first-layer transform coefficients) may be scaled (52) by a factor based on the quantization parameters at the scaler 213.
  • the difference between the lower layer and higher layer Qp values may be computed.
  • the transform coefficients may be scaled with the following process:
  • T_HigherLayer and T_LowerLayer denote the transform coefficients at the higher layer and lower layer, respectively; n is an integer, and Qp_LowerLayer and Qp_HigherLayer are the quantization parameters for the lower layer and higher layer, respectively.
  • the mapping process can be implemented in a number of ways to simplify calculation.
  • the following system is equivalent:
  • Qp_Diff is not always greater than 0. Accordingly, in some embodiments, applications may check the value of Qp_Diff prior to performing the scaling operation. When the value of Qp_Diff is less than zero, it can be re-assigned a value of zero prior to further processing. In some embodiments, it may be assumed that Qp_LowerLayer will be greater than or equal to Qp_HigherLayer.
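The scaling process referenced above can be sketched as follows. This is a hedged reconstruction, not the normative formula (which is not reproduced in this excerpt): it only models the convention that the quantizer step size doubles every 6 Qp steps, together with the zero-clamping of Qp_Diff described above.

```python
def scale_coefficients(t_lower, qp_lower, qp_higher):
    # T_Higher[n] = T_Lower[n] * 2 ** (Qp_Diff / 6), with
    # Qp_Diff = Qp_LowerLayer - Qp_HigherLayer clamped at zero.
    qp_diff = max(0, qp_lower - qp_higher)
    return [t * 2.0 ** (qp_diff / 6.0) for t in t_lower]

print(scale_coefficients([3.0, -1.0], qp_lower=34, qp_higher=28))  # → [6.0, -2.0]
print(scale_coefficients([3.0], qp_lower=28, qp_higher=34))        # → [3.0]
```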
  • the pre-defined values may be selected as:
  • the coefficients may be refined.
  • a second scaling operation may be employed. This scaling operation may "de-quantize" the transform coefficients. While some embodiments described above only describe one lower layer and one higher layer, some embodiments may comprise more than two layers. For example, an exemplary three-layer case may function as follows: First, the lowest layer may be decoded. Then, transform coefficients may be mapped to the second layer via the method described above.
  • the mapped transform coefficients may then be refined.
  • these transform coefficients may be mapped to a third layer using a method described above.
  • These transform coefficients may then be refined, and the resulting coefficients may be "de-quantized" via a scaling operation such as the one defined by the AVC/ H.264 video coding standard.
  • a system comprises a first identifier 221, a second identifier 222, a first indicator determiner 223, a second indicator determiner 224, and a value determiner 225.
  • information related to adjacent macroblocks may be used to inform an encoding or decoding operation for a target block or macroblock.
  • a first adjacent macroblock is identified by the first identifier 221 (60) and a second adjacent macroblock is identified by the second identifier 222 (61).
  • a first adjacent macroblock indicator is then determined by the first indicator determiner 223 (62) and a second adjacent macroblock indicator is determined by the second indicator determiner 224.
  • An entropy coder control value may then be determined by the value determiner 225 (64) , based on the adjacent macroblock indicators.
  • a first adjacent macroblock is identified 71 and a second adjacent macroblock is identified 72. Attributes of the first adjacent macroblock may then be examined to determine if the first macroblock meets pre-defined conditions 73. The second adjacent macroblock may also be examined to determine whether conditions are met 74. In some embodiments, these conditions may comprise: whether a macroblock is not available, whether a macroblock is coded in inter-prediction mode, whether a macroblock is encoded in the spatial domain, whether a macroblock is intra-predicted with DC prediction and whether a macroblock is coded with reference to another temporally-coincident layer.
  • if any conditions are met, a first macroblock flag is set to indicate the compliance 80. If no conditions are met, the flag is set to indicate a lack of compliance 76. In some embodiments, the flag may be set to "zero" if any conditions are met 80 and the flag may be set to "one" if no conditions are met 76.
  • the same process 74, 79 may be followed for the second adjacent macroblock where a flag may be set to one value if a condition is met 81 and to another value if no conditions are met 78. When both adjacent macroblocks have been examined and related flags have been set, the flags may be added 83. The resultant value may then be used as an entropy coder control value.
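The flag-and-sum flow above can be sketched as a small function. The `condition_met` predicate is a hypothetical caller-supplied interface standing in for the pre-defined condition checks (availability, inter-prediction mode, and so on).

```python
def entropy_control_value(mb_a, mb_b, condition_met):
    # Each neighbor flag is 0 when any pre-defined condition holds
    # and 1 otherwise; the two flags are summed (83) to give the
    # entropy coder control value.
    flag_a = 0 if condition_met(mb_a) else 1
    flag_b = 0 if condition_met(mb_b) else 1
    return flag_a + flag_b

# Example: an "unavailable" neighbor meets a condition -> flag 0.
cond = lambda mb: not mb.get("available", True)
print(entropy_control_value({"available": False}, {"available": True}, cond))
# → 1
```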
  • a first adjacent macroblock is identified 90 and a second adjacent macroblock is identified 91. Attributes of the first adjacent macroblock and second adjacent macroblock may then be examined to determine if the macroblocks meet predefined conditions 92. In some embodiments, these conditions may comprise: whether the macroblock is available, whether the macroblock is encoded in inter-prediction mode and whether the macroblock is coded with reference to another layer. If any of the conditions are met for either macroblock 94, an estimated prediction mode is set to a predetermined mode. In some embodiments, the predetermined mode may be a DC prediction mode.
  • an actual prediction mode may also be determined.
  • the actual prediction mode may be based on image content. Methods may be used to determine a prediction mode that results in the least error or a reduced error. If the actual prediction mode is the same as the estimated prediction mode 94, the bitstream may be encoded to indicate use of the estimated prediction mode. On the decoder side, the same process may be followed to select the estimated mode when decoding the bitstream. When the actual prediction mode is not the same as the estimated prediction mode 94, a message may be sent to indicate the actual mode and its selection 95. Details of signaling of the estimated prediction mode and the actual prediction mode may be found in the JVT AVC specification, incorporated herein by reference.
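The estimated-versus-actual mode signaling above can be sketched as follows. This is a simplification: AVC's most-probable-mode rule takes the minimum of the two neighbor modes, and the actual signaling uses a remapped remainder index rather than the raw mode; both simplifications are noted in the comments.

```python
DC_MODE = 2  # DC prediction (mode 2) for luma in H.264/AVC

def estimate_mode(mode_a, mode_b, cond_a, cond_b):
    # A neighbor meeting any predefined condition is treated as DC;
    # the estimate is then the minimum of the two neighbor modes
    # (simplified sketch of the standard rule).
    a = DC_MODE if cond_a else mode_a
    b = DC_MODE if cond_b else mode_b
    return min(a, b)

def signal_mode(actual, estimated):
    # One bit says "use the estimate"; otherwise the actual mode is
    # sent (AVC actually sends a remapped remainder index).
    return (1, None) if actual == estimated else (0, actual)

print(estimate_mode(0, 5, False, True))  # → 0
print(signal_mode(0, 0))                 # → (1, None)
```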
  • Some embodiments of the present invention may comprise coding of intra-prediction modes for luma and chroma information in intra-coded blocks.
  • these modes are signaled with a context adaptive method and coded in a manner dependent on the prediction modes of spatial neighbors.
  • a conditional process may be used.
  • prediction modes may be predicted from neighbors if the neighbor does not utilize inter-layer prediction.
  • Blocks that do utilize inter-layer prediction may be treated in one of the following ways.
  • the block may be treated as if it has the most probable prediction mode. In H.264 /AVC-related embodiments, this may be the DC prediction mode (mode 2) for the case of luma prediction.
  • the block may be treated as if it is an inter-coded block and OUTSIDE of the prediction region.
  • OUTSIDE has a specific context within the software utilized for testing in the JVT SVC project group. This software is commonly known as the JSVM.
  • encoding of the prediction mode and selection of the context for signaling the encoded mode may be separate processes. Different prediction methods may be used for the two processes.
  • the prediction mode may be encoded using the actual prediction mode for all intra-coded blocks - including blocks employing inter-layer prediction.
  • these same blocks may utilize another rule, such as one of the rules described above to derive contexts for coding the encoded value .
  • the contexts may assume that the intra-blocks utilizing inter- layer prediction have the most probable prediction mode.
  • Some embodiments of the present invention comprise maintenance of the "coded block pattern" information, or Cbp, as defined in the JVT SVC standard incorporated herein by reference.
  • This information defines sub-regions within an image (or macro-block) that contain residual information.
  • the bit-stream decoder first decodes the Cbp and then utilizes the information to parse the remainder of the bit-stream.
  • the Cbp may define the number of transform coefficient lists that may be present.
  • the Cbp is also utilized for reconstructing the decoded frame. For example, the decoder only needs to calculate the inverse transform if the Cbp denotes residual information.
  • the Cbp transmitted in the bit-stream may be utilized by the parsing process to extract the transform coefficients. However, it may no longer be useful to the reconstruction process since the sub-regions may contain residual information from previous layers.
  • a decoder of embodiments of the present invention may either: ( 1) not utilize the Cbp information within the reconstruction process, or (2) recalculate the Cbp after parsing the bit-stream.
  • Examples of the recalculation process include scanning through all coefficient lists to identify the sub-regions with residual information, or alternatively, generating a new Cbp by computing the binary OR operation between the transmitted Cbp and the Cbp utilized for reconstructing the lower layer data.
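Both recalculation options above can be sketched compactly; the Cbp is modeled here as a plain bitmask with one bit per sub-region, which is an illustrative simplification of the AVC Cbp layout.

```python
def recalculate_cbp(transmitted_cbp, lower_layer_cbp):
    # Second option above: binary OR of the transmitted Cbp and the
    # Cbp used when reconstructing the lower layer.
    return transmitted_cbp | lower_layer_cbp

def cbp_from_coefficients(coeff_lists):
    # First option above: scan the coefficient lists and set one bit
    # per sub-region that actually holds residual information.
    cbp = 0
    for bit, coeffs in enumerate(coeff_lists):
        if any(c != 0 for c in coeffs):
            cbp |= 1 << bit
    return cbp

print(bin(recalculate_cbp(0b0101, 0b0011)))          # → 0b111
print(cbp_from_coefficients([[0, 0], [7, 0], [0]]))  # → 2
```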
  • "lower layer data” denotes the layer utilized during the inter- layer prediction process.
  • a system comprises a receiver 231, a decoder 232, a parser 233, a scaler 234, an adder 235, and a calculator 236.
  • the receiver 231 receives (100) a bitstream comprising Cbp information and encoded image data.
  • the Cbp information may be decoded by the decoder 232 (101) and used to determine which parts of the bitstream comprise transform coefficient data.
  • the parser 233 may then parse (102) the bitstream by using the Cbp information to identify quantized indices or dequantized transform coefficients in a base layer and any enhancement layers.
  • the indices or coefficients of a base layer or a lower layer may then be scaled (103) by the scaler 234 to match an enhancement layer.
  • the scaled indices or coefficients may then be added to or combined with the enhancement layer by means of the adder 235 to form a combined layer (104).
  • the Cbp information may then be re-calculated or updated (105) by the calculator 236 to reflect changes in coefficient location.
  • the new combined Cbp information may then be used for subsequent processing of the combined layer or a resulting reconstructed image.
  • the combined Cbp information may be utilized for the loop filter operation defined in the AVC specification.
  • Some embodiments of the present invention comprise methods and systems for handling of a flag that enables an 8x8 transform. These embodiments may relate to the JVT SVC standard. In these embodiments, this flag does not need to be transmitted when a block is intra-coded with inter-layer prediction and does not contain residual data. In some embodiments, the flag does not need to be transmitted when inter-frame prediction utilizes blocks smaller than a specified size, such as 8x8. These embodiments may copy the transform flag that was transmitted in the lower layer (or lower layers) and employ this flag during the reconstruction process.
  • Some embodiments of the present invention comprise alternative methods and systems for handling of a flag that enables an 8x8 transform.
  • this flag does not need to be transmitted when a block does not contain residual data. If this case occurs in a lower layer that is utilized for inter-layer prediction, then the higher layer can choose to enable the 8x8 transform when sending
  • a decoder can allow the lower layer and higher layer to utilize different transforms.
  • Some embodiments of the present invention comprise methods and systems for handling of quantization matrices, which are also known as weight matrices or scaling matrices to experts in the field. These matrices may change the "de-quantization" process and allow an encoder and decoder to apply frequency dependent (or transform coefficient dependent) quantization. In these embodiments, the presence of these scaling matrices alters the scaling process of the mapping procedure described above. In some embodiments, the mapping procedure may be described as:
  • T_HigherLayer and T_LowerLayer denote the transform coefficients at the higher layer and lower layer, respectively; n is an integer, Qp_LowerLayer and Qp_HigherLayer are, respectively, the quantization parameters for the lower layer and higher layer, and S_L and S_H are, respectively, the scaling factors for the lower layer and higher layer.
  • S_L[n] and S_H[n] may be explicitly present or, alternatively, derived from the bit-stream.
  • an additional weighting matrix may be sent in the bit-stream.
  • This additional weighting matrix may explicitly define the frequency weighting necessary to predict a layer from a lower layer.
  • the weighting matrix can be employed as
  • W1 and W2 are weighting matrices included in the bit-stream.
  • either W1 or W2 may not be transmitted.
  • the matrix not transmitted may be assumed to have elements equal to zero.
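The frequency-dependent mapping described above can be sketched as follows. This is a hedged reconstruction: the exact normative combination of the Qp factor and the scaling matrices is not reproduced in this excerpt, so the per-coefficient ratio S_H[n] / S_L[n] shown here is an illustrative assumption.

```python
def scale_with_weights(t_lower, qp_lower, qp_higher, s_l, s_h):
    # Each coefficient n is rescaled by the ratio of layer scaling
    # factors S_H[n] / S_L[n] in addition to the Qp-difference factor
    # (Qp_Diff clamped at zero, as in the unweighted mapping).
    qp_diff = max(0, qp_lower - qp_higher)
    return [t * (s_h[n] / s_l[n]) * 2.0 ** (qp_diff / 6.0)
            for n, t in enumerate(t_lower)]

print(scale_with_weights([4.0], 34, 28, s_l=[2.0], s_h=[1.0]))  # → [4.0]
```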
  • Embodiments of the present invention comprise methods and systems for modifying, creating and/or applying a scalable video codec. Some embodiments allow for the fast
  • Some embodiments comprise conversion of a multi-layer bit-stream to a single layer bit-stream. Some exemplary embodiments comprise conversion of an SVC bit- stream to an AVC bit-stream.
  • Embodiments of the present invention relate to residual prediction. These embodiments may comprise a residual prediction process that operates in both the transform and spatial domains.
  • the residual prediction process may comprise mapping the residual transform coefficients from the lower layer to the higher layer. This mapping process can operate on the scaled transform coefficients or the (unscaled) transform coefficient levels.
  • the process of residual prediction of scaled transform coefficients may be specified as in subclause A.8.11.4.
  • Residual accumulation process for scaled transform coefficients. Inputs to this process are: a variable fieldMb specifying whether a macroblock is a field or a frame macroblock; a variable lumaTrafo specifying the luma transform type; and a list of scaled transform coefficient values sTCoeff with 256 + 2 * MbWidthC * MbHeightC elements.
  • Outputs of this process comprise a modified version of the scaled transform coefficient values sTCoeff.
  • the progressive refinement process for scaled transform coefficients as specified in subclause G.8.11.3 may be invoked with fieldMb, lumaTrafo and sTCoeff as input and a modified version of sTCoeff as output, where G.8.11.3 is defined in the incorporated SVC standard.
  • the residual prediction process may occur in the spatial domain when the enhancement layer utilizes a lower layer for inter-layer prediction that contains a different spatial resolution.
  • the residual from the referenced layer is reconstructed in the intensity domain and interpolated to the enhancement layer resolution.
  • the residual from the referenced layer is added to a prediction derived from the referenced layer in the spatial domain. The result of this addition is then interpolated to the enhancement layer.
  • a system according to these embodiments comprises a resolution determiner 241, a comparator 242, a controller 243, a coefficient scaler 244, a coefficient combiner 245, an inverse transformer 246, and a spatial-domain combiner 247.
  • a current layer may be examined to determine if it employs residual prediction (110). If no residual prediction is employed, no accumulation is required
  • the spatial resolutions of the current layer and a reference layer are determined by the resolution determiner 241 (112, 113). Then, the spatial resolution of the current layer is compared to the spatial resolution of a reference layer by the comparator 242 (114).
  • the controller 243 selectively allows the coefficient scaler 244 and the coefficient combiner 245 to perform steps 116 and 117. That is, if these spatial resolutions are the same (114), the coefficients or indices of the reference layer may be scaled (116) and combined (117) with those of the current layer in the transform domain.
  • the controller 243 selectively allows the inverse transformer 246 and the spatial-domain combiner 247 to perform steps 115, 118, and 120. That is, if the spatial resolutions are not the same (114), the current layer and reference layer indices may be dequantized and the resulting coefficients may be inverse transformed (115, 118). The resulting spatial domain values in the current layer and the reference layer may then be combined by the spatial-domain combiner 247 (120) to form a reconstructed image.
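The domain-selection logic above can be sketched as follows. `inverse_transform` and `upsample` are hypothetical caller-supplied operations, and the sketch omits the normative details of steps 115 through 120 (including the coefficient scaling in the transform-domain path).

```python
def accumulate_residual(current, reference, same_resolution,
                        inverse_transform, upsample):
    # Equal resolutions: accumulate in the transform domain
    # (cf. steps 116-117; scaling omitted for brevity).
    if same_resolution:
        return [c + r for c, r in zip(current, reference)]
    # Different resolutions: inverse-transform both layers (115, 118),
    # interpolate the reference, and combine in the spatial domain (120).
    cur = inverse_transform(current)
    ref = upsample(inverse_transform(reference))
    return [c + r for c, r in zip(cur, ref)]

print(accumulate_residual([1, 2], [3, 4], True, None, None))  # → [4, 6]
```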
  • the method of residual prediction depends on the resolution of the enumerated higher layer and the enumerated lower layer referenced for prediction. Unfortunately, this is problematic as the accumulation of residual information in the spatial domain may not equal the accumulation of residual in the transform domain followed by subsequent conversion to the spatial domain. For the case of a standardized decoding process this may lead to a drift between the encoder and decoder and a loss of coding efficiency.
  • the current SVC system addresses this problem by performing residual prediction only in the spatial domain.
  • some embodiments of the present invention comprise a decoding process that performs residual prediction in both domains. Specifically, when residual prediction is enabled and the enhancement layer and layer referenced for inter-layer prediction are the same resolution, then the residual is accumulated in the transform domain. However, when residual prediction is enabled and the enhancement layer and layer referenced for inter-layer prediction are different resolutions, then the residual is accumulated in the spatial domain.
  • An exemplary decoding process is described, with the following
  • intra- layer prediction may be performed at multiple layers in the scalable bit-stream.
  • the function GeneratelntraLayerPrediction may be called prior to any residual processing.
  • the output of this function may be added to the array rYCC.
  • the residual accumulation process may occur on unscaled transform coefficients.
  • the inter-layer prediction process may be performed prior to constructing the scaled transform coefficients.
  • Some embodiments of the present invention comprise a decoder that takes a scalable bit-stream as input and generates a reconstructed image sequence.
  • the scalable bit-stream employs an inter-layer prediction process to project information from enumerated lower layers of the bit-stream to enumerated higher layers of the bit-stream.
  • Some embodiments of the present invention comprise a decoding process that accumulates residual information in both the transform and spatial domain. Accumulation is performed in the transform domain between enumerated layers in the bit-stream when the layers describe an image sequence with the same resolution.
  • Some embodiments of the present invention comprise a decoding process that converts accumulated transform coefficients to the spatial domain only when processing a current layer that has a different spatial resolution than the layer utilized for inter-layer prediction.
  • the transform coefficients are converted to the spatial domain and subsequently upsampled (or interpolated).
  • the transform coefficient list is then set equal to zero.
  • Some embodiments of the present invention comprise a decoding process that accumulates residuals in the transform domain until the resolution of the current decoding layer differs from that of the layer utilized for inter-layer prediction.
  • the transform coefficient list is then set to zero, with subsequent processing of layers that reference layers with the same spatial resolution performing accumulation in the transform domain.
  • Some embodiments of the present invention comprise a decoding process that generates an output bit-stream by performing intra-layer prediction, computing the inverse transform on scaled transform coefficients, adding the output of the inverse transform operation to a possibly non-zero residual signal, and summing the result of this previous addition with the output of the intra-layer prediction process.
  • Some embodiments of the present invention comprise a decoding process that also allows for inter-layer prediction to be performed on unscaled transform coefficients or transform coefficient levels.
  • Some embodiments of the present invention comprise a decoding process that also allows for intra-layer prediction to be performed within layers of the bit-stream that are not reconstructed for output. The result of this intra-layer prediction is added to the accumulated spatial residual.
  • Some embodiments of the present invention comprise a decoding process where clipping is performed within the residual prediction process.
  • the system may include members such as: a CPU (Central Processing Unit) that executes instructions of a control program realizing the functions; a ROM (Read Only Memory) recording the program; a RAM (Random Access Memory) on which the program is executed; and a storage device (recording medium), such as a memory, which stores the program and various kinds of data.
  • program code (e.g., an executable code program, intermediate code program, or source program) of the control program of the system is recorded on a recording medium in a computer-readable manner; this recording medium is supplied to the system, and the computer (or CPU or MPU) reads out the program code from the recording medium and executes the program.
  • Examples of such a recording medium include a tape, such as a magnetic tape or a cassette tape; a magnetic disk, such as a flexible disk or a hard disk; a disc including an optical disc, such as a CD-ROM/MO/MD/DVD/CD-R; a card, such as an IC card (inclusive of a memory card); and a semiconductor memory, such as a mask ROM, an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), or a flash ROM.
  • the system may be capable of being connected to a communications network, allowing the program code to be supplied via the communications network.
  • Examples of the communications network include the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV network, a virtual private network, a telephone network, a mobile communications network, and a satellite communications network.
  • Non-limiting examples of the transmission media composing the communications network are wired media such as IEEE 1394, USB, power line communication, cable TV lines, telephone lines, and ADSL lines; infrared light such as IrDA and remote controllers; and electric waves such as Bluetooth®, IEEE 802.11, HDR, mobile telephone networks, satellite connections, and terrestrial digital broadcasting networks.
  • the present invention may be realized by a carrier wave or a data signal sequence realized by electronic transmission of the program code.


Abstract

Embodiments of the present invention comprise systems and methods for managing and combining layers in a multi-layer bitstream.

Description

DESCRIPTION
METHODS AND SYSTEMS FOR COMBINING LAYERS IN A MULTI-LAYER BITSTREAM
TECHNICAL FIELD Embodiments of the present invention comprise methods and systems for processing and process management in a multi-layer bitstream. The present invention relates particularly to 1) methods and systems for combining layers in a multi-layer bitstream, 2) methods and systems for conditional transform-domain residual accumulation, 3) methods and systems for residual layer scaling, 4) methods and systems for image processing control based on adjacent block characteristics, 5) methods and systems for maintenance and use of coded block pattern information, and 6) methods and systems for transform selection and management.
BACKGROUND ART
In order to reduce the bit-rate of the encoder output, a scalable bit-stream may comprise a form of inter-layer prediction. Exemplary systems comprise inter-layer prediction within the scalable video extensions for the
AVC | H.264 video coding standard. These extensions are commonly known as SVC, and the SVC system is described in T. Wiegand, G. Sullivan, J. Reichel, H. Schwarz and M. Wien, "Joint Draft 9 of SVC amendment (revision 2)", JVT-V201, Marrakech, Morocco, January 13-19, 2007. In the SVC system, inter-layer prediction is realized by projecting motion and mode information from an enumerated lower layer to an enumerated higher layer. In addition, the prediction residual is projected from an enumerated lower layer to an enumerated higher layer. The higher-layer bit-stream may then contain additional residual to improve the quality of the decoded output.
DISCLOSURE OF INVENTION
According to a first aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: a) inverse quantizing a first-layer quantized transform coefficient thereby creating a first-layer transform coefficient; b) scaling the first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; c) inverse quantizing a second-layer quantized transform coefficient thereby creating a second-layer transform coefficient; and d) combining the scaled, first-layer transform coefficient with the second-layer transform coefficient to form a combined coefficient. The first-layer may be a base layer.
The inverse quantizing a first-layer quantized transform coefficient may comprise using a first quantization parameter and the inverse quantizing a second-layer quantized transform coefficient may comprise using a second quantization parameter.
The first layer and the second layer may have different spatial resolutions.
The second-layer may be an enhancement layer. The method may further comprise inverse transforming the combined coefficient thereby generating a spatial-domain residual value.
The method may further comprise combining the spatial-domain residual value with the spatial-domain prediction value.
The method may further comprise: a) inverse quantizing a third-layer quantized transform coefficient thereby creating a third-layer transform coefficient; b) scaling the combined coefficient to match a characteristic of a third layer thereby creating a scaled-combined coefficient; and c) combining the scaled-combined coefficient with the third-layer transform coefficient.
The method may further comprise generating a combined bitstream comprising the combined coefficient. The combined bitstream may further comprise an intra-prediction mode.
The combined bitstream may further comprise a motion vector.
According to a second aspect of the invention, there is provided a system for combining layers in a multi-layer bitstream, the system comprising: a) a first inverse quantizer for inverse quantizing a first-layer quantized transform coefficient thereby creating a first-layer transform coefficient; b) a scaler for scaling the first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; c) a second inverse quantizer for inverse quantizing a second-layer quantized transform coefficient thereby creating a second-layer transform coefficient; and d) a coefficient combiner for combining the scaled, first-layer transform coefficient with the second-layer transform coefficient to form a combined coefficient.
The system may further comprise a bitstream generator for generating a combined bitstream comprising the combined coefficient.
The system may further comprise an inverse transformer for inverse transforming the combined coefficient thereby generating a spatial-domain residual value and a second combiner for combining the spatial-domain residual value with the spatial-domain prediction value.
According to a third aspect of the invention, there is provided a method for converting an SVC-compliant bitstream to AVC-compliant data, the method comprising: a) receiving an SVC-compliant bitstream comprising prediction data, base-layer residual data and enhancement-layer residual data; b) inverse quantizing the base-layer residual data thereby creating base-layer transform coefficients; c) inverse quantizing the enhancement-layer residual data thereby creating enhancement-layer transform coefficients; d) scaling the base-layer transform coefficients to match a quantization characteristic of the enhancement-layer thereby creating scaled base-layer transform coefficients; and e) combining the scaled base-layer transform coefficients with the enhancement-layer transform coefficients to form combined coefficients.
The method may further comprise combining the combined coefficients with the prediction data to form an AVC-compliant bitstream.
The prediction data may comprise an intra-prediction mode indicator.
The prediction data may comprise a motion vector. The method may further comprise inverse transforming the combined coefficients thereby creating spatial-domain residual values. The method may further comprise obtaining spatial-domain prediction values and combining the spatial-domain prediction values with the spatial-domain residual values to form a decoded image.
According to a fourth aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: a) receiving a first-layer quantized transform coefficient; b) scaling the first-layer quantized transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer quantized transform coefficient; c) receiving a second-layer quantized transform coefficient; and d) combining the scaled, first-layer transform coefficient with the second-layer quantized transform coefficient to form a combined quantized coefficient.
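One consistent numeric reading of this aspect, as a sketch: levels are combined in the quantized domain and a single inverse quantization follows. The scalar de-quantizer (step size doubling every 6 Qp units) and the restriction to Qp differences that are non-negative multiples of 6 are simplifying assumptions for illustration, not the standardized de-quantization process.

```python
def scale_quantized_level(level_first, qp_first, qp_second):
    # re-express a first-layer level at the second layer's quantization step;
    # assumes qp_first - qp_second is a non-negative multiple of 6, so the
    # factor 2 ** ((qp_first - qp_second) / 6) reduces to an integer shift
    return level_first << ((qp_first - qp_second) // 6)

def inverse_quantize(level, qp):
    # illustrative scalar de-quantizer: step size doubles every 6 Qp units
    return level * 2.0 ** (qp / 6.0)

def combine_quantized(level_first, qp_first, level_second, qp_second):
    # steps b)-d): scale, then combine in the quantized domain;
    # a single inverse quantization of the combined level follows
    combined_level = scale_quantized_level(level_first, qp_first, qp_second) + level_second
    return inverse_quantize(combined_level, qp_second)
```
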
The method may further comprise inverse quantizing the combined quantized coefficient to produce a combined coefficient.
The method may further comprise inverse transforming the combined coefficient thereby producing a spatial domain residual value. The method may further comprise combining the spatial domain residual value with a spatial domain prediction value. The method may further comprise generating a combined bitstream comprising the combined quantized coefficient.
According to a fifth aspect of the invention, there is provided a method for conditionally combining layers in a multi-layer bitstream, the method comprising: a) receiving a first-layer quantized transform coefficient; b) receiving a second-layer quantized transform coefficient; c) receiving a layer combination indicator; d) scaling the first-layer quantized transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer quantized transform coefficient, when the layer combination indicator indicates transform domain accumulation; and e) combining the scaled, first-layer transform coefficient with the second-layer quantized transform coefficient to form a combined quantized coefficient, when the layer combination indicator indicates transform domain accumulation.
The layer combination indicator may be derived from data in a second-layer bitstream. The method may further comprise disabling smoothed reference prediction, when the layer combination indicator indicates transform domain accumulation.
According to a sixth aspect of the invention, there is provided a method for reconstructing an enhancement layer from a multi-layer bitstream, the method comprising: a) receiving a first-layer intra-prediction mode; b) receiving a second-layer bitstream prediction indicator, the indicator indicating that the first-layer prediction mode is to be used for prediction of the second layer; c) using the first-layer prediction mode to construct a second-layer prediction based on adjacent block data in the second layer; and d) combining the second-layer prediction with residual information thereby creating a reconstructed second layer.
According to a seventh aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: a) determining a first spatial resolution of a first layer of a multi-layer image; b) determining a second spatial resolution of a second layer of the multi-layer image; c) comparing the first spatial resolution with the second spatial resolution; d) performing steps e) through f) when the first spatial resolution is substantially equal to the second spatial resolution; e) scaling a first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; f) combining the scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; g) performing steps h) through k) when the first-layer spatial resolution is not substantially equal to the second-layer spatial resolution; h) inverse transforming the first-layer transform coefficient thereby producing a first-layer spatial domain value; i) inverse transforming the second-layer transform coefficient thereby producing a second-layer spatial domain value; j) scaling the first-layer spatial domain value to match the resolution of the second layer thereby producing a scaled, first-layer spatial domain value; and k) combining the scaled, first-layer spatial domain value with the second-layer spatial domain value thereby producing a combined spatial domain residual value.
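The branch structure of this aspect can be sketched with a toy blockwise two-point transform and nearest-neighbour upsampling. The transform, the upsampler, and the use of coefficient count as a proxy for spatial resolution are illustrative assumptions, not the H.264/SVC operators.

```python
def inverse_transform(coeffs):
    # toy blockwise 2-point inverse transform: each pair holds [sum, difference]
    out = []
    for i in range(0, len(coeffs), 2):
        u, v = coeffs[i], coeffs[i + 1]
        out += [(u + v) / 2.0, (u - v) / 2.0]
    return out

def upsample(samples):
    # nearest-neighbour 2x upsampling (stand-in for the SVC interpolation filter)
    out = []
    for s in samples:
        out += [s, s]
    return out

def combine_layers(t_first, t_second, scale=lambda c: c):
    if len(t_first) == len(t_second):
        # steps e)-f): equal resolutions, combine in the transform domain
        return [scale(a) + b for a, b in zip(t_first, t_second)]
    # steps h)-k): unequal resolutions, combine in the spatial domain
    s_first = inverse_transform(t_first)               # h)
    s_second = inverse_transform(t_second)             # i)
    s_first = upsample(s_first)                        # j)
    return [a + b for a, b in zip(s_first, s_second)]  # k)
```

For equal-length inputs the result stays in the transform domain (a combined coefficient list); otherwise it is a combined spatial-domain residual, matching the two outcomes of the aspect.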
The characteristic may comprise a quantization parameter.
The transform coefficients may be de-quantized transform coefficients.
The transform coefficients may be quantized transform coefficients and the transform coefficients may be inverse quantized prior to the inverse transforming.
The scaling may comprise: a) determining a first-layer quantization parameter; b) determining a second-layer quantization parameter; and c) scaling the first-layer transform coefficient based on the first-layer quantization parameter and the second-layer quantization parameter.
The method may further comprise inverse transforming the combined coefficient when the first spatial resolution is substantially equal to the second spatial resolution, thereby generating a spatial-domain residual value. The method may further comprise combining the spatial-domain residual value with a spatial-domain prediction value when the first spatial resolution is substantially equal to the second spatial resolution.
The method may further comprise combining the combined spatial-domain residual value with a spatial-domain prediction value when the first spatial resolution is not substantially equal to the second spatial resolution, thereby creating a combined spatial-domain value.
The method may further comprise generating a combined bitstream comprising the combined coefficient when the first spatial resolution is substantially equal to the second spatial resolution.
The combined bitstream may further comprise an intra-prediction mode when the first spatial resolution is substantially equal to the second spatial resolution.
The combined bitstream may further comprise a motion vector when the first spatial resolution is substantially equal to the second spatial resolution.
The method may further comprise transforming the combined spatial-domain value when the first spatial resolution is not substantially equal to the second spatial resolution, thereby creating a combined transform-domain coefficient.
The method may further comprise generating a combined bitstream comprising the combined transform-domain coefficient when the first spatial resolution is not substantially equal to the second spatial resolution.
The combined bitstream may further comprise an intra-prediction mode when the first spatial resolution is not substantially equal to the second spatial resolution. The combined bitstream may further comprise a motion vector when the first spatial resolution is not substantially equal to the second spatial resolution.
According to an eighth aspect of the invention, there is provided a system for combining layers in a multi-layer bitstream, the system comprising: a) a resolution determiner for determining a first spatial resolution of a first layer of a multi-layer image and for determining a second spatial resolution of a second layer of the multi-layer image; b) a comparator for comparing the first spatial resolution with the second spatial resolution; c) a controller for selectively performing steps d) through e) when the first spatial resolution is substantially equal to the second spatial resolution; d) a coefficient scaler for scaling a first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; e) a coefficient combiner for combining the scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; f) the controller selectively performing steps g) through i) when the first-layer spatial resolution is not substantially equal to the second-layer spatial resolution; g) an inverse transformer for inverse transforming the first-layer transform coefficient thereby producing a first-layer spatial domain value and for inverse transforming the second-layer transform coefficient thereby producing a second-layer spatial domain value; h) a spatial-domain scaler for scaling the first-layer spatial domain value to match the resolution of the second layer thereby producing a scaled, first-layer spatial domain value; and i) a spatial-domain combiner for combining the scaled, first-layer spatial domain value with the second-layer spatial domain value thereby producing a combined spatial domain residual value.
The inverse transformer may further inverse transform the combined coefficient when the first spatial resolution is substantially equal to the second spatial resolution, thereby generating a spatial-domain residual value.
The spatial-domain combiner may further combine the spatial-domain residual value with a spatial-domain prediction value when the first spatial resolution is substantially equal to the second spatial resolution.
The spatial domain combiner may further combine the combined spatial-domain residual value with a spatial-domain prediction value when the first spatial resolution is not substantially equal to the second spatial resolution, thereby creating a combined spatial-domain value.
The system may further comprise a bitstream generator for generating a combined bitstream comprising the combined coefficient when the first spatial resolution is substantially equal to the second spatial resolution. The combined bitstream may further comprise an intra-prediction mode when the first spatial resolution is substantially equal to the second spatial resolution.
The combined bitstream may further comprise a motion vector when the first spatial resolution is substantially equal to the second spatial resolution.
The system may further comprise a transformer for transforming the combined spatial-domain value when the first spatial resolution is not substantially equal to the second spatial resolution, thereby creating a combined transform-domain coefficient.
The system may further comprise a bitstream generator for generating a combined bitstream comprising the combined transform-domain coefficient when the first spatial resolution is not substantially equal to the second spatial resolution. The combined bitstream may further comprise an intra-prediction mode when the first spatial resolution is not substantially equal to the second spatial resolution.
The combined bitstream may further comprise a motion vector when the first spatial resolution is not substantially equal to the second spatial resolution.
According to a ninth aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: a) receiving de-quantized transform coefficients for a first layer of a first spatial resolution; b) receiving de-quantized transform coefficients for a second layer of the first spatial resolution; c) scaling the first-layer transform coefficients, thereby creating scaled first-layer transform coefficients; d) combining the scaled first-layer transform coefficients with the second-layer transform coefficients thereby creating combined transform coefficients; e) inverse transforming the combined transform coefficients thereby creating combined residual spatial-domain values; f) receiving de-quantized transform coefficients for a third layer of a second spatial resolution; g) resampling the combined residual spatial-domain values to the second spatial resolution, thereby creating resampled combined spatial-domain values; h) inverse transforming the third layer transform coefficients, thereby creating third-layer spatial-domain values; and i) combining the resampled combined spatial-domain values with the third-layer spatial-domain values.
According to a tenth aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: a) receiving quantized transform coefficients for a first layer of a first spatial resolution; b) receiving quantized transform coefficients for a second layer of the first spatial resolution; c) scaling the quantized first-layer transform coefficients, thereby creating scaled quantized first-layer transform coefficients; d) combining the scaled quantized first-layer transform coefficients with the second-layer quantized transform coefficients thereby creating combined quantized transform coefficients; e) inverse quantizing the combined quantized transform coefficients thereby creating combined transform coefficients; f) inverse transforming the combined transform coefficients thereby creating combined residual spatial-domain values; g) receiving quantized transform coefficients for a third layer of a second spatial resolution; h) resampling the combined residual spatial-domain values to the second spatial resolution, thereby creating resampled combined spatial-domain values; i) inverse quantizing the third-layer quantized transform coefficients thereby creating third-layer transform coefficients; j) inverse transforming the third layer transform coefficients, thereby creating third-layer spatial-domain values; and k) combining the resampled combined spatial-domain values with the third-layer spatial-domain values.
According to an eleventh aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: a) receiving de-quantized transform coefficients for a first layer of a first spatial resolution; b) inverse transforming the de-quantized first-layer transform coefficients thereby producing first-layer spatial domain values; c) receiving de-quantized transform coefficients for a second layer of a second spatial resolution that is higher than the first spatial resolution; d) receiving de-quantized transform coefficients for a third layer of the second spatial resolution; e) upsampling the first-layer spatial domain values to the second spatial resolution thereby producing upsampled first-layer spatial domain values; f) combining the second-layer de-quantized transform coefficients with the third-layer de-quantized transform coefficients thereby creating combined transform coefficients; g) inverse transforming the combined transform coefficients thereby creating first combined residual spatial-domain values; and h) combining the upsampled first-layer spatial domain values with the first combined residual spatial-domain values.
According to a twelfth aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: a) receiving quantized transform coefficients for a first layer of a first spatial resolution; b) receiving quantized transform coefficients for a second layer of the first spatial resolution; c) receiving quantized transform coefficients for a third layer of the first spatial resolution; d) scaling the quantized first-layer transform coefficients to match properties of the second-layer, thereby creating scaled quantized first-layer transform coefficients; e) combining the scaled quantized first-layer transform coefficients with the second-layer quantized transform coefficients thereby creating combined quantized transform coefficients; f) inverse quantizing the combined quantized transform coefficients thereby creating combined transform coefficients; g) inverse quantizing the third-layer quantized transform coefficients thereby creating third-layer de-quantized transform coefficients; h) combining the combined transform coefficients with the third-layer de-quantized transform coefficients thereby creating three-layer combined transform coefficients; and i) inverse transforming the three-layer combined transform coefficients thereby creating combined spatial-domain values.
According to a thirteenth aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: i) determining whether a second layer of a multi-layer image employs residual prediction; ii) performing the following steps only if the second layer employs residual prediction; iii) determining a first spatial resolution of a first layer of a multi-layer image; iv) determining a second spatial resolution of the second layer; v) comparing the first spatial resolution with the second spatial resolution; vi) performing steps vii) through viii) when the first spatial resolution is substantially equal to the second spatial resolution; vii) scaling a first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; viii) combining the scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; ix) performing steps x) through xiii) when the first-layer spatial resolution is not substantially equal to the second-layer spatial resolution; x) inverse transforming the first-layer transform coefficient thereby producing a first-layer spatial domain value; xi) inverse transforming the second-layer transform coefficient thereby producing a second-layer spatial domain value; xii) scaling the first-layer spatial domain value to match the resolution of the second layer thereby producing a scaled, first-layer spatial domain value; and xiii) combining the scaled, first-layer spatial domain value with the second-layer spatial domain value thereby producing a combined spatial domain value.
According to a fourteenth aspect of the invention, there is provided a method for scaling transform coefficients in a multi-layer bitstream, the method comprising: determining a first-layer quantization parameter based on the multi-layer bitstream; determining a second-layer quantization parameter based on the multi-layer bitstream; and scaling a first-layer transform coefficient based on the first-layer quantization parameter and the second-layer quantization parameter.
The scaling may be performed according to the following relationship:

T_SecondLayer = T_FirstLayer * 2^((Qp_FirstLayer - Qp_SecondLayer) / k)

wherein T_SecondLayer and T_FirstLayer denote the transform coefficients at the second layer and first layer, respectively; k is an integer; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
The k may be equal to 6.
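A direct transcription of the relationship, as a sketch; with k = 6 the factor matches the usual H.264 convention that the quantization step size doubles every 6 Qp units.

```python
def scale_transform_coefficient(t_first, qp_first, qp_second, k=6):
    # T_SecondLayer = T_FirstLayer * 2 ** ((Qp_FirstLayer - Qp_SecondLayer) / k)
    return t_first * 2.0 ** ((qp_first - qp_second) / k)
```
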
The scaling may be performed according to the following relationship:

Qp_Diff = Qp_FirstLayer - Qp_SecondLayer
T_SecondLayer = ((T_FirstLayer << (Qp_Diff // 6)) * ScaleMatrix[Qp_Diff % 6] + M/2) >> M

wherein // denotes integer division and % denotes the modulo operation; M and ScaleMatrix are constants; T_SecondLayer and T_FirstLayer denote the transform coefficients at the second layer and first layer, respectively; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
Qp_Diff may be reset to zero when Qp_Diff is found to be less than zero. The ScaleMatrix may be equal to [512 573 642 719 806 902] and the M may be equal to 512.
The ScaleMatrix may be equal to [8 9 10 11 13 14] and the M may be equal to 8.
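The integer relationship can be exercised as follows, as a sketch using the ScaleMatrix/M pair [512 ... 902]/512 given above; the entries approximate 512 * 2**(i/6). Reading the "+ M/2" rounding offset together with the final shift as rounding division by the constant M is an interpretive assumption here.

```python
SCALE_MATRIX = [512, 573, 642, 719, 806, 902]  # ~ 512 * 2 ** (i / 6)
M = 512

def scale_fixed_point(t_first, qp_first, qp_second):
    qp_diff = qp_first - qp_second
    if qp_diff < 0:
        qp_diff = 0  # reset negative differences to zero, as noted above
    shifted = t_first << (qp_diff // 6)
    # rounding division by M, with M/2 as the rounding offset
    return (shifted * SCALE_MATRIX[qp_diff % 6] + M // 2) // M
```

For Qp differences that are multiples of 6 the result tracks the floating-point factor 2**(qp_diff/6) exactly; otherwise it approximates it to within the table's precision.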
The scaling may be performed according to the following relationship:
Qp _ Diff = Qp_ FirstLayer - Qp _ SecondLayer
TSecondLayer = ((TFirstLayer « QP _ Diff Il 6) * ScaleMaMx[QP _ Diff%6 + 5] + M 12) » M
wherein / / denotes integer division, % denotes the modulo operation; M and ScaleMatrix are constants;
TsecondLayer and TFirstLayer denote the transform coefficients at the second layer and first layer, respectively; k is an integer and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively. Qp^Diff may be reset to zero when Qp _Diff is found to be less than zero .
The ScaleMatrix may be equal to [291 325 364 408 457 512 573 642 719 806 902] and the M may be equal to 512.
The method may further comprise combining the scaled first-layer transform coefficient with a second-layer transform coefficient thereby creating a combined coefficient.
The method may further comprise generating a combined bitstream comprising the combined coefficient.
The method may further comprise determining a first-layer transform-coefficient-dependent weighting factor, SF, and a second-layer transform-coefficient-dependent weighting factor, Ss, wherein the scaling is performed according to the following relationship:

T_SecondLayer = T_FirstLayer * 2^((SF * Qp_FirstLayer - Ss * Qp_SecondLayer) / k)

wherein T_SecondLayer and T_FirstLayer denote a transform coefficient at the second layer and first layer, respectively; k is an integer; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
The method may further comprise: determining a first-layer transform-coefficient-dependent weighting factor, SF; determining a second-layer transform-coefficient-dependent weighting factor, Ss; and wherein the scaling is performed according to the following relationship:
Qp_Diff = SF * Qp_FirstLayer - Ss * Qp_SecondLayer
T_SecondLayer = ((T_FirstLayer << Qp_Diff // 6) * ScaleMatrix[Qp_Diff % 6] + M/2) >> M
wherein // denotes integer division and % denotes the modulo operation; M and ScaleMatrix are constants; T_SecondLayer and T_FirstLayer denote the transform coefficients at the second layer and first layer, respectively; k is an integer; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively.
SF and Ss may be explicitly present in the multi-layer bitstream.
SF and Ss may be derived from the multi-layer bitstream.
The method may further comprise: determining a multiplicative transform-coefficient-dependent weighting factor, W1; determining an additive transform-coefficient-dependent weighting factor, W2; and wherein the scaling is performed according to the following relationship:
Qp_Diff = W1 * (Qp_FirstLayer - Qp_SecondLayer) + W2
T_SecondLayer = ((T_FirstLayer << Qp_Diff // 6) * ScaleMatrix[Qp_Diff % 6] + M/2) >> M
wherein // denotes integer division and % denotes the modulo operation; M and ScaleMatrix are constants; T_SecondLayer and T_FirstLayer denote the transform coefficients at the second layer and first layer, respectively; k is an integer; and Qp_FirstLayer and Qp_SecondLayer are the quantization parameters for the first layer and second layer, respectively. W1 and W2 may be explicitly present in the multi-layer bitstream.
W1 and W2 may be derived from the multi-layer bitstream.
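The W1/W2 weighting only changes how Qp_Diff is formed; the remainder of the scaling is unchanged. A sketch under the same assumptions as before (">> M" read as division by the power-of-two constant M):

```python
SCALE_MATRIX = (512, 573, 642, 719, 806, 902)  # approximates 512 * 2**(i/6)
M = 512

def scale_with_weights(t_first, qp_first, qp_second, w1=1, w2=0,
                       scale_matrix=SCALE_MATRIX, m=M):
    """Weighted variant: W1 multiplies the quantization-parameter
    difference and W2 is an additive offset. W1 = 1, W2 = 0 reduces to
    the unweighted relationship."""
    qp_diff = w1 * (qp_first - qp_second) + w2
    if qp_diff < 0:          # negative differences reset to zero, as above
        qp_diff = 0
    return ((t_first << (qp_diff // 6)) * scale_matrix[qp_diff % 6] + m // 2) // m
```

Note that doubling W1 while halving the raw QP difference leaves the scaled coefficient unchanged, which is why W1 and W2 can be carried in, or derived from, the bitstream without altering the core scaling machinery.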
The method may further comprise inverse quantizing the combined coefficient thereby generating a de-quantized combined transform coefficient.
The method may further comprise inverse transforming the de-quantized combined transform coefficient thereby generating a spatial-domain residual value. The method may further comprise combining the spatial-domain residual value with a spatial-domain prediction value.
According to a fifteenth aspect of the invention, there is provided a system for scaling transform coefficients in a multi-layer bitstream, the system comprising: a first parameter determiner for determining a first-layer quantization parameter based on the multi-layer bitstream; a second parameter determiner for determining a second-layer quantization parameter based on the multi-layer bitstream; and a scaler for scaling a first-layer transform coefficient based on the first-layer quantization parameter and the second-layer quantization parameter.
According to a sixteenth aspect of the invention, there is provided a method for controlling entropy coding processes, the method comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to the target macroblock; c) determining a first macroblock indicator indicating whether the first adjacent macroblock is coded with reference to another layer; d) determining a second macroblock indicator indicating whether the second adjacent macroblock is coded with reference to another layer; and e) determining an entropy coding control value based on the first macroblock indicator and the second macroblock indicator. The method may further comprise using the entropy coding control value to encode an intra-prediction mode.
The method may further comprise using the entropy coding control value to decode an intra-prediction mode.
The method may further comprise using the entropy coding control value to encode the target macroblock. The method may further comprise using the entropy coding control value to decode the target macroblock.
The target macroblock may be a chroma macroblock. A macroblock may be determined to be coded with reference to another layer when the macroblock is of type IntraBL.
The entropy coding control value may comprise a context. The context may be based on cumulative macroblock information.
According to a seventeenth aspect of the invention, there is provided a method for controlling entropy coding processes, the method comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to the target macroblock; c) determining whether the first adjacent macroblock is available; d) determining whether the first adjacent macroblock is coded in inter prediction mode; e) determining whether the first adjacent macroblock is encoded in the spatial domain; f) determining whether the first adjacent macroblock is intra predicted with a DC prediction mode; g) determining whether the first adjacent macroblock is coded with reference to another layer; h) setting a first adjacent block flag to one when any of steps c) through g) are true; i) determining whether the second adjacent macroblock is available; j) determining whether the second adjacent macroblock is coded in inter prediction mode; k) determining whether the second adjacent macroblock is encoded in the spatial domain; l) determining whether the second adjacent macroblock is intra predicted with a DC prediction mode; m) determining whether the second adjacent macroblock is coded with reference to another layer; n) setting a second adjacent block flag value to one when any of steps i) through m) are true; and o) adding the first adjacent block flag value and the second adjacent block flag value to produce an entropy coding control value.
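The flag-and-sum structure of the seventeenth aspect can be sketched directly. The per-macroblock conditions (steps c) through g) and i) through m)) are taken as caller-supplied booleans, so the sketch makes no assumption about how availability or coding mode is actually determined:

```python
def adjacent_block_flag(conditions):
    """Steps h) / n): the adjacent-block flag is one when any of the
    listed conditions for that macroblock holds, otherwise zero."""
    return 1 if any(conditions) else 0

def entropy_coding_control_value(first_conditions, second_conditions):
    """Step o): the sum of the two adjacent-block flags, usable as a
    context index in the range 0..2."""
    return (adjacent_block_flag(first_conditions)
            + adjacent_block_flag(second_conditions))
```

The resulting value selects among three contexts, which matches the common pattern of deriving a CABAC-style context increment from two neighbouring blocks.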
The method may further comprise using the entropy coding control value to encode the target macroblock.
The method may further comprise using the entropy coding control value to decode the target macroblock. The target macroblock may be a chroma macroblock.
A macroblock may be determined to be coded with reference to another layer when the macroblock is of type IntraBL.
A macroblock may be determined to be coded in the spatial domain when the macroblock is of type I_PCM.
The entropy coding control value may comprise a context.
The context may be based on cumulative macroblock information.
According to an eighteenth aspect of the invention, there is provided a method for prediction mode determination, the method comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to the target macroblock; and c) setting a target block estimated prediction mode to a predetermined mode when any of conditions i) through vi) are true: i) the first adjacent macroblock is available; ii) the first adjacent macroblock is coded in inter prediction mode; iii) the first adjacent macroblock is coded with reference to another layer; iv) the second adjacent macroblock is available; v) the second adjacent macroblock is coded in inter prediction mode; vi) the second adjacent macroblock is coded with reference to another layer.
The predetermined mode may be a DC prediction mode. The method may further comprise determining an actual prediction mode for the target block based on target block content.
The method may further comprise comparing the estimated prediction mode with the actual prediction mode.
The method may further comprise encoding a message, which instructs a decoder to use the estimated prediction mode to predict the target block when the actual prediction mode is the same as the estimated prediction mode.
The method may further comprise decoding a message, which instructs a decoder to use the estimated prediction mode to predict the target block when the actual prediction mode is the same as the estimated prediction mode.
The method may further comprise decoding a message, which instructs a decoder to use the actual prediction mode to predict the target block when the actual prediction mode is not the same as the estimated prediction mode.
The method may further comprise encoding a message, which instructs a decoder to use the actual prediction mode to predict the target block when the actual prediction mode is not the same as the estimated prediction mode. The target block estimated prediction mode may be a luma prediction mode.
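The estimate-then-signal flow of the eighteenth aspect can be sketched as follows. The mode labels and the neighbour-derived fallback are hypothetical placeholders; conditions i) through vi) are supplied by the caller as booleans:

```python
DC_PREDICTION = "DC"  # hypothetical label for the predetermined DC mode

def estimate_prediction_mode(conditions, predetermined=DC_PREDICTION,
                             neighbour_derived_mode=None):
    """Step c): any true condition among i)-vi) forces the predetermined
    mode; otherwise a neighbour-derived mode (caller-supplied here)
    would be used."""
    return predetermined if any(conditions) else neighbour_derived_mode

def signal_prediction_mode(actual_mode, estimated_mode):
    """Encoder side: when the actual mode matches the estimate, only a
    'use estimated mode' message is coded; otherwise the actual mode is
    coded explicitly."""
    if actual_mode == estimated_mode:
        return ("use_estimated_mode",)
    return ("use_actual_mode", actual_mode)
```

The decoder mirrors this: on a "use estimated mode" message it re-derives the estimate from the same neighbour conditions, so no mode bits need to be transmitted in the common case.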
According to a nineteenth aspect of the invention, there is provided a system for controlling entropy coding processes, the system comprising: a) a first identifier for identifying a first adjacent macroblock that is adjacent to a target macroblock; b) a second identifier for identifying a second adjacent macroblock that is adjacent to the target macroblock; c) a first indicator determiner for determining a first macroblock indicator indicating whether the first adjacent macroblock is coded with reference to another layer; d) a second indicator determiner for determining a second macroblock indicator indicating whether the second adjacent macroblock is coded with reference to another layer; and e) a value determiner for determining an entropy coding control value based on the first macroblock indicator and the second macroblock indicator.
According to a twentieth aspect of the invention, there is provided a method for combining layers in a multi-layer bitstream, the method comprising: a) receiving a bitstream comprising encoded image coefficients and encoded block pattern (Cbp) information wherein the Cbp information identifies regions in the bitstream that comprise transform coefficients; b) decoding the Cbp information; c) parsing the bitstream by using the Cbp information to identify bitstream regions comprising transform coefficients; d) scaling first-layer transform coefficients in the bitstream to match a characteristic of a second layer in the bitstream; e) adding the scaled, first-layer transform coefficients to second-layer transform coefficients to form combined coefficients in a combined layer; and f) calculating combined Cbp information for the combined layer wherein the combined Cbp information identifies regions in the combined layer that comprise transform coefficients.
The calculating may be performed only when coefficients in the second layer are predicted from the first layer.
The method may further comprise inverse transforming the combined coefficients thereby creating spatial domain values.
The method may further comprise selectively filtering regions of the spatial domain values based on the combined Cbp information.
The first layer and the second layer may have different spatial resolutions.
The first layer and the second layer may have different bit-depths.
The first layer may be a base layer.
The second layer may be an enhancement layer.
The calculating combined Cbp information may comprise testing the combined coefficients. The calculating combined Cbp information may comprise computing the binary-OR of the first layer and the second layer.
The calculating combined Cbp information may comprise scanning coefficient lists to identify regions with residual information.
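Steps d) through f) of the twentieth aspect can be sketched per block region. The scaling function and the block layout are caller-supplied assumptions; the Cbp recomputation shown is the "testing the combined coefficients" option:

```python
def combine_layers_with_cbp(first_blocks, second_blocks, scale):
    """Scale first-layer coefficients, add them to second-layer
    coefficients block by block, then set the combined Cbp bit for every
    block region that holds any non-zero residual."""
    combined_blocks, combined_cbp = [], 0
    for i, (first, second) in enumerate(zip(first_blocks, second_blocks)):
        block = [scale(f) + s for f, s in zip(first, second)]
        combined_blocks.append(block)
        if any(block):                 # region comprises transform coefficients
            combined_cbp |= 1 << i
    return combined_blocks, combined_cbp
```

The binary-OR option mentioned above would instead compute combined_cbp = first_cbp | second_cbp directly from the two layers' Cbp fields, which is cheaper but conservative: it can mark a region as coded even when scaled and second-layer coefficients cancel.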
According to a twenty-first aspect of the invention, there is provided a system for combining layers in a multi-layer bitstream, the system comprising: g) a receiver for receiving a bitstream comprising encoded image coefficients and encoded block pattern (Cbp) information wherein the Cbp information identifies regions in the bitstream that comprise transform coefficients; h) a decoder for decoding the Cbp information; i) a parser for parsing the bitstream by using the Cbp information to identify bitstream regions comprising transform coefficients; j) a scaler for scaling first-layer transform coefficients in the bitstream to match a characteristic of a second layer in the bitstream; k) an adder for adding the scaled, first-layer transform coefficients to second-layer transform coefficients to form combined coefficients in a combined layer; and l) a calculator for calculating combined Cbp information for the combined layer wherein the combined Cbp information identifies regions in the combined layer that comprise transform coefficients.
The calculating may be performed only when coefficients in the second layer are predicted from the first layer.
The system may further comprise an inverse transformer for inverse transforming the combined coefficients thereby creating spatial domain values.
The system may further comprise a filter for selectively filtering regions of the spatial domain values based on the combined Cbp information.
The first layer and the second layer may have different spatial resolutions.
The first layer and the second layer may have different bit-depths.
The first layer may be a base layer.
The second layer may be an enhancement layer.
The calculating combined Cbp information may comprise testing the combined coefficients. The calculating combined Cbp information may comprise computing the binary-OR of the first layer and the second layer.
The calculating combined Cbp information may comprise scanning coefficient lists to identify regions with residual information.
According to a twenty-second aspect of the invention, there is provided a method for selecting a reconstruction transform size when a transform size is not indicated in an enhancement layer, the method comprising: a) determining a lower-layer transform size; b) determining if the lower-layer transform size is substantially similar to a predefined transform size; c) selecting an inverse transform of the predefined transform size as a reconstruction transform when the lower-layer transform size is substantially similar to the predefined transform size; and d) selecting an inverse transform of a default transform size as a reconstruction transform when the lower-layer transform size is not substantially similar to the predefined transform size.
The default transform size may be used for parsing a bitstream regardless of the selection for a reconstruction transform.
The method may further comprise inverse transforming enhancement layer coefficients with the reconstruction transform. The predefined transform size may be 8x8. The predefined transform size may be 16x16.
The method may further comprise determining a prediction mode for the enhancement layer and performing steps a) through d) only when the enhancement layer prediction mode indicates that the enhancement layer is predicted from the lower layer.
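Steps a) through d) of the twenty-second aspect reduce to a simple size-selection rule. The concrete sizes below are assumptions for illustration (the text names 8x8 and 16x16 as predefined candidates; 4x4 is assumed here as the default):

```python
DEFAULT_SIZE = (4, 4)      # assumed default transform size
PREDEFINED_SIZE = (8, 8)   # the text also names 16x16 as a possibility

def select_reconstruction_transform(lower_layer_size,
                                    predefined=PREDEFINED_SIZE,
                                    default=DEFAULT_SIZE):
    """Inherit the predefined transform size when the lower layer uses
    it; otherwise fall back to the default. Note that, as stated above,
    parsing still uses the default size regardless of this selection."""
    if lower_layer_size == predefined:
        return predefined
    return default
```

Decoupling the parsing size from the reconstruction size is the key design point: the bitstream can always be parsed with the default layout, and the coefficients are reformatted afterwards when a larger reconstruction transform is selected.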
The method may further comprise extracting a plurality of enhancement-layer coefficients formatted for the default transform size. The method may further comprise reformatting for the predefined transform size the plurality of extracted enhancement-layer coefficients.
The method may further comprise extracting a plurality of quantized enhancement-layer coefficients formatted for the default transform size.
The method may further comprise reformatting for the predefined transform size the plurality of extracted quantized enhancement-layer coefficients thereby creating reformatted quantized enhancement-layer coefficients. The method may further comprise: i) inverse quantizing a plurality of lower-layer quantized transform coefficients thereby creating lower-layer transform coefficients; ii) scaling the lower-layer transform coefficients to match a characteristic of the enhancement layer thereby creating scaled, lower-layer transform coefficients; iii) inverse quantizing the reformatted quantized enhancement-layer coefficients thereby creating enhancement-layer transform coefficients; and iv) combining the scaled, lower-layer transform coefficients with the enhancement-layer transform coefficients to form combined coefficients.
The method may further comprise generating a combined bitstream comprising the combined coefficients.
The combined bitstream may further comprise an intra- prediction mode. The combined bitstream may further comprise a motion vector.
The method may further comprise inverse transforming the combined coefficients using the reconstruction transform thereby generating a spatial-domain residual value. The method may further comprise combining the spatial-domain residual value with the spatial-domain prediction value.
The method may further comprise: i) scaling a plurality of lower-layer quantized transform coefficients to match a characteristic of the enhancement layer thereby creating scaled, lower-layer quantized transform coefficients; and ii) combining the scaled, lower-layer quantized transform coefficients with the reformatted quantized enhancement-layer coefficients to form combined quantized coefficients. The method may further comprise inverse quantizing the combined quantized coefficients, thereby creating combined coefficients.
The method may further comprise generating a combined bitstream comprising the combined coefficients. The combined bitstream may further comprise an intra- prediction mode.
The combined bitstream may further comprise a motion vector.
The method may further comprise inverse transforming the combined coefficients using the reconstruction transform thereby generating a spatial-domain residual value.
The method may further comprise combining the spatial-domain residual value with the spatial-domain prediction value.
According to a twenty-third aspect of the invention, there is provided a system for selecting a reconstruction transform size when a transform size is not indicated in an enhancement layer, the system comprising: a) a size determiner for determining a lower-layer transform size; b) a determiner for determining if the lower-layer transform size is substantially similar to a predefined transform size; c) a first selector for selecting an inverse transform of the predefined transform size as a reconstruction transform when the lower-layer transform size is substantially similar to the predefined transform size; and d) a second selector for selecting an inverse transform of a default transform size as a reconstruction transform when the lower-layer transform size is not substantially similar to the predefined transform size.
Some embodiments of the present invention comprise methods and systems for processing and process management in a multi-layer bitstream.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF DRAWINGS
Fig. 1A is a diagram showing embodiments of the present invention comprising scaling of transform domain coefficients;
Fig. 1B is a diagram showing embodiments of the present invention comprising accumulation of quantized transform coefficients and scaling of quantized transform domain coefficients;
Fig. 2A is a diagram showing embodiments of the present invention comprising scaling of transform domain coefficients and bitstream rewriting without reconstruction;
Fig. 2B is a diagram showing embodiments of the present invention comprising accumulation of quantized transform coefficients or indices and bitstream rewriting without reconstruction;
Fig. 3 is a diagram showing embodiments of the present invention comprising transform size selection;
Fig. 4 is a diagram showing embodiments of the present invention comprising conditional transform size indication and selection;
Fig. 5 is a diagram showing embodiments of the present invention comprising coefficient scaling based on quantization parameters;
Fig. 6 is a diagram showing embodiments of the present invention comprising calculation of an entropy encoder control value based on adjacent macroblock data;
Fig. 7 is a diagram showing embodiments of the present invention comprising determination of an entropy encoder control value based on a combination of adjacent macroblock conditions;
Fig. 8 is a diagram showing embodiments of the present invention comprising a determination of an estimated prediction mode and prediction mode signaling based on adjacent macroblock data;
Fig. 9 is a diagram showing embodiments of the present invention comprising calculation of a combined layer coded block pattern;
Fig. 10 is a diagram showing embodiments of the present invention comprising selective transform accumulation based on layer spatial resolutions;
Fig. 11 is a block diagram showing embodiments of the present invention comprising transform size selection;
Fig. 12 is a block diagram showing embodiments of the present invention comprising coefficient scaling based on quantization parameters;
Fig. 13 is a diagram showing embodiments of the present invention comprising calculation of an entropy encoder control value based on adjacent macroblock data;
Fig. 14 is a diagram showing embodiments of the present invention comprising calculation of a combined layer coded block pattern; and
Fig. 15 is a block diagram showing embodiments of the present invention comprising selective transform accumulation based on layer spatial resolutions.
BEST MODE FOR CARRYING OUT THE INVENTION
Embodiments of the present invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The figures listed above are expressly incorporated as part of this detailed description.
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the methods and systems of the present invention is not intended to limit the scope of the invention but is merely representative of the presently preferred embodiments of the invention.
Elements of embodiments of the present invention may be embodied in hardware, firmware and/or software. While exemplary embodiments revealed herein may only describe one of these forms, it is to be understood that one skilled in the art would be able to effectuate these elements in any of these forms while resting within the scope of the present invention.
Some embodiments of the present invention comprise methods and systems for residual accumulation for scalable video coding. Some embodiments comprise methods and systems for decoding a scalable bit-stream. The bit-stream may be generated by an encoder and subsequently stored and/or transmitted to a decoder. The decoder may parse the bit-stream and convert the parsed symbols into a sequence of decoded images.
A scalable bit-stream may contain different representations of an original image sequence. In one specific example, a first layer in the bit-stream contains a low quality version of the image sequence, and a second layer in the bit-stream contains a higher quality version of the image sequence. In a second specific example, a first layer in the bit-stream contains a low resolution version of the image sequence, and a second layer in the bit-stream contains a higher resolution version of the image sequence. More sophisticated examples will be readily apparent to those skilled in the art, and these more sophisticated examples may include a plurality of representations of an image sequence and/or a bit-stream that contains a combination of different qualities and resolutions. In order to reduce the bit-rate of the encoder output, a scalable bit-stream may comprise a form of inter-layer prediction. Exemplary embodiments may comprise inter-layer prediction within the scalable video extensions for the AVC|H.264 video coding standards. These extensions are commonly known as SVC, and the SVC system, described in T.
Wiegand, G. Sullivan, J. Reichel, H. Schwarz and M. Wien, "Joint Draft 9 of SVC amendment (revision 2)", JVT-V201, Marrakech, Morocco, January 13-19, 2007, is incorporated herein by reference. In the SVC system, inter-layer prediction is realized by projecting motion and mode information from an enumerated lower layer to an enumerated higher layer. In addition, prediction residual is projected from an enumerated lower layer to an enumerated higher layer. The higher layer bit-stream may then contain additional residual to improve the quality of the decoded output. ISO/IEC JTC1/SC29/WG11, Information Technology - Coding of Audio-Visual Objects - Part 10: Advanced Video Coding, ISO/IEC 14496-10, 2005, is also incorporated herein by reference. ITU-T Recommendation H.264: "Advanced video coding for generic audiovisual services", March 2003, is also incorporated herein by reference.
SVC to AVC Bit-Stream Rewriting
The current SVC system requires transcoding to support an AVC device at any layer besides the base layer. This limits the application space for SVC. Embodiments of the present invention comprise changes to the syntax and semantics of the coarse grain scalable layer to enable the fast rewriting of an SVC bit-stream into an AVC compliant bit-stream. In some embodiments, a network device can rewrite the SVC data into an AVC bit-stream without drift and without needing to reconstruct the sequence. In some embodiments, this may be accomplished by merging multiple coarse grain scalable layers.
Some embodiments of the present invention comprise SVC to AVC bit-stream rewriting. This process may comprise taking an SVC bit-stream as input and producing an AVC bit-stream as output. Conceptually, this is similar to transcoding. However, some embodiments exploit the single loop structure of SVC and enable the direct mapping of an SVC bit-stream onto AVC syntax elements. Some embodiments may perform this function without introducing drift and without reconstructing the video sequence.
Embodiments that enable the fast rewriting of an SVC to AVC bit-stream obviate the need to carry the additional overhead introduced by SVC end-to-end. Thus, it can be discarded when the scalable functionality is no longer needed. These embodiments can greatly expand the application space for SVC. As a non-limiting example of an exemplary embodiment, consider the scenario where the final transmission link is rate constrained. This could be a wireless link to a portable device, or alternatively, a wireless link to a high resolution display. In either case, we can employ the scalability features of SVC to intelligently adapt the rate at the transmitter. However, since the receiving device has no need for the SVC functionality, it is advantageous to remove the SVC component from the bit-stream. This improves the visual quality of the transmitted video, as fewer bits are devoted to overhead and more bits are available for the visual data.
As a second non-limiting example of bit-stream rewriting, consider a system that supports a large number of heterogeneous devices. Devices connected via slow transmission links receive the AVC base layer that is part of the SVC bit-stream; devices connected via faster transmission links receive the AVC base layer plus additional SVC enhancement. To view this enhancement data, these receivers must be able to decode and reconstruct the SVC sequence. For applications with a larger number of these devices, this introduces a large expense for deploying SVC. Set-top boxes (or other decoding hardware) must be deployed at each receiver. As a more cost effective solution, the process of bit-stream rewriting from SVC to AVC within the network could be employed to deliver AVC data to all devices. This reduces the deployment cost of SVC.
As a third non-limiting example of bit-stream rewriting, consider an application that utilizes SVC for storing content on a media server for eventual delivery to a client device. The SVC format is very appealing as it requires less storage space compared to archiving multiple AVC bit-streams at the server. However, it also requires a transcoding operation in the server to support AVC clients or SVC capabilities at the client. Enabling SVC-to-AVC bit-stream rewriting allows the media server to utilize SVC for coding efficiency without requiring computationally demanding transcoding and/or SVC capabilities throughout the network.
As a fourth non-limiting example of bit-stream rewriting, the process of SVC-to-AVC bit-stream rewriting simplifies the design of SVC decoder hardware. Currently, an SVC decoder requires modifications throughout the AVC decoding and reconstruction logic. With the enablement of SVC-to-AVC bit-stream rewriting, the differences between AVC and SVC are localized to the entropy decoder and coefficient scaling operations. This simplifies the design of the SVC decoding process, as the final reconstruction loop is identical to the AVC reconstruction process. Moreover, the SVC reconstruction step is guaranteed to contain only one prediction operation and one inverse transform operation per block. This is different than current SVC operations, which require multiple inverse transform operations and variable reference data for intra prediction.
Some embodiments of the present invention comprise changes to the SVC coarse grain scalability layer to enable the direct mapping of an SVC bit-stream to an AVC bit-stream.
These changes comprise a modified IntraBL mode and restrictions on the transform for BLSkip blocks in inter-coded enhancement layers. In some embodiments, these changes may be implemented by a flag sent on a sequence basis and, optionally, on a slice basis.
Inter-Coded Blocks
Some embodiments comprise changes for inter-coded blocks. These changes comprise: Blocks that are inferred from base layer blocks must utilize the same transform as the base layer block. For example, if a block in the coarse grain scalable layer has base_mode_flag equal to one and the co-located base layer block utilizes the 4x4 transform, then the enhancement layer block must also utilize a 4x4 transform.
The reconstruction of a block that is inferred from base layer blocks and utilizes residual prediction shall occur in the transform domain. Currently, the base layer block would be reconstructed in the spatial domain and then the residual transmitted in the enhancement layer. In these embodiments, the transform coefficients of the base layer block are scaled at the decoder, refined by information in the enhancement layer and then inverse transformed.
The smoothed_reference_flag shall be zero when the avc_rewrite flag is one.
Intra-Coded Blocks
Intra-coded blocks provide additional barriers to the SVC-to-AVC rewriting problem. Within the CGS system, a block in the enhancement layer may be coded with the IntraBL mode. This mode signals that the intra-coded block in the base layer should be decoded and used for prediction. Then, additional residual may be signaled in the enhancement layer. Within the SVC-to-AVC rewriting system, this creates difficulties since the reconstructed intra-coded block cannot be described as a spatial prediction of its neighbors plus a signaled residual. Thus, the intra-coded block must be transcoded from SVC to AVC. This requires added computational complexity; it also introduces coding errors that may propagate via motion compensation.
Some embodiments of the present invention may be described with reference to Figure 1A. A decoder or rewriter (system) according to these embodiments comprises a first inverse quantizer 5, a scaler 6, a second inverse quantizer 11, a first adder (coefficient combiner) 7, an inverse transformer 10, and a second adder (second combiner) 9. In these embodiments, a base layer residual (base layer quantized transform coefficients) 1, prediction mode data 2 and enhancement layer residual (enhancement layer quantized transform coefficients) 3 are received at the decoder or rewriter. Neighboring block data 4 is also known at the decoder/rewriter. The base layer residual data 1 may be inverse quantized by the first inverse quantizer 5 thereby creating base-layer transform coefficients, and the transform coefficients may be scaled by the scaler 6 to match a characteristic of the enhancement layer thereby creating scaled base-layer transform coefficients. In some embodiments, the matched characteristic may comprise a quantization parameter characteristic. The enhancement layer residual 3 may also be inverse quantized by the second inverse quantizer 11 and added by the first adder 7 to the scaled base-layer transform coefficients thereby forming combined coefficients. The combined coefficients are then inverse transformed by the inverse transformer 10 to produce spatial domain intensity values. In some embodiments, the enhancement layer information may be ignored when it is not needed. Prediction mode data 2 and neighboring block data 4 are used to determine a prediction block by intra-prediction 8. The prediction block is then added by the second adder 9 to the spatial domain intensity values from the base and enhancement layers to produce a decoded block 12.
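The Fig. 1A data flow can be sketched end to end. All helper functions are caller-supplied assumptions standing in for elements 5, 6, 11, 10 and the intra prediction 8; coefficients are modeled as flat lists for simplicity:

```python
def decode_block_fig_1a(base_q, enh_q, inverse_quantize, scale,
                        inverse_transform, prediction_block):
    """Transform-domain accumulation as described for Fig. 1A."""
    base_t = [inverse_quantize(c) for c in base_q]      # first inverse quantizer 5
    scaled = [scale(c) for c in base_t]                 # scaler 6
    enh_t = [inverse_quantize(c) for c in enh_q]        # second inverse quantizer 11
    combined = [s + e for s, e in zip(scaled, enh_t)]   # first adder 7
    residual = inverse_transform(combined)              # inverse transformer 10
    # second adder 9: residual plus intra prediction yields the decoded block 12
    return [r + p for r, p in zip(residual, prediction_block)]
```

The key property of this arrangement is that only one inverse transform is applied per block, with the two layers accumulated in the transform domain beforehand.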
Some embodiments of the present invention may be described with reference to Figure 1B. In these embodiments, a base layer residual 1, prediction mode 2 and enhancement layer residual 3 are received at a decoder or rewriter. Neighboring block data 135 is also known at the decoder/rewriter and may be used for prediction 134. In these embodiments, the base layer quantized transform coefficients 1 may be scaled 130 to match a characteristic of the enhancement layer, thereby creating scaled base-layer transform coefficients. In some embodiments, the matched characteristic may comprise a quantization parameter characteristic. The enhancement-layer quantized transform coefficients 3 may be added 131 to the scaled base-layer quantized transform coefficients to create combined quantized coefficients. The combined quantized coefficients may then be inverse quantized 132 to produce de-quantized combined coefficients, which may then be inverse transformed 133 to produce combined spatial domain values. These spatial domain values may then be combined 136 with prediction data to form a reconstructed image 137.
Some embodiments of the present invention may be described with reference to Figure 2A. In these embodiments, the bitstream is re-encoded without complete reconstruction of the image. In these embodiments, base layer (BL) residual data 1 may be received at a decoder, transcoder, decoder portion of an encoder or another device or module. Enhancement layer (EL) residual data 3 may also be received at the device or module. In these embodiments, the BL residual 1 may be inverse quantized by a first inverse quantizer 5 to produce BL transform coefficients. These BL transform coefficients may then be scaled by a scaler 6 to match a characteristic of the enhancement layer, thereby creating scaled BL transform coefficients. In some embodiments, this enhancement layer characteristic may be a quantization parameter, a resolution parameter or some other parameter that relates the base layer to the enhancement layer. The enhancement layer data 3 may also be inverse quantized by a second inverse quantizer 11 to produce enhancement layer coefficients 18. The enhancement layer coefficients 18 may then be combined with the scaled BL coefficients 16 by a coefficient combiner 19 to produce combined coefficients 17. These combined coefficients may then be rewritten to a reduced-layer or single-layer bitstream with a bitstream encoder (bitstream generator) 13. The bitstream encoder 13 may also write prediction data 2 into the bitstream. The functions of the bitstream encoder 13 may also comprise quantization, entropy coding and other functions. Some embodiments of the present invention may be described with reference to Figure 2B. In these embodiments, the bitstream is re-encoded without complete reconstruction of the image and without inverse quantization. In these embodiments, base layer (BL) residual data 36 may be received at a decoder, transcoder, decoder portion of an encoder or another device or module. Enhancement layer (EL) data 37 may also be received at the device or module.
In these embodiments, the BL signal 36 and the enhancement layer signal 37 may be entropy decoded to produce quantized coefficients or indices 21 and 23. The BL quantization indices 21 may then be scaled 20 to match a characteristic of the enhancement layer, thereby creating scaled BL indices. In some embodiments, this enhancement layer characteristic may be a quantization parameter, a resolution parameter or some other parameter that relates the base layer to the enhancement layer. The scaled BL indices 26 may then be combined 24 with the EL indices 23 to produce combined indices 27. These combined indices may then be rewritten to a reduced-layer or single-layer bitstream 28 with a bitstream encoder 25. The bitstream encoder 25 may also write prediction data 35 into the bitstream. The functions of the bitstream encoder 25 may also comprise quantization, entropy coding and other functions.
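The index-domain path of Figure 2B could be sketched as below; `scale_index` is a hypothetical placeholder for the Qp-based mapping that is detailed later in this description:

```python
# Sketch of the Figure 2B path: no inverse quantization is performed;
# the quantization indices themselves are scaled and combined before
# being re-encoded into the single-layer bitstream.

def combine_indices(bl_indices, el_indices, scale_index):
    scaled_bl = [scale_index(i) for i in bl_indices]       # scaling 20
    return [b + e for b, e in zip(scaled_bl, el_indices)]  # combiner 24
```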
In these embodiments, the base layer block does not need to be completely reconstructed. Instead, the intra-prediction mode and residual data are both mapped to the enhancement layer. Then, additional residual data is added from the enhancement layer. Finally, the block is reconstructed. The advantage of this approach is that the enhancement block may be written into a single-layer bit-stream without loss and without requiring the base layer to be completely decoded.
Some embodiments of the present invention comprise propagation of motion data between layers in a CGS system without the use of a residual prediction flag. These embodiments comprise a modified IntraBL method that propagates the intra prediction mode from the base layer to the enhancement layer. Intra prediction is then performed at the enhancement layer. In these embodiments, the transform type for IntraBL blocks must be the same as that of the co-located base layer block. For example, if the base layer block employs the 8x8 transform, then the enhancement layer block must also utilize the 8x8 transform. In some embodiments, to enable the independent processing of the bit-stream, an 8x8 transform flag may still be transmitted in an enhancement layer.
In some exemplary embodiments, blocks coded by the 16x16 transform in the base layer are also coded by the 16x16 transform in the enhancement layer. The enhancement layer blocks, however, are transmitted with the 4x4 scan pattern and method. That is, in some embodiments, the DC and AC coefficients of the 16x16 blocks are not sent separately. Some embodiments of the present invention may be described with reference to Figure 3 and Figure 11. A system according to these embodiments comprises a size determiner 201, a determiner 202, a first selector 203, and a second selector 204. In these embodiments, comprising multi-layer images, intra-prediction modes and transform data may be inferred from one layer to another. In some embodiments, a first-layer transform size may be determined by the size determiner 201 (30). The first layer may be a base layer or a layer from which another layer is predicted. In these embodiments, a predetermined transform size is established. The first-layer transform size is then compared to the predetermined (predefined) transform size. That is, the determiner 202 determines if the first-layer transform size is the same as (substantially similar to) a predetermined transform size. If the first-layer transform size is the same (31) as the predetermined transform size, the predetermined transform size is selected by the first selector 203 (33) for inverse transformation operations. If the first-layer transform size is not the same (31) as the predetermined transform size, a default transform size is selected by the second selector 204 (32) for inverse transformation operations. In some embodiments, the predetermined transform size may be 8x8 and the default transform size may be 4x4.
In some embodiments, the predetermined transform size may also be related to a special scan pattern and method. In these embodiments, the relationship between the first-layer transform size and the predetermined transform size may also trigger special encoding methods and patterns. For example, in some embodiments, the predetermined transform size may be 16x16, and a match between the predetermined 16x16 size and the actual lower-layer size may indicate that the 16x16 transform is to be used, but that the data is to be encoded with a 4x4 scan pattern and method wherein AC and DC coefficients are transmitted together. Some embodiments of the present invention may be described with reference to Figure 4. In these embodiments, a multi-layer bitstream is parsed 40 and processed to determine a base-layer transform size and to produce BL coefficient values. The enhancement layer of the bitstream is also parsed 41 to determine whether a transform indicator is present. If the enhancement layer transform indicator is present in the bitstream 42, the indicated transform size may be used for inverse transformation of the EL coefficients. If the enhancement layer transform indicator is not present in the bitstream 42, it is determined whether the base layer transform size is 8x8 44. If the base layer transform size is 8x8, the 8x8 transform size is used to inverse transform the enhancement layer 46. If the base layer transform size is not 8x8, a default transform size, such as 4x4, may be used to inverse transform the enhancement layer 45.
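The transform-size decision of Figure 4 can be sketched as a small helper; the function name and argument conventions are illustrative, not part of any standard:

```python
# Sketch of the Figure 4 inference: an explicit enhancement-layer transform
# indicator wins; otherwise the size is inferred from the base layer.
DEFAULT_TRANSFORM = 4        # 4x4
PREDETERMINED_TRANSFORM = 8  # 8x8

def infer_el_transform_size(el_transform_indicator, bl_transform_size):
    """Return the transform size (4 or 8) to use for the enhancement layer."""
    if el_transform_indicator is not None:
        # Indicator present in the bitstream (42): use the signaled size.
        return el_transform_indicator
    # Indicator absent: inherit 8x8 from the base layer (46),
    # else fall back to the default 4x4 size (45).
    return PREDETERMINED_TRANSFORM if bl_transform_size == 8 else DEFAULT_TRANSFORM
```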
In some embodiments of the present invention, the intra-prediction mode can be directly copied from the base layer by inferring the intra-prediction mode from the base layer in an IntraBL block. In some alternative embodiments, it can be differentially coded relative to the base layer mode. In some embodiments, the current method for signaling intra prediction modes in AVC may be used. However, in these embodiments, the predicted mode (or most probable mode) is set equal to the base layer mode. In some embodiments, the 8x8 transform flag may be omitted from the enhancement layer bit-stream, and the transform may be inferred from the base layer mode.
In some embodiments, the 16x16 transform coefficients may be signaled in the same manner in both the base and enhancement layers. The presence of the 16x16 transform can be signaled with an additional flag in the enhancement layer or inferred from the base layer bit-stream.
Some embodiments of the present invention comprise a residual prediction flag for IntraBL blocks. These embodiments enable the adaptive use of the base layer residual for refining the enhancement-layer intra-predicted block.
In some embodiments of the present invention, all modes in the SVC bit-stream that cannot be directly mapped to an AVC bit-stream may be disabled by the encoder. Signaling for these embodiments may be done in the SVC bit-stream. In some exemplary embodiments, this signaling may occur in the sequence header, sequence parameter set, picture parameter set, slice header or elsewhere. In some embodiments, this signaling may occur in an SEI message. In an exemplary embodiment, this signaling may occur in a spatial scalability SEI message. In some embodiments, this signaling may occur by other out-of-band methods and, in some cases, will not require normative changes to the SVC decoding operation.
In some embodiments, when the encoder signals this operating mode, a decoder may assume that the encoder is generating a bit-stream that can be translated to AVC. In some exemplary embodiments, the encoder may not utilize the IntraBL block mode or the smoothed reference tools when operating in this mode. Also, in these embodiments, the encoder may ensure that the residual data can be incorporated by scaling the base layer transform coefficients and then adding the transmitted residual. These embodiments may require the encoder to utilize the same transform method in the base and enhancement layers.
SVC-to-AVC Bit-stream Rewriting for CGS: Syntax
F.7.3.2 Sequence parameter set SVC extension syntax
F.7.3.6.3 Residual in scalable extension syntax
F.7.3.2 Sequence parameter set SVC extension semantics
nal_unit_extension_flag equal to 0 specifies that the parameters that specify the mapping of simple_priority_id to (dependency_id, temporal_level, quality_level) follow next in the sequence parameter set. nal_unit_extension_flag equal to 1 specifies that the parameters that specify the mapping of simple_priority_id to (dependency_id, temporal_level, quality_level) are not present. When nal_unit_extension_flag is not present, it shall be inferred to be equal to 1. The NAL unit syntax element extension_flag of all NAL units with nal_unit_type equal to 20 and 21 that reference the current sequence parameter set shall be equal to nal_unit_extension_flag.
NOTE - When profile_idc is not equal to 83, the syntax element extension_flag of all NAL units with nal_unit_type equal to 20 and 21 that reference the current sequence parameter set shall be equal to 1.
number_of_simple_priority_id_values_minus1 plus 1 specifies the number of values for simple_priority_id, for which a mapping to (dependency_id, temporal_level, quality_level) is specified by the parameters that follow next in the sequence parameter set. The value of number_of_simple_priority_id_values_minus1 shall be in the range of 0 to 63, inclusive.
priority_id, dependency_id_list[ priority_id ], temporal_level_list[ priority_id ], quality_level_list[ priority_id ] specify the inferring process for the syntax elements dependency_id, temporal_level, and quality_level as specified in subclause F.7.4.1. For all values of priority_id for which dependency_id_list[ priority_id ], temporal_level_list[ priority_id ], and quality_level_list[ priority_id ] are not present, dependency_id_list[ priority_id ], temporal_level_list[ priority_id ], and quality_level_list[ priority_id ] shall be inferred to be equal to 0.
extended_spatial_scalability specifies the presence of syntax elements related to geometrical parameters for the base layer upsampling. When extended_spatial_scalability is equal to 0, no geometrical parameter is present in the bitstream. When extended_spatial_scalability is equal to 1, geometrical parameters are present in the sequence parameter set. When extended_spatial_scalability is equal to 2, geometrical parameters are present in slice_data_in_scalable_extension. The value of 3 is reserved for extended_spatial_scalability. When extended_spatial_scalability is not present, it shall be inferred to be equal to 0.
scaled_base_left_offset specifies the horizontal offset between the upper-left pixel of an upsampled base layer picture and the upper-left pixel of a picture of the current layer in units of two luma samples.
When scaled_base_left_offset is not present, it shall be inferred to be equal to 0.
The variable ScaledBaseLeftOffset is defined as follows:
ScaledBaseLeftOffset = 2 * scaled_base_left_offset (F-40)
The variable ScaledBaseLeftOffsetC is defined as follows:
ScaledBaseLeftOffsetC = ScaledBaseLeftOffset / SubWidthC (F-41)
scaled_base_top_offset specifies the vertical offset between the upper-left pixel of an upsampled base layer picture and the upper-left pixel of a picture of the current layer in units of two luma samples. When scaled_base_top_offset is not present, it shall be inferred to be equal to 0.
The variable ScaledBaseTopOffset is defined as follows:
ScaledBaseTopOffset = 2 * scaled_base_top_offset (F-42)
The variable ScaledBaseTopOffsetC is defined as follows:
ScaledBaseTopOffsetC = ScaledBaseTopOffset / SubHeightC (F-43)
scaled_base_right_offset specifies the horizontal offset between the bottom-right pixel of an upsampled base layer picture and the bottom-right pixel of a picture of the current layer in units of two luma samples. When scaled_base_right_offset is not present, it shall be inferred to be equal to 0.
The variable ScaledBaseRightOffset is defined as follows:
ScaledBaseRightOffset = 2 * scaled_base_right_offset (F-44)
The variable ScaledBaseWidth is defined as follows:
ScaledBaseWidth = PicWidthInMbs * 16 - ScaledBaseLeftOffset - ScaledBaseRightOffset (F-45)
The variable ScaledBaseWidthC is defined as follows:
ScaledBaseWidthC = ScaledBaseWidth / SubWidthC (F-46)
scaled_base_bottom_offset specifies the vertical offset between the bottom-right pixel of an upsampled base layer picture and the bottom-right pixel of a picture of the current layer in units of two luma samples. When scaled_base_bottom_offset is not present, it shall be inferred to be equal to 0.
The variable ScaledBaseBottomOffset is defined as follows:
ScaledBaseBottomOffset = 2 * scaled_base_bottom_offset (F-47)
The variable ScaledBaseHeight is defined as follows:
ScaledBaseHeight = PicHeightInMbs * 16 - ScaledBaseTopOffset - ScaledBaseBottomOffset (F-48)
The variable ScaledBaseHeightC is defined as follows:
ScaledBaseHeightC = ScaledBaseHeight / SubHeightC (F-49)
chroma_phase_x_plus1 specifies the horizontal phase shift of the chroma components in units of quarter sampling space in the horizontal direction of a picture of the current layer. When chroma_phase_x_plus1 is not present, it shall be inferred to be equal to 0. chroma_phase_x_plus1 is in the range 0..1; the values of 2 and 3 are reserved.
chroma_phase_y_plus1 specifies the vertical phase shift of the chroma components in units of quarter sampling space in the vertical direction of a picture of the current layer. When chroma_phase_y_plus1 is not present, it shall be inferred to be equal to 1. chroma_phase_y_plus1 is in the range 0..2; the value of 3 is reserved.
NOTE - The chroma type specified in the vui_parameters should be consistent with the chroma phase parameters chroma_phase_x_plus1 and chroma_phase_y_plus1 in the same sequence_parameter_set.
avc_rewrite_flag specifies that the transmitted sequence can be rewritten without degradation as an AVC bit-stream by only decoding and coding entropy codes and scaling transform coefficients. An alternative method for the IntraBL block is employed and restrictions are placed on transform size selection by the encoder.
avc_adaptive_rewrite_flag specifies that the avc_rewrite_flag will be sent in the slice header.
Some embodiments of the present invention comprise a scaling process that maps quantized transform coefficients to either a "de-quantized" version or an alternative quantization domain. In some embodiments, when the avc_rewrite_flag, described above, signals that these processes are disabled, then the decoded transform coefficients in all layers may be "de-quantized" according to the process defined in the current H.264/AVC video coding standard. However, when the avc_rewrite_flag signals that these embodiments are enabled, then the decoded, quantized transform coefficients or indices are not "de-quantized" in layers preceding the desired enhancement layer. Instead, the quantized coefficients or indices are mapped from a lower layer (specifically, a layer on which a desired enhancement layer depends) to the next higher layer (specifically, a layer closer to the desired enhancement layer, in order of dependency, that depends explicitly on the previously-mentioned lower layer). Some embodiments of the present invention may be described with reference to Figure 5 and Figure 12. A system according to these embodiments comprises a first parameter determiner 211, a second parameter determiner 212, and a scaler 213. In these embodiments, the mapping process may operate as follows. First, the quantization parameter, or Qp value, in the lower layer bit-stream is determined by the first parameter determiner 211 (50). Then, the quantization parameter, or Qp value, in the higher layer is determined by the second parameter determiner 212 (51). Next, the lower-layer coefficients (first-layer transform coefficients) may be scaled (52) by a factor based on the quantization parameters at the scaler 213.
In some embodiments, the difference between the lower layer and higher layer Qp values may be computed. In some embodiments, the transform coefficients may be scaled with the following process:
T_HigherLayer[n] = T_LowerLayer[n] * 2^((Qp_LowerLayer - Qp_HigherLayer) / 6)

where T_HigherLayer and T_LowerLayer denote the transform coefficients at the higher layer and lower layer, respectively; n is an integer; and Qp_LowerLayer and Qp_HigherLayer are the quantization parameters for the lower layer and higher layer, respectively.
The calculation of the mapping process can be implemented in a number of ways to simplify calculation. For example, the following system is equivalent:
Qp_Diff = Qp_LowerLayer - Qp_HigherLayer

T_HigherLayer[n] = ( (T_LowerLayer[n] << (Qp_Diff // 6)) * ScaleMatrix[Qp_Diff % 6] + M/2 ) >> M

where // denotes integer division, % denotes the modulo operation, << and >> denote bit shifts, and M and ScaleMatrix are predefined constants.
One specific example of these pre-defined values is:
ScaleMatrix = [512 573 642 719 806 902], M = 512
However, it should be readily apparent that other values for M and ScaleMatrix may also be used. The simplified example above assumes that the value for Qp_Diff is always greater than 0. Accordingly, in some embodiments, applications may check the value for Qp_Diff prior to performing the scaling operation. When the value for Qp_Diff is less than zero, it can be re-assigned a value of zero prior to further processing. In some embodiments, it may be assumed that Qp_LowerLayer will be greater than or equal to Qp_HigherLayer.
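Under the assumption that the final ">> M" denotes the normalization by M (that is, a right shift by log2(M) = 9 when M = 512), the simplified mapping with the Qp_Diff clamp could be sketched as:

```python
ScaleMatrix = [512, 573, 642, 719, 806, 902]
M = 512  # normalization constant; division by M is the final right shift by 9

def map_coeff(t_lower, qp_lower, qp_higher):
    """Map one quantized transform coefficient from a lower layer to a
    higher layer, approximating t_lower * 2**((qp_lower - qp_higher)/6)."""
    qp_diff = qp_lower - qp_higher
    if qp_diff < 0:
        # The simplified formula assumes Qp_Diff >= 0; clamp as described above.
        qp_diff = 0
    # (t << Qp_Diff // 6) * ScaleMatrix[Qp_Diff % 6], normalized by M = 2**9.
    return ((t_lower << (qp_diff // 6)) * ScaleMatrix[qp_diff % 6] + M // 2) >> 9
```

For example, a Qp difference of exactly 6 doubles the coefficient, matching the 2^(Qp_Diff/6) model.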
In some alternative embodiments, the following system may be implemented:

Qp_Diff = Qp_LowerLayer - Qp_HigherLayer

T_HigherLayer[n] = ( (T_LowerLayer[n] << (Qp_Diff // 6)) * ScaleMatrix[Qp_Diff % 6 + 5] + M/2 ) >> M
In an exemplary embodiment, the pre-defined values may be selected as:
ScaleMatrix = [291 325 364 408 457 512 573 642 719 806 902], M = 512
In some embodiments, after the transform coefficients are mapped from a lower layer to a higher layer, in some cases utilizing a process described above, the coefficients may be refined. After refinement, a second scaling operation may be employed. This scaling operation may "de-quantize" the transform coefficients. While some embodiments described above only describe one lower layer and one higher layer, some embodiments may comprise more than two layers. For example, an exemplary three-layer case may function as follows: First, the lowest layer may be decoded. Then, transform coefficients may be mapped to the second layer via the method described above. The mapped transform coefficients may then be refined. Next, these transform coefficients may be mapped to a third layer using a method described above. These transform coefficients may then be refined, and the resulting coefficients may be "de-quantized" via a scaling operation such as the one defined by the AVC/H.264 video coding standard.
Some embodiments of the present invention may be described with reference to Figure 6 and Figure 13. A system according to these embodiments comprises a first identifier 221, a second identifier 222, a first indicator determiner 223, a second indicator determiner 224, and a value determiner 225. In these embodiments, information related to adjacent macroblocks may be used to inform an encoding or decoding operation for a target block or macroblock. In some embodiments, a first adjacent macroblock is identified by the first identifier 221 (60) and a second adjacent macroblock is identified by the second identifier 222 (61). A first adjacent macroblock indicator is then determined by the first indicator determiner 223 (62) and a second adjacent macroblock indicator is determined by the second indicator determiner 224 (63). An entropy coder control value may then be determined by the value determiner 225 (64), based on the adjacent macroblock indicators.
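One possible derivation of such a control value, consistent with the flag-summing procedure of Figure 7 described below (the condition set and the dictionary-based macroblock representation are hypothetical):

```python
# Each neighboring macroblock contributes 0 if it meets any listed condition
# and 1 otherwise; the sum (0, 1 or 2) serves as the entropy coder control value.

def mb_flag(mb):
    conditions = (
        mb is None,                       # macroblock not available
        mb and mb.get("inter_coded"),     # coded in inter-prediction mode
        mb and mb.get("spatial_domain"),  # encoded in the spatial domain
        mb and mb.get("dc_predicted"),    # intra-predicted with DC prediction
        mb and mb.get("inter_layer"),     # references a temporally-coincident layer
    )
    return 0 if any(conditions) else 1

def entropy_control_value(mb_a, mb_b):
    return mb_flag(mb_a) + mb_flag(mb_b)
```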
Some embodiments of the present invention may be described with reference to Figure 7. In these embodiments, a first adjacent macroblock is identified 71 and a second adjacent macroblock is identified 72. Attributes of the first adjacent macroblock may then be examined to determine if the first macroblock meets pre-defined conditions 73. The second adjacent macroblock may also be examined to determine whether conditions are met 74. In some embodiments, these conditions may comprise: whether a macroblock is not available, whether a macroblock is coded in inter-prediction mode, whether a macroblock is encoded in the spatial domain, whether a macroblock is intra-predicted with DC prediction and whether a macroblock is coded with reference to another temporally-coincident layer. If any of the conditions are met for the first macroblock 75, a first macroblock flag is set to indicate the compliance 80. If no conditions are met, the flag is set to indicate a lack of compliance 76. In some embodiments, the flag may be set to "zero" if any conditions are met 80 and the flag may be set to "one" if no conditions are met 76. The same process 74, 79 may be followed for the second adjacent macroblock, where a flag may be set to one value if a condition is met 81 and to another value if no conditions are met 78. When both adjacent macroblocks have been examined and related flags have been set, the flags may be added 83. The resultant value may then be used as an entropy coder control value. Some embodiments of the present invention may be described with reference to Figure 8. In these embodiments, a first adjacent macroblock is identified 90 and a second adjacent macroblock is identified 91. Attributes of the first adjacent macroblock and second adjacent macroblock may then be examined to determine if the macroblocks meet predefined conditions 92.
In some embodiments, these conditions may comprise: whether the macroblock is available, whether the macroblock is encoded in inter-prediction mode and whether the macroblock is coded with reference to another layer. If any of the conditions are met for either macroblock 94, an estimated prediction mode is set to a predetermined mode. In some embodiments, the predetermined mode may be a DC prediction mode.
In these embodiments, an actual prediction mode may also be determined. The actual prediction mode may be based on image content. Methods may be used to determine a prediction mode that results in the least error or a reduced error. If the actual prediction mode is the same as the estimated prediction mode 94, the bitstream may be encoded to indicate use of the estimated prediction mode. On the decoder side, the same process may be followed to select the estimated mode when decoding the bitstream. When the actual prediction mode is not the same as the estimated prediction mode 94, a message may be sent to indicate the actual mode and its selection 95. Details of signaling of the estimated prediction mode and the actual prediction mode may be found in the JVT AVC specification, incorporated herein by reference.
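The estimated-mode selection and signaling just described can be sketched as follows; the neighbor representation, condition set and message format are illustrative assumptions, not the normative syntax:

```python
# Sketch of the Figure 8 estimated-mode logic. DC (mode 2) is used as the
# predetermined fallback mode, as in some embodiments described above.

DC_MODE = 2

def estimate_mode(neighbors, most_probable_mode):
    # If any neighbor is unavailable, inter-coded, or inter-layer predicted,
    # fall back to the predetermined DC mode.
    for mb in neighbors:
        if mb is None or mb.get("inter_coded") or mb.get("inter_layer"):
            return DC_MODE
    return most_probable_mode

def signal_mode(actual_mode, estimated_mode):
    # Encoder side: a short flag says "use the estimate"; otherwise the
    # actual mode is signaled explicitly.
    if actual_mode == estimated_mode:
        return ("use_estimated",)
    return ("explicit", actual_mode)
```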
Some embodiments of the present invention may comprise coding of intra-prediction modes for luma and chroma information in intra-coded blocks. Traditionally, these modes are signaled with a context adaptive method and coded in a manner dependent on the prediction modes of spatial neighbors. In some embodiments of the present invention, a conditional process may be used. In these embodiments, prediction modes may be predicted from neighbors if the neighbor does not utilize inter-layer prediction. Blocks that do utilize inter-layer prediction may be treated in one of the following ways. In some exemplary embodiments, the block may be treated as if it has the most probable prediction mode. In H.264/AVC-related embodiments, this may be the DC prediction mode (mode 2) for the case of luma prediction.
In some alternative embodiments, the block may be treated as if it is an inter-coded block and OUTSIDE of the prediction region. In these embodiments, OUTSIDE has a specific context within the software utilized for testing in the JVT SVC project group. This software is commonly known as the JSVM. In some environments, encoding of the prediction mode and selection of the context for signaling the encoded mode may be separate processes. Different prediction methods may be used for the two processes. For example, the prediction mode may be encoded using the actual prediction mode for all intra-coded blocks - including blocks employing inter-layer prediction. However, these same blocks may utilize another rule, such as one of the rules described above, to derive contexts for coding the encoded value. For example, the contexts may assume that the intra-blocks utilizing inter-layer prediction have the most probable prediction mode. Some of these embodiments enable independent processing of the bit-streams corresponding to different layers.
Some embodiments of the present invention comprise maintenance of the "coded block pattern" information, or Cbp, as defined in the JVT SVC standard incorporated herein by reference. This information defines sub-regions within an image (or macro-block) that contain residual information. In some cases, it may be necessary for decoding the bit-stream, as the bit-stream decoder first decodes the Cbp and then utilizes the information to parse the remainder of the bit-stream. (For example, the Cbp may define the number of transform coefficient lists that may be present.) In many decoders, the Cbp is also utilized for reconstructing the decoded frame. For example, the decoder only needs to calculate the inverse transform if the Cbp denotes residual information. In some embodiments, the Cbp transmitted in the bit-stream may be utilized by the parsing process to extract the transform coefficients. However, it may no longer be useful to the reconstruction process, since the sub-regions may contain residual information from previous layers.
Accordingly, a decoder of embodiments of the present invention may either: (1) not utilize the Cbp information within the reconstruction process, or (2) recalculate the Cbp after parsing the bit-stream. Examples of the recalculation process include scanning through all coefficient lists to identify the sub-regions with residual information, or alternatively, generating a new Cbp by computing the binary OR operation between the transmitted Cbp and the Cbp utilized for reconstructing the lower layer data. In this case, "lower layer data" denotes the layer utilized during the inter-layer prediction process.
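The second recalculation option can be sketched in one line; each set bit of the Cbp marks a sub-region containing residual, so the OR marks a sub-region whenever either layer contributed residual data:

```python
# Sketch of Cbp recalculation by binary OR: combine the transmitted
# enhancement-layer Cbp with the Cbp used to reconstruct the lower
# (inter-layer prediction) layer.

def combined_cbp(el_cbp, lower_layer_cbp):
    return el_cbp | lower_layer_cbp
```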
Some embodiments of the present invention may be described with reference to Figure 9 and Figure 14. A system according to these embodiments comprises a receiver 231, a decoder 232, a parser 233, a scaler 234, an adder 235 and a calculator 236. In these embodiments, the receiver 231 receives (100) a bitstream comprising Cbp information and encoded image data. The Cbp information may be decoded by the decoder 232 (101) and used to determine which parts of the bitstream comprise transform coefficient data. The parser 233 may then parse (102) the bitstream by using the Cbp information to identify quantized indices or dequantized transform coefficients in a base layer and any enhancement layers. The indices or coefficients of a base layer or a lower layer may then be scaled (103) by the scaler 234 to match an enhancement layer. The scaled indices or coefficients may then be added to or combined with the enhancement layer by means of the adder 235 to form a combined layer (104). The Cbp information may then be re-calculated or updated (105) by the calculator 236 to reflect changes in coefficient location between the original base layer or lower layer and the new combined layer. The new combined Cbp information may then be used for subsequent processing of the combined layer or a resulting reconstructed image. In some embodiments, the combined Cbp information may be utilized for the loop filter operation defined in the AVC specification.
Some embodiments of the present invention comprise methods and systems for handling of a flag that enables an 8x8 transform. These embodiments may relate to the JVT SVC standard. In these embodiments, this flag does not need to be transmitted when a block is intra-coded with inter-layer prediction and does not contain residual data. In some embodiments, the flag does not need to be transmitted when inter-frame prediction utilizes blocks smaller than a specified size, such as 8x8. These embodiments may copy the transform flag that was transmitted in the lower layer (or lower layers) and employ this flag during the reconstruction process.
Some embodiments of the present invention comprise alternative methods and systems for handling of a flag that enables an 8x8 transform. In these embodiments, this flag does not need to be transmitted when a block does not contain residual data. If this case occurs in a lower layer that is utilized for inter-layer prediction, then the higher layer can choose to enable the 8x8 transform when sending residual data. This may be the default value for the flag, which is not transmitted, but disables the 8x8 transform. In some embodiments, in this special case, a decoder can allow the lower layer and higher layer to utilize different transforms. Some embodiments of the present invention comprise methods and systems for handling of quantization matrices, which are also known as weight matrices or scaling matrices to experts in the field. These matrices may change the "de-quantization" process and allow an encoder and decoder to apply frequency dependent (or transform coefficient dependent) quantization. In these embodiments, the presence of these scaling matrices alters the scaling process described in the mapping process described above. In some embodiments, the mapping procedure may be described as:
T_HigherLayer[n] = T_LowerLayer[n] * 2^((S_L[n]*Qp_LowerLayer - S_H[n]*Qp_HigherLayer)/6)
where T_HigherLayer and T_LowerLayer denote the transform coefficients at the higher layer and lower layer, respectively; n is an integer; Qp_LowerLayer and Qp_HigherLayer are, respectively, the quantization parameters for the lower layer and higher layer; and S_L and S_H are, respectively, the scaling factors for the lower layer and higher layer.
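Under these definitions, the mapping can be sketched numerically as follows. The formula is reconstructed from the surrounding text and should be treated as illustrative; with S_L[n] = S_H[n] = 1 it reduces to the familiar quantizer-step ratio 2^((Qp_LowerLayer - Qp_HigherLayer)/6).

```python
def map_scaled_coefficient(t_lower, n, qp_lower, qp_higher, s_l, s_h):
    """Map a lower-layer scaled transform coefficient to the higher layer:
    T_HigherLayer[n] =
        T_LowerLayer[n] * 2**((S_L[n]*Qp_LowerLayer - S_H[n]*Qp_HigherLayer)/6)
    """
    exponent = (s_l[n] * qp_lower - s_h[n] * qp_higher) / 6.0
    return t_lower[n] * 2.0 ** exponent
```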
To accommodate weighting matrices, some embodiments may utilize modified versions of the algorithms presented in the mapping process above. With reference to the above discussion, it is possible to define
QP_Diff[n] = S_L[n]*Qp_LowerLayer - S_H[n]*Qp_HigherLayer

T_HigherLayer[n] = ((T_LowerLayer[n] << QP_Diff[n]/6) * ScaleMatrix[QP_Diff[n]%6] + M/2) >> M
where we note that S_L[n] and S_H[n] may be explicitly present in the bit-stream or, alternatively, derived from it.
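A fixed-point sketch of this expression follows. The ScaleMatrix contents and the precision parameter M are illustrative inputs rather than values from any specification, and the rounding offset M/2 is interpreted here as half of the final divisor 2^M.

```python
def map_coefficient_fixed_point(t_lower, qp_diff, scale_matrix, m):
    """Fixed-point coefficient mapping:
        ((t_lower << (qp_diff // 6)) * scale_matrix[qp_diff % 6] + M/2) >> M
    scale_matrix has 6 entries, one per value of qp_diff % 6.
    """
    half = (1 << m) >> 1   # rounding offset: half of the divisor 2**m
    return ((t_lower << (qp_diff // 6)) * scale_matrix[qp_diff % 6] + half) >> m
```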
In an alternative embodiment for accommodating weighting matrices, an additional weighting matrix may be sent in the bit-stream. This additional weighting matrix may explicitly define the frequency weighting necessary to predict a layer from a lower layer. For example, the weighting matrix can be employed as
T_HigherLayer[n] = ((T_LowerLayer[n] << QP_Diff[n]/6) * ScaleMatrix[QP_Diff[n]%6] + M/2) >> M, with QP_Diff[n] = W1[n]*Qp_LowerLayer - W2[n]*Qp_HigherLayer
where W1 and W2 are weighting matrices included in the bit-stream. In some embodiments, either W1 or W2 may not be transmitted. In these embodiments, the matrix not transmitted may be assumed to have elements equal to zero.
Embodiments of the present invention comprise methods and systems for modifying, creating and/or applying a scalable video codec. Some embodiments allow for the fast conversion of a multi-layer bit-stream to a bit-stream with fewer layers. Some embodiments comprise conversion of a multi-layer bit-stream to a single-layer bit-stream. Some exemplary embodiments comprise conversion of an SVC bit-stream to an AVC bit-stream.
Embodiments of the present invention relate to residual prediction. These embodiments may comprise a residual prediction process that operates in both the transform and spatial domains. In exemplary embodiments, when a higher layer in the bit-stream references a lower layer in the bit-stream and both layers contain the same spatial resolution, the residual prediction process may comprise mapping the residual transform coefficients from the lower layer to the higher layer. This mapping process can operate on the scaled transform coefficients or the (unscaled) transform coefficient levels. In some embodiments, the process of residual prediction of scaled transform coefficients may be specified as:

A.8.11.4.1 Residual accumulation process for scaled transform coefficients

Inputs to this process are:
- a variable fieldMb specifying whether a macroblock is a field or a frame macroblock;
- a variable lumaTrafo specifying the luma transform type;
- a list of scaled transform coefficient values sTCoeff with 256 + 2 * MbWidthC * MbHeightC elements.

Outputs of this process comprise a modified version of the scaled transform coefficient values sTCoeff.

The progressive refinement process for scaled transform coefficients as specified in subclause G.8.11.3 may be invoked with fieldMb, lumaTrafo and sTCoeff as input and a modified version of sTCoeff as output, where G.8.11.3 is defined in the incorporated SVC standard.
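At its core, this accumulation step adds each layer's scaled transform coefficients element-wise into the running sTCoeff list. The sketch below shows only that arithmetic; the function name is an assumption, and the normative process additionally handles field/frame macroblocks and the luma transform type.

```python
def accumulate_scaled_coefficients(s_tcoeff, layer_coeff):
    """Element-wise accumulation of one layer's scaled transform
    coefficients into the running list sTCoeff (256 luma plus
    2 * MbWidthC * MbHeightC chroma elements per macroblock).
    """
    assert len(s_tcoeff) == len(layer_coeff)
    for i, c in enumerate(layer_coeff):
        s_tcoeff[i] += c   # accumulate in the transform domain
    return s_tcoeff
```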
Conversely, in some embodiments, the residual prediction process may occur in the spatial domain when the enhancement layer utilizes a lower layer for inter-layer prediction that contains a different spatial resolution. In these embodiments, the residual from the referenced layer is reconstructed in the intensity domain and interpolated to the enhancement layer resolution. In an alternative scenario, the residual from the referenced layer is added to a prediction derived from the referenced layer in the spatial domain. The result of this addition is then interpolated to the enhancement layer.
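The spatial-domain path can be sketched as follows, with simple pixel replication standing in for the normative interpolation filter; all names and the upsampling method are illustrative assumptions.

```python
def spatial_residual_prediction(ref_residual, enh_residual, factor=2):
    """When the reference layer has a lower spatial resolution, reconstruct
    its residual in the intensity (spatial) domain, upsample it to the
    enhancement resolution, and add the enhancement-layer residual.
    """
    # Upsample by pixel replication: repeat each column, then each row.
    up = [[px for px in row for _ in range(factor)]
          for row in ref_residual for _ in range(factor)]
    # Add the enhancement-layer residual at the enhancement resolution.
    return [[u + e for u, e in zip(ur, er)]
            for ur, er in zip(up, enh_residual)]
```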
Some embodiments of the present invention may be described with reference to Figure 10 and Figure 15. A system according to these embodiments comprises a resolution determiner 241, a comparator 242, a controller 243, a coefficient scaler 244, a coefficient combiner 245, an inverse transformer 246, and a spatial-domain combiner 247. In these embodiments, a current layer may be examined to determine if it employs residual prediction (110). If no residual prediction is employed, no accumulation is required (111). If residual prediction is employed (110), the spatial resolutions of the current layer and a reference layer are determined by the resolution determiner 241 (112, 113). Then, the spatial resolution of the current layer is compared to the spatial resolution of a reference layer by the comparator 242 (114). When these spatial resolutions are the same, the controller 243 selectively allows the coefficient scaler 244 and the coefficient combiner 245 to perform steps 116 and 117. That is, if these spatial resolutions are the same (114), the coefficients or indices of the reference layer (from which the current layer is predicted) may be scaled by the coefficient scaler 244 (116) and combined (117) with the indices or coefficients of the current layer by the coefficient combiner 245. When these spatial resolutions are not the same, the controller 243 selectively allows the inverse transformer 246 and the spatial-domain combiner 247 to perform steps 115, 118, and 120. That is, if the spatial resolutions are not the same (114), the current layer and reference layer indices may be dequantized and the resulting coefficients may be inverse transformed (115, 118). The resulting spatial-domain values in the current layer and the reference layer may then be combined by the spatial-domain combiner 247 (120) to form a reconstructed image.
As is readily seen from the above description, the method of residual prediction depends on the resolution of the enumerated higher layer and the enumerated lower layer referenced for prediction. Unfortunately, this is problematic, as the accumulation of residual information in the spatial domain may not equal the accumulation of residual information in the transform domain followed by subsequent conversion to the spatial domain. For the case of a standardized decoding process, this may lead to drift between the encoder and decoder and a loss of coding efficiency.
The current SVC system addresses this problem by performing residual prediction only in the spatial domain.
However, some embodiments of the present invention comprise a decoding process that performs residual prediction in both domains. Specifically, when residual prediction is enabled and the enhancement layer and the layer referenced for inter-layer prediction are the same resolution, then the residual is accumulated in the transform domain. However, when residual prediction is enabled and the enhancement layer and the layer referenced for inter-layer prediction are different resolutions, then the residual is accumulated in the spatial domain. An exemplary decoding process is described with the following pseudo-code:
    // Initialize list of scaled transform coefficients to zero
    for( i=0; i<NumberTransformCoefficients; i++ )
        sTCoeff[i] = 0;

    // Initialize spatial residual to zero
    for( i=0; i<WidthResidual; i++ )
        for( j=0; j<HeightResidual; j++ )
            rYCC[i][j] = 0;

    // Process layers
    for( layerID=0; layerID<NumLayers; layerID++ )
    {
        if( UtilizeAnotherLayerForInterLayerPrediction( layerID ) == false )
        {
            // For layers that do not employ residual prediction, decode and store
            // transform coefficients.
            // Note that this will discard any data previously stored in sTCoeff
            sTCoeff = DecodeAndScaleTransmittedTransformCoefficients( layerID );
        }
        else
        {
            // For layers that utilize residual prediction, determine the spatial
            // resolution of the current and reference layers
            if( ResolutionOfLayer( layerID ) ==
                ResolutionOfLayerReferencedForInterLayerPrediction( layerID ) )
            {
                // If the resolutions are the same, accumulate the residual
                // information in the transform domain
                sTCoeff = sTCoeff + DecodeAndScaleTransmittedTransformCoefficients( layerID );
            }
            else
            {
                // If the resolutions are not the same, convert the contents of sTCoeff
                // to the spatial domain and add it to any residual stored in rYCC.
                // Then, upsample (or interpolate) the residual.
                // Finally, discard the data in sTCoeff and store transform
                // coefficients for the current layer
                rYCC = rYCC + CalculateInverseTransformOfScaledTransformCoefficients( sTCoeff );
                rYCC = UpsampleOrInterpolate( rYCC );
                for( i=0; i<NumberTransformCoefficients; i++ )
                    sTCoeff[i] = 0;
                sTCoeff = DecodeAndScaleTransmittedTransformCoefficients( layerID );
            }
        }

        // Determine if the layer is identified for output. If so, convert residual
        // to the pixel domain. Then, add to any intra-layer prediction.
        if( LayerShouldBeReconstructedForDisplay( layerID ) == true )
        {
            rYCC = rYCC + CalculateInverseTransformOfScaledTransformCoefficients( sTCoeff );
            outYCC = GenerateIntraLayerPrediction( layerID ) + rYCC;
        }
    }
While not explicitly described in the above pseudo-code, other exemplary embodiments comprise other extensions to the defined decoding process. In some embodiments, intra-layer prediction may be performed at multiple layers in the scalable bit-stream. When this is allowed in the video coding standard, then the function GenerateIntraLayerPrediction may be called prior to any residual processing. The output of this function may be added to the array rYCC. Furthermore, in some embodiments, the function GenerateIntraLayerPrediction is not called in the above pseudo-code. Instead, the line outYCC = GenerateIntraLayerPrediction( layerID ) + rYCC would be replaced by outYCC = rYCC.
In some embodiments of the present invention, the residual accumulation process may occur on unscaled transform coefficients. In this case, the inter-layer prediction process may be performed prior to constructing the scaled transform coefficients. Aspects of some embodiments are described in US Provisional Patent Application No. 60/806,930, entitled "Methods and Systems for Image Scalability," filed July 10, 2006 and invented by C. Andrew Segall. Aspects of some embodiments are described in US Provisional Patent Application No. 60/828,618, entitled "Systems and Methods for Bit-Stream Rewriting for Coarse Grain Scalability," filed October 6, 2006 and invented by C. Andrew Segall.
Pseudo-code for an exemplary procedure is given as:
    // Initialize list of scaled transform coefficients to zero
    for( i=0; i<NumberTransformCoefficients; i++ )
        sTCoeff[i] = 0;

    // Initialize spatial residual to zero
    for( i=0; i<WidthResidual; i++ )
        for( j=0; j<HeightResidual; j++ )
            rYCC[i][j] = 0;

    // Process layers
    for( layerID=0; layerID<NumLayers; layerID++ )
    {
        if( UtilizeAnotherLayerForInterLayerPrediction( layerID ) == false )
        {
            // For layers that do not employ residual prediction, decode and store
            // transform coefficients.
            // Note that this will discard any data previously stored in sTCoeff
            sTCoeff = DecodeAndScaleTransmittedTransformCoefficients( layerID );
        }
        else
        {
            // For layers that utilize residual prediction, determine the spatial
            // resolution of the current and reference layers
            if( ResolutionOfLayer( layerID ) ==
                ResolutionOfLayerReferencedForInterLayerPrediction( layerID ) )
            {
                // If the resolutions are the same, accumulate the residual
                // information in the transform domain
                if( InterLayerPredictionWithUnScaledTransformCoefficients( layerID ) == false )
                    sTCoeff = sTCoeff + DecodeAndScaleTransmittedTransformCoefficients( layerID );
                else
                    sTCoeff = DecodeAndScaleTransmittedTransformCoefficients( layerID );
            }
            else
            {
                // If the resolutions are not the same, convert the contents of sTCoeff
                // to the spatial domain and add it to any residual stored in rYCC.
                // Then, upsample (or interpolate) the residual.
                // Finally, discard the data in sTCoeff and store transform
                // coefficients for the current layer
                rYCC = rYCC + CalculateInverseTransformOfScaledTransformCoefficients( sTCoeff );
                rYCC = UpsampleOrInterpolate( rYCC );
                for( i=0; i<NumberTransformCoefficients; i++ )
                    sTCoeff[i] = 0;
                sTCoeff = DecodeAndScaleTransmittedTransformCoefficients( layerID );
            }
        }

        // Determine if the layer is identified for output. If so, convert residual
        // to the pixel domain. Then, add to any intra-layer prediction.
        if( LayerShouldBeReconstructedForDisplay( layerID ) == true )
        {
            rYCC = rYCC + CalculateInverseTransformOfScaledTransformCoefficients( sTCoeff );
            outYCC = GenerateIntraLayerPrediction( layerID ) + rYCC;
        }
    }
Some embodiments of the present invention comprise a decoder that takes a scalable bit-stream as input and generates a reconstructed image sequence. The scalable bit-stream employs an inter-layer prediction process to project information from enumerated lower layers of the bit-stream to enumerated higher layers of the bit-stream. Some embodiments of the present invention comprise a decoding process that accumulates residual information in both the transform and spatial domains. Accumulation is performed in the transform domain between enumerated layers in the bit-stream when the layers describe an image sequence with the same resolution.
Some embodiments of the present invention comprise a decoding process that converts accumulated transform coefficients to the spatial domain only when processing a current layer that has a different spatial resolution than the layer utilized for inter-layer prediction. The transform coefficients are converted to the spatial domain and subsequently upsampled (or interpolated). The transform coefficient list is then set equal to zero.
Some embodiments of the present invention comprise a decoding process that accumulates residuals in the transform domain until the resolution between the current decoding layer and the layer utilized for inter-layer prediction differs. The transform coefficient list is then set to zero, with subsequent processing of layers that reference layers with the same spatial resolution performing accumulation in the transform domain.
Some embodiments of the present invention comprise a decoding process that generates an output bit-stream by performing intra-layer prediction, computing the inverse transform on scaled transform coefficients, adding the output of the inverse transform operation to a possibly non-zero residual signal, and summing the result of this previous addition with the output of the intra-layer prediction process.
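That output stage can be sketched as follows; the names are illustrative, and the inverse transform is passed in as a callback rather than implemented.

```python
def generate_output(intra_pred, r_ycc, s_tcoeff, inverse_transform):
    """Output stage for a layer marked for display: convert the accumulated
    scaled transform coefficients to the spatial domain, add the (possibly
    non-zero) spatial residual, then sum with the intra-layer prediction.
    """
    residual = [r + t for r, t in zip(r_ycc, inverse_transform(s_tcoeff))]
    return [p + r for p, r in zip(intra_pred, residual)]
```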
Some embodiments of the present invention comprise a decoding process that also allows for inter-layer prediction to be performed on unscaled transform coefficients or transform coefficient levels.
Some embodiments of the present invention comprise a decoding process that also allows for intra-layer prediction to be performed within layers of the bit-stream that are not reconstructed for output. The result of this intra-layer prediction is added to the accumulated spatial residual.
Some embodiments of the present invention comprise a decoding process where clipping is performed within the residual prediction process.

The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.
Elements in the system of embodiments of the present invention may be realized by software, with the use of a CPU as follows.
That is, the system may include members such as: a CPU (Central Processing Unit) that executes instructions of a control program realizing the functions; a ROM (Read Only Memory) recording the program; a RAM (Random Access Memory) on which the program is executed; and a storage device (recording medium), such as a memory, which stores the program and various kinds of data. The objective of the present invention can be achieved in the following manner: program code (e.g., an executable code program, intermediate code program, or source program) of the control program of the system, the control program being software for realizing the functions, is recorded on a recording medium in a computer-readable manner, this recording medium is supplied to the system, and the computer (or CPU or MPU) reads out the program code from the recording medium and executes the program.
Examples of such a recording medium include a tape, such as a magnetic tape or a cassette tape; a magnetic disk, such as a flexible disk or a hard disk; a disc, including an optical disc such as a CD-ROM/MO/MD/DVD/CD-R; a card, such as an IC card (inclusive of a memory card); and a semiconductor memory, such as a mask ROM, an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), or a flash ROM.
Alternatively, the system may be capable of being connected to a communications network, allowing the program code to be supplied via the communications network. Non-limiting examples of the communications network include the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV network, a virtual private network, a telephone network, a mobile communications network, and a satellite communications network. Non-limiting examples of the transmission media composing the communications network are wired media, such as IEEE 1394, USB, power line communication, cable TV lines, telephone lines, and ADSL lines; infrared light, such as IrDA and remote controllers; and electric waves, such as Bluetooth®, IEEE 802.11, HDR, mobile telephone networks, satellite connections, and terrestrial digital broadcasting networks. It is also noted that the present invention may be realized by a carrier wave or as a data signal sequence, which are realized by electronic transmission of the program code.

Claims

1. A method for combining layers in a multi-layer bitstream, said method comprising: a) inverse quantizing a first-layer quantized transform coefficient thereby creating a first-layer transform coefficient; b) scaling said first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; c) inverse quantizing a second-layer quantized transform coefficient thereby creating a second-layer transform coefficient; and d) combining said scaled, first-layer transform coefficient with said second-layer transform coefficient to form a combined coefficient.
2. A system for combining layers in a multi-layer bitstream, said system comprising: a) a first inverse quantizer for inverse quantizing a first-layer quantized transform coefficient thereby creating a first-layer transform coefficient; b) a scaler for scaling said first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; c) a second inverse quantizer for inverse quantizing a second-layer quantized transform coefficient thereby creating a second-layer transform coefficient; and d) a coefficient combiner for combining said scaled, first-layer transform coefficient with said second-layer transform coefficient to form a combined coefficient.
3. A method for converting an SVC-compliant bitstream to AVC-compliant data, said method comprising: a) receiving an SVC-compliant bitstream comprising prediction data, base-layer residual data and enhancement-layer residual data; b) inverse quantizing said base-layer residual data thereby creating base-layer transform coefficients; c) inverse quantizing said enhancement-layer residual data thereby creating enhancement-layer transform coefficients; d) scaling said base-layer transform coefficients to match a quantization characteristic of said enhancement layer thereby creating scaled base-layer transform coefficients; and e) combining said scaled base-layer transform coefficients with said enhancement-layer transform coefficients to form combined coefficients.
4. A method for combining layers in a multi-layer bitstream, said method comprising: a) receiving a first-layer quantized transform coefficient; b) scaling said first-layer quantized transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer quantized transform coefficient; c) receiving a second-layer quantized transform coefficient; and d) combining said scaled, first-layer quantized transform coefficient with said second-layer quantized transform coefficient to form a combined quantized coefficient.
5. A method for conditionally combining layers in a multi-layer bitstream, said method comprising: a) receiving a first-layer quantized transform coefficient; b) receiving a second-layer quantized transform coefficient; c) receiving a layer combination indicator; d) scaling said first-layer quantized transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer quantized transform coefficient, when said layer combination indicator indicates transform domain accumulation; and e) combining said scaled, first-layer quantized transform coefficient with said second-layer quantized transform coefficient to form a combined quantized coefficient, when said layer combination indicator indicates transform domain accumulation.
6. A method for reconstructing an enhancement layer from a multi-layer bitstream, said method comprising: a) receiving a first-layer intra-prediction mode; b) receiving a second-layer bitstream prediction indicator, said indicator indicating that said first-layer prediction mode is to be used for prediction of said second layer; c) using said first-layer prediction mode to construct a second-layer prediction based on adjacent block data in said second layer; and d) combining said second-layer prediction with residual information thereby creating a reconstructed second layer.
7. A method for combining layers in a multi-layer bitstream, said method comprising: a) determining a first spatial resolution of a first layer of a multi-layer image; b) determining a second spatial resolution of a second layer of said multi-layer image; c) comparing said first spatial resolution with said second spatial resolution; d) performing steps e) through f) when said first spatial resolution is substantially equal to said second spatial resolution; e) scaling a first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; f) combining said scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; g) performing steps h) through k) when said first-layer spatial resolution is not substantially equal to said second-layer spatial resolution; h) inverse transforming said first-layer transform coefficient thereby producing a first-layer spatial domain value; i) inverse transforming said second-layer transform coefficient thereby producing a second-layer spatial domain value; j) scaling said first-layer spatial domain value to match the resolution of said second layer thereby producing a scaled, first-layer spatial domain value; and k) combining said scaled, first-layer spatial domain value with said second-layer spatial domain value thereby producing a combined spatial domain residual value.
8. A system for combining layers in a multi-layer bitstream, said system comprising: a) a resolution determiner for determining a first spatial resolution of a first layer of a multi-layer image and for determining a second spatial resolution of a second layer of said multi-layer image; b) a comparator for comparing said first spatial resolution with said second spatial resolution; c) a controller for selectively performing steps d) through e) when said first spatial resolution is substantially equal to said second spatial resolution; d) a coefficient scaler for scaling a first-layer transform coefficient to match a characteristic of a second-layer thereby creating a scaled, first-layer transform coefficient; e) a coefficient combiner for combining said scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; f) said controller selectively performing steps g) through i) when said first-layer spatial resolution is not substantially equal to said second-layer spatial resolution; g) an inverse transformer for inverse transforming said first-layer transform coefficient thereby producing a first-layer spatial domain value and for inverse transforming said second-layer transform coefficient thereby producing a second-layer spatial domain value; h) a spatial-domain scaler for scaling said first-layer spatial domain value to match the resolution of said second layer thereby producing a scaled, first-layer spatial domain value; and i) a spatial-domain combiner for combining said scaled, first-layer spatial domain value with said second-layer spatial domain value thereby producing a combined spatial domain residual value.
9. A method for combining layers in a multi-layer bitstream, said method comprising: a) receiving de-quantized transform coefficients for a first layer of a first spatial resolution; b) receiving de-quantized transform coefficients for a second layer of said first spatial resolution; c) scaling said first-layer transform coefficients, thereby creating scaled first-layer transform coefficients; d) combining said scaled first-layer transform coefficients with said second-layer transform coefficients thereby creating combined transform coefficients; e) inverse transforming said combined transform coefficients thereby creating combined residual spatial-domain values; f) receiving de-quantized transform coefficients for a third layer of a second spatial resolution; g) resampling said combined residual spatial-domain values to said second spatial resolution, thereby creating resampled combined spatial-domain values; h) inverse transforming said third-layer transform coefficients, thereby creating third-layer spatial-domain values; and i) combining said resampled combined spatial-domain values with said third-layer spatial-domain values.
10. A method for combining layers in a multi-layer bitstream, said method comprising: a) receiving quantized transform coefficients for a first layer of a first spatial resolution; b) receiving quantized transform coefficients for a second layer of said first spatial resolution; c) scaling said quantized first-layer transform coefficients, thereby creating scaled quantized first-layer transform coefficients; d) combining said scaled quantized first-layer transform coefficients with said second-layer quantized transform coefficients thereby creating combined quantized transform coefficients; e) inverse quantizing said combined quantized transform coefficients thereby creating combined transform coefficients; f) inverse transforming said combined transform coefficients thereby creating combined residual spatial- domain values; g) receiving quantized transform coefficients for a third layer of a second spatial resolution; h) resampling said combined residual spatial-domain values to said second spatial resolution, thereby creating resampled combined spatial-domain values; i) inverse quantizing said third-layer quantized transform coefficients thereby creating third-layer transform coefficients; j) inverse transforming said third layer transform coefficients, thereby creating third-layer spatial-domain values; and k) combining said resampled combined spatial-domain values with said third-layer spatial-domain values.
11. A method for combining layers in a multi-layer bitstream, said method comprising: a) receiving de-quantized transform coefficients for a first layer of a first spatial resolution; b) inverse transforming said de-quantized first-layer transform coefficients thereby producing first-layer spatial domain values; c) receiving de-quantized transform coefficients for a second layer of a second spatial resolution that is higher than said first spatial resolution; d) receiving de-quantized transform coefficients for a third layer of said second spatial resolution; e) upsampling said first-layer spatial domain values to said second spatial resolution thereby producing upsampled first-layer spatial domain values; f) combining said second-layer de-quantized transform coefficients with said third-layer de-quantized transform coefficients thereby creating combined transform coefficients; g) inverse transforming said combined transform coefficients thereby creating first combined residual spatial-domain values; and h) combining said upsampled first-layer spatial domain values with said first combined residual spatial-domain values.
12. A method for combining layers in a multi-layer bitstream, said method comprising: a) receiving quantized transform coefficients for a first layer of a first spatial resolution; b) receiving quantized transform coefficients for a second layer of said first spatial resolution; c) receiving quantized transform coefficients for a third layer of said first spatial resolution; d) scaling said quantized first-layer transform coefficients to match properties of said second-layer, thereby creating scaled quantized first-layer transform coefficients; e) combining said scaled quantized first-layer transform coefficients with said second-layer quantized transform coefficients thereby creating combined quantized transform coefficients; f) inverse quantizing said combined quantized transform coefficients thereby creating combined transform coefficients; g) inverse quantizing said third-layer quantized transform coefficients thereby creating third-layer de-quantized transform coefficients; h) combining said combined transform coefficients with said third-layer de-quantized transform coefficients thereby creating three-layer combined transform coefficients; and i) inverse transforming said three-layer combined transform coefficients thereby creating combined spatial-domain values.
13. A method for combining layers in a multi-layer bitstream, said method comprising: i) determining whether a second layer of a multi-layer image employs residual prediction; ii) performing the following steps only if said second layer employs residual prediction; iii) determining a first spatial resolution of a first layer of a multi-layer image; iv) determining a second spatial resolution of said second layer; v) comparing said first spatial resolution with said second spatial resolution; vi) performing steps vii) through viii) when said first spatial resolution is substantially equal to said second spatial resolution; vii) scaling a first-layer transform coefficient to match a characteristic of a second layer thereby creating a scaled, first-layer transform coefficient; viii) combining said scaled, first-layer transform coefficient with a second-layer transform coefficient to form a combined coefficient; ix) performing steps x) through xiii) when said first-layer spatial resolution is not substantially equal to said second-layer spatial resolution; x) inverse transforming said first-layer transform coefficient thereby producing a first-layer spatial domain value; xi) inverse transforming said second-layer transform coefficient thereby producing a second-layer spatial domain value; xii) scaling said first-layer spatial domain value to match the resolution of said second layer thereby producing a scaled, first-layer spatial domain value; and xiii) combining said scaled, first-layer spatial domain value with said second-layer spatial domain value thereby producing a combined spatial domain value.
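The conditional branching of claim 13 amounts to choosing the accumulation domain by comparing layer resolutions. A minimal control-flow sketch, in which `scale_coeff`, `inverse_transform`, and `upsample` are hypothetical helpers for the codec-specific operations:

```python
def accumulate_residual(c1, c2, res1, res2, uses_residual_prediction,
                        scale_coeff, inverse_transform, upsample):
    """Claim 13: combine in the transform domain when the two layers
    share a spatial resolution; otherwise inverse transform both and
    combine in the spatial domain."""
    if not uses_residual_prediction:          # steps i)-ii): nothing to combine
        return None
    if res1 == res2:                          # steps iii)-vi): resolutions match
        return scale_coeff(c1) + c2           # steps vii)-viii): transform domain
    s1 = inverse_transform(c1)                # step x)
    s2 = inverse_transform(c2)                # step xi)
    return upsample(s1) + s2                  # steps xii)-xiii): spatial domain
```

The transform-domain branch avoids the inverse transform and upsampling entirely, which is the practical payoff of checking the resolutions first.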
14. A method for scaling transform coefficients in a multi-layer bitstream, said method comprising: determining a first-layer quantization parameter based on said multi-layer bitstream; determining a second-layer quantization parameter based on said multi-layer bitstream; and scaling a first-layer transform coefficient based on said first-layer quantization parameter and said second-layer quantization parameter.
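Claim 14 leaves the actual scaling rule unspecified. One common convention in H.264-style codecs, used here purely as an illustrative assumption, is that the quantizer step size doubles every 6 QP units, so a coefficient quantized at one QP can be rescaled to another by a power-of-two factor:

```python
def scale_first_layer_coefficient(coeff, qp_first, qp_second):
    """Scale a first-layer transform coefficient so its effective
    quantization step matches the second layer's.  The 2**(dQP/6)
    factor is an assumption borrowed from H.264-style quantizer
    design, not a value stated in the claim."""
    return round(coeff * 2.0 ** ((qp_first - qp_second) / 6.0))
```

For example, a first layer quantized 6 QP units coarser than the second layer would have its coefficients doubled before the transform-domain addition.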
15. A method for controlling entropy coding processes, said method comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to said target macroblock; c) determining a first macroblock indicator indicating whether said first adjacent macroblock is coded with reference to another layer; d) determining a second macroblock indicator indicating whether said second adjacent macroblock is coded with reference to another layer; and e) determining an entropy coding control value based on said first macroblock indicator and said second macroblock indicator.
16. A method for controlling entropy coding processes, said method comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to said target macroblock; c) determining whether said first adjacent macroblock is available; d) determining whether said first adjacent macroblock is coded in inter prediction mode; e) determining whether said first adjacent macroblock is encoded in the spatial domain; f) determining whether said first adjacent macroblock is intra predicted with a DC prediction mode; g) determining whether said first adjacent macroblock is coded with reference to another layer; h) setting a first adjacent block flag to one when any of steps c) through g) are true; i) determining whether said second adjacent macroblock is available; j) determining whether said second adjacent macroblock is coded in inter prediction mode; k) determining whether said second adjacent macroblock is encoded in the spatial domain; l) determining whether said second adjacent macroblock is intra predicted with a DC prediction mode; m) determining whether said second adjacent macroblock is coded with reference to another layer; n) setting a second adjacent block flag value to one when any of steps i) through m) are true; and
o) adding said first adjacent block flag value and said second adjacent block flag value to produce an entropy coding control value.
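The flag logic of claims 15 and 16 reduces to summing two per-neighbour indicators. In this sketch each neighbouring macroblock is represented simply by the list of boolean test results from steps c) through g) (respectively i) through m)); that representation is an assumption made for illustration:

```python
def entropy_coding_control_value(first_mb_conditions, second_mb_conditions):
    """Set a per-neighbour flag to 1 when any of its conditions holds
    (claim 16 steps h) and n)), then add the two flags (step o)).
    The result is therefore always 0, 1, or 2."""
    first_flag = 1 if any(first_mb_conditions) else 0
    second_flag = 1 if any(second_mb_conditions) else 0
    return first_flag + second_flag
```

The resulting value of 0, 1, or 2 is the kind of quantity a CABAC-style entropy coder can use to select among context models for the target macroblock.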
17. A method for prediction mode determination, said method comprising: a) identifying a first adjacent macroblock that is adjacent to a target macroblock; b) identifying a second adjacent macroblock that is adjacent to said target macroblock; and c) setting a target block estimated prediction mode to a predetermined mode when any of conditions i) through vi) are true:
i) said first adjacent macroblock is available; ii) said first adjacent macroblock is coded in inter prediction mode; iii) said first adjacent macroblock is coded with reference to another layer; iv) said second adjacent macroblock is available; v) said second adjacent macroblock is coded in inter prediction mode; vi) said second adjacent macroblock is coded with reference to another layer.
18. A method for combining layers in a multi-layer bitstream, said method comprising: a) receiving a bitstream comprising encoded image coefficients and coded block pattern (Cbp) information wherein said Cbp information identifies regions in said bitstream that comprise transform coefficients; b) decoding said Cbp information; c) parsing said bitstream by using said Cbp information to identify bitstream regions comprising transform coefficients; d) scaling first-layer transform coefficients in said bitstream to match a characteristic of a second layer in said bitstream; e) adding said scaled, first-layer transform coefficients to second-layer transform coefficients to form combined coefficients in a combined layer; and
f) calculating combined Cbp information for said combined layer wherein said combined Cbp information identifies regions in said combined layer that comprise transform coefficients.
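After the transform-domain addition of claim 18, the coded block pattern must be recomputed for the combined layer, since a block that was all-zero in one layer may become non-zero in the sum. A sketch, under the assumption that the Cbp carries one bit per coefficient block:

```python
def combined_cbp(combined_blocks):
    """Step f) of claim 18: set bit i of the coded block pattern when
    block i of the combined layer holds any non-zero coefficient."""
    cbp = 0
    for i, block in enumerate(combined_blocks):
        if any(c != 0 for c in block):
            cbp |= 1 << i
    return cbp
```

A downstream decoder can then use this recomputed Cbp to skip entirely-zero blocks of the combined layer, exactly as it would for a single-layer bitstream.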
19. A system for combining layers in a multi-layer bitstream, said system comprising: g) a receiver for receiving a bitstream comprising encoded image coefficients and coded block pattern (Cbp) information wherein said Cbp information identifies regions in said bitstream that comprise transform coefficients; h) a decoder for decoding said Cbp information; i) a parser for parsing said bitstream by using said Cbp information to identify bitstream regions comprising transform coefficients; j) a scaler for scaling first-layer transform coefficients in said bitstream to match a characteristic of a second layer in said bitstream; k) an adder for adding said scaled, first-layer transform coefficients to second-layer transform coefficients to form combined coefficients in a combined layer; and
l) a calculator for calculating combined Cbp information for said combined layer wherein said combined Cbp information identifies regions in said combined layer that comprise transform coefficients.
20. A method for selecting a reconstruction transform size when a transform size is not indicated in an enhancement layer, said method comprising: a) determining a lower-layer transform size; b) determining if said lower-layer transform size is substantially similar to a predefined transform size; c) selecting an inverse transform of said predefined transform size as a reconstruction transform when said lower-layer transform size is substantially similar to said predefined transform size; and d) selecting an inverse transform of a default transform size as a reconstruction transform when said lower-layer transform size is not substantially similar to said predefined transform size.
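Claim 20's selection rule can be sketched in a few lines. The concrete sizes below (an 8x8 predefined transform and a 4x4 default) are illustrative assumptions; the claim itself does not fix either value:

```python
def select_reconstruction_transform(lower_layer_size,
                                    predefined_size=8, default_size=4):
    """Claim 20: inherit the lower layer's transform size when it
    matches the predefined size; otherwise fall back to the default.
    Sizes are given as the block edge length (8 means an 8x8 transform)."""
    if lower_layer_size == predefined_size:    # steps a)-c)
        return predefined_size
    return default_size                        # step d)
```

This lets an enhancement-layer decoder pick a reconstruction transform without any explicit size signaling, at the cost of falling back to the default whenever the lower layer uses an unexpected size.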
EP07768418A 2006-07-10 2007-07-09 Methods and systems for combining layers in a multi-layer bitstream Ceased EP2044773A4 (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
US80693006P 2006-07-10 2006-07-10
US82861806P 2006-10-06 2006-10-06
US88849907P 2007-02-06 2007-02-06
US89414807P 2007-03-09 2007-03-09
US11/694,955 US8130822B2 (en) 2006-07-10 2007-03-31 Methods and systems for conditional transform-domain residual accumulation
US11/694,956 US8059714B2 (en) 2006-07-10 2007-03-31 Methods and systems for residual layer scaling
US11/694,959 US8422548B2 (en) 2006-07-10 2007-03-31 Methods and systems for transform selection and management
US11/694,954 US8532176B2 (en) 2006-07-10 2007-03-31 Methods and systems for combining layers in a multi-layer bitstream
US11/694,958 US7885471B2 (en) 2006-07-10 2007-03-31 Methods and systems for maintenance and use of coded block pattern information
US11/694,957 US7840078B2 (en) 2006-07-10 2007-03-31 Methods and systems for image processing control based on adjacent block characteristics
PCT/JP2007/064040 WO2008007792A1 (en) 2006-07-10 2007-07-09 Methods and systems for combining layers in a multi-layer bitstream

Publications (2)

Publication Number Publication Date
EP2044773A1 true EP2044773A1 (en) 2009-04-08
EP2044773A4 EP2044773A4 (en) 2011-10-12

Family

ID=38923347

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07768418A Ceased EP2044773A4 (en) 2006-07-10 2007-07-09 Methods and systems for combining layers in a multi-layer bitstream

Country Status (4)

Country Link
EP (1) EP2044773A4 (en)
JP (1) JP2009543501A (en)
CN (2) CN102685496B (en)
WO (1) WO2008007792A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105230018A (en) * 2013-05-24 2016-01-06 株式会社Kt Method and apparatus for encoding video that supports a plurality of layers

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8289370B2 (en) 2005-07-20 2012-10-16 Vidyo, Inc. System and method for scalable and low-delay videoconferencing using scalable video coding
KR101505195B1 (en) 2008-02-20 2015-03-24 삼성전자주식회사 Method for direct mode encoding and decoding
JP5169978B2 (en) * 2009-04-24 2013-03-27 ソニー株式会社 Image processing apparatus and method
DE102009039095A1 (en) * 2009-08-27 2011-03-10 Siemens Aktiengesellschaft Method and apparatus for generating, decoding and transcoding a coded video data stream
JP5833682B2 (en) * 2011-03-10 2015-12-16 ヴィディオ・インコーポレーテッド Dependency parameter set for scalable video coding
US20120257675A1 (en) * 2011-04-11 2012-10-11 Vixs Systems, Inc. Scalable video codec encoder device and methods thereof
US20130083856A1 (en) * 2011-06-29 2013-04-04 Qualcomm Incorporated Contexts for coefficient level coding in video compression
US9313486B2 (en) 2012-06-20 2016-04-12 Vidyo, Inc. Hybrid video coding techniques
US9843801B2 (en) * 2012-07-10 2017-12-12 Qualcomm Incorporated Generalized residual prediction for scalable video coding and 3D video coding
CN102790905B (en) * 2012-08-03 2016-08-17 重庆大学 The code-transferring method H.264/SVC arrived H.264/AVC based on P2PVoD video on-demand system
EP2904803A1 (en) 2012-10-01 2015-08-12 GE Video Compression, LLC Scalable video coding using derivation of subblock subdivision for prediction from base layer
CN104813667B (en) * 2012-11-15 2018-03-16 联发科技股份有限公司 Interframe layer prediction method and device for scalable video
CN109068136B (en) * 2012-12-18 2022-07-19 索尼公司 Image processing apparatus, image processing method, and computer-readable storage medium
TWI597968B (en) 2012-12-21 2017-09-01 杜比實驗室特許公司 High precision up-sampling in scalable coding of high bit-depth video
JP6457488B2 (en) * 2013-04-15 2019-01-23 ロッサト、ルカ Method for decoding a hybrid upward compatible data stream
CN106105213B (en) 2014-03-24 2019-09-10 株式会社Kt Multi-layer video signal encoding/decoding method and apparatus

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030058936A1 (en) * 2001-09-26 2003-03-27 Wen-Hsiao Peng Scalable coding scheme for low latency applications

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2697393A1 (en) * 1992-10-28 1994-04-29 Philips Electronique Lab Device for coding digital signals representative of images, and corresponding decoding device.
US6795501B1 (en) * 1997-11-05 2004-09-21 Intel Corporation Multi-layer coder/decoder for producing quantization error signal samples
JP3561485B2 (en) * 2000-08-18 2004-09-02 株式会社メディアグルー Coded signal separation / synthesis device, difference coded signal generation device, coded signal separation / synthesis method, difference coded signal generation method, medium recording coded signal separation / synthesis program, and difference coded signal generation program recorded Medium
US6925120B2 (en) * 2001-09-24 2005-08-02 Mitsubishi Electric Research Labs, Inc. Transcoder for scalable multi-layer constant quality video bitstreams
KR100556838B1 (en) * 2002-09-17 2006-03-10 엘지전자 주식회사 Fine granularity scalability encoding and decoding apparatus and method
JP2004363931A (en) * 2003-06-04 2004-12-24 Nippon Telegr & Teleph Corp <Ntt> Method and apparatus for re-encoding hierarchically encoded bit stream
JP4068537B2 (en) * 2003-09-03 2008-03-26 日本電信電話株式会社 Hierarchical coded bitstream requantization method and apparatus, hierarchical coded bitstream requantization program, and recording medium recording the program
JP2006157881A (en) * 2004-11-08 2006-06-15 Toshiba Corp Variable-length coding device and method of same

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030058936A1 (en) * 2001-09-26 2003-03-27 Wen-Hsiao Peng Scalable coding scheme for low latency applications

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ARNOLD J ET AL: "Scalable Video Coding by Stream Morphing", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 15, no. 2, 1 February 2005 (2005-02-01), pages 306-319, XP011126409, ISSN: 1051-8215, DOI: DOI:10.1109/TCSVT.2004.841692 *
MEI KODAMA ET AL: "Scalable video transcoding method with spatial updatable scalability", IEEE INTERNATIONAL MIDWEST SYMPOSIUM,, vol. 1, 25 July 2004 (2004-07-25), pages 1_257-1_260, XP010738975, ISBN: 978-0-7803-8346-3 *
See also references of WO2008007792A1 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105230018A (en) * 2013-05-24 2016-01-06 株式会社Kt Method and apparatus for encoding video that supports a plurality of layers
CN105230018B (en) * 2013-05-24 2019-04-16 株式会社Kt Method and apparatus for encoding video that supports a plurality of layers
US10349063B2 (en) 2013-05-24 2019-07-09 Kt Corporation Method and apparatus for coding video supporting plurality of layers

Also Published As

Publication number Publication date
CN101507282B (en) 2012-06-27
CN101507282A (en) 2009-08-12
EP2044773A4 (en) 2011-10-12
CN102685496B (en) 2014-11-05
JP2009543501A (en) 2009-12-03
CN102685496A (en) 2012-09-19
WO2008007792A1 (en) 2008-01-17

Similar Documents

Publication Publication Date Title
US7885471B2 (en) Methods and systems for maintenance and use of coded block pattern information
US8532176B2 (en) Methods and systems for combining layers in a multi-layer bitstream
JP5179484B2 (en) Method and system for communicating multi-layer bitstream data
US7840078B2 (en) Methods and systems for image processing control based on adjacent block characteristics
EP2044773A1 (en) Methods and systems for combining layers in a multi-layer bitstream
US8059714B2 (en) Methods and systems for residual layer scaling
US8422548B2 (en) Methods and systems for transform selection and management
US8130822B2 (en) Methods and systems for conditional transform-domain residual accumulation
US8867618B2 (en) Method and apparatus for weighted prediction for scalable video coding
AU2015230740B2 (en) Method and apparatus of scalable video coding
JP5095750B2 (en) Image bitstream processing method
TWI528831B (en) Methods and apparatus for inter-layer residue prediction for scalable video
WO2019197712A1 (en) An apparatus, a method and a computer program for video coding and decoding
JP6502357B2 (en) Signaling of segmentation information on 3D lookup table for color gamut scalability in multi-layer video coding
CA2950182A1 (en) Systems and methods for selectively performing a bitstream conformance check
EP3020193A1 (en) An apparatus, a method and a computer program for video coding and decoding
WO2012167711A1 (en) Method and apparatus of scalable video coding
KR20120093442A (en) Merging encoded bitstreams
KR20220061245A (en) Video coding and decoding apparatus, method and computer program
WO2024074754A1 (en) An apparatus, a method and a computer program for video coding and decoding

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090205

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): DE FR GB

A4 Supplementary search report drawn up and despatched

Effective date: 20110913

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 7/50 20060101ALI20110907BHEP

Ipc: H04N 7/26 20060101AFI20110907BHEP

17Q First examination report despatched

Effective date: 20120807

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20160203