WO2013141587A1 - Procédé de prédiction inter-couche et appareil l'utilisant (Inter-layer prediction method and apparatus using same) - Google Patents

Procédé de prédiction inter-couche et appareil l'utilisant (Inter-layer prediction method and apparatus using same)

Info

Publication number
WO2013141587A1
WO2013141587A1 · PCT/KR2013/002264
Authority
WO
WIPO (PCT)
Prior art keywords
prediction mode
prediction
block
mode
layer
Prior art date
Application number
PCT/KR2013/002264
Other languages
English (en)
Korean (ko)
Inventor
임재현
박승욱
김철근
전병문
전용준
박준영
박내리
Original Assignee
엘지전자 주식회사 (LG Electronics Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 (LG Electronics Inc.)
Publication of WO2013141587A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Definitions

  • the present invention relates to video compression techniques, and more particularly, to a method and apparatus for performing scalable video coding.
  • As terminal devices come to support different video qualities and network environments diversify, video of ordinary quality may be used in one environment while higher-quality video is used in another.
  • a consumer who purchases video content on a mobile terminal can view the same video content on a larger screen and at a higher resolution through a large display in the home.
  • Accordingly, it is necessary to provide scalability in the quality of the image, for example, in picture quality, resolution, image size, and frame rate. In addition, various image processing methods associated with such scalability should be discussed.
  • An object of the present invention is to provide an inter-layer prediction method for performing intra prediction on a current layer using intra prediction mode information of another layer, and an apparatus using the same.
  • Another object of the present invention is to provide an interlayer prediction method and an apparatus using the same, which can derive an intra prediction mode of a current layer using a plurality of intra prediction modes of another layer.
  • Another object of the present invention is to provide a signaling method and an apparatus using the same when performing intra prediction on a current layer using other layer information.
  • An embodiment of the present invention includes deriving a neighbor prediction mode from a neighboring block adjacent to a current block, deriving a reference prediction mode from a reference block on a reference layer corresponding to the current block, and deriving a candidate intra prediction mode for the current block based on the neighbor prediction mode and the reference prediction mode.
  • the deriving of the neighboring prediction mode may further include deriving a left prediction mode and an up prediction mode, and deriving an MPM list based on the left prediction mode, the up prediction mode, and the reference prediction mode.
  • Deriving the neighbor prediction mode may include deriving three neighbor prediction modes, and may further include replacing any one of the three neighbor prediction modes with the reference prediction mode.
  • the deriving of the neighbor prediction mode may include deriving three neighbor prediction modes, and may further include deriving an MPM list including the three neighbor prediction modes and the reference prediction mode.
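The two MPM-list variants described above (replace one neighbor-derived candidate with the reference prediction mode, or append the reference prediction mode as an additional candidate) can be sketched as follows. This is an illustrative sketch only; the function names and the choice of which entry to replace are assumptions, not taken from the document.

```python
def build_mpm_list(neighbor_modes, reference_mode, replace_index=0):
    """Replace-one variant: one of the three neighbor-derived candidate
    modes is replaced with the reference prediction mode taken from the
    co-located reference block on the reference layer."""
    mpm = list(neighbor_modes)            # three neighbor prediction modes
    if reference_mode in mpm:
        return mpm                        # already covered; keep as-is
    mpm[replace_index] = reference_mode   # which entry to replace is assumed
    return mpm

def build_mpm_list_extended(neighbor_modes, reference_mode):
    """Append variant: the MPM list holds the three neighbor prediction
    modes plus the reference prediction mode (duplicates removed)."""
    mpm = []
    for m in list(neighbor_modes) + [reference_mode]:
        if m not in mpm:
            mpm.append(m)
    return mpm
```

For example, with neighbor modes [0, 1, 26] and reference mode 10, the replace-one variant yields [10, 1, 26] and the append variant yields [0, 1, 26, 10].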
  • When the reference block comprises a plurality of PUs, the reference prediction mode may be derived from the prediction mode having the lowest index among the prediction modes of the plurality of PUs, or from the prediction mode of the PU that comes first in order.
  • Alternatively, the reference prediction mode may be derived from the most frequent prediction mode among the prediction modes of the plurality of PUs, or from the prediction mode of the PU occupying the widest region of the reference block.
  • a prediction mode closest to an average of prediction modes of the plurality of PUs may be derived as the reference prediction mode.
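The alternative rules above for reducing the prediction modes of several PUs to a single reference prediction mode can be sketched as follows. Integer mode indices and the function names are illustrative assumptions.

```python
from collections import Counter

def reference_mode_lowest_index(pu_modes):
    # rule: prediction mode with the lowest mode index among the PUs
    return min(pu_modes)

def reference_mode_first_in_order(pu_modes):
    # rule: prediction mode of the PU that comes first in order
    return pu_modes[0]

def reference_mode_most_frequent(pu_modes):
    # rule: most frequently occurring prediction mode
    return Counter(pu_modes).most_common(1)[0][0]

def reference_mode_widest_area(pu_modes, pu_areas):
    # rule: prediction mode of the PU covering the widest region
    return pu_modes[pu_areas.index(max(pu_areas))]

def reference_mode_closest_to_average(pu_modes):
    # rule: prediction mode closest to the average of the PU modes
    avg = sum(pu_modes) / len(pu_modes)
    return min(pu_modes, key=lambda m: abs(m - avg))
```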
  • the signal may be encoded only when the prediction mode of the current block is not derived from a candidate intra prediction mode of the MPM list.
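The signaling just described, where the full mode index is encoded only when the current mode is not in the MPM list, might look like the following sketch. The syntax-element names (mpm_flag, mpm_idx, rem_mode) are illustrative, modeled loosely on HEVC-style intra-mode signaling rather than taken from the document.

```python
def encode_intra_mode(current_mode, mpm_list):
    """Encode the intra mode: an MPM index if the mode is in the list,
    otherwise a compacted 'remaining mode' index."""
    if current_mode in mpm_list:
        return {"mpm_flag": 1, "mpm_idx": mpm_list.index(current_mode)}
    # remaining mode: index among modes left after removing MPM entries
    rem = current_mode - sum(1 for m in mpm_list if m < current_mode)
    return {"mpm_flag": 0, "rem_mode": rem}

def decode_intra_mode(syntax, mpm_list):
    """Invert encode_intra_mode, undoing the index compaction."""
    if syntax["mpm_flag"]:
        return mpm_list[syntax["mpm_idx"]]
    mode = syntax["rem_mode"]
    for m in sorted(mpm_list):
        if m <= mode:
            mode += 1
    return mode
```

A round trip through encode/decode recovers the original mode, e.g. mode 10 against the MPM list [0, 1, 26] is sent as rem_mode 8.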
  • intra prediction on the current layer may be performed using intra prediction mode information of another layer.
  • the intra prediction mode of the current layer may be derived by using the plurality of intra prediction modes of another layer to increase the efficiency of inter layer prediction.
  • a signaling method and an apparatus using the same when performing intra prediction on a current layer using different layer information are provided.
  • FIG. 1 is a block diagram schematically illustrating a video encoding apparatus supporting scalability according to an embodiment of the present invention.
  • FIG. 2 is a block diagram schematically illustrating a video decoding apparatus supporting scalability according to an embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating an example of inter-layer prediction in an encoding apparatus and a decoding apparatus that perform scalable coding according to the present invention.
  • FIG. 4 is a diagram illustrating an intra prediction mode for a luminance component.
  • FIG. 5 is a diagram illustrating a current block and neighboring blocks in which intra prediction is performed according to the present invention.
  • FIG. 6 is a diagram for describing a method of deriving a left prediction mode and an up prediction mode from a neighboring block according to the present invention.
  • FIG. 7 is a control flowchart illustrating a method of forming an MPM list when the prediction modes derived from neighboring blocks are the same according to the present invention.
  • FIG. 8 is a control flowchart illustrating a method of forming an MPM list when prediction modes derived from neighboring blocks are different according to the present invention.
  • FIG. 9 illustrates a current block and a reference block according to an embodiment of the present invention.
  • FIG. 10 is a control flowchart illustrating a method of performing intra prediction using a prediction mode of a reference block according to an embodiment of the present invention.
  • FIG. 11 is a control flowchart illustrating a method of deriving two candidate intra prediction modes from a neighboring block according to FIG. 10.
  • FIG. 12 is a control flowchart illustrating a method of constructing an MPM list using a reference prediction mode of a reference block according to an embodiment of FIG. 10.
  • FIGS. 13 and 14 are control flowcharts illustrating a method of performing intra prediction using a prediction mode of a reference block according to another embodiment of the present invention.
  • FIGS. 15 and 16 are control flowcharts illustrating a method of performing intra prediction using a prediction mode of a reference block according to another embodiment of the present invention.
  • FIG. 17 illustrates a reference block according to another embodiment of the present invention.
  • FIG. 18 is a control flowchart illustrating a signaling method of a prediction mode according to an embodiment of the present invention.
  • FIG. 19 is a control flowchart illustrating a signaling method of a prediction mode according to another embodiment of the present invention.
  • FIG. 20 is a control flowchart illustrating a signaling method of a prediction mode according to another embodiment of the present invention.
  • Each of the components in the drawings described in the present invention is shown independently for convenience of description of the different characteristic functions in the video encoding/decoding apparatus; this does not mean that each component is implemented as separate hardware or separate software.
  • two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
  • Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
  • input signals may be processed for each layer.
  • The input signals may differ in at least one of resolution, frame rate, bit depth, color format, and aspect ratio.
  • scalable coding includes scalable encoding and scalable decoding.
  • prediction between layers is performed by using differences between layers, that is, based on scalability, thereby reducing overlapping transmission / processing of information and increasing compression efficiency.
  • FIG. 1 is a block diagram schematically illustrating a video encoding apparatus supporting scalability according to an embodiment of the present invention.
  • the encoding apparatus 100 includes an encoder 105 for layer 1 and an encoder 135 for layer 0.
  • Layer 0 may be a base layer, a reference layer, or a lower layer
  • layer 1 may be an enhancement layer, a current layer, or an upper layer.
  • The encoding unit 105 of layer 1 includes a prediction unit 110, a transform/quantization unit 115, a filtering unit 120, a decoded picture buffer (DPB) 125, an entropy coding unit 130, and a multiplexer (MUX) 165.
  • the encoding unit 135 of the layer 0 includes a prediction unit 140, a transform / quantization unit 145, a filtering unit 150, a DPB 155, and an entropy coding unit 160.
  • the prediction units 110 and 140 may perform inter prediction and intra prediction on the input image.
  • the prediction units 110 and 140 may perform prediction in predetermined processing units.
  • The processing unit on which prediction is performed may be a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • The prediction units 110 and 140 may determine whether to apply inter prediction or intra prediction in CU units, determine a prediction mode in PU units, and perform prediction in PU or TU units. The prediction performed includes generation of a prediction block and generation of a residual block (residual signal).
  • a prediction block may be generated by performing prediction based on information of at least one picture of a previous picture and / or a subsequent picture of the current picture.
  • prediction blocks may be generated by performing prediction based on pixel information in a current picture.
  • In inter prediction, there are a skip mode, a merge mode, a motion vector predictor (MVP) mode, and the like.
  • a reference picture may be selected with respect to the current PU that is a prediction target, and a reference block corresponding to the current PU may be selected within the reference picture.
  • the prediction unit 110 may generate a prediction block based on the reference block.
  • the prediction block may be generated in integer sample units or may be generated in integer or less pixel units.
  • the motion vector may also be expressed in units of integer pixels or units of integer pixels or less.
  • Motion information, that is, information such as a reference picture index and a motion vector, and a residual signal may be entropy-encoded and delivered to the decoding apparatus.
  • When the skip mode is applied, residuals may not be generated, transformed, quantized, or transmitted.
  • In intra prediction, there may be 33 directional prediction modes and at least two non-directional modes.
  • The non-directional modes may include a DC prediction mode and a planar mode.
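As a concrete illustration of this mode set, HEVC-style numbering is assumed below (planar = 0, DC = 1, angular/directional modes 2 through 34); the document itself only states the counts, so the specific indices are an assumption.

```python
PLANAR, DC = 0, 1                   # non-directional modes
ANGULAR_MODES = list(range(2, 35))  # 33 directional modes (indices 2..34)

def is_directional(mode: int) -> bool:
    """True for the 33 angular (directional) intra prediction modes."""
    return 2 <= mode <= 34
```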
  • a prediction block may be generated after applying a filter to a reference sample.
  • The PU may be a block of various sizes/shapes. For example, in the case of inter prediction, the PU may be a 2N×2N block, a 2N×N block, an N×2N block, or an N×N block (N is an integer).
  • In the case of intra prediction, the PU may be a 2N×2N block or an N×N block (N is an integer).
  • The PU of N×N block size may be set to apply only in specific cases.
  • For example, the N×N block size PU may be used only for the smallest CU or only for intra prediction.
  • In addition, PUs such as N×mN blocks, mN×N blocks, 2N×mN blocks, or mN×2N blocks (m < 1) may be further defined and used.
  • the prediction units 110 and 140 may perform prediction on the layer 1 by using the information of the layer 0.
  • a method of predicting information of a current layer using information of another layer is referred to as inter-layer prediction for convenience of description.
  • Information of the current layer that is predicted using information of another layer may include texture, motion information, unit information, predetermined parameters (eg, filtering parameters, etc.).
  • information of another layer used for prediction for the current layer may include texture, motion information, unit information, and predetermined parameters (eg, filtering parameters).
  • inter-layer motion prediction is also referred to as inter-layer inter prediction.
  • prediction of a current block of layer 1 may be performed using motion information of layer 0 (reference layer or base layer).
  • motion information of a reference layer may be scaled.
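A minimal sketch of such scaling, assuming plain proportional scaling of the reference-layer motion vector by the spatial resolution ratio between layers (real codecs use fixed-point arithmetic with defined rounding, which is omitted here):

```python
def scale_mv(mv, base_width, base_height, enh_width, enh_height):
    """Scale a reference-layer (base-layer) motion vector to the
    current (enhancement) layer's resolution."""
    mvx, mvy = mv
    sx = enh_width / base_width    # horizontal resolution ratio
    sy = enh_height / base_height  # vertical resolution ratio
    return (round(mvx * sx), round(mvy * sy))
```

For example, with 2x spatial scalability (960x540 base, 1920x1080 enhancement), the base-layer vector (4, -6) scales to (8, -12).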
  • inter-layer texture prediction is also called inter-layer intra prediction or intra base layer (BL) prediction.
  • Inter layer texture prediction may be applied when a reference block in a reference layer is reconstructed by intra prediction.
  • the texture of the reference block in the reference layer may be used as a prediction value for the current block of the enhancement layer.
  • the texture of the reference block may be scaled by upsampling.
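A minimal upsampling sketch for using the reference-layer texture as the enhancement-layer prediction; nearest-neighbor replication is used here for brevity, whereas actual scalable codecs use interpolation (multi-tap) filters:

```python
def upsample_nearest(block, scale=2):
    """Upsample a reconstructed reference-layer block (list of rows of
    samples) so its texture can serve as the prediction value for the
    co-located enhancement-layer block."""
    out = []
    for row in block:
        up_row = [p for p in row for _ in range(scale)]  # widen the row
        out.extend([up_row[:] for _ in range(scale)])    # repeat it
    return out
```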
  • In inter-layer unit parameter prediction, unit (CU, PU, and/or TU) information of the base layer may be derived and used as unit information of the enhancement layer, or unit information of the enhancement layer may be determined based on unit information of the base layer.
  • the unit information may include information at each unit level.
  • The information about the unit may include information about a partition (CU, PU, and/or TU), information about transform, information about prediction, and information about coding.
  • The information about the PU may include information about a PU partition and information about prediction (eg, motion information, information about a prediction mode, etc.).
  • The information about the TU may include information about a TU partition and information about transform (transform coefficients, transform method, etc.).
  • the unit information may include only the partition information of the processing unit (eg, CU, PU, TU, etc.).
  • inter-layer parameter prediction may derive a parameter used in the base layer to reuse it in the enhancement layer or predict a parameter for the enhancement layer based on the parameter used in the base layer.
  • As examples of inter-layer prediction, inter-layer texture prediction, inter-layer motion prediction, inter-layer unit information prediction, and inter-layer parameter prediction have been described above; however, the inter-layer prediction applicable to the present invention is not limited thereto.
  • The prediction unit may also use inter-layer residual prediction, which predicts the residual of the current layer using residual information of another layer and performs prediction on the current block in the current layer based on the predicted residual.
  • The prediction unit may also perform inter-layer differential prediction, which performs prediction on the current block in the current layer using a difference image between the reconstructed picture of the current layer and a resampled reconstructed picture of another layer.
  • the prediction unit may use interlayer syntax prediction, which is used to predict or generate a texture of a current block using syntax information of another layer as interlayer prediction.
  • the syntax information of the reference layer used for prediction of the current block may be information about an intra prediction mode, motion information, and the like.
  • Inter-layer syntax prediction may reference an intra prediction mode from a block to which the intra prediction mode is applied in the reference layer, and reference motion information (e.g., an MV) from a block to which the inter prediction mode is applied.
  • Even when the reference layer is a P slice or a B slice, a reference block in the slice may be a block to which an intra prediction mode has been applied.
  • In this case, inter-layer prediction may be performed to generate/predict the texture of the current block using the intra prediction mode of the reference block among the syntax information of the reference layer.
  • the transform / quantization units 115 and 145 may perform transform on the residual block in transform block units to generate transform coefficients and quantize the transform coefficients.
  • the transform block is a block of samples and is a block to which the same transform is applied.
  • the transform block can be a transform unit (TU) and can have a quad tree structure.
  • The transform/quantization units 115 and 145 may generate a 2D array of transform coefficients by performing the transform according to the prediction mode applied to the residual block and the size of the block. For example, if intra prediction is applied to a residual block and the block is a 4x4 residual array, the residual block may be transformed using a discrete sine transform (DST); otherwise, the residual block may be transformed using a discrete cosine transform (DCT).
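The selection rule just described can be sketched as follows. The restriction to luma blocks follows common HEVC practice and is an assumption beyond the text, which conditions only on intra prediction and the 4x4 size.

```python
def choose_transform(block_width, block_height, is_intra, is_luma=True):
    """Select the transform for a residual block: DST for a 4x4
    intra-predicted (luma) residual, DCT otherwise."""
    if is_intra and is_luma and block_width == 4 and block_height == 4:
        return "DST"
    return "DCT"
```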
  • the transform / quantization unit 115 and 145 may quantize the transform coefficients to generate quantized transform coefficients.
  • The transform/quantization units 115 and 145 may transfer the quantized transform coefficients to the entropy coding units 130 and 160.
  • In this case, the transform/quantization units 115 and 145 may rearrange the two-dimensional array of quantized transform coefficients into a one-dimensional array according to a predetermined scan order and transfer it to the entropy coding units 130 and 160.
  • the transform / quantizers 115 and 145 may transfer the reconstructed block generated based on the residual and the predictive block to the filtering units 120 and 150 for inter prediction.
  • the transform / quantization units 115 and 145 may skip transform and perform quantization only or omit both transform and quantization as necessary.
  • For example, the transform/quantization units 115 and 145 may omit the transform for a block to which a specific prediction method is applied, a block of a specific size, or a block of a specific size to which a specific prediction method is applied.
  • the entropy coding units 130 and 160 may perform entropy encoding on the quantized transform coefficients.
  • Entropy encoding may use, for example, an encoding method such as Exponential Golomb, Context-Adaptive Binary Arithmetic Coding (CABAC), or the like.
  • the filtering units 120 and 150 may apply a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) to the reconstructed picture.
  • the deblocking filter may remove distortion generated at the boundary between blocks in the reconstructed picture.
  • the adaptive loop filter may perform filtering based on a value obtained by comparing the reconstructed image with the original image after the block is filtered through the deblocking filter.
  • The SAO restores, on a pixel-by-pixel basis, the offset difference from the original image for the image to which the deblocking filter has been applied, and is applied in forms such as a band offset and an edge offset.
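The band-offset form of SAO can be sketched as follows. This is a simplified model: the band count, offset signaling, and CTU-level adaptation used in real codecs are omitted, and the parameter names are illustrative.

```python
def sao_band_offset(pixels, offsets, bit_depth=8, bands=32):
    """Apply a per-band offset to deblocked pixels: the sample range is
    split into equal-width bands and the encoder-signaled offset for a
    pixel's band is added, then the result is clipped to the valid range."""
    band_width = (1 << bit_depth) // bands   # 8 for 8-bit samples, 32 bands
    max_val = (1 << bit_depth) - 1
    out = []
    for p in pixels:
        band = p // band_width               # which band this pixel falls in
        q = p + offsets.get(band, 0)         # add the signaled band offset
        out.append(max(0, min(max_val, q)))  # clip to [0, max_val]
    return out
```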
  • The filtering units 120 and 150 need not apply all of the deblocking filter, ALF, and SAO; they may apply only the deblocking filter, only the deblocking filter and the ALF, or only the deblocking filter and the SAO.
  • The DPBs 125 and 155 may receive reconstructed blocks or reconstructed pictures from the filtering units 120 and 150 and store them.
  • the DPBs 125 and 155 may provide a reconstructed block or picture to the predictors 110 and 140 that perform inter prediction.
  • Information output from the entropy coding unit 160 of layer 0 and information output from the entropy coding unit 130 of layer 1 may be multiplexed by the MUX 165 and output as a bitstream.
  • For convenience of description, the encoding unit 105 of layer 1 has been described as including the MUX 165; however, the MUX may be a device or module separate from the encoding unit 105 of layer 1 and the encoding unit 135 of layer 0.
  • FIG. 2 is a block diagram schematically illustrating a video decoding apparatus supporting scalability according to an embodiment of the present invention.
  • the decoding apparatus 200 includes a decoder 210 of layer 1 and a decoder 250 of layer 0.
  • Layer 0 may be a base layer, a reference layer, or a lower layer
  • layer 1 may be an enhancement layer, a current layer, or an upper layer.
  • The decoding unit 210 of layer 1 may include an entropy decoding unit 215, a reordering unit 220, an inverse quantization unit 225, an inverse transform unit 230, a prediction unit 235, a filtering unit 240, and a memory 245.
  • The decoding unit 250 of layer 0 may include an entropy decoding unit 255, a reordering unit 260, an inverse quantization unit 265, an inverse transform unit 270, a prediction unit 275, a filtering unit 280, and a memory 285.
  • the DEMUX 205 may demultiplex the information for each layer and deliver the information to the decoding device for each layer.
  • the entropy decoding units 215 and 255 may perform entropy decoding corresponding to the entropy coding scheme used in the encoding apparatus. For example, when CABAC is used in the encoding apparatus, the entropy decoding units 215 and 255 may also perform entropy decoding using CABAC.
  • Among the information decoded by the entropy decoding units 215 and 255, information for generating a prediction block is provided to the prediction units 235 and 275, and the residual values on which entropy decoding has been performed, that is, the quantized transform coefficients, may be input to the reordering units 220 and 260.
  • the reordering units 220 and 260 may rearrange the information of the bitstreams entropy decoded by the entropy decoding units 215 and 255, that is, the quantized transform coefficients, based on the reordering method in the encoding apparatus.
  • the reordering units 220 and 260 may rearrange the quantized transform coefficients of the one-dimensional array into the coefficients of the two-dimensional array.
  • the reordering units 220 and 260 may generate a two-dimensional array of coefficients (quantized transform coefficients) by performing scanning based on the prediction mode applied to the current block (transform block) and / or the size of the transform block.
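The inverse-scan step performed by the reordering units can be sketched as follows, assuming an up-right diagonal scan; the actual scan pattern (diagonal, horizontal, or vertical) is selected from the prediction mode and transform-block size, and the helper names here are illustrative.

```python
def diagonal_scan_order(size):
    """Positions of a simple up-right diagonal scan for a size x size
    block, listed in scan order as (row, col) pairs."""
    order = []
    for d in range(2 * size - 1):          # anti-diagonals
        for y in range(size - 1, -1, -1):  # walk each diagonal upward
            x = d - y
            if 0 <= x < size:
                order.append((y, x))
    return order

def reorder_to_2d(coeffs_1d, size):
    """Rearrange a 1-D array of quantized transform coefficients back
    into a 2-D array, inverting the encoder-side scan."""
    block = [[0] * size for _ in range(size)]
    for value, (y, x) in zip(coeffs_1d, diagonal_scan_order(size)):
        block[y][x] = value
    return block
```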
  • the inverse quantizers 225 and 265 may generate transform coefficients by performing inverse quantization based on the quantization parameter provided by the encoding apparatus and the coefficient values of the rearranged block.
  • the inverse transform units 230 and 270 may perform inverse transform on the transform performed by the transform unit of the encoding apparatus.
  • the inverse transform units 230 and 270 may perform inverse DCT and / or inverse DST on a discrete cosine transform (DCT) and a discrete sine transform (DST) performed by an encoding apparatus.
  • The DCT and/or DST in the encoding apparatus may be performed selectively according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction; the inverse transform units 230 and 270 of the decoding apparatus may perform the inverse transform based on the transform information used in the encoding apparatus.
  • the inverse transform units 230 and 270 may apply inverse DCT and inverse DST according to a prediction mode / block size.
  • the inverse transformers 230 and 270 may apply an inverse DST to a 4x4 luma block to which intra prediction is applied.
  • the inverse transform units 230 and 270 may fixedly use a specific inverse transform method regardless of the prediction mode / block size.
  • For example, the inverse transform units 230 and 270 may apply only the inverse DST to all transform blocks.
  • Alternatively, the inverse transform units 230 and 270 may apply only the inverse DCT to all transform blocks.
  • the inverse transformers 230 and 270 may generate a residual or residual block by inversely transforming the transform coefficients or the block of the transform coefficients.
  • The inverse transform units 230 and 270 may also skip the inverse transform as needed, or in accordance with the manner in which encoding was performed in the encoding apparatus. For example, the inverse transform units 230 and 270 may omit the inverse transform for a block to which a specific prediction method is applied, a block of a specific size, or a block of a specific size to which a specific prediction method is applied.
  • The prediction units 235 and 275 may generate a prediction block for the current block based on prediction-block generation information transmitted from the entropy decoding units 215 and 255 and previously decoded block and/or picture information provided by the memories 245 and 285.
  • the prediction units 235 and 275 may perform intra prediction on the current block based on pixel information in the current picture.
  • The prediction units 235 and 275 may perform inter prediction on the current block based on information included in at least one of a previous picture and a subsequent picture of the current picture. Some or all of the motion information required for the inter prediction may be derived by checking the information received from the encoding apparatus.
  • When the skip mode is applied, no residual is transmitted, and the prediction block may be used as the reconstruction block.
  • the prediction unit 235 of layer 1 may perform inter prediction or intra prediction using only information in layer 1, or may perform inter layer prediction using information of another layer (layer 0).
  • For example, the prediction unit 235 of layer 1 may perform prediction on the current block using one of the motion information, texture information, unit information, and parameter information of layer 0.
  • The prediction unit 235 of layer 1 may also perform prediction on the current block using a plurality of pieces of information among the motion information, texture information, unit information, and parameter information of layer 0.
  • The prediction unit 235 of layer 1 may receive motion information of layer 0 from the prediction unit 275 of layer 0 and perform motion prediction.
  • Inter-layer motion prediction is also called inter-layer inter prediction.
  • inter-layer motion prediction prediction of a current block of a current layer (enhanced layer) may be performed using motion information of a reference layer (base layer).
  • The prediction unit 235 may scale and use the motion information of the reference layer when necessary.
  • The prediction unit 235 of layer 1 may receive texture information of layer 0 from the prediction unit 275 of layer 0 and perform texture prediction.
  • Texture prediction is also called inter layer intra prediction or intra base layer (BL) prediction. Texture prediction may be applied when the reference block of the reference layer is reconstructed by intra prediction.
  • In inter-layer intra prediction, the texture of the reference block in the reference layer may be used as a prediction value for the current block of the enhancement layer. In this case, the texture of the reference block may be scaled by upsampling.
  • the predictor 235 of the layer 1 may receive unit parameter information of the layer 0 from the predictor 275 of the layer 0 to perform unit parameter prediction.
  • In unit parameter prediction, unit (CU, PU, and/or TU) information of the base layer may be used as unit information of the enhancement layer, or unit information of the enhancement layer may be determined based on the unit information of the base layer.
  • the predictor 235 of the layer 1 may receive parameter information regarding the filtering of the layer 0 from the predictor 275 of the layer 0 to perform parameter prediction.
  • In parameter prediction, the parameters used in the base layer can be derived and reused in the enhancement layer, or the parameters for the enhancement layer can be predicted based on the parameters used in the base layer.
  • the adders 290 and 295 may generate reconstruction blocks using the prediction blocks generated by the predictors 235 and 275 and the residual blocks generated by the inverse transformers 230 and 270.
  • the adders 290 and 295 can be viewed as separate units (reconstruction block generation units) that generate the reconstruction block.
  • Blocks and / or pictures reconstructed by the adders 290 and 295 may be provided to the filtering units 240 and 280.
  • the filtering unit 240 of the layer 1 may filter the reconstructed picture by using parameter information transmitted from the prediction unit 235 of the layer 1 and/or the filtering unit 280 of the layer 0.
  • the filtering unit 240 may apply filtering to the layer 1 using the parameters predicted from the parameters of the filtering applied in the layer 0.
  • the memories 245 and 285 may store the reconstructed picture or block to use as a reference picture or reference block.
  • the memories 245 and 285 may output the stored reconstructed picture through a predetermined output unit (not shown) or a display (not shown).
  • In the above, the reordering unit, the inverse quantization unit, and the inverse transform unit have been described as separate units; however, as in the encoding apparatus of FIG. 1, two or more of them may also be configured as a single unit.
  • the prediction unit of the layer 1 may be regarded as including an interlayer prediction unit that performs prediction using information of another layer (layer 0) and an inter/intra prediction unit that performs prediction without using the information of another layer (layer 0).
  • FIG. 3 is a block diagram illustrating an example of inter-layer prediction in an encoding apparatus and a decoding apparatus that perform scalable coding according to the present invention.
  • the predictor 300 of layer 1 includes an inter / intra predictor 340 and an interlayer predictor 350.
  • the prediction unit 300 of the layer 1 may perform interlayer prediction necessary for the prediction of the layer 1 from the information of the layer 0.
  • the interlayer prediction unit 350 may receive interlayer prediction information from the prediction unit 320 and / or the filtering unit 330 of the layer 0 to perform interlayer prediction necessary for the prediction of the layer 1.
  • the inter / intra prediction unit 340 of the layer 1 may perform inter prediction or intra prediction using the information of the layer 1 without using the information of the layer 0.
  • the inter / intra predictor 340 of the layer 1 may perform prediction based on the information of the layer 0 using the information transmitted from the interlayer predictor 350.
  • the filtering unit 310 of the layer 1 may perform the filtering based on the information of the layer 0 or may perform the filtering based on the information of the layer 1.
  • Information of the layer 0 may be transferred from the filtering unit 330 of the layer 0 to the filtering unit 310 of the layer 1, or may be transferred from the interlayer prediction unit 350 of the layer 1 to the filtering unit 310 of the layer 1. It may be.
  • the information transmitted from the layer 0 to the interlayer prediction unit 350 may be at least one of information about a unit parameter of the layer 0, motion information of the layer 0, texture information of the layer 0, and filter parameter information of the layer 0.
  • the texture predictor 360 may use the texture of the reference block in the reference layer as a prediction value for the current block of the enhancement layer. In this case, the texture predictor 360 may scale the texture of the reference block by upsampling.
  • the motion predictor 370 may predict the current block of layer 1 (the current layer or the enhancement layer) by using the motion information of the layer 0 (the reference layer or the base layer). In this case, the motion predictor 370 may scale the motion information of the reference layer.
  • the unit information predictor 380 may derive unit (CU, PU, and/or TU) information of the base layer to use it as unit information of the enhancement layer, or may determine unit information of the enhancement layer based on the unit information of the base layer.
  • the parameter predictor 390 may derive the parameters used in the base layer to reuse them in the enhancement layer or predict the parameters for the enhancement layer based on the parameters used in the base layer.
  • As examples of interlayer prediction, interlayer texture prediction, interlayer motion prediction, interlayer unit information prediction, and interlayer parameter prediction have been described above. However, the interlayer prediction applicable to the present invention is not limited thereto.
  • the inter-layer prediction unit may further include a sub-prediction unit that performs inter-layer residual prediction, a sub-prediction unit that performs inter-layer syntax prediction, and / or a sub-prediction unit that performs inter-layer difference prediction.
  • the interlayer residual prediction, the interlayer differential prediction, the interlayer syntax prediction, and the like may be performed using a combination of prediction units.
  • the prediction unit 300 may correspond to the prediction unit 110 of FIG. 1, and the filtering unit 310 may correspond to the filtering unit 120 of FIG. 1.
  • the predictor 320 may correspond to the predictor 140 of FIG. 1
  • the filter 330 may correspond to the filter 150 of FIG. 1.
  • the prediction unit 300 may correspond to the prediction unit 235 of FIG. 2, and the filtering unit 310 may correspond to the filtering unit 240 of FIG. 2.
  • the predictor 320 may correspond to the predictor 275 of FIG. 2
  • the filter 330 may correspond to the filter 280 of FIG. 2.
  • inter-layer prediction for predicting information of a current layer using information of another layer may be performed.
  • For example, interlayer prediction may be performed on the current picture of the layer 1 using information of the reference picture of the layer 0.
  • Intra prediction generates a prediction block for a block to be predicted (hereinafter referred to as a current block) based on the reconstructed pixels in the current layer.
  • FIG. 4 is a diagram illustrating an example of an intra prediction mode.
  • a prediction mode may be largely classified into a directional mode and a non-directional mode according to the direction in which reference pixels used for pixel value prediction are located and a prediction method.
  • FIG. 4 shows intra prediction modes including 33 directional prediction modes and at least two non-directional modes.
  • the non-directional modes may include a DC mode and a planar mode.
  • In the DC mode, a single fixed value, for example, the average value of the surrounding reconstructed pixel values, is used as the prediction value.
  • In the planar mode, vertical interpolation and horizontal interpolation are performed using the vertically adjacent pixel values and the horizontally adjacent pixel values of the current block, and their average is used as the prediction value.
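The two non-directional predictions described above can be sketched as follows. This is a simplified Python illustration assuming 8-bit reconstructed neighbor samples; actual codecs such as HEVC use the top-right and bottom-left reference samples and shift-based rounding for the planar mode, so the exact values differ.

```python
def dc_prediction(top, left):
    """DC mode: a single fixed value, the average of the surrounding
    reconstructed pixel values, serves as the prediction value."""
    samples = list(top) + list(left)
    return sum(samples) // len(samples)

def planar_prediction(top, left, n):
    """Planar mode (simplified): for each pixel of the n x n block,
    average a horizontal and a vertical linear interpolation between
    the adjacent reconstructed samples."""
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            h = (n - 1 - x) * left[y] + (x + 1) * top[n - 1]   # horizontal interpolation
            v = (n - 1 - y) * top[x] + (y + 1) * left[n - 1]   # vertical interpolation
            pred[y][x] = (h + v) // (2 * n)                    # average of the two
    return pred
```

With uniform neighbors the DC value equals the common sample value and the planar block is flat, which matches the intent of both modes.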
  • the directional mode is an angular mode and refers to modes indicating a corresponding direction by an angle between a reference pixel located in a predetermined direction and a current pixel, and may include a horizontal mode and a vertical mode.
  • In the horizontal mode, horizontally adjacent pixel values of the current block are used as prediction values of the current block, and in the vertical mode, vertically adjacent pixel values of the current block are used as prediction values of the current block.
  • Each prediction mode may be specified using a predetermined angle and a mode number.
  • a prediction block may be generated after applying a filter to a reference pixel.
  • the prediction mode for the current block may be transmitted as a value indicating the mode itself, or may be derived from information about the candidate intra prediction modes that may be the prediction mode of the current block.
  • the candidate intra prediction mode for the current block may be derived using the intra prediction mode of the neighboring block adjacent to the current block, and may be referred to as most probable mode (MPM).
  • FIG. 5 is a diagram illustrating a current block and neighboring blocks in which intra prediction is performed according to the present invention.
  • To derive the candidate intra prediction modes, a left neighboring block 520 positioned to the left of the current block 510 and an upper neighboring block 530 positioned above the current block 510 may be used.
  • the sizes of the neighboring blocks 520 and 530 may be the same or different.
  • the MPM list may consist of the left prediction mode, the upper prediction mode, and one prediction mode that does not overlap with the left prediction mode and the upper prediction mode.
  • FIG. 6 is a diagram for describing a method of deriving a left prediction mode and an up prediction mode from neighboring blocks.
  • a process of deriving two candidate intra prediction modes from the left neighboring block 520 and the upper neighboring block 530 will be described with reference to FIG. 6.
  • the intra prediction mode of the left neighboring block 520 is derived as the left prediction mode (S602).
  • Otherwise, the left prediction mode may be set to a predetermined specific prediction mode; the specific prediction mode may be the DC mode (S603).
  • If the upper neighboring block 530 is valid, the upper neighboring block 530 is a block to which intra prediction is applied, and the upper neighboring block 530 is within the coding tree block to which the current block 510 belongs (S604), the intra prediction mode of the upper neighboring block 530 is derived as the upper prediction mode (S605).
  • If the upper neighboring block 530 is not valid and information about its prediction mode does not exist, if the upper neighboring block 530 is not a block to which intra prediction is applied, or if the upper neighboring block 530 is not within the coding tree block to which the current block 510 belongs (S604), that is, if any one of the three conditions is not satisfied, the upper prediction mode may be derived as a predetermined specific prediction mode (S606).
  • the specific prediction mode may be a DC mode.
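The fallback rules of S601 to S606 can be sketched as follows. The block representation (a dict with `intra`, `mode`, and `ctb` fields) and the helper name are illustrative assumptions, not from the disclosure.

```python
PLANAR, DC = 0, 1  # assumed HEVC-style mode numbers

def neighbor_mode(block, current_ctb=None, check_ctb=False):
    """Derive a candidate prediction mode from a neighboring block,
    falling back to the DC mode when the block is missing, is not
    intra coded, or (for the upper neighbor) lies outside the
    coding tree block of the current block."""
    if block is None or not block.get("intra", False):
        return DC
    if check_ctb and block.get("ctb") != current_ctb:
        return DC
    return block["mode"]

left_mode = neighbor_mode({"intra": True, "mode": 10})
up_mode = neighbor_mode({"intra": True, "mode": 26, "ctb": 7},
                        current_ctb=3, check_ctb=True)
```

Here `left_mode` is 10, while `up_mode` falls back to the DC mode because the upper neighbor lies in a different coding tree block.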
  • one additional prediction mode may be further derived to form a candidate mode list, that is, an MPM list.
  • When the prediction mode for the current block is in the MPM list, information indicating which one of the candidate intra prediction modes it is may be encoded and decoded. On the other hand, if the prediction mode for the current block cannot be inferred from the intra prediction modes of the neighboring blocks, the information about the intra prediction mode for the current block may be separately encoded and decoded.
  • FIGS. 7 and 8 are control flowcharts illustrating a process of forming an MPM list including three candidate intra prediction modes using the left prediction mode and the upper prediction mode derived according to FIG. 6.
  • FIG. 7 illustrates a process of forming an MPM list when the left prediction mode and the up prediction mode are the same
  • FIG. 8 illustrates a process of forming an MPM list when the left prediction mode and the up prediction mode are different.
  • In the following, the candidate intra prediction mode indexed first in the MPM list is denoted MPM[0], the second MPM[1], and the third MPM[2].
  • If the left prediction mode and the upper prediction mode are the same (S701), it is determined whether the left prediction mode is the DC mode or the planar mode (S702).
  • If the left prediction mode is the DC mode or the planar mode, MPM[0] is derived as the planar mode, MPM[1] as the DC mode, and MPM[2] as the vertical mode (S703).
  • Otherwise, MPM[0] is derived as the left prediction mode, and MPM[1] and MPM[2] may be derived as the prediction modes having angles adjacent to the left prediction mode.
  • For example, if the left prediction mode is 5, MPM[1] may be derived as 4 and MPM[2] as 6; if the left prediction mode is 26, MPM[1] may be derived as 25 and MPM[2] as 27; and if the left prediction mode is 10, MPM[1] may be derived as 9 and MPM[2] as 11. In each case, MPM[1] and MPM[2] are prediction modes having angles adjacent to the left prediction mode.
  • MPM [1] and MPM [2] may be derived as shown in Equation 1 (S704).
  • MPM[1] = 2 + ((Left Prediction Mode - 2 - 1 + 32) % 32)
  • MPM[2] = 2 + ((Left Prediction Mode - 2 + 1) % 32)
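With angular mode numbers 2 to 34, the modulo arithmetic of Equation 1 selects the two angle-adjacent modes and wraps around at the ends of the angular range; a quick Python check:

```python
def adjacent_angular_modes(left_mode):
    """Equation 1: derive MPM[1] and MPM[2] as the two angular modes
    whose angles are adjacent to the angular left prediction mode."""
    mpm1 = 2 + ((left_mode - 2 - 1 + 32) % 32)
    mpm2 = 2 + ((left_mode - 2 + 1) % 32)
    return mpm1, mpm2
```

This reproduces the examples in the text (5 gives 4 and 6, 26 gives 25 and 27, 10 gives 9 and 11) and wraps at the boundary (left mode 2 gives 33 and 3).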
  • In summary, when the left prediction mode is the same as the upper prediction mode and the left prediction mode is the planar mode or the DC mode, MPM[0] is derived as the planar mode, MPM[1] as the DC mode, and MPM[2] as the vertical mode.
  • When the left prediction mode is the same as the upper prediction mode and is an angular mode, MPM[0] may be derived as the left prediction mode, and MPM[1] and MPM[2] may be derived as the prediction modes having angles adjacent to the left prediction mode.
  • When the left prediction mode and the upper prediction mode are different, MPM[0] is derived as the left prediction mode and MPM[1] as the upper prediction mode, and MPM[2] can be derived as follows. It is determined whether MPM[0] or MPM[1] is the planar mode (S803); if neither MPM[0] nor MPM[1] is the planar mode, MPM[2] is derived as the planar mode (S804).
  • If any one of MPM[0] and MPM[1] is the planar mode, it is determined whether MPM[0] or MPM[1] is the DC mode (S805).
  • If neither MPM[0] nor MPM[1] is the DC mode, MPM[2] is derived as the DC mode (S806).
  • If one of MPM[0] and MPM[1] is the DC mode, MPM[2] may be derived as the vertical mode (S807).
  • In summary, when the left prediction mode and the upper prediction mode are different, MPM[0] and MPM[1] are derived as the left prediction mode and the upper prediction mode, and MPM[2] is derived as the one of the planar mode, the DC mode, and the vertical mode that is not equal to the left prediction mode or the upper prediction mode.
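The branching of FIGS. 7 and 8 condenses into one routine. The sketch below assumes HEVC-style mode numbers (planar = 0, DC = 1, vertical = 26) and is offered as an illustration rather than the normative derivation:

```python
PLANAR, DC, VERTICAL = 0, 1, 26  # assumed HEVC-style numbering

def build_mpm_list(left_mode, up_mode):
    """Form the three-candidate MPM list of FIGS. 7 and 8."""
    if left_mode == up_mode:
        if left_mode in (PLANAR, DC):                  # S702 -> S703
            return [PLANAR, DC, VERTICAL]
        # angular mode: itself plus its two angle-adjacent modes (S704)
        return [left_mode,
                2 + ((left_mode - 2 - 1 + 32) % 32),
                2 + ((left_mode - 2 + 1) % 32)]
    mpm = [left_mode, up_mode]                          # left and up differ
    for mode in (PLANAR, DC, VERTICAL):                 # S803 to S807
        if mode not in mpm:
            mpm.append(mode)
            break
    return mpm
```

The loop at the end encodes the planar-then-DC-then-vertical preference of S803 to S807: the first of the three that is not already a candidate becomes MPM[2].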
  • Table 1 schematically illustrates an example of syntax elements that may be applied when encoding and decoding an intra prediction mode of a current block. This syntax element may be applied in a prediction unit (PU) or a coding unit (CU).
  • the encoding apparatus may encode a prev_intra_luma_pred_flag syntax element indicating whether the intra prediction mode for the current block can be inferred from the intra prediction modes of the adjacent neighboring blocks, as in the example of Table 1.
  • If the intra prediction mode of the current block 510 is one of the candidate intra prediction modes, the encoding apparatus encodes information indicating which one it is, that is, the index information of the MPM list, using the mpm_idx syntax element.
  • If prev_intra_luma_pred_flag is 0, that is, if the intra prediction mode of the current block 510 cannot be inferred from the intra prediction modes of the adjacent neighboring blocks, the encoding apparatus encodes information about the intra prediction mode for the current block 510, among the remaining prediction modes obtained by excluding the candidate intra prediction modes of the MPM list from the 35 intra prediction modes, using the rem_intra_luma_pred_mode syntax element.
  • the decoding apparatus derives a candidate intra prediction mode for the current block 510 to generate an MPM list.
  • the decoding apparatus decodes prev_intra_luma_pred_flag received from the encoding apparatus. If prev_intra_luma_pred_flag is 1, the decoding apparatus determines that the intra prediction mode of the current block 510 may be inferred from the intra prediction modes of the adjacent neighboring blocks, and decodes mpm_idx. By decoding mpm_idx, the intra prediction mode of the current block 510 may be derived as one of the prediction modes in the MPM list.
  • If prev_intra_luma_pred_flag is 0, the intra prediction mode of the current block 510 may be derived by decoding the rem_intra_luma_pred_mode syntax element.
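The signaling of Table 1 can be sketched from the decoder side. The helper name is illustrative, and the remainder-mode mapping shown (incrementing past each sorted MPM candidate) follows the HEVC-style derivation; it is a sketch, not the actual parsing routine:

```python
def decode_intra_mode(prev_intra_luma_pred_flag, mpm_list,
                      mpm_idx=None, rem_intra_luma_pred_mode=None):
    """Recover the intra prediction mode of the current block from
    the decoded syntax elements."""
    if prev_intra_luma_pred_flag == 1:
        # the mode is one of the candidates: pick it by MPM list index
        return mpm_list[mpm_idx]
    # otherwise the mode is one of the remaining modes, numbered
    # after removing the MPM candidates from the 35 intra modes
    mode = rem_intra_luma_pred_mode
    for cand in sorted(mpm_list):
        if mode >= cand:
            mode += 1
    return mode
```

For example, with the MPM list [0, 1, 26], a rem_intra_luma_pred_mode of 0 maps to mode 2, since modes 0 and 1 are already covered by the MPM list.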
  • When performing intra prediction of the current block in the current layer, the MPM list may be formed using only the prediction mode information in the current layer; however, interlayer prediction that derives the prediction mode of the current block using information of another layer may also be performed.
  • An embodiment of the present invention may generate a texture for a current block by using information about an intra prediction mode among information of a reference picture.
  • FIG. 9 is a diagram illustrating a current block and a reference block according to an embodiment of the present invention.
  • the corresponding part of the reference picture 901 corresponding to the current block 910 of the current picture 900 is shown as a reference block 911.
  • the reference block 911 may be positioned according to a resolution ratio of the current picture 900 and the reference picture 901. That is, the coordinates indicating the position of the current block 910 may correspond to specific coordinates of the reference picture 901 according to the resolution ratio.
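The coordinate correspondence described above is a scaling of positions by the resolution ratio. A minimal sketch, assuming uniform scaling and integer truncation (actual systems may round or align the result to a sample grid):

```python
def reference_block_position(cur_x, cur_y, cur_width, ref_width,
                             cur_height=None, ref_height=None):
    """Map the top-left coordinates of the current block in the
    enhancement-layer picture onto the reference-layer picture
    according to the resolution ratio (assumed uniform when only
    widths are given)."""
    scale_x = ref_width / cur_width
    scale_y = (ref_height / cur_height) if cur_height and ref_height else scale_x
    return int(cur_x * scale_x), int(cur_y * scale_y)

# 2x spatial scalability: a block at (64, 32) in a 1280-wide enhancement
# picture corresponds to (32, 16) in the 640-wide base-layer picture
```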
  • This reference block 911 may include one prediction unit or may include a plurality of prediction units.
  • FIG. 10 is a control flowchart illustrating a method of performing intra prediction using a prediction mode of a reference block according to an embodiment of the present invention.
  • This embodiment derives two candidate intra prediction modes from the neighboring blocks 920 and 930 adjacent to the current block 910, and additionally uses the prediction mode applied to the reference block 911 to derive an MPM list including three candidate intra prediction modes.
  • Hereinafter, the prediction mode applied to the reference block 911 of the reference picture 901 is referred to as the reference prediction mode, and the candidate intra prediction modes derived from the neighboring blocks 920 and 930 are referred to as neighbor prediction modes to distinguish them from the reference prediction mode.
  • First, two neighbor prediction modes, that is, the left prediction mode and the upper prediction mode, are derived using information of the neighboring blocks 920 and 930 adjacent to the current block 910 (S1001).
  • The left prediction mode and the upper prediction mode are derived from the neighboring blocks 920 and 930 as follows. If the left neighboring block 920 is valid and the left neighboring block 920 is a block to which intra prediction is applied, the prediction mode for the left neighboring block 920 is derived as the left prediction mode.
  • If the upper neighboring block 930 is valid and information about its prediction mode exists, the upper neighboring block 930 is a block to which intra prediction is applied, and the upper neighboring block 930 is within the same coding tree block as the current block 910, the prediction mode for the upper neighboring block 930 is derived as the upper prediction mode.
  • If these conditions are not satisfied, the left prediction mode and the upper prediction mode may be derived as predetermined specific prediction modes.
  • The specific prediction modes may be the planar mode, the DC mode, and the vertical mode.
  • For example, the left prediction mode may be derived as any one of the planar mode, the DC mode, and the vertical mode, and the upper prediction mode may be derived as one of those modes that differs from the left prediction mode.
  • If the left prediction mode and the upper prediction mode are different, MPM[0] is derived as the left prediction mode and MPM[1] is derived as the upper prediction mode (S1102).
  • If the left prediction mode and the upper prediction mode are the same, it is determined whether the left prediction mode is the planar mode (S1103). If the left prediction mode is not the planar mode, MPM[0] is derived as the left prediction mode and MPM[1] as the planar mode (S1104).
  • Here, the planar mode is exemplary, and MPM[1] may be derived as another prediction mode.
  • If the left prediction mode is the planar mode, MPM[0] may be derived as the left prediction mode and MPM[1] as the DC mode (S1105).
  • the DC mode derived to the MPM [1] is an exemplary prediction mode, and may be derived to a prediction mode other than the DC mode.
  • the present embodiment uses the neighboring blocks 920 and 930 of the current block 910 to derive the MPM [0] and the MPM [1] among the three candidate intra prediction modes.
  • However, the positions at which the left prediction mode and the upper prediction mode are placed in the MPM list may be indexed differently in consideration of the prediction mode of the reference block.
  • Returning to FIG. 10, an MPM list including three candidate intra prediction modes is formed using the prediction mode of the reference block 911 (S1002).
  • the prediction mode for the reference block 911 may be decoded or encoded before decoding or encoding for the current layer, and the prediction mode of the reference block 911 derived in this process may be stored in the DPB or memory.
  • the stored reference prediction mode may be used when generating the MPM list of the current block 910.
  • Table 2 shows an example of an MPM list including a reference prediction mode in intra prediction for the current block 910 according to the present invention.
  • the left prediction mode and the up prediction mode indicate the neighbor prediction modes derived from the neighboring blocks 920 and 930 of the current block 910.
  • the topmost candidate may be assigned the smallest index, and the bottommost candidate may be assigned the largest index.
  • the reference prediction mode derived from the reference block 911 may be included at any one of positions (a) to (c) in the MPM list of Table 2 above.
  • That is, the reference prediction mode may be assigned the smallest index and included at position (a) of the MPM list, assigned the largest index and included at position (c), or assigned the middle index and included at position (b).
  • The position at which the reference prediction mode is indexed may be determined according to whether the left prediction mode derived from the left neighboring block and the upper prediction mode derived from the upper neighboring block are the same, or according to whether the reference prediction mode is identical to the left prediction mode or the upper prediction mode.
  • Hereinafter, step S1002 of constructing the MPM list using the reference prediction mode is described in detail.
  • First, it is determined whether the reference prediction mode is the same as either MPM[0] (the left prediction mode) or MPM[1] (the upper prediction mode) (S1201).
  • If the reference prediction mode is the same as MPM[0] or MPM[1], MPM[2] may be derived as one specific prediction mode such as the planar mode, the DC mode, or the vertical mode (S1202). In this case, MPM[2] is derived as a mode that is not the same as the left prediction mode and the reference prediction mode.
  • If the reference prediction mode is not the same as MPM[0] or MPM[1], MPM[2] may be derived as the reference prediction mode (S1203).
  • That is, the left prediction mode and the upper prediction mode for the current block 910 are configured as MPM[0] and MPM[1] of the MPM list, and the reference prediction mode is derived as MPM[1] or MPM[2] in consideration of whether it is the same as MPM[0] and MPM[1].
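Steps S1001 and S1201 to S1203 can be sketched as follows, assuming the left and upper prediction modes have already been made distinct (as in S1102 to S1105) and assuming HEVC-style mode numbers. The helper is illustrative, not from the specification:

```python
PLANAR, DC, VERTICAL = 0, 1, 26  # assumed HEVC-style numbering

def build_mpm_with_reference(left_mode, up_mode, ref_mode):
    """Three-candidate MPM list using the reference prediction mode."""
    mpm = [left_mode, up_mode]            # S1001: neighbor prediction modes
    if ref_mode in mpm:                   # S1201: reference mode adds nothing new
        # S1202: fall back to a specific mode not already in the list
        for mode in (PLANAR, DC, VERTICAL):
            if mode not in mpm:
                mpm.append(mode)
                break
    else:
        mpm.append(ref_mode)              # S1203: use the reference mode itself
    return mpm
```

The reference prediction mode thus widens the pool of candidates whenever it differs from both neighbor prediction modes, while the list length stays fixed at three.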
  • In this way, the range of reference information from which the MPM candidates can be selected is widened.
  • In addition, since the same number of candidate intra prediction modes is used as when the MPM list is formed using only the left prediction mode and the upper prediction mode derived from the neighboring blocks 920 and 930, this can contribute to improving the parsing throughput of the signal.
  • When the MPM cannot be appropriately derived from the neighboring blocks 920 and 930 (for example, when there is no encoded information on the prediction mode of the neighboring blocks 920 and 930, or when the neighboring blocks 920 and 930 are not predicted in an intra prediction mode), the reference prediction mode can replace it, so that an efficient MPM candidate can be selected.
  • FIG. 13 is a control flowchart illustrating a method of performing intra prediction using a prediction mode of a reference block according to another embodiment of the present invention.
  • any one of the three candidate intra prediction modes derived from the neighboring blocks 920 and 930 is replaced with the reference prediction mode.
  • three candidate intra prediction modes are derived using the left prediction mode and the up prediction mode (S1301).
  • the process of deriving the candidate intra prediction mode according to the present embodiment is omitted because it overlaps with the method described with reference to FIGS. 6 to 8.
  • any one of the three candidate intra prediction modes is replaced with the reference prediction mode (S1302). That is, any one of the three candidate intra prediction modes is replaced with the reference prediction mode so that the reference prediction mode can be utilized when deriving the prediction mode for the current block 910.
  • FIG. 14 is a control flowchart illustrating a method of replacing a candidate intra prediction mode of the MPM list with the reference prediction mode according to the present embodiment.
  • If the reference prediction mode is the same as one of the candidate intra prediction modes in the MPM list, the MPM list is maintained as it is (S1402).
  • the index of the candidate intra prediction mode that is the same as the reference prediction mode may be changed in the MPM list. For example, if the candidate intra prediction mode identical to the reference prediction mode was MPM [2], the reference prediction mode may be indexed as MPM [0] or MPM [1] instead of MPM [2].
  • Otherwise, the reference prediction mode may replace any one candidate intra prediction mode constituting the MPM list (S1403).
  • For example, the reference prediction mode may replace the candidate intra prediction mode having the largest value, or may replace any one of the candidate intra prediction modes from MPM[0] to MPM[2].
  • Alternatively, when the MPM list is derived, a candidate intra prediction mode that was derived not from the prediction mode of the left neighboring block 920 or the upper neighboring block 930 but as a predetermined specific prediction mode may be replaced with the reference prediction mode.
  • That is, a candidate intra prediction mode derived as a specific prediction mode such as the DC mode, the planar mode, or the vertical mode may be replaced with the reference prediction mode.
  • Alternatively, the candidate intra prediction mode having the largest value in the MPM list may be replaced with a mode close to the average of that mode and the reference prediction mode. For example, if the candidate intra prediction mode having the largest value is 5 and the reference prediction mode is 3, the candidate may be replaced with 4.
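One reading of FIG. 14 as code: keep the list when the reference prediction mode is already a candidate (S1402), and otherwise substitute it for one candidate (S1403). The sketch below uses the largest-index candidate as the replacement target; as the text notes, other targets are equally possible:

```python
def replace_with_reference_mode(mpm, ref_mode):
    """FIG. 14 sketch: the list is kept as it is when ref_mode is
    already a candidate (S1402); otherwise one candidate, here the
    last (largest-index) one, is replaced by ref_mode (S1403)."""
    if ref_mode in mpm:
        return list(mpm)                 # MPM list maintained as it is
    out = list(mpm)
    out[-1] = ref_mode                   # one of several possible choices
    return out
```

Because the list length never changes, the mpm_idx signaling is unaffected by the substitution.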
  • Through the above process, an MPM list including three candidate intra prediction modes is generated to derive the prediction mode of the current block 910. If the intra prediction mode of the current block 910 can be inferred from the intra prediction modes of the adjacent neighboring blocks and prev_intra_luma_pred_flag is 1, the encoding apparatus encodes information indicating which of the candidate intra prediction modes is the intra prediction mode of the current block 910, that is, the index information of the MPM list, using the mpm_idx syntax element. At this time, the information about mpm_idx is encoded as a 2-bit signal.
  • the decoding apparatus decodes the prev_intra_luma_pred_flag signal received from the encoding apparatus.
  • If prev_intra_luma_pred_flag is 1, the decoding apparatus determines that the intra prediction mode of the current block 910 may be inferred from the intra prediction modes of the adjacent neighboring blocks, and decodes mpm_idx to derive the intra prediction mode of the current block 910.
  • Information about mpm_idx to be decoded by the decoding apparatus is also a 2-bit signal.
  • FIG. 15 is a control flowchart illustrating a method of performing intra prediction using a prediction mode of a reference block according to another embodiment of the present invention.
  • First, three candidate intra prediction modes are derived using information of the left neighboring block 920 and the upper neighboring block 930 (S1501), and then an MPM list including four candidate intra prediction modes is generated using the prediction mode of the reference block 911 (S1502).
  • Table 3 shows an MPM list including a reference prediction mode in intra prediction for the current block 910 according to an example of the present invention.
  • the first to third candidate intra prediction modes indicate three candidate intra prediction modes derived from the neighboring blocks 920 and 930 of the current block 910.
  • the topmost candidate may be assigned the smallest index, and the bottommost candidate may be assigned the largest index.
  • the reference prediction mode derived from the reference block 911 may be included in any one of (a) to (d) in the MPM list of Table 3 above.
  • the reference prediction mode may be assigned to the smallest index and included in (a) order of the MPM list, or may be assigned to the largest index and included in (d) order of the MPM list. Or it may be assigned to an intermediate index and included in the order (b) or (c).
  • The position at which the reference prediction mode is indexed may be determined according to whether the left prediction mode derived from the left neighboring block and the upper prediction mode derived from the upper neighboring block are the same, or according to whether the reference prediction mode is identical to the left prediction mode or the upper prediction mode.
  • FIG. 16 is a control flowchart for explaining a method of deriving an MPM list having four candidate intra prediction modes including a reference prediction mode.
  • If the reference prediction mode is not the same as any of the three candidate intra prediction modes, the reference prediction mode may be derived as the fourth candidate, that is, MPM[3] (S1602).
  • If the reference prediction mode is the same as any one of the three candidate intra prediction modes included in the MPM list, it is determined whether the three candidate intra prediction modes derived as MPM[0], MPM[1], and MPM[2] consist only of the planar mode, the DC mode, and the vertical mode (S1603). If so, the fourth candidate MPM[3] may be set to a specific prediction mode other than the planar mode, the DC mode, and the vertical mode.
  • The specific prediction mode may be derived as a virtual prediction mode, that is, a dummy mode that is not an actual prediction mode (S1604).
  • If the three candidate intra prediction modes do not consist only of the planar mode, the DC mode, and the vertical mode, MPM[3] may be derived as any one of the planar mode, the DC mode, and the vertical mode that does not overlap with the candidate intra prediction modes derived as MPM[0] to MPM[2] (S1605).
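The four-candidate construction of S1601 to S1605 can be sketched as follows; `DUMMY` stands in for the virtual (dummy) prediction mode of S1604, and the mode numbering is an assumed HEVC-style convention:

```python
PLANAR, DC, VERTICAL = 0, 1, 26   # assumed HEVC-style numbering
DUMMY = -1                        # stand-in for the virtual (dummy) mode

def build_four_candidate_mpm(mpm3, ref_mode):
    """Extend a three-candidate MPM list to four candidates using
    the reference prediction mode (S1601 to S1605)."""
    if ref_mode not in mpm3:                         # S1601/S1602
        return mpm3 + [ref_mode]
    if set(mpm3) == {PLANAR, DC, VERTICAL}:          # S1603
        return mpm3 + [DUMMY]                        # S1604: dummy mode
    for mode in (PLANAR, DC, VERTICAL):              # S1605
        if mode not in mpm3:
            return mpm3 + [mode]
```

Whatever branch is taken, the list always ends up with exactly four entries, which is what allows the fixed 2-bit mpm_idx signaling described below the flowchart.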
  • Through the above process, an MPM list including four candidate intra prediction modes is generated to derive the prediction mode of the current block 910.
  • the encoding apparatus encodes information indicating which of the candidate intra prediction modes is the intra prediction mode of the current block 910, that is, the index information of the MPM list, using the mpm_idx syntax element.
  • the information about mpm_idx is encoded into a 2-bit signal.
  • the decoding apparatus decodes prev_intra_luma_pred_flag received from the encoding apparatus. If prev_intra_luma_pred_flag is 1, the decoding apparatus determines that the intra prediction mode of the current block 910 can be inferred from the intra prediction mode of a neighboring block, and decodes mpm_idx. The information about mpm_idx decoded by the decoding apparatus is also a 2-bit signal.
  • the information on mpm_idx may be a signal of at most 2 bits.
  • the same number of bits as in the case of using three candidate intra prediction modes may be used when signaling mpm_idx. That is, even if the prediction mode of the current block 910 is predicted using four candidate intra prediction modes, no additional bits are required for signaling.
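  The bit-count claim above can be checked with a small helper (an illustrative calculation, not part of the patent): a fixed-length index over n candidates needs ⌈log2 n⌉ bits, which is 2 for both three and four candidates.

```python
import math

def mpm_idx_bits(num_candidates):
    """Bits for a fixed-length index over num_candidates MPM entries."""
    return math.ceil(math.log2(num_candidates))

# Three and four candidates both fit in 2 bits, so extending the MPM
# list from three to four entries adds no signaling overhead.
```

  Note that codecs may in practice use variable-length codes for mpm_idx; this sketch only mirrors the fixed 2-bit description in the text.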
  • FIG. 17 illustrates a reference block according to another embodiment of the present invention.
  • the reference block 1700 is designated according to the resolution ratio of the current picture and the reference picture.
  • the reference block 1700 corresponding to the current block may be a block including a plurality of PUs instead of one PU having one prediction mode. That is, the reference block 1700 may be an area partitioned into a plurality of PUs.
  • the prediction unit may derive the reference prediction mode of the reference block 1700 based on the prediction modes of the plurality of PUs.
  • the reference block 1700 may include PUs having the same size, as shown in (a) of FIG. 17. Alternatively, as illustrated in (b) of FIG. 17, the sizes of the PUs included in the reference block 1700 may differ.
  • the plurality of PUs may be ordered from the top-left to the bottom-right of the reference block 1700.
  • in (a), the top-left PU is PU 0 (1701),
  • the top-right PU is PU 1 (1702),
  • the bottom-left PU is PU 2 (1703),
  • and the bottom-right PU is PU 3 (1704).
  • in (b), all six PUs, PU 0 to PU 5, may be included in the reference block 1700.
  • a prediction mode having the lowest index, that is, the smallest value, among the prediction modes of the plurality of PUs may be derived as the prediction mode representing the reference block 1700.
  • alternatively, the prediction mode of the PU that comes first in the above order may be derived as the prediction mode representing the reference block 1700. That is, the prediction mode of the PU located at the position in the reference picture that corresponds exactly to the starting point of the current block may be derived as the prediction mode of the reference block 1700; in this case, the prediction mode of PU 0 (1701) in FIG. 17 is derived as the prediction mode representing the reference block 1700.
  • alternatively, the prediction mode of the PU present at the position in the reference picture corresponding to the coordinate representing the position of the current block may be derived as the prediction mode representing the reference block 1700.
  • in this case, the prediction mode of PU 0 may be derived as the prediction mode representing the reference block 1700.
  • alternatively, the prediction mode occurring most frequently among the prediction modes of the plurality of PUs may be derived as the prediction mode representing the reference block 1700.
  • if the prediction modes of the plurality of PUs include the planar mode or the DC mode, the planar mode or the DC mode is highly likely to be derived as the reference prediction mode.
  • alternatively, the prediction mode of the PU occupying the largest area of the reference block 1700 may be derived as the prediction mode representing the reference block 1700.
  • in this case, the prediction mode of PU 4 in (b) is derived as the prediction mode representing the reference block 1700.
  • alternatively, an average of the prediction modes of the plurality of PUs may be calculated, and the prediction mode closest to the average or the prediction mode corresponding to the integer part of the average may be derived as the prediction mode representing the reference block 1700.
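  The alternative rules above for choosing one representative mode can be sketched as follows. This is an illustrative sketch only: representing each PU as a hypothetical (mode, area) pair is an assumption, not the patent's data model.

```python
from collections import Counter

def rep_mode_lowest(pus):
    """Smallest mode value among the PUs."""
    return min(mode for mode, _ in pus)

def rep_mode_first(pus):
    """Mode of the first PU in top-left to bottom-right order (PU 0)."""
    return pus[0][0]

def rep_mode_most_frequent(pus):
    """Most frequently occurring mode among the PUs."""
    return Counter(mode for mode, _ in pus).most_common(1)[0][0]

def rep_mode_largest_area(pus):
    """Mode of the PU covering the largest area of the reference block."""
    return max(pus, key=lambda p: p[1])[0]
```

  With PUs `[(10, 4), (0, 4), (10, 8), (26, 16)]`, the four rules select modes 0, 10, 10, and 26 respectively, showing that the choice of rule can change the derived reference prediction mode.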
  • the reference prediction mode may be derived by using any one of the methods described above, which are exemplary, or by combining them.
  • the reference prediction mode of the reference block may be signaled as the prediction mode of the current block independently of the candidate intra prediction modes of the MPM list.
  • the encoding apparatus signals that the reference prediction mode of the reference block is the prediction mode of the current block, as shown in FIG. 18.
  • Such signaling may be implemented through a flag indicating whether the reference prediction mode of the reference block is the prediction mode of the current block.
  • the encoding apparatus may encode the information about the reference prediction mode into an independent signal and transmit the encoded information to the decoding apparatus (S1802).
  • through the flag indicating whether the reference prediction mode is the prediction mode of the current block, the decoding apparatus determines whether this is the case; if the flag is 1, it receives the information about the reference prediction mode and derives the prediction mode of the current block from it.
  • FIG. 19 is a control flowchart illustrating a signaling method of a prediction mode according to another embodiment of the present invention.
  • the encoding apparatus may encode the reference prediction mode information into an independent signal and transmit it to the decoding apparatus (S1902).
  • the encoding apparatus may signal that the reference prediction mode of the reference block is the prediction mode of the current block by using a flag indicating whether the intra prediction mode is selected from among the candidate intra prediction modes of the current layer or is the intra prediction mode of the reference layer.
  • to reduce the amount of signaling, the flag may be transmitted from the encoding apparatus to the decoding apparatus only when the prediction mode for the current block is not selected from among the candidate intra prediction modes derived from the current picture.
  • a signal prev_intra_luma_pred_flag indicating whether the prediction mode of the current block can be inferred from the candidate intra prediction mode derived from the current picture may be signaled.
  • if the prediction mode of the current block can be inferred from the candidate intra prediction modes, the encoding apparatus signals the prediction mode for the current block using the mpm_idx syntax element (S1904).
  • otherwise, the encoding apparatus signals the prediction mode for the current block by using the rem_intra_luma_pred_mode syntax element (S1905).
  • the decoding apparatus receives the flag indicating whether the intra prediction mode is selected from among the candidate intra prediction modes of the current layer or is the reference prediction mode of the reference layer; if the flag is 1, it decodes the reference prediction mode information to derive the prediction mode for the current block.
  • the decoding apparatus decodes a signal indicating whether the prediction mode of the current block can be inferred from the candidate intra prediction mode derived from the current picture.
  • if the prediction mode of the current block can be inferred from the candidate intra prediction modes derived from the current picture, the decoding apparatus decodes the mpm_idx syntax element to derive the prediction mode of the current block; if it cannot be inferred from them, the decoding apparatus decodes the rem_intra_luma_pred_mode syntax element to derive the prediction mode of the current block.
  • FIG. 20 is a control flowchart for explaining encoding and decoding of a prediction mode according to another embodiment of the present invention.
  • the encoding apparatus first signals a flag indicating whether the prediction mode of the current block is inferred from the candidate intra prediction modes derived from the current picture.
  • if so, the encoding apparatus signals the prediction mode for the current block using the mpm_idx syntax element (S2002).
  • otherwise, the encoding apparatus may signal a flag indicating whether the prediction mode for the current block can be derived from the reference prediction mode.
  • if the prediction mode for the current block can be derived from the reference prediction mode (S2003), the encoding apparatus signals the reference prediction mode (S2004).
  • otherwise, the encoding apparatus signals the prediction mode itself by using the rem_intra_luma_pred_mode syntax element (S2005).
  • the decoding apparatus receives the flag indicating whether the prediction mode of the current block can be inferred from the candidate intra prediction modes derived from the current picture and, if the flag is 1, decodes the mpm_idx syntax element to derive the prediction mode of the current block.
  • if the flag is 0, the decoding apparatus receives a flag indicating whether the reference prediction mode of the reference layer is the intra prediction mode of the current block; if this flag is 1, it decodes the reference prediction mode information to derive the prediction mode for the current block.
  • if this flag is 0, the decoding apparatus decodes the information about the prediction mode of the current block to derive the prediction mode of the current block.
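  The decoder-side branching described for FIG. 20 can be sketched as follows. The bitstream is modeled as a dict of already-parsed values; all field names other than the syntax elements quoted in the text (prev_intra_luma_pred_flag, mpm_idx, rem_intra_luma_pred_mode) are assumptions made for illustration.

```python
def decode_mode_fig20(bs, mpm_list):
    """Derive the current block's intra prediction mode per the FIG. 20 flow."""
    if bs["prev_intra_luma_pred_flag"] == 1:
        # mode is inferred from the current picture's candidate list (S2002 path)
        return mpm_list[bs["mpm_idx"]]
    if bs["ref_mode_flag"] == 1:
        # mode is taken over from the reference layer (S2003/S2004 path)
        return bs["ref_prediction_mode"]
    # remaining-mode signaling (S2005 path)
    return bs["rem_intra_luma_pred_mode"]
```

  The design point this sketch highlights is the ordering of the two flags: the reference-layer flag is only consulted (and, per the text, only transmitted) when the mode could not be inferred from the current picture's candidates.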
  • the MPM list may be composed of one candidate intra prediction mode derived from the current picture and two reference prediction modes derived from the reference picture, or of two candidate intra prediction modes derived from the current picture and one reference prediction mode derived from the reference picture.
  • in this specification, an array of samples reconstructed at a specific time point (for example, a picture order count (POC) or an access unit (AU)) may be referred to as a 'picture'.
  • the entire sample array reconstructed or to be reconstructed at a specific time point in the layer that is decoded and output (the current layer) may be called a picture, distinguishing it from the reconstructed or to-be-reconstructed sample array of the referenced layer.
  • the sample array reconstructed or to be reconstructed at a specific time point in the referenced layer may be referred to as a representation, a reference layer picture, a reference layer sample array, a reference layer texture, or the like.
  • one decoded picture reconstructed in the current layer may be output for one AU.


Abstract

The present invention relates to an inter-layer prediction method and an apparatus using the same, comprising the steps of: deriving a neighboring prediction mode from a neighboring block adjacent to the current block; deriving a reference prediction mode from a reference block on a reference layer corresponding to the current block; and deriving a candidate intra prediction mode for the current block on the basis of the neighboring prediction mode and the reference prediction mode. Thus, intra prediction mode information of another layer can be used to perform intra prediction for the current layer.
PCT/KR2013/002264 2012-03-20 2013-03-20 Procédé de prédiction inter-couche et appareil l'utilisant WO2013141587A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261613479P 2012-03-20 2012-03-20
US61/613,479 2012-03-20

Publications (1)

Publication Number Publication Date
WO2013141587A1 true WO2013141587A1 (fr) 2013-09-26

Family

ID=49222970

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/002264 WO2013141587A1 (fr) 2012-03-20 2013-03-20 Procédé de prédiction inter-couche et appareil l'utilisant

Country Status (1)

Country Link
WO (1) WO2013141587A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278236A (zh) * 2015-08-28 2022-11-01 株式会社Kt 对图像进行解码或编码的方法和非暂态计算机可读介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007104117A (ja) * 2005-09-30 2007-04-19 Seiko Epson Corp 画像処理装置及び画像処理方法をコンピュータに実行させるためのプログラム
KR20110019855A (ko) * 2009-08-21 2011-03-02 에스케이 텔레콤주식회사 가변 길이 부호를 이용한 인트라 예측모드 부호화 방법과 장치, 및 이를 위한기록 매체
KR20110073263A (ko) * 2009-12-21 2011-06-29 한국전자통신연구원 인트라 예측 부호화 방법 및 부호화 방법, 그리고 상기 방법을 수행하는 인트라 예측 부호화 장치 및 인트라 예측 복호화 장치
US20110292994A1 (en) * 2010-05-30 2011-12-01 Lg Electronics Inc. Enhanced intra prediction mode signaling

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KIM, DAE-YEON ET AL.: "A new method for estimating Intra prediction mode in H. 264/AVC", 2008 IMAGE PROCESSING AND IMAGE UNDERSTANDING WORKSHOP., 20 February 2008 (2008-02-20) *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13763745

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13763745

Country of ref document: EP

Kind code of ref document: A1