WO2018131838A1 - Image decoding method and apparatus according to intra prediction in an image coding system - Google Patents

Info

Publication number
WO2018131838A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
intra prediction
current chroma
chroma block
samples
Prior art date
Application number
PCT/KR2018/000246
Other languages
English (en)
Korean (ko)
Inventor
유선미
허진
Original Assignee
LG Electronics Inc. (엘지전자 주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc. (엘지전자 주식회사)
Publication of WO2018131838A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Definitions

  • the present invention relates to an image coding technique, and more particularly, to an image decoding method and apparatus according to intra prediction in an image coding system.
  • the demand for high resolution and high quality images such as high definition (HD) images and ultra high definition (UHD) images is increasing in various fields.
  • as the resolution and quality of image data increase, the amount of information or bit rate transmitted increases relative to existing image data. Therefore, when image data is transmitted over a medium such as a conventional wired/wireless broadband line or stored on a conventional storage medium, transmission and storage costs increase.
  • a high efficiency image compression technique is required to effectively transmit, store, and reproduce high resolution, high quality image information.
  • An object of the present invention is to provide a method and apparatus for improving image coding efficiency.
  • Another object of the present invention is to provide an intra prediction method and apparatus that perform prediction based on a corresponding block of another component for a current chroma block.
  • Another object of the present invention is to provide a prediction method and apparatus for deriving an intra prediction mode of a current chroma block based on a corresponding block of another component.
  • an image decoding method performed by a decoding apparatus.
  • the method includes deriving linear model parameters for the current chroma block based on neighboring samples of the current chroma block and neighboring samples of the corresponding luma block; generating a temporary prediction block of the current chroma block based on the linear model parameters and reconstructed samples of the corresponding luma block; deriving an intra prediction mode of the current chroma block among a plurality of candidate intra prediction modes for the current chroma block based on the temporary prediction block; and deriving a candidate prediction block generated based on the intra prediction mode of the current chroma block as a prediction block for the current chroma block.
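The two derivation steps above (fitting the linear model to the neighboring samples, then applying it to the reconstructed luma samples) can be sketched as follows. This is a minimal illustration assuming an ordinary least-squares fit and floating-point arithmetic; the function names are hypothetical, and an actual apparatus would likely use an integer formulation:

```python
def derive_linear_model(luma_neighbors, chroma_neighbors):
    """Fit chroma ~ alpha * luma + beta over the neighboring sample pairs."""
    n = len(luma_neighbors)
    sx = sum(luma_neighbors)
    sy = sum(chroma_neighbors)
    sxx = sum(x * x for x in luma_neighbors)
    sxy = sum(x * y for x, y in zip(luma_neighbors, chroma_neighbors))
    denom = n * sxx - sx * sx
    alpha = (n * sxy - sx * sy) / denom if denom else 0.0
    beta = (sy - alpha * sx) / n
    return alpha, beta


def temporary_prediction(luma_reconstruction, alpha, beta):
    """Apply the linear model to the reconstructed samples of the luma block."""
    return [[alpha * s + beta for s in row] for row in luma_reconstruction]
```

For example, if the chroma neighbors are an exact linear function of the luma neighbors (chroma = 2 * luma + 5), the fit recovers alpha = 2 and beta = 5.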
  • a decoding apparatus for performing image decoding.
  • the decoding apparatus includes an entropy decoding unit for obtaining prediction information on the current chroma block, and a prediction unit that derives linear model parameters for the current chroma block based on neighboring samples of the current chroma block and neighboring samples of the corresponding luma block, generates a temporary prediction block of the current chroma block based on the linear model parameters and reconstructed samples of the corresponding luma block, derives an intra prediction mode of the current chroma block among a plurality of candidate intra prediction modes for the current chroma block based on the temporary prediction block, and derives a candidate prediction block generated based on the intra prediction mode of the current chroma block as a prediction block for the current chroma block.
  • a video encoding method performed by an encoding apparatus includes deriving linear model parameters for the current chroma block based on neighboring samples of the current chroma block and neighboring samples of the corresponding luma block; generating a temporary prediction block of the current chroma block based on the linear model parameters and reconstructed samples of the corresponding luma block; deriving an intra prediction mode of the current chroma block among a plurality of candidate intra prediction modes for the current chroma block based on the temporary prediction block; deriving a candidate prediction block generated based on the intra prediction mode of the current chroma block as a prediction block for the current chroma block; and generating, encoding, and outputting prediction information about the current chroma block.
  • a video encoding apparatus includes a prediction unit that derives linear model parameters for the current chroma block based on neighboring samples of the current chroma block and neighboring samples of the corresponding luma block, generates a temporary prediction block of the current chroma block based on the linear model parameters and reconstructed samples of the corresponding luma block, derives an intra prediction mode of the current chroma block among a plurality of candidate intra prediction modes for the current chroma block based on the temporary prediction block, and derives a candidate prediction block generated based on the intra prediction mode of the current chroma block as a prediction block for the current chroma block, and an entropy encoding unit that generates, encodes, and outputs prediction information about the current chroma block.
  • according to the present invention, the intra prediction mode of the current chroma block may be derived using a component different from the current chroma block, while prediction is performed based on neighboring samples of the current chroma block. Since a prediction block of the current chroma block that does not include the noise of the other component can be generated, the prediction accuracy of the current chroma block can be improved, thereby improving the overall coding efficiency.
  • according to the present invention, the intra prediction mode that minimizes distortion among the candidate intra prediction modes can be selected as the intra prediction mode of the current chroma block without transmitting information indicating the intra prediction mode of the current chroma block.
  • the amount of bits for prediction of the current chroma block can be reduced, and the overall coding efficiency can be improved.
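The decoder-side selection among the candidate intra prediction modes can be sketched as a distortion comparison against the temporary prediction block. SAD is assumed here as the distortion measure, and the candidate prediction blocks are assumed to be already generated; since both encoder and decoder can perform this identically, no mode index needs to be transmitted:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y)
               for row_a, row_b in zip(block_a, block_b)
               for x, y in zip(row_a, row_b))


def select_chroma_mode(temp_pred, candidate_preds):
    """Pick the candidate mode whose prediction block is closest (by SAD)
    to the temporary prediction block."""
    return min(candidate_preds, key=lambda mode: sad(temp_pred, candidate_preds[mode]))
```

The candidate whose prediction best matches the temporary prediction block is taken as the intra prediction mode of the current chroma block.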
  • FIG. 1 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
  • FIG. 2 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
  • FIG. 3 exemplarily shows the left samples and the top samples used for intra prediction of the current block.
  • FIG. 4 exemplarily shows intra directional modes of 65 prediction directions.
  • FIG. 5 exemplarily shows neighboring samples of the current chroma block and neighboring samples of the corresponding luma block used to derive a relationship between the corresponding luma block and the current chroma block.
  • FIG. 6 illustrates an example of selecting an intra prediction mode of the current chroma block based on a temporary prediction block among a plurality of candidate intra prediction modes in a decoding apparatus.
  • FIG. 7 schematically illustrates a video encoding method by an encoding device according to the present invention.
  • FIG. 8 schematically illustrates a video decoding method by a decoding apparatus according to the present invention.
  • each configuration in the drawings described in the present invention is shown independently for convenience of description of its distinct characteristic function; this does not mean that each configuration is implemented as separate hardware or separate software.
  • two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
  • Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
  • a picture generally refers to a unit representing one image in a specific time period,
  • a slice is a unit constituting a part of a picture in coding.
  • one picture may be composed of a plurality of slices, and if necessary, the terms picture and slice may be used interchangeably.
  • a pixel or a pel may refer to a minimum unit constituting one picture (or image). Also, 'sample' may be used as a term corresponding to a pixel.
  • a sample may generally represent a pixel or a value of a pixel, and may only represent pixel / pixel values of the luma component, or only pixel / pixel values of the chroma component.
  • a unit represents the basic unit of image processing.
  • the unit may include at least one of a specific region of the picture and information related to the region.
  • the unit may be used interchangeably with terms such as block or area in some cases.
  • an M ⁇ N block may represent a set of samples or transform coefficients composed of M columns and N rows.
  • FIG. 1 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
  • the video encoding apparatus 100 may include a picture splitter 105, a predictor 110, a residual processor 120, an entropy encoder 130, an adder 140, a filter 150, and a memory 160.
  • the residual processing unit 120 may include a subtraction unit 121, a transform unit 122, a quantization unit 123, a reordering unit 124, an inverse quantization unit 125, and an inverse transform unit 126.
  • the picture divider 105 may divide the input picture into at least one processing unit.
  • the processing unit may be called a coding unit (CU).
  • the coding unit may be recursively split from the largest coding unit (LCU) according to a quad-tree binary-tree (QTBT) structure.
  • one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure and / or a binary tree structure.
  • the quad tree structure may be applied first and the binary tree structure may be applied later.
  • the binary tree structure may be applied first.
  • the coding procedure according to the present invention may be performed based on the final coding unit that is no longer split.
  • the maximum coding unit may be used directly as the final coding unit based on coding efficiency according to image characteristics, or, if necessary, the coding unit may be recursively split into coding units of lower depths so that a coding unit of an optimal size is used as the final coding unit.
  • the coding procedure may include a procedure of prediction, transform, and reconstruction, which will be described later.
  • the processing unit may include a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • the coding unit may be split from the largest coding unit (LCU) into coding units of deeper depths along the quad tree structure.
  • likewise, the maximum coding unit may be used directly as the final coding unit, or the coding unit may be recursively split into coding units of lower depths so that a coding unit of an optimal size is used as the final coding unit. If a smallest coding unit (SCU) is set, the coding unit may not be split into coding units smaller than the minimum coding unit.
  • here, the final coding unit refers to a coding unit that is the basis for partitioning into a prediction unit or a transform unit.
  • the prediction unit is a unit partitioning from the coding unit and may be a unit of sample prediction. In this case, the prediction unit may be divided into sub blocks.
  • the transform unit may be divided along the quad tree structure from the coding unit, and may be a unit for deriving a transform coefficient and / or a unit for deriving a residual signal from the transform coefficient.
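The recursive quad-tree plus binary-tree (QTBT) partitioning described above can be sketched as follows. The `decide` callback is a hypothetical stand-in for the encoder's actual split decision (e.g. rate-distortion optimization); the function returns the sizes of the final (leaf) coding units:

```python
def qtbt_split(width, height, depth, decide):
    """Recursively partition a (width, height) block.

    decide(w, h, depth) returns 'quad' (four equal sub-blocks),
    'hor'/'ver' (binary split), or None (stop: this is a final coding unit).
    """
    mode = decide(width, height, depth)
    if mode is None:
        return [(width, height)]
    if mode == 'quad':
        children = [(width // 2, height // 2)] * 4
    elif mode == 'hor':
        children = [(width, height // 2)] * 2
    else:  # 'ver'
        children = [(width // 2, height)] * 2
    leaves = []
    for cw, ch in children:
        leaves.extend(qtbt_split(cw, ch, depth + 1, decide))
    return leaves
```

Applying the quad-tree structure first and the binary-tree structure later, as described above, corresponds to `decide` returning 'quad' at shallow depths and 'hor'/'ver' afterwards.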
  • a coding unit may be called a coding block (CB), a prediction unit a prediction block (PB), and a transform unit a transform block (TB).
  • a prediction block or prediction unit may mean a specific area in the form of a block within a picture, and may include an array of prediction samples.
  • a transform block or a transform unit may mean a specific area in a block form within a picture, and may include an array of transform coefficients or residual samples.
  • the prediction unit 110 may perform a prediction on a block to be processed (hereinafter, referred to as a current block) and generate a predicted block including prediction samples of the current block.
  • the unit of prediction performed by the prediction unit 110 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 110 may determine whether intra prediction or inter prediction is applied to the current block. As an example, the prediction unit 110 may determine whether intra prediction or inter prediction is applied on a CU basis.
  • the prediction unit 110 may derive a prediction sample for the current block based on reference samples outside the current block in the picture to which the current block belongs (hereinafter, the current picture). In this case, the prediction unit 110 may (i) derive the prediction sample based on the average or interpolation of neighboring reference samples of the current block, or (ii) derive the prediction sample based on a reference sample present in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. Case (i) may be called a non-directional or non-angular mode, and case (ii) a directional or angular mode.
  • the prediction mode may have, for example, 33 directional prediction modes and at least two non-directional modes.
  • the non-directional mode may include a DC prediction mode and a planar mode.
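Of the non-directional modes, DC prediction is the simplest: the block is filled with the average of the neighboring reference samples. A minimal sketch, assuming the top and left neighbors are available and a rounded integer average is used:

```python
def dc_prediction(top_samples, left_samples, width, height):
    """Fill a width x height block with the rounded average of the
    top and left neighboring reference samples."""
    n = len(top_samples) + len(left_samples)
    dc = (sum(top_samples) + sum(left_samples) + n // 2) // n  # rounded average
    return [[dc] * width for _ in range(height)]
```

For instance, with top neighbors all equal to 10 and left neighbors all equal to 20, every prediction sample becomes 15.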
  • the prediction unit 110 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
  • the prediction unit 110 may derive the prediction sample for the current block based on the sample specified by the motion vector on the reference picture.
  • the prediction unit 110 may apply one of a skip mode, a merge mode, and a motion vector prediction (MVP) mode to derive a prediction sample for the current block.
  • the prediction unit 110 may use the motion information of the neighboring block as the motion information of the current block.
  • in the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • in the MVP mode, the motion vector of the current block may be derived using the motion vector of a neighboring block as a motion vector predictor.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
  • a reference picture including the temporal neighboring block may be called a collocated picture (colPic).
  • the motion information may include a motion vector and a reference picture index.
  • Information such as prediction mode information and motion information may be entropy-encoded and output in the form of a bitstream.
  • the highest picture on the reference picture list may be used as the reference picture.
  • Reference pictures included in a reference picture list may be sorted based on a difference in a picture order count (POC) between a current picture and a corresponding reference picture.
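The ordering rule above can be sketched directly: reference pictures are sorted by the absolute POC distance to the current picture (ties keep their original list order in this sketch):

```python
def order_reference_pictures(current_poc, reference_pocs):
    """Sort reference pictures by absolute POC distance from the current picture."""
    return sorted(reference_pocs, key=lambda poc: abs(current_poc - poc))
```

For example, with a current POC of 8, references with POCs 0, 4, 16, and 6 are ordered as 6, 4, 0, 16, placing the temporally closest pictures first.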
  • the subtraction unit 121 generates a residual sample which is a difference between the original sample and the prediction sample.
  • when the skip mode is applied, residual samples may not be generated as described above.
  • the transform unit 122 generates transform coefficients by transforming the residual sample in units of transform blocks.
  • the transform unit 122 may perform the transform according to the size of the transform block and the prediction mode applied to the coding block or prediction block that spatially overlaps the transform block. For example, if intra prediction is applied to the coding block or prediction block that overlaps the transform block and the transform block is a 4x4 residual array, the residual sample is transformed using a discrete sine transform (DST) kernel; in other cases, the residual sample may be transformed using a discrete cosine transform (DCT) kernel.
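The kernel-selection rule just described (a DST kernel for a 4x4 intra-predicted residual array, a DCT kernel otherwise) can be sketched as a small helper; the string return values are illustrative labels, not an actual transform implementation:

```python
def select_transform_kernel(block_width, block_height, is_intra):
    """Choose the transform kernel for a residual block per the rule above:
    DST for a 4x4 intra-predicted residual array, DCT in other cases."""
    if is_intra and block_width == 4 and block_height == 4:
        return 'DST'
    return 'DCT'
```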
  • the quantization unit 123 may quantize the transform coefficients to generate quantized transform coefficients.
  • the reordering unit 124 rearranges the quantized transform coefficients.
  • the reordering unit 124 may rearrange the quantized transform coefficients in block form into a one-dimensional vector form through a coefficient scanning method. Although described as a separate component, the reordering unit 124 may be part of the quantization unit 123.
  • the entropy encoding unit 130 may perform entropy encoding on the quantized transform coefficients.
  • Entropy encoding may include, for example, encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like.
  • the entropy encoding unit 130 may encode information necessary for video reconstruction other than the quantized transform coefficient (for example, a value of a syntax element) together or separately. Entropy encoded information may be transmitted or stored in units of network abstraction layer (NAL) units in the form of bitstreams.
  • the inverse quantization unit 125 inverse quantizes the quantized values (quantized transform coefficients) in the quantization unit 123, and the inverse transformer 126 inverse transforms the inverse quantized values in the inverse quantization unit 125 to obtain a residual sample.
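The quantization and inverse quantization pair can be illustrated with a uniform-step sketch. The integer step size and rounding offset here are illustrative; the passage does not specify the actual mapping from the quantization parameter to a step size:

```python
def quantize(coeff, step):
    """Uniform quantization of one transform coefficient with rounding."""
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) + step // 2) // step)


def dequantize(level, step):
    """Inverse quantization: scale the level back by the step size."""
    return level * step
```

Note the roundtrip is lossy: a coefficient of 37 with step 10 quantizes to level 4 and dequantizes to 40, which is the distortion the encoder trades against bit rate.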
  • the adder 140 reconstructs the picture by combining the residual sample and the predictive sample.
  • the residual sample and the predictive sample may be added in units of blocks to generate a reconstructed block.
  • the adder 140 may be part of the predictor 110.
  • the adder 140 may be called a reconstruction unit or a reconstruction block generator.
  • the filter unit 150 may apply a deblocking filter and / or a sample adaptive offset to the reconstructed picture. Through deblocking filtering and / or sample adaptive offset, the artifacts of the block boundaries in the reconstructed picture or the distortion in the quantization process can be corrected.
  • the sample adaptive offset may be applied on a sample basis and may be applied after the process of deblocking filtering is completed.
  • the filter unit 150 may apply an adaptive loop filter (ALF) to the reconstructed picture. ALF may be applied to the reconstructed picture after the deblocking filter and / or sample adaptive offset is applied.
  • the memory 160 may store reconstructed pictures (decoded pictures) or information necessary for encoding / decoding.
  • the reconstructed picture may be a reconstructed picture after the filtering process is completed by the filter unit 150.
  • the stored reconstructed picture may be used as a reference picture for (inter) prediction of another picture.
  • the memory 160 may store (reference) pictures used for inter prediction.
  • pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
  • FIG. 2 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
  • the video decoding apparatus 200 may include an entropy decoding unit 210, a residual processor 220, a predictor 230, an adder 240, a filter 250, and a memory 260.
  • the residual processor 220 may include a rearrangement unit 221, an inverse quantization unit 222, and an inverse transform unit 223.
  • the video decoding apparatus 200 may restore video in response to a process in which video information is processed in the video encoding apparatus.
  • the video decoding apparatus 200 may perform video decoding using a processing unit applied in the video encoding apparatus.
  • the processing unit block of video decoding may be, for example, a coding unit, and in another example, a coding unit, a prediction unit, or a transform unit.
  • the coding unit may be split along the quad tree structure and / or binary tree structure from the largest coding unit.
  • the prediction unit and the transform unit may be further used in some cases, in which case the prediction block is a block derived or partitioned from the coding unit and may be a unit of sample prediction. At this point, the prediction unit may be divided into subblocks.
  • the transform unit may be divided along the quad tree structure from the coding unit, and may be a unit for deriving a transform coefficient or a unit for deriving a residual signal from the transform coefficient.
  • the entropy decoding unit 210 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction. For example, the entropy decoding unit 210 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements necessary for video reconstruction and quantized values of transform coefficients for the residual.
  • more specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using the decoding target syntax element information, decoding information of neighboring and decoding target blocks, or information of symbols/bins decoded in a previous step, predicts the probability of occurrence of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
  • after determining the context model, the CABAC entropy decoding method may update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin.
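The context-model update can be illustrated with a simple adaptive probability estimate that moves toward each decoded bin. The multiplicative adaptation rate below is a hypothetical stand-in for CABAC's actual finite-state probability update, used only to show the direction of adaptation:

```python
def update_context(prob_one, bin_value, adaptation_rate=0.05):
    """Move the estimated probability of a '1' bin toward the observed bin.

    prob_one: current estimate P(bin == 1); bin_value: decoded bin (0 or 1).
    """
    return prob_one + adaptation_rate * (bin_value - prob_one)
```

Decoding a 1 raises the estimate, decoding a 0 lowers it, so the context model tracks the local statistics of each syntax element.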
  • information related to prediction among the information decoded by the entropy decoding unit 210 may be provided to the prediction unit 230, and the residual values on which entropy decoding has been performed by the entropy decoding unit 210, that is, the quantized transform coefficients, may be input to the reordering unit 221.
  • the reordering unit 221 may rearrange the quantized transform coefficients in a two-dimensional block form.
  • the reordering unit 221 may perform reordering in response to coefficient scanning performed by the encoding apparatus.
  • although described as a separate component, the reordering unit 221 may be part of the inverse quantization unit 222.
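The reordering performed in response to the encoder's coefficient scanning can be sketched with a diagonal scan: the received one-dimensional coefficient list is placed back into a two-dimensional block. The diagonal pattern here is one common scan order, assumed for illustration:

```python
def diagonal_scan_order(n):
    """(row, col) visiting order for an n x n block, one anti-diagonal at a time."""
    order = []
    for d in range(2 * n - 1):
        for r in range(n):
            c = d - r
            if 0 <= c < n:
                order.append((r, c))
    return order


def reorder_to_block(coeff_list, n):
    """Place a one-dimensional scanned coefficient list back into a 2-D block."""
    block = [[0] * n for _ in range(n)]
    for value, (r, c) in zip(coeff_list, diagonal_scan_order(n)):
        block[r][c] = value
    return block
```

Matching the encoder's scan exactly is what lets the decoder reconstruct the original two-dimensional coefficient layout.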
  • the inverse quantization unit 222 may dequantize the quantized transform coefficients based on the (inverse) quantization parameter and output the transform coefficients.
  • information for deriving a quantization parameter may be signaled from the encoding apparatus.
  • the inverse transform unit 223 may inversely transform transform coefficients to derive residual samples.
  • the prediction unit 230 may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
  • the unit of prediction performed by the prediction unit 230 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 230 may determine whether to apply intra prediction or inter prediction based on the information about the prediction.
  • a unit for determining which of intra prediction and inter prediction is to be applied and a unit for generating a prediction sample may be different.
  • the unit for generating a prediction sample in inter prediction and intra prediction may also be different.
  • whether to apply inter prediction or intra prediction may be determined in units of CUs.
  • in inter prediction, a prediction mode may be determined and a prediction sample generated in PU units, while in intra prediction, a prediction mode may be determined in PU units and a prediction sample generated in TU units.
  • the prediction unit 230 may derive the prediction sample for the current block based on the neighbor reference samples in the current picture.
  • the prediction unit 230 may derive the prediction sample for the current block by applying the directional mode or the non-directional mode based on the neighbor reference samples of the current block.
  • the prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
  • the prediction unit 230 may derive the prediction sample for the current block based on the sample specified on the reference picture by the motion vector on the reference picture.
  • the prediction unit 230 may apply any one of a skip mode, a merge mode, and an MVP mode to derive a prediction sample for the current block.
  • motion information required for inter prediction of the current block provided by the video encoding apparatus, for example, information about a motion vector and a reference picture index, may be obtained or derived based on the prediction information.
  • the motion information of the neighboring block may be used as the motion information of the current block.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • the prediction unit 230 may construct a merge candidate list using motion information of available neighboring blocks, and may use the motion information indicated by the merge index in the merge candidate list as the motion information of the current block.
  • the merge index may be signaled from the encoding device.
  • the motion information may include a motion vector and a reference picture.
  • in the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • in the MVP mode, the motion vector of the current block may be derived using the motion vector of a neighboring block as a motion vector predictor.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • a merge candidate list may be generated by using a motion vector of a reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block, which is a temporal neighboring block.
  • the motion vector of the candidate block selected from the merge candidate list is used as the motion vector of the current block.
  • the information about the prediction may include a merge index indicating a candidate block having an optimal motion vector selected from candidate blocks included in the merge candidate list.
  • the prediction unit 230 may derive the motion vector of the current block by using the merge index.
  • a motion vector predictor candidate list may be generated using a motion vector of a reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block which is a temporal neighboring block.
  • the prediction information may include a prediction motion vector index indicating an optimal motion vector selected from the motion vector candidates included in the list.
  • the prediction unit 230 may select the predicted motion vector of the current block from the motion vector candidates included in the motion vector candidate list using the motion vector index.
  • the prediction unit of the encoding apparatus may obtain a motion vector difference (MVD) between the motion vector of the current block and the motion vector predictor, and may encode the output vector in a bitstream form. That is, MVD may be obtained by subtracting the motion vector predictor from the motion vector of the current block.
  • the prediction unit 230 may obtain a motion vector difference included in the information about the prediction, and derive the motion vector of the current block by adding the motion vector difference and the motion vector predictor.
  • the prediction unit may also obtain or derive a reference picture index or the like indicating a reference picture from the information about the prediction.
  • the adder 240 may reconstruct the current block or the current picture by adding the residual sample and the predictive sample.
  • the adder 240 may reconstruct the current picture by adding the residual sample and the predictive sample in block units. Since the residual is not transmitted when the skip mode is applied, the prediction sample may be a reconstruction sample.
  • the adder 240 has been described in a separate configuration, the adder 240 may be part of the predictor 230. On the other hand, the adder 240 may be called a restoration unit or a restoration block generation unit.
  • the filter unit 250 may apply the deblocking filtering sample adaptive offset, and / or ALF to the reconstructed picture.
  • the sample adaptive offset may be applied in units of samples and may be applied after deblocking filtering.
  • ALF may be applied after deblocking filtering and / or sample adaptive offset.
  • the memory 260 may store reconstructed pictures (decoded pictures) or information necessary for decoding.
  • the reconstructed picture may be a reconstructed picture after the filtering process is completed by the filter unit 250.
  • the memory 260 may store pictures used for inter prediction.
  • pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
  • the reconstructed picture can be used as a reference picture for another picture.
  • the memory 260 may output the reconstructed picture in an output order.
  • the prediction when prediction is performed on the current block, the prediction may be performed based on an intra prediction mode.
  • the intra prediction may be performed based on neighboring samples that have already been encoded / decoded at the decoding time of the current block. That is, the predictive sample of the current block may be reconstructed using the left neighboring samples and the upper neighboring samples of the current block that have already been reconstructed.
  • the left peripheral samples and the upper peripheral samples may be represented as shown in FIG. 3.
  • an intra prediction mode for the current block may be derived, and the current block using at least one of the left neighboring samples and the upper neighboring samples according to the intra prediction mode.
  • a prediction sample for can be generated.
  • the intra prediction mode may include two non-directional intra prediction modes and 33 directional intra prediction modes.
  • the non-directional intra prediction modes may include a planar intra prediction mode and a DC intra prediction mode, and the directional intra prediction modes may include 2 to 34 intra prediction modes.
  • the planner intra prediction mode may be called a planner mode, and the DC intra prediction mode may be called a DC mode.
  • the intra prediction mode 10 may be a horizontal intra prediction mode or a horizontal mode
  • the intra intra prediction mode 26 may be a vertical intra prediction mode or a vertical mode.
  • the prediction direction of angular intra mode) can be expressed in degrees.
  • the relative angle corresponding to each intra prediction mode may be expressed based on the horizontal reference angle 0 ° corresponding to the intra prediction mode 10, and the vertical reference angle corresponding to the intra prediction mode 26 reference 0 °.
  • the relative angle corresponding to each intra prediction mode can be expressed.
  • the intra prediction mode may include two non-directional intra prediction modes and 65 directional intra prediction modes.
  • the non-directional intra prediction modes may include a planar intra prediction mode and a DC intra prediction mode, and the directional intra prediction modes may include 2 to 66 intra prediction modes.
  • 4 exemplarily shows intra directional modes of 65 prediction directions.
  • an intra prediction mode having horizontal directionality and an intra prediction mode having vertical directionality may be distinguished from the intra prediction mode 34 having a left upper diagonal prediction direction.
  • H and V in FIG. 4 mean horizontal directionality and vertical directionality, respectively, and numbers of -32 to 32 represent displacements of 1/32 units on a sample grid position.
  • Intra prediction modes 2 to 33 have horizontal orientation, and intra prediction modes 34 to 66 have vertical orientation.
  • Intra prediction mode 18 and intra prediction mode 50 respectively indicate a horizontal intra prediction mode and a vertical intra prediction mode, and based on this, an angular intra prediction mode is used.
  • the prediction direction can be expressed in degrees.
  • the relative angle corresponding to each intra prediction mode may be expressed based on the horizontal reference angle 0 ° corresponding to the 18th intra prediction mode, and the vertical reference angle corresponding to the 50th intra prediction mode may be expressed as 0 °.
  • the relative angle corresponding to each intra prediction mode can be expressed.
  • a luma component or a chroma component different from the chroma component of the current block that has been decoded and reconstructed may be used for intra prediction on the chroma component of the current block.
  • the chroma Cb component and the chroma Cr component are decoded in the following order.
  • the encoding / decoding information of the luma component may be used.
  • the color format of the input image is a 4: 2: 0 color format
  • the luma component since the luma component has four times as much data as each chroma component, the luma component may be down sampled according to each chroma component.
  • the down sampled luma component may be used for intra prediction of each chroma component.
  • the chroma component of the current block may be represented as a current chroma block
  • a luma component corresponding to the chroma component may be represented as a corresponding luma block.
  • FIG. 5 exemplarily shows neighboring samples of the current chroma block and neighboring samples of the corresponding luma block used to derive a relationship between the corresponding luma block and the current chroma block.
  • neighboring samples of the corresponding luma block corresponding to neighboring samples of the current chroma block may be derived.
  • Peripheral samples of the current chroma block may include left peripheral samples, upper left peripheral samples, and upper peripheral samples of the current chroma block.
  • the left peripheral samples are p [-1] [0] P [-1] [N-1]
  • the upper left peripheral sample is p [-1] [-1]
  • the upper peripheral samples are p [0] [-1] to p [M-1] [- 1] can be.
  • a linear model of the corresponding luma block and the current chroma block may be derived based on the surrounding samples of the corresponding luma block and the surrounding samples of the current chroma block. Prediction samples of the chroma block may be derived based on reconstructed samples of the luma block.
  • the linear model may be represented by a relation between the corresponding luma block and the current chroma block. In other words, the relation is derived based on the neighbor samples of the corresponding luma block and the neighbor samples of the current chroma block, and the prediction samples of the current chroma block are derived based on the relation and the reconstructed samples of the luma block.
  • the intra prediction mode using the correlation between the current chroma block and the corresponding luma block corresponding to the current chroma block may be referred to as a linear model (LM) mode.
  • the parameters of the relational expression may be derived based on the neighboring samples of the current chroma block and the neighboring samples of the corresponding luma block corresponding to the neighboring samples of the current chroma block.
  • the parameters of the relation may include coefficients and offsets of the relation. That is, coefficients and offsets of the relation may be derived based on the neighbor samples of the current chroma block and the neighbor samples of the corresponding luma block. The coefficient may be called a scaling factor.
  • the relational expression derived based on the neighboring samples of the current chroma block and the corresponding luma block may be expressed as the following equation.
  • Ref cb may be a neighboring sample of the current chroma block
  • Ref Y may be a neighboring sample of the corresponding luma block
  • may be a coefficient of the relational expression
  • may represent an offset of the relational expression
  • the parameters ⁇ and ⁇ are calculated by using a least squares method in which the neighboring samples of the current chroma block and the neighboring samples of the corresponding luma block are most similar.
  • peripheral samples of the corresponding luma block may be downsampled and used.
  • the ⁇ and ⁇ can be derived based on the following equation.
  • Equation 2 a difference between both sides of Equation 2 may be regarded as an error E, and the coding apparatus may obtain parameters ⁇ and ⁇ satisfying a condition for minimizing the error.
  • Equation 2 Since the parameters ⁇ and ⁇ to be obtained in Equation 2 are values that minimize errors on both sides, the equation for obtaining the parameters may be expressed as follows.
  • E ( ⁇ , ⁇ ) represents ⁇ and ⁇ values for minimizing errors, where i represents indexing of each sample and ⁇ (lambda) represents a control parameter.
  • the lambda can be predetermined or can be derived based on the surrounding samples of the corresponding luma block, for example.
  • may be set to 0 so that the rear end of Equation 3 may be omitted. The same applies to the equations described below.
  • Equation 3 can be summarized as follows.
  • Equation 4 the parameters ⁇ and ⁇ may be derived as follows.
  • N represents a normalization parameter.
  • N is Can be derived from a part.
  • N may be determined based on the size of the current chroma block.
  • reconstructed samples of the corresponding luma blocks may be applied to the parameters to derive prediction samples of the current chroma block.
  • the relation to which the reconstructed samples of the corresponding luma block are applied may be expressed as the following equation.
  • rec Y '(i, j) is a reconstructed sample of (i, j) coordinates of the corresponding luma block
  • ⁇ Is the coefficient and ⁇ may represent the offset.
  • the rec Y '(i, j) may represent a down-sampled reconstructed sample of the corresponding luma block.
  • the parameter ⁇ and the parameter ⁇ of the relational expression representing the linear model are neighboring samples used for intra prediction of the current luma block and neighboring samples of the current chroma block and neighboring samples of the luma block.
  • intra prediction of the above-described method can be utilized only by transmitting information indicating whether prediction using a linear model is performed on the current chroma block.
  • the embodiment shows a method of predicting the chroma Cb component by using the correlation between the luma component and the chroma Cb component, but based on the correlation between the luma component and the chroma Cb component as well as other components, the intra prediction is performed.
  • intra prediction may be performed by various modeling techniques for defining relationships between components as well as the above-described linear model.
  • the color format of the input image is not limited to the YCbCr 4: 2: 0 color format, and the color format of the input image is a color format capable of conversion between components, the above-described prediction method may be applied. .
  • the color format of the input image is the same color format as the amount of information of all components, such as the RGB color format or the YCbCr 4: 4: 4 color format
  • down-sampling for the luma component may be omitted.
  • a relation between the current chroma block and the corresponding luma block is derived, and a predicted block generated based on reconstruction samples of the relation and the corresponding luma block.
  • block may be derived as a prediction block of the current chroma block, but a prediction block generated based on the relational expression and reconstructed samples of the corresponding luma block and a prediction block generated through a plurality of prediction modes are optimally predicted.
  • a method of deriving an intra prediction mode in which a block is generated as an intra prediction mode of the current chroma block may be proposed.
  • the temporary prediction block described later may represent a prediction block generated based on the relational expression and reconstructed samples of the corresponding luma block.
  • neighboring samples of the current chroma block may be derived, and a temporary prediction block of the current chroma block may be generated based on the neighboring samples, or as described above, neighboring samples of the current chroma block. And a temporary prediction block of the current chroma block may be generated based on a relationship derived from the neighboring samples of the corresponding luma block.
  • candidate prediction blocks of the current chroma block may be generated based on a plurality of candidate intra prediction modes, and a candidate having the least error with the reconstructed block of the luma component among the candidate prediction blocks, that is, the temporary prediction block.
  • the candidate intra prediction mode in which the prediction block is generated may be derived as the intra prediction mode of the current chroma block.
  • the temporary prediction block may be generated by various modeling techniques that define not only the above-described linear model but also relationships between components.
  • the color format of the input image is not limited to the YCbCr 4: 2: 0 color format, and if the color format of the input image is a color format capable of conversion between components, the above-described prediction method is used.
  • Temporary prediction blocks may be generated.
  • the color format of the input image is the same color format as the amount of information of all components, such as the RGB color format or the YCbCr 4: 4: 4 color format, down-sampling for the luma component may be omitted. have.
  • the temporary prediction block is generated based on a relation between the current chroma block and the corresponding luma block, and the intra prediction mode of the current chroma block is derived based on the temporary prediction block.
  • the method may always be applied to the current chroma block if there is a restored corresponding chroma block corresponding to the current chroma block or a restored corresponding chroma block of a chroma component different from the current chroma block.
  • information indicating whether the above-described method is applied in a coding block unit, a slice unit, a picture unit, or a sequence unit may be transmitted, and based on the information, Whether the above-described method is applied to the chroma block can be determined.
  • the type and number of the candidate intra prediction modes for the prediction blocks compared to the temporary prediction block may be mutually promised between the encoding apparatus and the decoding apparatus. That is, the plurality of candidate intra prediction modes may be derived from a plurality of preset intra prediction modes.
  • the encoding apparatus may transmit information indicating the plurality of candidate intra prediction modes to the decoding apparatus.
  • the information representing the plurality of candidate intra prediction modes may be transmitted at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level. Can be.
  • the information indicating the plurality of candidate intra prediction modes may be transmitted by an additional information transmission grammar except for the above-described parameter sets.
  • the prediction block of the current chroma block is generated based on the LM mode described above, the prediction block of the current chroma block is generated based on the reconstructed corresponding luma block, so that a noise portion may be included.
  • an intra prediction mode of the current chroma block may be derived based on the temporary prediction block derived based on the LM mode, and neighboring reference samples of the current chroma block are derived according to the derived intra prediction mode. Since the prediction block of the current chroma block can be generated based on this, the noise portion is not included therein, and thus the prediction accuracy of the current chroma block can be improved.
  • the above-described method may be applied, that is, whether a method of deriving a temporary prediction block based on the LM mode and deriving an intra prediction mode of the current chroma block based on the derived temporary prediction block is indicated.
  • the prediction of the current chroma block may be performed based on various intra prediction modes only by transmission of flag information, and the intra prediction mode minimized from the perspective of distortion among the intra prediction modes is intra prediction of the current chroma block. As a mode can be selected, the effect of improving the prediction accuracy of the current chroma block can be obtained.
  • the above-described method may be added by being defined as a new intra prediction mode, or may replace the existing intra prediction mode.
  • the decoding apparatus may determine whether there is a decoded corresponding block having a component different from that of the current chroma block (S600). For example, when the current chroma block is a block of chroma Cb components, the decoding apparatus may determine whether a corresponding luma block of the decoded luma component or a corresponding chroma block of the chroma Cr component exists.
  • the decoding apparatus may determine whether a corresponding luma block of the decoded luma component or a corresponding chroma block of the chroma Cb component exists. On the other hand, when there is no decoded corresponding block of a component different from the component of the current chroma block, a method of deriving an intra prediction mode of the current chroma block based on the temporary prediction block described above in intra prediction of the current chroma block. May not apply.
  • the decoding apparatus parses a flag indicating whether to derive an intra prediction mode of the current chroma block based on a model for the current chroma block ( parsing) (S605).
  • the decoding apparatus may determine whether the value of the flag is 1 (S610).
  • the model for the current chroma block represents the relationship between the current chroma block and the corresponding block, and the model may represent a linear model or a model other than the linear model.
  • the flag may indicate that the intra prediction mode of the current chroma block is derived based on the model of the current chroma block.
  • the flag When the value of the flag is 0, the flag is It may represent that the intra prediction mode of the current chroma block is not derived based on the model for the current chroma block. That is, the decoding apparatus generates the temporary prediction block based on a relation between the corresponding block and the current chroma block based on the value of the flag (ie, linear model parameters of the current chroma block). The intra prediction mode of the current chroma block may be selected based on the temporary prediction block. When the value of the flag is 0, a method of deriving an intra prediction mode of the current chroma block based on the temporary prediction block described above for intra prediction of the current chroma block may not be applied.
  • the decoding apparatus may derive neighboring samples of the current chroma block and / or derive neighboring samples of the corresponding block and a model of the current chroma block and the corresponding block (S615).
  • the neighboring samples of the corresponding block and the model of the current chroma block and the corresponding block may be converted (S620).
  • the decoding apparatus may downsample the neighboring samples of the corresponding block, wherein the decoding apparatus may represent the model based on the downsampled neighboring samples of the corresponding block and the neighboring samples of the current chroma block.
  • Parameters may be derived, and the parameters may be applied to a reconstruction sample of the corresponding block to derive a temporary prediction block of the current chroma block.
  • the temporary prediction block may be derived based on Equation 7 described above.
  • the decoding apparatus may derive a temporary prediction block by performing intra prediction based on neighboring samples of the current chroma block.
  • the model may be a linear model.
  • the decoding apparatus may set the maximum distortion value as the minimum distortion (S625).
  • MINDIST of FIG. 6 may represent the minimum distortion.
  • the maximum distortion value may be a preset value.
  • the decoding apparatus may set the maximum value of the encoding prediction mode number (S630).
  • the decoding apparatus may derive intra prediction modes to compare based on the temporary prediction block, and may compare intra prediction modes having a number less than or equal to the maximum value and an intra prediction mode having a number less than or equal to the maximum value.
  • intra prediction mode of the current chroma block may be determined. That is, intra prediction modes having a number greater than the maximum value may not be included in intra prediction modes compared based on the temporary prediction block.
  • the maximum value may be a predetermined value or may be derived based on information indicating the received maximum value.
  • the information indicating the maximum value may be transmitted at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
  • the decoding apparatus may determine whether the mode number of the current intra prediction mode of the current chroma block is equal to or less than the maximum value (S635). If the mode number of the current intra prediction mode is less than or equal to the maximum value, the decoding apparatus may generate a prediction block by performing prediction based on the current intra prediction mode (S640).
  • the decoding apparatus may calculate a distortion of the prediction block generated based on the current intra prediction mode (S645), and determine whether the distortion of the prediction block generated based on the current intra prediction mode is smaller than the minimum distortion.
  • the decoding apparatus may calculate a distortion of the prediction block generated based on the current intra prediction mode, and determine whether the distortion is smaller than the minimum distortion.
  • CURDIST of FIG. 6 may represent distortion of the prediction block generated based on the current intra prediction mode.
  • the distortion of the prediction block may indicate a degree of distortion between the prediction block and the temporary prediction block. That is, the distortion of the prediction block may indicate a degree of difference from the temporary prediction block of the prediction block.
  • the decoding apparatus may determine the intra prediction mode of the mode number by adding 1 to the mode number of the current intra prediction mode.
  • the decoding apparatus may set the current intra prediction mode to an optimal intra prediction mode for the current chroma block (S655).
  • the decoding apparatus may set the distortion of the prediction block generated based on the current intra prediction mode as the minimum distortion (S660).
  • the decoding apparatus may determine the intra prediction mode of the mode number by adding 1 to the mode number of the current intra prediction mode (S665). In this way, the decoding apparatus may compare a plurality of predetermined candidate intra prediction modes or a plurality of candidate intra prediction modes derived based on the transmitted information in order of mode number, and an intra block in which a prediction block having a minimum distortion is generated.
  • a prediction mode may be derived as the intra prediction mode of the current chroma block.
  • FIG. 7 schematically illustrates a video encoding method by an encoding device according to the present invention.
  • the method disclosed in FIG. 7 may be performed by the encoding apparatus disclosed in FIG. 1.
  • S700 to S730 of FIG. 7 may be performed by the prediction unit of the encoding apparatus
  • S740 may be performed by the entropy encoding unit of the encoding apparatus.
  • the encoding apparatus derives linear model parameters for the current chroma block based on the neighbor samples of the current chroma block and the corresponding samples of the corresponding luma block (S700).
  • the encoding apparatus may derive the peripheral samples of the current chroma block and the peripheral samples of the corresponding block.
  • the corresponding block may be a corresponding luma block of luma components or a corresponding chroma block of chroma components other than the chroma component of the current chroma block.
  • the corresponding block when the chroma component of the current chroma block is a chroma Cb component, the corresponding block may be a corresponding luma block of a luma component or a corresponding chroma block of a chroma Cr component.
  • the corresponding block when the chroma component of the current chroma block is a chroma Cr component, the corresponding block may be a corresponding luma block of the luma component or a corresponding chroma block of the chroma Cb component.
  • the corresponding luma block or the corresponding chroma block may be a block already encoded at the encoding time of the current chroma block.
  • a corresponding luma block is described, but may be performed by the method for the corresponding chroma block.
  • the peripheral samples of the current chroma block may include left peripheral samples, upper left peripheral samples, and upper peripheral samples of the current chroma block, wherein the peripheral samples of the corresponding luma block are selected from the current chroma block.
  • the peripheral samples of the corresponding luma block corresponding to the peripheral samples may be represented.
  • the left peripheral samples are p [-1] [0] to p [-1] [N-1]
  • the upper left peripheral sample is p [-1] [-1]
  • the upper peripheral samples are p [0] [-1] to p [M-1] [-1 ] Can be.
  • an area including the peripheral samples of the current chroma block for deriving the linear model parameters may be represented as a template of the current chroma block.
  • the template of the current chroma block is p [-1. ] [0] to p [-1] [N-1], p [-1] [-1], p [0] [-1] to p [M-1] [-1]
  • an area including the peripheral samples of the corresponding luma block corresponding to the peripheral samples of the current chroma block may be represented as a template of the corresponding luma block.
  • the encoding apparatus may derive linear model parameters for the current chroma block based on the peripheral samples of the current chroma block and the peripheral samples of the corresponding luma block. That is, the encoding apparatus may derive linear model parameters for the current chroma block based on the template of the current chroma block and the template of the corresponding luma block. Specifically, a relation between the current chroma block and the corresponding luma block may be derived based on the neighbor samples of the current chroma block and the corresponding luma block, wherein the relation is the current chroma block and the corresponding luma. It may be an equation representing a model between blocks.
  • the relational expression may be an equation representing a linear model of the current chroma block and the corresponding luma block, or may be an equation representing a model other than the linear model.
  • the linear model parameters may include coefficients and offsets of the linear model. The parameters may be derived based on Equations 5 and 6 described above.
  • the encoding apparatus may generate a flag indicating whether an intra prediction mode of the current chroma block is derived based on a linear model. It may be determined whether an intra prediction mode of the current chroma block is derived among the plurality of candidate intra prediction modes based on a linear model according to the value of the flag. That is, it may be determined whether to derive the parameters based on the flag. For example, when the value of the flag is 1, the linear model parameters may be derived based on the neighbor samples of the current chroma block and the neighbor samples of the corresponding luma block, and the value of the flag is 0. In this case, the linear model parameters may not be derived.
  • the flag may be transmitted at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or at a block level.
  • the encoding apparatus generates a temporary prediction block of the current chroma block based on the linear model parameters and the reconstructed samples of the corresponding luma block (S710).
  • the encoding apparatus may generate the temporary prediction block of the current chroma block based on the linear model parameters and the reconstructed samples of the corresponding luma block.
  • the linear model parameters may include a coefficient and an offset. The coefficient may be called a scaling factor.
  • the encoding apparatus may generate temporary prediction samples of the current chroma block, that is, the temporary prediction block by substituting the reconstructed samples of the corresponding luma block into a relation representing the linear model. That is, the encoding apparatus may generate the temporary prediction samples (ie, the temporary prediction block) by adding the offset to a value obtained by multiplying the reconstructed samples of the corresponding luma block by the coefficient.
  • the sample of the temporary prediction block may be derived by adding the offset to a value obtained by multiplying the coefficient by the reconstruction sample of the corresponding luma block.
  • the temporary prediction block may be derived based on Equation 7 described above.
  • the encoding apparatus derives an intra prediction mode of the current chroma block among a plurality of candidate intra prediction modes for the current chroma block based on the temporary prediction block (S720).
  • the encoding apparatus may derive the intra prediction mode of the current chroma block among the plurality of candidate intra prediction modes for the current chroma block based on the temporary prediction block.
  • the encoding apparatus may generate candidate prediction blocks of the current chroma block by performing prediction on the current chroma block based on the plurality of candidate intra prediction modes, and, among the candidate prediction blocks,
  • the candidate intra prediction mode for the candidate prediction block most similar to the temporary prediction block may be derived as the intra prediction mode of the current chroma block.
  • the encoding apparatus may derive the candidate prediction block generated based on the intra prediction mode of the current chroma block as the prediction block for the current chroma block.
  • the encoding apparatus may generate candidate prediction blocks of the current chroma block using the neighboring samples of the current chroma block according to each of the plurality of candidate intra prediction modes.
  • the neighboring samples used to generate candidate prediction blocks of the current chroma block may include left neighboring samples, an upper-left neighboring sample, and upper neighboring samples of the current chroma block. When the size of the current chroma block is MxN,
  • the left neighboring samples may be p[-1][0] to p[-1][M+N-1],
  • the upper-left neighboring sample may be p[-1][-1], and
  • the upper neighboring samples may be p[0][-1] to p[M+N-1][-1].
  • the neighboring samples used to generate candidate prediction blocks of the current chroma block may include neighboring samples that are not included in the neighboring samples of the current chroma block (the template of the current chroma block) used for deriving the linear model parameters.
  • the encoding apparatus may derive the distortion of each of the generated candidate prediction blocks from the temporary prediction block, and may derive the candidate intra prediction mode for the candidate prediction block having the smallest distortion as the intra prediction mode of the current chroma block.
  • the distortion of the candidate prediction block with the temporary prediction block may indicate a degree of difference between the candidate prediction block and the temporary prediction block.
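The selection of the candidate mode with the smallest distortion can be sketched as below. The sum of absolute differences (SAD) is used here as one possible distortion measure; the patent does not fix a specific measure, and the function names are illustrative.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2-D blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def select_intra_mode(candidate_blocks, temp_block):
    """Pick the candidate intra prediction mode whose candidate prediction
    block has the smallest distortion from the temporary prediction block.

    candidate_blocks: dict mapping mode number -> candidate prediction block.
    """
    return min(candidate_blocks, key=lambda m: sad(candidate_blocks[m], temp_block))
```

Because both the encoder and the decoder can repeat this comparison against the same temporary prediction block, the selected mode need not be signaled explicitly.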
  • a plurality of candidate intra prediction modes for the current chroma block may be preset.
  • the encoding apparatus may generate information indicating the plurality of candidate intra prediction modes for the current chroma block.
  • the information indicating the plurality of candidate intra prediction modes may include information indicating a maximum value among mode numbers of the plurality of candidate intra prediction modes. When the maximum value is n, the plurality of candidate intra prediction modes may be derived from intra prediction mode 1 to intra prediction mode n.
  • the information indicating the plurality of candidate intra prediction modes may include information indicating a maximum value among mode numbers of the plurality of candidate intra prediction modes and information indicating the number of the candidate intra prediction modes.
  • when the maximum value is n and the number of the candidate intra prediction modes is m, the plurality of candidate intra prediction modes may be derived as intra prediction mode n-m+1 to intra prediction mode n.
  • the information indicating the plurality of candidate intra prediction modes may be transmitted at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
  • the encoding apparatus derives the candidate prediction block generated based on the intra prediction mode of the current chroma block as a prediction block for the current chroma block (S730).
  • the encoding apparatus may derive a candidate prediction block of the current chroma block generated using neighboring samples of the current chroma block as the prediction block for the current chroma block according to the intra prediction mode.
  • the intra prediction mode may be one of two non-directional prediction modes and 33 directional prediction modes. As described above, the two non-directional prediction modes may include an intra DC mode and an intra planar mode. Alternatively, the intra prediction mode may be one of two non-directional intra prediction modes and 65 directional intra prediction modes. As described above, the two non-directional prediction modes may include the intra DC mode and the intra planar mode.
  • the 65 directional intra prediction modes may include vertical directional intra prediction modes and horizontal directional intra prediction modes.
  • the vertical directional intra prediction modes may include intra prediction mode 34 to intra prediction mode 66
  • the horizontal directional intra prediction modes may include intra prediction mode 2 to intra prediction mode 33.
  • the encoding apparatus generates, encodes, and outputs prediction information about the current chroma block (S740).
  • the encoding apparatus may encode the prediction information on the current chroma block and output the encoded information in the form of a bitstream.
  • the prediction information may include information indicating the plurality of candidate intra prediction modes for the current chroma block.
  • the information indicating the plurality of candidate intra prediction modes may include information indicating a maximum value among mode numbers of the plurality of candidate intra prediction modes.
  • the information indicating the plurality of candidate intra prediction modes may include information indicating a maximum value among mode numbers of the plurality of candidate intra prediction modes and information indicating the number of the candidate intra prediction modes.
  • the information indicating the plurality of candidate intra prediction modes may be transmitted at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
  • the prediction information may include a flag indicating whether an intra prediction mode of the current chroma block is derived based on the linear model.
  • the flag may be transmitted at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or at a block level.
  • FIG. 8 schematically illustrates a video decoding method by a decoding apparatus according to the present invention.
  • the method disclosed in FIG. 8 may be performed by the decoding apparatus disclosed in FIG. 2.
  • S800 to S830 of FIG. 8 may be performed by the prediction unit of the decoding apparatus.
  • the decoding apparatus derives linear model parameters for the current chroma block based on the neighbor samples of the current chroma block and the corresponding samples of the corresponding luma block (S800).
  • the decoding apparatus may derive the peripheral samples of the current chroma block and the peripheral samples of the corresponding block.
  • the corresponding block may be a corresponding luma block of luma components or a corresponding chroma block of chroma components other than the chroma component of the current chroma block.
  • the corresponding block when the chroma component of the current chroma block is a chroma Cb component, the corresponding block may be a corresponding luma block of a luma component or a corresponding chroma block of a chroma Cr component.
  • the corresponding block when the chroma component of the current chroma block is a chroma Cr component, the corresponding block may be a corresponding luma block of the luma component or a corresponding chroma block of the chroma Cb component.
  • the corresponding luma block or the corresponding chroma block may be a block already decoded at the decoding time of the current chroma block.
  • hereinafter, the corresponding luma block is described as an example, but the same method may be performed for the corresponding chroma block.
  • the neighboring samples of the current chroma block may include left neighboring samples, an upper-left neighboring sample, and upper neighboring samples of the current chroma block, and the neighboring samples of the corresponding luma block may represent
  • the samples of the corresponding luma block that correspond to the neighboring samples of the current chroma block.
  • when the size of the current chroma block is MxN, the left neighboring samples may be p[-1][0] to p[-1][N-1],
  • the upper-left neighboring sample may be p[-1][-1], and
  • the upper neighboring samples may be p[0][-1] to p[M-1][-1].
  • an area including the peripheral samples of the current chroma block for deriving the linear model parameters may be represented as a template of the current chroma block.
  • the template of the current chroma block may be represented as p[-1][0] to p[-1][N-1], p[-1][-1], and p[0][-1] to p[M-1][-1].
  • an area including the peripheral samples of the corresponding luma block corresponding to the peripheral samples of the current chroma block may be represented as a template of the corresponding luma block.
  • the decoding apparatus may derive linear model parameters for the current chroma block based on the neighbor samples of the current chroma block and the neighbor samples of the corresponding luma block. That is, the decoding apparatus may derive linear model parameters for the current chroma block based on the template of the current chroma block and the template of the corresponding luma block.
  • a relation between the current chroma block and the corresponding luma block may be derived based on the neighbor samples of the current chroma block and the corresponding luma block, wherein the relation is the current chroma block and the corresponding luma. It may be an equation representing a model between blocks.
  • the relational expression may be an equation representing a linear model of the current chroma block and the corresponding luma block, or may be an equation representing a model other than the linear model.
  • the linear model parameters may include coefficients and offsets of the linear model. The parameters may be derived based on Equations 5 and 6 described above.
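A least-squares fit over the two templates, in the spirit of the coefficient/offset derivation referenced above (Equations 5 and 6 are not reproduced in this excerpt), might look as follows. The function and variable names are illustrative, and the flat-template fallback is an assumption of the sketch.

```python
def derive_linear_model_params(luma_template, chroma_template):
    """Least-squares fit chroma ~= alpha * luma + beta over the templates.

    luma_template / chroma_template: lists of co-located neighboring
    reconstructed sample values from the template of the corresponding
    luma block and the template of the current chroma block.
    Returns (alpha, beta): the linear model coefficient and offset.
    """
    n = len(luma_template)
    sum_l = sum(luma_template)
    sum_c = sum(chroma_template)
    sum_ll = sum(l * l for l in luma_template)
    sum_lc = sum(l * c for l, c in zip(luma_template, chroma_template))
    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:
        # flat luma template: fall back to an offset-only model (assumption)
        return 0.0, sum_c / n
    alpha = (n * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta
```

For templates related exactly by chroma = 2 * luma + 5, the fit recovers alpha = 2.0 and beta = 5.0.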
  • a flag indicating whether an intra prediction mode of the current chroma block is derived based on a linear model through a bitstream may be obtained. It may be determined whether an intra prediction mode of the current chroma block is derived among the plurality of candidate intra prediction modes based on a linear model according to the value of the flag.
  • the decoding apparatus may generate the temporary prediction block based on the flag, and determine whether to derive an intra prediction mode of the current chroma block among the plurality of candidate intra prediction modes based on the temporary prediction block. That is, the decoding apparatus may determine whether to derive the parameters based on the flag.
  • for example, when the value of the flag is 1, the parameters of the relation may be derived based on the neighboring samples of the current chroma block and the neighboring samples of the corresponding luma block, and when the value of the flag is 0, the parameters of the relation may not be derived.
  • the flag may be received at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or at a block level.
  • the decoding apparatus generates a temporary prediction block of the current chroma block based on the linear model parameters and the reconstructed samples of the corresponding luma block (S810).
  • the decoding apparatus may generate the temporary prediction block of the current chroma block based on the linear model parameters and the reconstructed samples of the corresponding luma block.
  • the linear model parameters may include a coefficient and an offset.
  • the coefficient may be called a scaling factor.
  • the decoding apparatus may generate temporary prediction samples of the current chroma block, that is, the temporary prediction block by substituting the reconstructed samples of the corresponding luma block into a relation representing the linear model.
  • the decoding apparatus may generate the temporary prediction samples (ie, the temporary prediction block) by adding the offset to a value obtained by multiplying the reconstructed samples of the corresponding luma block by the coefficient.
  • the sample of the temporary prediction block may be derived by adding the offset to a value obtained by multiplying the coefficient by the reconstruction sample of the corresponding luma block.
  • the temporary prediction block may be derived based on Equation 7 described above.
  • the decoding apparatus derives an intra prediction mode of the current chroma block among a plurality of candidate intra prediction modes for the current chroma block based on the temporary prediction block (S820).
  • the decoding apparatus may derive the intra prediction mode of the current chroma block among the plurality of candidate intra prediction modes for the current chroma block based on the temporary prediction block.
  • the decoding apparatus may generate candidate prediction blocks of the current chroma block by performing prediction on the current chroma block based on the plurality of candidate intra prediction modes, and among the candidate prediction blocks.
  • the candidate intra prediction mode for the candidate prediction block most similar to the temporary prediction block may be derived as the intra prediction mode of the current chroma block.
  • the decoding apparatus may generate candidate prediction blocks of the current chroma block using neighboring samples of the current chroma block according to each of the plurality of candidate intra prediction modes. Meanwhile, the neighboring samples used to generate candidate prediction blocks of the current chroma block may include left neighboring samples, an upper-left neighboring sample, and upper neighboring samples of the current chroma block. When the size of the current chroma block is MxN,
  • the left neighboring samples may be p[-1][0] to p[-1][M+N-1],
  • the upper-left neighboring sample may be p[-1][-1], and
  • the upper neighboring samples may be p[0][-1] to p[M+N-1][-1].
  • the neighboring samples used to generate candidate prediction blocks of the current chroma block may include neighboring samples that are not included in the neighboring samples of the current chroma block (the template of the current chroma block) used for deriving the linear model parameters.
  • the decoding apparatus may derive the distortion of each of the generated candidate prediction blocks from the temporary prediction block, and may derive the candidate intra prediction mode for the candidate prediction block having the smallest distortion as the intra prediction mode of the current chroma block.
  • the distortion of the candidate prediction block with the temporary prediction block may indicate a degree of difference between the candidate prediction block and the temporary prediction block.
  • a plurality of candidate intra prediction modes for the current chroma block may be preset.
  • the plurality of candidate intra prediction modes for the current chroma block may be derived based on information indicating the plurality of candidate intra prediction modes for the current chroma block.
  • the decoding apparatus may obtain information indicating the plurality of candidate intra prediction modes for the current chroma block through the bitstream, and based on the information indicating the plurality of candidate intra prediction modes, the current chroma.
  • the plurality of candidate intra prediction modes for a block may be derived.
  • the information indicating the plurality of candidate intra prediction modes may include information indicating a maximum value among mode numbers of the plurality of candidate intra prediction modes.
  • when the maximum value is m, the plurality of candidate intra prediction modes may be derived as intra prediction mode 1 to intra prediction mode m.
  • the information indicating the plurality of candidate intra prediction modes may include information indicating a maximum value among mode numbers of the plurality of candidate intra prediction modes and information indicating the number of the candidate intra prediction modes.
  • when the maximum value is m and the number of the candidate intra prediction modes is n, the plurality of candidate intra prediction modes may be derived as intra prediction mode m-n+1 to intra prediction mode m.
  • Information indicating the plurality of candidate intra prediction modes may be received at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
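Deriving the candidate mode set from the signaled maximum mode number and the signaled count, as described above, can be sketched as follows; the function name is illustrative.

```python
def candidate_mode_range(max_mode, num_modes):
    """Candidate intra prediction modes derived from the signaled maximum
    mode number and the signaled count:
    modes max_mode - num_modes + 1 .. max_mode (inclusive)."""
    return list(range(max_mode - num_modes + 1, max_mode + 1))
```

For example, a maximum of 66 with a count of 4 yields modes 63 to 66, and when only the maximum m is signaled the count may effectively be m, giving modes 1 to m.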
  • the decoding apparatus derives a candidate prediction block generated based on the intra prediction mode of the current chroma block as a prediction block for the current chroma block (S830).
  • the decoding apparatus may derive a candidate prediction block of the current chroma block generated using neighboring samples of the current chroma block as a prediction block for the current chroma block according to the intra prediction mode.
  • the intra prediction mode may be one of two non-directional prediction modes and 33 directional prediction modes. As described above, the two non-directional prediction modes may include an intra DC mode and an intra planar mode. Alternatively, the intra prediction mode may be one of two non-directional intra prediction modes and 65 directional intra prediction modes. As described above, the two non-directional prediction modes may include the intra DC mode and the intra planar mode.
  • the 65 directional intra prediction modes may include vertical directional intra prediction modes and horizontal directional intra prediction modes.
  • the vertical directional intra prediction modes may include intra prediction mode 34 to intra prediction mode 66
  • the horizontal directional intra prediction modes may include intra prediction mode 2 to intra prediction mode 33.
  • the decoding apparatus may directly use a prediction sample as a reconstruction sample according to a prediction mode, or generate a reconstruction sample by adding a residual sample to the prediction sample.
  • the decoding apparatus may receive information about the residual for the target block, and the information about the residual may be included in the information about the reconstructed sample.
  • the information about the residual may include transform coefficients regarding the residual sample.
  • the decoding apparatus may derive the residual sample (or residual sample array) for the target block based on the residual information.
  • the decoding apparatus may generate a reconstructed sample based on the prediction sample and the residual sample, and may derive a reconstructed block or a reconstructed picture based on the reconstructed sample.
  • the decoding apparatus may apply an in-loop filtering procedure, such as deblocking filtering and/or an SAO procedure, to the reconstructed picture in order to improve subjective/objective picture quality as necessary.
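The reconstruction step described above (prediction sample plus residual sample, or the prediction sample used directly when no residual is present) can be sketched as below; the clipping to the bit-depth range is an assumption of the sketch.

```python
def reconstruct_block(pred, residual, bit_depth=8):
    """Reconstructed sample = clip(prediction + residual).

    When residual is None the prediction samples are used directly
    as the reconstruction samples."""
    max_val = (1 << bit_depth) - 1
    if residual is None:
        # no residual signaled: prediction block is the reconstruction
        return [row[:] for row in pred]
    return [[min(max(p + r, 0), max_val) for p, r in zip(pr, rr)]
            for pr, rr in zip(pred, residual)]
```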
  • the decoding apparatus may receive prediction information on the current chroma block and entropy decode through the bitstream.
  • the prediction information may include information indicating the plurality of candidate intra prediction modes for the current chroma block.
  • the information indicating the plurality of candidate intra prediction modes may include information indicating a maximum value among mode numbers of the plurality of candidate intra prediction modes.
  • the information indicating the plurality of candidate intra prediction modes may include information indicating a maximum value among mode numbers of the plurality of candidate intra prediction modes and information indicating the number of the candidate intra prediction modes.
  • Information indicating the plurality of candidate intra prediction modes may be received at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or a block level.
  • the prediction information may include a flag indicating whether an intra prediction mode of the current chroma block is derived based on a linear model.
  • the flag may be received at a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), a slice segment header, or at a block level.
  • according to the present invention, the intra prediction mode of the current chroma block may be derived using a component different from the current chroma block, while the prediction block of the current chroma block is generated by performing prediction based on neighboring samples of the current chroma block, so that noise of the other component is not included; this can improve the prediction accuracy of the current chroma block and thus the overall coding efficiency.
  • according to the present invention, the intra prediction mode with the smallest distortion among the candidate intra prediction modes may be selected as the intra prediction mode of the current chroma block without transmission of information indicating the intra prediction mode of the current chroma block.
  • the amount of bits for prediction of the current chroma block can be reduced, and the overall coding efficiency can be improved.
  • the above-described method according to the present invention may be implemented in software, and the encoding apparatus and/or decoding apparatus according to the present invention may be included in an image processing device such as, for example, a TV, a computer, a smartphone, a set-top box, or a display device.
  • the above-described method may be implemented as a module (process, function, etc.) for performing the above-described function.
  • the module may be stored in memory and executed by a processor.
  • the memory may be internal or external to the processor and may be coupled to the processor by various well known means.
  • the processor may include application-specific integrated circuits (ASICs), other chipsets, logic circuits, and / or data processing devices.
  • the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory card, storage medium and / or other storage device.

Abstract

The present invention relates to an image decoding method performed by a decoding device, the method comprising: a step of deriving linear model parameters of a current chroma block on the basis of neighboring samples of the current chroma block and neighboring samples of a corresponding luma block; a step of generating a temporary prediction block of the current chroma block on the basis of the linear model parameters and reconstruction samples of the corresponding luma block; a step of deriving an intra prediction mode of the current chroma block, among a plurality of candidate intra prediction modes of the current chroma block, on the basis of the temporary prediction block; and a step of deriving, as a prediction block of the current chroma block, a candidate prediction block generated on the basis of the intra prediction mode of the current chroma block.
PCT/KR2018/000246 2017-01-11 2018-01-05 Method and device for image decoding according to intra-prediction in an image coding system WO2018131838A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762445210P 2017-01-11 2017-01-11
US62/445,210 2017-01-11

Publications (1)

Publication Number Publication Date
WO2018131838A1 true WO2018131838A1 (fr) 2018-07-19

Family

ID=62840472

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/000246 WO2018131838A1 (fr) Method and device for image decoding according to intra-prediction in an image coding system

Country Status (1)

Country Link
WO (1) WO2018131838A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113273213A (zh) * 2018-12-31 2021-08-17 韩国电子通信研究院 图像编码/解码方法和设备以及存储比特流的记录介质
CN113396584A (zh) * 2018-12-07 2021-09-14 弗劳恩霍夫应用研究促进协会 用于增强交叉分量线性模型参数的计算的稳健性的编码器、解码器和方法
WO2023197190A1 (fr) * 2022-04-12 2023-10-19 Oppo广东移动通信有限公司 Procédé et appareil de codage, procédé et appareil de décodage, dispositif de codage, dispositif de décodage et support d'enregistrement

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012044126A2 (fr) * 2010-10-01 2012-04-05 삼성전자 주식회사 Procédé et appareil d'intra-prédiction d'image
US20130336591A1 (en) * 2011-03-06 2013-12-19 Lg Electronics Inc. Intra prediction method of chrominance block using luminance sample, and apparatus using same
US20150124881A1 (en) * 2010-04-05 2015-05-07 Samsung Electronics Co., Ltd. Determining intra prediction mode of image coding unit and image decoding unit
WO2016072777A1 (fr) * 2014-11-06 2016-05-12 삼성전자 주식회사 Procédé et dispositif d'encodage/décodage par prédiction intra combinée
WO2016115981A1 (fr) * 2015-01-22 2016-07-28 Mediatek Singapore Pte. Ltd. Procédé de codage vidéo destiné à des composants de chrominance

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113396584A (zh) * 2018-12-07 2021-09-14 弗劳恩霍夫应用研究促进协会 用于增强交叉分量线性模型参数的计算的稳健性的编码器、解码器和方法
CN113273213A (zh) * 2018-12-31 2021-08-17 韩国电子通信研究院 图像编码/解码方法和设备以及存储比特流的记录介质
CN113273213B (zh) * 2018-12-31 2024-07-02 韩国电子通信研究院 图像编码/解码方法和设备以及存储比特流的记录介质
WO2023197190A1 (fr) * 2022-04-12 2023-10-19 Oppo广东移动通信有限公司 Procédé et appareil de codage, procédé et appareil de décodage, dispositif de codage, dispositif de décodage et support d'enregistrement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18739343

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18739343

Country of ref document: EP

Kind code of ref document: A1