WO2020242145A1 - Video coding method and apparatus using an adaptive parameter set
- Publication number
- WO2020242145A1 (application PCT/KR2020/006704)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prediction
- parameter set
- aps
- quantization
- unit
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/124—Quantisation
- H04N19/126—Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/172—Adaptive coding characterised by the coding unit, the unit being a picture, frame or field
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being a block, e.g. a macroblock
- H04N19/18—Adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
- H04N19/188—Adaptive coding characterised by the coding unit, the unit being a video data packet, e.g. a network abstraction layer [NAL] unit
- H04N19/30—Coding using hierarchical techniques, e.g. scalability
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/70—Characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/82—Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
- H04N19/88—Pre-processing or post-processing involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation
Definitions
- the present invention relates to a video signal processing method and apparatus.
- A block splitting structure defines the unit on which encoding and decoding are performed, and the unit to which major coding tools such as prediction and transformation are applied.
- Conventionally, video encoding and decoding are performed using unit blocks subdivided according to a quadtree block splitting structure, which also serves as the unit for prediction and transformation.
- More flexible splitting structures, such as QTBT (QuadTree plus Binary Tree), which combines a quadtree with a binary tree, and MTT (Multi-Type Tree), which further adds a ternary tree, have been proposed to improve video coding efficiency.
- An object of the present disclosure is to improve the coding efficiency of a video signal.
- An object of the present disclosure is to provide a method and apparatus for efficiently defining and managing various parameters applied in units of pictures or slices.
- An object of the present disclosure is to provide a method and apparatus for obtaining a scaling list for quantization/dequantization.
- the present invention provides a video coding method and apparatus using an adaptive parameter set.
- According to the present disclosure, a transform coefficient of a current block is obtained by decoding a bitstream, and inverse quantization is performed on the obtained transform coefficient based on a quantization-related parameter included in the bitstream.
- An inverse quantized transform coefficient may thereby be obtained, and a residual block of the current block may be reconstructed based on the inverse quantized transform coefficient.
- the quantization-related parameter may be obtained from an adaptive parameter set (APS) of the bitstream.
- In the present disclosure, obtaining the dequantized transform coefficient may include obtaining a scaling list for inverse quantization based on the quantization-related parameter, deriving a scaling factor based on the scaling list and a predetermined weight, and applying the derived scaling factor to the transform coefficient.
- the quantization-related parameter may include at least one of a copy mode flag, a prediction mode flag, a delta identifier, or difference coefficient information.
- the weight may be obtained from a weight candidate list pre-defined in the decoding apparatus.
- In the present disclosure, two or more weight candidate lists may be pre-defined in the decoding apparatus, and any one of the plurality of weight candidate lists may be selectively used based on an encoding parameter of the current block.
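The scaling-list-based inverse quantization described above can be sketched in Python. This is an illustrative sketch, not the normative process of this document: the function name, the integer normalization shift, and the way the weight is multiplied into the scaling factor are assumptions made for clarity.

```python
# Hypothetical sketch of scaling-list-based inverse quantization.
# The names, the shift value, and the weight application are illustrative.

def dequantize(coeffs, scaling_list, weight, shift=4):
    """Apply per-position scaling factors to quantized transform coefficients.

    coeffs       : quantized transform coefficients (flattened block)
    scaling_list : per-position scaling values (e.g., signaled via an APS)
    weight       : weight taken from a pre-defined weight candidate list
    shift        : normalization shift (illustrative value)
    """
    out = []
    for c, s in zip(coeffs, scaling_list):
        factor = s * weight            # scaling factor = scaling-list entry x weight
        out.append((c * factor) >> shift)  # arithmetic shift normalizes the product
    return out

# Example: a flat scaling list of 16 with unit weight leaves values unchanged.
deq = dequantize([2, -3, 0, 1], [16, 16, 16, 16], weight=1)
```

Doubling the weight doubles every scaling factor, which is the lever the weight candidate list gives the decoder.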
- In the present disclosure, the adaptive parameter set is a syntax structure containing a parameter set used in a predetermined video unit, and the parameter set may include at least one of an adaptive loop filter (ALF) related parameter, a mapping-model-related parameter for luma mapping with chroma scaling, or the quantization-related parameter.
- the adaptive parameter set may further include at least one of an identifier for the adaptive parameter set or adaptive parameter set type information.
- In the present disclosure, the same identifier may be allocated to different adaptive parameter set types, but the adaptive parameter sets may be managed using a separate list for each adaptive parameter set type.
- According to the present disclosure, a transform coefficient of a current block is obtained, inverse quantization is performed on the transform coefficient based on a predetermined quantization-related parameter to obtain an inverse quantized transform coefficient, and a residual block of the current block may be restored based on the inverse quantized transform coefficient.
- the quantization-related parameter may be included in the adaptive parameter set (APS) of the bitstream and transmitted.
- In the present disclosure, the video decoding method may include decoding the bitstream to obtain a transform coefficient of a current block, performing inverse quantization on the obtained transform coefficient based on a quantization-related parameter included in the bitstream to obtain an inverse quantized transform coefficient, and restoring a residual block of the current block based on the inverse quantized transform coefficient.
- the quantization-related parameter may be obtained from an adaptive parameter set (APS) of the bitstream.
- video signal coding efficiency may be improved by using an adaptive parameter set.
- FIG. 1 is a block diagram showing an image encoding apparatus according to the present invention.
- FIG. 2 is a block diagram showing an image decoding apparatus according to the present invention.
- FIG. 3 shows an embodiment of a syntax table of an adaptation parameter set (APS).
- FIG. 4 shows an embodiment of a syntax table for transmission and parsing of quantization-related parameters.
- FIG. 5 illustrates an embodiment of a method for reconstructing a residual block based on a quantization-related parameter.
- FIG. 6 is a diagram illustrating an embodiment of an APS syntax table to which an APS type for weight prediction is added.
- FIG. 7 is a diagram illustrating another embodiment of an APS syntax table to which an APS type for weight prediction is added.
- FIG. 8 is a diagram illustrating another embodiment of an APS syntax table to which an APS type for weight prediction is added.
- FIG. 9 is a diagram illustrating an embodiment of a syntax table for transmitting and parsing parameters for weight prediction.
- FIG. 10 is a diagram illustrating an embodiment of an APS syntax table to which an APS type for a block division structure is added.
- FIGS. 11 and 12 illustrate embodiments of a syntax table for block structure parameters that are additionally signaled or parsed when the current APS type is a parameter for a block structure.
- FIG. 13 is a diagram illustrating a part of a syntax table for a slice header in order to show an embodiment of APS signaling or parsing for a block division structure in a slice header.
- FIG. 14 is a diagram showing the concept of managing APS by using different lists according to APS types.
- In the present disclosure, the expression "step of doing (something)" or "step of" does not mean "step for (something)".
- Terms such as first and second may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another.
- Components shown in the embodiments of the present invention are shown independently to represent different characteristic functions; this does not mean that each component is formed of separate hardware or a single software unit. That is, each component is listed separately for convenience of description; at least two components may be combined into one component, or one component may be divided into a plurality of components that each perform a function. Integrated and separated embodiments of these components are also included in the scope of the present invention, provided they do not depart from its essence.
- Terms such as "~part", "~group", "~unit", "~module", and "~block" mean a unit that processes at least one function or operation, and may be implemented in hardware, software, or a combination of hardware and software.
- A coding block refers to a processing unit, i.e., a set of target pixels on which encoding and decoding are currently performed; the terms coding block and coding unit may be used interchangeably.
- The coding unit refers to a CU (Coding Unit) and may be used generically to include a CB (Coding Block).
- Quadtree splitting refers to one block being divided into four independent coding units.
- Binary splitting refers to one block being divided into two independent coding units.
- Ternary splitting refers to one block being divided into three independent coding units in a 1:2:1 ratio.
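As an illustration of the three split types just defined, the following sketch (a hypothetical helper, not part of any codec implementation) computes the child-block dimensions produced by quadtree, binary, and ternary splits:

```python
# Illustrative helper computing child-block sizes for quadtree, binary,
# and ternary (1:2:1) splits. Sizes only; not a full partitioner.

def split_block(width, height, mode, vertical=True):
    if mode == "quad":                     # four equal quadrants
        return [(width // 2, height // 2)] * 4
    if mode == "binary":                   # two equal halves
        if vertical:
            return [(width // 2, height)] * 2
        return [(width, height // 2)] * 2
    if mode == "ternary":                  # 1:2:1 split along one axis
        if vertical:
            return [(width // 4, height), (width // 2, height), (width // 4, height)]
        return [(width, height // 4), (width, height // 2), (width, height // 4)]
    raise ValueError(mode)

split_block(32, 32, "ternary", vertical=True)
# -> [(8, 32), (16, 32), (8, 32)]
```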
- FIG. 1 is a block diagram showing an image encoding apparatus according to the present invention.
- The image encoding apparatus 100 may include a picture splitter 110, prediction units 120 and 125, a transform unit 130, a quantization unit 135, a rearrangement unit 160, an entropy encoder 165, an inverse quantization unit 140, an inverse transform unit 145, a filter unit 150, and a memory 155.
- the picture dividing unit 110 may divide the input picture into at least one processing unit.
- the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
- a coding unit may be used as a unit that performs encoding or a unit that performs decoding.
- The prediction unit may be split into at least one square or rectangle of the same size within one coding unit, or one prediction unit among those split within a coding unit may be split to have a shape and/or size different from another prediction unit.
- Intra prediction may also be performed without splitting the coding unit into a plurality of NxN prediction units.
- The prediction units 120 and 125 may include an inter prediction unit 120 that performs inter prediction and an intra prediction unit 125 that performs intra prediction. Whether to use inter prediction or intra prediction for a prediction unit may be determined, and specific information according to each prediction method (e.g., intra prediction mode, motion vector, reference picture, etc.) may be determined.
- a residual value (residual block) between the generated prediction block and the original block may be input to the transform unit 130.
- prediction mode information, motion vector information, etc. used for prediction may be encoded by the entropy encoder 165 together with a residual value and transmitted to a decoder.
- When the encoder does not generate prediction mode information and motion vector information, the corresponding information is not transmitted to the decoder. In this case, the encoder may signal and transmit information indicating that the motion information is to be derived and used at the decoder side, together with information on the technique used for deriving the motion information.
- The inter prediction unit 120 may predict a prediction unit based on information of at least one of a previous picture or a subsequent picture of the current picture. In some cases, it may predict a prediction unit based on information of a partial region that has already been encoded within the current picture.
- As inter prediction methods, various modes such as a merge mode, an advanced motion vector prediction (AMVP) mode, an affine mode, a current picture referencing mode, and a combined prediction mode may be used.
- In the merge mode, at least one motion vector among spatial/temporal merge candidates may be set as the motion vector of the current block, and inter prediction may be performed using that motion vector.
- The set motion vector may be corrected by adding an additional motion vector difference (MVD) to it.
- the corrected motion vector may be used as the final motion vector of the current block, which will be described in detail with reference to FIG. 15.
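The merge-with-MVD correction described above amounts to adding a signaled difference to a merge candidate's motion vector. A minimal illustrative sketch (the names are hypothetical, and real codecs store motion vectors in fractional-pel units):

```python
# Minimal sketch of merge-mode MVD refinement: a merge candidate
# motion vector (x, y) is corrected by a signaled difference (dx, dy).

def refine_merge_mv(merge_mv, mvd):
    """Return the final motion vector: merge candidate plus signaled MVD."""
    return (merge_mv[0] + mvd[0], merge_mv[1] + mvd[1])

final_mv = refine_merge_mv((4, -2), (1, 3))   # -> (5, 1)
```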
- The affine mode is a method of dividing the current block into predetermined sub-block units and performing inter prediction using a motion vector derived for each sub-block unit.
- the sub-block unit is represented by NxM, and N and M may be integers of 4, 8, 16 or more, respectively.
- the shape of the sub-block may be square or non-square.
- The sub-block unit may be fixed and pre-committed to the encoding apparatus, or may be variably determined in consideration of the size/shape of the current block and the component type.
- the current picture reference mode is an inter prediction method using a pre-restored region in the current picture to which the current block belongs and a predetermined block vector, which will be described in detail with reference to FIGS. 9 to 14.
- The combined prediction mode generates, for one current block, a first prediction block through inter prediction and a second prediction block through intra prediction, and generates the final prediction block by applying predetermined weights to the first and second prediction blocks.
- inter prediction may be performed using any one of the aforementioned inter prediction modes.
- The intra prediction may be performed by fixedly using only an intra prediction mode pre-set in the encoding apparatus (e.g., any one of a planar mode, a DC mode, a vertical/horizontal mode, or a diagonal mode).
- the intra prediction mode for intra prediction may be derived based on an intra prediction mode of a neighboring block (eg, at least one of left, upper, upper left, upper right, and lower right) adjacent to the current block.
- The number of neighboring blocks used may be fixed at one or two, or may be three or more. Even if all of the aforementioned neighboring blocks are available, use may be limited to only one of the left or upper neighboring blocks, or restricted to only the left and upper neighboring blocks.
- The weights may be determined in consideration of whether the aforementioned neighboring blocks are blocks encoded in an intra mode. Assume that the weight w1 is applied to the first prediction block and the weight w2 to the second prediction block.
- w1 may be a natural number smaller than w2; for example, the ratio of w1 to w2 may be 1:3.
- Alternatively, w1 may be a natural number greater than w2; for example, the ratio of w1 to w2 may be 3:1.
- Alternatively, w1 may be set equal to w2.
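The weighted combination of the two prediction blocks can be illustrated as follows. This is a simplified sketch: integer weights normalized by their sum with rounding is one common formulation, assumed here rather than taken from this document.

```python
# Hedged sketch of combined inter/intra prediction: a weighted average of a
# first (inter) and second (intra) prediction block. The weight pairs follow
# the 1:3 / 3:1 / 1:1 ratios mentioned in the text; normalization is assumed.

def combine_predictions(inter_block, intra_block, w1, w2):
    """Blend two prediction sample arrays with weights w1, w2 (rounded)."""
    total = w1 + w2
    return [(w1 * p1 + w2 * p2 + total // 2) // total
            for p1, p2 in zip(inter_block, intra_block)]

# Neighboring blocks mostly intra-coded -> favor the intra prediction (1:3).
blended = combine_predictions([100, 120], [80, 140], w1=1, w2=3)
```

With w1 = w2 the blend reduces to a rounded average of the two predictions.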
- the inter prediction unit 120 may include a reference picture interpolation unit, a motion prediction unit, and a motion compensation unit.
- The reference picture interpolation unit may receive reference picture information from the memory 155 and generate sub-integer pixel information from the reference picture.
- For luma pixels, a DCT-based interpolation filter with varying filter coefficients may be used to generate sub-integer pixel information in units of 1/4 pixel.
- For chroma pixels, a DCT-based interpolation filter with varying filter coefficients may be used to generate sub-integer pixel information in units of 1/8 pixel.
- the motion prediction unit may perform motion prediction based on the reference picture interpolated by the reference picture interpolation unit.
- Various methods, such as a full search-based block matching algorithm (FBMA), a three-step search (TSS), and a new three-step search algorithm (NTS), may be used to calculate a motion vector.
- the motion vector may have a motion vector value in units of 1/2 or 1/4 pixels based on the interpolated pixels.
- The motion prediction unit may predict the current prediction unit by varying the motion prediction method.
- the intra predictor 125 may generate a prediction unit based on reference pixel information around a current block, which is pixel information in the current picture.
- When a neighboring block of the current prediction unit is a block on which inter prediction has been performed, so that its reference pixels were reconstructed via inter prediction, the reference pixels included in that block may be replaced with reference pixel information of a block on which intra prediction has been performed. That is, when a reference pixel is not available, the unavailable reference pixel information may be replaced with at least one of the available reference pixels.
- A residual block including residual information, which is the difference between a prediction unit generated by the prediction units 120 and 125 and the original block of that prediction unit, may be generated.
- the generated residual block may be input to the transform unit 130.
- the original block and the residual block including residual information of the prediction unit generated through the prediction units 120 and 125 are converted to DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), and KLT. You can convert it using the same conversion method. Whether to apply DCT, DST, or KLT to transform the residual block may be determined based on intra prediction mode information of a prediction unit used to generate the residual block.
- the quantization unit 135 may quantize values converted into the frequency domain by the transform unit 130. Quantization coefficients may vary depending on the block or the importance of the image. The value calculated by the quantization unit 135 may be provided to the inverse quantization unit 140 and the rearrangement unit 160.
- the rearrangement unit 160 may rearrange coefficient values on the quantized residual values.
- the rearrangement unit 160 may change the two-dimensional block shape coefficients into a one-dimensional vector shape through a coefficient scanning method. For example, the rearrangement unit 160 may scan from a DC coefficient to a coefficient in a high frequency region using a Zig-Zag Scan method, and change it into a one-dimensional vector form.
- a vertical scan that scans a two-dimensional block shape coefficient in a column direction and a horizontal scan that scans a two-dimensional block shape coefficient in a row direction may be used. That is, according to the size of the transformation unit and the intra prediction mode, it is possible to determine which of the zig-zag scan, vertical direction scan, and horizontal direction scan method is used.
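- The three coefficient scans described above can be sketched as follows. This is an illustrative Python sketch of the general zig-zag, vertical, and horizontal scans, not the codec's normative scan-order derivation:

```python
def zigzag_scan(block):
    # scan from the DC coefficient toward the high-frequency corner
    # along anti-diagonals, alternating direction on each diagonal
    n = len(block)
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()  # walk up-right on even anti-diagonals
        order.extend(diag)
    return [block[i][j] for i, j in order]

def vertical_scan(block):
    # column-by-column scan of the 2D block into a 1D vector
    return [block[i][j] for j in range(len(block[0])) for i in range(len(block))]

def horizontal_scan(block):
    # row-by-row scan of the 2D block into a 1D vector
    return [v for row in block for v in row]
```

- The choice among the three scans would then be keyed on the transform-unit size and intra prediction mode, as the text describes.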
- the entropy encoding unit 165 may perform entropy encoding based on values calculated by the rearrangement unit 160. Entropy coding may use various coding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC). In relation to this, the entropy encoder 165 may encode residual value coefficient information of a coding unit from the rearrangement unit 160 and the prediction units 120 and 125. In addition, according to the present invention, it is possible to signal and transmit information indicating that motion information is derived and used on the decoder side, and information on a technique used for deriving the motion information.
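- Of the entropy coding methods listed above, Exponential Golomb coding is the simplest to illustrate. The sketch below shows an order-0 Exp-Golomb code for unsigned values (leading zeros followed by the binary form of v + 1); the function names are hypothetical:

```python
def exp_golomb_encode(v):
    # order-0 Exp-Golomb code word for an unsigned integer v
    code = v + 1
    prefix_len = code.bit_length() - 1
    return "0" * prefix_len + format(code, "b")

def exp_golomb_decode(bits):
    # count leading zeros, then read that many bits after the leading 1
    prefix_len = bits.index("1")
    code = int(bits[prefix_len:2 * prefix_len + 1], 2)
    return code - 1
```

- CAVLC and CABAC are context-adaptive and considerably more involved; they are not sketched here.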
- the inverse quantization unit 140 and the inverse transform unit 145 inverse quantize values quantized by the quantization unit 135 and inverse transform the values transformed by the transform unit 130.
- a reconstructed block may be generated by combining the residual values generated by the inverse quantization unit 140 and the inverse transform unit 145 with the prediction units predicted through the motion estimation unit, motion compensation unit, and intra prediction unit included in the prediction units 120 and 125.
- the filter unit 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).
- the deblocking filter can remove block distortion caused by the boundary between blocks in the reconstructed picture, which will be described with reference to FIGS. 3 to 8.
- the offset correction unit may correct an offset from the original image in pixel units for the deblocked image. To perform offset correction for a specific picture, a method of dividing the pixels included in the image into a certain number of areas, determining the area to which an offset is to be applied, and applying the offset to that area may be used, or a method of applying an offset in consideration of the edge information of each pixel may be used.
- the memory 155 may store the reconstructed block or picture calculated through the filter unit 150, and the stored reconstructed block or picture may be provided to the prediction units 120 and 125 when performing inter prediction.
- FIG. 2 is a block diagram showing an image decoding apparatus according to the present invention.
- the image decoder 200 includes an entropy decoder 210, a rearrangement unit 215, an inverse quantization unit 220, an inverse transform unit 225, prediction units 230 and 235, and a filter unit. 240) and a memory 245 may be included.
- the input bitstream may be decoded in a procedure opposite to that of the image encoder.
- the entropy decoder 210 may perform entropy decoding in a procedure opposite to that of performing entropy encoding in the entropy encoder of the image encoder. For example, various methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be applied in response to the method performed by the image encoder.
- the entropy decoder 210 may decode information related to intra prediction and inter prediction performed by the encoder.
- the rearrangement unit 215 may perform rearrangement based on a method in which the entropy-decoded bitstream by the entropy decoder 210 is rearranged by the encoder.
- the coefficients expressed in the form of a one-dimensional vector may be reconstructed into coefficients in the form of a two-dimensional block and rearranged.
- the inverse quantization unit 220 may perform inverse quantization based on a quantization parameter provided by an encoder and a coefficient value of a rearranged block.
- the inverse transform unit 225 may perform an inverse transform, that is, an inverse DCT, an inverse DST, or an inverse KLT, on the quantization result produced by the image encoder, corresponding to the transform (DCT, DST, or KLT) performed by the transform unit. The inverse transform may be performed based on a transmission unit determined by the image encoder.
- the inverse transform unit 225 of the image decoder may selectively perform a transformation technique (eg, DCT, DST, KLT) according to a plurality of pieces of information such as a prediction method, a size of a current block, and a prediction direction.
- the prediction units 230 and 235 may generate a prediction block based on information related to generation of a prediction block provided from the entropy decoder 210 and information on a previously decoded block or picture provided from the memory 245.
- intra prediction for the prediction unit is performed based on pixels existing above the prediction unit; however, when the size of the prediction unit and the size of the transform unit differ when performing intra prediction, intra prediction may be performed using reference pixels based on the transform unit.
- intra prediction using NxN splitting for only the smallest coding unit may be used.
- the prediction units 230 and 235 may include a prediction unit determining unit, an inter prediction unit, and an intra prediction unit.
- the prediction unit determination unit may receive various information input from the entropy decoder 210, such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction-related information of the inter prediction method, distinguish the prediction unit from the current coding unit, and determine whether the prediction unit performs inter prediction or intra prediction.
- when the encoder 100 does not transmit motion prediction-related information for inter prediction but instead transmits information indicating that motion information is derived and used on the decoder side, together with information about the technique used for deriving the motion information, the prediction unit determination unit determines the prediction performance of the inter prediction unit 230 based on the information transmitted from the encoder 100.
- the inter prediction unit 230 may perform inter prediction for the current prediction unit, using information necessary for inter prediction of the current prediction unit provided by the video encoder, based on information included in at least one of a previous picture or a subsequent picture of the current picture containing the current prediction unit. In order to perform inter prediction, an inter prediction mode of a prediction unit included in a corresponding coding unit may be determined based on the coding unit. Regarding the inter prediction mode, the above-described merge mode, AMVP mode, affine mode, current picture reference mode, combined prediction mode, and the like may also be used in the decoding apparatus, and detailed descriptions thereof will be omitted. The inter prediction unit 230 may determine an inter prediction mode of the current prediction unit with a predetermined priority, which will be described with reference to FIGS. 16 to 18.
- the intra prediction unit 235 may generate a prediction block based on pixel information in the current picture.
- intra prediction may be performed based on intra prediction mode information of a prediction unit provided from an image encoder.
- the intra prediction unit 235 may include an AIS (Adaptive Intra Smoothing) filter, a reference pixel interpolation unit, and a DC filter.
- the AIS filter is a part that performs filtering on a reference pixel of the current block, and may determine whether to apply the filter according to the prediction mode of the current prediction unit and apply it.
- AIS filtering may be performed on a reference pixel of a current block by using the prediction mode and AIS filter information of the prediction unit provided by the image encoder. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied.
- the reference pixel interpolator may interpolate the reference pixel to generate a reference pixel of a pixel unit having an integer value or less.
- the prediction mode of the current prediction unit is a prediction mode that generates a prediction block without interpolating a reference pixel
- the reference pixel may not be interpolated.
- the DC filter may generate a prediction block through filtering when the prediction mode of the current block is the DC mode.
- the reconstructed block or picture may be provided to the filter unit 240.
- the filter unit 240 may include a deblocking filter, an offset correction unit, and an ALF.
- the deblocking filter of the image decoder receives information related to the deblocking filter provided by the image encoder, and the image decoder may perform deblocking filtering on the corresponding block, which will be described with reference to FIGS. 3 to 8.
- the offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction applied to the image during encoding and information on the offset value.
- the ALF may be applied to a coding unit based on information on whether to apply ALF and ALF coefficient information provided from an encoder. Such ALF information may be provided by being included in a specific parameter set.
- the memory 245 can store the reconstructed picture or block so that it can be used as a reference picture or a reference block, and can provide the reconstructed picture to an output unit.
- the present disclosure relates to a method and apparatus for signaling various parameters that can be applied for each picture or slice, such as adaptive loop filter, reshaper, quantization, and weighted prediction among video coding technologies, in one parameter set.
- a parameter applied in a predetermined image unit may be transmitted to the encoding/decoding apparatus by using one pre-defined parameter set.
- the image unit may be at least one of a video sequence, a picture, a slice, a tile, or a brick.
- parameters that can be applied for each picture or slice, such as an adaptive loop filter and a reshaper, may be transmitted using one predefined parameter set.
- one parameter set is used, but an additional signaling method for the type of the parameter set may be used. Since different types are signaled using one parameter set, a parameter set identifier (ID) or a list managing parameter sets can be shared even though the types of parameter sets are different.
- a method and apparatus for sharing a parameter set identifier and a list or independently managing them are proposed.
- FIG. 3 shows an embodiment of a syntax table of an adaptation parameter set (APS).
- the adaptive parameter set is a parameter set that defines/manages parameters for each APS type in an integrated manner, and that allows parameters to be used/managed by signaling, in the header of a video unit, only the identifier (ID) of the parameter set used in that video unit. That is, by using an adaptive parameter set, various parameters applied to a predetermined video unit (e.g., one or more pictures, one or more slices) are defined as a separate parameter set, so that they need not be signaled in every image unit.
- APS types such as an ALF (adaptive loop filter) APS type and an LMCS (luma mapping with chroma scaling) APS type may be defined.
- parameters related to weights for weighted prediction and parameters for block structure may also be included.
- a picture (or slice, tile, etc.) partition-related parameter, a reference picture set or reference structure-related parameter, a quantization-related parameter, a transform-related parameter, and other in-loop filter-related parameters may be included.
- quantization-related parameters and the APS type for them, weight-related parameters for weighted prediction and the APS type for them, and parameters for the block structure and the APS type for them will be described in detail later in the present disclosure.
- adaptation_parameter_set_id 301 which is an identifier for the adaptive parameter set, may be signaled.
- the signaling of the adaptive parameter set identifier 301 may mean giving a unique specific value (number) to each of one or more adaptive parameter sets transmitted through one video stream.
- the adaptive parameter set identifier 301 may mean information for specifying any one of a plurality of adaptive parameter sets pre-defined in the encoding/decoding apparatus.
- the adaptive parameter set identifier may be expressed as a value from 0 to 2^N − 1, and may be transmitted using a fixed length of N bits.
- N may be one of 2, 3, 4, 5, and 6.
- in FIG. 3, a case where N is 3 is shown.
- the adaptive parameter set identifier 301 may use a single number sequence without dependency on the adaptive parameter set type, even when the adaptive parameter set type 302, described later, differs.
- the adaptive parameter set identifier 301 may be defined with dependence on the adaptive parameter set type 302.
- the adaptive parameter set identifier 301 for the ALF adaptive parameter set type may have any one value from 0 to 7.
- the adaptive parameter set identifier 301 for the LMCS adaptive parameter set type may have a value from 0 to 3.
- the adaptive parameter set identifier 301 for the quantization adaptive parameter set type may have any one value from 0 to 7.
- parameter sets having different adaptive parameter set types 302 may use the same value.
- the adaptive parameter set identifier (ALF_APS_ID) for ALF and the adaptive parameter set identifier (LMCS_APS_ID) for LMCS may use the same value.
- the adaptive parameter set identifier (ALF_APS_ID) for ALF and the adaptive parameter set identifier (SCALING_APS_ID) for quantization may use the same value.
- aps_params_type 302 which is information on an APS type specifying the type of a parameter included in the corresponding APS, may be signaled.
- an ALF APS type indicating a parameter for ALF an LMCS APS type indicating a parameter for LMCS, and the like may be defined.
- a SCALING APS type indicating a quantization related parameter may be additionally defined.
- parameters included in a corresponding APS may be different according to an APS type, and an additional parameter-related syntax parsing process for a corresponding APS type may be performed according to the APS type.
- when the current APS type is ALF_APS, alf_data() 303 may be called to parse ALF-related parameters. When the current APS type is LMCS_APS, LMCS-related parameters may be parsed by calling lmcs_data() 304. When the current APS type is SCALING_APS, quantization-related parameters may be parsed by calling scaling_list_data().
- an ALF-related parameter may be extracted by calling alf_data() function. Extraction of the parameter may be performed based on the above-described identifier 301. To this end, in the alf_data() function, an ALF related parameter is defined for each identifier 310, and an ALF related parameter corresponding to the corresponding identifier 310 may be extracted. Alternatively, the extraction of the parameter may be performed without dependence on the above-described identifier 301. Likewise, if the current APS type is LMCS_APS, LMCS-related parameters can be extracted by calling the lmcs_data() function.
- parameters related to LMCS may be defined for each identifier 310.
- an LMCS-related parameter corresponding to the identifier 301 may be extracted.
- the parameter extraction may be performed without dependence on the above-described identifier 301.
- quantization-related parameters can be extracted by calling the scaling_list_data() function.
- a quantization-related parameter may be defined for each identifier 310.
- a quantization related parameter corresponding to the identifier 301 may be extracted.
- the parameter extraction may be performed without dependence on the above-described identifier 301.
- an ALF-related parameter may be extracted with dependence on the identifier 301 and the rest may be extracted without dependency on the identifier 301.
- ALF, LMCS, and quantization-related parameters may all be extracted with dependence on the identifier 301, or all may be extracted without dependency on the identifier 301.
- Whether to depend on the identifier 301 may be selectively determined according to the APS type.
- the selection may be pre-agreed in the encoding/decoding apparatus, or may be determined based on the value of the identifier 301 or on whether it is activated. This can be applied in the same/similar manner to the various APS types described below.
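- The type-dependent parsing flow described above (alf_data(), lmcs_data(), or scaling_list_data() selected according to aps_params_type) can be sketched as a dispatch table. The numeric type codes and the parser stubs below are assumptions for illustration, not the codec's normative values:

```python
# assumed numeric codes for the APS types (illustrative only)
ALF_APS, LMCS_APS, SCALING_APS = 0, 1, 2

def parse_alf_data(payload):
    return {"type": "alf", "params": payload}

def parse_lmcs_data(payload):
    return {"type": "lmcs", "params": payload}

def parse_scaling_list_data(payload):
    return {"type": "scaling", "params": payload}

def parse_aps(adaptation_parameter_set_id, aps_params_type, payload):
    # dispatch the remaining payload to the parser for this APS type;
    # the identifier keys the slot under which the parameters are stored
    parsers = {ALF_APS: parse_alf_data,
               LMCS_APS: parse_lmcs_data,
               SCALING_APS: parse_scaling_list_data}
    params = parsers[aps_params_type](payload)
    return adaptation_parameter_set_id, params
```

- Whether the per-type parser additionally keys its parameters on the identifier, as the text discusses, is a design choice orthogonal to this dispatch.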
- an APS type for weighted prediction, block structure, and the like may be defined.
- An embodiment of an APS syntax table in which APS types for weight prediction and block structure are defined will be described in detail later.
- FIG. 4 shows an embodiment of a syntax table for transmission and parsing of quantization-related parameters.
- a copy mode flag (scaling_list_copy_mode_flag) may be signaled.
- the copy mode flag may indicate whether the scaling list is obtained based on the copy mode. For example, when the copy mode flag is the first value, the copy mode may be used, otherwise, the copy mode may not be used.
- the copy mode flag may be parsed based on the identifier (id).
- the identifier id is information derived based on the encoding parameter of the current block, which will be described in detail later with reference to FIG. 5.
- a prediction mode flag (scaling_list_pred_mode_flag) may be signaled.
- the prediction mode flag may indicate whether the scaling list is obtained based on the prediction mode. For example, when the prediction mode flag is the first value, the prediction mode is used, otherwise, the prediction mode may not be used.
- the prediction mode flag may be parsed based on the copy mode flag. That is, it can be parsed only when the copy mode is not used according to the copy mode flag.
- a delta identifier (scaling_list_pred_id_delta) may be signaled.
- the delta identifier may be information for specifying a reference scaling list used to obtain the scaling list.
- the delta identifier may be signaled only when a copy mode is used according to the above-described copy mode flag or a prediction mode is used according to the prediction mode flag.
- the delta identifier may be signaled in consideration of the above-described identifier (id); for example, as shown in FIG. 4, it may be signaled only when the identifier (id) does not correspond to a value (0, 2, 8) pre-defined in the decoding apparatus.
- the delta identifier may not be signaled when the maximum value of the width and height of the current block is 4 or 8, the component type of the current block is the luminance component, and the prediction mode of the current block is the intra mode.
- difference coefficient information (scaling_list_delta_coef) may be signaled.
- the difference coefficient information may mean information encoded to specify a difference between a current coefficient and a previous coefficient of the scaling list.
- the difference coefficient information may be signaled only when the copy mode is not used in the copy mode flag. That is, the difference coefficient information may be used in a prediction mode and a transmission mode to be described later, respectively.
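- The presence conditions of the FIG. 4 syntax elements described above can be summarized in a small sketch. The extra identifier-based condition on the delta identifier is omitted here for brevity, and the function name is hypothetical:

```python
def scaling_list_syntax(copy_mode_flag, pred_mode_flag):
    # returns, in order, the syntax elements that would be parsed given the
    # two mode flags (pred_mode_flag is only read when copy mode is off)
    elements = ["scaling_list_copy_mode_flag"]
    if not copy_mode_flag:
        elements.append("scaling_list_pred_mode_flag")
    if copy_mode_flag or pred_mode_flag:
        elements.append("scaling_list_pred_id_delta")
    if not copy_mode_flag:
        elements.append("scaling_list_delta_coef")
    return elements
```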
- FIG. 5 illustrates an embodiment of a method of reconstructing a residual block based on a quantization-related parameter.
- a transform coefficient of a current block may be obtained by decoding a bitstream (S500).
- the transform coefficient may mean a coefficient obtained by performing transform and quantization on a residual sample in an encoding apparatus.
- the transform coefficient may mean a coefficient obtained by skipping the transform of the residual sample and performing only quantization.
- Transform coefficients may be variously expressed as coefficients, residual coefficients, and transform coefficient levels.
- inverse quantization may be performed on the obtained transform coefficient to obtain an inverse quantized transform coefficient (S510).
- the dequantized transform coefficient may be derived by applying a predetermined scaling factor (hereinafter, referred to as a final scaling factor) to the transform coefficient.
- the final scaling factor may be derived by applying a predetermined weight to the initial scaling factor.
- the initial scaling factor may be determined based on a scaling list corresponding to an identifier of the current block (hereinafter, referred to as a first identifier).
- the decoding apparatus may derive the first identifier based on the encoding parameter of the current block.
- the encoding parameter may include at least one of a prediction mode, a component type, a size, a shape, a transformation type, or whether a transformation is skipped.
- the size of the current block may be expressed as a width, height, sum of width and height, product of width and height, or maximum/minimum value of width and height.
- the first identifier may be derived as shown in Table 1.
- the first identifier may have a value of 0 to 27.
- the first identifier may be adaptively derived according to a maximum value among the width (nTbW) and height (nTbH) of the current block, a prediction mode (predMode), and a component type (cIdx).
- the scaling list according to the present disclosure has a form of an M x N matrix, and M and N may be the same or different.
- Each component of the matrix may be referred to as a coefficient or a matrix coefficient.
- the size of the matrix may be variably determined based on the first identifier of the current block. Specifically, when the first identifier is smaller than the first threshold size, at least one of M and N may be determined as 2; when the first identifier is greater than or equal to the first threshold size and smaller than the second threshold size, at least one of M and N may be determined as 4; and when the first identifier is larger than the second threshold size, at least one of M and N may be determined as 8.
- the first threshold size may be an integer of 2, 3, 4, 5 or higher
- the second threshold size may be an integer of 8, 9, 10, 11 or higher.
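- The threshold-based size selection above can be sketched as follows. The default threshold values are one choice within the ranges the text allows, and the function name is hypothetical:

```python
def scaling_matrix_side(first_id, first_threshold=2, second_threshold=8):
    # map the first identifier to the side length of the scaling matrix;
    # the threshold defaults are assumptions within the stated ranges
    if first_id < first_threshold:
        return 2
    if first_id < second_threshold:
        return 4
    return 8
```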
- a scaling list for inverse quantization of the current block may be derived based on a quantization related parameter.
- the quantization-related parameter may include at least one of a copy mode flag, a prediction mode flag, a delta identifier, or difference coefficient information.
- Quantization-related parameters may be signaled in an adaptive parameter set (APS).
- the adaptive parameter set may mean a syntax structure including parameters applied to a picture and/or slice.
- one adaptive parameter set may be signaled through a bitstream, or a plurality of adaptive parameter sets may be signaled.
- the plurality of adaptive parameter sets may be identified by the adaptive parameter set identifier 301.
- Each adaptive parameter set may have different adaptive parameter set identifiers 301 from each other.
- the quantization-related parameter for the scaling list of the current block may be signaled from an adaptive parameter set specified by a predetermined identifier (hereinafter referred to as a second identifier) among a plurality of adaptive parameter sets.
- the second identifier is information encoded to specify any one of a plurality of adaptive parameter sets, and may be signaled in a predetermined image unit (picture, slice, tile, or block).
- a second identifier is signaled in the header of the corresponding image unit, and the corresponding image unit may obtain a scaling list by using a quantization related parameter extracted from an adaptive parameter set corresponding to the second identifier.
- a method of obtaining a scaling list based on a quantization-related parameter will be described.
- the scaling list of the current block may be set to be the same as the scaling list (ie, reference scaling list) corresponding to the reference identifier.
- the reference identifier may be derived based on the first identifier of the current block and a predetermined delta identifier.
- the delta identifier may be information that is encoded and signaled by an encoding device to identify the reference scaling list.
- the reference identifier may be set as a difference value between the first identifier of the current block and the delta identifier.
- the scaling list of the current block may be set to be the same as the default scaling list.
- the default scaling list is pre-defined in the decoding apparatus, and each coefficient of the default scaling list may have a predetermined constant value (e.g., 2, 4, 8, 16).
- the copy mode may be used based on a copy mode flag indicating whether or not the copy mode is used. For example, when the copy mode flag is the first value, the copy mode is used, otherwise, the copy mode may not be used.
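- The copy-mode behavior above can be sketched as follows. The text does not state when the default scaling list is chosen instead of a reference list; as an assumption for illustration, a delta of 0 selects the default list here:

```python
def copy_mode_scaling_list(first_id, delta_id, stored_lists, default_list):
    # copy mode: reuse the reference list whose identifier is
    # first_id - delta_id; the zero-delta fallback to the default
    # list is an assumption, as the text leaves the condition unstated
    if delta_id == 0:
        return list(default_list)
    return list(stored_lists[first_id - delta_id])
```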
- the scaling list of the current block may be determined based on the prediction scaling list and the differential scaling list.
- the prediction scaling list may be derived based on the aforementioned reference scaling list. That is, the reference scaling list specified by the first identifier and the delta identifier of the current block may be set as the prediction scaling list.
- the prediction scaling list may be determined based on the default scaling list.
- the differential scaling list likewise has the form of an M x N matrix, and each coefficient of the matrix can be derived based on differential coefficient information signaled from the bitstream. For example, difference coefficient information, which is a difference between a previous coefficient and a current coefficient, is signaled, and the current coefficient may be obtained by using the previous coefficient and the signaled difference coefficient information.
- the scaling list of the current block may be determined by adding the prediction scaling list and the differential scaling list.
- the prediction mode may be used based on a prediction mode flag indicating whether the prediction mode is used. For example, if the prediction mode flag is the first value, the prediction mode is used, otherwise, the prediction mode may not be used.
- At least one coefficient of the scaling list of the current block may be derived based on difference coefficient information signaled by the encoding apparatus.
- the signaled difference coefficient information may be used to determine a difference coefficient that is a difference between the previous coefficient and the current coefficient. That is, the current coefficient of the scaling list may be derived using the previous coefficient and signaled difference coefficient information, and through this process, the scaling list of the current block may be obtained.
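- The coefficient reconstruction shared by the prediction and transmission modes (each current coefficient derived from the previous coefficient plus the signaled difference) can be sketched as follows. The start value of 8 for the first predictor is an assumption, not stated in the text:

```python
def reconstruct_coefficients(delta_coefs, start=8):
    # accumulate signaled differences: each coefficient is the previous
    # coefficient plus the current scaling_list_delta_coef value
    coefs = []
    prev = start  # assumption: predictor for the first coefficient
    for delta in delta_coefs:
        prev += delta
        coefs.append(prev)
    return coefs
```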
- a predetermined offset may be applied to at least one coefficient included in the obtained scaling list.
- the offset may be a fixed constant value (e.g., 2, 4, 8, 16) pre-committed to the decoding device.
- a final scaling list for inverse quantization may be obtained.
- the transmission mode may be used only when the above-described copy mode and prediction mode are not used according to the copy mode flag and the prediction mode flag.
- the above-described weight may be obtained from a weight candidate list pre-defined in the decoding apparatus.
- the weight candidate list may include one or more weight candidates. Any one of the weight candidates included in the weight candidate list may be set as the weight.
- the weight candidate list may be composed of six weight candidates.
- the weight candidate list may be defined as ⁇ 40, 45, 51, 57, 64, 72 ⁇ or ⁇ 57, 64, 72, 80, 90, 102 ⁇ .
- the present invention is not limited thereto, and the number of weight candidates may be 2, 3, 4, 5, 7 or more.
- the weight candidate list may include a weight candidate of a value less than 40 or a weight candidate of a value greater than 102.
- the number of pre-defined weight candidate lists may be one or two or more.
- any one weight candidate list may be selectively used. In this case, the selection may be performed in consideration of the encoding parameter of the current block.
- the encoding parameters are the same as described above, and redundant descriptions will be omitted.
- the pre-defined weight candidate lists may be ⁠{40, 45, 51, 57, 64, 72} (hereinafter referred to as the first list) and {57, 64, 72, 80, 90, 102} (hereinafter referred to as the second list).
- the first list may be used, otherwise, the second list may be used.
- the shape of the current block is a square, the first list may be used, otherwise, the second list may be used.
- the first list is used, and if not, the first list or the second list may be selectively used according to the shape of the current block as described above.
- a residual block of a current block may be restored based on an inverse quantized transform coefficient (S520).
- when transform skip is not applied, the residual block may be restored by performing an inverse transform on the inverse quantized transform coefficients. On the other hand, when transform skip is applied, the residual block may be restored by setting the inverse quantized transform coefficients as residual samples.
- the above-described residual block reconstruction process may be performed in the same or similar manner in the encoding apparatus, and redundant descriptions will be omitted.
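- The S510/S520 branch described above (scale the coefficients, then either inverse-transform them or use them directly as residual samples under transform skip) can be sketched as follows; the function names are hypothetical and the inverse transform is passed in as a callable:

```python
def dequantize(coeffs, final_scaling_factors):
    # apply the final scaling factor element-wise to the transform coefficients
    return [[c * s for c, s in zip(crow, srow)]
            for crow, srow in zip(coeffs, final_scaling_factors)]

def reconstruct_residual(coeffs, final_scaling_factors, transform_skip,
                         inverse_transform):
    deq = dequantize(coeffs, final_scaling_factors)
    if transform_skip:
        return deq  # dequantized coefficients become the residual samples
    return inverse_transform(deq)
```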
- FIG. 6 is a diagram illustrating an embodiment of an APS syntax table to which an APS type for weight prediction is added.
- parameters for weight prediction may be signaled and parsed using APS.
- an APS type for transmitting a parameter for weight prediction may be defined, which may be mapped to one of 0 to 2^N − 1.
- N may be one of 2, 3, 4, and 5, and in the embodiment shown in FIG. 6, a case where N is 3 is illustrated as an example.
- a step 600 of signaling or parsing a parameter for weight prediction may be added.
- a weight prediction-related parameter may be extracted by calling the pred_weight_table() function.
- the pred_weight_table() function may define only parameters related to unidirectional weight prediction or may define only parameters related to bidirectional weight prediction. Alternatively, the pred_weight_table() function may define parameters for unidirectional and bidirectional weight prediction, respectively.
- the pred_weight_table() function may define at least one of a parameter related to implicit weight prediction or a parameter related to explicit weight prediction.
- the extraction of the parameter may be performed based on the above-described identifier 301.
- a weight prediction related parameter is defined for each identifier, and a weight prediction related parameter corresponding to the corresponding identifier 301 may be extracted.
- the extraction of the parameter may be performed without dependence on the above-described identifier 301.
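The FIG. 6 flow can be sketched as a dispatch on the APS type: when aps_params_type indicates the weight-prediction type, pred_weight_table() is invoked and the result is stored under the signaled identifier. The type codes and returned field names are assumptions for illustration, not normative values.

```python
APS_ALF, APS_LMCS, APS_PRED_WEIGHT = 0, 1, 2  # assumed codes within 0..2^3 - 1

def pred_weight_table():
    # Stub parser: a real one would read per-reference-picture weights
    # and offsets from the bitstream.
    return {"luma_weights": [64], "luma_offsets": [0]}

def parse_aps(aps_store, aps_id, aps_params_type):
    # Keying by APS ID (identifier 301) lets a later slice refer back
    # to these parameters through the signaled identifier.
    if aps_params_type == APS_PRED_WEIGHT:
        aps_store[aps_id] = pred_weight_table()
    return aps_store

store = parse_aps({}, aps_id=3, aps_params_type=APS_PRED_WEIGHT)
```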
- FIG. 7 is a diagram illustrating another embodiment of an APS syntax table to which an APS type for weight prediction is added.
- parameters for weight prediction may be signaled and parsed using APS.
- an APS type for transmitting a parameter for unidirectional weight prediction may be defined, and an APS type for transmitting a parameter for bidirectional weight prediction may be separately defined.
- the APS type for unidirectional weight prediction and the APS type for bidirectional weight prediction may each be mapped to one number from 0 to 2^N - 1.
- N may be one of 2, 3, 4, and 5; the example illustrated in FIG. 7 shows the case where N is 3.
- an operation 700 or 701 of signaling or parsing a parameter for weight prediction may be added.
- a pred_weight_table() function for unidirectional weight prediction and a bipred_weight_table() function for bi-directional weight prediction may be defined, respectively.
- the pred_weight_table() function is called to extract parameters related to unidirectional weight prediction, and the bipred_weight_table() function is called to extract parameters related to bidirectional weight prediction. Extraction of the parameters may be performed based on the above-described identifier 301.
- pred_weight_table() and bipred_weight_table() may define a weight prediction related parameter for each identifier, and a weight prediction related parameter corresponding to the corresponding identifier 301 may be extracted. Alternatively, the extraction of the parameter may be performed without dependence on the above-described identifier 301.
- FIG. 8 is a diagram illustrating another embodiment of an APS syntax table to which an APS type for weight prediction is added.
- parameters for weight prediction may be signaled and parsed using APS.
- an APS type for transmitting a parameter for unidirectional weight prediction may be defined, and an APS type for transmitting a parameter for bidirectional weight prediction may be separately defined.
- the APS type for unidirectional weight prediction and the APS type for bidirectional weight prediction may each be mapped to one number from 0 to 2^N - 1.
- N may be one of 2, 3, 4, and 5; the embodiments illustrated in FIGS. 7 and 8 show the case where N is 3.
- an operation 800 or 801 of signaling or parsing a parameter for weight prediction may be added.
- an APS type for unidirectional or bidirectional prediction may be used as an input to a step of signaling or parsing a parameter for weight prediction, and signaling or parsing accordingly may be performed.
- the pred_weight_table() function may define a parameter for unidirectional weight prediction and a parameter for bidirectional weight prediction, respectively.
- a parameter for weight prediction corresponding to the aforementioned APS type 302 may be extracted.
- a parameter for bidirectional weight prediction may be derived from a parameter for unidirectional weight prediction.
- pred_weight_table() may define a weight prediction related parameter for each identifier, and a weight prediction related parameter corresponding to the corresponding identifier 301 may be extracted.
- the extraction of the parameter may be performed without dependence on the above-described identifier 301.
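The FIG. 8 variant, where a single pred_weight_table() handles both directions by taking the APS type as an input, can be sketched as below. The type codes are hypothetical, and the bidirectional parameters are shown being derived from the unidirectional ones, which the text presents as one possibility.

```python
APS_PW_UNI, APS_PW_BI = 3, 4  # hypothetical type codes for uni-/bi-directional

def pred_weight_table(aps_param_type):
    # Unidirectional parameters are always present (stub values).
    params = {"uni": {"weights": [64], "offsets": [0]}}
    if aps_param_type == APS_PW_BI:
        # Bidirectional parameters: here derived by duplicating the
        # unidirectional ones (one entry per prediction direction).
        params["bi"] = {"weights": params["uni"]["weights"] * 2,
                        "offsets": params["uni"]["offsets"] * 2}
    return params
```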
- FIG. 9 is a diagram illustrating an embodiment of a syntax table for transmitting and parsing parameters for weight prediction.
- FIG. 9 shows an embodiment of the additional steps (800, 801) of signaling or parsing a parameter for weight prediction shown in FIG. 8; aps_param_type, corresponding to the APS type, may be used as an input to the step of signaling or parsing a parameter for weight prediction.
- when aps_param_type (901) indicates bidirectional prediction, an additional step 920 of signaling or parsing weight prediction parameters for bidirectional prediction may be performed.
- values such as the number of reference pictures may use a predefined fixed value, or may be obtained by referring to a previously transmitted parameter for the reference picture structure.
- FIG. 10 is a diagram illustrating an embodiment of an APS syntax table to which an APS type for a block division structure is added.
- aps_params_type (302), which is information about the APS type specifying the type of parameters included in the APS, may be signaled.
- an ALF APS type indicating parameters for ALF, an LMCS APS type indicating parameters for LMCS, and the like may be defined.
- an APS type for transmitting parameters for a block division structure may be defined, and parameter transmission and parsing may be performed.
- depending on the APS type, the parameters included in the corresponding APS may differ, and an additional syntax parsing process for the parameters of that APS type may be performed.
- a step 1001 of signaling or parsing a parameter for a block partition structure may be additionally performed.
- parameters for weight prediction may be signaled and parsed using APS.
- an APS type for transmitting a parameter for weight prediction may be defined, which may be mapped to one of the numbers 0 to 2^N - 1.
- N may be one of 2, 3, 4, and 5; the embodiment illustrated in FIG. 10 shows the case where N is 3.
- FIGS. 11 and 12 illustrate examples of syntax tables for the parameters of a block partition structure that are additionally signaled or parsed when the current APS type indicates parameters for a block partition structure.
- an example syntax table is shown in which parameters for a block partition structure applicable to an image unit are signaled, within one parameter set, with parameters (1110) for the luma tree and, when a specific condition is satisfied, parameters (1120) for the chroma tree.
- APS IDs for parameters for the block division structure may be signaled or parsed.
- an embodiment of this is illustrated in FIG. 13.
- FIG. 13 is a diagram illustrating a part of a syntax table for a slice header in order to show an embodiment of APS signaling or parsing for a block division structure in a slice header.
- in signaling or parsing the block partition structure in a slice header or the like, the current slice type or whether the chroma separate tree (CST) technique is used may be considered.
- the block partition structure parameters corresponding to the APS ID parsed from slice_mtt_aps_id (1300) may be applied to both the luma tree and the chroma tree.
- alternatively, the block partition structure parameters corresponding to the APS ID parsed from slice_mtt_aps_id (1300) may be applied to the luma tree, and the block partition structure parameters corresponding to the APS ID parsed from slice_mtt_chroma_aps_id (1310) may be applied to the chroma tree.
- FIG. 13 shows an embodiment of transmitting the block partition structure in a slice header, but signaling or parsing for the block partition structure may also be performed in a sequence parameter set (SPS) or a picture parameter set (PPS), as in the slice example.
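The FIG. 13 slice-header logic can be sketched as follows: one APS ID selects the partition parameters, and a separate chroma APS ID is parsed only when a chroma separate tree is in use. The exact gating condition and the bitstream-reading callable are assumptions; the field names follow the syntax elements named above.

```python
def parse_partition_aps_ids(chroma_separate_tree, read_ue):
    # slice_mtt_aps_id (1300) is always parsed.
    ids = {"luma": read_ue()}
    if chroma_separate_tree:
        # slice_mtt_chroma_aps_id (1310): a distinct APS ID for the chroma tree.
        ids["chroma"] = read_ue()
    else:
        # Without a separate chroma tree, the same parameters apply to both.
        ids["chroma"] = ids["luma"]
    return ids
```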
- FIG. 14 is a diagram showing the concept of managing APS by using different lists according to APS types.
- an adaptive parameter set identifier (301) may be defined for each adaptive parameter set type.
- the adaptive parameter set identifier (301) for the ALF adaptive parameter set type may have any one value from 0 to 7.
- the adaptive parameter set identifier 301 for the LMCS adaptive parameter set type may have a value from 0 to 3.
- the adaptive parameter set identifier 301 for the quantization adaptive parameter set type may have any one value from 0 to 7.
- parameter sets having different adaptive parameter set types (302) may use the same identifier value.
- the adaptive parameter set identifier (ALF_APS_ID) for ALF and the adaptive parameter set identifier (LMCS_APS_ID) for LMCS may use the same value.
- the adaptive parameter set identifier (ALF_APS_ID) for ALF and the adaptive parameter set identifier (SCALING_APS_ID) for quantization may use the same value.
- the same APS_ID may be allocated for different APS types, but a different list may be used for each APS type for management. Allocating the same APS_ID means that the ranges of identifier (301) values defined for each APS type may be the same or may overlap. That is, as in the above-described example, ALF_APS_ID and SCALING_APS_ID may have any one value from 0 to 7, and LMCS_APS_ID may have any one value from 0 to 3. In this case, the same APS_ID may be allocated even to parameter sets having different APS types. As shown in FIG. 14, a list for ALF_APS, a list for LMCS_APS, a list for SCALING_APS, and the like may each be defined and used per APS type, and each list may hold one or more adaptive parameter sets having different identifiers (APS_ID). Here, a list may be interpreted to mean a separate area or space.
- Different APS_IDs may be allocated according to the APS type, and adaptive parameter sets may be managed using different lists.
- a different APS_ID may be allocated for each APS type and managed using a single list.
- the same APS_ID may be allocated for different APS types, and the same list may be used for APS types having the same APS_ID to manage them.
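The FIG. 14 management scheme can be sketched with one list (here a dict keyed by APS_ID) per APS type, so the same APS_ID value can be reused across types without collision. Class and method names are illustrative, not from the specification.

```python
class ApsStore:
    def __init__(self):
        # One list per APS type; each maps APS_ID -> parameter set.
        self._lists = {}

    def put(self, aps_type, aps_id, params):
        self._lists.setdefault(aps_type, {})[aps_id] = params

    def get(self, aps_type, aps_id):
        return self._lists[aps_type][aps_id]

store = ApsStore()
store.put("ALF", 0, "alf-params")
store.put("SCALING", 0, "scaling-params")  # same APS_ID, different list
```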
- various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof. For implementation by hardware, they may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), general-purpose processors, controllers, microcontrollers, microprocessors, or the like.
- the scope of the present disclosure includes software or machine-executable instructions (e.g., operating systems, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium storing such software or instructions and executable on a device or computer.
- the present invention can be used to encode/decode video signals.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
max( nTbW, nTbH ):

| predMode | cIdx | 2 | 4 | 8 | 16 | 32 | 64 |
|---|---|---|---|---|---|---|---|
| predMode = MODE_INTRA | cIdx = 0 (Y) | - | 2 | 8 | 14 | 20 | 26 |
| predMode = MODE_INTRA | cIdx = 1 (Cb) | - | 3 | 9 | 15 | 21 | 21 |
| predMode = MODE_INTRA | cIdx = 2 (Cr) | - | 4 | 10 | 16 | 22 | 22 |
| predMode = MODE_INTER | cIdx = 0 (Y) | - | 5 | 11 | 17 | 23 | 27 |
| predMode = MODE_INTER | cIdx = 1 (Cb) | 0 | 6 | 12 | 18 | 24 | 24 |
| predMode = MODE_INTER | cIdx = 2 (Cr) | 1 | 7 | 13 | 19 | 25 | 25 |
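The table above can be read as a mapping from (prediction mode, colour component cIdx, max( nTbW, nTbH )) to an id; the sketch below reproduces it as a lookup, with '-' cells returned as None. This is an illustrative reading of the table, not normative text.

```python
# Each (predMode, cIdx) pair maps max(nTbW, nTbH) to the id from the table.
IDS = {
    ("MODE_INTRA", 0): {4: 2, 8: 8, 16: 14, 32: 20, 64: 26},
    ("MODE_INTRA", 1): {4: 3, 8: 9, 16: 15, 32: 21, 64: 21},
    ("MODE_INTRA", 2): {4: 4, 8: 10, 16: 16, 32: 22, 64: 22},
    ("MODE_INTER", 0): {4: 5, 8: 11, 16: 17, 32: 23, 64: 27},
    ("MODE_INTER", 1): {2: 0, 4: 6, 8: 12, 16: 18, 32: 24, 64: 24},
    ("MODE_INTER", 2): {2: 1, 4: 7, 8: 13, 16: 19, 32: 25, 64: 25},
}

def table_id(pred_mode, c_idx, ntbw, ntbh):
    # '-' entries (combinations with no id) come back as None.
    return IDS[(pred_mode, c_idx)].get(max(ntbw, ntbh))
```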
Claims (10)
- A video decoding method comprising: decoding a bitstream to obtain a transform coefficient of a current block; performing inverse quantization on the obtained transform coefficient based on a quantization-related parameter included in the bitstream to obtain an inverse-quantized transform coefficient; and reconstructing a residual block of the current block based on the inverse-quantized transform coefficient, wherein the quantization-related parameter is obtained from an adaptation parameter set (APS) of the bitstream.
- The method of claim 1, wherein obtaining the inverse-quantized transform coefficient comprises: obtaining a scaling list for the inverse quantization based on the quantization-related parameter; deriving a scaling factor based on the scaling list and a predetermined weight; and applying the derived scaling factor to the transform coefficient.
- The method of claim 2, wherein the quantization-related parameter includes at least one of a copy mode flag, a prediction mode flag, a delta identifier, or difference coefficient information.
- The method of claim 2, wherein the weight is obtained from a weight candidate list pre-defined in a decoding apparatus.
- The method of claim 4, wherein the number of weight candidate lists pre-defined in the decoding apparatus is two or more, and any one of the plurality of weight candidate lists is selectively used based on an encoding parameter of the current block.
- The method of claim 1, wherein the adaptation parameter set is a syntax structure including a parameter set used in a predetermined image unit, and the parameter set includes at least one of an adaptive loop filter (ALF)-related parameter, a mapping model-related parameter for a reshaper (luma mapping with chroma scaling), or the quantization-related parameter.
- The method of claim 6, wherein the adaptation parameter set further includes at least one of an identifier for the adaptation parameter set or adaptation parameter set type information.
- The method of claim 7, wherein the same identifier is allocated for different adaptation parameter set types, and the adaptation parameter sets are managed using a different list for each adaptation parameter set type.
- A video encoding method comprising: obtaining a transform coefficient of a current block; performing inverse quantization on the transform coefficient based on a predetermined quantization-related parameter to obtain an inverse-quantized transform coefficient; and reconstructing a residual block of the current block based on the inverse-quantized transform coefficient, wherein the quantization-related parameter is included in an adaptation parameter set (APS) of a bitstream and transmitted.
- A computer-readable recording medium storing a bitstream decoded by a video decoding method, wherein the video decoding method comprises: decoding the bitstream to obtain a transform coefficient of a current block; performing inverse quantization on the obtained transform coefficient based on a quantization-related parameter included in the bitstream to obtain an inverse-quantized transform coefficient; and reconstructing a residual block of the current block based on the inverse-quantized transform coefficient, wherein the quantization-related parameter is obtained from an adaptation parameter set (APS) of the bitstream.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112021023469A BR112021023469A2 (pt) | 2019-05-24 | 2020-05-22 | Método de codificação de vídeo e aparelho usando conjunto de parâmetros adaptativos |
MX2021014277A MX2021014277A (es) | 2019-05-24 | 2020-05-22 | Metodo y aparato de codificacion de video que utilizan conjunto de parametros adaptativos. |
CA3141350A CA3141350A1 (en) | 2019-05-24 | 2020-05-22 | Video coding method and apparatus using adaptive parameter set |
CN202080038371.2A CN113875240A (zh) | 2019-05-24 | 2020-05-22 | 使用自适应参数集的视频译码方法和设备 |
KR1020217042387A KR20220003124A (ko) | 2019-05-24 | 2020-05-22 | Video coding method and apparatus using adaptive parameter set |
US17/357,753 US11190769B2 (en) | 2019-05-24 | 2021-06-24 | Method and apparatus for coding image using adaptation parameter set |
US17/511,399 US11575898B2 (en) | 2019-05-24 | 2021-10-26 | Method and apparatus for coding image using adaptation parameter set |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2019-0060975 | 2019-05-24 | ||
KR20190060975 | 2019-05-24 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/357,753 Continuation US11190769B2 (en) | 2019-05-24 | 2021-06-24 | Method and apparatus for coding image using adaptation parameter set |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020242145A1 true WO2020242145A1 (ko) | 2020-12-03 |
Family
ID=73552070
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/006704 WO2020242145A1 (ko) | 2019-05-24 | 2020-05-22 | Video coding method and apparatus using adaptive parameter set |
Country Status (7)
Country | Link |
---|---|
US (2) | US11190769B2 (ko) |
KR (1) | KR20220003124A (ko) |
CN (1) | CN113875240A (ko) |
BR (1) | BR112021023469A2 (ko) |
CA (1) | CA3141350A1 (ko) |
MX (1) | MX2021014277A (ko) |
WO (1) | WO2020242145A1 (ko) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111602397B (zh) * | 2018-01-17 | 2024-01-23 | 英迪股份有限公司 | 使用各种变换技术的视频编码方法和装置 |
WO2020204419A1 (ko) * | 2019-04-03 | 2020-10-08 | 엘지전자 주식회사 | 적응적 루프 필터 기반 비디오 또는 영상 코딩 |
KR20220004767A (ko) * | 2019-07-05 | 2022-01-11 | 엘지전자 주식회사 | 루마 맵핑 기반 비디오 또는 영상 코딩 |
KR20220017440A (ko) * | 2019-07-08 | 2022-02-11 | 엘지전자 주식회사 | 스케일링 리스트 데이터의 시그널링 기반 비디오 또는 영상 코딩 |
WO2021006632A1 (ko) * | 2019-07-08 | 2021-01-14 | 엘지전자 주식회사 | 스케일링 리스트 파라미터 기반 비디오 또는 영상 코딩 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013034161A (ja) * | 2011-06-28 | 2013-02-14 | Sharp Corp | 画像復号装置、画像符号化装置、および符号化データのデータ構造 |
KR101418096B1 (ko) * | 2012-01-20 | 2014-07-16 | 에스케이 텔레콤주식회사 | 가중치예측을 이용한 영상 부호화/복호화 방법 및 장치 |
JP2015518353A (ja) * | 2012-04-26 | 2015-06-25 | クゥアルコム・インコーポレイテッドQualcomm Incorporated | ビデオコーディングにおける量子化パラメータ(qp)コーディング |
KR20190033036A (ko) * | 2017-09-20 | 2019-03-28 | 한국전자통신연구원 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
KR20190033771A (ko) * | 2017-09-22 | 2019-04-01 | 삼성전자주식회사 | 영상 인코딩 장치, 영상 디코딩 장치, 영상 인코딩 방법 및 영상 디코딩 방법 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9451252B2 (en) * | 2012-01-14 | 2016-09-20 | Qualcomm Incorporated | Coding parameter sets and NAL unit headers for video coding |
KR102657933B1 (ko) * | 2018-03-30 | 2024-04-22 | 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 | 인트라 예측 기반 영상/비디오 코딩 방법 및 그 장치 |
- 2020-05-22 CA CA3141350A patent/CA3141350A1/en active Pending
- 2020-05-22 WO PCT/KR2020/006704 patent/WO2020242145A1/ko active Application Filing
- 2020-05-22 CN CN202080038371.2A patent/CN113875240A/zh active Pending
- 2020-05-22 MX MX2021014277A patent/MX2021014277A/es unknown
- 2020-05-22 KR KR1020217042387A patent/KR20220003124A/ko unknown
- 2020-05-22 BR BR112021023469A patent/BR112021023469A2/pt unknown
- 2021-06-24 US US17/357,753 patent/US11190769B2/en active Active
- 2021-10-26 US US17/511,399 patent/US11575898B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013034161A (ja) * | 2011-06-28 | 2013-02-14 | Sharp Corp | 画像復号装置、画像符号化装置、および符号化データのデータ構造 |
KR101418096B1 (ko) * | 2012-01-20 | 2014-07-16 | 에스케이 텔레콤주식회사 | 가중치예측을 이용한 영상 부호화/복호화 방법 및 장치 |
JP2015518353A (ja) * | 2012-04-26 | 2015-06-25 | クゥアルコム・インコーポレイテッドQualcomm Incorporated | ビデオコーディングにおける量子化パラメータ(qp)コーディング |
KR20190033036A (ko) * | 2017-09-20 | 2019-03-28 | 한국전자통신연구원 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
KR20190033771A (ko) * | 2017-09-22 | 2019-04-01 | 삼성전자주식회사 | 영상 인코딩 장치, 영상 디코딩 장치, 영상 인코딩 방법 및 영상 디코딩 방법 |
Also Published As
Publication number | Publication date |
---|---|
CA3141350A1 (en) | 2020-12-03 |
US20210321104A1 (en) | 2021-10-14 |
CN113875240A (zh) | 2021-12-31 |
US11190769B2 (en) | 2021-11-30 |
MX2021014277A (es) | 2022-01-06 |
KR20220003124A (ko) | 2022-01-07 |
US11575898B2 (en) | 2023-02-07 |
BR112021023469A2 (pt) | 2022-01-18 |
US20220053190A1 (en) | 2022-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018212578A1 (ko) | Video signal processing method and apparatus | |
WO2018044088A1 (ko) | Video signal processing method and apparatus | |
WO2020242145A1 (ko) | Video coding method and apparatus using adaptive parameter set | |
WO2016175550A1 (ko) | Method for processing video signal and apparatus therefor | |
WO2017039256A1 (ko) | Video signal processing method and apparatus | |
WO2018008904A2 (ko) | Video signal processing method and apparatus | |
WO2012081879A1 (ko) | Method for decoding inter-prediction-encoded video | |
WO2012023762A2 (ko) | Intra prediction decoding method | |
WO2018044087A1 (ko) | Video signal processing method and apparatus | |
WO2019078581A1 (ko) | Image encoding/decoding method and apparatus, and recording medium storing bitstream | |
WO2020009419A1 (ko) | Video coding method and apparatus using merge candidate | |
WO2012148138A2 (ko) | Intra prediction method, and encoder and decoder using same | |
WO2019117639A1 (ko) | Transform-based image coding method and apparatus therefor | |
WO2013154366A1 (ko) | Transform method based on block information and apparatus using same | |
WO2016114583A1 (ko) | Video signal processing method and apparatus | |
WO2018056702A1 (ko) | Video signal processing method and apparatus | |
WO2013109122A1 (ko) | Video encoding method and apparatus, and video decoding method and apparatus, for changing scan order according to hierarchical coding unit | |
WO2016159610A1 (ko) | Video signal processing method and apparatus | |
WO2019078664A1 (ko) | Video signal processing method and apparatus | |
WO2016048092A1 (ko) | Video signal processing method and apparatus | |
WO2016122251A1 (ko) | Video signal processing method and apparatus | |
WO2020013609A1 (ko) | Video coding method and apparatus based on intra prediction | |
WO2021034115A1 (ko) | Image decoding method for coding chroma quantization parameter offset-related information, and apparatus therefor | |
WO2018128222A1 (ko) | Image decoding method and apparatus in image coding system | |
WO2015009009A1 (ko) | Scalable video signal encoding/decoding method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20814106 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3141350 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112021023469 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 20217042387 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 112021023469 Country of ref document: BR Kind code of ref document: A2 Effective date: 20211122 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20814106 Country of ref document: EP Kind code of ref document: A1 |