CN117941356A - Video signal encoding/decoding method and recording medium storing bit stream - Google Patents

Publication number: CN117941356A
Application number: CN202280062043.5A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 任星元
Assignee: KT Corp (original and current)
Priority claimed from: PCT/KR2022/013787 (WO2023043223A1)
Legal status: Pending
Classification: Compression Or Coding Systems Of Tv Signals

Abstract

The video encoding/decoding method according to the present disclosure may include the steps of: obtaining a first prediction block for the current block based on the first inter prediction mode; obtaining a second prediction block for the current block based on the second inter prediction mode; and obtaining a final prediction block for the current block based on the first prediction block and the second prediction block.

Description

Video signal encoding/decoding method and recording medium storing bit stream
Technical Field
The present disclosure relates to video signal processing methods and apparatus.
Background
Recently, demand for high-resolution, high-quality images, such as High Definition (HD) and Ultra High Definition (UHD) images, has increased in various application fields. As the resolution and quality of video data improve, the amount of data increases relative to existing video data; therefore, when video data is transmitted over a medium such as a conventional wired or wireless broadband line, or stored using a conventional storage medium, transmission and storage costs rise. Efficient video compression techniques may be used to address these problems, which arise as the resolution and quality of video data increase.
There are various video compression techniques, such as an inter-prediction technique of predicting pixel values included in a current picture from a picture before or after the current picture, an intra-prediction technique of predicting pixel values included in the current picture using pixel information in the current picture, and an entropy encoding technique of assigning short codes to values having a high frequency of occurrence and long codes to values having a low frequency of occurrence. By using these video compression techniques, video data can be efficiently compressed and transmitted or stored.
Meanwhile, as the demand for high-resolution video increases, the demand for three-dimensional video content as a new video service also increases. Discussion is being made about video compression techniques for efficiently providing high-resolution and ultra-high-resolution three-dimensional video content.
Disclosure of Invention
Technical problem
The present disclosure has been made in view of the above-mentioned problems, and an object of the present disclosure is to provide a method of performing motion estimation on a decoder side based on a reconstructed picture and an apparatus for performing the method.
It is another object of the present disclosure to provide a method of improving prediction accuracy by combining a plurality of inter prediction modes and an apparatus for performing the same.
It is another object of the present disclosure to provide a method of adaptively determining a search range of motion estimation and an apparatus for performing the method.
It is another object of the present disclosure to provide a method of correcting motion information signaled from an encoder by motion estimation at a decoder side and an apparatus for performing the method.
The objects to be achieved by the present disclosure are not limited to the above-mentioned objects, and other objects not mentioned will be clearly understood by those skilled in the art from the following description.
Technical solution
According to an aspect of the present disclosure, the above and other objects can be accomplished by the provision of a video decoding method comprising: obtaining a first prediction block for a current block based on a first inter prediction mode; obtaining a second prediction block for the current block based on a second inter prediction mode; and obtaining a final prediction block for the current block based on the first prediction block and the second prediction block.
According to another aspect of the present disclosure, there is provided a video encoding method including: obtaining a first prediction block for the current block based on the first inter prediction mode; obtaining a second prediction block for the current block based on the second inter prediction mode; and obtaining a final prediction block for the current block based on the first prediction block and the second prediction block.
In the video decoding/encoding method according to the present disclosure, at least one of the first inter prediction mode or the second inter prediction mode may be a decoder-side motion estimation mode in which a decoder performs motion estimation using a previously reconstructed reference picture in the same manner as an encoder.
In the video decoding/encoding method according to the present disclosure, motion estimation may include a process of searching, among combinations of a current template comprising a previously reconstructed region around the current block and reference templates of the same size as the current template within the reference picture, for the combination having the optimal cost.
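As a rough sketch of such a template-matching search (an illustrative model under stated assumptions, not the claimed method), the following assumes a sum-of-absolute-differences (SAD) cost and an exhaustive square search window; the function names and search parameters are hypothetical.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized sample arrays."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def template_match(ref, cur_template, tpl_h, tpl_w, anchor, search_range):
    """Search `ref` for the reference template that best matches the current
    template (previously reconstructed samples around the current block).

    anchor: (y, x) of the template's collocated top-left position in `ref`.
    Returns the displacement (dy, dx) with the minimal SAD cost.
    """
    best_cost, best_mv = None, (0, 0)
    ay, ax = anchor
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = ay + dy, ax + dx
            if y < 0 or x < 0 or y + tpl_h > ref.shape[0] or x + tpl_w > ref.shape[1]:
                continue  # candidate template falls outside the reference picture
            cost = sad(ref[y:y + tpl_h, x:x + tpl_w], cur_template)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

Because the template consists only of already reconstructed samples, both encoder and decoder can run this search identically without any motion vector being signaled.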
In the video decoding/encoding method according to the present disclosure, motion estimation may be performed on each of reference pictures having a reference picture index smaller than a threshold value in the reference picture list.
In the video decoding/encoding method according to the present disclosure, motion estimation may be performed on each of reference pictures in a reference picture list whose output order difference with respect to a current picture is equal to or less than a threshold.
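The two selection rules above (a reference picture index below a threshold, or an output-order difference within a threshold) can be sketched as one helper; the function name, parameter names, and the idea of combining both constraints in a single call are illustrative assumptions.

```python
def select_refs_for_estimation(ref_poc_list, cur_poc, max_ref_idx=None, max_poc_diff=None):
    """Pick the reference pictures on which decoder-side motion estimation runs.

    ref_poc_list: output order (POC) of each picture in the reference list,
    ordered by reference picture index. Either constraint, or both, may be active.
    """
    selected = []
    for idx, poc in enumerate(ref_poc_list):
        if max_ref_idx is not None and idx >= max_ref_idx:
            continue  # reference picture index not smaller than the threshold
        if max_poc_diff is not None and abs(cur_poc - poc) > max_poc_diff:
            continue  # output-order difference exceeds the threshold
        selected.append(idx)
    return selected
```

Limiting the set of searched reference pictures in either way bounds the decoder-side estimation workload.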
In the video decoding/encoding method according to the present disclosure, the reference template may be searched within a search range set in the reference picture, and the search range may be set based on initial motion information of the current block.
In the video decoding/encoding method according to the present disclosure, the initial motion information may be motion information about an area larger than the current block.
In the video decoding/encoding method according to the present disclosure, the reference template may be searched within a search range set in the reference picture, the search range may be determined based on a motion characteristic of a region including the current block, and the motion characteristic of the region may be classified as either strong motion or weak motion.
In the video decoding/encoding method according to the present disclosure, motion estimation may include a process of searching, among combinations of an L0 reference block included in an L0 reference picture and an L1 reference block included in an L1 reference picture, for the combination having the optimal cost.
In the video decoding/encoding method according to the present disclosure, the output order of the current picture may be between the output order of the L0 reference picture and the output order of the L1 reference picture.
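A minimal sketch of such a bi-directional (bilateral) matching search follows. It assumes, for simplicity, that the current picture lies midway between the two references so that the L1 vector is the mirror of the L0 vector, that both reference pictures have the same size, and that SAD is the cost; all names are illustrative.

```python
import numpy as np

def bilateral_match(ref_l0, ref_l1, block_pos, block_size, search_range):
    """Bilateral matching: the cost of a candidate motion vector is the
    difference between the L0 block it points to and the L1 block pointed to
    by the mirrored vector. The current block's own samples are not needed,
    so the decoder can run this search by itself.
    """
    by, bx = block_pos
    h, w = block_size
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y0, x0 = by + dy, bx + dx          # L0 block (candidate MV)
            y1, x1 = by - dy, bx - dx          # L1 block (mirrored MV)
            if min(y0, x0, y1, x1) < 0:
                continue
            if max(y0, y1) + h > ref_l0.shape[0] or max(x0, x1) + w > ref_l0.shape[1]:
                continue  # assumes both references share this size
            diff = ref_l0[y0:y0 + h, x0:x0 + w].astype(int) - ref_l1[y1:y1 + h, x1:x1 + w].astype(int)
            cost = int(np.abs(diff).sum())
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

The mirroring reflects the output-order constraint above: with the current picture between the L0 and L1 references, linear motion crosses the current block with opposite displacements in the two references.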
In the video decoding/encoding method according to the present disclosure, a first inter prediction mode may be used for L0 direction prediction of a current block, and a second inter prediction mode may be used for L1 direction prediction of the current block.
In the video decoding/encoding method according to the present disclosure, a final prediction block may be derived based on a weighted sum operation of the first prediction block and the second prediction block, and a first weight assigned to the first prediction block and a second weight assigned to the second prediction block during the weighted sum operation may be adaptively determined according to a type of the first inter prediction mode or the second inter prediction mode.
In the video decoding/encoding method according to the present disclosure, in the case where the first inter prediction mode is a decoder-side motion estimation mode and the second inter prediction mode is a motion information signaling mode, the first weight may have a value greater than the second weight.
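The weighted combination described above might be sketched as follows. The 3:1 weighting in favour of a decoder-side estimated block (1:1 otherwise), the mode labels, and the function name are illustrative assumptions, not values taken from this disclosure; only the ordering (first weight greater than second in the stated case) follows the text.

```python
import numpy as np

def combine_predictions(pred1, pred2, mode1, mode2):
    """Blend two prediction blocks into the final prediction with weights
    chosen adaptively from the inter prediction mode types."""
    if mode1 == "decoder_side_me" and mode2 == "signaled":
        w1, w2 = 3, 1   # assumed example weights: favour the estimated block
    elif mode1 == "signaled" and mode2 == "decoder_side_me":
        w1, w2 = 1, 3
    else:
        w1, w2 = 1, 1   # default: plain average
    total = w1 + w2
    # Integer weighted average with rounding, as codecs typically implement it.
    return (w1 * pred1.astype(np.int32) + w2 * pred2.astype(np.int32) + total // 2) // total
```

A usage example: blending a 100-valued block with a 60-valued block at 3:1 yields 90 per sample, while an equal-weight blend yields 80.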
The features briefly summarized above with respect to the present disclosure are merely exemplary aspects of the detailed description of the disclosure described below and do not limit the scope of the disclosure.
Technical effects
According to the present disclosure, signaling overhead can be reduced by performing motion estimation based on pre-reconstructed pictures on the decoder side.
According to the present disclosure, prediction accuracy can be improved by combining a plurality of inter prediction modes.
In accordance with the present disclosure, the complexity of an encoder/decoder may be reduced by providing a method of adaptively determining the search range of motion estimation.
According to the present disclosure, prediction accuracy can be improved by correcting motion information signaled from an encoder through motion estimation at the decoder side.
Effects obtainable from the present disclosure are not limited to the above-mentioned effects, and other effects not mentioned can be clearly understood from the following description by those of ordinary skill in the art to which the present disclosure pertains.
Drawings
Fig. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present disclosure.
Fig. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present disclosure.
Fig. 3 shows an example in which motion estimation is performed.
Fig. 4 and 5 illustrate examples in which a prediction block for a current block is generated based on motion information generated through motion estimation.
Fig. 6 shows the positions referenced to derive motion vector predictors.
Fig. 7 is a diagram illustrating a template-based motion estimation method.
Fig. 8 shows an example of a template configuration.
Fig. 9 is a diagram showing a motion estimation method based on a bi-directional matching method.
Fig. 10 is a diagram showing a motion estimation method based on a one-way matching method.
Fig. 11 shows a predefined intra prediction mode.
Fig. 12 shows an example in which prediction samples are generated in a planar mode.
Fig. 13 shows an example in which prediction samples are generated in DC mode.
Fig. 14 shows an example in which prediction samples are generated in a directional intra-prediction mode.
Fig. 15 is a flowchart illustrating a method of deriving a prediction block for a current block using a combined prediction method.
Fig. 16 shows an example in which a plurality of prediction blocks are generated according to a combined prediction method.
Fig. 17 shows various examples in which a current block is segmented according to a segmentation type.
Fig. 18 is a view showing an example of directions in which division lines for dividing the current block are located.
Fig. 19 is an example showing the position of the dividing line.
Fig. 20 shows an example in which the current block is divided according to the descriptions of fig. 18 and 19.
Fig. 21 shows an example of deriving prediction samples in a weighted sum region and a non-weighted sum region.
Fig. 22 shows an example in which a filter is applied based on the boundary between the weighted sum region and the non-weighted sum region.
Fig. 23 is a flowchart of a search range adjustment method according to an embodiment of the present disclosure.
Fig. 24 and 25 show examples in which a current picture is divided into a plurality of regions.
Fig. 26 shows an example in which a search range is set around a reference point.
Fig. 27 shows an example of determining the motion characteristics of the current picture.
Fig. 28 shows an example of dividing a current picture and a reference picture into a plurality of regions.
Fig. 29 is a diagram showing an example in which the motion information of each sub-block is corrected.
Detailed Description
As the present disclosure is susceptible to various modifications and alternative embodiments, specific embodiments have been shown in the drawings and will be described in detail. It is not intended to limit the disclosure to the particular embodiments, and it is to be understood that the disclosure includes all changes, equivalents, or alternatives falling within the spirit and scope of the disclosure. In describing each of the drawings, like reference numerals are used for like parts.
Various components may be described using terms such as first, second, etc., but the components should not be limited by these terms. The terms are used merely to distinguish one component from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the rights of the present disclosure. The term "and/or" includes any combination of a plurality of related items or any one of the plurality of related items.
When an element is referred to as being "linked" or "connected" to another element, it is understood that the element can be directly linked or connected to the other element but other elements can be present in the middle. On the other hand, when an element is referred to as being "directly linked" or "directly connected" to another element, it should be understood that there are no other elements in between.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. The singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In the present application, it should be understood that terms such as "comprises" or "comprising" refer to the existence of a feature, number, step, operation, component, part, or combination thereof set forth in the specification, but do not preclude the possibility of the addition or existence of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Hereinafter, preferred embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings. Hereinafter, the same reference numerals are used for the same components in the drawings, and repeated descriptions of the same components are omitted.
Fig. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present disclosure.
Referring to fig. 1, an image encoding apparatus (100) may include: a picture segmentation unit (110), prediction units (120, 125), a transform unit (130), a quantization unit (135), a rearrangement unit (160), an entropy coding unit (165), a dequantization unit (140), an inverse transform unit (145), a filter unit (150), and a memory (155).
Each of the constituent units shown in fig. 1 is shown independently to represent a different characteristic function in the image encoding apparatus; this does not mean that each constituent unit is constituted by separate hardware or a single software unit. That is, the constituent units are enumerated separately for convenience of description: at least two constituent units may be combined into one, or one constituent unit may be divided into a plurality of constituent units that perform its functions. Both the integrated embodiment and the separate embodiments of each constituent unit are included within the scope of the claims of the present disclosure, as long as they do not depart from the essence of the present disclosure.
Furthermore, some components may be merely optional components for improving performance rather than essential components for performing the basic functions of the present disclosure. The present disclosure may be realized by including only the constituent units necessary to realize its essence, excluding the components used only for performance improvement, and a structure including only the necessary components, excluding the optional performance-improving components, is also included in the scope of the claims of the present disclosure.
The picture segmentation unit (110) may segment the input picture into at least one processing unit. In this case, the processing unit may be a Prediction Unit (PU), a Transform Unit (TU), or a Coding Unit (CU). In the picture dividing unit (110), one picture may be divided into a combination of a plurality of coding units, prediction units, and transform units, and the picture may be encoded by selecting the combination of one coding unit, prediction unit, and transform unit according to a predetermined criterion (e.g., a cost function).
For example, one picture may be partitioned into a plurality of coding units. To partition the coding units in a picture, a recursive tree structure such as a quadtree, a ternary tree, or a binary tree may be used: with a picture or a largest coding unit as the root, a coding unit may be partitioned into as many child nodes as there are partitioned coding units. A coding unit that is no longer partitioned, according to certain constraints, becomes a leaf node. In an example, when quadtree partitioning is applied to one coding unit, the coding unit may be partitioned into up to four other coding units.
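The quadtree case described above can be sketched as a small recursive routine; the encoder's cost-based split decision is abstracted into a caller-supplied predicate, and all names are illustrative.

```python
def quadtree_split(x, y, size, min_size, need_split):
    """Recursively split a square coding unit into four equal children while
    `need_split(x, y, size)` (a stand-in for the encoder's cost decision)
    returns True. Returns the leaf coding units as (x, y, size) tuples."""
    if size <= min_size or not need_split(x, y, size):
        return [(x, y, size)]  # leaf node: no further partitioning
    half = size // 2
    leaves = []
    for oy in (0, half):
        for ox in (0, half):
            leaves += quadtree_split(x + ox, y + oy, half, min_size, need_split)
    return leaves
```

For example, splitting a 64x64 unit whenever the size exceeds 32 yields exactly the four 32x32 children mentioned above.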
Hereinafter, in the embodiments of the present disclosure, an encoding unit may be used as a unit for encoding or may be used as a unit for decoding.
Prediction units may be obtained by partitioning one coding unit into units of the same size in at least one square or rectangular shape, or may be partitioned such that one prediction unit obtained from a coding unit has a different shape and/or size from another prediction unit.
In intra prediction, the transform unit may be configured to be identical to the prediction unit. In this case, after dividing the coding unit into a plurality of transform units, intra prediction may be performed for each transform unit. The coding units may be partitioned in the horizontal direction or the vertical direction. The number of transform units generated by dividing the coding unit may be 2 or 4 according to the size of the coding unit.
The prediction units (120, 125) may include an inter prediction unit (120) performing inter prediction and an intra prediction unit (125) performing intra prediction. Whether to perform inter prediction or intra prediction on a coding unit may be determined, and detailed information (e.g., intra prediction mode, motion vector, reference picture, etc.) according to each prediction method may be determined. In this case, the processing unit on which prediction is performed may be different from the processing unit for which the prediction method and its specific content are determined. For example, a prediction method, a prediction mode, or the like may be determined per coding unit, and prediction may be performed per prediction unit or transform unit. Residual values (residual blocks) between the generated prediction block and the original block may be input to the transform unit (130). Further, prediction mode information, motion vector information, and the like used for prediction may be encoded together with the residual value in the entropy encoding unit (165) and transmitted to the decoding apparatus. When a specific coding mode is used, the original block may be encoded as it is, without generating a prediction block through the prediction units (120, 125), and transmitted to the decoding apparatus.
The inter-prediction unit (120) may predict the prediction unit based on information about at least one of a previous picture or a subsequent picture of the current picture, or may predict the prediction unit based on information about some coding regions in the current picture in some cases. The inter prediction unit (120) may include a reference picture interpolation unit, a motion prediction unit, and a motion compensation unit.
The reference picture interpolation unit may receive reference picture information from a memory (155) and generate pixel information equal to or smaller than integer pixels in the reference picture. For luminance pixels, a DCT-based 8-tap interpolation filter having different filter coefficients may be used to generate pixel information equal to or smaller than integer pixels in units of 1/4 pixel. For the chrominance signal, a DCT-based 4-tap interpolation filter having different filter coefficients may be used to generate pixel information equal to or smaller than integer pixels in units of 1/8 pixel.
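A simplified sketch of such fractional-sample interpolation follows. For illustration it uses a known 8-tap DCT-based half-sample filter (the HEVC half-pel luma coefficients); the filters meant in the text may use different coefficients per fractional position, and the one-dimensional, single-row form here is a deliberate simplification.

```python
import numpy as np

# HEVC's half-sample luma filter, used here purely for illustration;
# the gain of the taps is 64, so results are renormalized by >> 6.
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

def interpolate_half_pel_row(row):
    """Horizontal half-sample values for one row of integer samples.
    Output position i lies halfway between input samples i and i+1
    (edge padding supplies the samples beyond the row boundary)."""
    padded = np.pad(row.astype(np.int32), (3, 4), mode="edge")
    out = np.empty(len(row), dtype=np.int32)
    for i in range(len(row)):
        out[i] = int(np.dot(padded[i:i + 8], HALF_PEL_TAPS))
    return (out + 32) >> 6  # normalize with rounding
```

On a constant row the filter reproduces the constant, and on a linear ramp the half-pel sample rounds to the midpoint, which is the behaviour one expects from an interpolation filter.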
The motion prediction unit may perform motion prediction based on the reference picture interpolated by the reference picture interpolation unit. As a method for calculating the motion vector, various methods such as FBMA (full search based block matching algorithm), TSS (three step search), NTS (new three step search algorithm), and the like can be used. The motion vector may have a motion vector value of 1/2 or 1/4 pixel unit based on the interpolated pixel. The motion prediction unit may predict the current prediction unit by changing a motion prediction method. As the motion prediction method, various methods such as a skip method, a merge method, an Advanced Motion Vector Prediction (AMVP) method, an intra block copy method, and the like may be used.
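A minimal three-step search (TSS), one of the methods named above, might look as follows; the SAD cost, integer-pel-only search, and starting step of 4 are simplifications, and all names are assumptions.

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def three_step_search(ref, cur_block, block_pos, step=4):
    """Three-step search: evaluate the 3x3 neighbourhood of the best
    candidate so far, halving the step each round, instead of testing
    every position as a full search (FBMA) would."""
    h, w = cur_block.shape
    by, bx = block_pos
    best_mv = (0, 0)
    best_cost = sad(ref[by:by + h, bx:bx + w], cur_block)
    while step >= 1:
        cy, cx = best_mv
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = by + cy + dy, bx + cx + dx
                if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                    continue
                cost = sad(ref[y:y + h, x:x + w], cur_block)
                if cost < best_cost:
                    best_cost, best_mv = cost, (cy + dy, cx + dx)
        step //= 2
    return best_mv, best_cost
```

Compared with an exhaustive search, TSS evaluates only a handful of candidates per round, trading a guarantee of the global optimum for far lower complexity.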
The intra prediction unit (125) may generate a prediction unit based on reference pixel information, which is pixel information in the current picture. The reference pixel information may be derived from a selected one of a plurality of reference pixel lines. An nth reference pixel line among the plurality of reference pixel lines may include a left pixel having an x-axis difference of N from an upper left pixel in the current block and a top pixel having a y-axis difference of N from the upper left pixel. The number of reference pixel lines that can be selected by the current block may be 1, 2, 3, or 4.
When a neighboring block of the current prediction unit is a block on which inter prediction was performed, and thus a reference pixel is a pixel resulting from inter prediction, the reference pixel included in that block may be replaced with reference pixel information of a surrounding block on which intra prediction was performed. In other words, when a reference pixel is unavailable, the unavailable reference pixel information may be replaced with at least one of the available reference pixels.
The prediction mode of intra prediction may have a direction prediction mode using reference pixel information according to a prediction direction and a non-direction mode not using direction information when performing prediction. The mode for predicting luminance information may be different from the mode for predicting chrominance information, and intra prediction mode information or predicted luminance signal information for predicting luminance information may be used to predict chrominance information.
When the size of the prediction unit is the same as the size of the transform unit when performing intra prediction, intra prediction of the prediction unit may be performed based on a pixel at a left position, a pixel at an upper left position, and a pixel at a top position of the prediction unit.
The intra prediction method may generate the prediction block after applying a smoothing filter to the reference pixels according to the prediction mode. From the selected reference pixel line, it may be determined whether to apply a smoothing filter.
In order to perform the intra prediction method, the intra prediction mode of the current prediction unit may be predicted from the intra prediction modes of prediction units surrounding the current prediction unit. When the prediction mode of the current prediction unit is predicted by using mode information from a surrounding prediction unit, if the intra prediction mode of the current prediction unit is identical to that of the surrounding prediction unit, the fact that the two prediction modes are identical may be transmitted using predetermined flag information; if the two prediction modes differ, the prediction mode information of the current block may be encoded by performing entropy encoding.
Further, a residual block including information about a residual value, which is a difference between a prediction unit performing prediction based on the prediction unit generated in the prediction units (120, 125) and an original block in the prediction unit, may be generated. The generated residual block may be input to a transform unit (130).
The transform unit (130) may transform the residual block, which includes the residual value information between the original block and the prediction unit generated by the prediction units (120, 125), by using a transform method such as DCT (discrete cosine transform), DST (discrete sine transform), or KLT (Karhunen-Loève transform). Whether to apply DCT, DST, or KLT to transform the residual block may be determined based on at least one of the size of the transform unit, the form of the transform unit, the prediction mode of the prediction unit, or the intra prediction mode information of the prediction unit.
The quantization unit (135) may quantize the value transformed to the frequency domain in the transformation unit (130). The quantization coefficients may vary depending on the importance of the block or image. The values calculated in the quantization unit (135) may be provided to a dequantization unit (140) and a rearrangement unit (160).
The rearrangement unit (160) may perform rearrangement of the coefficient values on the quantized residual values.
The rearrangement unit (160) may change coefficients in two-dimensional block form into one-dimensional vector form through a coefficient scanning method. For example, the rearrangement unit (160) may scan from the DC coefficient up to coefficients in the high-frequency domain using a zig-zag scanning method and change them into one-dimensional vector form. Instead of the zig-zag scan, a vertical scan that scans the coefficients of a two-dimensional block in the column direction, a horizontal scan that scans them in the row direction, or a diagonal scan that scans them in the diagonal direction may be used according to the size of the transform unit and the intra prediction mode. In other words, which of the zig-zag scan, the vertical scan, the horizontal scan, or the diagonal scan is used may be determined according to the size of the transform unit and the intra prediction mode.
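The zig-zag scan described above can be sketched as an anti-diagonal traversal starting from the DC coefficient; this is a generic illustration of the scan pattern, not the exact scan tables of any particular codec.

```python
def zigzag_scan(block):
    """Reorder a square 2-D coefficient block into a 1-D list, from the DC
    coefficient toward the high-frequency corner along anti-diagonals."""
    n = len(block)
    coeffs = []
    for s in range(2 * n - 1):          # s = y + x indexes each anti-diagonal
        ys = [y for y in range(n) if 0 <= s - y < n]
        if s % 2 == 0:
            ys.reverse()                # even anti-diagonals run bottom-left to top-right
        coeffs.extend(block[y][s - y] for y in ys)
    return coeffs
```

A 4x4 block whose entries are numbered in scan order comes out as the sequence 1..16, which is a convenient self-check. The decoder's rearrangement unit performs the inverse mapping to restore the two-dimensional block.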
The entropy encoding unit (165) may perform entropy encoding based on the values calculated by the rearrangement unit (160). For example, entropy encoding may use various encoding methods, such as exponential-Golomb coding, CAVLC (context adaptive variable length coding), and CABAC (context adaptive binary arithmetic coding).
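As a sketch of the exponential-Golomb method mentioned above, the zero-order unsigned form (the ue(v) descriptor used in video bitstreams) encodes a value as the binary form of value+1 prefixed by one '0' per significant bit beyond the first; the bit-string representation here is for readability.

```python
def exp_golomb_encode(value):
    """Zero-order exponential-Golomb code for an unsigned value (ue(v)).
    Small, frequent values receive short codewords."""
    code = bin(value + 1)[2:]                 # binary representation of value + 1
    return "0" * (len(code) - 1) + code       # zero prefix of matching length

def exp_golomb_decode(bits):
    """Decode one ue(v) codeword from the front of a bit string.
    Returns (value, remaining_bits)."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2) - 1
    return value, bits[2 * zeros + 1:]
```

The prefix-free structure lets the decoder determine each codeword's length from its leading zeros, which is exactly the property that makes the code usable in a serialized bitstream.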
The entropy encoding unit (165) may encode various information such as residual value coefficient information and block type information in the encoding unit, prediction mode information, partition unit information, prediction unit information and transmission unit information, motion vector information, reference frame information, block interpolation information, filtering information, and the like from the rearrangement unit (160) and the prediction units (120, 125).
The entropy encoding unit (165) may perform entropy encoding on coefficient values in the encoding unit input from the rearrangement unit (160).
A dequantization unit (140) and an inverse transformation unit (145) dequantize the value quantized in the quantization unit (135) and inverse-transform the value transformed in the transformation unit (130). Residual values generated by the dequantization unit (140) and the inverse transform unit (145) may be combined with prediction units predicted by a motion prediction unit, a motion compensation unit, and an intra prediction unit included in the prediction units (120, 125) to generate a reconstructed block.
The filter unit (150) may include at least one of a deblocking filter, an offset correction unit, and an Adaptive Loop Filter (ALF).
The deblocking filter may remove block distortion generated at boundaries between blocks in the reconstructed picture. To determine whether to perform deblocking, whether to apply the deblocking filter to the current block may be determined based on the pixels included in several rows or columns of the block. When a deblocking filter is applied to a block, a strong filter or a weak filter may be applied according to the required deblocking filter strength. Further, in applying the deblocking filter, horizontal-direction filtering and vertical-direction filtering may be configured to be processed in parallel.
The offset correction unit may correct, in units of pixels, the offset of the deblocked image from the original image. To perform offset correction on a specific picture, the pixels included in the image may be divided into a certain number of regions and the region to which an offset is to be applied may be determined; then, either a method of applying an offset to the corresponding region or a method of applying an offset in consideration of edge information of each pixel may be used.
Adaptive Loop Filtering (ALF) may be performed based on values obtained by comparing the filtered reconstructed image with the original image. After dividing pixels included in an image into predetermined groups, filtering may be differently performed on each group by determining one filter to be applied to the corresponding group. Information on whether to apply ALF may be transmitted per Coding Unit (CU) for a luminance signal, and the shape and filter coefficients of an ALF filter to be applied may be different per block. Furthermore, an ALF filter having the same shape (fixed shape) can be applied regardless of the characteristics of the block to be applied.
The memory (155) may store the reconstructed block or picture calculated by the filter unit (150) and may provide the stored reconstructed block or picture to the prediction unit (120, 125) when performing inter prediction.
Fig. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present disclosure.
Referring to fig. 2, the image decoding apparatus (200) may include an entropy decoding unit (210), a rearrangement unit (215), a dequantization unit (220), an inverse transform unit (225), a prediction unit (230, 235), a filter unit (240), and a memory (245).
When an image bitstream is input from an image encoding apparatus, the input bitstream may be decoded according to a process inverse to that of the image encoding apparatus.
The entropy decoding unit (210) may perform entropy decoding according to a process inverse to a process of performing entropy encoding in the entropy encoding unit of the image encoding apparatus. For example, in response to a method performed in the image encoding apparatus, various methods such as exponential golomb (Exponential Golomb), CAVLC (context adaptive variable length coding), and CABAC (context adaptive binary arithmetic coding) may be applied.
The entropy decoding unit (210) may decode information related to intra prediction and inter prediction performed in the encoding apparatus.
The rearrangement unit (215) may rearrange the bitstream entropy-decoded by the entropy decoding unit (210) based on the rearrangement method used in the encoding unit. Coefficients expressed in one-dimensional vector form may be reconstructed into coefficients in two-dimensional block form. The rearrangement unit (215) may receive information on the coefficient scan performed in the encoding unit and perform rearrangement by reversing that scan, based on the scan order used in the corresponding encoding unit.
The dequantization unit (220) may perform dequantization based on the quantization parameter provided from the encoding apparatus and the coefficient values of the rearranged block.
The inverse transform unit (225) may perform, on the quantization result produced in the image encoding apparatus, the inverse of the transform performed in the transform unit, i.e., the inverse DCT, inverse DST, or inverse KLT. The inverse transform may be performed based on the transmission unit determined in the image encoding apparatus. In the inverse transform unit (225) of the image decoding apparatus, a transform technique (e.g., DCT, DST, KLT) may be selectively applied according to multiple pieces of information such as the prediction method, the size or shape of the current block, the prediction mode, the intra prediction direction, and the like.
The prediction unit (230, 235) may generate a prediction block based on information related to generation of the prediction block provided from the entropy decoding unit (210) and pre-decoded block or picture information provided from the memory (245).
As described above, intra prediction is performed in the same manner as in the image encoding apparatus: when the size of the prediction unit is the same as the size of the transform unit, intra prediction of the prediction unit may be performed based on the pixels at the left, upper-left, and top positions of the prediction unit; when the size of the prediction unit differs from the size of the transform unit, intra prediction may be performed using reference pixels based on the transform unit. Furthermore, intra prediction using N×N partitioning may be used only for the minimum coding unit.
The prediction unit (230, 235) may include a prediction unit determination unit, an inter prediction unit, and an intra prediction unit. The prediction unit determination unit may receive various information input from the entropy decoding unit (210), such as prediction unit information, prediction mode information of the intra prediction method, motion prediction-related information of the inter prediction method, etc., divide the prediction unit in the current coding unit, and determine whether the prediction unit performs inter prediction or intra prediction. The inter prediction unit (230) may perform inter prediction on the current prediction unit by using the information required for inter prediction of the current prediction unit provided from the image encoding apparatus, based on information included in at least one of the previous picture or the subsequent picture of the current picture including the current prediction unit. Alternatively, inter prediction may be performed based on information about some pre-reconstructed regions in the current picture including the current prediction unit.
In order to perform inter prediction, it may be determined whether a motion prediction method in a prediction unit included in a corresponding coding unit is a skip mode, a merge mode, an AMVP mode, or an intra block copy mode, based on the coding unit.
An intra prediction unit (235) may generate a prediction block based on pixel information in a current picture. When the prediction unit is a prediction unit in which intra prediction is performed, intra prediction may be performed based on intra prediction mode information in the prediction unit provided from the image encoding apparatus. The intra prediction unit (235) may include an Adaptive Intra Smoothing (AIS) filter, a reference pixel interpolation unit, and a DC filter. As part of performing filtering on the reference pixels of the current block, the AIS filter may be applied by determining whether to apply the filter according to a prediction mode in the current prediction unit. By using the prediction mode and the AIS filter information in the prediction unit supplied from the image encoding apparatus, the AIS filtering may be performed on the reference pixels of the current block. In the case where the prediction mode of the current block is a mode in which the AIS filtering is not performed, the AIS filter may not be applied.
When the prediction mode of the prediction unit is a mode that performs intra prediction based on pixel values obtained by interpolating reference pixels, the reference pixel interpolation unit may interpolate the reference pixels to generate reference pixels in units equal to or smaller than an integer pixel (i.e., at fractional-pel positions). When the prediction mode of the current prediction unit is a mode in which the prediction block is generated without interpolating reference pixels, the reference pixels may not be interpolated. In case the prediction mode of the current block is the DC mode, the DC filter may generate the prediction block by filtering.
The reconstructed block or picture may be provided to a filter unit (240). The filter unit (240) may include a deblocking filter, an offset correction unit, and an ALF.
Information on whether to apply the deblocking filter to the corresponding block or picture and information on whether the strong filter or the weak filter is applied when the deblocking filter is applied may be provided from the image encoding apparatus. Information related to the deblocking filter provided from the image encoding apparatus may be provided in the deblocking filter of the image decoding apparatus, and deblocking filtering of the corresponding block may be performed in the image decoding apparatus.
The offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction applied to the image at the time of encoding and the offset value information.
The ALF may be applied to the encoding unit based on information on whether to apply the ALF, ALF coefficient information, or the like, which is provided from the encoding apparatus. Such ALF information may be provided by being included in a specific parameter set.
The memory (245) may store the reconstructed picture or block to be used as a reference picture or reference block and provide the reconstructed picture to the output unit.
As described above, for convenience of description, in the embodiments of the present disclosure the term "encoding unit" is used as a unit that performs encoding, but it may also be a unit that performs decoding.
Further, the current block represents an encoding/decoding target block, and depending on the encoding/decoding stage, the current block may represent an encoding tree block (or encoding tree unit), an encoding block (or encoding unit), a transform block (or transform unit), a prediction block (or prediction unit), or a block to which an in-loop filter is applied. In this specification, "unit" may represent a basic unit for performing a specific encoding/decoding process, and "block" may represent a predetermined-sized pixel array. Unless otherwise indicated, "blocks" and "units" may be used in the same sense. For example, in an embodiment to be described later, the encoding block and the encoding unit may be understood to have equivalent meanings.
In addition, a picture including the current block will be referred to as a current picture.
When encoding a current picture, redundant data between pictures may be removed by inter-prediction. Inter prediction may be performed on a block basis. In particular, a prediction block for the current block may be generated from a reference picture using motion information of the current block. Here, the motion information may include at least one of a motion vector, a reference picture index, or a prediction direction.
Motion information of the current block may be generated through motion estimation.
Fig. 3 shows an example in which motion estimation is performed.
In fig. 3, it is assumed that a Picture Order Count (POC) of the current picture is T, and that POC of the reference picture is (T-1).
The search range of motion estimation may be set from the same position as the reference point of the current block in the reference picture. Here, the reference point may be the position of the upper-left sample of the current block.
As an example, fig. 3 shows that a rectangle of size (w0+w1) by (h0+h1) is set as the search range centered on the reference point. In this example, w0, w1, h0, and h1 may have the same value. Alternatively, at least one of w0, w1, h0, or h1 may be set to a value different from the others. Alternatively, w0, w1, h0, and h1 may be sized such that a Coding Tree Unit (CTU) boundary, a slice boundary, a tile boundary, or a picture boundary is not exceeded.
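By way of illustration only (not part of the disclosure), clamping the search window so it does not exceed a picture boundary can be sketched as follows; the helper name and the use of integer sample coordinates are assumptions.

```python
def clamp_search_range(ref_x, ref_y, w0, w1, h0, h1, pic_w, pic_h):
    """Clamp a motion-search window of extents (w0, w1, h0, h1) centered on
    the reference point (ref_x, ref_y) so it stays inside a pic_w x pic_h
    picture. Returns the inclusive window corners (x0, y0, x1, y1)."""
    x0 = max(0, ref_x - w0)
    x1 = min(pic_w - 1, ref_x + w1)
    y0 = max(0, ref_y - h0)
    y1 = min(pic_h - 1, ref_y + h1)
    return x0, y0, x1, y1
```

The same idea applies to CTU, slice, or tile boundaries by substituting the corresponding bounds for the picture size.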
After setting the reference blocks having the same size as the current block within the search range, the cost of each reference block with respect to the current block may be measured. The similarity between two blocks can be used to calculate the cost.
As an example, the cost may be calculated based on the sum of absolute differences between the original samples in the current block and the original samples (or reconstructed samples) in each reference block. The smaller the sum, the lower the cost.
After comparing the costs of the reference blocks, the reference block having the optimal cost may be set as a prediction block for the current block.
Further, a distance between the current block and the reference block may be set as a motion vector. Specifically, an x-coordinate difference and a y-coordinate difference between the current block and the reference block may be set as motion vectors.
Further, an index of a picture containing a reference block identified by motion estimation is set as a reference picture index.
Further, the prediction direction may be set based on whether the reference picture belongs to the L0 reference picture list or the L1 reference picture list.
Further, motion estimation may be performed for each of the L0 direction and the L1 direction. If prediction is performed in both the L0 direction and the L1 direction, motion information in the L0 direction and motion information in the L1 direction may be generated.
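The search-range, cost, and motion-vector steps above can be sketched together as a minimal full-search motion estimator. This is an illustrative sketch, not the disclosed method: it assumes row-major lists of luma samples, a square search range, and the sum of absolute differences as the cost.

```python
def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized blocks.
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def motion_estimate(cur_block, ref_pic, cx, cy, search):
    """Full search around the collocated position (cx, cy) in the reference
    picture. Returns ((dx, dy), cost): the x/y coordinate differences between
    the current block and the best reference block form the motion vector."""
    bh, bw = len(cur_block), len(cur_block[0])
    best = (None, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + bh > len(ref_pic) or x + bw > len(ref_pic[0]):
                continue  # keep the reference block inside the picture
            cand = [row[x:x + bw] for row in ref_pic[y:y + bh]]
            cost = sad(cur_block, cand)
            if cost < best[1]:
                best = ((dx, dy), cost)
    return best
```

Running this once per reference picture in the L0 list and once per reference picture in the L1 list yields the L0 and L1 motion information described above.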
Fig. 4 and 5 illustrate examples in which a prediction block for a current block is generated based on motion information generated through motion estimation.
Fig. 4 shows an example of generating a prediction block by unidirectional (i.e., L0 direction) prediction, and fig. 5 shows an example of generating a prediction block by bidirectional (i.e., L0 direction and L1 direction) prediction.
In the case of unidirectional prediction, a prediction block for the current block is generated using one piece of motion information. As an example, the motion information may include an L0 motion vector, an L0 reference picture index, and prediction direction information indicating an L0 direction.
In the case of bi-prediction, two pieces of motion information are used to create the prediction block. As an example, a reference block in the L0 direction identified based on motion information for the L0 direction (L0 motion information) may be set as the L0 prediction block, and a reference block in the L1 direction identified based on motion information for the L1 direction (L1 motion information) may be set as the L1 prediction block. Thereafter, the L0 prediction block and the L1 prediction block may be combined by a weighted sum to generate the prediction block for the current block.
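The weighted summation of the L0 and L1 prediction blocks can be sketched as follows; this is an illustration only, and the equal default weights with a rounding offset are an assumption (actual codecs define specific weight sets and bit depths).

```python
def weighted_bi_prediction(p0, p1, w0=1, w1=1, shift=1):
    """Combine an L0 prediction block p0 and an L1 prediction block p1 by a
    weighted sum. With the default equal weights this reduces to a simple
    average with rounding."""
    offset = 1 << (shift - 1) if shift > 0 else 0
    return [[(w0 * a + w1 * b + offset) >> shift for a, b in zip(r0, r1)]
            for r0, r1 in zip(p0, p1)]
```
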
In the examples shown in figs. 3-5, the L0 reference picture exists before the current picture (i.e., its POC value is smaller than that of the current picture), and the L1 reference picture exists after the current picture (i.e., its POC value is greater than that of the current picture).
However, unlike the example, the L0 reference picture may exist after the current picture, or the L1 reference picture may exist before the current picture. For example, both the L0 reference picture and the L1 reference picture may exist before the current picture, or both the L0 reference picture and the L1 reference picture may exist after the current picture. Alternatively, bi-prediction may be performed using an L0 reference picture existing after the current picture and an L1 reference picture existing before the current picture.
Motion information of a block on which inter prediction is performed may be stored in a memory. In this case, the motion information may be stored in the sample unit. In particular, motion information of a block to which a specific sample belongs may be stored as motion information of the specific sample. The stored motion information may be used to derive motion information for neighboring blocks that are encoded/decoded later.
The encoder may signal information obtained by encoding residual samples corresponding to differences between samples of the current block (i.e., original samples) and prediction samples and motion information required to generate the prediction block to the decoder. The decoder may decode information about the signaled difference values to derive difference samples and add prediction samples within the prediction block generated using the motion information to the difference samples to generate reconstructed samples.
Here, in order to efficiently compress motion information signaled to a decoder, one of a plurality of inter prediction modes may be selected. Here, the plurality of inter prediction modes may include a motion information merge mode and a motion vector prediction mode.
The motion vector prediction mode is a mode in which a difference between a motion vector and a motion vector predicted value is encoded and signaled. Here, the motion vector predictor may be derived based on motion information of a neighboring block or neighboring sample adjacent to the current block.
Fig. 6 shows the positions referenced to derive motion vector predictors.
For convenience of description, it is assumed that the current block has a size of 4×4.
In the illustrated example, "LB" indicates the sample located in the leftmost column and the bottommost row of the current block. "RT" indicates the sample located in the rightmost column and the uppermost row of the current block. A0 to A4 indicate samples adjacent to the left side of the current block, and B0 to B5 indicate samples adjacent to the top of the current block. As an example, A1 indicates the sample adjacent to the left side of LB, and B1 indicates the sample adjacent to the top of RT.
Col indicates the position of samples adjacent to the lower right of the current block in the Co-located (Co-located) picture. The co-located picture is a different picture than the current picture, and the information for identifying the co-located picture may be explicitly encoded in the bitstream and signaled. Alternatively, a reference picture with a predefined reference picture index may be set as a co-located picture.
The motion vector predictor of the current block may be derived from at least one motion vector prediction candidate included in the motion vector prediction list.
The number of motion vector prediction candidates that can be included in the motion vector prediction list (i.e., the size of the list) may be predefined in the encoder and decoder. As an example, the maximum number of motion vector prediction candidates may be two.
A motion vector stored at a position of a neighboring sample adjacent to the current block or a scaled motion vector obtained by scaling the motion vector may be inserted into the motion vector prediction list as a motion vector prediction candidate. At this time, the motion vector prediction candidates may be derived by scanning neighboring samples adjacent to the current block in a predetermined order.
As an example, it is possible to check whether or not a motion vector is stored at each position in order from A0 to A4. According to the scan order, the first found available motion vector may be inserted into the motion vector prediction list as a motion vector prediction candidate.
As another example, it may be checked whether a motion vector is stored at each position in order from A0 to A4, and the first found motion vector corresponding to a position having the same reference picture as the current block may be inserted into the motion vector prediction list as a motion vector prediction candidate. If there is no neighboring sample with the same reference picture as the current block, a motion vector prediction candidate may be derived based on the first found available motion vector. Specifically, after scaling the first found available motion vector, the scaled motion vector may be inserted into the motion vector prediction list as a motion vector prediction candidate. Here, scaling may be performed based on the output order difference (i.e., POC difference) between the current picture and the reference picture of the current block and the output order difference (i.e., POC difference) between the current picture and the reference picture of the neighboring sample.
Further, it is possible to check whether or not a motion vector is stored at each position in the order from B0 to B5. According to the scan order, the first found available motion vector may be inserted into the motion vector prediction list as a motion vector prediction candidate.
As another example, it may be checked whether a motion vector is stored at each position in order from B0 to B5, and the first found motion vector corresponding to a position having the same reference picture as the current block may be inserted into the motion vector prediction list as a motion vector prediction candidate. If there is no neighboring sample with the same reference picture as the current block, a motion vector prediction candidate may be derived based on the first found available motion vector. Specifically, after scaling the first found available motion vector, the scaled motion vector may be inserted into the motion vector prediction list as a motion vector prediction candidate. Here, scaling may be performed based on the output order difference (i.e., POC difference) between the current picture and the reference picture of the current block and the output order difference (i.e., POC difference) between the current picture and the reference picture of the neighboring sample.
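The POC-based scaling described above can be sketched as follows. This is a simplified illustration: real codecs implement this scaling with fixed-point arithmetic and clipping, which are omitted here.

```python
def scale_mv(mv, cur_poc, cur_ref_poc, nbr_ref_poc):
    """Scale a neighboring motion vector mv = (mvx, mvy) by the ratio of the
    POC distance from the current picture to the current block's reference
    picture (tb) over the POC distance from the current picture to the
    neighbor's reference picture (td)."""
    tb = cur_poc - cur_ref_poc
    td = cur_poc - nbr_ref_poc
    if td == 0:
        return mv  # degenerate case: no scaling possible
    return (mv[0] * tb // td, mv[1] * tb // td)
```

When the two POC differences are equal, the vector is returned unchanged, which matches the case where the neighbor shares the current block's reference picture.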
As shown in the above example, the motion vector prediction candidate may be derived from a sample adjacent to the left side of the current block, and the motion vector prediction candidate may be derived from a sample adjacent to the top of the current block.
Here, the motion vector prediction candidates derived from the left sample may be inserted into the motion vector prediction list before the motion vector prediction candidates derived from the top sample. In this case, the index allocated to the motion vector prediction candidate derived from the left sample may have a smaller value than the index allocated to the motion vector prediction candidate derived from the top sample.
On the other hand, the motion vector prediction candidate derived from the top sample may be inserted into the motion vector prediction list prior to the motion vector prediction candidate derived from the left sample.
Among the motion vector prediction candidates included in the motion vector prediction list, the motion vector prediction candidate having the highest coding efficiency may be set as a Motion Vector Predictor (MVP) of the current block. Further, index information indicating a motion vector prediction candidate set as a motion vector predictor of the current block among the plurality of motion vector prediction candidates may be encoded and signaled to a decoder. When the number of motion vector prediction candidates is two, the index information may be a 1-bit flag (e.g., MVP flag). In addition, a Motion Vector Difference (MVD), which is the difference between the motion vector of the current block and the motion vector predictor, may be encoded and signaled to the decoder.
The decoder may construct a motion vector prediction list in the same manner as the encoder. Further, the decoder may decode the index information from the bitstream and select one of a plurality of motion vector prediction candidates based on the decoded index information. The selected motion vector prediction candidate may be set as a motion vector predictor of the current block.
Furthermore, the decoder may decode the motion vector difference from the bitstream. Thereafter, the decoder may derive a motion vector of the current block by summing the motion vector predictor and the motion vector difference.
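The decoder-side derivation just described (select a predictor by the signaled index, then add the motion vector difference) can be sketched as follows; the function and argument names are illustrative, not from the disclosure.

```python
def derive_motion_vector(mvp_candidates, mvp_flag, mvd):
    """Pick the motion vector predictor indicated by the decoded MVP flag
    and add the decoded motion vector difference (MVD) component-wise."""
    mvp = mvp_candidates[mvp_flag]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```
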
In the case of applying bi-prediction to the current block, a motion vector prediction list may be generated for each of the L0 direction and the L1 direction. That is, each motion vector prediction list may include only motion vectors of the same direction. Accordingly, the motion vector of the current block and the motion vector prediction candidates included in the corresponding motion vector prediction list have the same direction.
In case of selecting the motion vector prediction mode, the reference picture index and the prediction direction information may be explicitly encoded and signaled to the decoder. As an example, in a case where there are a plurality of reference pictures in the reference picture list and motion estimation is performed on each of the plurality of reference pictures, a reference picture index for identifying a reference picture from which motion information of a current block is derived among the plurality of reference pictures may be explicitly encoded and signaled to a decoder.
At this time, if the reference picture list includes only one reference picture, encoding/decoding of the reference picture index may be omitted.
The prediction direction information may be an index indicating one of L0 unidirectional prediction, L1 unidirectional prediction, or bidirectional prediction. Alternatively, an L0 flag indicating whether to perform prediction in the L0 direction and an L1 flag indicating whether to perform prediction in the L1 direction may be encoded and signaled.
The motion information merge mode is a mode in which motion information of a current block is set to be the same as motion information of a neighboring block. In the motion information merge mode, motion information may be encoded/decoded using a motion information merge list.
The motion information merge candidates may be derived based on motion information of neighboring blocks or neighboring samples adjacent to the current block. For example, reference positions around the current block may be predefined, and then it may be checked whether motion information exists at the predefined reference positions. If there is motion information at a predefined reference location, the motion information at that location may be inserted into the motion information merge list as a motion information merge candidate.
In the example of fig. 6, the predefined reference locations may include at least one of: a0, A1, B0, B1, B5 or Col. Further, the motion information merge candidates may be derived in the order of A1, B0, A0, B5, and Col.
Among the motion information merge candidates included in the motion information merge list, the motion information of the motion information merge candidate having the optimal cost may be set as the motion information of the current block. Further, index information (e.g., a merge index) indicating a motion information merge candidate selected from among the plurality of motion information merge candidates may be encoded and transmitted to a decoder.
In the decoder, the motion information merge list may be constructed in the same manner as the encoder. Then, a motion information merge candidate may be selected based on a merge index decoded from the bitstream. The motion information of the selected motion information combining candidate may be set as the motion information of the current block.
Unlike the motion vector prediction list, the motion information merge list is configured as a single list regardless of the prediction direction. That is, the motion information merge candidates included in the motion information merge list may have only L0 motion information or L1 motion information, or may have bi-directional motion information (i.e., L0 motion information and L1 motion information).
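The merge-list construction described above can be sketched as follows. This is an illustration under stated assumptions: the scan order is the one given in the text (A1, B0, A0, B5, Col), and the pruning of duplicate candidates is a common practice assumed here, not something the passage specifies.

```python
def build_merge_list(motion_at, max_cands=5):
    """Scan the predefined reference positions in order and collect available,
    non-duplicate motion information as merge candidates. 'motion_at' maps a
    position name to its stored motion information (or is missing the key if
    no motion information exists there)."""
    scan_order = ("A1", "B0", "A0", "B5", "Col")  # order given in the text
    merge_list = []
    for pos in scan_order:
        mi = motion_at.get(pos)
        if mi is not None and mi not in merge_list:
            merge_list.append(mi)
        if len(merge_list) == max_cands:
            break
    return merge_list
```

The decoder builds the same list and simply indexes into it with the decoded merge index to recover the current block's motion information.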
The motion information of the current block may also be derived using reconstructed sample regions surrounding the current block. Here, the reconstructed sample region used to derive the motion information of the current block may be referred to as a template.
Fig. 7 is a diagram illustrating a template-based motion estimation method.
In fig. 3, a prediction block for a current block is determined based on costs between the current block and a reference block in a search range. According to the present embodiment, unlike the example shown in fig. 3, motion estimation of a current block may be performed based on a cost between a template adjacent to the current block (hereinafter, referred to as a current template) and a reference template having the same size and shape as the current template.
As an example, the cost may be calculated based on the sum of absolute differences between reconstructed samples in the current template and reconstructed samples in the reference template. The smaller the sum, the lower the cost.
After determining, within the search range, the reference template having the optimal cost with respect to the current template, the reference block adjacent to that reference template may be set as the prediction block for the current block.
Further, the motion information of the current block may be set based on a distance between the current block and the reference block, an index of a picture to which the reference block belongs, and whether the reference picture is included in the L0 or L1 reference picture list.
Since the previously reconstructed region around the current block is defined as a template, the decoder can perform motion estimation in the same manner as the encoder. Therefore, in the case of deriving motion information using a template, it is not necessary to encode and signal the motion information other than information indicating whether or not the template is used.
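The template-cost comparison described above can be sketched as follows. This is an illustrative sketch only; representing each candidate displacement's reference template as an entry of a dictionary is an assumption made for brevity.

```python
def template_cost(cur_template, ref_template):
    # Sum of absolute differences between reconstructed samples of the
    # current template and the reference template.
    return sum(abs(a - b) for ra, rb in zip(cur_template, ref_template)
               for a, b in zip(ra, rb))

def template_match(cur_template, candidates):
    """Pick the reference template (and hence the adjacent reference block)
    with the lowest template cost. 'candidates' maps a candidate motion
    vector to the reference template found at that displacement."""
    return min(candidates.items(),
               key=lambda kv: template_cost(cur_template, kv[1]))
```

Because the templates consist only of previously reconstructed samples, the decoder can evaluate exactly the same costs as the encoder and reach the same motion vector without it being signaled.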
The current template may include at least one of: an area adjacent to the top of the current block or an area adjacent to the left side of the current block. Here, the region adjacent to the top may include at least one row, and the region adjacent to the left side may include at least one column.
Fig. 8 shows an example of a template configuration.
The current template may be constructed according to one of the examples shown in fig. 8.
Alternatively, unlike the example shown in fig. 8, the template may be constructed using only an area adjacent to the left side of the current block or using only an area adjacent to the top of the current block.
The size and/or shape of the current template may be predefined in the encoder and decoder.
Alternatively, a plurality of template candidates having different sizes and/or shapes may be predefined, and then index information for identifying one of the plurality of template candidates may be encoded and signaled to the decoder.
Alternatively, one of the plurality of candidate templates may be adaptively selected based on at least one of the size, shape, or position of the current block. For example, if the current block touches the upper boundary of the CTU, the current template may be constructed using only the region adjacent to the left side of the current block.
Template-based motion estimation may be performed on each reference picture stored in the reference picture list. Alternatively, motion estimation may be performed on only some reference pictures. As an example, motion estimation may be performed only on a reference picture having a reference picture index of 0, or may be performed only on a reference picture having a reference picture index less than a threshold value or a reference picture whose POC difference from the current picture is less than a threshold value.
Alternatively, the reference picture index may be explicitly encoded and signaled, and then motion estimation may be performed only on the reference picture indicated by the reference picture index.
Alternatively, motion estimation may be performed on reference pictures of neighboring blocks corresponding to the current template. For example, if the template includes a left neighboring region and a top neighboring region, at least one reference picture may be selected using at least one of a reference picture index of the left neighboring block or a reference picture index of the top neighboring block. Thereafter, motion estimation may be performed on the at least one selected reference picture.
Information indicating whether template-based motion estimation has been applied may be encoded and signaled to the decoder. The information may be a 1-bit flag. For example, if the flag is true (1), it indicates that template-based motion estimation has been applied to the L0 direction and L1 direction of the current block. On the other hand, if the flag is false (0), it indicates that no template-based motion estimation is applied. In this case, the motion information of the current block may be derived based on the motion information merge mode or the motion vector prediction mode.
On the other hand, the template-based motion estimation may be applied only in a case where it is determined that the motion information merge mode and the motion vector prediction mode are not applied to the current block. For example, when both the first flag indicating whether the motion information merge mode has been applied and the second flag indicating whether the motion vector prediction mode has been applied are 0, template-based motion estimation may be performed.
For each of the L0 direction and the L1 direction, information indicating whether template-based motion estimation has been applied may be signaled. That is, whether the template-based motion estimation is applied to the L0 direction and whether the template-based motion estimation is applied to the L1 direction may be determined independently of each other. Thus, template-based motion estimation may be applied to one of the L0 direction and the L1 direction, and another mode (e.g., a motion information merge mode or a motion vector prediction mode) may be applied to the other direction.
When template-based motion estimation is applied to both the L0 direction and the L1 direction, a prediction block for the current block may be generated based on a weighted sum operation of the L0 prediction block and the L1 prediction block. Alternatively, even when template-based motion estimation is applied to one of the L0 direction and the L1 direction and another mode is applied to the other direction, the prediction block for the current block may still be generated based on a weighted sum operation of the L0 prediction block and the L1 prediction block. This will be described later using equation 2.
Alternatively, the template-based motion estimation method may be inserted as a motion information merge candidate in a motion information merge mode or a motion vector prediction candidate in a motion vector prediction mode. In this case, whether to apply the template-based motion estimation method may be determined based on whether the selected motion information merge candidate or the selected motion vector prediction candidate indicates the template-based motion estimation method.
Motion information of the current block may also be generated based on a bi-directional matching method.
Fig. 9 is a diagram showing a motion estimation method based on a bi-directional matching method.
The bi-directional matching method may be performed only when the temporal order of the current picture (i.e., POC) exists between the temporal order of the L0 reference picture and the temporal order of the L1 reference picture.
When the bi-directional matching method is applied, a search range may be set for each of the L0 reference picture and the L1 reference picture. At this time, an L0 reference picture index for identifying an L0 reference picture and an L1 reference picture index for identifying an L1 reference picture may be encoded and signaled.
As another example, only the L0 reference picture index may be encoded and signaled, and the L1 reference picture may be selected based on the distance between the current picture and the L0 reference picture (hereinafter referred to as the L0 POC difference). As an example, among the L1 reference pictures included in the L1 reference picture list, an L1 reference picture whose absolute distance from the current picture (hereinafter referred to as the L1 POC difference) is equal to the absolute value of the L0 POC difference may be selected. If there is no L1 reference picture whose L1 POC difference equals the L0 POC difference, the L1 reference picture whose L1 POC difference is closest to the L0 POC difference may be selected from among the L1 reference pictures.
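The selection rule above may be sketched as follows. This is an illustrative simplification, not part of the disclosed syntax: the helper name and the representation of reference pictures as plain POC integers are assumptions.

```python
def select_l1_reference(cur_poc, l0_ref_poc, l1_ref_pocs):
    # L0 POC difference: distance between the current picture and the L0 reference.
    l0_diff = abs(cur_poc - l0_ref_poc)
    # Prefer the L1 reference whose absolute POC distance equals the L0 POC
    # difference; otherwise take the one whose distance is closest to it.
    return min(l1_ref_pocs, key=lambda poc: abs(abs(cur_poc - poc) - l0_diff))

# Current picture POC 8, L0 reference POC 6 (L0 POC difference = 2):
# among L1 candidates 9, 10 and 12, POC 10 has the same distance (2).
```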
Here, among L1 reference pictures, only L1 reference pictures having different temporal directions from L0 reference pictures may be used for bi-directional matching. For example, if the POC of the L0 reference picture is smaller than that of the current picture, one of the L1 reference pictures having a POC greater than that of the current picture may be selected.
On the other hand, only the L1 reference picture index may be encoded and signaled, and the L0 reference picture may be selected based on the distance between the current picture and the L1 reference picture.
Alternatively, the bi-directional matching method may be performed using an L0 reference picture closest to the current picture among L0 reference pictures and an L1 reference picture closest to the current picture among L1 reference pictures.
Alternatively, the bi-directional matching method may also be performed using an L0 reference picture to which a predefined index (e.g., index 0) is allocated in the L0 reference picture list and an L1 reference picture to which a predefined index (e.g., index 0) is allocated in the L1 reference picture list.
Alternatively, an LX (X is 0 or 1) reference picture may be selected based on an explicitly signaled reference picture index, and the reference picture closest to the current picture among the L(1-X) reference pictures, or the reference picture having a predefined index in the L(1-X) reference picture list, may be selected as the L(1-X) reference picture.
As another example, the L0 reference picture and/or the L1 reference picture may be selected based on motion information of neighboring blocks of the current block. As an example, the L0 reference picture and/or L1 reference picture to be used for bi-directional matching may be selected using the reference picture index of the left or top neighboring block of the current block.
The search range may be set within a predetermined range from the co-located block in the reference picture.
As another example, the search range may be set based on the initial motion information. The initial motion information may be derived from neighboring blocks of the current block. For example, motion information of a left neighboring block or a top neighboring block of the current block may be set as initial motion information of the current block.
In the case where the bi-directional matching method is applied, the L0 motion vector and the L1 motion vector are set to opposite directions. This indicates that the L0 motion vector and the L1 motion vector have opposite signs. Further, the size of the LX motion vector may be proportional to the distance (i.e., POC difference) between the current picture and the LX reference picture.
Thereafter, motion estimation may be performed using a cost between a reference block (hereinafter referred to as an L0 reference block) within the search range of the L0 reference picture and a reference block (hereinafter referred to as an L1 reference block) within the search range of the L1 reference picture.
If an L0 reference block having a vector (x, y) with respect to the current block is selected, an L1 reference block located at a displacement (-Dx, -Dy) from the current block may be selected. Here, D may be determined by the ratio of the distance between the current picture and the L1 reference picture to the distance between the current picture and the L0 reference picture.
For example, in the example shown in fig. 9, the absolute value of the distance between the current picture (T) and the L0 reference picture (T-1) is the same as the absolute value of the distance between the current picture (T) and the L1 reference picture (T+1). Thus, in the example shown, the L0 motion vector (x0, y0) and the L1 motion vector (x1, y1) have the same magnitude but opposite directions. If an L1 reference picture with POC (T+2) is used, the L1 motion vector (x1, y1) is set to (-2x0, -2y0).
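Under this constraint, the L1 motion vector follows directly from the L0 motion vector and the POC distances. A minimal sketch (the helper name is hypothetical, and exact division between POC distances is assumed):

```python
def mirror_l0_to_l1(mv_l0, cur_poc, l0_poc, l1_poc):
    # Ratio of signed POC distances: with L0 in the past and L1 in the
    # future, d is positive, so the L1 motion vector gets the opposite sign.
    d = (l1_poc - cur_poc) / (cur_poc - l0_poc)
    x0, y0 = mv_l0
    return (-d * x0, -d * y0)

# Current picture POC 10, L0 reference POC 9, L1 reference POC 12 (the
# (T+2) case above): the L1 vector is scaled by 2 and sign-inverted.
```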
After the L0 reference block and the L1 reference block having the optimal cost are selected, the L0 reference block and the L1 reference block may be set as an L0 prediction block and an L1 prediction block for the current block. Thereafter, a final prediction block for the current block may be generated by a weighted sum operation of the L0 reference block and the L1 reference block. As an example, a prediction block for the current block may be generated according to equation 2, which will be described later.
When the bi-directional matching method is applied, the decoder may perform motion estimation in the same manner as the encoder. Accordingly, only information indicating whether or not the bi-directional matching method is applied needs to be explicitly encoded/decoded, and encoding/decoding of motion information such as motion vectors may be omitted. As described above, at least one of the L0 reference picture index or the L1 reference picture index may be explicitly encoded/decoded.
As another example, information indicating whether the bi-directional matching method has been applied may be explicitly encoded/decoded, and when the bi-directional matching method has been applied, an L0 motion vector or an L1 motion vector may be explicitly encoded and signaled. In the case where the L0 motion vector has been signaled, the L1 motion vector may be derived based on the POC difference between the current picture and the L0 reference picture and the POC difference between the current picture and the L1 reference picture. In case that the L1 motion vector has been signaled, the L0 motion vector may be derived based on the POC difference between the current picture and the L0 reference picture and the POC difference between the current picture and the L1 reference picture. At this time, the encoder may explicitly encode the smaller one of the L0 motion vector and the L1 motion vector.
The information indicating whether the bi-directional matching method has been applied may be a 1-bit flag. As an example, if the flag is true (e.g., 1), it may be indicated that the bi-directional matching method has been applied to the current block. If the flag is false (e.g., 0), it may indicate that the bi-directional matching method is not applied to the current block. In this case, a motion information merge mode or a motion vector prediction mode may be applied to the current block.
On the other hand, the bi-directional matching method may be applied only in the case where it is determined that the motion information merge mode and the motion vector prediction mode are not applied to the current block. For example, when both the first flag indicating whether to apply the motion information merge mode and the second flag indicating whether to apply the motion vector prediction mode are 0, the bi-directional matching method may be applied.
Alternatively, the bi-directional matching method may be inserted as a motion information merge candidate in the motion information merge mode or a motion vector prediction candidate in the motion vector prediction mode. In this case, whether to apply the bi-directional matching method may be determined based on whether the selected motion information merge candidate or the selected motion vector prediction candidate indicates the bi-directional matching method.
It has been described that, in the bi-directional matching method, the temporal order of the current picture needs to lie between the temporal order of the L0 reference picture and the temporal order of the L1 reference picture. The prediction block for the current block may also be generated by using a uni-directional matching method, to which the above-described restriction of the bi-directional matching method does not apply. Specifically, in the uni-directional matching method, two reference pictures whose temporal order (i.e., POC) is smaller than that of the current picture, or two reference pictures whose temporal order is greater than that of the current picture, may be used. Here, both reference pictures may be derived from the L0 reference picture list or the L1 reference picture list. Alternatively, one of the two reference pictures may be derived from the L0 reference picture list, and the other reference picture may be derived from the L1 reference picture list.
Fig. 10 is a diagram showing a motion estimation method based on a one-way matching method.
The unidirectional matching method may be performed based on two reference pictures having POCs smaller than that of the current picture (i.e., forward reference pictures) or two reference pictures having POCs greater than that of the current picture (i.e., backward reference pictures). Fig. 10 illustrates motion estimation based on the unidirectional matching method using a first reference picture (T-1) and a second reference picture (T-2), both having POCs smaller than that of the current picture (T).
Here, a first reference picture index for identifying the first reference picture and a second reference picture index for identifying the second reference picture may be encoded and signaled. Among the two reference pictures used for the unidirectional matching method, the reference picture having the smaller POC difference from the current picture may be set as the first reference picture. Accordingly, when the first reference picture is selected, only a reference picture having a larger POC difference from the current picture than the first reference picture may be set as the second reference picture. The second reference picture index may be set such that it indicates one of the rearranged reference pictures that have the same temporal direction as the first reference picture and a larger POC difference from the current picture than the first reference picture.
On the other hand, the reference picture having the larger POC difference from the current picture among the two reference pictures may instead be set as the first reference picture. In this case, the second reference picture index may be set such that it indicates one of the rearranged reference pictures that have the same temporal direction as the first reference picture and a smaller POC difference from the current picture than the first reference picture.
Alternatively, the unidirectional matching method may be performed using a reference picture to which a predefined index is allocated in the reference picture list and a reference picture having the same temporal direction as the reference picture. As an example, a reference picture having an index of 0 in the reference picture list may be set as the first reference picture, and a reference picture having the smallest index among reference pictures having the same temporal direction as the first reference picture in the reference picture list may be selected as the second reference picture.
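The default-index rule above can be illustrated as follows (a hypothetical helper; the reference picture list is modeled as an ordered list of POC values):

```python
def pick_unidirectional_refs(cur_poc, ref_list_pocs):
    # The picture at index 0 of the reference picture list is the first reference.
    first = ref_list_pocs[0]
    # The second reference is the lowest-index picture lying on the same
    # temporal side of the current picture as the first reference.
    same_side = [p for p in ref_list_pocs[1:]
                 if (p - cur_poc) * (first - cur_poc) > 0]
    return first, same_side[0]
```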
Both the first reference picture and the second reference picture may be selected from the L0 reference picture list or the L1 reference picture list. Fig. 10 shows that two L0 reference pictures are used for the unidirectional matching method. Alternatively, the first reference picture may be selected from the L0 reference picture list, and the second reference picture may be selected from the L1 reference picture list.
The information indicating whether the first reference picture and/or the second reference picture belongs to the L0 reference picture list or the L1 reference picture list may be additionally encoded/decoded.
Alternatively, one of the L0 reference picture list and the L1 reference picture list, which is set as the default, may be used to perform unidirectional matching. Alternatively, the two reference pictures may be selected from whichever of the L0 reference picture list and the L1 reference picture list contains the larger number of reference pictures.
Thereafter, a search range within the first reference picture and the second reference picture may be set.
The search range may be set within a predetermined range from the co-located block in the reference picture.
As another example, the search range may be set based on the initial motion information. The initial motion information may be derived from neighboring blocks of the current block. For example, motion information of a left neighboring block or a top neighboring block of the current block may be set as initial motion information of the current block.
Thereafter, motion estimation may be performed using a cost between a first reference block within a search range of the first reference picture and a second reference block within a search range of the second reference picture.
At this time, in the unidirectional matching method, the magnitude of the motion vector needs to increase in proportion to the distance between the current picture and the reference picture. Specifically, when a first reference block having a vector (x, y) with respect to the current block is selected, the second reference block needs to be spaced apart from the current block by (Dx, Dy). Here, D may be determined by the ratio of the distance between the current picture and the second reference picture to the distance between the current picture and the first reference picture.
For example, in the example of fig. 10, the distance between the current picture and the first reference picture (i.e., the POC difference) is 1, and the distance between the current picture and the second reference picture is 2. Accordingly, when the first motion vector of the first reference block in the first reference picture is (x0, y0), the second motion vector (x1, y1) of the second reference block in the second reference picture may be set to (2x0, 2y0).
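The proportionality constraint can be expressed directly (the helper name is hypothetical, and exact division between POC distances is assumed):

```python
def scale_first_to_second(mv1, cur_poc, ref1_poc, ref2_poc):
    # D: ratio of the POC distance to the second reference picture to the
    # POC distance to the first reference picture (same temporal direction,
    # so the signed ratio is positive and the vectors keep the same sign).
    d = (cur_poc - ref2_poc) / (cur_poc - ref1_poc)
    x0, y0 = mv1
    return (d * x0, d * y0)

# Fig. 10 case: distances 1 and 2, so (x0, y0) maps to (2x0, 2y0).
```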
When the first and second reference blocks having the optimal cost are selected, the first and second reference blocks may be set as the first and second prediction blocks for the current block. Thereafter, a final prediction block for the current block may be generated by a weighted sum operation of the first prediction block and the second prediction block. As an example, a prediction block for the current block may be generated according to equation 2, which will be described later.
In case that the unidirectional matching method is applied, the decoder may perform motion estimation in the same manner as the encoder. Accordingly, only information indicating whether or not to apply the unidirectional matching method needs to be explicitly encoded/decoded, and encoding/decoding of motion information such as motion vectors may be omitted. As described above, at least one of the first reference picture index or the second reference picture index may be explicitly encoded/decoded.
As another example, information indicating whether the one-way matching method has been applied may be explicitly encoded/decoded, and in case that the one-way matching method has been applied, the first motion vector or the second motion vector may be explicitly encoded and signaled. In case the first motion vector has been signaled, the second motion vector may be derived based on the POC difference between the current picture and the first reference picture and the POC difference between the current picture and the second reference picture. In case the second motion vector has been signaled, the first motion vector may be derived based on the POC difference between the current picture and the first reference picture and the POC difference between the current picture and the second reference picture. At this time, the encoder may explicitly encode the smaller one of the first motion vector and the second motion vector.
The information indicating whether the one-way matching method has been applied may be a 1-bit flag. As an example, if the flag is true (e.g., 1), it may indicate that the one-way matching method is applied to the current block. If the flag is false (e.g., 0), it may indicate that the one-way matching method is not applied to the current block. In this case, a motion information merge mode or a motion vector prediction mode may be applied to the current block.
On the other hand, the one-way matching method may be applied only in the case where it is determined that the motion information merge mode and the motion vector prediction mode are not applied to the current block. For example, when both the first flag indicating whether to apply the motion information merge mode and the second flag indicating whether to apply the motion vector prediction mode are 0, the one-way matching method may be applied.
Alternatively, the one-way matching method may be inserted as a motion information merge candidate in the motion information merge mode or a motion vector prediction candidate in the motion vector prediction mode. In this case, whether to apply the one-way matching method may be determined based on whether the selected motion information merge candidate or the selected motion vector prediction candidate indicates the one-way matching method.
Intra prediction is a method of obtaining a prediction block for a current block using reference samples having spatial similarity to the current block. The reference samples for intra prediction may be reconstructed samples. As an example, previously reconstructed samples around the current block may be set as reference samples. Alternatively, in the case where it is determined that the reconstructed sample at the specific position is not available, the adjacent reconstructed sample may be set as the reference sample at the specific position.
Unlike the above description, original samples may also be set as reference samples.
As shown in the above examples, a method of performing motion estimation in the decoder in the same manner as in the encoder, that is, at least one of the template-based motion estimation method, the bi-directional matching method, or the uni-directional matching method, may be defined as an inter prediction mode. Here, in the case where a plurality of decoder-side motion estimation methods are defined as inter prediction modes, an index indicating one of the plurality of decoder-side motion estimation methods may be encoded and signaled together with a flag indicating whether to apply a decoder-side motion estimation method. As an example, an index indicating at least one of the template-based motion estimation method, the bi-directional matching method, or the uni-directional matching method may be encoded and signaled.
Intra prediction may be performed based on at least one of a plurality of intra prediction modes predefined in the encoder and decoder.
Fig. 11 shows a predefined intra prediction mode.
The predefined intra prediction modes in the encoder and decoder may include a non-directional intra prediction mode and a directional prediction mode. For example, in the example shown in fig. 11, mode 0 represents a planar mode, which is a non-directional mode, and mode 1 represents a DC mode, which is a non-directional mode. Further, fig. 11 shows 65 directional intra prediction modes (2 to 66).
More or fewer intra prediction modes than shown may be predefined in the encoder and decoder.
One of the predefined intra prediction modes may be selected, and a prediction block for the current block may be obtained based on the selected intra prediction mode. At this time, the number and positions of the reference samples used for generating the intra prediction samples may be adaptively determined according to the selected intra prediction mode.
Fig. 12 shows an example in which prediction samples are generated in a planar mode.
In the example shown in fig. 12, when a prediction block is generated in a planar mode, a reference sample T adjacent to the upper right corner of the current block and a reference sample L adjacent to the lower left corner of the current block may be used.
P1 represents a prediction sample in the horizontal direction, and P2 represents a prediction sample in the vertical direction. P1 may be generated by linearly interpolating a reference sample having the same y-coordinate as P1 (i.e., a reference sample located in the horizontal direction of P1) and a reference sample T. P2 may be generated by linearly interpolating the reference sample L and a reference sample having the same x-coordinate as P2 (i.e., a reference sample located in the vertical direction of P2).
Thereafter, a final prediction sample may be obtained by a weighted sum operation of the horizontal prediction sample P1 and the vertical prediction sample P2. Equation 1 represents an example of generating final prediction samples.
[Equation 1]
(α×P1 + β×P2) / (α+β)
In equation 1, α indicates a weight assigned to the horizontal prediction sample P1, and β indicates a weight assigned to the vertical prediction sample P2. The weights α and β may be determined based on the width and height of the current block. The weights α and β may have the same value or different values depending on the width and height of the current block. For example, if one side of a block is longer than the other side, the weight assigned to the prediction sample in the direction parallel to the long side may be set to have a larger value. Alternatively, on the other hand, the weight assigned to the prediction sample in the direction parallel to the long side may be set to have a smaller value.
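The horizontal and vertical interpolations and the weighted sum of equation 1 can be sketched per sample as follows. Equal weights (α = β = 1) are used here as one possible choice, and the argument layout ('left' and 'top' reference sample arrays plus the corner samples T and L) is an assumption of this sketch.

```python
def planar_sample(left, top, T, L, x, y, w, h):
    # P1: horizontal interpolation between the left reference sample in the
    # same row and the top-right reference sample T.
    p1 = ((w - 1 - x) * left[y] + (x + 1) * T) / w
    # P2: vertical interpolation between the top reference sample in the
    # same column and the bottom-left reference sample L.
    p2 = ((h - 1 - y) * top[x] + (y + 1) * L) / h
    # Equation 1 with alpha = beta = 1.
    return (p1 + p2) / 2
```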
Fig. 13 shows an example in which prediction samples are generated in DC mode.
In DC mode, an average value of reference samples around the current block may be calculated. Fig. 13 shows the range of reference samples used to calculate the average. As in the example shown in fig. 13, the average value may be calculated based on the upper reference sample and the left reference sample.
Depending on the shape of the current block, the average value may be calculated using only the upper reference samples or only the left reference samples. For example, if the width of the current block is greater than its height, or if the ratio between the width and the height of the current block is equal to or greater than (or less than) a predefined value, the average value may be calculated using only the upper reference samples.
On the other hand, if the width of the current block is smaller than its height, or if the ratio between the width and the height of the current block is smaller than (or greater than) a predefined value, the average value may be calculated using only the left reference samples.
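The shape-dependent averaging may be sketched as follows; using equality of width and height as the dividing line is an assumption of this sketch, since the exact ratio threshold is left open above.

```python
def dc_prediction(top_refs, left_refs, width, height):
    if width > height:        # wide block: only the upper reference samples
        samples = top_refs
    elif width < height:      # tall block: only the left reference samples
        samples = left_refs
    else:                     # square block: both reference rows
        samples = top_refs + left_refs
    return sum(samples) // len(samples)  # integer average
```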
Fig. 14 shows an example in which prediction samples are generated in a directional intra-prediction mode.
In the case where a directional intra prediction mode is applied to the current block, projection toward the reference samples may be performed from each sample position in the current block according to the angle of the directional intra prediction mode.
If there is a reference sample at the projection position (that is, if the projection position is an integer position of the reference sample), the reference sample at the corresponding position may be set as the prediction sample.
On the other hand, if there is no reference sample at the projection position (i.e., if the projection position is a fractional position of the reference sample), the reference sample around the projection position may be interpolated, and the interpolated value may be set as the prediction sample.
For example, in the example shown in fig. 14, when projection based on the angle of the directional intra prediction mode is performed at the position of the sample B in the current block, the reference sample R3 exists at the projection position. Thus, the reference sample R3 may be set as a prediction sample of the position of the sample B.
On the other hand, when projection based on the angle of the directional intra-prediction mode is performed at the position of sample a in the current block, there is no reference sample at the projection position. In this case, an integer position reference sample existing around the projection position may be interpolated, and the interpolated value may be set as a predicted sample of the position of sample a. Here, the value generated by interpolating the integer position reference sample may be referred to as a fractional position reference sample (r in fig. 14).
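The integer/fractional distinction above can be sketched with linear interpolation between the two surrounding integer-position reference samples; actual codecs typically use longer interpolation filters, so this two-tap version is a simplification.

```python
import math

def directional_sample(refs, proj_pos):
    i = math.floor(proj_pos)
    frac = proj_pos - i
    if frac == 0:
        # Integer projection position: copy the reference sample (sample B case).
        return refs[i]
    # Fractional position: interpolate the neighbouring integer-position
    # samples to form the fractional reference sample r (sample A case).
    return (1 - frac) * refs[i] + frac * refs[i + 1]
```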
As described above, the prediction block for the current block may be generated through inter prediction or intra prediction. At this time, inter prediction may be performed based on at least one of a plurality of inter prediction modes, and the plurality of inter prediction modes include at least one of: a motion information merge mode, a motion vector prediction mode, a template-based motion estimation method, a bi-directional matching method, or a uni-directional matching method.
In the following embodiments, for convenience of description, among inter prediction modes, an inter prediction mode in which a decoder performs motion estimation in the same manner as an encoder to generate a prediction block (i.e., a template-based motion estimation method, a bi-directional matching method, and/or a uni-directional matching method) is referred to as a decoder-side motion estimation mode. Further, among the inter prediction modes, an inter prediction mode in which information generated by motion estimation in an encoder is explicitly encoded and signaled (i.e., a motion information merge mode and/or a motion vector prediction mode) is referred to as a motion information signaling mode.
According to an embodiment of the present disclosure, a prediction block for a current block may be generated by combining two or more inter prediction methods. As an example, a plurality of prediction blocks may be generated based on a plurality of inter prediction methods, and then a final prediction block for the current block may be generated based on the plurality of prediction blocks.
The above-described inter prediction method may be referred to as a combined prediction method.
For convenience of description, in an example to be described later, it is assumed that the number of prediction blocks used to generate the final prediction block is two.
The information indicating whether to apply the combined prediction method may be explicitly encoded and signaled. The information may be a 1-bit flag.
When two different inter prediction modes are combined, information for identifying each of the two inter prediction modes may be additionally encoded/decoded. As an example, two or more of the following may be encoded and signaled: a flag indicating whether to apply template-based motion estimation, a flag indicating whether to apply a bi-directional matching method, a flag indicating whether to apply a uni-directional matching method, and a flag indicating whether to apply a motion information merge mode.
Alternatively, a plurality of combination candidates formed by combining two of the inter prediction modes may be predefined, and an index for identifying one of the plurality of combination candidates may be encoded and signaled.
Hereinafter, the combination prediction method will be described in detail.
Fig. 15 is a flowchart illustrating a method of deriving a prediction block for a current block using a combined prediction method.
First, a first inter prediction mode may be applied to the current block to generate a first prediction block (S1510). Here, the first inter prediction mode may be any one of the following: a motion information merge mode, a motion vector prediction mode, a template-based motion estimation method, a bi-directional matching method, or a uni-directional matching method.
A second inter prediction mode may be applied to the current block to generate a second prediction block (S1520). Here, the second inter prediction mode may be any one of the following, different from the first inter prediction mode: a motion information merge mode, a motion vector prediction mode, a template-based motion estimation method, a bi-directional matching method, or a uni-directional matching method.
Alternatively, one of the first inter prediction mode and the second inter prediction mode may be forcibly set to a decoder-side motion estimation mode, and the other may be forcibly set to a motion information signaling mode.
Fig. 16 shows an example in which a plurality of prediction blocks are generated by combining prediction methods.
The example shown in fig. 16 shows that a template-based motion estimation method is applied in the L0 direction, and a general motion estimation method of searching for a reference block similar to the current block is applied in the L1 direction.
In particular, a template-based motion estimation method may be applied in the L0 direction to generate a first prediction block (i.e., an L0 prediction block) for the current block. Specifically, the reference template having the lowest cost with respect to the current template may be searched for within the search range of the L0 reference picture. After the reference template having the lowest cost is determined, the distance between the current template and the reference template may be set as the L0-direction motion vector.
Further, a reference block adjacent to the reference template may be set as a first prediction block (i.e., L0 prediction block) for the current block.
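The L0-side search can be reduced to a cost minimization over candidate templates. In the sketch below, walking the search range is abstracted into a mapping from candidate motion vectors to the reference template samples found at that displacement, and SAD is assumed as the cost; both are simplifications, not the disclosed procedure.

```python
def template_search(cur_template, ref_templates):
    def sad(a, b):
        # Sum of absolute differences between two flattened sample arrays.
        return sum(abs(x - y) for x, y in zip(a, b))
    # The motion vector of the lowest-cost reference template becomes the
    # L0-direction motion vector of the current block.
    return min(ref_templates, key=lambda mv: sad(cur_template, ref_templates[mv]))
```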
A general motion estimation method may be applied in the L1 direction to generate a second prediction block (i.e., an L1 prediction block) for the current block. Specifically, the reference block having the lowest cost with respect to the current block may be searched for within the search range of the L1 reference picture. After the reference block having the lowest cost is determined, the distance between the current block and the reference block may be set as the L1-direction motion vector.
In addition, the reference block may be set as a second prediction block (i.e., an L1 prediction block) of the current block.
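The template-based search described above can be sketched as follows. This is a simplified illustration, not the normative procedure: it assumes full-sample motion, a SAD cost, a one-row/one-column template, and hypothetical function names.

```python
import numpy as np

def template_cost(ref_pic, cx, cy, cur_template):
    """SAD between the current template and the reference template at (cx, cy)."""
    top = ref_pic[cy - 1, cx:cx + cur_template["top"].size]
    left = ref_pic[cy:cy + cur_template["left"].size, cx - 1]
    return np.abs(top - cur_template["top"]).sum() + np.abs(left - cur_template["left"]).sum()

def template_based_motion_search(cur_pic, ref_pic, x, y, w, h, search_range=4):
    """Search the reference picture for the reference template with the lowest
    cost relative to the current template; the displacement becomes the MV,
    and the reference block adjacent to that template becomes the prediction."""
    cur_template = {"top": cur_pic[y - 1, x:x + w].astype(np.int64),
                    "left": cur_pic[y:y + h, x - 1].astype(np.int64)}
    best = (None, None, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cost = template_cost(ref_pic.astype(np.int64), x + dx, y + dy, cur_template)
            if cost < best[2]:
                best = (dx, dy, cost)
    mvx, mvy, _ = best
    # The reference block adjacent to the best reference template is the prediction block.
    pred = ref_pic[y + mvy:y + mvy + h, x + mvx:x + mvx + w]
    return (mvx, mvy), pred
```

Because both encoder and decoder can evaluate this cost from already-reconstructed samples, the resulting motion vector need not be signaled.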
In the case where the template-based motion estimation method is applied, information indicating that the template-based motion estimation method has been applied is signaled, and motion information (e.g., a motion vector) does not need to be additionally signaled. That is, for the L0 direction, it is sufficient to encode and signal information indicating that the template-based motion estimation method has been applied. The decoder may determine whether a template-based motion estimation method has been applied based on the information. In case it is determined that the template-based motion estimation method has been applied, the decoder may generate the motion vector and/or the first prediction block by the template-based motion estimation method in the same manner as the encoder.
That is, the decoder may set a template-based motion estimation method as the first inter prediction mode and perform template-based motion estimation to generate the first prediction block.
On the other hand, in the case where the general motion estimation method is applied, information generated based on the motion information merge mode or the motion vector prediction mode needs to be explicitly encoded and signaled.
Accordingly, the decoder may set the motion information merge mode or the motion vector prediction mode as the second inter prediction mode and generate the second prediction block according to motion information derived based on information parsed from the bitstream. For example, if the second inter prediction mode is a motion information merge mode, motion information may be derived based on a motion information merge index parsed from the bitstream. Alternatively, if the second inter prediction mode is a motion vector prediction mode, the motion information may be derived based on a motion vector prediction flag and a motion vector difference parsed from the bitstream.
As in the example shown in fig. 16, the combined prediction method may be performed by applying different inter prediction modes to L0 inter prediction and L1 inter prediction.
As another example, bi-directional prediction (i.e., both L0 and L1 prediction) may be performed in the first inter prediction mode, and uni-directional prediction (i.e., L0 or L1 prediction only) may be performed in the second inter prediction mode. Conversely, uni-directional prediction may be performed in the first inter prediction mode, and bi-directional prediction may be performed in the second inter prediction mode.
Alternatively, bi-prediction may be performed in both the first inter prediction mode and the second inter prediction mode.
Alternatively, the prediction direction of the second inter prediction mode may be set according to the prediction direction of the first inter prediction mode. For example, when the first inter prediction mode is applied to L0 direction prediction, the second inter prediction mode may be set to be applied to L1 direction prediction or bi-directional prediction.
Alternatively, the first inter prediction mode and the second inter prediction mode may be selected regardless of the prediction direction.
In the example shown in fig. 16, the L0 reference picture is a forward reference picture whose POC is smaller than that of the current picture, and the L1 reference picture is a backward reference picture whose POC is larger than that of the current picture. As in the illustrated example, the first inter prediction mode may be forced to use the forward reference picture, and the second inter prediction mode may be forced to use the backward reference picture.
As another example, unlike the illustrated example, a forward reference picture may be used in both the L0 direction and the L1 direction, or a backward reference picture may be used in both the L0 direction and the L1 direction.
When the first and second prediction blocks are generated, a final prediction block for the current block may be generated through a weighted sum operation of the first and second prediction blocks (S1530).
The following equation 2 represents an example of generating a final prediction block for a current block through a weighted sum operation.
[ 2]
P[x, y] = P0[x, y] * w0 + P1[x, y] * (1 - w0),
0 ≤ x < W, 0 ≤ y < H.
In equation 2, P indicates a final prediction block for the current block, and P0 and P1 indicate a first prediction block and a second prediction block, respectively. In addition, [ x, y ] represents coordinates of samples in the current block. Further, W represents the width of the current block, and H represents the height of the current block.
In the above example, the weight applied to the first prediction block P0 is w0, and the weight applied to the second prediction block P1 is (1 - w0). Conversely, the weight (1 - w0) may be applied to the first prediction block P0, and the weight w0 may be applied to the second prediction block P1.
In the weighted sum operation, the same weight may be assigned to P0 and P1. That is, by setting w0 to 1/2, the average value of P0 and P1 can be set as the final prediction block.
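Equation 2 can be written directly as a small helper. A minimal sketch, assuming floating-point samples and the default w0 = 1/2 (the averaging case); the function name is illustrative.

```python
import numpy as np

def combine_predictions(p0, p1, w0=0.5):
    """Equation 2: P[x, y] = P0[x, y]*w0 + P1[x, y]*(1 - w0).

    With w0 = 1/2 the final prediction block is simply the average
    of the first and second prediction blocks."""
    return p0 * w0 + p1 * (1.0 - w0)
```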
As another example, the weight assigned to each prediction block may be adaptively set according to an inter prediction mode used to generate each prediction block. As an example, the weight assigned to the prediction block generated in the decoder-side motion estimation mode may be set to a value greater than the weight assigned to the prediction block generated in the motion information signaling mode. In fig. 16, a first prediction block (i.e., an L0 prediction block) is generated by a template-based motion estimation method, and a second prediction block (i.e., an L1 prediction block) is generated by a motion information merge mode or a motion vector prediction mode. Accordingly, the weight assigned to the first prediction block may have a larger value than the weight assigned to the second prediction block.
As another example, the weight may be determined based on a POC of a first reference picture used to generate the first prediction block and a POC of a second reference picture used to generate the second prediction block. As an example, the weight w0 may be determined based on a ratio between an absolute value of a POC difference between the first reference picture and the current picture and an absolute value of a POC difference between the second reference picture and the current picture. A higher weight may be assigned to a prediction block derived from a reference picture having a smaller absolute value of POC difference from the current picture among the first reference picture and the second reference picture.
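One way to realize the POC-based rule is inverse-distance weighting, so that the reference picture closer in POC to the current picture receives the larger weight. This is an illustrative formula, not the only mapping the text permits.

```python
def poc_based_weight(poc_cur, poc_ref0, poc_ref1):
    """Weight w0 for the first prediction block, derived from the ratio of the
    absolute POC differences; the temporally closer reference gets more weight."""
    d0 = abs(poc_cur - poc_ref0)  # |POC difference| for the first reference picture
    d1 = abs(poc_cur - poc_ref1)  # |POC difference| for the second reference picture
    return d1 / (d0 + d1)
```

For example, with the current picture at POC 8, a first reference at POC 7 and a second at POC 12, the first prediction block would receive the larger weight.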
As another example, the weights applied to each prediction block may be determined based on a predefined weight table. Here, the weight table may be a lookup table in which different indexes are assigned to weight candidates that can be set as the weight w 0.
As an example, a lookup table in which different indexes are assigned to five weight candidates may be stored in advance in the encoder and decoder. For example, a weight table [4/8, 5/8, 3/8, 10/8, -2/8] may be predefined, in which the five weight candidates are assigned indexes 0 to 4 in the listed order. Index information identifying the weight candidate equal to w0 may then be explicitly encoded and signaled.
In order to perform integer-based processing, the candidate values in the weight table may also be scaled up by a factor of N before the weighted sum operation is performed on the first prediction block P0 and the second prediction block P1. In this case, normalization needs to be performed by scaling the weighted sum result back down by N.
The number of weight candidates and/or the value of the weight candidates included in the weight table may be adaptively determined based on at least one of an inter prediction mode, a size of the current block, a shape of the current block, a temporal direction of the reference picture, or a temporal order of the reference picture.
For example, in the case of generating the first prediction block based on the decoder-side motion estimation mode, the weight w0 may be selected from N weight candidates. On the other hand, in the case of generating the first prediction block based on the motion information signaling mode, the weight w0 may be selected from M weight candidates. Here, M may be a natural number different from N.
In the above example, the combined prediction method is applied by combining the first inter prediction mode and the second inter prediction mode different from the first inter prediction mode. Unlike the example, a final prediction block for the current block may be generated by generating two prediction blocks based on one inter prediction mode.
For example, in the example shown in fig. 16, the same inter prediction mode may be applied to both the L0 direction and the L1 direction. In particular, the template-based motion estimation method may be applied to both the L0 direction and the L1 direction. To this end, for each of the L0 direction and the L1 direction, information (e.g., flag) indicating whether to apply the template-based motion estimation method may be explicitly encoded and signaled.
Alternatively, the two prediction blocks may be generated by applying a template-based motion estimation method to each of the two forward reference pictures or the two backward reference pictures.
Alternatively, two prediction blocks may be generated by selecting a plurality of motion information merge candidates from the motion information merge list. As an example, the first prediction block may be derived based on first motion information derived from the first motion information merge candidate, and the second prediction block may be derived based on second motion information derived from the second motion information merge candidate.
As described above, in the case where the decoder-side motion estimation mode is included as one of the motion information merge candidates in the motion information merge list, at least one of the first motion information merge candidate or the second motion information merge candidate may be forced to indicate a motion information merge candidate corresponding to the decoder-side motion estimation mode.
In the case where bi-directional prediction is performed by the first inter prediction mode and uni-directional prediction is performed by the second inter prediction mode, a total of three prediction blocks may be generated for the current block. For example, in the case where the first inter prediction mode is a motion information merge mode and the motion information derived based on the motion information merge list is bi-directional, an L0 prediction block and an L1 prediction block may be generated using the motion information. In the case where the second inter prediction mode is a template-based motion estimation method and the template-based motion estimation method is applied only to the L0 direction, one additional L0 prediction block may be generated. In this case, three prediction blocks (i.e., two L0 prediction blocks and one L1 prediction block) may be generated for the current block.
In this case, a prediction block generated by performing a weighted sum operation (or an average operation) on an L0 prediction block and an L1 prediction block generated based on the first inter prediction mode may be set as the first prediction block, an L0 prediction block generated based on the second inter prediction mode may be set as the second prediction block, and then a final prediction block for the current block may be generated by equation 2. Alternatively, a prediction block generated by performing a weighted sum operation (or an average operation) on two L0 prediction blocks may be set as a first prediction block, one L1 prediction block may be set as a second prediction block, and then a final prediction block for the current block may be generated by equation 2.
Even when each of the first inter prediction mode and the second inter prediction mode is applied to bi-directional prediction, a final prediction block for the current block may be generated in the same manner as described above. For example, if the first inter prediction mode is a motion information merge mode and motion information derived based on the motion information merge list has bi-directional motion information, an L0 prediction block and an L1 prediction block may be generated using the motion information. If the second inter prediction mode is a template-based motion estimation method and the template-based motion estimation method is applied to each of the L0 direction and the L1 direction, an L0 prediction block and an L1 prediction block may be generated. In this case, for the current block, four prediction blocks (i.e., two L0 prediction blocks and two L1 prediction blocks) may be generated.
In this case, a prediction block generated by performing a weighted sum operation (or an average operation) on an L0 prediction block and an L1 prediction block generated based on the first inter prediction mode may be set as the first prediction block, and a prediction block generated by performing a weighted sum operation (or an average operation) on an L0 prediction block and an L1 prediction block generated based on the second inter prediction mode may be set as the second prediction block. Alternatively, a prediction block generated by performing a weighted sum operation (or an average operation) on an L0 prediction block generated based on the first inter prediction mode and an L0 prediction block generated based on the second inter prediction mode may be set as the first prediction block, and a prediction block generated by performing a weighted sum operation (or an average operation) on an L1 prediction block generated based on the first inter prediction mode and an L1 prediction block generated based on the second inter prediction mode may be set as the second prediction block.
Alternatively, the combined prediction method may be performed using three or more inter prediction modes. In this case, three or more prediction blocks may be generated for the current block, and a final prediction block for the current block may be generated by performing a weighted sum operation on the three prediction blocks. At this time, each of the three prediction blocks may be generated based on the decoder-side motion estimation mode or the motion information signaling mode.
In this case, the result of the weighted sum (or average) operation performed on the L0 prediction block may be set as the first prediction block, and the result of the weighted sum (or average) operation performed on the L1 prediction block may be set as the second prediction block. Alternatively, the result of the weighted sum (or average) operation performed on the prediction block derived based on the decoder-side motion estimation mode among the plurality of prediction blocks may be set as the first prediction block, and the result of the weighted sum (or average) operation performed on the prediction block derived based on the motion information signaling mode may be set as the second prediction block.
As another example, in the case of generating three or more prediction blocks for the current block, a final prediction block for the current block may be derived based on a weighted sum operation of the three or more prediction blocks. At this time, the weights applied to each prediction block may be determined based on a predefined weight table. To this end, information for identifying one of the weight candidates included in the weight table may be explicitly encoded and signaled.
The template-based motion estimation method may be applied multiple times to generate multiple prediction blocks for the current block. In particular, the template-based motion estimation method may be applied to each of the reference pictures available to the current block, and based thereon, up to N reference blocks may be selected from each reference picture. Here, N is an integer of 1 or greater. For example, if N is 1, as many reference blocks as reference pictures may be generated.
Alternatively, by applying the template-based motion estimation method to each of the reference pictures, M reference blocks may be derived based on the cost relative to the current template. As an example, assume that template-based motion estimation is applied to each of L reference pictures and, as a result, the reference block having the best cost in each reference picture is selected. Among the L reference blocks selected through the above-described procedure, the M reference blocks having the lowest costs relative to the current template may be selected. Here, M is an integer of 1 or greater.
In the above examples, N and/or M may have predefined values in the encoder and decoder. Alternatively, N and/or M may be determined based on at least one of the size of the current block, the shape of the current block, or the number of samples in the current block.
By performing a weighted sum operation on the plurality of reference blocks obtained through the above-described process, a final prediction block for the current block can be obtained. At this time, the weight assigned to each reference block may be determined based on the cost ratio between the reference templates. As an example, if the final prediction block for the current block is obtained by performing a weighted sum operation on two reference blocks, the weight assigned to each of the first and second reference blocks may be determined by a ratio between the cost a of the first reference template around the first reference block and the cost b of the second reference template around the second reference block. For example, the weight of b/(a+b) may be applied to the first reference block, and the weight of a/(a+b) may be applied to the second reference block.
As another example, a ratio between the cost a of the first reference template and the cost b of the second reference template or a difference between the cost a and the cost b may be compared with a threshold value, and weights to be applied to the first reference block and the second reference block may then be determined. For example, if the ratio or difference does not exceed the threshold, the same weight may be applied to the first reference block and the second reference block. Otherwise, different weights may be determined for the first reference block and the second reference block.
If the ratio or difference exceeds a threshold, a reference block having an optimal cost among a plurality of reference blocks may be selected as a prediction block for the current block.
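The cost-ratio weighting with a threshold fallback can be sketched as below. The ratio test (larger cost over smaller cost against a threshold of 2) and the function name are assumptions for illustration; the source also allows comparing the cost difference instead.

```python
def blend_weights_from_costs(cost_a, cost_b, threshold=2.0):
    """Weights for two reference blocks from their reference-template costs.
    If the costs are comparable, use equal weights; otherwise the lower-cost
    block gets the larger weight: b/(a+b) for the first, a/(a+b) for the second."""
    ratio = max(cost_a, cost_b) / max(min(cost_a, cost_b), 1e-9)
    if ratio <= threshold:
        return 0.5, 0.5
    return cost_b / (cost_a + cost_b), cost_a / (cost_a + cost_b)
```

Under the alternative described above, a large ratio could instead trigger selecting only the best-cost reference block rather than blending.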
In the case of generating the final prediction block based on a plurality of prediction blocks, the weighted sum operation may be performed on only some regions of the current block, not the entire region of the current block.
As an example, the final prediction samples included in the first region of the current block may be obtained by a weighted sum operation of the prediction samples included in the first prediction block and the prediction samples included in the second prediction block. On the other hand, the final prediction samples included in the second region of the current block may be set as the prediction samples included in the first prediction block, or may be set as the prediction samples included in the second prediction block. That is, in an area where the weighted sum operation is not performed, a value copied from a reference sample in the first reference picture or the second reference picture may be set as a final prediction sample.
The region to which the weighted sum operation is applied may be determined based on at least one of: distance from a specific boundary of the current block, distance from reconstructed pixels around the current block, size of the current block, shape of the current block, number of samples in the current block, inter prediction mode for obtaining a motion vector of the current block, or whether bi-prediction is applied to the current block.
As another example, information identifying the region where the weighted sum operation is applied may be explicitly encoded and signaled through a bitstream. As an example, the information may identify at least one of the position or angle of a dividing line that separates an area to which a weighted sum operation is applied (hereinafter referred to as a weighted sum area) from other areas (hereinafter referred to as non-weighted sum areas).
Fig. 17 shows various examples in which a current block is segmented according to a segmentation type.
According to the example shown in fig. 17, at least one of the following factors may be determined according to the partition type of the current block.
1) The partition direction (horizontal partition/vertical partition) of the current block;
2) Whether the current block is symmetrically partitioned (whether the partition is a 1:1 partition);
3) The ratio (or type) of weighted sum regions to non-weighted sum regions (whether the ratio between weighted sum regions and non-weighted sum regions is 1:3 or 3:1);
4) The location of the weighted sum region (whether the weighted sum region is a first partition or a second partition).
Each of the four factors described above may be encoded as a 1-bit flag and signaled. That is, the partition type of the current block may be determined by a plurality of flags.
For example, referring to the example in (d) of fig. 17, horizontal division is applied to the current block, and the two partitions have a ratio of 1:3. Therefore, the division direction flag may be set to indicate the horizontal direction, and the symmetric division flag may be set to indicate that the ratio is not 1:1.
Furthermore, the second (i.e., right) of the two partitions is shown designated as the weighted sum region. Therefore, the flag indicating the ratio of the weighted sum region to the non-weighted sum region is set to indicate 3:1, and the flag indicating the position of the weighted sum region is set to indicate the second partition.
Referring to the example shown in (h) of fig. 17, vertical division is applied to the current block, and the two partitions have a ratio of 1:1. Thus, the division direction flag may be set to indicate the vertical direction, and the symmetric division flag may be set to indicate a 1:1 ratio. At this time, since the weighted sum region and the non-weighted sum region have the same size, encoding/decoding of the flag indicating the ratio of the weighted sum region to the non-weighted sum region may be omitted.
Since the second (i.e., right) of the two partitions is designated as the weighted sum region, a flag indicating the position of the weighted sum region is set to indicate the second partition.
As another example, an index indicating one of the plurality of partition type candidates may be encoded and signaled. The plurality of division type candidates may be configured as shown in fig. 17. Alternatively, more or fewer segmentation type candidates than those shown in fig. 17 may be defined.
Encoding/decoding of the information indicating the position of the weighted sum region may be omitted, and the larger (or smaller) of the two partitions may be set as the weighted sum region by default. For example, in the above example, if the current block is divided at a ratio of 1:3 or 3:1, encoding/decoding of the flag indicating the position of the weighted sum region may be omitted, and the larger (or smaller) of the two partitions may be set as the weighted sum region.
In the above examples, the ratio between the weighted sum region and the non-weighted sum region is 1:1, 1:3, or 3:1. Unlike this example, the current block may be partitioned such that the ratio between the weighted sum region and the non-weighted sum region is 1:15, 15:1, 1:31, or 31:1.
Instead of the flag indicating whether the weighted sum area and the non-weighted sum area within the current block are divided in a symmetrical form, information indicating the ratio occupied by the weighted sum area or the non-weighted sum area within the current block may be encoded and signaled, or an index corresponding to the ratio among a plurality of ratio values may be encoded and signaled.
Unlike the example shown in fig. 17, the weighted sum region and the non-weighted sum region may be divided by a diagonal line.
Fig. 18 is an example showing the direction in which a division line for dividing the current block is located, and fig. 19 is an example showing the position of the division line.
In fig. 18, 32 division directions are shown, and different pattern numbers are assigned to the respective division directions. Further, in fig. 19, the division line passing through the center point of the current block is referred to as a division line having a distance of 0, and it is assumed that there is a division line having a distance of 1 to 3 according to the distance from the center line.
The current block may be divided by a division line perpendicular to the division direction shown in fig. 18. At this time, in order to prevent overlap between modes having opposite division directions, a division line having a distance of 0 may be allowed as a candidate for only one of each pair of modes having opposite division directions.
For example, as in the example shown in fig. 19, one of distances 0 to 3 may be selected when mode #4 is selected, and one of distances 1 to 3 may be selected when mode #20 is selected.
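The distance-candidate rule can be sketched as below, assuming 32 modes where opposite directions differ by half the mode count, and assuming (as in the mode #4 / mode #20 example) that the lower-numbered mode of each opposite pair keeps the distance-0 candidate.

```python
def allowed_distances(mode, num_modes=32):
    """Distance candidates for a partition mode. A distance-0 line through the
    block center is identical for two opposite division directions, so only
    one mode of each opposite pair keeps it as a candidate."""
    opposite = (mode + num_modes // 2) % num_modes
    if mode < opposite:   # first of the opposite pair keeps distance 0
        return [0, 1, 2, 3]
    return [1, 2, 3]
```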
As in the examples shown in fig. 18 and 19, the division type of the current block may be determined based on the division direction and the distance of the division line. For this, information indicating the division direction of the current block and information indicating the distance of the division line may be encoded and signaled.
Fig. 20 shows an example in which the current block is divided according to the descriptions of fig. 18 and 19.
When mode #5 in fig. 18 is selected and distance 2 is selected, the current block may be divided into two areas as in the example shown in fig. 20.
Furthermore, information indicating a weighted sum region among two regions may be additionally encoded and signaled. Alternatively, a larger or smaller region of the two regions may be designated as a default weighted sum region.
In the above example, in the weighted sum region, the prediction samples are obtained by the weighted sum operation on the first prediction block and the second prediction block, and in the non-weighted sum region, the prediction samples are obtained using only the first prediction block or the second prediction block.
At this time, information indicating whether the first prediction block or the second prediction block is used to derive the prediction samples of the non-weighted sum region may be explicitly encoded and signaled.
Alternatively, the prediction block used to derive the prediction samples of the non-weighted sum region may be determined according to the priorities of the first prediction block and the second prediction block. Here, the priorities of the first prediction block and the second prediction block may be determined based on at least one of: the temporal distance between the current picture and the reference picture, the temporal direction of the reference picture, the inter prediction mode, the location of the non-weighted sum region, or the cost relative to the current template.
For example, among the first reference picture and the second reference picture, the prediction block derived from the reference picture having the shorter temporal distance (i.e., the smaller absolute value of POC difference) from the current picture may have the higher priority. That is, if the temporal distance between the first reference picture and the current picture is smaller than the temporal distance between the second reference picture and the current picture, the prediction samples of the non-weighted sum region may be derived using the first prediction block.
Alternatively, one of the first prediction block and the second prediction block may be selected in consideration of costs between the current template and the reference template. As an example, the cost between the current template and a first reference template including a reconstructed region around a first prediction block (i.e., a first reference block) and the cost between the current template and a second reference template including a reconstructed region around a second prediction block (i.e., a second reference block) are compared, and then prediction samples of a non-weighted sum region may be derived using prediction blocks adjacent to the reference template having lower costs.
Fig. 21 shows an example of deriving prediction samples in a weighted sum region and a non-weighted sum region.
The prediction samples included in the weighted sum region may be derived by performing a weighted sum operation on the prediction samples included in the first prediction block and the prediction samples included in the second prediction block. For example, as in the example shown in fig. 21, the prediction samples within the weighted sum region may be derived by applying a weight w0 to the prediction samples included in the first prediction block and a weight w1 to the prediction samples included in the second prediction block.
The prediction samples of the non-weighted sum region may be generated by copying the prediction samples included in the first prediction block or the second prediction block. In the example shown in fig. 21, the prediction samples of the non-weighted sum region are generated by copying the prediction samples included in the first prediction block.
Filtering may also be performed at the boundaries of the weighted sum region and the non-weighted sum region. Filtering may be performed for smoothing between the prediction samples included in the weighted sum region and the prediction samples included in the non-weighted sum region.
The filtering may be performed by assigning a first weight to the first prediction samples included in the weighted sum region and a second weight to the second prediction samples included in the non-weighted sum region. Here, the first prediction sample may be generated by performing a weighted sum operation on the prediction samples included in the first prediction block and the second prediction block, and the second prediction sample may be generated by copying the prediction samples included in the first prediction block or the second prediction block.
At this time, the weights assigned to the first prediction sample and the second prediction sample may be adaptively determined according to the positions.
Fig. 22 shows an example in which a filter is applied based on the boundary between the weighted sum region and the non-weighted sum region.
In fig. 22, it is assumed that mode #5 is applied to an 8×8 block. Further, in the illustrated example, the number indicated on each sample represents the weight assigned to the first prediction sample. The weight applied to the second prediction sample may be derived by subtracting the weight assigned to the first prediction sample from a predetermined integer (e.g., the maximum weight value).
In the example shown in fig. 22, it can be seen that the first weight and the second weight are adaptively determined according to the distance from the dividing line and the position of the sample.
After the weight for each location is determined, the final prediction sample may be obtained by a weighted sum operation on the first prediction sample and the second prediction sample. Equations 3 through 5 below represent examples of obtaining filtered prediction samples.
[ 3]
Px [ y ] = (wx [ y ] [ P0[ x ] [ y ] + (W Max -wx [ y ]) P1[ x ] [ y ] + offset) > shift.
[ 4]
shift = Log2(W_Max).
[ 5]
offset = 1 << (shift - 1).
In equation 3, P0 indicates a first prediction sample, and P1 indicates a second prediction sample. w denotes a weight matrix, and W_Max denotes the sum of the maximum value and the minimum value in the weight matrix. For example, referring to the example of fig. 22, W_Max may be set to 8. The shift and offset represent normalization constants. Since W_Max is set to 8 (i.e., 2^3), the value of shift may be set to 3, and the value of offset may be set to 4.
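The blending of equations 3 to 5 may be sketched as follows. This is a minimal Python illustration, not part of the described method itself; the value W_Max = 8 is taken from the example of fig. 22, and the function name is illustrative.

```python
# Illustrative constants: W_MAX = 8 follows the example of fig. 22.
W_MAX = 8                       # sum of the maximum and minimum weights in the matrix
SHIFT = W_MAX.bit_length() - 1  # log2(W_MAX) = 3, as in equation 4
OFFSET = 1 << (SHIFT - 1)       # rounding offset = 4, as in equation 5

def blend_sample(w, p0, p1):
    """Blend a first prediction sample p0 and a second prediction sample p1
    with the weight w taken from the weight matrix (equation 3)."""
    return (w * p0 + (W_MAX - w) * p1 + OFFSET) >> SHIFT
```

With w equal to W_MAX the result copies p0, and with w equal to 0 it copies p1, which matches the behavior at samples far from the dividing line.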
In the above example, the current block is divided into a weighted sum region and a non-weighted sum region, and then different prediction sample derivation methods are set for the respective regions. Unlike the description, as shown in fig. 22, a prediction sample may be derived by a weighted sum operation of the first prediction block and the second prediction block over the entire region of the current block, and weights applied to the first prediction block and the second prediction block may be set differently for each sample position. In this case, the above-described division line may be used as a factor for determining the weight assigned to each sample, instead of a factor for dividing the current block into a weighted sum region and a non-weighted sum region.
To generate motion information for inter prediction, an encoder may perform motion estimation. Further, as described above, in the case where the template-based motion estimation method, the bi-directional matching method, or the uni-directional matching method is applied, the decoder may also perform motion estimation in the same manner as the encoder.
However, as motion estimation processes are applied, the complexity of the encoder and/or decoder increases dramatically. In order to solve such a problem, a method of adaptively adjusting a search range based on characteristics of an area within a picture has been proposed. Hereinafter, the search range adjustment method will be described in detail.
Fig. 23 is a flowchart of a search range adjustment method according to an embodiment of the present disclosure.
The current picture may be divided into a plurality of regions (S2310). Each region may include one or more Coding Tree Units (CTUs). Alternatively, multiple CTUs, multiple CUs, or multiple PUs may be defined as a single region.
After that, motion estimation may be performed for each of the partitioned areas (S2320). Through motion estimation, motion information about each region can be determined.
Thereafter, when the search range of the current block is set, motion information of the region including the current block may be set as initial motion information of the current block (S2330). The initial motion information may be used when setting a search area of the current block. As an example, the search area may be set within a predetermined range from a position identified by a motion vector indicated by the initial motion information within the reference picture indicated by the initial motion information.
Fig. 24 shows an example in which a current picture is divided into a plurality of regions.
In the example shown in fig. 24, the current picture is divided into four regions of equal size, and each region includes 64 CTUs. The CTU may be set to a size of 64×64, 128×128, or 256×256. If the CTU has a size of 64×64, each region shown in fig. 24 has a size of 512×512.
The regions need not be the same size as in the example shown in fig. 24. For example, if each region includes 8×8 CTUs, there may be a case in which less than 8 CTU columns or CTU rows remain in a region adjacent to the right or bottom boundary of the current picture. In this case, the region adjacent to the right or bottom boundary of the current picture may have a size of less than 8×8 (i.e., 64 CTUs).
Alternatively, slices, tiles, or sub-pictures may be set as a single region.
After the current picture is divided into a plurality of regions, motion estimation may be performed for each region. Thus, a reference picture and a motion vector for each region can be determined.
In the example of fig. 24, it is assumed that one divided area has a size of 512×512. Accordingly, motion estimation can be performed on the reference picture in units of 512×512 regions.
At this time, in the case where motion estimation is performed on a region adjacent to the boundary of the current picture, a problem may occur in which the motion estimation of the region is forced to be performed only in specific directions. For example, in the example shown in fig. 24, the motion estimation of region A adjacent to the top boundary and the left boundary of the current picture may be performed only in the right-side direction and the bottom direction, and cannot be performed in the left-side direction and the top direction. Therefore, a problem may occur in which the motion information about the region cannot appropriately reflect the motion characteristics of the blocks belonging to the region.
In order to solve the above-described problem, a block adjacent to the current picture or a block N rows and/or M columns from the boundary of the current picture may be set not to be classified as a specific region.
Fig. 25 shows an example of this.
Fig. 25 illustrates that the remaining CTUs, excluding one CTU line adjacent to the boundary of the current picture, are divided into a plurality of regions. Thus, the motion estimation of region A can be performed not only in the right-side direction and the bottom direction, but also in the left-side direction and the top direction, by as much as the size of one CTU.
When motion estimation is performed on a region, motion information about the region may be set as initial motion information of a block (e.g., a coding unit or a prediction unit) belonging to the region. For example, in performing motion estimation of the coding unit, motion information of a region to which the coding unit belongs may be set as initial motion information. When a reference position within the reference picture is specified based on the initial motion information, a search range of the encoding unit may be set based on the reference position.
Fig. 26 shows an example in which a search range is set around a reference point.
For example, in the example shown in fig. 3, a search range having a width of w0 + w1 and a height of h0 + h1 is set centered on the co-located position (x, y).
In the case where the search range is set around the reference point, w0, w1, h0, and h1 shown in fig. 3 may be used, and the search range set around the reference point may be limited so as not to exceed the search range shown in fig. 3. For example, if the reference point is (x - n, y - m) (where n and m are natural numbers), the search range may have extents of (w0 - n), w1, (h0 - m), and h1 centered on the reference point.
That is, by using the initial motion information, the width and/or height of the search range may be reduced by n and/or m.
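The reduction of the search range by n and/or m may be sketched as follows. This is a hypothetical helper, not part of the disclosure; the clamping to zero is an assumption covering the case where n exceeds w0 or m exceeds h0.

```python
def adjusted_search_range(w0, w1, h0, h1, n, m):
    """Return the (left, right, top, bottom) extents of the search range
    centered on the reference point (x - n, y - m): the left and top
    extents shrink by n and m so the range stays inside the original
    w0/w1/h0/h1 range (clamping to zero is an assumption)."""
    return (max(w0 - n, 0), w1, max(h0 - m, 0), h1)
```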
Instead of performing motion estimation in units of regions, motion estimation may be performed only on the first CTU within a region (e.g., the upper left CTU within the region). In this case, motion information generated as a result of motion estimation performed on the first CTU within the region may be set as initial motion information of a block (e.g., a coding unit or a prediction unit) included within the region. Thus, n and/or m may be set differently depending on whether the initial motion information is generated in units of regions or in units of CTUs.
As another example, motion estimation is performed on a block (e.g., an encoding unit or a prediction unit) belonging to a first CTU in a region, and then at least one of an average value, a minimum value, a maximum value, or a median value of motion vectors of the block may be set as an initial motion vector. In this case, as in the example shown in fig. 3, motion estimation of the block belonging to the first CTU may be performed based on the search range to which the w0, w1, h0, and h1 variables have been applied.
Alternatively, n and/or m may be determined based on: the difference between the motion vector generated by the motion estimation of the region and the motion vector generated by the motion estimation of the first CTU in the region, or the difference between the motion vector generated by the motion estimation of the region and the average, minimum, maximum or median of the motion vectors of the blocks belonging to the first CTU.
Alternatively, the result of motion estimation of neighboring CTUs of the CTU to which the current block belongs may be set as the initial motion information. Here, the neighboring CTUs may include at least one of a left CTU or a top CTU. Alternatively, one of the average value, the minimum value, the maximum value, or the median value of the motion vectors belonging to the coding units adjacent to the CTU may be set as the initial motion vector of the current block.
The initial motion information (e.g., motion information about the region) may be explicitly encoded and signaled to the decoder. When the above-described decoder-side motion estimation mode is applied, the search range may be set based on the initial motion information. For example, when the search range is set by the above-described template-based motion estimation method, bi-directional matching method, or uni-directional matching method, the search range may be set based on the initial motion information.
Alternatively, in the case where the initial motion information is derived based on a previously reconstructed region (e.g., neighboring CTUs), the encoding/decoding of the initial motion information may be omitted. In this case, the decoder may obtain initial motion information by performing motion estimation in the same manner as the encoder.
By allowing the search range to be adaptively adjusted in the decoder-side motion estimation mode, the complexity of the decoder can be reduced and the reduction in coding efficiency can be minimized.
Unlike the above examples, the search range may be adjusted based on the motion characteristics within the picture. The motion characteristics may be derived based on the difference of samples in the current picture and the reference picture.
Fig. 27 shows an example of determining the motion characteristics of the current picture.
The current picture may be divided into a region having strong motion and a region having weak motion. For example, in the example shown in fig. 27 (a), it can be predicted that the background region will be a region with weak motion, and the region where the subject appears will be a region with strong motion.
More specifically, in order to determine the motion characteristics, the current picture and the reference picture may be divided into a plurality of regions.
Fig. 28 shows an example of dividing a current picture and a reference picture into a plurality of regions.
In fig. 28, each region is a block of size N×N. One or more CTUs may be defined as a single region. Alternatively, one or more coding units (or prediction units) may be defined as a single region.
Alternatively, information indicating the size of the region may be explicitly encoded and signaled. For example, in the example shown in fig. 28, when the current picture and the reference picture are divided into square blocks of n×n size, information indicating N may be explicitly encoded and signaled.
Unlike the illustrated example, the motion characteristics may be determined based on slices, tiles, or sub-pictures.
After dividing the current picture and the reference picture into a plurality of regions, differences between co-located samples within the regions are derived. That is, a difference between a sample included in the current picture and a sample at the same position as the sample in the reference picture can be derived.
Thereafter, the standard deviation of the difference value can be derived for each region. By comparing the standard deviation derived for each region with a threshold value, the motion characteristics of the region can be determined. Here, the threshold value may be a value predefined in the encoder and/or decoder. Alternatively, the threshold may be determined based on the bit depth of the current picture. Alternatively, after setting the appropriate threshold in the encoder, information about the threshold may be encoded and signaled to the decoder.
If the standard deviation is smaller than the threshold value, this means that there is almost no movement in the corresponding region. In this case, the region may be classified as a region having weak motion. If the standard deviation is equal to or greater than the threshold, this means that there is significant movement in the region. In this case, the region may be classified as a region having strong motion.
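The classification of a region by comparing the standard deviation of co-located sample differences with a threshold may be sketched as follows. This is an illustrative Python fragment; the function name and the flat sample lists are assumptions, not part of the described method.

```python
import statistics

def classify_region(cur_samples, ref_samples, threshold):
    """Classify a region as having 'strong' or 'weak' motion from the
    standard deviation of the differences between co-located samples of
    the current picture and the reference picture (illustrative only)."""
    diffs = [c - r for c, r in zip(cur_samples, ref_samples)]
    return 'strong' if statistics.pstdev(diffs) >= threshold else 'weak'
```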
Fig. 27 (b) shows an example in which the current picture is classified into a region having strong motion and a region having weak motion according to a comparison result between the standard deviation of each region and a threshold value.
In (b) of fig. 27, a portion expressed in black represents a region having strong motion, and a portion expressed in white represents a region having weak motion.
The search range of motion estimation can be determined by referring to the motion characteristics of each region. In particular, the search range of the current block may be determined based on whether the current block belongs to a region having strong motion or a region having weak motion.
For example, the search range when the current block belongs to the region having strong motion may be greater than the search range when the current block belongs to the region having weak motion.
Even when the decoder-side motion estimation mode is applied, the search range can be adjusted based on the motion characteristics of the current picture. In this case, information representing the motion characteristics of each region may be explicitly encoded and signaled.
When bi-prediction is applied to the current block, an L0 prediction block is generated based on L0 motion information, and an L1 prediction block is generated based on L1 motion information. Thereafter, a prediction block for the current block may be generated by a weighted sum of the L0 prediction block and the L1 prediction block.
At this time, the weight applied to the L0 prediction block and the weight applied to the L1 prediction block may be derived based on the weight list. Specifically, after one of the plurality of weight candidates included in the weight list is selected, the weight applied to the L0 prediction block and the weight applied to the L1 prediction block may be determined based on the selected weight candidate.
Equation 6 shows an example of generating a prediction block by a weighted sum operation performed on an L0 prediction block and an L1 prediction block.
[ 6]
P_bi = (1 - w_f) * P0 + w_f * P1.
In equation 6, P0 indicates an L0 prediction block, and P1 indicates an L1 prediction block. P_bi indicates a prediction block generated by a weighted sum operation performed on the L0 prediction block and the L1 prediction block. w_f represents the weight. The weight w_f may be set as one of a plurality of weight candidates included in the weight list. As an example, in equation 6, the weight w_f is applied to the L1 prediction block, and the weight (1 - w_f) is applied to the L0 prediction block. Conversely, the weight w_f may be applied to the L0 prediction block, and the weight (1 - w_f) may be applied to the L1 prediction block.
The weight w_f may have a real value. In this case, in order to use integer operations instead of real-valued operations, equation 6 above may be modified into equations 7 and 8 below.
[ 7]
M = 1 << prec, Off = 1 << (prec - 1).
[ 8]
P_bi = ((M - w) * P0 + w * P1 + Off) >> prec.
In equations 7 and 8, prec indicates the scaling precision of the integer operation. Hereinafter, prec is assumed to be 3.
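The integer weighted bi-prediction of equations 7 and 8 may be illustrated as follows, assuming prec = 3 as stated above. The constant and function names are illustrative only.

```python
PREC = 3               # scaling precision assumed in the text
M = 1 << PREC          # equation 7: M = 8
OFF = 1 << (PREC - 1)  # equation 7: rounding offset = 4

def bi_predict_sample(w, p0, p1):
    """Integer weighted bi-prediction of one sample per equation 8,
    where p0 is an L0 prediction sample and p1 an L1 prediction sample."""
    return ((M - w) * p0 + w * p1 + OFF) >> PREC
```

With w = 4 the operation reduces to a rounding average of p0 and p1, which corresponds to the case described later where the weight list is not used.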
The weight list may include a plurality of weight candidates. The size of the weight list indicates the number of weight candidates included in the weight list.
An example of a weight list of size 5 is { -2,3,4,5, 10}. Further, an example of a weight list of size 3 is {3,4,5}.
The number and/or types of weight candidates constituting the weight list are not limited to the above examples. Further, a weight candidate having a value of 0 may be included in the weight list.
The size of the weight list may be predefined in the encoder and decoder. Further, the size of the weight list may be determined based on at least one of: whether the POC of the current picture is between the POC of the L0 reference picture and the POC of the L1 reference picture, whether the POC of the current picture is less than or greater than the POC of the L0 reference picture and/or the L1 reference picture, the POC difference between the current picture and the L0 reference picture, or the POC difference between the current picture and the L1 reference picture.
Alternatively, one of a plurality of weight lists may be selected. In this case, index information for identifying one of the plurality of weight lists may be encoded and signaled. Alternatively, one of the plurality of weight lists may be adaptively selected based on at least one of: whether the POC of the current picture is between the POC of the L0 reference picture and the POC of the L1 reference picture, whether the POC of the current picture is less than or greater than the POC of the L0 reference picture and/or the L1 reference picture, the POC difference between the current picture and the L0 reference picture, or the POC difference between the current picture and the L1 reference picture.
The encoder may explicitly encode index information indicating the selected weight candidates in the weight list and transmit it to the decoder. Alternatively, one of the weight candidates may be selected based on at least one of: the size of the current block, the POC of the L0 reference picture, or the POC of the L1 reference picture.
Whether to use the weight list to determine the weights may be determined based on at least one of: the size of the current block, the number of samples included in the current block, the shape of the current block, the POC of the L0 reference picture, or the POC of the L1 reference picture. For example, the weight list may be used to determine the weight only when the number of samples included in the current block is equal to or greater than a threshold value. That is, if the number of samples included in the current block is less than the threshold value, it may be set that the weight list is not used.
If it is determined that the weight list is not used, the prediction block may be derived by averaging the L0 prediction block and the L1 prediction block. For example, in equation 7, if prec is 3, then M is 8. Therefore, when the weight list is not applied, the weight w in equation 8 may be set to 4.
If it is determined that the weight list is not used, signaling of index information indicating one of the weight candidates may be omitted, and the same method (i.e., an averaging operation) may be used in the encoder and the decoder to derive the prediction block.
In the case where the motion information combining mode is applied, the weight of the current block may be derived from the motion information combining candidates. As an example, the weight of the current block may be set to be the same as the motion information merge candidate. For this, the motion information merge candidate may include information on weights, and information on motion vectors, reference picture indexes, and prediction directions.
The motion information of the current block can be corrected in the same way in the encoder and decoder. In particular, if the motion information of the current block is derived based on the decoder-side motion estimation mode or the motion information signaling mode, the derived motion information may be corrected using a template-based motion estimation method, a bi-directional matching method, or a uni-directional matching method.
For example, when L0 motion information and L1 motion information have been generated by a template-based motion estimation method, the L0 motion information and/or the L1 motion information may be corrected by applying a bi-directional matching method or a uni-directional matching method.
Alternatively, when the motion information of the current block has been derived through the motion information merge list, the motion information may be corrected by applying a template-based motion estimation method, a bi-directional matching method, or a uni-directional matching method.
Information indicating whether to correct the motion information may be explicitly encoded and signaled.
Alternatively, whether to correct the motion information may be determined based on a predefined condition. At least one of the following may be used as a predefined condition: whether motion information has been derived based on a motion information merge mode, whether bi-prediction is applied to the current block, whether the width and/or height of the current block is equal to or greater than a threshold, whether the number of samples within the current block is equal to or greater than a threshold, whether the weights applied to the L0 prediction block and the L1 prediction block are the same (i.e., whether an averaging operation has been applied), or whether a luma compensation technique (e.g., weighted prediction) has not been used.
When there are a plurality of predefined conditions, it may be determined to correct the motion information if at least one or all of the plurality of predefined conditions are satisfied.
As another example, whether to correct the motion information may be determined by comparing the cost between the prediction blocks identified by the motion information or the cost between the current template and the reference template with a threshold. As an example, the motion information may be corrected only when the cost between an L0 prediction block (i.e., an L0 reference block) derived based on the L0 motion information and an L1 prediction block (i.e., an L1 reference block) derived based on the L1 motion information is greater than a threshold.
Alternatively, whether to correct the motion information may be determined by comparing the cost between the current template and the reference template with a threshold. The reference template may be determined based on the motion information. For example, the motion information may be corrected only if the cost between the current template and the reference template is greater than a threshold.
In the above example, the threshold may be set based on the size of the current block or the number of samples within the current block. As an example, the number of samples (e.g., width×height) in the current block may be set as a threshold, or N times the number of samples may be set as a threshold. As in an example to be described later, when motion compensation is performed on a sub-block basis, the number of samples in the sub-block or N times the number of samples may be set as a threshold.
Alternatively, the threshold value may be predefined in the encoder and/or decoder.
Alternatively, the threshold may be adaptively determined based on at least one of the bit depth or the color component.
If it is determined to correct the motion information, the motion information may be corrected based on a template-based motion estimation method, a bi-directional matching method, or a uni-directional matching method. At this time, the method for correcting motion information may be predefined in the encoder and decoder. As an example, the motion information may be fixedly corrected using a template-based motion estimation method.
Alternatively, information indicating one of the plurality of decoder-side motion estimation methods may be encoded and signaled. For example, when motion compensation is performed using one of a template-based motion estimation method and a bi-directional matching method, information (e.g., a flag) indicating one of the two methods may be encoded and signaled.
Alternatively, information indicating whether to correct motion information using a template-based motion estimation method and information indicating whether to correct motion information using a bi-directional matching method may be separately encoded and signaled.
Alternatively, the method on which the motion compensation is based may be selected based on at least one of the following: the size of the current block, the shape of the current block, the POC of the current picture, the POC of the L0 reference picture, the POC of the L1 reference picture, or whether bi-prediction is performed. For example, in the case where the motion information derived based on the motion information merge candidate has unidirectional information (e.g., L0 motion information or L1 motion information), the motion information may be corrected by applying a template-based motion estimation method. On the other hand, in the case where the motion information derived based on the motion information merge candidate has bidirectional information (i.e., L0 motion information and L1 motion information), the motion information may be corrected by applying a bidirectional matching method or a unidirectional matching method.
When motion information about the coding unit or the prediction unit has been derived, motion correction may be performed on the coding unit or the prediction unit.
Alternatively, the motion information may be corrected in units of sub-blocks smaller than the coding unit or the prediction unit.
Hereinafter, a process of correcting motion information will be described. The motion information before correction will be referred to as initial motion information.
After deriving the initial motion information, the current block may be partitioned into a plurality of sub-blocks. The size of the sub-block may be predefined in the encoder and decoder. As an example, the current block may be divided into sub-blocks of a size of 4×4 or 8×8.
Alternatively, the size of the sub-block may be adaptively determined based on at least one of the size or shape of the current block.
Thereafter, the motion information may be corrected for each sub-block.
Fig. 29 is a diagram showing an example in which the motion information of each sub-block is corrected.
For convenience of description, it is assumed that the size of the current block is 8×8 and the size of the sub-block is 4×4.
Further, it is assumed that initial motion information is derived based on the motion information merge list, and that an initial motion vector is corrected using a bi-directional matching method.
Based on the initial motion information, a start point within the reference picture may be specified for each sub-block. Specifically, within the reference picture indicated by the initial reference picture index, the position spaced apart from the upper left position of the sub-block by the initial motion vector may be designated as the start point.
The reference region may be set by expanding the boundary of the sub-reference block by a predetermined value based on the sub-reference block having the upper left position as a start point. For example, in the example shown in fig. 29, the reference region is set by expanding boundaries of four sides of the sub-reference block by w0, w1, h0, and h1, respectively.
If w0, w1, h0 and h1 are each 2, the reference region of the 4×4 sub-block may have a size of 8×8.
If the starting point indicated by the motion vector is a fractional position, the reference region may be generated by interpolation.
If the initial motion information includes both L0 motion information and L1 motion information, a reference region may be set for each of the L0 direction and the L1 direction.
Then, within the reference region in the L0 direction and the reference region in the L1 direction, a bi-directional matching method is applied to select a combination of the L0 sub-reference block and the L1 sub-reference block having the optimal cost.
For example, if the size of the sub-block is 4×4 and the size of the reference region is 8×8, costs for a total of 25 combinations of an L0 sub-reference block and an L1 sub-reference block may be calculated. Here, the vector from the start point in the L0 reference picture to the L0 sub-reference block (i.e., the L0 difference vector) and the vector from the start point in the L1 reference picture to the L1 sub-reference block (i.e., the L1 difference vector) may have the same size and opposite directions.
After determining the pair having the lowest cost among the L0 sub-reference block and L1 sub-reference block combinations, the motion vector may be corrected to the positions of the corresponding L0 sub-reference block and L1 sub-reference block.
Thus, in the example of fig. 29, the corrected motion vector can be derived by adding a value between (-2, -2) and (2, 2) to the initial motion vector (x 0, y 0).
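The integer position correction described above may be sketched as follows. This is an illustrative Python fragment assuming w0 = w1 = h0 = h1 = 2 and a 4×4 sub-block, so 25 mirrored displacement candidates are evaluated; SAD is used as the cost function by assumption, since the text only refers to an "optimal cost".

```python
def integer_position_correction(l0_region, l1_region, sub=4, r=2):
    """Search the mirrored displacements (dx, dy) in [-r, r] x [-r, r] and
    return the one whose SAD between the displaced L0 and L1 sub-reference
    blocks is lowest. Each region is a 2-D list of size
    (sub + 2*r) x (sub + 2*r); the returned (dx, dy) is the delta to add
    to the initial motion vector (a sketch, not a normative process)."""
    def sad(dx, dy):
        total = 0
        for y in range(sub):
            for x in range(sub):
                # L0 is displaced by (dx, dy); L1 by the mirror (-dx, -dy),
                # so the two difference vectors have opposite directions.
                a = l0_region[r + y + dy][r + x + dx]
                b = l1_region[r + y - dy][r + x - dx]
                total += abs(a - b)
        return total
    return min(((dx, dy) for dy in range(-r, r + 1) for dx in range(-r, r + 1)),
               key=lambda d: sad(*d))
```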
According to the example shown in fig. 29, as a result of correction of the initial motion vector, the horizontal component (i.e., x-axis component) and the vertical component (i.e., y-axis component) of the motion vector are corrected by an integer between-2 and 2. Thus, such correction may be referred to as integer position correction.
After the integer position correction is performed, the fractional position correction may be additionally performed. Fractional position correction may be performed using a reference block that includes fractional position samples within a reference region.
In this case, the position of the reference block determined to have the optimal cost in the integer position correction process may be set as a start point of the fractional position correction, and the fractional position reference sample may be generated in a predefined unit based on the start point. Here, the predefined units may be 1/16, 1/8, 1/4 or 1/2.
Thereafter, a combination of an L0 sub-reference block and an L1 sub-reference block having the optimal cost is selected from among the L0 sub-reference blocks and L1 sub-reference blocks including fractional position reference samples.
After determining the pair having the lowest cost among the L0 sub-reference block and L1 sub-reference block combinations, the motion vector may be corrected to the positions of the corresponding L0 sub-reference block and L1 sub-reference block.
Whether to perform fractional position correction may be determined based on whether the sub-reference block selected according to the integer position correction is in contact with the boundary of the reference area. That is, whether to correct the fractional position may be determined based on whether the corrected motion vector points to the edge of the reference region. If the corrected motion vector points to an edge of the reference area, the fractional position correction may not be performed. Otherwise, the fractional position correction may be additionally performed.
Here, the edge of the reference region may represent one of a left boundary, a top boundary, a bottom boundary, or a right boundary.
Alternatively, the minimum cost calculated in the integer position correction process may be compared with a threshold to determine whether to perform fractional position correction. As an example, the fractional position correction may be performed only when the smallest cost among the L0 sub-reference block and L1 sub-reference block combinations is equal to or greater than a threshold.
Alternatively, whether to perform the fractional position correction may also be determined in consideration of whether the precision of the initial motion vector is an integer unit or a fractional unit.
Fig. 29 shows correction of an initial motion vector based on a sub-block. Unlike the illustrated example, the initial motion vector may be corrected for the coding unit or the prediction unit.
Furthermore, the initial motion vector may be corrected using a template-based motion estimation method instead of the bi-directional matching method. In this case, correction of the initial motion vector may be performed based on the cost between the current template and the reference template.
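The template-based cost mentioned above can be sketched as a SAD between the current template and each candidate reference template. The flattened pixel lists and the mapping from candidate motion vectors to reference templates are assumptions made to keep the sketch self-contained.

```python
# Hypothetical sketch of template-based motion vector correction: the
# candidate whose reference template best matches the current template
# (the previously reconstructed neighbourhood of the current block) wins.

def template_cost(current_template, reference_template):
    """SAD between the current template and a same-shaped reference template."""
    return sum(abs(c - r) for c, r in zip(current_template, reference_template))

def refine_with_template(current_template, candidate_templates):
    """candidate_templates maps a candidate MV to its reference template
    (an assumption for this sketch). Returns the lowest-cost candidate MV."""
    return min(candidate_templates,
               key=lambda mv: template_cost(current_template,
                                            candidate_templates[mv]))
```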
As another example, after the initial motion vector is corrected for the coding unit or the prediction unit, the corrected motion vector may be re-corrected on a sub-block basis.
Alternatively, the initial motion vector may be corrected for only one of the L0 direction and the L1 direction, and the initial motion vector may not be corrected for the other direction. For example, for the L0 direction, the initial motion vector may be corrected, and then the L0 prediction block is derived using the corrected motion vector. On the other hand, for the L1 direction, the L1 prediction block may be derived using the initial motion vector without correcting the initial motion vector.
As in the above example, correcting a motion vector for only one of the L0 direction and the L1 direction may be applied at the block level or at the sub-block level.
When applying the bi-directional matching method, weights may be applied when calculating the cost between two reference blocks. That is, instead of calculating the cost based on the difference between the L0 reference block and the L1 reference block, the cost may be calculated based on the difference between the L0 reference block to which the first weight has been applied and the L1 reference block to which the second weight has been applied.
For example, in the case where the motion information merge candidate selected from the motion information merge list has bi-directional motion information, weights to be applied to the L0 reference block and the L1 reference block may be determined based on weights derived from the motion information merge candidate.
As an example, if the L0-direction weight (e.g., M-w in Equation 8) derived from the motion information merge candidate is 5 and the L1-direction weight is 3, weights of 5 and 3 are applied to the L0 reference block and the L1 reference block, respectively, to calculate the cost. That is, the cost may be calculated based on the difference between (5×L0_pred) and (3×L1_pred). Here, L0_pred represents the L0 reference block, and L1_pred represents the L1 reference block.
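The weighted cost in the example above can be written out directly. Flattened pixel lists and the specific weight values are illustrative; the weights would in practice come from the motion information merge candidate, as described.

```python
# Weighted bi-directional matching cost: the cost is based on the
# difference between (w0 * L0_pred) and (w1 * L1_pred), per the example
# above. Pixel values here are placeholders.

def weighted_bm_cost(l0_pred, l1_pred, w0, w1):
    """SAD between the weighted L0 and L1 reference blocks."""
    return sum(abs(w0 * p0 - w1 * p1) for p0, p1 in zip(l0_pred, l1_pred))
```

For instance, with weights 5 and 3 as in the text, `weighted_bm_cost([10, 20], [16, 33], 5, 3)` compares (50, 100) against (48, 99) and yields a cost of 3.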
Alternatively, the weights for the L0 reference block and the L1 reference block may be determined based on a weight list. In this case, the weight list used for bi-prediction may be reused as-is. Alternatively, a reduced list compared with the weight list for bi-prediction may be used. A reduced list means a list including fewer weight candidates than the weight list used for bi-prediction.
Alternatively, a weight list separate from the weight list for bi-prediction may be set.
Each of the weight candidates included in the weight list may be applied to the L0 reference block and the L1 reference block. In this case, the motion vector of the current block may be corrected by selecting one of the combinations of the weight candidate, the L0 reference block, and the L1 reference block having the optimal cost.
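The joint selection described above — trying each weight candidate against each (L0, L1) block pair and keeping the combination with the optimal cost — can be sketched as follows. The data layout of the weight list and block-pair candidates is an assumption for illustration.

```python
# Sketch of jointly selecting a weight candidate and an (L0, L1)
# sub-reference block pair with the lowest weighted cost, as described
# above. Input shapes are assumptions, not a normative interface.

def select_weight_and_blocks(weight_list, block_pairs):
    """weight_list: iterable of (w0, w1) candidates.
    block_pairs: iterable of (mv_pair, l0_block, l1_block).
    Returns ((w0, w1), mv_pair) with minimal cost, and that cost."""
    best, best_cost = None, None
    for w0, w1 in weight_list:
        for mv_pair, l0, l1 in block_pairs:
            cost = sum(abs(w0 * a - w1 * b) for a, b in zip(l0, l1))
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, ((w0, w1), mv_pair)
    return best, best_cost
```

The motion vector of the current block is then corrected to the positions belonging to the winning combination.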
Of the weight candidates included in the weight list, only some candidates may be set to be available.
A weight may also be used when the motion vector is corrected with a template-based motion estimation method instead of the bi-directional matching method. In this case, the first weight applied to the L0 reference block and the second weight applied to the L1 reference block may be applied to the reference template and the current template, respectively.
Applying the embodiments described with a focus on the decoding process or the encoding process to the encoding process or the decoding process, respectively, is within the scope of the present disclosure. Modifying the embodiments described in a predetermined order into an order different from the described order is also included in the scope of the present disclosure.
Although the above disclosure has been described based on a series of steps or flowcharts, this does not limit the temporal order of the disclosure, and the steps may be performed simultaneously or in a different order as needed. In addition, each of the components (e.g., units, modules, etc.) constituting the block diagrams in the above disclosure may be implemented as a hardware device or software, and a plurality of components may be combined into a single hardware device or software. The above disclosure may be implemented in the form of program instructions executable by various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROMs, RAMs, and flash memories. The hardware device may be configured to operate as one or more software modules to perform the processing according to the present disclosure, and the software may be configured to operate as one or more hardware modules to perform the processing according to the present disclosure.
INDUSTRIAL APPLICABILITY
The present disclosure may be applied to computing or electronic devices that may encode/decode video signals.

Claims (15)

1. A video decoding method, the method comprising:
obtaining a first prediction block for the current block based on the first inter prediction mode;
obtaining a second prediction block for the current block based on a second inter prediction mode; and
obtaining a final prediction block for the current block based on the first prediction block and the second prediction block.
2. The video decoding method of claim 1, wherein at least one of the first inter-prediction mode or the second inter-prediction mode is a decoder-side motion estimation mode in which a decoder performs motion estimation in the same manner as an encoder using a previously reconstructed reference picture.
3. The video decoding method of claim 2, wherein the motion estimation comprises: searching for a combination having an optimal cost among combinations of a current template and reference templates of the same size as the current template within the reference picture, the current template including a previously reconstructed region around the current block.
4. The video decoding method of claim 3, wherein the motion estimation is performed on each of the reference pictures in the reference picture list having a reference picture index less than a threshold.
5. The video decoding method of claim 3, wherein the motion estimation is performed for each of the reference pictures in the reference picture list whose output order difference relative to the current picture is equal to or less than a threshold.
6. The video decoding method of claim 3, wherein the reference template is searched within a search range set in the reference picture, and the search range is set based on initial motion information of the current block.
7. The video decoding method of claim 6, wherein the initial motion information is motion information about a region larger than the current block.
8. The video decoding method of claim 3, wherein the reference template is searched within a search range set in the reference picture, the search range is determined based on a motion characteristic of a region including the current block, and the motion characteristic of the region is set to one of a region having strong motion and a region having weak motion.
9. The video decoding method of claim 2, wherein the motion estimation comprises: a process of searching for a combination having an optimal cost among combinations of an L0 reference block included in the L0 reference picture and an L1 reference block included in the L1 reference picture.
10. The video decoding method of claim 9, wherein the output order of the current picture is between the output order of the L0 reference picture and the output order of the L1 reference picture.
11. The video decoding method of claim 1, wherein the first inter-prediction mode is used for L0-direction prediction of the current block, and the second inter-prediction mode is used for L1-direction prediction of the current block.
12. The video decoding method of claim 1, wherein the final prediction block is derived based on a weighted sum operation of the first prediction block and the second prediction block, and a first weight assigned to the first prediction block and a second weight assigned to the second prediction block during the weighted sum operation are adaptively determined depending on a type of the first inter prediction mode or the second inter prediction mode.
13. The video decoding method of claim 12, wherein the first weight has a value greater than the second weight if the first inter prediction mode is a decoder-side motion estimation mode and the second inter prediction mode is a motion information signaling mode.
14. A video encoding method, the method comprising:
obtaining a first prediction block for the current block based on the first inter prediction mode;
obtaining a second prediction block for the current block based on a second inter prediction mode; and
obtaining a final prediction block for the current block based on the first prediction block and the second prediction block.
15. A computer-readable recording medium recording a bitstream generated by a video encoding method, the video encoding method comprising:
obtaining a first prediction block for the current block based on the first inter prediction mode;
obtaining a second prediction block for the current block based on a second inter prediction mode; and
obtaining a final prediction block for the current block based on the first prediction block and the second prediction block.
CN202280062043.5A 2021-09-15 2022-09-15 Video signal encoding/decoding method and recording medium storing bit stream Pending CN117941356A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR10-2021-0123305 2021-09-15
KR10-2022-0003541 2022-01-10
KR20220044068 2022-04-08
KR10-2022-0044068 2022-04-08
PCT/KR2022/013787 WO2023043223A1 (en) 2021-09-15 2022-09-15 Video signal encoding/decoding method and recording medium having bitstream stored therein

Publications (1)

Publication Number Publication Date
CN117941356A true CN117941356A (en) 2024-04-26

Family

ID=90756103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280062043.5A Pending CN117941356A (en) 2021-09-15 2022-09-15 Video signal encoding/decoding method and recording medium storing bit stream

Country Status (1)

Country Link
CN (1) CN117941356A (en)


Legal Events

Date Code Title Description
PB01 Publication