CN110740321B - Motion prediction based on updated motion vectors - Google Patents

Motion prediction based on updated motion vectors

Info

Publication number
CN110740321B
CN110740321B (application CN201910663403.7A)
Authority
CN
China
Prior art keywords
motion vector
block
motion
updated
reference motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910663403.7A
Other languages
Chinese (zh)
Other versions
CN110740321A (en)
Inventor
刘鸿彬
张莉
张凯
王悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd, ByteDance Inc
Publication of CN110740321A
Application granted
Publication of CN110740321B
Legal status: Active
Anticipated expiration

Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/186 Adaptive coding characterised by the coding unit being a colour or a chrominance component
    • H04N19/436 Implementation details or hardware specially adapted for video compression or decompression, using parallelised computational arrangements
    • H04N19/503 Predictive coding involving temporal prediction
    • H04N19/513 Processing of motion vectors
    • H04N19/52 Processing of motion vectors by predictive encoding
    • H04N19/56 Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • H04N19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/583 Motion compensation with overlapping blocks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video processing method, comprising: determining that a current block is associated with a first reference motion vector and a second reference motion vector; generating an updated first reference motion vector and an updated second reference motion vector based on a sum of the scaled first motion refinement and the first reference motion vector and a sum of the scaled first motion refinement and the second reference motion vector, respectively, wherein the first motion refinement is derived based on a bi-directional optical flow mode; and performing a conversion between the current block and a bitstream representation of video data comprising the current block based on the updated first reference motion vector and the updated second reference motion vector.

Description

Motion prediction based on updated motion vectors
Cross Reference to Related Applications
The present application claims priority to and benefit of international patent application No. PCT/CN2018/096384, entitled "Motion prediction based on updated motion vectors" and filed on July 20, 2018, in accordance with applicable patent laws and/or rules under the Paris Convention. The entire disclosure of international patent application No. PCT/CN2018/096384 is incorporated by reference as part of the disclosure of the present application.
Technical Field
This document relates to video encoding and video decoding techniques, devices and systems.
Background
Despite advances in video compression, digital video still accounts for the largest share of bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video grows, the bandwidth demand for digital video usage is expected to continue to grow.
Disclosure of Invention
Devices, systems, and methods related to digital video coding, and in particular to motion prediction based on updated motion vectors, are described. The described methods may be applied to existing video coding standards, such as High Efficiency Video Coding (HEVC), and to future video coding standards or video codecs.
In one representative aspect, the disclosed techniques may be used to provide a method of video encoding. The method includes determining that a current block is associated with a first reference motion vector and a second reference motion vector; generating an updated first reference motion vector and an updated second reference motion vector based on a sum of the scaled first motion refinement and the first reference motion vector and a sum of the scaled first motion refinement and the second reference motion vector, respectively, wherein the first motion refinement is derived based on a bi-directional optical flow mode; and performing a conversion between the current block and a bitstream representation of video data comprising the current block based on the updated first reference motion vector and the updated second reference motion vector.
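Purely as an illustration of this update step, the following Python sketch adds a scaled motion refinement to the two reference motion vectors. The field names, the helper function, and the per-list scaling factors are hypothetical; the derivation of the refinement itself from the bi-directional optical flow mode is not shown.

```python
from typing import NamedTuple

class MotionVector(NamedTuple):
    x: int  # horizontal component, in fractional-pel units (assumption)
    y: int  # vertical component, in fractional-pel units (assumption)

def update_reference_mvs(ref_mv0, ref_mv1, refinement, scale0, scale1):
    """Add a scaled motion refinement to both reference motion vectors.

    `refinement` is the motion refinement derived from the bi-directional
    optical flow mode; `scale0` and `scale1` are illustrative scaling
    factors (for example, related to the temporal distances of the two
    reference pictures).
    """
    updated_mv0 = MotionVector(ref_mv0.x + scale0 * refinement.x,
                               ref_mv0.y + scale0 * refinement.y)
    updated_mv1 = MotionVector(ref_mv1.x + scale1 * refinement.x,
                               ref_mv1.y + scale1 * refinement.y)
    return updated_mv0, updated_mv1

# Example: refinement (1, -2) with illustrative scales +1 and -1.
mv0, mv1 = update_reference_mvs(MotionVector(4, 8), MotionVector(-4, -8),
                                MotionVector(1, -2), 1, -1)
```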
In another representative aspect, the disclosed techniques may be used to provide another method for processing video data. The method includes receiving a bitstream representation of a current block of video data and generating the current block by selectively processing the bitstream representation using Overlapped Block Motion Compensation (OBMC), based on characteristics of the current block and without signaling an OBMC flag.
In yet another representative aspect, the above-described methods are embodied in the form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, an apparatus configured or operable to perform the above-described method is disclosed. The apparatus may include a processor programmed to implement the method.
In yet another representative aspect, a video decoder device may implement a method as described herein.
The above aspects and features and other aspects and features of the disclosed technology are described in more detail in the accompanying drawings, the description and the claims.
Drawings
Fig. 1 shows an example of building a Merge candidate list.
Fig. 2 shows an example of the positions of spatial candidates.
Fig. 3 shows an example of a candidate pair subjected to a redundancy check of a spatial Merge candidate.
Fig. 4A and 4B illustrate examples of the location of the second prediction unit PU based on the size and shape of the current block.
Fig. 5 shows an example of motion vector scaling for the temporal Merge candidate.
Fig. 6 shows an example of candidate positions for the temporal Merge candidate.
Fig. 7 shows an example of generating combined bidirectional predictive Merge candidates.
Fig. 8 shows an example of constructing a motion vector prediction candidate.
Fig. 9 shows an example of motion vector scaling for spatial motion vector candidates.
Fig. 10 shows an example of motion prediction using an Alternative Temporal Motion Vector Prediction (ATMVP) algorithm for a Coding Unit (CU).
Fig. 11 shows an example of a Coding Unit (CU) having sub-blocks and neighboring blocks used by the spatial-temporal motion vector prediction (STMVP) algorithm.
Fig. 12A and 12B show example snapshots of sub-blocks when using the Overlapped Block Motion Compensation (OBMC) algorithm.
Fig. 13 shows an example of neighboring samples used to derive parameters for a Local Illumination Compensation (LIC) algorithm.
FIG. 14 shows an example of a simplified affine motion model.
Fig. 15 shows an example of an affine Motion Vector Field (MVF) of each sub-block.
Fig. 16 shows an example of Motion Vector Prediction (MVP) for the AF_INTER affine motion mode.
Fig. 17A and 17B show example candidates for the AF_MERGE affine motion mode.
Fig. 18 shows an example of bilateral matching in pattern matched motion vector derivation (PMMVD) mode, which is a special Merge mode based on the Frame Rate Up Conversion (FRUC) algorithm.
Fig. 19 shows an example of template matching in the FRUC algorithm.
Fig. 20 shows an example of uni-directional motion estimation in the FRUC algorithm.
FIG. 21 shows an example of an optical flow trace used by a bi-directional optical flow (BIO) algorithm.
FIGS. 22A and 22B show example snapshots using a bi-directional optical flow (BIO) algorithm without block expansion.
Fig. 23 shows an example of a decoder-side motion vector refinement (DMVR) algorithm based on two-sided template matching.
Fig. 24 shows an example of a template definition used in transform coefficient context modeling.
FIG. 25 shows examples of inner and boundary sub-blocks in a PU/CU.
Fig. 26 illustrates a flow diagram of an example method for video encoding in accordance with the presently disclosed technology.
Fig. 27 shows a flow diagram of another example method for video encoding in accordance with the presently disclosed technology.
Fig. 28 is a block diagram of an example of a hardware platform for implementing the visual media decoding or visual media encoding techniques described in this document.
Detailed Description
Due to the increasing demand for higher-resolution video, video encoding methods and techniques are ubiquitous in modern technology. Video codecs typically include electronic circuits or software that compress or decompress digital video, and they are continually being improved to provide higher coding efficiency. A video codec converts uncompressed video into a compressed format and vice versa. There are complex relationships between video quality, the amount of data used to represent the video (determined by the bit rate), the complexity of the encoding and decoding algorithms, the susceptibility to data loss and errors, the ease of editing, random access, and end-to-end delay (latency). The compressed format usually conforms to a standard video compression specification, such as the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard under development, or other current and/or future video coding standards.
Embodiments of the disclosed techniques may be applied to existing video coding standards (e.g., HEVC, h.265) and future standards to improve compression performance. Section headings are used in this document to improve the readability of the description, and do not limit the discussion or the embodiments (and/or implementations) in any way to only the corresponding sections.
1. Example of inter prediction in HEVC/H.265
Video coding standards have improved significantly over the years and now provide, in part, high coding efficiency and support for higher resolutions. Recent standards such as HEVC/H.265 are based on a hybrid video coding structure that utilizes temporal prediction plus transform coding.
1.1 example of inter prediction
Each inter-predicted PU (prediction unit) has motion parameters for one or two reference picture lists. In some embodiments, the motion parameters include a motion vector and a reference picture index. In other embodiments, the usage of one of the two reference picture lists may also be signaled using inter_pred_idc. In still other embodiments, motion vectors may be explicitly coded as deltas relative to predictors.
When a CU is encoded using skip mode, one PU is associated with the CU and there are no significant residual coefficients, no coded motion vector delta, and no reference picture index. A Merge mode is specified whereby the motion parameters for the current PU are obtained from neighboring PUs, including spatial and temporal candidates. The Merge mode can be applied to any inter-predicted PU, not only to the skip mode. The alternative to Merge mode is the explicit transmission of motion parameters, where the motion vector, the corresponding reference picture index for each reference picture list, and the reference picture list usage are signaled explicitly per PU.
When the signaling indicates that one of the two reference picture lists is to be used, the PU is generated from one block of samples. This is called "uni-prediction". Uni-prediction may be used for P slices and B slices.
When the signaling indicates that two reference picture lists are to be used, the PU is generated from two blocks of samples. This is called "bi-prediction". Bi-prediction can only be used for B slices.
1.1.1 example of constructing candidates for Merge mode
When predicting a PU using the Merge mode, an index pointing to an entry in a Merge candidate list (Merge candidates list) is parsed from the bitstream, and the index is used to retrieve motion information. The construction of this list can be summarized according to the following sequence of steps:
step 1: initial candidate derivation
Step 1.1: spatial candidate derivation
Step 1.2: redundancy check of spatial candidates
Step 1.3: temporal candidate derivation
Step 2: additional candidate insertions
Step 2.1: creating bi-directional prediction candidates
Step 2.2: inserting zero motion candidates
Fig. 1 shows an example of constructing a Merge candidate list based on the sequence of steps summarized above. For spatial Merge candidate derivation, a maximum of four Merge candidates are selected among candidates located at five different positions. For temporal Merge candidate derivation, a maximum of one Merge candidate is selected from two candidates. Since a constant number of candidates per PU is assumed at the decoder, additional candidates are generated when the number of candidates does not reach the maximum number of Merge candidates (MaxNumMergeCand) signaled in the slice header. Since the number of candidates is constant, the index of the best Merge candidate is encoded using truncated unary binarization (TU). If the size of the CU is equal to 8, all PUs of the current CU share a single Merge candidate list, which is identical to the Merge candidate list of the 2N×2N prediction unit.
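As a rough illustration of this construction order, the following Python sketch uses hypothetical candidate containers and a simplified redundancy check (the real process only compares selected candidate pairs); it is not the normative process.

```python
def build_merge_candidate_list(spatial_cands, temporal_cands, max_num_merge_cand):
    """Sketch of the Merge candidate list construction order summarized above."""
    merge_list = []

    # Steps 1.1/1.2: spatial candidates with a redundancy check (at most four).
    for cand in spatial_cands:
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)
        if len(merge_list) == 4:
            break

    # Step 1.3: at most one temporal candidate.
    for cand in temporal_cands:
        if cand is not None:
            merge_list.append(cand)
            break

    # Step 2.1: combined bi-predictive candidates (B slices only) would be
    # inserted here; omitted in this sketch.

    # Step 2.2: pad with zero-motion candidates up to the signalled maximum.
    ref_idx = 0
    while len(merge_list) < max_num_merge_cand:
        merge_list.append(("zero_mv", ref_idx))
        ref_idx += 1
    return merge_list[:max_num_merge_cand]
```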
1.1.2 construction of spatial Merge candidates
In the derivation of spatial Merge candidates, a maximum of four Merge candidates are selected among candidates located at the positions depicted in Fig. 2. The order of derivation is A1, B1, B0, A0 and B2. Position B2 is considered only when any PU at positions A1, B1, B0, A0 is not available (e.g., because it belongs to another slice or tile) or is intra-coded. After the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check, which ensures that candidates with the same motion information are excluded from the list, thereby improving coding efficiency.
In order to reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with arrows in Fig. 3 are considered, and a candidate is added to the list only if the corresponding candidate used for the redundancy check does not have the same motion information. Another source of duplicate motion information is the "second PU" associated with partitions other than 2N×2N. As an example, Figs. 4A and 4B depict the second PU for the N×2N and 2N×N cases, respectively. When the current PU is partitioned as N×2N, the candidate at position A1 is not considered for list construction. In some embodiments, adding this candidate may lead to two prediction units having the same motion information, which is redundant to having just one PU in the coding unit. Similarly, position B1 is not considered when the current PU is partitioned as 2N×N.
1.1.3 construction of temporal Merge candidates
In this step, only one candidate is added to the list. In particular, in the derivation of the temporal Merge candidate, the scaled motion vector is derived based on a co-located (co-located) PU belonging to the picture having the smallest POC difference with respect to the current picture within the given reference picture list. The derived reference picture list for the co-located PU is explicitly signaled in the slice header.
Fig. 5 shows an example of derivation of a scaled motion vector for a temporal Merge candidate (e.g., dashed line) scaled from the motion vector of a co-located PU using POC distances tb and td, where tb is defined as the POC difference between a reference picture of a current picture and the current picture, and td is defined as the POC difference between a reference picture of a co-located picture and the co-located picture. The reference picture index of the temporal Merge candidate is set equal to zero. For B slices, two motion vectors, one for reference picture list 0 and one for reference picture list 1, are obtained and combined to produce a bi-predictive Merge candidate.
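The POC-distance scaling just described can be sketched as follows; this is a simple floating-point approximation with a hypothetical function name, whereas the normative process uses fixed-point arithmetic with clipping.

```python
def scale_temporal_mv(mv, tb, td):
    """Scale the co-located PU's MV by the ratio of POC distances tb/td.

    tb: POC difference between the reference picture of the current picture
        and the current picture.
    td: POC difference between the reference picture of the co-located
        picture and the co-located picture.
    """
    if td == 0:
        return mv
    return (round(mv[0] * tb / td), round(mv[1] * tb / td))

# Example: co-located MV (16, -8) with tb = 2 and td = 4 scales to (8, -4).
print(scale_temporal_mv((16, -8), tb=2, td=4))
```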
As shown in Fig. 6, within the co-located PU (Y) belonging to the reference frame, the position for the temporal candidate is selected between candidates C0 and C1. If the PU at position C0 is not available, is intra-coded, or is outside the current CTU, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal Merge candidate.
1.1.4 construction of additional types of Merge candidates
In addition to the spatial-temporal Merge candidates, there are two additional types of Merge candidates: the combined bi-predictive Merge candidate and the zero Merge candidate. Combined bi-predictive Merge candidates are generated by utilizing the spatial-temporal Merge candidates, and they are used for B slices only. A combined bi-predictive candidate is generated by combining the first-reference-picture-list motion parameters of an initial candidate with the second-reference-picture-list motion parameters of another candidate. If these two tuples provide different motion hypotheses, they form a new bi-predictive candidate.
Fig. 7 shows an example of this process in which two candidates with mvL0 and refIdxL0 or mvL1 and refIdxL1 in the original list (710, on the left) are used to create a combined bipredictive Merge candidate that is added to the final list (720, on the right).
Zero motion candidates are inserted to fill the remaining entries in the Merge candidate list and thereby reach the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index that starts from zero and increases each time a new zero motion candidate is added to the list. The number of reference frames used by these candidates is one and two for uni-directional and bi-directional prediction, respectively. In some embodiments, no redundancy check is performed on these candidates.
1.1.5 examples of motion estimation regions for parallel processing
To speed up the encoding process, motion estimation may be performed in parallel, whereby the motion vectors for all prediction units inside a given region are derived simultaneously. The derivation of Merge candidates from spatial neighbors may interfere with parallel processing, because one prediction unit cannot derive motion parameters from an adjacent PU until its associated motion estimation is completed. To mitigate the trade-off between coding efficiency and processing latency, a Motion Estimation Region (MER) may be defined. The size of the MER may be signaled in the Picture Parameter Set (PPS) using the "log2_parallel_merge_level_minus2" syntax element. When a MER is defined, Merge candidates falling into the same region are marked as unavailable and are therefore not considered in the list construction.
1.2 example of Advanced Motion Vector Prediction (AMVP)
AMVP exploits the spatial-temporal correlation of motion vectors with neighboring PUs, and it is used for explicit transmission of motion parameters. A motion vector candidate list is constructed by first checking the availability of the left and above temporally neighboring PU positions, removing redundant candidates, and adding zero vectors to make the candidate list a constant length. The encoder can then select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. Similarly to Merge index signaling, the index of the best motion vector candidate is encoded using truncated unary. The maximum value to be encoded in this case is 2 (see Fig. 8). In the following sections, details about the derivation process of motion vector prediction candidates are provided.
1.2.1 example of constructing motion vector prediction candidates
Fig. 8 summarizes the derivation process for motion vector prediction candidates and may be implemented for each reference picture list with refidx as input.
In motion vector prediction, two types of motion vector candidates are considered: spatial motion vector candidates and temporal motion vector candidates. As previously shown in fig. 2, for spatial motion vector candidate derivation, two motion vector candidates are finally derived based on the motion vectors of each PU located at five different positions.
For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates, which are derived based on two different co-located positions. After the first list of spatial-temporal candidates is made, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatial-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list.
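A simplified sketch of this AMVP list construction is shown below; the inputs and function name are hypothetical, and the removal of candidates with a reference picture index larger than 1 is omitted for brevity.

```python
def build_amvp_candidate_list(spatial_cands, temporal_cands, max_cands=2):
    """Sketch of the AMVP candidate list construction described above.

    Each candidate is an (mv_x, mv_y) tuple or None if unavailable.
    """
    amvp_list = []
    for cand in list(spatial_cands) + list(temporal_cands):
        if cand is not None and cand not in amvp_list:   # drop duplicates
            amvp_list.append(cand)
    amvp_list = amvp_list[:max_cands]                    # keep at most two
    while len(amvp_list) < max_cands:                    # pad with zero MVs
        amvp_list.append((0, 0))
    return amvp_list

# Example: one spatial candidate, one duplicate, and one temporal candidate.
print(build_amvp_candidate_list([(4, 0), (4, 0)], [(2, 2)]))
```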
1.2.2 construction of spatial motion vector candidates
In the derivation of spatial motion vector candidates, a maximum of two candidates are considered among five potential candidates, which are derived from PUs located at the positions previously shown in Fig. 2; those positions are the same as those of the motion Merge. The order of derivation for the left side of the current PU is defined as A0, A1, scaled A0, scaled A1. The order of derivation for the above side of the current PU is defined as B0, B1, B2, scaled B0, scaled B1, scaled B2. For each side there are therefore four cases that can be used as motion vector candidates, with two cases not requiring spatial scaling and two cases using spatial scaling. The four different cases are summarized below.
No spatial scaling
(1) Same reference picture list, and same reference picture (same POC)
(2) Different reference picture lists, but the same reference picture (same POC)
Spatial scaling
(3) Same reference picture list, but different reference pictures (different POCs)
(4) Different reference picture lists, and different reference pictures (different POCs)
The case of no spatial scaling is checked first, followed by the case of allowing spatial scaling. Regardless of the reference picture list, spatial scaling is considered when POC is different between the reference pictures of the neighboring PU and the reference pictures of the current PU. If all PUs of the left candidate are not available or are intra coded, scaling of the above motion vector is allowed to aid in the parallel derivation of the left and above MV candidates. Otherwise, no spatial scaling is allowed for the upper motion vectors.
As shown in the example in fig. 9, for the spatial scaling case, the motion vectors of neighboring PUs are scaled in a similar manner as the temporal scaling. One difference is that the reference picture list and index of the current PU are given as input; the actual scaling procedure is the same as the time scaling procedure.
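The four cases can be illustrated with a small helper function; the argument names are hypothetical and floating-point scaling is used in place of the fixed-point procedure.

```python
def get_spatial_mv_candidate(neigh_mv, neigh_ref_poc, cur_ref_poc, cur_poc,
                             allow_scaling=True):
    """Classify one neighbouring PU's MV into the four cases listed above.

    Cases (1)/(2): the reference POCs match, so the MV is used without
    scaling.  Cases (3)/(4): the reference POCs differ, so the MV is scaled
    by the ratio of POC distances, like a temporal candidate.
    """
    if neigh_ref_poc == cur_ref_poc:          # cases (1) and (2): no scaling
        return neigh_mv
    if not allow_scaling:                      # scaling of above MVs may be
        return None                            # restricted, as described above
    tb = cur_poc - cur_ref_poc                 # cases (3) and (4): scale
    td = cur_poc - neigh_ref_poc
    if td == 0:
        return neigh_mv
    return (round(neigh_mv[0] * tb / td), round(neigh_mv[1] * tb / td))

# Example: neighbour references POC 0, current PU references POC 4, current POC 8.
print(get_spatial_mv_candidate((8, -4), 0, 4, 8))   # -> (4, -2)
```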
1.2.3 constructing temporal motion vector candidates
Apart from the reference picture index derivation, all processes for the derivation of the temporal motion vector candidate are the same as for the derivation of the temporal Merge candidate (as shown in the example in Fig. 6). In some embodiments, the reference picture index is signaled to the decoder.
2. Example of inter-frame prediction method in Joint Exploration Model (JEM)
In some embodiments, future video coding techniques are explored using reference software known as the Joint Exploration Model (JEM). In JEM, sub-block based prediction is adopted in several coding tools, such as affine prediction, Alternative Temporal Motion Vector Prediction (ATMVP), spatial-temporal motion vector prediction (STMVP), bi-directional optical flow (BIO), Frame Rate Up Conversion (FRUC), Locally Adaptive Motion Vector Resolution (LAMVR), Overlapped Block Motion Compensation (OBMC), Local Illumination Compensation (LIC), and decoder-side motion vector refinement (DMVR).
2.1 example of sub-CU-based motion vector prediction
In a JEM with a quadtree plus binary tree (QTBT), each CU may have at most one set of motion parameters for each prediction direction. In some embodiments, two sub-CU level motion vector prediction methods are considered in the encoder by dividing the large CU into sub-CUs and deriving motion information for all sub-CUs of the large CU. The Alternative Temporal Motion Vector Prediction (ATMVP) method allows each CU to obtain multiple sets of motion information from multiple blocks smaller than the current CU in the co-located reference picture. In the spatial-temporal motion vector prediction (STMVP) method, a motion vector of a sub-CU is recursively derived by using a temporal motion vector predictor and a spatial neighboring motion vector. In some embodiments, and in order to preserve more accurate motion fields for sub-CU motion prediction, motion compression of the reference frame may be disabled.
2.1.1 example of Alternative Temporal Motion Vector Prediction (ATMVP)
In the ATMVP method, a Temporal Motion Vector Prediction (TMVP) method is modified by acquiring a plurality of sets of motion information (including a motion vector and a reference index) from a block smaller than a current CU.
Fig. 10 shows an example of the ATMVP motion prediction process for CU 1000. The ATMVP method predicts the motion vector of sub-CU 1001 within CU 1000 in two steps. The first step is to identify the corresponding block 1051 in the reference picture 1050 using the temporal vector. The reference picture 1050 is also referred to as a motion source picture. The second step is to divide the current CU 1000 into sub-CUs 1001 and obtain a motion vector and a reference index for each sub-CU from the block corresponding to each sub-CU.
In the first step, the reference picture 1050 and the corresponding block are determined by motion information of spatially neighboring blocks of the current CU 1000. To avoid a repeated scanning process of neighboring blocks, the first Merge candidate in the Merge candidate list of the current CU 1000 is used. The first available motion vector and its associated reference index are set to the temporal vector and index of the motion source image. In this way, the corresponding block can be identified more accurately than the TMVP, where the corresponding block (sometimes referred to as a co-located block) is always in a lower right or center position with respect to the current CU.
In the second step, a corresponding block of each sub-CU 1001 is identified by the temporal vector in the motion source picture 1050, by adding the temporal vector to the coordinates of the current CU. For each sub-CU, the motion information of its corresponding block (e.g., the smallest motion grid covering the center sample) is used to derive the motion information of the sub-CU. After the motion information of the corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU in the same way as the TMVP of HEVC, in which motion scaling and other procedures apply. For example, the decoder checks whether the low-delay condition is fulfilled (e.g., the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) and possibly uses motion vector MVx (e.g., the motion vector corresponding to reference picture list X) to predict motion vector MVy for each sub-CU (e.g., with X being equal to 0 or 1 and Y being equal to 1-X).
2.1.2 example of spatial-temporal motion vector prediction (STMVP)
In the STMVP method, the motion vectors of sub-CUs are recursively derived in raster scan order.
Fig. 11 shows an example of one CU with four sub-blocks and its neighboring blocks. Consider an 8×8 CU 1100 that contains four 4×4 sub-CUs A (1101), B (1102), C (1103), and D (1104). The neighboring 4×4 blocks in the current frame are labeled a (1111), b (1112), c (1113), and d (1114).
The motion derivation for sub-CU A starts by identifying its two spatial neighbors. The first neighbor is the N×N block above sub-CU A 1101 (block c 1113). If this block c (1113) is not available or is intra-coded, the other N×N blocks above sub-CU A (1101) are checked (from left to right, starting at block c 1113). The second neighbor is the block to the left of sub-CU A 1101 (block b 1112). If block b (1112) is not available or is intra-coded, the other blocks to the left of sub-CU A 1101 are checked (from top to bottom, starting at block b 1112). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame of the given list. Next, the temporal motion vector predictor (TMVP) of sub-block A 1101 is derived by following the same procedure as the TMVP derivation specified in HEVC. The motion information of the co-located block at block D 1104 is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
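The per-sub-CU averaging can be sketched as follows; the function is hypothetical and assumes availability checks and the scaling to the first reference frame have already been performed.

```python
def derive_stmvp_mv(above_mv, left_mv, temporal_mv):
    """Average the available neighbouring MVs for one sub-CU (STMVP sketch).

    Each argument is either None (not available / intra-coded) or an
    (mv_x, mv_y) tuple already scaled to the first reference frame of the
    reference list under consideration.
    """
    available = [mv for mv in (above_mv, left_mv, temporal_mv) if mv is not None]
    if not available:
        return None
    avg_x = sum(mv[0] for mv in available) / len(available)
    avg_y = sum(mv[1] for mv in available) / len(available)
    return (avg_x, avg_y)

# Example: only the above and temporal neighbours are available.
print(derive_stmvp_mv((4, 0), None, (2, 2)))  # -> (3.0, 1.0)
```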
2.1.3 example of sub-CU motion prediction mode signaling
In some embodiments, the sub-CU modes are enabled as additional Merge candidates, and no additional syntax element is required to signal the modes. Two additional Merge candidates are added to the Merge candidate list of each CU to represent the ATMVP mode and the STMVP mode. In other embodiments, up to seven Merge candidates may be used if the sequence parameter set indicates that ATMVP and STMVP are enabled. The encoding logic of the additional Merge candidates is the same as for the Merge candidates in the HM, which means that, for each CU in a P or B slice, two more RD checks may be needed for the two additional Merge candidates. In some embodiments, e.g., JEM, all bins of the Merge index are context coded by CABAC (context-based adaptive binary arithmetic coding). In other embodiments, e.g., HEVC, only the first bin is context coded and the remaining bins are context bypass coded.
2.2 example of adaptive motion vector difference resolution
In some embodiments, when use_integer_mv_flag in the slice header is equal to 0, a Motion Vector Difference (MVD) (between the motion vector of the PU and the predicted motion vector) is signaled in units of quarter luma samples. In JEM, a Locally Adaptive Motion Vector Resolution (LAMVR) is introduced. In JEM, the MVD can be coded in units of quarter luma samples, integer luma samples, or four luma samples. The MVD resolution is controlled at the Coding Unit (CU) level, and an MVD resolution flag is conditionally signaled for each CU that has at least one non-zero MVD component.
For a CU with at least one non-zero MVD component, a first flag is signaled to indicate whether quarter luma sample MV precision is used in the CU. When the first flag (equal to 1) indicates that quarter-luma sample MV precision is not used, another flag is signaled to indicate whether integer-luma sample MV precision or four-luma sample MV precision is used.
When the first MVD resolution flag of a CU is zero, or not coded for the CU (meaning that all MVDs in the CU are zero), a quarter-luma sample MV resolution is used for the CU. When a CU uses integer luma sample MV precision or four luma sample MV precision, the MVPs in the AMVP candidate list of the CU are rounded to the corresponding precision.
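The flag semantics described above can be summarized in a small sketch; the function name is hypothetical, and the mapping of the second flag's values to integer versus four-luma-sample precision is an assumption for illustration.

```python
def decode_mvd_resolution(has_nonzero_mvd, first_flag, second_flag):
    """Map the conditionally signalled flags to an MVD resolution.

    When the CU has no non-zero MVD component, the flags are not coded and
    quarter-luma-sample precision is inferred.  first_flag == 0 means
    quarter precision is used; otherwise second_flag selects between
    integer and four-luma-sample precision.
    """
    if not has_nonzero_mvd or first_flag == 0:
        return "quarter_luma_sample"
    return "integer_luma_sample" if second_flag == 0 else "four_luma_sample"

# Example: flags (1, 1) select four-luma-sample MVD resolution.
print(decode_mvd_resolution(True, 1, 1))
```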
In the encoder, CU-level RD checks are used to determine which MVD resolution is to be used for a CU. In other words, the CU-level RD check is performed three times, once for each MVD resolution. To speed up the encoder, the following encoding schemes are applied in JEM.
During RD-checking of a CU with normal quarter-luma sample MVD resolution, the motion information (integer luma sample precision) of this current CU is stored. For the same CU with integer luma sample and 4 luma sample MVD resolutions, the stored motion information (after rounding) is used as a starting point for further small-range motion vector refinement during RD-check, so that the time-consuming motion estimation process is not repeated three times.
RD checking of a CU with 4-luma-sample MVD resolution is invoked conditionally. For a CU, when the RD cost of integer-luma-sample MVD resolution is much greater than that of quarter-luma-sample MVD resolution, the RD check of 4-luma-sample MVD resolution for the CU is skipped.
2.3 example of higher motion vector storage precision
In HEVC, the motion vector precision is one-quarter pixel (pel) (one-quarter luma samples and one-eighth chroma samples for 4:2:0 video). In JEM, the accuracy of the internal motion vector storage and the Merge candidate is increased to 1/16 pixels. The higher motion vector precision (1/16 pixels) is used for motion compensated inter prediction of CUs encoded in skip/Merge mode. For CUs encoded using the normal AMVP mode, integer-pixel or quarter-pixel motion is used.
An SHVC upsampling interpolation filter with the same filter length and normalization factor as the HEVC motion compensated interpolation filter is used as the motion compensated interpolation filter for the additional fractional pixel positions. The chroma component motion vector precision in JEM is 1/32 samples, and an additional interpolation filter for 1/32 pixel fractional positions is derived by using the average of the filters for two adjacent 1/16 pixel fractional positions.
2.4 example of Overlapped Block Motion Compensation (OBMC)
In JEM, OBMC can be switched on and off at the CU level using a syntax element. When OBMC is used in JEM, OBMC is performed for all Motion Compensated (MC) block boundaries except the right and bottom boundaries of a CU. Moreover, it is applied to both the luma and chroma components. In JEM, an MC block corresponds to a coding block. When a CU is coded with a sub-CU mode (including sub-CU Merge, affine, and FRUC modes), each sub-block of the CU is an MC block. To process CU boundaries in a uniform fashion, OBMC is performed at the sub-block level for all MC block boundaries, with the sub-block size set to 4×4, as shown in Figs. 12A and 12B.
Fig. 12A shows sub-blocks at the CU/PU boundary, the shaded sub-blocks being where OBMC is applied. Similarly, fig. 12B shows the sub-PU in ATMVP mode.
When OBMC is applied to the current sub-block, in addition to the current MV, the vectors of the four neighboring sub-blocks (if available and not exactly the same as the current motion vector) are also used to derive the prediction block for the current sub-block. The plurality of prediction blocks based on the plurality of motion vectors are combined to generate a final prediction signal of the current sub-block.
The prediction block based on the motion vector of the neighboring sub-blocks is represented as PN, where N indicates the indexes of the sub-blocks adjacent to the upper, lower, left, and right sides, and the prediction block based on the motion vector of the current sub-block is represented as PC. OBMC is not performed from the PN when the PN is based on motion information of neighboring sub-blocks and the motion information is the same as that of the current sub-block. Otherwise, each sample of PN is added to the same sample in the PC, i.e., four rows/columns of PN are added to the PC. The weighting factors {1/4,1/8,1/16,1/32} are for PN and the weighting factors {3/4,7/8,15/16,31/32} are for PC. The exception is that only two rows/columns of PN are added to the PC for small MC blocks (i.e., when the height or width of the coding block is equal to 4 or the CU uses sub-CU mode coding). In this case, the weighting factors {1/4,1/8} are used for PN, and the weighting factors {3/4,7/8} are used for PC. For a PN generated based on motion vectors of vertically (horizontally) adjacent sub-blocks, samples in the same row (column) of the PN are added to PCs having the same weighting factor.
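A rough sketch of this blending for one sub-block whose above neighbour supplies PN is given below; the function is hypothetical, integer rounding is omitted, and only the vertical (row) direction is shown, with columns handled analogously for horizontal neighbours.

```python
def obmc_blend_top(pc, pn, small_block=False):
    """Blend the neighbour-MV prediction PN into the current prediction PC.

    pc and pn are 2-D lists of samples of the same size.  The rows nearest
    the shared boundary are blended with the PN weights listed above
    ({1/4, 1/8, 1/16, 1/32}, or {1/4, 1/8} for small MC blocks); the
    remaining rows are left untouched.
    """
    pn_weights = [1/4, 1/8] if small_block else [1/4, 1/8, 1/16, 1/32]
    out = [row[:] for row in pc]
    for r, w in enumerate(pn_weights):
        for c in range(len(pc[0])):
            out[r][c] = (1 - w) * pc[r][c] + w * pn[r][c]
    return out

# Example: blend a 4x4 neighbour prediction into a 4x4 current prediction.
pc = [[100] * 4 for _ in range(4)]
pn = [[80] * 4 for _ in range(4)]
blended = obmc_blend_top(pc, pn)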
In JEM, for a CU whose size is less than or equal to 256 luma samples, a CU-level flag is signaled to indicate whether OBMC is applied to the current CU. For CUs with a size larger than 256 luma samples or not coded with AMVP mode, OBMC is applied by default. At the encoder, when OBMC is applied to a CU, its impact is taken into account during the motion estimation stage. The prediction signal formed by OBMC using the motion information of the top neighboring block and the left neighboring block is used to compensate the top and left boundaries of the original signal of the current CU, and then the normal motion estimation process is applied.
2.5 example of Local Illumination Compensation (LIC)
LIC is based on a linear model for illumination changes, using a scaling factor a and an offset b. It is enabled or disabled adaptively for each inter-mode coded Coding Unit (CU).
When LIC applies to a CU, a least-squares method is employed to derive the parameters a and b by using the neighboring samples of the current CU and their corresponding reference samples. Fig. 13 shows an example of the neighboring samples used to derive the parameters of the LIC algorithm. Specifically, and as shown in Fig. 13, sub-sampled (2:1 sub-sampling) neighboring samples of the CU and the corresponding samples in the reference picture (identified by motion information of the current CU or sub-CU) are used. The LIC parameters are derived and applied for each prediction direction separately.
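A plain least-squares sketch for deriving a and b from the neighbouring samples is shown below; the function is hypothetical and uses floating-point arithmetic, whereas the codec-side derivation uses integer arithmetic.

```python
def derive_lic_params(ref_neighbors, cur_neighbors):
    """Least-squares fit of cur ≈ a * ref + b over the neighbouring samples.

    ref_neighbors: sub-sampled neighbouring samples of the reference block.
    cur_neighbors: the co-located neighbouring samples of the current CU.
    """
    n = len(ref_neighbors)
    sum_x = sum(ref_neighbors)
    sum_y = sum(cur_neighbors)
    sum_xx = sum(x * x for x in ref_neighbors)
    sum_xy = sum(x * y for x, y in zip(ref_neighbors, cur_neighbors))
    denom = n * sum_xx - sum_x * sum_x
    if denom == 0:
        return 1.0, 0.0            # fall back to the identity model
    a = (n * sum_xy - sum_x * sum_y) / denom
    b = (sum_y - a * sum_x) / n
    return a, b

# Example with a handful of neighbouring samples.
a, b = derive_lic_params([100, 110, 120, 130], [105, 116, 127, 138])
```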
When a CU is coded with Merge mode, the LIC flag is copied from neighboring blocks, in a way similar to the copying of motion information in Merge mode; otherwise, an LIC flag is signaled for the CU to indicate whether LIC applies.
When LIC is enabled for an image, an additional CU level RD check is needed to determine whether to apply LIC to the CU. When LIC is enabled for a CU, the integer-pixel motion search and fractional-pixel motion search are performed separately, using the mean-removed sum of absolute differences (MR-SAD) and the mean-removed sum of absolute Hadamard-transformed differences (MR-SATD), instead of SAD and SATD.
To reduce the coding complexity, the following coding scheme is applied in JEM.
When there is no significant brightness variation between the current image and its reference image, LIC is disabled for the entire image. To identify this, at the encoder, a histogram of the current image and each reference image of the current image is computed. Disabling LIC for a current picture if the histogram difference between the current picture and each reference picture of the current picture is less than a given threshold; otherwise, the LIC is enabled for the current image.
2.6 example of affine motion compensated prediction
In HEVC, only a translational motion model is applied for Motion Compensated Prediction (MCP). However, the camera and objects may have many kinds of motion, such as zoom in/out, rotation, perspective motion, and other irregular motions. JEM, on the other hand, applies a simplified affine transform motion compensated prediction. Fig. 14 shows an example of the affine motion field of a block 1400 described by two control point motion vectors V0 and V1. The Motion Vector Field (MVF) of block 1400 can be described by the following equation:
$$\left\{\begin{aligned} v_x &= \frac{(v_{1x}-v_{0x})}{w}x - \frac{(v_{1y}-v_{0y})}{w}y + v_{0x} \\ v_y &= \frac{(v_{1y}-v_{0y})}{w}x + \frac{(v_{1x}-v_{0x})}{w}y + v_{0y} \end{aligned}\right. \qquad (1)$$
As shown in Fig. 14, (v0x, v0y) is the motion vector of the top-left corner control point, and (v1x, v1y) is the motion vector of the top-right corner control point. To simplify motion compensated prediction, sub-block based affine transform prediction may be applied. The sub-block size M×N is derived as in the following equation:
$$\left\{\begin{aligned} M &= \operatorname{clip3}\!\left(4,\ w,\ \frac{w \times \mathit{MvPre}}{\max(\lvert v_{1x}-v_{0x}\rvert,\ \lvert v_{1y}-v_{0y}\rvert)}\right) \\ N &= \operatorname{clip3}\!\left(4,\ h,\ \frac{h \times \mathit{MvPre}}{\max(\lvert v_{2x}-v_{0x}\rvert,\ \lvert v_{2y}-v_{0y}\rvert)}\right) \end{aligned}\right. \qquad (2)$$
where MvPre is the motion vector fractional precision (e.g., 1/16 in JEM), and (v2x, v2y) is the motion vector of the bottom-left control point, calculated according to equation (1). If necessary, M and N can be adjusted downward to be divisors of w and h, respectively.
Fig. 15 shows an example of affine MVF of each sub-block of the block 1500. To derive the motion vector for each M × N sub-block, the motion vector for the center sample of each sub-block may be calculated according to equation (1) and rounded to motion vector fractional precision (e.g., 1/16 in JEM). A motion compensated interpolation filter may then be applied to generate a prediction for each sub-block using the derived motion vectors. After MCP, the high precision motion vector of each sub-block is rounded and saved with the same precision as the normal motion vector.
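A sketch of deriving per-sub-block MVs from the two control-point MVs via equation (1) is shown below; the function is hypothetical and rounding to the MV storage precision is omitted.

```python
def affine_subblock_mvs(v0, v1, w, h, sub_w, sub_h):
    """Derive the MV of each sub-block centre from the top-left control-point
    MV v0 and the top-right control-point MV v1 using equation (1).

    All MVs are (x, y) tuples in the same fractional-pel units.
    """
    v0x, v0y = v0
    v1x, v1y = v1
    mvs = {}
    for y0 in range(0, h, sub_h):
        for x0 in range(0, w, sub_w):
            cx = x0 + sub_w / 2.0          # centre sample of the sub-block
            cy = y0 + sub_h / 2.0
            vx = (v1x - v0x) / w * cx - (v1y - v0y) / w * cy + v0x
            vy = (v1y - v0y) / w * cx + (v1x - v0x) / w * cy + v0y
            mvs[(x0, y0)] = (vx, vy)
    return mvs

# Example: a 16x16 block split into 4x4 sub-blocks.
field = affine_subblock_mvs(v0=(8, 0), v1=(8, 4), w=16, h=16, sub_w=4, sub_h=4)
```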
In JEM, there are two affine motion modes: AF_INTER mode and AF_MERGE mode. For CUs with both width and height larger than 8, the AF_INTER mode can be applied. An affine flag at the CU level is signaled in the bitstream to indicate whether AF_INTER mode is used. In AF_INTER mode, a candidate list with motion vector pairs {(v0, v1) | v0 = {vA, vB, vC}, v1 = {vD, vE}} is constructed using the neighboring blocks.
Fig. 16 shows an example of Motion Vector Prediction (MVP) for a block 1600 in AF_INTER mode. As shown in Fig. 16, v0 is selected from the motion vectors of block A, B, or C. The motion vectors from the neighboring blocks can be scaled according to the reference list. They may also be scaled according to the relationship among the reference Picture Order Count (POC) of the neighboring block, the reference POC of the current CU, and the POC of the current CU. The approach for selecting v1 from the neighboring sub-blocks D and E is similar. If the number of candidates in the list is smaller than 2, the list is padded with motion vector pairs composed by duplicating each of the AMVP candidates. When the candidate list is larger than 2, the candidates may first be sorted according to the neighboring motion vectors (e.g., based on the similarity of the two motion vectors in a candidate pair). In some implementations, the first two candidates are kept. In some implementations, a Rate Distortion (RD) cost check is used to determine which motion vector pair candidate is selected as the Control Point Motion Vector Predictor (CPMVP) of the current CU. An index indicating the position of the CPMVP in the candidate list may be signaled in the bitstream. After the CPMVP of the current affine CU is determined, affine motion estimation is applied and the Control Point Motion Vectors (CPMVs) are found. Then the difference between the CPMVs and the CPMVP is signaled in the bitstream.
When a CU is coded in AF_MERGE mode, it obtains the first block coded with affine mode from the valid neighboring reconstructed blocks. Fig. 17A shows an example of the selection order of candidate blocks for a current CU 1700. As shown in Fig. 17A, the selection order may be from left (1701), above (1702), above-right (1703), below-left (1704) to above-left (1705) of the current CU 1700. Fig. 17B shows another example of candidate blocks for a current CU 1700 in AF_MERGE mode. If the neighboring below-left block 1701 is coded in affine mode, as shown in Fig. 17B, the motion vectors v2, v3 and v4 of the top-left corner, top-right corner, and below-left corner of the CU containing sub-block 1701 are derived. The motion vector v0 of the top-left corner of the current CU 1700 is calculated based on v2, v3 and v4. The motion vector v1 of the top-right of the current CU can be calculated accordingly.
After the CPMVs of the current CU, v0 and v1, are computed according to the affine motion model in equation (1), the MVF of the current CU can be generated. In order to identify whether the current CU is coded with AF_MERGE mode, an affine flag may be signaled in the bitstream when there is at least one neighboring block coded in affine mode.
2.7 example of pattern matched motion vector derivation (PMMVD)
The PMMVD mode is a special Merge mode based on a Frame Rate Up Conversion (FRUC) method. Using this mode, the motion information of the block is not signaled but derived at the decoder side.
When the Merge flag of a CU is true, the FRUC flag may be signaled to the CU. When the FRUC flag is false, the Merge index may be signaled and the normal Merge mode used. When the FRUC flag is true, an additional FRUC mode flag may be signaled to indicate which method (e.g., bilateral matching or template matching) will be used to derive the motion information for the block.
At the encoder side, the decision on whether to use FRUC Merge mode for a CU is based on RD cost selection as done for normal Merge candidates. For example, the various matching patterns (e.g., bilateral matching and template matching) of the CUs are verified by using RD cost selection. The matching pattern that results in the smallest cost is further compared to other CU patterns. If the FRUC matching pattern is the most efficient pattern, the FRUC flag is set to true for the CU and the relevant matching pattern is used.
Generally, the motion derivation process in FRUC Merge mode has two steps. CU-level motion search is performed first, followed by sub-CU-level motion refinement. At the CU level, an initial motion vector is derived for the entire CU based on bilateral matching or template matching. First, a list of MV candidates is generated and the candidate that results in the smallest matching cost is selected as the starting point for further CU-level refinement. Then, a local search based on bilateral matching or template matching is performed around the starting point. The MV that results in the smallest matching cost is taken as the MV of the entire CU. Subsequently, the motion information is further refined at the sub-CU level, with the derived CU motion vector as a starting point.
For example, the following derivation process is performed for the motion information derivation of a W×H CU. At the first stage, the MV for the whole W×H CU is derived. At the second stage, the CU is further split into M×M sub-CUs. The value of M is calculated as in equation (3), where D is a predefined splitting depth that is set to 3 by default in JEM. Then the MV for each sub-CU is derived.
$$M = \max\left\{4,\ \min\left\{\frac{W}{2^{D}},\ \frac{H}{2^{D}}\right\}\right\} \qquad (3)$$
Fig. 18 shows an example of bilateral matching used in a Frame Rate Up Conversion (FRUC) method. Bilateral matching is used to derive motion information for a current CU by finding a closest match between two blocks along a motion trajectory of the current CU (1800) in two different reference images (1810,1811). Under the assumption of a continuous motion trajectory, the motion vectors MV0(1801) and MV1(1802) pointing to the two reference blocks are proportional to the temporal distance between the current picture and the two reference pictures, e.g., TD0(1803) and TD1 (1804). In some embodiments, bilateral matching becomes a mirror-based bidirectional MV when the current picture (1800) is temporally between two reference pictures (1810,1811) and the temporal distance from the current picture to the two reference pictures is the same.
Fig. 19 shows an example of template matching used in a Frame Rate Up Conversion (FRUC) method. Template matching may be used to derive motion information for the current CU 1900 by finding a closest match between a template in the current image (e.g., the top-neighboring block and/or the left-neighboring block of the current CU) and a block in the reference image 1910 (e.g., having the same size as the template). In addition to the FRUC Merge mode described above, template matching may also be applied to the AMVP mode. In both JEM and HEVC, AMVP has two candidates. Using a template matching method, new candidates can be derived. If the newly derived candidate matched by the template is different from the first existing AMVP candidate, it is inserted into the very beginning of the AMVP candidate list and then the list size is set to 2 (e.g., by removing the second existing AMVP candidate). When applied to AMVP mode, only CU level search is applied.
The MV candidate set at the CU level can include: (1) the original AMVP candidates if the current CU is in AMVP mode, (2) all Merge candidates, (3) several MVs in the interpolated MV field (described later), and (4) the top and left neighboring motion vectors.
When using bilateral matching, each valid MV of a Merge candidate can be used as an input to generate an MV pair under the assumption of bilateral matching. For example, one valid MV of a Merge candidate is (MVa, refa) in reference list A. Then the reference picture refb of its paired bilateral MV is found in the other reference list B, such that refa and refb are temporally on different sides of the current picture. If such a refb is not available in reference list B, refb is determined as a reference that is different from refa and whose temporal distance to the current picture is the minimal one in list B. After refb is determined, MVb is derived by scaling MVa based on the temporal distances between the current picture and refa, refb.
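The pairing rule can be sketched as follows; the function is hypothetical, POCs are used directly, and floating-point scaling replaces the fixed-point derivation.

```python
def derive_bilateral_mv_pair(mva, ref_a_poc, list_b_pocs, cur_poc):
    """Given (MVa, refa) in list A, pick refb in list B and derive MVb by
    POC-distance scaling, following the rules described above (sketch).
    """
    # Prefer a list-B reference on the other temporal side of the current
    # picture; otherwise take the closest reference that differs from refa.
    other_side = [p for p in list_b_pocs
                  if (p - cur_poc) * (ref_a_poc - cur_poc) < 0]
    candidates = other_side or [p for p in list_b_pocs if p != ref_a_poc]
    ref_b_poc = min(candidates, key=lambda p: abs(p - cur_poc))

    td_a = cur_poc - ref_a_poc
    td_b = cur_poc - ref_b_poc
    mvb = (mva[0] * td_b / td_a, mva[1] * td_b / td_a)
    return mvb, ref_b_poc

# Example: current POC 8, refa POC 4, list B holds POCs {12, 16}.
print(derive_bilateral_mv_pair((2, -2), 4, [12, 16], 8))  # -> ((-2.0, 2.0), 12)
```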
In some implementations, four MVs from the interpolated MV field may also be added to the CU-level candidate list. More specifically, the interpolated MVs at positions (0, 0), (W/2, 0), (0, H/2), and (W/2, H/2) of the current CU are added. When FRUC is applied to AMVP mode, the original AMVP candidates are also added to the CU-level MV candidate set. In some implementations, at the CU level, up to 15 MVs for AMVP CUs and up to 13 MVs for Merge CUs may be added to the candidate list.
The MV candidate sets at the sub-CU level include: (1) MV determined from CU level search, (2) top, left, top left and top right neighboring MVs, (3) scaled versions of collocated MVs from reference pictures, (4) one or more ATMVP candidates (e.g., up to four), and (5) one or more STMVP candidates (e.g., up to four). The scaled MV from the reference image is derived as follows. The reference images in both lists are traversed. The MVs at the collocated positions of the sub-CUs in the reference picture are scaled to the reference of the starting CU level MV. ATMVP and STMVP candidates may be the first four candidates. At the sub-CU level, one or more MVs (e.g., up to seventeen) are added to the candidate list.
Generation of interpolated MV field
Before encoding a frame, an interpolated motion field is generated for the entire image based on one-sided ME. The motion field may then be used later as a CU-level or sub-CU-level MV candidate.
In some embodiments, the motion field of each reference picture in both reference lists is traversed at the 4x4 block level. Fig. 20 shows an example of unilateral Motion Estimation (ME) 2000 in the FRUC method. For each 4x4 block, if the motion associated with the block passes through a 4x4 block in the current picture and the block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current picture according to the temporal distances TD0 and TD1 (in the same way as the MV scaling of TMVP in HEVC), and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4x4 block, the motion of that block is marked as unavailable in the interpolated motion field.
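A simplified sketch of this field construction is given below. It assumes MVs stored in 4x4-block units and a plain POC-distance ratio for scaling, and it omits the sign conventions and fixed-point rounding of TMVP-style scaling; it illustrates the structure rather than the normative process.

```python
# Simplified sketch of one-sided ME at 4x4-block granularity (not the normative process).
def build_interpolated_mv_field(ref_mvs, scale, w_blocks, h_blocks):
    """ref_mvs maps the (bx, by) position of a 4x4 block in a reference picture to
    its motion (mvx, mvy). 'scale' is an assumed ratio of temporal distances
    (TD0/TD1). Returns a dict of interpolated MVs for 4x4 blocks of the current
    picture; blocks never reached by a trajectory stay unavailable (absent)."""
    field = {}
    for (bx, by), (mvx, mvy) in ref_mvs.items():
        # Block of the current picture that this motion trajectory passes through.
        cx = bx + round(mvx * scale)
        cy = by + round(mvy * scale)
        if 0 <= cx < w_blocks and 0 <= cy < h_blocks and (cx, cy) not in field:
            # Assign the scaled motion to the current-picture block (first hit wins).
            field[(cx, cy)] = (round(mvx * scale), round(mvy * scale))
    return field
```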
Interpolation and matching costs
When the motion vector points to a fractional sample position, motion compensated interpolation is required. To reduce complexity, both bilateral matching and template matching may use bilinear interpolation instead of conventional 8-tap HEVC interpolation.
The matching cost is calculated somewhat differently at different steps. When selecting candidates from the candidate set at the CU level, the matching cost may be a Sum of Absolute Differences (SAD) of bilateral matching or template matching. After determining the starting MV, the matching cost C for the bilateral matching of the sub-CU level search is calculated as follows:
C = SAD + w · (|MVx − MVsx| + |MVy − MVsy|)
where w is a weighting factor. In some embodiments, w is empirically set to 4. MV and MVs indicate the current MV and the starting MV, respectively. SAD may still be used as the matching cost of template matching for the sub-CU level search.
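As a small illustration of this cost, a sketch is shown below; w = 4 follows the value stated above, and the SAD helper operates on plain lists of samples for simplicity.

```python
# Illustrative sketch of the sub-CU level bilateral matching cost:
# C = SAD + w * (|MVx - MVsx| + |MVy - MVsy|), with w assumed to be 4 as stated above.
def bilateral_matching_cost(sad_value: int, mv, mv_start, w: int = 4) -> int:
    return sad_value + w * (abs(mv[0] - mv_start[0]) + abs(mv[1] - mv_start[1]))

def sad(block_a, block_b) -> int:
    """Sum of absolute differences between two equally sized sample blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

# Example: candidate MV (5, -3) tested against starting MV (4, -2).
cost = bilateral_matching_cost(120, (5, -3), (4, -2))  # 120 + 4 * (1 + 1) = 128
```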
In FRUC mode, the MV is derived by using only luma samples. The derived motion information is used for both luma and chroma in motion compensated (MC) inter prediction. After the MV is determined, the final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.
MV refinement is a pattern-based MV search with a bilateral matching cost or a template matching cost as the criterion. In JEM, two search patterns are supported, an unrestricted center-biased diamond search (UCBDS) and an adaptive cross search, for MV refinement at the CU level and the sub-CU level, respectively. For both CU-level and sub-CU-level MV refinement, the MV is searched directly at quarter-luma-sample MV precision and then refined at eighth-luma-sample MV precision. The search range of MV refinement for the CU level and the sub-CU level is set equal to 8 luma samples.
In the bilateral matching Merge mode, bi-prediction is applied, because the motion information of a CU is derived based on the closest match between two blocks along the motion trajectory of the current CU in two different reference images. In the template matching Merge mode, the encoder may select among unidirectional prediction from list 0, unidirectional prediction from list 1, or bi-directional prediction for a CU. The selection may be based on the template matching cost, as follows:
If costBi <= factor * min(cost0, cost1)
bi-directional prediction is used;
Otherwise, if cost0 <= cost1
unidirectional prediction from list 0 is used;
Otherwise,
unidirectional prediction from list 1 is used;
where cost0 is the SAD of the list 0 template matching, cost1 is the SAD of the list 1 template matching, and costBi is the SAD of the bi-directional prediction template matching. For example, a factor value equal to 1.25 means that the selection process is biased toward bi-directional prediction. The inter prediction direction selection may be applied to the CU-level template matching process.
2.8 example of bidirectional optical flow (BIO)
The bi-directional optical flow (BIO) method is a sample-wise motion refinement performed on top of block-wise motion compensation for bi-directional prediction. In some implementations, the sample level motion refinement does not use signaling.
Let I(k) be the luma value from reference k (k = 0, 1) after block motion compensation, and let ∂I(k)/∂x and ∂I(k)/∂y denote the horizontal and vertical components of the gradient of I(k), respectively. Assuming that the optical flow is valid, the motion vector field (vx, vy) is given by the following equation:

∂I(k)/∂t + vx · ∂I(k)/∂x + vy · ∂I(k)/∂y = 0
Combining the optical flow equation with Hermite interpolation of the motion trajectory of each sample yields a unique third-order polynomial that matches both the function values I(k) and their partial derivatives ∂I(k)/∂x, ∂I(k)/∂y at the ends. The value of this polynomial at t = 0 is the BIO prediction:

predBIO = 1/2 · (I(0) + I(1) + vx/2 · (τ1 · ∂I(1)/∂x − τ0 · ∂I(0)/∂x) + vy/2 · (τ1 · ∂I(1)/∂y − τ0 · ∂I(0)/∂y))
Fig. 21 shows an example of an optical flow trajectory in the bidirectional optical flow (BIO) method, where τ0 and τ1 denote the distances to the reference frames. The distances τ0 and τ1 are calculated based on the POC of Ref0 and Ref1: τ0 = POC(current) − POC(Ref0), τ1 = POC(Ref1) − POC(current). If both predictions come from the same temporal direction (both from the past or both from the future), the signs are different (i.e., τ0·τ1 < 0). In this case, BIO is applied only if the predictions are not from the same time instant (i.e., τ0 ≠ τ1), both reference regions have non-zero motion (e.g., MVx0, MVy0, MVx1, MVy1 ≠ 0), and the block motion vectors are proportional to the temporal distances (e.g., MVx0/MVx1 = MVy0/MVy1 = −τ0/τ1).
The motion vector field (vx, vy) is determined by minimizing the difference Δ between the values in points A and B. Fig. 22A and 22B show examples of the intersection of the motion trajectory with the reference frame planes. The model uses only the first linear term of the local Taylor expansion of Δ:

Δ = I(0) − I(1) + vx · (τ1 · ∂I(1)/∂x + τ0 · ∂I(0)/∂x) + vy · (τ1 · ∂I(1)/∂y + τ0 · ∂I(0)/∂y)
All values in the above equation depend on the sample position, denoted (i′, j′). Assuming that the motion is consistent in the local surrounding area, Δ can be minimized within a (2M+1) × (2M+1) square window Ω centered on the currently predicted point (i, j), where M equals 2:
(vx, vy) = argmin Σ_{[i′,j′] ∈ Ω} Δ²[i′, j′]
for this optimization problem, JEM uses a simplified approach, first minimizing in the vertical direction, and then minimizing in the horizontal direction. This results in
vx = (s1 + r) > m ? clip3(−thBIO, thBIO, −s3/(s1 + r)) : 0        (9)

vy = (s5 + r) > m ? clip3(−thBIO, thBIO, −(s6 − vx·s2/2)/(s5 + r)) : 0        (10)

where

s1 = Σ_{[i′,j′] ∈ Ω} (τ1 · ∂I(1)/∂x + τ0 · ∂I(0)/∂x)²,
s2 = Σ_{[i′,j′] ∈ Ω} (τ1 · ∂I(1)/∂x + τ0 · ∂I(0)/∂x) · (τ1 · ∂I(1)/∂y + τ0 · ∂I(0)/∂y),
s3 = Σ_{[i′,j′] ∈ Ω} (I(1) − I(0)) · (τ1 · ∂I(1)/∂x + τ0 · ∂I(0)/∂x),
s5 = Σ_{[i′,j′] ∈ Ω} (τ1 · ∂I(1)/∂y + τ0 · ∂I(0)/∂y)²,
s6 = Σ_{[i′,j′] ∈ Ω} (I(1) − I(0)) · (τ1 · ∂I(1)/∂y + τ0 · ∂I(0)/∂y).
to avoid division by zero or very small values, regularization parameters r and m are introduced in equations (9) and (10).
r = 500 · 4^(d−8)        (12)
m = 700 · 4^(d−8)        (13)
Where d is the bit depth of the video samples.
To keep the memory access for BIO the same as for conventional bi-predictive motion compensation, all prediction and gradient values I(k), ∂I(k)/∂x, ∂I(k)/∂y are computed only for positions inside the current block. Fig. 22A shows an example of access positions outside of block 2200. As shown in fig. 22A, in equation (9), a (2M+1) × (2M+1) square window Ω centered on a current prediction point on the boundary of the prediction block needs to access positions outside the block. In JEM, the values of I(k), ∂I(k)/∂x, ∂I(k)/∂y outside the block are set equal to the nearest available value inside the block. This may be implemented, for example, as padding region 2201, as shown in fig. 22B.
With BIO, it is possible to refine the motion field for each sample. To reduce computational complexity, a block-based BIO design may be used in JEM. The motion refinement may be calculated based on 4x4 blocks. In block-based BIO, the sn values in equation (9) of all samples in a 4x4 block are first aggregated, and the aggregated sn values are then used to derive the BIO motion vector offset for that 4x4 block. More specifically, the following formula may be used for block-based BIO derivation:

sn,bk = Σ_{(x,y) ∈ bk} Σ_{[i′,j′] ∈ Ω(x,y)} sn[i′, j′]

where bk denotes the set of samples belonging to the k-th 4x4 block of the prediction block. The sn in equations (9) and (10) are replaced by ((sn,bk) >> 4) to derive the associated motion vector offsets.
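The following sketch illustrates how a per-4x4-block BIO offset could be derived from the aggregated s-values, following the structure of equations (9), (10), (12), and (13); the floating-point arithmetic and the function names are illustrative assumptions and do not reproduce the exact integer implementation of JEM.

```python
# Illustrative sketch of deriving the per-block BIO offset (vx, vy) from aggregated
# (and already right-shifted) s-values s1, s2, s3, s5, s6.
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def bio_block_offset(s1, s2, s3, s5, s6, bit_depth=10, all_refs_one_direction=False):
    r = 500 * 4 ** (bit_depth - 8)              # equation (12)
    m = 700 * 4 ** (bit_depth - 8)              # equation (13)
    th_bio = 12 * 2 ** ((14 if all_refs_one_direction else 13) - bit_depth)
    vx = clip3(-th_bio, th_bio, -s3 / (s1 + r)) if (s1 + r) > m else 0.0   # equation (9)
    vy = clip3(-th_bio, th_bio, -(s6 - vx * s2 / 2) / (s5 + r)) if (s5 + r) > m else 0.0  # equation (10)
    return vx, vy
```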
In some cases, the MV refinement of BIO may be unreliable due to noise or irregular motion. Therefore, in BIO, the magnitude of the MV refinement is clipped to a threshold. The threshold is determined based on whether the reference pictures of the current picture all come from one direction. For example, if all reference pictures of the current picture come from one direction, the threshold is set to 12 × 2^(14−d); otherwise, it is set to 12 × 2^(13−d).
The gradients of BIO may be calculated at the same time as motion compensated interpolation using operations consistent with the HEVC motion compensation process (e.g., a 2D separable finite impulse response (FIR) filter). In some embodiments, the input to this 2D separable FIR is the same reference frame samples as for the motion compensation process, together with the fractional position (fracX, fracY) according to the fractional part of the block motion vector. For the horizontal gradient ∂I/∂x, the signal is first interpolated vertically using BIOfilterS corresponding to the fractional position fracY with de-scaling shift d−8, and then the gradient filter BIOfilterG is applied in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18−d. For the vertical gradient ∂I/∂y, the gradient filter BIOfilterG is first applied vertically corresponding to the fractional position fracY with de-scaling shift d−8, and then signal displacement is performed using BIOfilterS in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18−d. The lengths of the interpolation filters used for gradient calculation (BIOfilterG) and signal displacement (BIOfilterS) may be shorter (e.g., 6 taps) to maintain reasonable complexity. Table 1 shows example filters that can be used for gradient calculation for different fractional positions of the block motion vector in BIO. Table 2 shows example interpolation filters that may be used for prediction signal generation in BIO.
Table 1: Exemplary filters for gradient calculation in BIO (BIOfilterG)
Table 2: Exemplary interpolation filters for prediction signal generation in BIO
Fractional pixel position    Interpolation filter for prediction signal (BIOfilterS)
0 {0,0,64,0,0,0}
1/16 {1,-3,64,4,-2,0}
1/8 {1,-6,62,9,-3,1}
3/16 {2,-8,60,14,-5,1}
1/4 {2,-9,57,19,-7,2}
5/16 {3,-10,53,24,-8,2}
3/8 {3,-11,50,29,-9,2}
7/16 {3,-11,44,35,-10,3}
1/2 {3,-10,35,44,-11,3}
In JEM, BIO may be applied to all bi-directionally predicted blocks when the two predictions are from different reference pictures. When Local Illumination Compensation (LIC) is enabled for a CU, the BIO may be disabled.
In some embodiments, OBMC is applied to the block after the normal MC process. To reduce computational complexity, BIO may not be applied during the OBMC process. This means that BIO is applied to the MC process of a block when its own MV is used, and BIO is not applied to the MC process when the MVs of the neighboring blocks are used in the OBMC process.
2.9 example of decoder-side motion vector refinement (DMVR)
In the bi-directional prediction operation, in order to predict one block region, two prediction blocks respectively formed using Motion Vectors (MVs) of list 0 and MVs of list 1 are combined to form a single prediction signal. In the decoder-side motion vector refinement (DMVR) method, the two motion vectors of the bi-prediction are further refined by a double-sided template matching process. The double-sided template matching is applied in the decoder to perform a distortion-based search between the double-sided template and reconstructed samples in the reference picture to obtain refined MVs without transmitting additional motion information.
As shown in fig. 23, in DMVR, the bilateral template is generated as a weighted combination (i.e., average) of the two prediction blocks from the initial MV0 of list 0 and MV1 of list 1, respectively. The template matching operation consists of calculating a cost measure between the generated template and the sample region (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that yields the minimum template cost is considered as the updated MV of that list to replace the original one. In JEM, nine MV candidates are searched for each list. The nine MV candidates include the original MV and 8 surrounding MVs, which are offset from the original MV by one luma sample in the horizontal direction, the vertical direction, or both. Finally, the two new MVs, MV0' and MV1' as shown in fig. 23, are used to generate the final bi-directional prediction results. The Sum of Absolute Differences (SAD) is used as the cost measure.
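A minimal sketch of the bilateral-template search over the nine candidates is given below; the predict() callable and the plain-list SAD are illustrative assumptions standing in for the actual motion compensation and cost computation.

```python
# Illustrative sketch of the DMVR candidate search for one reference list.
def dmvr_refine(mv, predict, template, search_step=1):
    """mv: (x, y) starting MV for one list, in luma-sample units.
    predict(mv): returns the prediction block (list of rows) for that MV.
    template: the bilateral template, i.e. the average of the two initial predictions.
    Returns the candidate MV with the smallest SAD against the template."""
    def sad(a, b):
        return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))
    # Original MV plus the eight one-sample offsets (nine candidates in total).
    candidates = [(mv[0] + dx, mv[1] + dy)
                  for dx in (-search_step, 0, search_step)
                  for dy in (-search_step, 0, search_step)]
    return min(candidates, key=lambda c: sad(predict(c), template))
```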
DMVR is applied to the Merge mode for bi-prediction, using one MV from a past reference picture and another MV from a future reference picture without transmitting additional syntax elements. In JEM, DMVR is not applied when LIC, affine motion, FRUC, or sub-CU Merge candidates are enabled for a CU.
3. Examples of CABAC modifications
In JEM, CABAC contains the following three major changes compared to the design in HEVC:
Modified context modeling for transform coefficients
Multi-hypothesis probability estimation with context-dependent update speed
Adaptive initialization of context models
3.1 example of context modeling of transform coefficients
In HEVC, transform coefficients of a coding block are coded using non-overlapping Coefficient Groups (CGs), and each CG contains the coefficients of a 4 × 4 block of the coding block. The CGs inside a coding block, and the transform coefficients within a CG, are coded according to predefined scan orders. The coding of transform coefficient levels of a CG with at least one non-zero transform coefficient may be separated into multiple scan passes. In the first pass, the first bin (denoted by bin0, also referred to as significant_coeff_flag, which indicates whether the magnitude of the coefficient is larger than 0) is coded. Next, two scan passes for context coding the second/third bins (denoted by bin1 and bin2, respectively, also referred to as coeff_abs_greater1_flag and coeff_abs_greater2_flag) may be applied. Finally, the scan passes for coding the sign information and the remaining values of coefficient levels (also referred to as coeff_abs_level_remaining) are invoked, if necessary. Only the bins in the first three scan passes are coded in the regular mode, and those bins are referred to as regular bins in the following description.
In JEM, the context modeling of the regular bins is changed. When coding bin i in the i-th scan pass (i being 0, 1, 2), the context index depends on the values of the i-th bins of previously coded coefficients in the neighborhood covered by a local template. Specifically, the context index is determined based on the sum of the i-th bins of the neighboring coefficients.
As shown in fig. 24, the local template contains up to five spatially adjacent transform coefficients, where x indicates the position of the current transform coefficient and xi (i is 0 to 4) indicates its five neighbors. In order to capture the characteristics of transform coefficients of different frequencies, one coding block may be divided into up to three regions, and the division method is fixed regardless of the coding block size. For example, when coding the bin0 of luma transform coefficients, a coding block is divided into three regions marked with different colors and context indexes assigned to each region are listed. The luma and chroma components are processed in a similar manner, but with separate sets of context models. Furthermore, the context model selection of the bin0 (i.e., the active flag) for the luma component is further dependent on the transform size.
3.2 example of Multi-hypothesis probability estimation
The binary arithmetic coder applies a "multi-hypothesis" probability update model based on two probability estimates P0 and P1 that are associated with each context model and are updated independently with different adaptation rates, as follows:

P0new = P0old + ((2^k − P0old) >> Mi), if the input bin is '1'; P0new = P0old − (P0old >> Mi), if the input bin is '0';
P1new = P1old + ((2^k − P1old) >> 8), if the input bin is '1'; P1new = P1old − (P1old >> 8), if the input bin is '0';        (15)

where Pjold and Pjnew (j = 0, 1) represent the probabilities before and after decoding a bin, respectively. The variable Mi (being 4, 5, 6, or 7) is a parameter that controls the probability update speed of the context model with index equal to i, and k represents the precision of the probabilities (here equal to 15).
The probability estimate P for an interval subdivision in a binary arithmetic encoder is the average of estimates from two hypotheses:
P = (P0new + P1new)/2        (16)
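The following sketch illustrates the two-hypothesis update of equation (15) and the averaging of equation (16), assuming k = 15 and a fixed adaptation rate of 8 for the slow hypothesis; it is a simplified illustration, not the JEM implementation.

```python
# Illustrative sketch of the multi-hypothesis probability update (equations (15), (16)).
K = 15  # assumed probability precision

def update_hypothesis(p_old: int, bin_val: int, rate: int) -> int:
    """Update one probability hypothesis with adaptation rate 'rate' (a shift amount)."""
    if bin_val == 1:
        return p_old + (((1 << K) - p_old) >> rate)
    return p_old - (p_old >> rate)

def update_context(p0: int, p1: int, bin_val: int, m_i: int):
    """Fast hypothesis uses the per-context rate m_i (4..7); slow hypothesis uses 8."""
    p0_new = update_hypothesis(p0, bin_val, m_i)
    p1_new = update_hypothesis(p1, bin_val, 8)
    p_for_coding = (p0_new + p1_new) >> 1   # equation (16)
    return p0_new, p1_new, p_for_coding
```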
In JEM, the value of the parameter Mi used in equation (15), which controls the probability update speed of each context model, is assigned as follows:
At the encoder side, the coded bins associated with each context model are recorded. After a slice is coded, the rate costs of using different Mi values (4, 5, 6, 7) are computed for each context model with index equal to i, and the Mi value that provides the minimum rate cost is selected. For simplicity, this selection process is performed only when a new combination of slice type and slice-level quantization parameter is encountered.
A 1-bit flag is signaled for each context model i to indicate whether Mi is different from the default value 4. When the flag is 1, two more bits are used to indicate whether Mi is equal to 5, 6, or 7.
3.3 example of initialization of context model
Instead of using a fixed table for initialization of context models in HEVC, the initial probability state of a context model for a slice of inter-coding may be initialized by copying the state from a previously coded picture. More specifically, after encoding the centrally located CTU for each picture, the probability states of all context models are stored to be used as initial states for the corresponding context models on subsequent pictures. In JEM, the initial state set of each inter-coded slice is copied from the storage state of the previously coded picture with the same slice type and the same slice level QP as the current slice. This lacks loss robustness, but is used for coding efficiency experimental purposes in the current JEM scheme.
4. Examples relating to embodiments and methods
Methods related to the disclosed technology include extended LAMVR, where the supported motion vector resolution ranges from 1/4 pixels to 4 pixels (1/4 pixels, 1/2 pixels, 1 pixel, 2 pixels, and 4 pixels). When the MVD information is signaled, information on the motion vector resolution is signaled at the CU level.
Both the Motion Vector (MV) and the Motion Vector Predictor (MVP) of a CU are adjusted depending on the resolution of the CU. If the applied motion vector resolution is denoted as R (R may be 1/4, 1/2, 1, 2, or 4), then the MV (MVx, MVy) and the MVP (MVPx, MVPy) are represented as follows:
(MVx,MVy)=(Round(MVx/(R*4))*(R*4),Round(MVy/(R*4))*(R*4))
(MVPx,MVPy)=(Round(MVPx/(R*4))*(R*4),Round(MVPy/(R*4))*(R*4))
Because both the motion vector predictor and the MV are adjusted by the adaptive resolution, the MVD (MVDx, MVDy) is also aligned with the resolution and is signaled according to the resolution as follows:
(MVDx,MVDy)=((MVx–MVPx)/(R*4),(MVy–MVPy)/(R*4))
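A sketch of this adaptive-resolution alignment is given below; it assumes MVs stored in quarter-pel units and uses ordinary rounding, which may differ from the rounding of an actual implementation.

```python
# Illustrative sketch of LAMVR MV/MVP alignment and MVD derivation.
def round_to_resolution(v: int, r: float) -> int:
    """Round a quarter-pel value v to the nearest multiple of the R-pel step."""
    step = int(r * 4)
    return int(round(v / step)) * step

def derive_mvd(mv, mvp, r: float):
    step = int(r * 4)
    mv_al = (round_to_resolution(mv[0], r), round_to_resolution(mv[1], r))
    mvp_al = (round_to_resolution(mvp[0], r), round_to_resolution(mvp[1], r))
    return ((mv_al[0] - mvp_al[0]) // step, (mv_al[1] - mvp_al[1]) // step)

# Example: integer-pel resolution (R = 1), MV = (17, -9), MVP = (12, -4) in quarter-pel.
mvd = derive_mvd((17, -9), (12, -4), 1)   # MV/MVP round to (16, -8)/(12, -4) -> MVD (1, -1)
```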
In this proposal, the motion vector resolution index (MVR index) indicates both the MVP index and the motion vector resolution. As a result, the proposed method has no MVP index signaling. Table 3 shows what each value of the MVR index represents.
Table 3: Example of MVR index representation
In the case of bi-prediction, AMVR has 3 modes for every resolution. The AMVR bi-directional index indicates whether the MVDx, MVDy of each reference list (list 0 or list 1) is signaled. An example definition of the AMVR bi-directional index is shown in Table 4 below.
Table 4: Example of the AMVR bi-directional index
AMVR bi-directional index    List 0 (MVDx, MVDy)    List 1 (MVDx, MVDy)
0    signaled    signaled
1    not signaled    signaled
2    signaled    not signaled
5. Examples of existing implementations
In one prior implementation using BIO, the MV calculated between the reference block/sub-block in list 0 (denoted by refblk0) and the reference block/sub-block in list 1 (denoted by refblk1), represented by (vx, vy), is used only for motion compensation of the current block/sub-block and is not used for motion prediction, deblocking, OBMC, etc. of future coding blocks, which may be inefficient.
In another prior implementation using OBMC, for AMVP mode, whether OBMC is enabled for small blocks (width × height <= 256) is determined at the encoder and signaled to the decoder. This increases the complexity of the encoder. Meanwhile, for a given block/sub-block, when OBMC is enabled, it is always applied to both luma and chroma, which may cause a reduction in coding efficiency.
6. Example method for motion prediction based on updated MVs
Embodiments of the presently disclosed technology overcome the shortcomings of existing implementations, thereby providing video coding with higher coding efficiency. Based on the disclosed techniques, motion prediction using updated motion vectors may enhance existing and future video coding standards, as set forth in the examples described below for various implementations. The examples of the disclosed technology provided below illustrate general concepts and are not meant to be construed as limiting. In one example, various features described in these examples may be combined unless explicitly indicated to the contrary.
With respect to terminology, the reference pictures of the current picture from list 0 and list 1 are denoted as Ref0 and Ref1, respectively. Let τ0 = POC(current) − POC(Ref0) and τ1 = POC(Ref1) − POC(current), and let the reference blocks of the current block from Ref0 and Ref1 be denoted by refblk0 and refblk1, respectively. For a sub-block in the current block, the MV of its corresponding sub-block in refblk0 pointing to refblk1 is denoted by (vx, vy). The MVs of the sub-block in Ref0 and Ref1 are denoted by (mvL0x, mvL0y) and (mvL1x, mvL1y), respectively. As described in this patent document, the updated motion vector based methods for motion prediction can be extended to existing and future video coding standards.
Example 1. It is proposed to modify the motion information of the BIO coding block (e.g. differently than used in motion compensation), which can be used later, such as in a subsequent motion prediction (e.g. TMVP) process.
(a) In one example, it is proposed to scale the MV (vx, vy) derived in BIO and add it to the original MV (mvLXx, mvLXy) (X = 0 or 1) of the current block/sub-block. The updated MVs are calculated as follows: mvL0′x = −vx * (τ0/(τ0 + τ1)) + mvL0x, mvL0′y = −vy * (τ0/(τ0 + τ1)) + mvL0y, and mvL1′x = vx * (τ1/(τ0 + τ1)) + mvL1x, mvL1′y = vy * (τ1/(τ0 + τ1)) + mvL1y. An illustrative sketch of this computation is given after item (iii) below.
(i) In one example, the updated MV is used for future motion prediction (as in AMVP, Merge, and affine modes), deblocking, OBMC, and so on.
(ii) Alternatively, the updated MV can only be used for motion prediction on CUs/PUs that are not immediately following it in decoding order.
(iii) Alternatively, the updated MV may only be used as TMVP in AMVP, Merge, or affine mode.
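The sketch below illustrates the computation of example 1(a); the inputs (vx, vy), τ0, and τ1 are assumed to be available from the BIO process, and floating-point arithmetic is used instead of the fixed-point precision of a real codec.

```python
# Illustrative sketch of the updated-MV computation of example 1(a).
def update_bio_mvs(mv_l0, mv_l1, v, tau0, tau1):
    """mv_l0, mv_l1: original list-0/list-1 MVs; v: BIO refinement (vx, vy);
    tau0, tau1: POC distances to Ref0 and Ref1. Returns the updated MVs."""
    vx, vy = v
    s = tau0 + tau1
    mv_l0_upd = (-vx * (tau0 / s) + mv_l0[0], -vy * (tau0 / s) + mv_l0[1])
    mv_l1_upd = ( vx * (tau1 / s) + mv_l1[0],  vy * (tau1 / s) + mv_l1[1])
    return mv_l0_upd, mv_l1_upd

# Example: tau0 = tau1 = 1, so half of the refinement is applied to each direction.
mv0, mv1 = update_bio_mvs((4, 0), (-4, 0), (2, -2), 1, 1)  # -> (3.0, 1.0), (-3.0, -1.0)
```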
Example 2. It is proposed that for BIO, DMVR, FRUC, template matching or other methods that require updating MVs (or motion information including MVs and/or reference pictures) derived from the bitstream, the use of updated motion information may be constrained.
(a) If motion information can be updated at the sub-block level, both updated and non-updated motion information for different sub-blocks may be stored. In one example, updated motion information may be stored for some sub-blocks and for other remaining sub-blocks, non-updated motion information may be stored.
(b) In one example, if the MV (or motion information) is updated at the sub-block level, the updated MV is stored only for the inner sub-blocks (i.e., sub-blocks not at PU/CU/CTU boundaries) and then used for motion prediction, deblocking, OBMC, etc., as shown in fig. 25.
(c) In one example, the updated MV or motion information is not used for motion prediction and OBMC. Alternatively, in addition, the updated MV or motion information is not used for deblocking.
(d) In one example, the updated MV or motion information is used only for motion compensation and temporal motion prediction, such as TMVP/ATMVP.
Example 3. It is proposed to implicitly enable/disable OBMC depending on coding mode, motion information, size or location of PU/CU/block and thus not to signal OBMC flag.
(a) In one example, OBMC is disabled for a PU/CU/block coded in AMVP mode or AFFINE_INTER mode if one of the following conditions is met (where w and h are the width and height of the PU/CU/block):
(i)w×h<=T
(ii)w<=T&&h<=T
(b) In one example, OBMC is always enabled for PU/CU/blocks coded in Merge mode and AFFINE_MERGE mode.
(c) Alternatively, in addition, vertical and horizontal OBMC are separately disabled/enabled. If the PU/CU/block height is less than T, then vertical OBMC is disabled. If the width of the PU/CU/block is less than T, horizontal OBMC is disabled.
(d) In one example, no neighboring MVs from the upper row are used in OBMC for PU/CU/block/sub-block at the top CTU boundary.
(e) In one example, the neighboring MVs from the left column are not used in OBMC for PU/CU/block/sub-block at the left CTU boundary.
(f) In one example, OBMC is only enabled for uni-directionally predicted PU/CU/block/sub-block.
(g) In one example, OBMC is disabled for PU/CU/blocks whose MVD resolution is greater than or equal to integer pixels.
Example 4. It is proposed that whether OBMC is enabled may depend on the motion information of the current PU/CU/block/sub-block and its neighboring PU/CU/blocks/sub-blocks (an illustrative sketch follows this example).
(a) In one example, if a neighboring PU/CU/block/sub-block has motion information that is quite different from that of the current PU/CU/block/sub-block, its motion information is not used in OBMC.
(i) In one example, the neighboring PU/CU/block/sub-block has a different prediction direction or reference picture than the current PU/CU/block/sub-block.
(ii) In one example, the neighboring PU/CU/block/sub-block has the same prediction direction and reference picture as the current PU/CU/block/sub-block; however, the absolute horizontal/vertical MV difference between the neighboring PU/CU/block/sub-block and the current PU/CU/block/sub-block in prediction direction X (X = 0 or 1) is greater than a given threshold MV_TH.
(b) Alternatively, if the neighboring PU/CU/block/sub-block has motion information similar to that of the current PU/CU/block/sub-block, its motion information is not used in OBMC.
(i) In one example, the neighboring PU/CU/block/sub-block has the same prediction direction and reference picture as the current PU/CU/block/sub-block, and the absolute horizontal/vertical MV difference between the neighboring PU/CU/block/sub-block and the current PU/CU/block/sub-block in all prediction directions is less than a given threshold MV_TH.
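An illustrative sketch of the neighbor-motion check of example 4(a) follows; the threshold MV_TH and the structure of the motion-information record are assumptions chosen for illustration only.

```python
# Illustrative sketch of example 4(a): decide whether a neighbor's motion is used in OBMC.
MV_TH = 4  # assumed threshold, in the MV storage unit (e.g. quarter-pel)

def use_neighbor_in_obmc(cur, nbr) -> bool:
    """cur/nbr: dicts with 'dir' (prediction direction), 'refs' (reference POCs per
    used list), and 'mvs' (MV per used list). Returns False when the neighboring
    motion is considered too different from the current motion."""
    if nbr['dir'] != cur['dir'] or nbr['refs'] != cur['refs']:
        return False
    for mv_c, mv_n in zip(cur['mvs'], nbr['mvs']):
        if abs(mv_c[0] - mv_n[0]) > MV_TH or abs(mv_c[1] - mv_n[1]) > MV_TH:
            return False
    return True
```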
Example 5. It is proposed that OBMC can be performed at block sizes different from the sub-block size in ATMVP/STMVP, affine mode, or other mode where each sub-block (size N × M) within a PU/CU has separate motion information.
(a) In one example, the sub-block size is 4x4, and OBMC is performed only at 8 x 8 block boundaries.
Example 6. It is proposed that how many rows/columns are processed in OBMC may depend on the PU/CU/block/sub-block size (see the sketch following the items below).
(a) In one example, if the width of the PU/CU/block/sub-block is greater than N, then 4 left columns of the PU/CU/block/sub-block are processed; otherwise, only the 2 (or 1) left columns of PU/CU/block/sub-block are processed.
(b) In one example, if the height of the PU/CU/block/sub-block is greater than N, then the 4 upper rows of the PU/CU/block/sub-block are processed; otherwise, only the 2 (or 1) upper rows of PU/CU/block/sub-block are processed.
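A minimal sketch of example 6 is shown below; N = 16 and the reduced count of 2 rows/columns are example values chosen for illustration, not normative choices.

```python
# Illustrative sketch of example 6: size-dependent number of OBMC rows/columns.
def obmc_rows_cols(width: int, height: int, n: int = 16):
    left_cols = 4 if width > n else 2
    top_rows = 4 if height > n else 2
    return left_cols, top_rows

# Example: a 32x8 block processes 4 left columns but only 2 top rows (with N = 16).
assert obmc_rows_cols(32, 8) == (4, 2)
```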
Example 7. It is proposed to enable/disable OBMC for luminance and chrominance components independently, and the rules described in examples 2 and 3 can be applied to each component separately.
Example 8. It is proposed to use a short-tap interpolation filter (such as a bilinear, 4-tap or 6-tap filter) when generating a prediction block using neighboring motion information.
(a) In one example, an asymmetric 6-tap filter is used for the luminance component. For the sub-pixel positions, 4 pixels on the left/upper side and 2 pixels on the right/lower side are used for interpolation.
Example 9. The proposed method may be applied to certain modes, block sizes/shapes and/or certain sub-block sizes.
(a) The proposed method may be applied to certain modes, such as traditional translational motion (i.e. affine mode is disabled).
(b) The proposed method can be applied to certain block sizes.
(i) In one example, it is only applied to blocks with w × h ≥ T, where w and h are the width and height of the current block.
(ii) In another example, it only applies to blocks where w ≧ T & & h ≧ T.
Example 10. The proposed methods may be applied to all color components. Alternatively, they may be applied only to certain color components. For example, they may be applied only to the luma component.
The examples described above may be incorporated in the context of methods described below, e.g., methods 2600 and 2700, which may be implemented at a video decoder.
Fig. 26 shows a flow diagram of an exemplary method for video decoding. The method 2600 includes, at step 2610, receiving a bitstream representation of a current block of video data.
The method 2600 includes, at step 2620, generating an updated first reference motion vector and an updated second reference motion vector based on the first motion vector and a weighted sum of the first reference motion vector and the second reference motion vector, respectively. In some embodiments, the first motion vector is derived based on a first reference motion vector from a first reference block and a second reference motion vector from a second reference block, and the current block is associated with the first reference block and the second reference block.
The method 2600 includes, at step 2630, processing the bitstream representation based on the updated first reference motion vector and the updated second reference motion vector to generate the current block.
In some embodiments, and as described in the context of example 1, the first motion vector is derived based on bi-directional optical flow (BIO) refinement using the first reference motion vector and the second reference motion vector. In an example, the weighted sum includes a weight based on Picture Order Count (POC) of the current block, the first reference block, and the second reference block.
In some embodiments, the processing may be based on bi-directional optical flow (BIO) refinement, decoder-side motion vector refinement (DMVR), Frame Rate Up Conversion (FRUC) techniques, or template matching techniques. In one example, an updated first reference motion vector and an updated second reference motion vector are generated for intra sub-blocks that are not on the boundary of the current block. In another example, an updated first reference motion vector and an updated second reference motion vector are generated for a subset of sub-blocks of the current block.
In some embodiments, the processing does not include motion prediction or Overlapped Block Motion Compensation (OBMC).
Fig. 27 shows a flow diagram of another exemplary method for video decoding. The method 2700 includes, at step 2710, receiving a bitstream representation of a current block of video data.
The method 2700 includes, at step 2720, generating the current block by selectively using Overlapped Block Motion Compensation (OBMC) to process the bitstream representation based on a characteristic of the current block without signaling an OBMC flag.
In some embodiments, the characteristic comprises a dimension of the current block or a location of the current block in the image. In other embodiments, the characteristic includes motion information of the current block. In one example, OBMC may not be used if the motion information of the current block is different from the motion information of the neighboring block. In another example, OBMC may be used if the motion information of the current block is the same as the motion information of the neighboring block.
In some embodiments, and as described in the context of example 7, OBMC may be applied independently to the luma and chroma components. In one example, OBMC is applied to the chroma component of the current block, and wherein OBMC is not applied to the luma component of the current block. In another example, OBMC is applied to a luma component of the current block, and wherein OBMC is not applied to a chroma component of the current block.
In some embodiments, and as described in the context of example 6, processing the bitstream representation includes processing a predetermined number of rows or columns of the current block using OBMC, and wherein the predetermined number is based on a size of a sub-block of the current block.
7. Example implementations of the disclosed technology
Fig. 28 is a block diagram of the video processing apparatus 2800. Apparatus 2800 may be used to implement one or more methods described herein. The apparatus 2800 may be implemented in a smartphone, tablet computer, internet of things (IoT) receiver, and/or the like. The apparatus 2800 may include one or more processors 2802, one or more memories 2804, and video processing hardware 2806. The processor(s) 2802 can be configured to implement one or more methods described in this document (including, but not limited to, methods 2600 and 2700). The memory(s) 2804 may be used to store data and code for implementing the methods and techniques described herein. Video processing hardware 2806 may be used to implement some of the techniques described in this document in hardware circuits.
In some embodiments, the video encoding method may be implemented using an apparatus implemented on a hardware platform as described with respect to fig. 28.
Various embodiments and techniques disclosed in this document may be described in the following list of examples.
1. A video processing method, comprising: determining that the current block is associated with a first reference motion vector and a second reference motion vector; generating an updated first reference motion vector and an updated second reference motion vector, respectively, based on a sum of the scaled first motion refinement and the first reference motion vector and a sum of the scaled first motion refinement and the second reference motion vector, wherein the first motion refinement is derived based on a bi-directional optical flow pattern; and performing a conversion between a current video block and a bitstream representation of video data comprising the current block based on the updated first reference motion vector and the updated second reference motion vector.
2. The method of example 1, wherein the first reference motion vector relates to a reference picture of a first picture list and the second reference motion vector relates to another reference picture of a second picture list.
3. The method of example 1 or 2, wherein the first motion refinement is scaled based on a Picture Order Count (POC) of the current block, a POC of the first reference block, and a POC of the second reference block.
4. The method of example 3, wherein POC differences τ0 and τ1 are calculated as follows:
τ0 = POC(current block) − POC(first reference block),
τ1 = POC(second reference block) − POC(current block).
5. The method of example 4, wherein generating the updated first reference motion vector and the updated second reference motion vector is as follows:
mvL0′x = −vx * (τ0/(τ0 + τ1)) + mvL0x,
mvL0′y = −vy * (τ0/(τ0 + τ1)) + mvL0y,
mvL1′x = vx * (τ1/(τ0 + τ1)) + mvL1x, and
mvL1′y = vy * (τ1/(τ0 + τ1)) + mvL1y,
wherein (mvL0′x, mvL0′y) is the updated first reference motion vector, (mvL1′x, mvL1′y) is the updated second reference motion vector, (vx, vy) is the first motion refinement, (mvL0x, mvL0y) is the first reference motion vector, and (mvL1x, mvL1y) is the second reference motion vector.
6. The method of any of examples 1 to 5, wherein the updated motion vectors are used for motion prediction, deblocking, or Overlapped Block Motion Compensation (OBMC).
7. The method of any of examples 1 to 6, wherein the updated motion vector is used as a Temporal Motion Vector Prediction (TMVP) in Advanced Motion Vector Prediction (AMVP), Merge mode, or affine mode.
8. The method of any of examples 1 to 6, wherein the updated motion vector is used only in motion prediction of non-immediately following Coding Units (CUs) or Prediction Units (PUs) in decoding order.
9. The method of any of examples 1-8, wherein the method is applied to translational motion and affine mode is disabled.
10. The method of any of examples 1 to 8, wherein the method is applied only to blocks with w × h ≥ T, where w and h are the width and height of the current block, and T is a threshold.
11. The method of any of examples 1 to 8, wherein the method is applied only to blocks of w ≧ T and h ≧ T, where w and h are the width and height of the current block, and T is a threshold.
12. The method of any of examples 1 to 11, wherein the method is applied to all chroma components.
13. The method of any of examples 1 to 11, wherein the method is applied to only a luma component.
14. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of examples 1-13.
15. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for implementing the method of any of examples 1 to 13.
From the foregoing it will be appreciated that specific embodiments of the disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the disclosed technology is not to be restricted except in the spirit of the appended claims.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a combination of substances that affect a machine-readable propagated signal, or a combination of one or more of them. The term "data processing unit" or "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language file), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such a device. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered exemplary only, with the examples being meant as examples. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In addition, the use of "or" is intended to include "and/or" unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples have been described, and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (15)

1. A video processing method, comprising:
determining that the current block is associated with a first reference motion vector and a second reference motion vector;
generating an updated first reference motion vector and an updated second reference motion vector, respectively, based on a sum of the scaled first motion refinement and the first reference motion vector and a sum of the scaled first motion refinement and the second reference motion vector, wherein the first motion refinement is derived based on a bi-directional optical flow pattern; and
performing a conversion between a current video block and a bitstream representation of video data comprising the current block based on the updated first reference motion vector and the updated second reference motion vector.
2. The method of claim 1, wherein the first reference motion vector relates to a reference picture of a first picture list and the second reference motion vector relates to another reference picture of a second picture list.
3. The method of claim 1 or 2, wherein the first motion refinement is scaled based on a Picture Order Count (POC) of the current block, a POC of the first reference motion vector, and a POC of the second reference motion vector.
4. The method of claim 3, wherein POC differences τ0 and τ1 are calculated as follows:
τ0 = POC(current block) − POC(first reference motion vector),
τ1 = POC(second reference motion vector) − POC(current block).
5. The method of claim 4, wherein generating the updated first reference motion vector and the updated second reference motion vector is as follows:
mvL0′x = −vx * (τ0/(τ0 + τ1)) + mvL0x,
mvL0′y = −vy * (τ0/(τ0 + τ1)) + mvL0y,
mvL1′x = vx * (τ1/(τ0 + τ1)) + mvL1x, and
mvL1′y = vy * (τ1/(τ0 + τ1)) + mvL1y,
wherein (mvL0′x, mvL0′y) is the updated first reference motion vector, (mvL1′x, mvL1′y) is the updated second reference motion vector, (vx, vy) is the first motion refinement, (mvL0x, mvL0y) is the first reference motion vector, and (mvL1x, mvL1y) is the second reference motion vector.
6. The method of claim 1 or 2, wherein the updated first reference motion vector and the updated second reference motion vector are used for motion prediction, deblocking or Overlapped Block Motion Compensation (OBMC).
7. The method of claim 1 or 2, wherein the updated first reference motion vector and the updated second reference motion vector are used as Temporal Motion Vector Prediction (TMVP) in Advanced Motion Vector Prediction (AMVP), Merge mode, or affine mode.
8. The method of claim 1 or 2, wherein the updated first reference motion vector and the updated second reference motion vector are used only in motion prediction of non-immediately following Coding Units (CUs) or Prediction Units (PUs) in decoding order.
9. The method of claim 1 or 2, wherein the method is applied to translational motion and affine mode is disabled.
10. The method of claim 1 or 2, wherein the method is applied only to blocks with w × h ≧ T, where w and h are the width and height of the current block, and T is a threshold.
11. The method of claim 1 or 2, wherein the method is applied only to blocks of w ≧ T and h ≧ T, where w and h are the width and height of the current block, and T is a threshold.
12. The method of claim 1 or 2, wherein the method is applied to all chroma components.
13. A method as claimed in claim 1 or 2, wherein the method is applied only to the luminance component.
14. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of claims 1-13.
15. A non-transitory computer-readable medium having code stored thereon, which, when executed by a processor, causes the processor to implement the method of any one of claims 1 to 13.
CN201910663403.7A 2018-07-20 2019-07-22 Motion prediction based on updated motion vectors Active CN110740321B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2018/096384 2018-07-20
CN2018096384 2018-07-20

Publications (2)

Publication Number Publication Date
CN110740321A CN110740321A (en) 2020-01-31
CN110740321B true CN110740321B (en) 2022-03-25

Family

ID=68051836

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201910663403.7A Active CN110740321B (en) 2018-07-20 2019-07-22 Motion prediction based on updated motion vectors
CN201910662916.6A Active CN110740327B (en) 2018-07-20 2019-07-22 Method, device and readable medium for processing video data
CN201910663422.XA Active CN110740332B (en) 2018-07-20 2019-07-22 Motion prediction based on updated motion vectors

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201910662916.6A Active CN110740327B (en) 2018-07-20 2019-07-22 Method, device and readable medium for processing video data
CN201910663422.XA Active CN110740332B (en) 2018-07-20 2019-07-22 Motion prediction based on updated motion vectors

Country Status (3)

Country Link
CN (3) CN110740321B (en)
TW (3) TWI734147B (en)
WO (3) WO2020016859A2 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114073090A (en) * 2019-07-01 2022-02-18 交互数字Vc控股法国公司 Affine motion compensated bi-directional optical flow refinement
CN111654708B (en) * 2020-06-07 2022-08-23 咪咕文化科技有限公司 Motion vector obtaining method and device and electronic equipment
CN111901590B (en) * 2020-06-29 2023-04-18 北京大学 Refined motion vector storage method and device for inter-frame prediction
CN112004097B (en) * 2020-07-30 2021-09-14 浙江大华技术股份有限公司 Inter-frame prediction method, image processing apparatus, and computer-readable storage medium
US20220201282A1 (en) * 2020-12-22 2022-06-23 Qualcomm Incorporated Overlapped block motion compensation
WO2022140724A1 (en) * 2020-12-22 2022-06-30 Qualcomm Incorporated Overlapped block motion compensation
CN117044206A (en) * 2021-03-18 2023-11-10 Vid拓展公司 Motion stream coding based on YUV video compression of deep learning
WO2022197772A1 (en) * 2021-03-18 2022-09-22 Vid Scale, Inc. Temporal structure-based conditional convolutional neural networks for video compression

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017188566A1 (en) * 2016-04-25 2017-11-02 엘지전자 주식회사 Inter-prediction method and apparatus in image coding system
CN107534778A (en) * 2015-04-14 2018-01-02 联发科技(新加坡)私人有限公司 Obtain the method and device of motion vector prediction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107465922B (en) * 2011-11-08 2020-10-09 株式会社Kt Method for decoding video signal by using decoding device
US9883203B2 (en) * 2011-11-18 2018-01-30 Qualcomm Incorporated Adaptive overlapped block motion compensation
US10230980B2 (en) * 2015-01-26 2019-03-12 Qualcomm Incorporated Overlapped motion compensation for video coding
CN105578195B (en) * 2015-12-24 2019-03-12 福州瑞芯微电子股份有限公司 A kind of H.264 inter-frame prediction system
KR20190029748A (en) * 2016-09-22 2019-03-20 엘지전자 주식회사 Inter prediction method and apparatus in video coding system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107534778A (en) * 2015-04-14 2018-01-02 联发科技(新加坡)私人有限公司 Obtain the method and device of motion vector prediction
WO2017188566A1 (en) * 2016-04-25 2017-11-02 엘지전자 주식회사 Inter-prediction method and apparatus in image coding system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Algorithm Description of Joint Exploration Test Model 5; Chen J et al.; 5th JVET Meeting; 2017-02-11; Section 2.3 *
EE3-related: A block-based design for Bi-directional optical flow (BIO); 6th JVET Meeting; 2017-03-15; Sections 1-2 *

Also Published As

Publication number Publication date
TWI734147B (en) 2021-07-21
CN110740321A (en) 2020-01-31
WO2020016859A2 (en) 2020-01-23
WO2020016857A1 (en) 2020-01-23
TW202008787A (en) 2020-02-16
WO2020016859A3 (en) 2020-03-05
TW202008786A (en) 2020-02-16
TWI723472B (en) 2021-04-01
WO2020016858A1 (en) 2020-01-23
TWI709332B (en) 2020-11-01
CN110740332A (en) 2020-01-31
CN110740327B (en) 2022-05-31
CN110740327A (en) 2020-01-31
CN110740332B (en) 2022-09-13
TW202023283A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
CN110809159B (en) Clipping of updated or derived MVs
CN111010569B (en) Improvement of temporal gradient calculation in BIO
CN110933420B (en) Fast algorithm for adaptive motion vector resolution in affine mode
CN110620923B (en) Generalized motion vector difference resolution
CN110740321B (en) Motion prediction based on updated motion vectors
CN112956197A (en) Restriction of decoder-side motion vector derivation based on coding information
CN113412623A (en) Recording context of affine mode adaptive motion vector resolution
CN113366851A (en) Fast algorithm for symmetric motion vector difference coding and decoding mode
CN110809164B (en) MV precision in BIO
CN111010570B (en) Affine motion information based size restriction
CN110876064B (en) Partially interleaved prediction
CN110881124B (en) Two-step inter prediction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant