CN114598882A - Symmetric intra block copy mode - Google Patents

Symmetric intra block copy mode

Info

Publication number
CN114598882A
Authority
CN
China
Prior art keywords
video
sibc
block
color component
component blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111483742.0A
Other languages
Chinese (zh)
Inventor
张凯
张莉
刘鸿彬
张玉槐
马思伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd, ByteDance Inc filed Critical Beijing ByteDance Network Technology Co Ltd
Publication of CN114598882A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A symmetric intra block copy mode is described, together with systems, methods, and apparatus for encoding, decoding, or transcoding digital video. An example method of video processing includes performing a conversion between a video block of a video comprising two or more color component blocks and a bitstream representation of the video. The two or more color component blocks are coded using a plurality of prediction methods according to a coding rule, at least one of the plurality of prediction methods being a symmetric intra block copy (SIBC) method.

Description

Symmetric intra block copy mode
Technical Field
This patent document relates to digital video encoding and decoding techniques, including video encoding, transcoding, or decoding.
Background
Digital video accounts for the largest bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
Disclosure of Invention
This document discloses techniques that may be used by video encoders and decoders in order to process a codec representation of a video or image according to a file format.
In one example aspect, a video processing method is disclosed. The method comprises the following steps: performing a conversion between a video block of video comprising two or more color component blocks and a bitstream representation of the video; wherein the two or more color component blocks are coded and decoded using a plurality of prediction methods according to coding and decoding rules, at least one of the plurality of prediction methods being a Symmetric Intra Block Copy (SIBC) method.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: for a conversion between a video block of a video comprising one or more color component blocks and a codec representation of the video, determining whether to use a Symmetric Intra Block Copy (SIBC) method to codec the color component blocks of the video block according to a codec rule; and performing a conversion based on the determination.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: performing a conversion between a video block of a video comprising one or more color component blocks and a bitstream representation of the video, wherein the bitstream representation complies with a format rule, wherein the format rule specifies whether and how to use a Symmetric Intra Block Copy (SIBC) method to encode and decode the color component blocks of the video block in the bitstream representation.
In yet another example aspect, a video encoder apparatus is disclosed. The video encoder comprises a processor configured to implement the above-described method.
In yet another example aspect, a video decoder apparatus is disclosed. The video decoder comprises a processor configured to implement the above method.
In yet another example aspect, a computer-readable medium having code stored thereon is disclosed. The code embodies one of the methods described herein in the form of processor-executable code.
In yet another example aspect, a computer-readable medium having a bitstream stored thereon is disclosed. The bitstream is generated or processed using the methods described in this document.
These and other features are described throughout this document.
Drawings
Fig. 1 is a block diagram of an example video processing system.
Fig. 2 is a block diagram of a video processing apparatus.
Fig. 3 is a flow diagram of an example method of video processing.
Fig. 4 is a block diagram illustrating a video codec system according to some embodiments of the present disclosure.
Fig. 5 is a block diagram illustrating an encoder in accordance with some embodiments of the present disclosure.
Fig. 6 is a block diagram illustrating a decoder according to some embodiments of the present disclosure.
Fig. 7 is a block diagram of a video encoder.
Detailed Description
For ease of understanding, section headings are used in this document and do not limit the applicability of the techniques and embodiments disclosed in each section to that section only. Furthermore, the use of the h.266 term in some descriptions is merely for ease of understanding and is not intended to limit the scope of the disclosed technology. As such, the techniques described herein are also applicable to other video codec protocols and designs. In this document, with respect to the current draft of the VVC specification, editing changes to text are shown by a strikethrough indicating deletion of text and highlighting (including bold italics) indicating addition of text.
1. Preliminary discussion
This document relates to video coding and decoding techniques. In particular, it relates to transform skip modes and transform types (i.e., including identity transforms) in video codecs. The techniques may be applied to existing video codec standards, such as HEVC, or to the standard to be finalized (Versatile Video Coding, VVC). They may also be applicable to future video codec standards or video codecs.
2. Video coding and decoding foundation
Video codec standards have evolved mainly through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video codec standards have been based on a hybrid video codec structure in which temporal prediction plus transform coding is employed. To explore future video codec technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded jointly by VCEG and MPEG in 2015. Since then, many new methods have been adopted by JVET and put into reference software named the Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bitrate reduction compared to HEVC.
The latest version of the VVC draft, Versatile Video Coding (Draft 10), can be found at the following web site:
http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=10399
the latest reference software named VTM for VVC can be found at the following web site:
https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/-/tags/VTM-10.0
2.1. codec flow for a typical video codec
Fig. 7 shows an example block diagram of a VVC encoder, which contains three in-loop filtering blocks: deblocking filter (DF), sample adaptive offset (SAO), and ALF. Unlike DF, which uses predefined filters, SAO and ALF utilize the original samples of the current picture to reduce the mean square error between the original and reconstructed samples, by adding an offset and by applying a finite impulse response (FIR) filter, respectively, with coded side information signaling the offsets and filter coefficients. ALF is located at the last processing stage of each picture and can be regarded as a tool that tries to catch and fix artifacts created by the previous stages.
3. Examples of technical problems solved by the disclosed solution
The current design of IBC has the following problems:
1. because many text characters and computer-generated graphics are symmetric, symmetry can be observed more often in pictures of screen content. Independently coding a pattern and its symmetric pictures is redundant.
4. Example embodiments and solutions
The list below should be considered as examples to explain the general inventive concepts. The items should not be interpreted narrowly, and they may be combined in any manner.
min(x, y) returns the smaller of x and y.
max(x, y) returns the larger of x and y.
Symmetric/mirror intra block copy
It is proposed that, before an IBC reference block is used to predict the current block, it is first determined whether the reference block is flipped horizontally or vertically, or modified by another transform method (e.g., clockwise or counterclockwise rotation). This method is called "symmetric IBC" (SIBC).
A "block" may be a transform unit (TU) / prediction unit (PU) / coding unit (CU) / transform block (TB) / prediction block (PB) / coding block (CB) or another video unit covering multiple samples/pixels. A TU/PU/CU may contain one or more color components: only the luma component for the dual-tree partition case when the color component currently being coded is luma; the two chroma components for the dual-tree partition case when the color component currently being coded is chroma; or all three color components for the single-tree partition case.
In the following disclosure, W and H denote the width and height of the mentioned block, or of the mentioned block in a particular color component (e.g., the luma component of a CU containing three color components).
1. It is proposed to use multiple prediction methods for a block, wherein at least one of the multiple prediction methods is SIBC.
a. In one example, SIBC is utilized for the luma block, while an intra prediction method is applied for the other two color components.
b. In one example, SIBC is utilized for the luma block, while an inter prediction method is applied for the other two color components.
c. In one example, SIBC is utilized for the luma block, while a palette prediction method is applied for the other two color components.
d. In one example, SIBC is utilized for the luma block, while prediction from the reconstructed luma block is applied for the other two color components.
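The per-component choice in bullet 1 can be sketched in Python. This is only an illustration; the names `PredMethod` and `pick_methods` are hypothetical and not part of any codec specification.

```python
from enum import Enum

class PredMethod(Enum):
    SIBC = "sibc"                        # symmetric intra block copy
    INTRA = "intra"                      # variant 1.a
    INTER = "inter"                      # variant 1.b
    PALETTE = "palette"                  # variant 1.c
    FROM_RECON_LUMA = "from_recon_luma"  # variant 1.d

def pick_methods(chroma_method: PredMethod) -> dict:
    """Luma is coded with SIBC; the two chroma components share one
    of the other prediction methods, as in variants 1.a-1.d."""
    return {"Y": PredMethod.SIBC, "Cb": chroma_method, "Cr": chroma_method}

methods = pick_methods(PredMethod.INTRA)  # variant 1.a
```

Any of the other variants is obtained by passing a different `chroma_method`; the luma assignment is fixed to SIBC in all four.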
Implicit signaling using SIBC
2. The determination to use SIBC for the first block may depend on the color components of the first block.
a. For example, SIBC may be applied to a first component (e.g., luma) of a first block, while normal intra prediction is always used for a second component (e.g., Cb or Cr) of the first block.
b. For example, SIBC may be applied to all color components of the first block.
3. The determination to use SIBC for the first block may depend on the dimensions of the first block.
a. SIBC is only applicable when W >= T1 and H >= T2. For example, T1 = T2 = 4.
b. SIBC is only applicable when W <= T1 and H <= T2. For example, T1 = T2 = 16.
c. SIBC is only applicable when max(W, H) <= T1. For example, T1 = 16.
d. SIBC is only applicable when min(W, H) <= T1. For example, T1 = 4.
e. SIBC is only applicable when W × H >= T1. For example, T1 = 16.
f. SIBC is only applicable when W × H <= T1. For example, T1 = 256.
g. SIBC is only applicable when max(W, H)/min(W, H) <= T1. For example, T1 = 1 or T1 = 2.
h. In the above bullets, ">=" may be replaced with ">", and "<=" may be replaced with "<".
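The size conditions in bullet 3 are simple predicates on W and H; the following sketch uses the example thresholds from variants 3.a and 3.f and is illustrative only (function names are assumptions):

```python
def sibc_allowed_by_size(w: int, h: int, t1: int = 4, t2: int = 4) -> bool:
    """Variant 3.a: SIBC is applicable only when W >= T1 and H >= T2."""
    return w >= t1 and h >= t2

def sibc_allowed_by_area(w: int, h: int, t1: int = 256) -> bool:
    """Variant 3.f: SIBC is applicable only when W * H <= T1."""
    return w * h <= t1

# A 4x4 block passes 3.a with T1 = T2 = 4; a 32x32 block fails 3.f with T1 = 256.
assert sibc_allowed_by_size(4, 4)
assert not sibc_allowed_by_area(32, 32)
```

The remaining variants (3.b-3.e, 3.g) follow the same pattern with the respective comparison.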
4. The determination to use SIBC may further depend on the coding information of the current block.
a. In one example, the determination may further depend on the block vector (BV) information.
i. In one example, if BVy is zero, then vertical SIBC does not apply to the current block.
ii. In one example, if BVx is zero, then horizontal SIBC does not apply to the current block.
b. In one example, whether SIBC is applied may depend on the ABVR precision of the IBC-coded block.
i. In one example, SIBC is only applicable when the ABVR precision is equal to one or some specific values. For example, SIBC is only applicable when the ABVR precision is equal to 1 pixel.
c. In one example, the determination may further depend on block locations within the slice/picture/sub-picture.
i. In one example, SIBC may be disabled if the block is located at a picture boundary.
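The BV- and position-based conditions of bullet 4 can be combined into a single gating function; the function name and the returned set convention are assumptions made for illustration:

```python
def allowed_sibc_types(bv_x: int, bv_y: int,
                       at_picture_boundary: bool = False) -> set:
    """Bullet 4: vertical SIBC does not apply when BVy == 0 (4.a.i),
    horizontal SIBC does not apply when BVx == 0 (4.a.ii), and SIBC may be
    disabled entirely for a block at a picture boundary (4.c.i)."""
    if at_picture_boundary:
        return set()
    types = set()
    if bv_y != 0:
        types.add("vertical")
    if bv_x != 0:
        types.add("horizontal")
    return types

# A purely horizontal displacement leaves only horizontal SIBC available.
assert allowed_sibc_types(-8, 0) == {"horizontal"}
```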
Explicit signaling using SIBC
5. Multi-level signaling indicating whether and/or how to apply SIBC may be utilized, with one signaling at a higher level (e.g., at the level comprising more samples than a block) and one signaling at a lower level (e.g., at the block level).
a. In one example, the first higher level is a sequence level and the second higher level is a picture level.
i. Optionally, further, the third higher level is a slice level.
Optionally, further, the lower level is a block level.
b. In one example, the first higher level is a picture level and the second higher level is a slice level.
i. Optionally, further, the lower level is a block level.
c. Optionally, in addition, the indication at a higher level may be signaled conditionally, e.g. depending on whether IBC is enabled or not.
d. In one example, the information how to apply SIBC may include whether a type of SIBC (e.g., horizontal SIBC or vertical SIBC) is applicable to a video unit (e.g., a sequence or a picture or a slice).
e. In one example, whether SIBC may be applied to a block may be signaled in a higher level unit, e.g., in SPS/sequence header/PPS/picture header/slice header.
i. For example, sibc_enable_flag may be signaled in the sequence header to indicate whether SIBC can be applied to the sequence.
1) sibc_enable_flag may be signaled only if IBC is allowed for the sequence (e.g., ibc_enable_flag is equal to 1).
6. Information on whether SIBC is used for a first block (denoted as sibc_flag) may be signaled conditionally.
a. If sibc_flag is not present in the bitstream, sibc_flag is inferred to be a default value, e.g., 0.
b. sibc_flag is signaled only when the first block uses IBC.
c. sibc_flag is signaled only if the higher-level unit containing the first block indicates that SIBC is allowed to be used.
d. If it is determined that SIBC does not apply to the block, sibc_flag is not signaled.
e. Whether sibc_flag is signaled may depend on the dimensions of the first block.
i. sibc_flag is signaled only when W >= T1 and H >= T2. For example, T1 = T2 = 4.
ii. sibc_flag is signaled only when W <= T1 and H <= T2. For example, T1 = T2 = 16.
iii. sibc_flag is signaled only when max(W, H) <= T1. For example, T1 = 16.
iv. sibc_flag is signaled only when min(W, H) <= T1. For example, T1 = 4.
v. sibc_flag is signaled only when W × H >= T1. For example, T1 = 16.
vi. sibc_flag is signaled only when W × H <= T1. For example, T1 = 256.
vii. sibc_flag is signaled only when max(W, H)/min(W, H) <= T1. For example, T1 = 1 or T1 = 2.
viii. In the above bullets, ">=" may be replaced with ">", and "<=" may be replaced with "<".
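Bullets 5 and 6 together gate the block-level flag on higher-level flags; the following sketch mirrors the flag names used in the text, while the function itself is an illustrative assumption:

```python
def sibc_flag_may_be_signaled(ibc_enable_flag: int,
                              sibc_enable_flag: int,
                              block_uses_ibc: bool) -> bool:
    """sibc_enable_flag is present in the sequence header only when IBC is
    allowed for the sequence (5.e.i.1); the block-level sibc_flag is then
    signaled only when the higher level allows SIBC and the block itself
    uses IBC (6.b, 6.c)."""
    if not ibc_enable_flag:
        return False  # sibc_enable_flag is absent and inferred to be 0
    return bool(sibc_enable_flag) and block_uses_ibc

assert sibc_flag_may_be_signaled(1, 1, True)
assert not sibc_flag_may_be_signaled(0, 1, True)
```

When the function returns False, sibc_flag is not present and is inferred to be the default value 0, as in 6.a.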
7. After the first syntax element of a block (denoted sibc_flag), a second syntax element (denoted sibc_dir_flag) indicating which SIBC type to use may be signaled.
a. In one example, sibc_flag indicates whether SIBC is applied to the block.
b. sibc_dir_flag indicates which SIBC type is applied.
i. For example, sibc_dir_flag equal to 0 means that horizontal SIBC is used, and sibc_dir_flag equal to 1 means that vertical SIBC is used.
c. sibc_dir_flag is conditionally signaled depending on sibc_flag.
i. sibc_dir_flag is signaled only when sibc_flag indicates that SIBC is used.
d. sibc_dir_flag and/or sibc_flag may be coded by arithmetic coding.
i. sibc_dir_flag and/or sibc_flag may be bypass coded.
ii. sibc_dir_flag and/or sibc_flag may be coded with one or more contexts.
1) The context of sibc_flag used to code the current block may depend on the sibc_flag of neighboring blocks.
2) The context of sibc_dir_flag used to code the current block may depend on the sibc_dir_flag of neighboring blocks.
3) The context may be derived based on the block dimensions.
4) The context may be derived based on the coding information of neighboring blocks.
e. If it is determined that only one SIBC type is applicable, sibc_dir_flag is not signaled.
f. Alternatively, which SIBC type is applicable is not signaled, but is derived on the fly.
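A minimal decoder-side parsing sketch of bullet 7 follows; `read_bin` stands in for an arithmetic-decoder bin read, and all names are assumptions:

```python
def parse_sibc_syntax(read_bin, sibc_allowed: bool, one_type_only: bool = False):
    """sibc_flag is read only when SIBC may apply, otherwise inferred 0 (6.a);
    sibc_dir_flag follows sibc_flag and is read only when sibc_flag == 1 and
    more than one SIBC type is applicable (7.c.i, 7.e)."""
    sibc_flag = read_bin() if sibc_allowed else 0
    sibc_dir_flag = 0
    if sibc_flag and not one_type_only:
        sibc_dir_flag = read_bin()  # 0: horizontal SIBC, 1: vertical SIBC (7.b.i)
    return sibc_flag, sibc_dir_flag

bins = iter([1, 1])
flag, direction = parse_sibc_syntax(lambda: next(bins), sibc_allowed=True)
assert (flag, direction) == (1, 1)  # vertical SIBC selected
```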
8. The information on whether SIBC is used for a first block and, if SIBC is used, which SIBC type to use (denoted as sibc_type_idx) may be signaled with a non-binary syntax element.
a. In one example, the original IBC (i.e., without any transform) is further treated as one more transform type (e.g., NON_TRANSFORM) of the SIBC mode.
b. In one example, the non-binary syntax element may be binarized into a bin string using truncated unary, fixed length, K-th order Exponential-Golomb coding, etc.
i. Optionally, in addition, at least one bin is context coded and at least one other bin is bypass coded.
ii. Optionally, in addition, the same context is utilized for at least two bins.
iii. Optionally, in addition, different contexts are utilized for at least two bins in the bin string.
iv. A context may be derived based on the block dimensions.
v. A context may be derived based on the coding information of neighboring blocks.
c. In one example, when the non-binary syntax element is equal to K (e.g., K = 0), SIBC is disabled and the original IBC design (i.e., without any transform) is used.
i. Optionally, in addition, SIBC is enabled when the non-binary syntax element is not equal to K.
ii. Optionally, in addition, the L-th transform of the SIBC method is applied when the non-binary syntax element is not equal to K and is equal to L.
d. In one example, when the first bin of the non-binary syntax element is equal to K (e.g., K = 0), SIBC is disabled and the original IBC design (i.e., without any transform) is used.
i. Optionally, in addition, SIBC is enabled when the first bin is not equal to K.
e. In one example, the mapping between the decoded values of the non-binary syntax element and the transform type may be fixed.
i. Alternatively, the mapping may be determined on the fly.
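One possible realization of bullet 8: a non-binary element whose value 0 keeps the original IBC (8.c) and whose other values select a SIBC transform through a fixed mapping (8.e), binarized with truncated unary (8.b). The ordering of the mapping and all names are assumptions:

```python
SIBC_TYPE_MAP = {0: "NON_TRANSFORM",    # original IBC, no transform (8.a, 8.c)
                 1: "HORIZONTAL_FLIP",  # illustrative ordering of transforms
                 2: "VERTICAL_FLIP"}

def truncated_unary(value: int, c_max: int) -> list:
    """Truncated-unary binarization (one option in 8.b): `value` ones followed
    by a terminating zero, which is omitted when value == c_max."""
    bins = [1] * value
    if value < c_max:
        bins.append(0)
    return bins

assert truncated_unary(0, 2) == [0]     # first bin 0: SIBC disabled (8.d)
assert truncated_unary(2, 2) == [1, 1]  # the last value needs no terminator
assert SIBC_TYPE_MAP[0] == "NON_TRANSFORM"
```

Context or bypass coding of the individual bins (8.b.i-8.b.v) is orthogonal to this binarization and is not shown.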
5. Examples of the invention
5.1. Example #1
This section gives an example of a solution using an Implicit Selection of Transform Skip (ISTS) mode. Basically, it follows the design principle of Implicit Selection of Transform (IST) adopted in AVS3. A high-level flag is signaled in the picture header to indicate whether ISTS is enabled. If ISTS is enabled, the allowed transform set is set to {DCT-II, TS}, and the determination of the TS mode is based on the parity of the number of non-zero coefficients in a block. Simulation results show that, compared to HPM 6.0, the proposed ISTS achieves 15.86% and 12.79% bitrate reductions for screen-content coding under the AI and RA configurations, respectively. The increase of encoder and decoder complexity is asserted to be negligible.
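The parity rule described above can be sketched as follows. The text does not state which parity maps to TS, so the convention used in the code is an assumption:

```python
def ists_transform(num_nonzero_coeffs: int, ists_enabled: bool) -> str:
    """When ISTS is enabled, the transform set is {DCT-II, TS} and the choice
    is implied by the parity of the number of non-zero coefficients in the
    block, so no explicit per-block flag is needed."""
    if not ists_enabled:
        return "DCT-II"
    # Assumed convention: odd parity selects transform skip.
    return "TS" if num_nonzero_coeffs % 2 == 1 else "DCT-II"

assert ists_transform(5, ists_enabled=True) == "TS"
assert ists_transform(4, ists_enabled=True) == "DCT-II"
```

The encoder enforces the chosen transform by adjusting the coefficient parity, which is why the signaling cost is essentially zero.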
5.1.2. Proposed method
In this contribution, SIBC is proposed. A flag is signaled in each IBC-coded CU with high-precision ABVR whose size is in the range of 4×4 to 32×32, to indicate whether the prediction mode of the current CU is symmetric: 0 indicates the normal IBC mode and 1 indicates the SIBC mode. When SIBC is applied, one more 1-bit flag is signaled to indicate whether horizontal or vertical flipping is used. After horizontal flipping, the sample value at coordinates (x, y), denoted as S(x, y), is derived as
S(x, y) = S'(W-1-x, y),    (1)
where S' represents the sample values before flipping and W represents the block width. Similarly, after vertical flipping, the sample value denoted as S(x, y) is derived as
S(x, y) = S'(x, H-1-y),    (2)
where H represents the block height.
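Equations (1) and (2) amount to reversing the reference block along one axis; a small sketch, with the block represented as a list of rows:

```python
def sibc_flip(block, horizontal: bool):
    """Horizontal flip: S(x, y) = S'(W-1-x, y), i.e., reverse each row (eq. 1).
    Vertical flip: S(x, y) = S'(x, H-1-y), i.e., reverse the row order (eq. 2)."""
    if horizontal:
        return [row[::-1] for row in block]
    return block[::-1]

ref = [[1, 2],
       [3, 4]]
assert sibc_flip(ref, horizontal=True) == [[2, 1], [4, 3]]
assert sibc_flip(ref, horizontal=False) == [[3, 4], [1, 2]]
```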
5.1.3. Modifications proposed to syntax tables, semantics and decoding processes
In the modifications to syntax tables, semantics, and decoding processes below, newly added text is marked with bold underline and deleted text is marked with strikethrough (the strikethrough marking is rendered as image Figure BDA0003396588540000093 in the original document).
7.1.2.2 sequence header
TABLE 14 sequence header definitions
[Sequence header syntax table, rendered as image Figure BDA0003396588540000091 in the original document]
7.1.6 coding and decoding unit
TABLE 15 codec Unit definitions
[Coding unit syntax table, rendered as images Figure BDA0003396588540000092 and Figure BDA0003396588540000101 in the original document]
7.2.2.2 sequence header
The symmetric intra block copy prediction enable flag sibc_enable_flag is a binary variable. A value of '1' indicates that the symmetric intra block copy prediction method may be used; a value of '0' indicates that the symmetric intra block copy prediction method is not used. The value of SibcEnableFlag is equal to sibc_enable_flag. If sibc_enable_flag is not present in the bitstream, the value of SibcEnableFlag is 0.
7.2.6 coding and decoding unit
The symmetric intra block copy mode flag sibc_flag is a binary variable. A value of '1' indicates that the current coding unit uses the symmetric intra block copy prediction mode; a value of '0' indicates that the current coding unit does not use the symmetric intra block copy prediction mode. The value of SibcFlag is equal to sibc_flag. If sibc_flag is not present in the bitstream, the value of SibcFlag is equal to 0.
The symmetric intra block copy mode direction flag sibc_dir_flag is a binary variable. A value of '1' indicates that the current coding unit uses the vertically symmetric intra block copy prediction mode; a value of '0' indicates that the current coding unit uses the horizontally symmetric intra block copy prediction mode. The value of SibcDirFlag is equal to sibc_dir_flag. If sibc_dir_flag is not present in the bitstream, the value of SibcDirFlag is equal to 0.
8.3.3.2 derivation of binary symbol model
8.3.3.2.1 derivation of binary symbolic model
Table 61 ctxIdxStart and ctxIdxInc corresponding to syntax elements

Syntax element    ctxIdxInc    ctxIdxStart    Number of ctx
abvr_index        binIdx       109            2
sibc_flag         0            111            1
sibc_dir_flag     0            112            1
9.8 symmetric/mirror intra block copy prediction
9.8.2 prediction sample derivation
If the current prediction block is a luma prediction block, let (xE, yE) be the position of the top-left sample of the current prediction block in the luma sample matrix of the current picture, and perform the following operations:
When SibcFlag is 0, the element predMatrixIbc[x][y] in the luma prediction sample matrix predMatrixIbc is the sample value at position ((xE + x) + (BvE_x >> 2), (yE + y) + (BvE_y >> 2)) in the integer-pixel-precision luma sample matrix of the unfiltered reconstruction of the current picture, where BvE_x and BvE_y are the horizontal and vertical components, respectively, of the block vector BvE of the current prediction unit.
When SibcFlag is 1 and SibcDirFlag is 0, the element predMatrixIbc[x][y] in the luma prediction sample matrix predMatrixIbc is the sample value at position ((xE + W - 1 - x) + (BvE_x >> 2), (yE + y) + (BvE_y >> 2)) in the integer-pixel-precision luma sample matrix of the unfiltered reconstruction of the current picture, where W is the width of the luma prediction sample matrix.
When SibcFlag is 1 and SibcDirFlag is 1, the element predMatrixIbc[x][y] in the luma prediction sample matrix predMatrixIbc is the sample value at position ((xE + x) + (BvE_x >> 2), (yE + H - 1 - y) + (BvE_y >> 2)) in the integer-pixel-precision luma sample matrix of the unfiltered reconstruction of the current picture, where H is the height of the luma prediction sample matrix.
If the current prediction block is a chroma prediction block, let (xE, yE) be the position, in the luma sample matrix of the current picture, of the top-left sample of the luma prediction block containing the top-left sample of the current prediction block, and perform the following operations:
When SibcFlag is 0, the element predMatrixIbc[x][y] in the chroma prediction sample matrix predMatrixIbc is the sample value at position ((xE + 2×x) + (BvC_x >> 2), (yE + 2×y) + (BvC_y >> 2)) in the 1/2-precision chroma sample matrix of the unfiltered reconstruction of the current picture, where BvC_x and BvC_y are the horizontal and vertical components, respectively, of the block vector BvE of the spatial motion information storage unit containing the bottom-right sample of the current prediction block. The element values at all positions in the 1/2-precision chroma sample matrix of the reference picture are obtained by interpolation.
When SibcFlag is 1 and SibcDirFlag is 0, the element predMatrixIbc[x][y] in the chroma prediction sample matrix predMatrixIbc is the sample value at position ((xE + 2×(W - 1 - x)) + (BvC_x >> 2), (yE + 2×y) + (BvC_y >> 2)) in the 1/2-precision chroma sample matrix of the unfiltered reconstruction of the current picture, where W is the width of the chroma prediction sample matrix.
When SibcFlag is 1 and SibcDirFlag is 1, the element predMatrixIbc[x][y] in the chroma prediction sample matrix predMatrixIbc is the sample value at position ((xE + 2×x) + (BvC_x >> 2), (yE + 2×(H - 1 - y)) + (BvC_y >> 2)) in the 1/2-precision chroma sample matrix of the unfiltered reconstruction of the current picture, where H is the height of the chroma prediction sample matrix.
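The luma position computation of 9.8.2 can be condensed into one helper. It follows the reading of the partly garbled original text, in which the block vector is stored in quarter-pel precision and shifted down by 2; treat this as an interpretation, not the normative formula:

```python
def luma_ref_pos(xE, yE, x, y, W, H, bve_x, bve_y, sibc_flag, sibc_dir_flag):
    """Position in the integer-precision luma sample matrix from which
    predMatrixIbc[x][y] is taken: the x index is mirrored within the block
    for horizontal SIBC (SibcDirFlag == 0) and the y index for vertical SIBC
    (SibcDirFlag == 1)."""
    src_x = xE + ((W - 1 - x) if sibc_flag and sibc_dir_flag == 0 else x)
    src_y = yE + ((H - 1 - y) if sibc_flag and sibc_dir_flag == 1 else y)
    return src_x + (bve_x >> 2), src_y + (bve_y >> 2)

# Normal IBC (SibcFlag == 0): plain displaced copy.
assert luma_ref_pos(16, 16, 0, 0, 4, 4, -16, 0, 0, 0) == (12, 16)
# Horizontal SIBC: x is mirrored within the block before the displacement.
assert luma_ref_pos(16, 16, 0, 0, 4, 4, -16, 0, 1, 0) == (15, 16)
```

The chroma variant differs only in using (xE + 2×x, yE + 2×y) and the chroma block vector BvC.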
Fig. 1 is a block diagram illustrating an example video processing system 1900 in which various techniques disclosed herein may be implemented. Various embodiments may include some or all of the components of system 1900. The system 1900 may include an input 1902 for receiving video content. The video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format. Input 1902 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of Network interfaces include wired interfaces such as ethernet, Passive Optical Network (PON), etc., and wireless interfaces such as Wi-Fi or cellular interfaces.
The system 1900 may include a codec component 1904 that may implement various codec or encoding methods described in this document. The codec component 1904 may reduce the average bitrate of the video from the input 1902 to the output of the codec component 1904 to produce a codec representation of the video. Codec techniques are therefore sometimes referred to as video compression or video transcoding techniques. The output of the codec component 1904 may be stored or transmitted via a communication connection, as represented by component 1906. The stored or communicated bitstream (or codec) representation of the video received at input 1902 may be used by component 1908 to generate pixel values or displayable video that is communicated to display interface 1910. The process of generating user-viewable video from a bitstream representation is sometimes referred to as video decompression. Furthermore, while certain video processing operations are referred to as "codec" operations or tools, it will be understood that codec tools or operations are used at the encoder, and that corresponding decoding tools or operations that reverse the results of the codec will be performed by the decoder.
Examples of a peripheral bus interface or display interface may include a Universal Serial Bus (USB), a High-Definition Multimedia Interface (HDMI), or DisplayPort, among others. Examples of storage interfaces include SATA (Serial Advanced Technology Attachment), PCI, IDE interfaces, and the like. The techniques described in this document may be embodied in various electronic devices such as mobile phones, laptops, smartphones, or other devices capable of performing digital data processing and/or video display.
Fig. 2 is a block diagram of the video processing device 3600. The apparatus 3600 may be used to implement one or more methods described herein. The apparatus 3600 may be embodied in a smartphone, tablet, computer, internet of things (IoT) receiver, and/or the like. The apparatus 3600 may include one or more processors 3602, one or more memories 3604, and video processing hardware 3606. The processor(s) 3602 may be configured to implement one or more of the methods described in this document. The memories 3604 may be used to store data and code for implementing the methods and techniques described herein. The video processing hardware 3606 may be used to implement some of the techniques described in this document in hardware circuitry. In some embodiments, the video processing hardware 3606 may be included, at least in part, in the processor 3602 (e.g., a graphics coprocessor).
Fig. 4 is a block diagram illustrating an example video codec system 100 that may utilize techniques of the present disclosure.
As shown in fig. 4, the video codec system 100 may include a source device 110 and a target device 120. Source device 110 generates encoded video data, where source device 110 may be referred to as a video encoding device. Target device 120 may decode the encoded video data generated by source device 110, and target device 120 may be referred to as a video decoding device.
The source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
The video source 112 may include sources such as a video capture device, an interface that receives video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of these sources. The video data may include one or more pictures. The video encoder 114 encodes video data from the video source 112 to generate a bitstream. The bitstream may comprise a sequence of bits forming a codec representation of the video data. The bitstream may include coded pictures and related data. A coded picture is a coded representation of a picture. The related data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be transmitted directly to the target device 120 via the I/O interface 116 through the network 130 a. The encoded video data may also be stored on a storage medium/server 130b for access by the target device 120.
Target device 120 may include I/O interface 126, video decoder 124, and display device 122.
I/O interface 126 may include a receiver and/or a modem. I/O interface 126 may obtain encoded video data from source device 110 or storage medium/server 130b. The video decoder 124 may decode the encoded video data. Display device 122 may display the decoded video data to a user. The display device 122 may be integrated with the target device 120, or may be external to the target device 120, which may be configured to interface with an external display device.
The video encoder 114 and the video decoder 124 may operate in accordance with video compression standards such as the High Efficiency Video Codec (HEVC) standard, the Versatile Video Coding (VVC) standard, and other current and/or additional standards.
Fig. 5 is a block diagram illustrating an example of a video encoder 200, the video encoder 200 may be the video encoder 114 in the system 100 shown in fig. 4.
Video encoder 200 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 5, video encoder 200 includes a number of functional components. The techniques described in this disclosure may be shared among various components of video encoder 200. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
The functional components of the video encoder 200 may include a partitioning unit 201, a prediction unit 202 (which may include a mode selection unit 203, a motion estimation unit 204, a motion compensation unit 205, and an intra prediction unit 206), a residual generation unit 207, a transform unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy coding unit 214.
In other examples, video encoder 200 may include more, fewer, or different functional components. In an example, the prediction unit 202 may include an Intra Block Copy (IBC) unit. The IBC unit may perform prediction in IBC mode, where the at least one reference picture is a picture in which the current video block is located.
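To illustrate the IBC idea mentioned above — predicting a block from already-reconstructed samples of the same picture, displaced by a block vector — here is a minimal sketch. The function name, parameter names, and the list-of-lists picture layout are illustrative assumptions, not structures defined in this document:

```python
def ibc_predict(picture, x, y, bv_x, bv_y, width, height):
    """Copy a width x height prediction block from the SAME picture,
    displaced from position (x, y) by the block vector (bv_x, bv_y).
    The referenced area must lie in the already-reconstructed region."""
    src_x, src_y = x + bv_x, y + bv_y
    return [row[src_x:src_x + width]
            for row in picture[src_y:src_y + height]]
```

For example, with a block vector of (-2, -2), the current 2x2 block at (2, 2) is predicted from the previously reconstructed 2x2 block at (0, 0).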
Furthermore, some components such as the motion estimation unit 204 and the motion compensation unit 205 may be highly integrated, but are separately represented in the example of fig. 5 for explanation purposes.
The partition unit 201 may partition a picture into one or more video blocks. The video encoder 200 and the video decoder 300 may support various video block sizes.
The mode selection unit 203 may select one of the coding modes (e.g., intra or inter) based on the error result, and supply the resulting intra coded block or inter coded block to the residual generation unit 207 to generate residual block data, and to the reconstruction unit 212 to reconstruct the coded block to be used as a reference picture. In some examples, mode selection unit 203 may select a combined intra and inter prediction (CIIP) mode, where the prediction is based on an inter prediction signal and an intra prediction signal. In the case of inter prediction, the mode selection unit 203 may also select the resolution (e.g., sub-pixel or integer-pixel precision) of the motion vector of the block.
To perform inter prediction on the current video block, motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block. Motion compensation unit 205 may determine a prediction video block for the current video block based on the motion information and decoded samples of pictures from buffer 213 other than the picture associated with the current video block.
The motion estimation unit 204 and the motion compensation unit 205 may perform different operations on the current video block, e.g., depending on whether the current video block is in an I-slice, a P-slice, or a B-slice.
In some examples, motion estimation unit 204 may perform uni-directional prediction on the current video block, and motion estimation unit 204 may search list 0 or list 1 reference pictures for a reference video block of the current video block. Motion estimation unit 204 may then generate a reference index indicating a reference picture in list 0 or list 1 that includes the reference video block and a motion vector indicating spatial displacement between the current video block and the reference video block. Motion estimation unit 204 may output the reference index, the prediction direction indicator, and the motion vector as motion information of the current video block. The motion compensation unit 205 may generate a prediction video block of the current block based on a reference video block indicated by motion information of the current video block.
In other examples, motion estimation unit 204 may perform bi-prediction on the current video block, and motion estimation unit 204 may search for a reference video block of the current video block in a reference picture in list 0 and may also search for another reference video block of the current video block in list 1. Motion estimation unit 204 may then generate reference indices that indicate reference pictures in list 0 and list 1 that include reference video blocks and motion vectors that indicate spatial displacements between the reference video blocks and the current video block. Motion estimation unit 204 may output the reference index and the motion vector of the current video block as motion information for the current video block. Motion compensation unit 205 may generate a prediction video block for the current video block based on the reference video block indicated by the motion information for the current video block.
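The two prediction blocks found in the list 0 and list 1 reference pictures can be combined to form the bi-predicted block. A plain rounded average is sketched below as one hypothetical combination rule; actual codecs may instead apply weighted prediction, and the names here are illustrative:

```python
def bi_predict(block_l0, block_l1):
    """Combine the list-0 and list-1 prediction blocks by rounded
    averaging (a simple combination; real codecs may weight the two)."""
    return [[(a + b + 1) >> 1 for a, b in zip(row0, row1)]
            for row0, row1 in zip(block_l0, block_l1)]
```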
In some examples, the motion estimation unit 204 may output the complete set of motion information for the decoding process of the decoder.
In some examples, the motion estimation unit 204 may not output the complete set of motion information for the current video block. Motion estimation unit 204 may signal motion information of the current video block with reference to motion information of another video block. For example, motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of the neighboring video block.
In one example, motion estimation unit 204 may indicate a value in a syntax structure associated with the current video block that indicates to video decoder 300 that the current video block has the same motion information as another video block.
In another example, motion estimation unit 204 may identify another video block and a Motion Vector Difference (MVD) in a syntax structure associated with the current video block. The motion vector difference indicates a difference between a motion vector of the current video block and a motion vector of the indicated video block. The video decoder 300 may determine a motion vector for the current video block using the indicated motion vector for the video block and the motion vector difference.
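The reconstruction described above — adding the signaled motion vector difference to the motion vector of the indicated video block — can be sketched as follows (the function and tuple layout are assumptions for illustration):

```python
def reconstruct_mv(predicted_mv, mvd):
    """Recover the motion vector of the current video block by adding
    the signaled difference (MVD) to the motion vector of the
    indicated video block, component by component."""
    return (predicted_mv[0] + mvd[0], predicted_mv[1] + mvd[1])
```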
As discussed above, the video encoder 200 may predictively signal the motion vectors. Two examples of prediction signaling techniques that may be implemented by video encoder 200 include Advanced Motion Vector Prediction (AMVP) and Merge mode signaling.
The intra prediction unit 206 may perform intra prediction on the current video block. When intra prediction unit 206 performs intra prediction on a current video block, intra prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. The prediction data for the current video block may include a prediction video block and various syntax elements.
Residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., as indicated by a minus sign) the predicted video block(s) of the current video block from the current video block. The residual data for the current video block may include residual video blocks corresponding to different sample components of samples in the current video block.
In other examples, for example in skip mode, there may be no residual data for the current video block, and the residual generation unit 207 may not perform the subtraction operation.
Transform processing unit 208 may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
After transform processing unit 208 generates a transform coefficient video block associated with the current video block, quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more Quantization Parameter (QP) values associated with the current video block.
Inverse quantization unit 210 and inverse transform unit 211 may apply inverse quantization and inverse transform, respectively, to the transform coefficient video blocks to reconstruct residual video blocks from the transform coefficient video blocks. Reconstruction unit 212 may add the reconstructed residual video block to corresponding sample points from one or more prediction video blocks generated by prediction unit 202 to produce a reconstructed video block associated with the current block for storage in buffer 213.
After reconstruction unit 212 reconstructs the video block, a loop filtering operation may be performed to reduce video blocking artifacts in the video block.
Entropy encoding unit 214 may receive data from other functional components of video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
Fig. 6 is a block diagram illustrating an example of a video decoder 300, the video decoder 300 may be the video decoder 124 in the system 100 shown in fig. 4.
Video decoder 300 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 6, the video decoder 300 includes a number of functional components. The techniques described in this disclosure may be shared among various components of the video decoder 300. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
In the example of fig. 6, the video decoder 300 includes an entropy decoding unit 301, a motion compensation unit 302, an intra prediction unit 303, an inverse quantization unit 304, an inverse transformation unit 305, a reconstruction unit 306, and a buffer 307. In some examples, video decoder 300 may perform a decoding process that is generally the inverse of the encoding process described for video encoder 200 (fig. 5).
The entropy decoding unit 301 may retrieve the encoded bitstream. The encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data). The entropy decoding unit 301 may decode the entropy-coded video data, and the motion compensation unit 302 may determine motion information including a motion vector, a motion vector precision, a reference picture list index, and other motion information from the entropy-decoded video data. The motion compensation unit 302 may determine such information, for example, by performing AMVP and Merge modes.
The motion compensation unit 302 may generate a motion compensation block and may perform interpolation based on the interpolation filter. An identifier of the interpolation filter to be used with sub-pixel precision may be included in the syntax element.
The motion compensation unit 302 may calculate the interpolation of sub-integer pixels of the reference block using an interpolation filter as used by the video encoder 200 during encoding of the video block. The motion compensation unit 302 may determine an interpolation filter used by the video encoder 200 according to the received syntax information and generate a prediction block using the interpolation filter.
The motion compensation unit 302 may use some syntax information to determine the size of blocks used to encode the frame(s) and/or slice(s) of the encoded video sequence, partition information describing how each macroblock of a picture of the encoded video sequence is partitioned, a mode indicating how each partition is encoded, one or more reference frames (and reference frame lists) of each inter-coded block, and other information used to decode the encoded video sequence.
The intra prediction unit 303 may form a prediction block from spatially adjacent blocks using, for example, an intra prediction mode received in the bitstream. The inverse quantization unit 304 inversely quantizes, i.e., dequantizes, the quantized video block coefficients provided in the bitstream and decoded by the entropy decoding unit 301. The inverse transform unit 305 applies an inverse transform.
The reconstruction unit 306 may add the residual block to the corresponding prediction block generated by the motion compensation unit 302 or the intra prediction unit 303 to form a decoded block. A deblocking filter may also be applied to filter the decoded blocks, if desired, to remove blockiness. The decoded video blocks are then stored in buffer 307, which provides reference blocks for subsequent motion compensation/intra prediction and also produces decoded video for presentation on a display device.
A list of solutions preferred by some embodiments is provided next.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 1).
1. A video processing method (e.g., method 700 depicted in fig. 3), comprising: performing (702) a conversion between a video block of video comprising two or more color component blocks and a bitstream representation of the video; wherein the two or more color component blocks are coded and decoded using a plurality of prediction methods according to a coding rule, at least one of the plurality of prediction methods being a Symmetric Intra Block Copy (SIBC) method.
2. The method of solution 1, wherein the rule specifies that the SIBC method is used for a luma component block of the two or more color component blocks and the intra coding method is used for other color component blocks of the two or more color component blocks.
3. The method of solution 1, wherein the rule specifies that the SIBC method is used for a luma component block of the two or more color component blocks and the inter-coding method is used for other color component blocks of the two or more color component blocks.
4. The method of solution 1, wherein the rule specifies using the SIBC method for a luma component block of the two or more color component blocks and using the palette predictive coding method for other color component blocks of the two or more color component blocks.
5. The method of solution 1, wherein the rule specifies that the SIBC method is used for a luma component block of the two or more color component blocks and that other color component blocks of the two or more color component blocks are coded using a reconstructed luma block of the luma component block.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 2).
6. A video processing method, comprising: for a conversion between a video block of a video comprising one or more color component blocks and a codec representation of the video, determining whether to use a Symmetric Intra Block Copy (SIBC) method to codec the color component blocks of the video block according to a codec rule; and performing a conversion based on the determination.
7. The method of solution 6, wherein the coding rules specify that luma component blocks are coded using the SIBC method and other color component blocks are coded using the intra prediction method.
8. The method of solution 6, wherein the coding rule specifies that each of the one or more component blocks is coded using a SIBC method.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 3).
9. The method of solution 6, wherein the rule is based on a dimension of the video block.
10. The method of solution 9, wherein the rule specifies that the following conditions are satisfied for the use of the SIBC method:
a. SIBC is only applicable when W >= T1 and H >= T2,
b. SIBC is only applicable when W <= T1 and H <= T2,
c. SIBC is only applicable when max(W, H) <= T1,
d. SIBC is only applicable when min(W, H) <= T1,
e. SIBC is only applicable when W × H >= T1,
f. SIBC is only applicable when W × H <= T1,
g. SIBC is only applicable when max(W, H)/min(W, H) <= T1;
wherein T1 and T2 are rational numbers.
11. The method of solution 10, wherein T1 is 1, 2, 4, 8, 16, or 256 and T2 is 4 or 16.
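The block-dimension conditions of solutions 10 and 11 can be checked as sketched below, where W and H are the width and height of the video block and "condition" selects one of items a-g. The function and parameter names are hypothetical, chosen only for illustration:

```python
def sibc_allowed(w, h, t1, t2=4, condition="a"):
    """Return whether SIBC is applicable for a w x h block under the
    selected dimension condition (items a-g above), for given T1/T2."""
    checks = {
        "a": w >= t1 and h >= t2,
        "b": w <= t1 and h <= t2,
        "c": max(w, h) <= t1,
        "d": min(w, h) <= t1,
        "e": w * h >= t1,
        "f": w * h <= t1,
        "g": max(w, h) / min(w, h) <= t1,
    }
    return checks[condition]
```

For instance, under condition a with T1 = 16 and T2 = 4, a 16×16 block qualifies but an 8×16 block does not.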
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 4).
12. The method of any of solutions 1-11, wherein the rule depends on codec information of the video block.
13. The method of solution 12, wherein the coding information comprises information regarding whether the video block is coded using a block vector.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., items 5-8).
14. A video processing method, comprising: performing a conversion between a video block of the video comprising one or more color component blocks and a bitstream representation of the video, wherein the bitstream representation complies with a format rule, wherein the format rule specifies whether and how to use a Symmetric Intra Block Copy (SIBC) method to encode the color component blocks of the video block in the bitstream representation.
15. The method of solution 14, wherein the format rule specifies that, for the indication, the bitstream representation includes at least a first syntax element at a first codec level and a second syntax element at a second codec level, wherein the first codec level is a higher level than the video block level and the second codec level is the video block level or a lower level.
16. The method of solution 14, wherein the format rule specifies that the syntax element is conditionally included for indication based on a codec characteristic.
17. The method of solution 16, wherein the codec characteristics include whether intra block copy mode is enabled for the video block.
18. The method of solution 16, wherein the codec characteristics comprise dimensions of the video block.
19. The method according to any of the solutions 14-18, wherein the format rule specifies that two fields are included to indicate the use of the SIBC method, wherein a first field signals the use of the SIBC method and a second field following the first field indicates the type of symmetry used by the SIBC method.
20. The method of solution 19, wherein the first field and the second field comprise non-meta syntax elements.
21. The method according to any of solutions 1-20, wherein converting comprises generating a bitstream representation from the video.
22. The method of any of solutions 1-20, wherein converting comprises decoding the bitstream representation to generate the video.
23. A video decoding apparatus comprising a processor configured to implement the method of one or more of solutions 1 to 22.
24. A video encoding apparatus comprising a processor configured to implement the method of one or more of solutions 1 to 22.
25. A computer program product having computer code stored thereon, which when executed by a processor causes the processor to implement the method of any of solutions 1 to 22.
26. A computer readable medium storing a bitstream representation generated according to any one of solutions 1 to 22.
27. A method, apparatus or system as described in this document.
In the solution described herein, the encoder may comply with the format rules by generating a codec representation according to the format rules. In the solution described herein, a decoder may parse syntax elements in a codec representation using format rules, knowing the presence and absence of the syntax elements according to the format rules, to produce decoded video.
In this document, the term "video processing" may refer to video encoding, video decoding, video compression, or video decompression. For example, a video compression algorithm may be applied during the conversion from a pixel representation of the video to a corresponding bitstream representation, and vice versa. The bitstream representation of the current video block may, for example, correspond to collocated or differently spread bits within the bitstream, as defined by the syntax. For example, a macroblock may be encoded from the transformed and coded error residual values and also using bits in the header and other fields in the bitstream. Furthermore, during the transition, the decoder may, based on this determination, parse the bitstream knowing that some fields may or may not be present, as described in the above solution. Similarly, the encoder may determine that certain syntax fields are included or excluded and generate the codec representation accordingly by including or excluding the syntax fields from the codec representation.
The disclosed and other solutions, examples, embodiments, modules, and functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware (including the structures disclosed in this document and their structural equivalents), or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer-readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (Field Programmable Gate Array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not require such a device. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular technology. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only some embodiments and examples are described and other embodiments, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (26)

1. A video processing method, comprising:
performing a conversion between a video block of a video comprising two or more color component blocks and a bitstream representation of the video;
wherein the two or more color component blocks are coded using a plurality of prediction methods according to a coding rule, at least one of the plurality of prediction methods being a Symmetric Intra Block Copy (SIBC) method.
2. The method of claim 1, wherein the rule specifies that a SIBC method is used for a luma component block of the two or more color component blocks and an intra coding method is used for other color component blocks of the two or more color component blocks.
3. The method of claim 1, wherein the rule specifies that a SIBC method is used for a luma component block of the two or more color component blocks and an inter-coding method is used for other color component blocks of the two or more color component blocks.
4. The method of claim 1, wherein the rule specifies that a SIBC method is used for a luma component block of the two or more color component blocks and a palette prediction coding method is used for other color component blocks of the two or more color component blocks.
5. The method of claim 1, wherein the rule specifies that a SIBC method is used for a luma component block of the two or more color component blocks and that reconstructed luma blocks of the luma component block are used to codec other color component blocks of the two or more color component blocks.
6. A video processing method, comprising:
for a conversion between a video block of a video comprising one or more color component blocks and a codec representation of the video, determining whether to use a Symmetric Intra Block Copy (SIBC) method to codec the color component blocks of the video block according to a codec rule; and
performing the conversion based on the determination.
7. The method of claim 6, wherein the coding rules specify that luma component blocks are coded using a SIBC method and other color component blocks are coded using an intra prediction method.
8. The method of claim 6, wherein the coding rule specifies that each of the one or more component blocks is coded using a SIBC method.
9. The method of claim 6, wherein the rule is based on a dimension of the video block.
10. The method of claim 9, wherein the rule specifies that one of the following conditions is satisfied for use of a SIBC method:
a. SIBC is only applicable when W >= T1 and H >= T2,
b. SIBC is only applicable when W <= T1 and H <= T2,
c. SIBC is only applicable when max(W, H) <= T1,
d. SIBC is only applicable when min(W, H) <= T1,
e. SIBC is only applicable when W × H >= T1,
f. SIBC is only applicable when W × H <= T1,
g. SIBC is only applicable when max(W, H)/min(W, H) <= T1;
wherein W and H are the width and height of the video block, and T1 and T2 are rational numbers.
11. The method of claim 10, wherein T1 is 1, 2, 4, 8, 16, or 256 and T2 is 4 or 16.
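The dimension conditions of claims 10-11 are simple threshold tests on the block width W and height H. A hedged sketch in Python, with the condition labels a.-g. taken from claim 10 and the function name chosen for illustration:

```python
# Illustrative check of the block-dimension conditions in claims 10-11.
# W and H are the block width and height; T1 and T2 are thresholds
# (claim 11 gives examples such as T1 = 16, T2 = 4). The function name
# and calling convention are assumptions, not defined by the patent.

def sibc_allowed(w, h, condition, t1, t2=None):
    """Return True if SIBC may be used for a w-by-h block under one condition.

    t2 is only required for conditions "a" and "b".
    """
    checks = {
        "a": lambda: w >= t1 and h >= t2,
        "b": lambda: w <= t1 and h <= t2,
        "c": lambda: max(w, h) <= t1,
        "d": lambda: min(w, h) <= t1,
        "e": lambda: w * h >= t1,
        "f": lambda: w * h <= t1,
        "g": lambda: max(w, h) / min(w, h) <= t1,
    }
    return checks[condition]()

# With T1 = 16, a 16x16 block satisfies condition c; a 32x8 block does not.
print(sibc_allowed(16, 16, "c", 16))  # True
print(sibc_allowed(32, 8, "c", 16))   # False
```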
12. The method of claim 6, wherein the rule depends on coding information of the video block.
13. The method of claim 12, wherein the coding information comprises information regarding whether the video block is coded using a block vector.
14. A video processing method, comprising:
performing a conversion between a video block of a video comprising one or more color component blocks and a bitstream representation of the video,
wherein the bitstream representation complies with a format rule,
wherein the format rule specifies whether and how to use a Symmetric Intra Block Copy (SIBC) method to encode a color component block of the video block in the bitstream representation.
15. The method of claim 14, wherein the format rule specifies that, for indicating use of the SIBC method, the bitstream representation includes at least a first syntax element at a first coding level and a second syntax element at a second coding level, wherein the first coding level is higher than the video block level and the second coding level is the video block level or a lower level.
16. The method of claim 14, wherein the format rule specifies that a syntax element indicating use of the SIBC method is conditionally included based on a coding characteristic.
17. The method of claim 16, wherein the coding characteristic comprises whether intra block copy mode is enabled for the video block.
18. The method of claim 16, wherein the coding characteristic comprises a dimension of the video block.
19. The method of claim 14, wherein the format rule specifies that two fields are included to indicate use of a SIBC method, wherein a first field signals use of the SIBC method and a second field after the first field indicates a type of symmetry used by the SIBC method.
20. The method of claim 19, wherein the first field and the second field comprise non-binary syntax elements.
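The two-field signaling of claims 19-20 can be sketched as a conditional parse: the first field says whether SIBC is used, and only then is the second field read to obtain the symmetry type. A minimal illustration in Python; the field order follows claim 19, but the value encodings and names are assumptions:

```python
# Hedged sketch of the two-field SIBC signaling in claims 19-20:
# a first field signals use of the SIBC method, and a second field,
# present only when the first indicates use, signals the symmetry type.
# The concrete symmetry values (0/1) are illustrative placeholders.

def parse_sibc_fields(symbols):
    """Parse [use_field, symmetry_field?] from a sequence of decoded symbols."""
    it = iter(symbols)
    use_sibc = next(it)
    if not use_sibc:
        return {"use_sibc": False, "symmetry": None}
    # Second field follows the first, per claim 19; per claim 20 it may be
    # a non-binary syntax element taking more than two values.
    symmetry = next(it)
    return {"use_sibc": True, "symmetry": symmetry}

print(parse_sibc_fields([1, 0]))  # SIBC used, symmetry type 0
print(parse_sibc_fields([0]))     # SIBC not used, no symmetry field parsed
```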
21. The method of claim 1, wherein the converting comprises generating the bitstream representation from the video.
22. The method of claim 1, wherein the converting comprises decoding the bitstream representation to generate the video.
23. A video decoding apparatus comprising a processor configured to implement the method of one or more of claims 1 to 22.
24. A video encoding apparatus comprising a processor configured to implement the method of one or more of claims 1 to 22.
25. A computer program product having computer code stored thereon, which, when executed by a processor, causes the processor to carry out the method of any of claims 1 to 22.
26. A computer readable medium storing a bitstream representation generated according to any one of claims 1 to 22.
CN202111483742.0A 2020-12-07 2021-12-07 Symmetric intra block copy mode Pending CN114598882A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2020/134201 2020-12-07
CN2020134201 2020-12-07

Publications (1)

Publication Number Publication Date
CN114598882A true CN114598882A (en) 2022-06-07

Family

ID=81803653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111483742.0A Pending CN114598882A (en) 2020-12-07 2021-12-07 Symmetric intra block copy mode

Country Status (1)

Country Link
CN (1) CN114598882A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination