CN117676167A - Sub-segmentation in intra-coding - Google Patents

Sub-segmentation in intra-coding

Info

Publication number
CN117676167A
CN117676167A (application CN202311686113.7A)
Authority
CN
China
Prior art keywords
video block
current video
sub
block
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311686113.7A
Other languages
Chinese (zh)
Inventor
张凯
张莉
刘鸿彬
邓智玭
张娜
王悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd, ByteDance Inc filed Critical Beijing ByteDance Network Technology Co Ltd
Publication of CN117676167A
Pending legal-status Critical Current


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/70 - characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/103 - Selection of coding mode or of prediction mode
    • H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/124 - Quantisation
    • H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/176 - the coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/593 - using predictive coding involving spatial prediction techniques
    • H04N 19/60 - using transform coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application relates to sub-partitioning in intra coding. Methods, apparatus, and systems related to video processing are disclosed. In one example aspect, a method of video processing includes performing a conversion between a block of a current picture of a video and a codec representation of the video using an intra sub-partition (ISP) mode. In the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The block is partitioned into a plurality of sub-partitions, including a first sub-partition whose top-left corner position is the same as the top-left corner position of the block.

Description

Sub-segmentation in intra-coding
The present application is a divisional application of the invention patent application No. 202080061155.X, filed on August 31, 2020, and entitled "Sub-partitioning in intra coding".
Cross Reference to Related Applications
The parent application of the present application is the Chinese national phase of International Patent Application No. PCT/CN2020/112425, filed on August 31, 2020, which claims priority to and the benefit of International Patent Application No. PCT/CN2019/103762, filed on August 30, 2019. The entire disclosure of the foregoing application is incorporated by reference as part of the disclosure of this application.
Technical Field
This patent document relates to video encoding and decoding.
Background
Despite advances in video compression, digital video still accounts for the largest share of bandwidth on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video grows, the bandwidth demand for digital video usage is expected to continue to increase.
Disclosure of Invention
Devices, systems and methods related to digital video coding are described, and in particular video and image encoding and decoding in which an intra sub-partition mode is used for encoding or decoding video blocks.
In one example aspect, a video processing method is disclosed. The method comprises the following steps: a conversion is performed between a block of a current picture of a video and a codec representation of the video using an intra sub-partition (ISP) mode. In the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The block is partitioned into a plurality of sub-partitions, including a first sub-partition having the same top-left corner position as the top-left corner position of the block.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: for a conversion between a block of video and a codec representation of the video, a determination is made based on a rule whether wide-angle intra prediction mode mapping is enabled. The wide-angle prediction mode is a mode in which a reference sample and a sample to be predicted form an obtuse angle with respect to the top-left direction. The rule specifies that the determination uses dimensions of a prediction unit when a codec tool is enabled for the conversion of the block. The method further includes performing the conversion based on the determination.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: a conversion is performed between a codec unit of a video region of a video and a codec representation of the video. The codec unit is partitioned into one or more partitions, and the codec unit is encoded in the codec representation using a quantized residual signal obtained by intra prediction processing of each of the one or more partitions. The codec representation includes a syntax element indicating a quantization parameter used for the quantization. For the codec unit, the codec representation includes the syntax element at most once, and the syntax element indicates a difference between a value of the quantization parameter and another quantization value based on a previously processed codec unit of the video.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: for a conversion between a video block comprising one or more partitions and a codec representation of the video using an intra sub-partition (ISP) mode, a determination is made whether to skip a transform operation during encoding or whether to skip an inverse transform operation during decoding, based on characteristics of the block or the ISP mode. In the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The method further includes performing the conversion based on the determination.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: for a conversion between a video block comprising one or more partitions and a codec representation of the video, a type of transform used during the conversion is determined based on whether an intra sub-partition (ISP) mode is used for the conversion. In the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The conversion includes applying the transform during encoding prior to inclusion in the codec representation, or applying the inverse of the transform to coefficient values parsed from the codec representation prior to reconstructing sample values of the block. The method further includes performing the conversion based on the determination.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: for a conversion between a video block comprising one or more partitions and a codec representation of the video, a limit for an intra sub-partition (ISP) mode is determined based on whether lossless codec processing is applied to the block. In the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The method further includes performing the conversion based on the determination.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: conversion between a codec unit of a video region of a video and a codec representation of the video is performed according to rules, wherein the codec unit is divided into a plurality of transform units. The rule specifies a relationship between a Quantization Parameter (QP) of the codec unit and quantization parameters of one or more of the plurality of transform units.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: for a transition between a video region and a codec representation of the video region, determining whether and/or how to apply a deblocking filter to an edge based on Quantization Parameters (QPs) of transform units associated with the edge, wherein the video region includes one or more codec units and one or more transform units. The method further includes performing the conversion based on the determination.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: for a transition between a video unit comprising one or more sub-partitions and a codec representation of the video unit, determining that the transition uses an intra sub-block partition mode; and performing a conversion based on the determination such that the intra prediction process is used for conversion of each of the one or more sub-partitions.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: determining whether to use wide-angle intra prediction mode mapping during a conversion between a video block and a codec representation of the video block based on an applicability of a codec tool and/or a size of a prediction unit of the video block, without using a codec unit size of the video block; and performing the conversion based on the determination.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: for a conversion between a video region that includes a codec unit and a codec representation of the video region, determining a delta quantization parameter (delta QP) applicable to all intra sub-partitions of the codec unit, wherein the codec unit includes the intra sub-partitions; and performing the conversion using the delta QP, wherein the delta QP is signaled for the codec unit in the codec representation.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: for a transition between a video region and a codec representation of the video region, determining a Quantization Parameter (QP) for the transition of a Codec Unit (CU) in the video region based on a QP of a Transform Unit (TU) in the video region; and performing conversion using the QP of the TU and/or the QP of the CU.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: for conversion between video areas including one or more codec units and one or more transform units, determining whether to apply a deblocking filter to edges of video blocks for conversion based on the transform unit to which the edges belong; and performing the conversion based on the determination.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: for a transition between a video block using an intra sub-partition mode and a codec representation of the video block, determining whether a transform operation is skipped based on a dimension of the codec block or the prediction block or the transform block; and performing the conversion based on the determination.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: for a transition between a video block and a codec representation of the video block, determining a type of transform to apply based on whether an intra-sub-partition mode or a lossless codec mode is used for the transition; and performing the conversion according to the determination.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: the conversion is performed between the video block and the codec representation of the video block according to an exclusivity rule whereby either a lossless codec mode or an intra sub-partition mode, but not both, is used for the conversion, wherein the codec representation includes an indication of using the lossless codec mode or using the intra sub-partition mode.
In yet another representative aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, an apparatus configured or operable to perform the above-described method is disclosed. The apparatus may include a processor programmed to implement the method.
In yet another representative aspect, a video decoder device can implement a method as described herein.
The above and other aspects and features of the disclosed technology are described in more detail in the accompanying drawings, description and claims.
Drawings
Fig. 1 is a block diagram showing an example of intra-frame subdivision.
Fig. 2 is a block diagram illustrating an example of intra-frame subdivision.
Fig. 3 is a block diagram of an example implementation of a hardware platform for video processing.
Fig. 4 is a flow chart of an example method for video processing.
FIG. 5 is a block diagram of an example video processing system in which the disclosed techniques may be implemented.
Fig. 6 is a flow chart representation of a method for video processing in accordance with the present technique.
Fig. 7 is a flow chart representation of another method for video processing in accordance with the present technique.
Fig. 8 is a flow chart representation of another method for video processing in accordance with the present technique.
Fig. 9 is a flow chart representation of another method for video processing in accordance with the present technique.
Fig. 10 is a flow chart representation of another method for video processing in accordance with the present technique.
Fig. 11 is a flow chart representation of another method for video processing in accordance with the present technique.
Fig. 12 is a flow chart representation of another method for video processing in accordance with the present technique.
Fig. 13 is a flow chart representation of yet another method for video processing in accordance with the present technique.
Detailed Description
This document relates to video codec technology. In particular, it relates to intra sub-partition prediction in video coding. It can be applied to existing video/image codec standards such as HEVC, or to the standard being finalized (Versatile Video Coding, VVC). It may also be applicable to future video codec standards or video codecs.
Embodiments of the disclosed technology may be applied to existing video codec standards (e.g., HEVC, h.265) and future standards to improve compression performance. Section headings are used in this document to enhance the readability of the description, and discussion or embodiments (and/or implementations) are not limited in any way to the corresponding sections only.
1. Preliminary discussion
Video codec standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards. Since H.262, the video codec standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding is utilized. To explore future video coding technologies beyond HEVC, VCEG and MPEG jointly founded the Joint Video Exploration Team (JVET) in 2015. Since then, JVET has adopted many new methods and put them into reference software named the Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bitrate reduction compared to HEVC.
1.1 example embodiment of intra-frame sub-splitting (ISP)
In some embodiments, as shown in Table 1, the ISP tool divides a luma intra prediction block vertically or horizontally into 2 or 4 sub-partitions depending on the block size. Fig. 1 and Fig. 2 show examples of the two possibilities. Fig. 1 shows an example of the division of 4×8 and 8×4 blocks. Fig. 2 shows an example of the division of all blocks other than 4×8, 8×4 and 4×4. All sub-partitions satisfy the condition of having at least 16 samples.
Table 1: number of subdivisions depending on block size
For each of these sub-partitions, a residual signal is generated by entropy decoding the coefficients sent by the encoder and then inverse quantizing and inverse transforming them. Then, intra prediction is performed on the sub-partition, and finally the corresponding reconstructed samples are obtained by adding the residual signal to the prediction signal. Therefore, the reconstructed values of each sub-partition are available to generate the prediction of the next one, and the process is repeated in this way. All sub-partitions share the same intra mode.
Based on the intra mode and the split utilized, two different classes of processing orders are used, which are referred to as normal and reversed order. In the normal order, the first sub-partition to be processed is the one containing the top-left sample of the CU, and processing then continues downwards (horizontal split) or rightwards (vertical split). As a result, the reference samples used to generate the sub-partition prediction signals are located only to the left of and above the lines. On the other hand, the reversed processing order either starts with the sub-partition containing the bottom-left sample of the CU and continues upwards, or starts with the sub-partition containing the top-right sample of the CU and continues leftwards.
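The normal processing order described above can be sketched by generating the top-left corner of each sub-partition (an illustrative sketch, not part of the patent text; the function name and signature are our own):

```python
def sub_partition_origins(cu_x, cu_y, cu_w, cu_h, n_parts, split):
    """Top-left corners of ISP sub-partitions in normal processing order:
    the first sub-partition contains the top-left sample of the CU, and
    processing continues downwards (horizontal split) or rightwards
    (vertical split)."""
    if split == "HOR":
        part_h = cu_h // n_parts
        return [(cu_x, cu_y + i * part_h) for i in range(n_parts)]
    else:  # "VER"
        part_w = cu_w // n_parts
        return [(cu_x + i * part_w, cu_y) for i in range(n_parts)]

# An 8x8 CU split horizontally into 4 sub-partitions of size 8x2:
origins = sub_partition_origins(0, 0, 8, 8, 4, "HOR")
# origins == [(0, 0), (0, 2), (0, 4), (0, 6)]
```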
Example syntax, semantics and processing associated with an ISP are as follows:
intra_subpartitions_mode_flag[x0][y0] equal to 1 specifies that the current intra codec unit is partitioned into NumIntraSubPartitions[x0][y0] rectangular transform block sub-partitions. intra_subpartitions_mode_flag[x0][y0] equal to 0 specifies that the current intra codec unit is not partitioned into rectangular transform block sub-partitions.
When intra_subpartitions_mode_flag[x0][y0] is not present, it is inferred to be equal to 0.
intra_subpartitions_split_flag[x0][y0] specifies whether the intra sub-partition split type is horizontal or vertical. When intra_subpartitions_split_flag[x0][y0] is not present, it is inferred as follows:
- If cbHeight is greater than MaxTbSizeY, intra_subpartitions_split_flag[x0][y0] is inferred to be equal to 0.
- Otherwise (cbWidth is greater than MaxTbSizeY), intra_subpartitions_split_flag[x0][y0] is inferred to be equal to 1.
The variable IntraSubPartitionsSplitType specifies the type of split used for the current luma codec block, as shown in Table 7-16. IntraSubPartitionsSplitType is derived as follows:
- If intra_subpartitions_mode_flag[x0][y0] is equal to 0, IntraSubPartitionsSplitType is set equal to 0.
- Otherwise, IntraSubPartitionsSplitType is set equal to 1 + intra_subpartitions_split_flag[x0][y0].
Table 7-16 - Name association to IntraSubPartitionsSplitType

    IntraSubPartitionsSplitType    Name of IntraSubPartitionsSplitType
    0                              ISP_NO_SPLIT
    1                              ISP_HOR_SPLIT
    2                              ISP_VER_SPLIT
The variable NumIntraSubPartitions specifies the number of transform block sub-partitions into which an intra luma codec block is divided. NumIntraSubPartitions is derived as follows:
- If IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT, NumIntraSubPartitions is set equal to 1.
- Otherwise, if one of the following conditions is true, NumIntraSubPartitions is set equal to 2:
  - cbWidth is equal to 4 and cbHeight is equal to 8,
  - cbWidth is equal to 8 and cbHeight is equal to 4.
- Otherwise, NumIntraSubPartitions is set equal to 4.
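The derivations above can be expressed compactly as follows (an illustrative sketch; the variable names mirror the spec text, but the functions themselves are our own):

```python
ISP_NO_SPLIT, ISP_HOR_SPLIT, ISP_VER_SPLIT = 0, 1, 2

def intra_sub_partitions_split_type(mode_flag, split_flag):
    """IntraSubPartitionsSplitType: 0 when ISP is off, otherwise
    1 + intra_subpartitions_split_flag (1 = horizontal, 2 = vertical)."""
    return 0 if mode_flag == 0 else 1 + split_flag

def num_intra_sub_partitions(split_type, cb_width, cb_height):
    """NumIntraSubPartitions as derived above: 1 for ISP_NO_SPLIT,
    2 for 4x8 and 8x4 blocks, and 4 otherwise."""
    if split_type == ISP_NO_SPLIT:
        return 1
    if (cb_width, cb_height) in ((4, 8), (8, 4)):
        return 2
    return 4
```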
8.4.5 Decoding process for intra blocks
8.4.5.1 General decoding process for intra blocks
The inputs to this process are:
a sample location (xTb0, yTb0), specifying the top-left sample of the current transform block relative to the top-left sample of the current picture,
a variable nTbW, specifying the width of the current transform block,
a variable nTbH specifying the height of the current transform block,
the variable predModeIntra, specifies the intra prediction mode,
the variable cIdx, specifies the color component of the current block.
The output of this process is a modified reconstructed picture before loop filtering.
The maximum transform block width maxTbWidth and height maxTbHeight are derived as follows:
maxTbWidth=(cIdx==0)?MaxTbSizeY:MaxTbSizeY/SubWidthC (8-41)
maxTbHeight=(cIdx==0)?MaxTbSizeY:MaxTbSizeY/SubHeightC (8-42)
The luma sample location is derived as follows:
(xTbY,yTbY)=(cIdx==0)?(xTb0,yTb0):(xTb0*SubWidthC,yTb0*SubHeightC) (8-43)
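Equations (8-41) and (8-42) amount to a simple chroma-subsampling scaling of the maximum transform size, which can be sketched as follows (illustrative only; the function name is our own, and (8-43) scales the block position in the same way):

```python
def max_transform_size(c_idx, max_tb_size_y, sub_width_c, sub_height_c):
    """Equations (8-41)/(8-42): the maximum transform block size is
    MaxTbSizeY for luma (cIdx == 0) and is scaled down by the chroma
    subsampling factors SubWidthC/SubHeightC for chroma components."""
    max_tb_width = max_tb_size_y if c_idx == 0 else max_tb_size_y // sub_width_c
    max_tb_height = max_tb_size_y if c_idx == 0 else max_tb_size_y // sub_height_c
    return max_tb_width, max_tb_height

# 4:2:0 video (SubWidthC = SubHeightC = 2), MaxTbSizeY = 64:
# luma -> (64, 64), chroma -> (32, 32)
```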
Depending on maxTbSize, the following applies:
- If IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT and nTbW is greater than maxTbWidth or nTbH is greater than maxTbHeight, the following ordered steps apply:
1. the variables newTbW and newTbH are derived as follows:
newTbW=(nTbW>maxTbWidth)?(nTbW/2):nTbW (8-44)
newTbH=(nTbH>maxTbHeight)?(nTbH/2):nTbH (8-45)
2. The general decoding process for intra blocks as specified in this clause is invoked with the location (xTb0, yTb0), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra and the variable cIdx as inputs, and the output is a modified reconstructed picture before in-loop filtering.
3. If nTbW is greater than maxTbWidth, the general decoding process for intra blocks as specified in this clause is invoked with the location (xTb0, yTb0) set equal to (xTb0 + newTbW, yTb0), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra and the variable cIdx as inputs, and the output is a modified reconstructed picture before in-loop filtering.
4. If nTbH is greater than maxTbHeight, the general decoding process for intra blocks as specified in this clause is invoked with the location (xTb0, yTb0) set equal to (xTb0, yTb0 + newTbH), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra and the variable cIdx as inputs, and the output is a modified reconstructed picture before in-loop filtering.
5. If nTbW is greater than maxTbWidth and nTbH is greater than maxTbHeight, the general decoding process for intra blocks as specified in this clause is invoked with the location (xTb0, yTb0) set equal to (xTb0 + newTbW, yTb0 + newTbH), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra and the variable cIdx as inputs, and the output is a modified reconstructed picture before in-loop filtering.
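The ordered steps above describe a recursive splitting of a transform block that exceeds the maximum transform size. A minimal sketch (our own simplification, which only records the resulting leaf blocks instead of invoking prediction and reconstruction):

```python
def decode_intra_block(x0, y0, w, h, max_w, max_h, leaves):
    """Sketch of steps 1-5 above: when the current transform block exceeds
    the maximum transform size, halve the oversized dimension(s) per
    (8-44)/(8-45) and recurse into up to four sub-blocks; otherwise record
    the block as a leaf that would actually be predicted/reconstructed."""
    if w > max_w or h > max_h:
        new_w = w // 2 if w > max_w else w   # (8-44)
        new_h = h // 2 if h > max_h else h   # (8-45)
        decode_intra_block(x0, y0, new_w, new_h, max_w, max_h, leaves)
        if w > max_w:
            decode_intra_block(x0 + new_w, y0, new_w, new_h, max_w, max_h, leaves)
        if h > max_h:
            decode_intra_block(x0, y0 + new_h, new_w, new_h, max_w, max_h, leaves)
        if w > max_w and h > max_h:
            decode_intra_block(x0 + new_w, y0 + new_h, new_w, new_h, max_w, max_h, leaves)
    else:
        leaves.append((x0, y0, w, h))

leaves = []
decode_intra_block(0, 0, 128, 64, 64, 64, leaves)
# A 128x64 block with a 64x64 maximum splits into two 64x64 leaves.
```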
Otherwise, the following ordered steps apply:
The variables nW, nH, nPbW, pbFactor, xPartInc and yPartInc are derived as follows:
nW=IntraSubPartitionsSplitType==ISP_VER_SPLIT?nTbW/NumIntraSubPartitions:nTbW (8-46)
nH=IntraSubPartitionsSplitType==ISP_HOR_SPLIT?nTbH/NumIntraSubPartitions:nTbH (8-47)
xPartInc=IntraSubPartitionsSplitType==ISP_VER_SPLIT?1:0 (8-48)
yPartInc=IntraSubPartitionsSplitType==ISP_HOR_SPLIT?1:0 (8-49)
nPbW=Max(4,nW) (8-50)
pbFactor=nPbW/nW (8-51)
the variables xPartIdx and yPartIdx are set equal to 0.
For i = 0..NumIntraSubPartitions - 1, the following applies:
1. the variables xPartIdx and yPartIdx are updated as follows:
xPartIdx=xPartIdx+xPartInc (8-52)
yPartIdx=yPartIdx+yPartInc (8-53)
xPartPbIdx=xPartIdx%pbFactor (8-54)
2. When xPartPbIdx is equal to 0, the intra sample prediction process specified in section 8.4.5.2 is invoked with the location (xTbCmp, yTbCmp) set equal to (xTb0 + nW * xPartIdx, yTb0 + nH * yPartIdx), the intra prediction mode predModeIntra, the transform block width nTbW and height nTbH set equal to nPbW and nH, the codec block width nCbW and height nCbH set equal to nTbW and nTbH, and the variable cIdx as inputs, and the output is the (nPbW) x (nH) array predSamples.
3. The scaling and transformation process specified in section 8.7.2 is invoked with the luma location (xTbY, yTbY) set equal to (xTbY + nW * xPartIdx, yTbY + nH * yPartIdx), the variable cIdx, the transform width nTbW and the transform height nTbH set equal to nW and nH as inputs, and the output is the (nW) x (nH) array resSamples.
4. The picture reconstruction process for a color component specified in section 8.7.5 is invoked with the transform block location (xTbComp, yTbComp) set equal to (xTb0 + nW * xPartIdx, yTb0 + nH * yPartIdx), the transform block width nTbW and transform block height nTbH set equal to nW and nH, the variable cIdx, the (nW) x (nH) array predSamples[x][y] (where x = xPartPbIdx * nW..(xPartPbIdx + 1) * nW - 1, y = 0..nH - 1), and the (nW) x (nH) array resSamples as inputs, and the output is a modified reconstructed picture before loop filtering.
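The per-sub-partition loop above can be traced with a small Python sketch (illustrative only, not part of the specification) showing which top-left positions actually receive an intra prediction call when the partition indices are incremented at the start of each iteration, as in steps 1 and 2:

```python
def subpartition_positions(x_tb0, y_tb0, n_w, n_h, x_inc, y_inc, pb_factor, num_parts):
    """Trace the (x, y) positions at which prediction is invoked when xPartIdx
    and yPartIdx are updated *before* the first prediction call, per (8-52)..(8-54)."""
    x_idx = y_idx = 0
    positions = []
    for _ in range(num_parts):
        x_idx += x_inc                      # (8-52)
        y_idx += y_inc                      # (8-53)
        x_pb_idx = x_idx % pb_factor        # (8-54)
        if x_pb_idx == 0:                   # prediction is invoked only here
            positions.append((x_tb0 + n_w * x_idx, y_tb0 + n_h * y_idx))
    return positions

# Vertical split of an 8x8 block into four 2-wide parts (pbFactor = 2):
# the sub-partition at the block's own top-left corner (0, 0) is never
# predicted, and the last call lands at x = 8, outside the 8-wide block.
print(subpartition_positions(0, 0, 2, 8, 1, 0, 2, 4))  # → [(4, 0), (8, 0)]
```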
8.4.5.2.5 General intra sample prediction
The inputs to this process are:
a sample location (xTbCmp, yTbCmp) specifying the top-left sample of the current transform block relative to the top-left sample of the current picture,
the variable predModeIntra, specifies the intra prediction mode,
a variable nTbW, specifying the width of the transformed block,
a variable nTbH, specifying the height of the transform block,
A variable nCbW, specifying the width of the codec block,
a variable nCbH, specifying the height of the codec block,
the variable cIdx, specifies the color component of the current block.
The output of this process is the predicted samples predSamples [ x ] [ y ], where x=0..ntbw-1, y=0..ntbh-1.
The variables refW and refH are derived as follows:
- If IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT or cIdx is not equal to 0, the following applies:
refW=nTbW*2(8-118)
refH=nTbH*2(8-119)
- Otherwise (IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0), the following applies:
refW=nCbW+nTbW (8-120)
refH=nCbH+nTbH (8-121)
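The refW/refH derivation in (8-118) through (8-121) can be sketched as follows (an illustrative Python sketch; the constant and function names are assumptions):

```python
ISP_NO_SPLIT = 0

def derive_ref_sizes(split_type, c_idx, n_tbw, n_tbh, n_cbw, n_cbh):
    """refW/refH per (8-118)..(8-121): based on the transform block size unless
    the luma block is ISP-split, in which case the codec block size is used."""
    if split_type == ISP_NO_SPLIT or c_idx != 0:
        return n_tbw * 2, n_tbh * 2        # (8-118), (8-119)
    return n_cbw + n_tbw, n_cbh + n_tbh    # (8-120), (8-121)

print(derive_ref_sizes(0, 0, 8, 8, 8, 8))  # → (16, 16)
# A 2-wide vertical ISP sub-partition of an 8x8 CU borrows the CU dimension:
print(derive_ref_sizes(2, 0, 2, 8, 8, 8))  # → (10, 16)
```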
the variable refIdx specifying the intra prediction reference line index is derived as follows:
refIdx=(cIdx==0)?IntraLumaRefLineIdx[xTbCmp][yTbCmp]:0(8-122)
the wide-angle intra prediction mode mapping process specified in section 8.4.5.2.6 is invoked with predModeIntra, nTbW, nTbH and cIdx as inputs and modified predModeIntra as outputs.
The derivation of the variable refFilterFlag is as follows:
- If predModeIntra is equal to one of the following values: 0, -14, -12, -10, -6, 2, 34, 66, 72, 76, 78, 80, refFilterFlag is set equal to 1.
Otherwise, refFilterFlag is set equal to 0.
For the generation of the reference samples p[x][y] (where x = -1 - refIdx, y = -1 - refIdx..refH - 1 and x = -refIdx..refW - 1, y = -1 - refIdx), the following ordered steps apply:
1. The reference sample availability marking process specified in section 8.4.5.2.7 is invoked with the sample location (xTbCmp, yTbCmp), the intra prediction reference line index refIdx, the reference sample width refW, the reference sample height refH, and the color component index cIdx as inputs, and the reference samples refUnfilt[x][y] (where x = -1 - refIdx, y = -1 - refIdx..refH - 1 and x = -refIdx..refW - 1, y = -1 - refIdx) as output.
2. When at least one sample refUnfilt[x][y] (where x = -1 - refIdx, y = -1 - refIdx..refH - 1 and x = -refIdx..refW - 1, y = -1 - refIdx) is marked as "not available for intra prediction", the reference sample substitution process specified in section 8.4.5.2.8 is invoked with the intra prediction reference line index refIdx, the reference sample width refW, the reference sample height refH, the reference samples refUnfilt[x][y] (where x = -1 - refIdx, y = -1 - refIdx..refH - 1 and x = -refIdx..refW - 1, y = -1 - refIdx), and the color component index cIdx as inputs, and the modified reference samples refUnfilt[x][y] (same ranges) as output.
3. The reference sample filtering process specified in section 8.4.5.2.9 is invoked with the intra prediction reference line index refIdx, the transform block width nTbW and height nTbH, the reference sample width refW, the reference sample height refH, the reference filter flag refFilterFlag, the unfiltered samples refUnfilt[x][y] (where x = -1 - refIdx, y = -1 - refIdx..refH - 1 and x = -refIdx..refW - 1, y = -1 - refIdx), and the color component index cIdx as inputs, and the reference samples p[x][y] (same ranges) as output.
The intra-sample prediction process according to predModeIntra is as follows:
- If predModeIntra is equal to INTRA_PLANAR, the corresponding intra prediction mode process specified in section 8.4.5.2.10 is invoked with the transform block width nTbW, the transform block height nTbH and the reference sample array p as inputs, and the output is the prediction sample array predSamples.
Otherwise, if predModeIntra is equal to INTRA_DC, the corresponding intra prediction mode process specified in section 8.4.5.2.11 is invoked with the transform block width nTbW, the transform block height nTbH, the intra prediction reference line index refIdx, and the reference sample array p as inputs, and the output is the prediction sample array predSamples.
Otherwise, if predModeIntra is equal to INTRA_LT_CCLM, INTRA_L_CCLM, or INTRA_T_CCLM, the corresponding intra prediction mode process specified in section 8.4.5.2.13 is invoked with the intra prediction mode predModeIntra, the sample location (xTbC, yTbC) set equal to (xTbCmp, yTbCmp), the transform block width nTbW and height nTbH, the color component index cIdx, and the reference sample array p as inputs, and the output is the prediction sample array predSamples.
Otherwise, the corresponding intra prediction mode processing specified in section 8.4.5.2.12 is invoked with the intra prediction mode predModeIntra, the intra prediction reference line index refIdx, the transform block width nTbW, the transform block height nTbH, the reference sample width refW, the reference sample height refH, the codec block width nCbW and height nCbH, the reference filter flag refFilterFlag, the color component index cIdx, and the reference sample array p as inputs, and the output is the prediction sample array predSamples.
When all of the following conditions are satisfied, the position-dependent prediction sample filtering process specified in section 8.4.5.2.14 is invoked with the intra prediction mode predModeIntra, the transform block width nTbW, the transform block height nTbH, the prediction sample point predSamples [ x ] [ y ] (where x=0..ntbw-1, y=0..ntbh-1), the reference sample point width refW, the reference sample point height refH, the reference sample point p [ x ] [ y ] (where x= -1, y= -1..refh-1 and x=0..refw-1, y= -1), and the color component index cIdx as inputs, and the output is the modified prediction sample point array predSamples:
-nTbW greater than or equal to 4 and nTbH greater than or equal to 4 or cIdx not equal to 0
refIdx is equal to 0 or cIdx is not equal to 0
- BdpcmFlag[xTbCmp][yTbCmp] is equal to 0
-one of the following conditions is fulfilled:
predModeIntra equals INTRA_PLANAR
predModeIntra equals INTRA_DC
- predModeIntra is less than or equal to INTRA_ANGULAR18
- predModeIntra is greater than or equal to INTRA_ANGULAR50
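The gating conditions for the position-dependent prediction sample filtering (PDPC) listed above can be checked with a short Python sketch (illustrative only; the mode constants follow the VVC numbering convention, and the function name is an assumption):

```python
INTRA_PLANAR, INTRA_DC = 0, 1
INTRA_ANGULAR18, INTRA_ANGULAR50 = 18, 50

def pdpc_applies(mode, n_tbw, n_tbh, ref_idx, bdpcm_flag, c_idx):
    """Return True when all PDPC gating conditions listed above are satisfied."""
    if not ((n_tbw >= 4 and n_tbh >= 4) or c_idx != 0):
        return False                       # block too small (luma only)
    if not (ref_idx == 0 or c_idx != 0):
        return False                       # multi-reference-line luma excluded
    if bdpcm_flag != 0:
        return False                       # BDPCM-coded blocks excluded
    return (mode in (INTRA_PLANAR, INTRA_DC)
            or mode <= INTRA_ANGULAR18
            or mode >= INTRA_ANGULAR50)

print(pdpc_applies(INTRA_PLANAR, 8, 8, 0, 0, 0))  # → True
print(pdpc_applies(34, 8, 8, 0, 0, 0))            # → False (diagonal mode)
```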
8.4.5.2.6 wide-angle intra prediction mode mapping process
The inputs to this process are:
the variable predModeIntra, specifies the intra prediction mode,
a variable nTbW, specifying the width of the transformed block,
a variable nTbH, specifying the height of the transform block,
The variable cIdx, specifies the color component of the current block.
The output of this process is the modified intra prediction mode predModeIntra.
The variables nW and nH are derived as follows:
- If IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT or cIdx is not equal to 0, the following applies:
nW=nTbW (8-123)
nH=nTbH (8-124)
- Otherwise (IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0), the following applies:
nW=nCbW (8-125)
nH=nCbH (8-126)
the variable whRatio is set equal to Abs (Log 2 (nW/nH)).
For non-square blocks (nW is not equal to nH), the intra prediction mode predModeIntra is modified as follows:
- If all of the following conditions are true, predModeIntra is set equal to (predModeIntra + 65):
-nW is greater than nH
-predModeIntra is greater than or equal to 2
- predModeIntra is less than (whRatio > 1) ? (8 + 2 * whRatio) : 8
- Otherwise, if all of the following conditions are true, predModeIntra is set equal to (predModeIntra - 67):
-nH is greater than nW
-predModeIntra is less than or equal to 66
- predModeIntra is greater than (whRatio > 1) ? (60 - 2 * whRatio) : 60
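The wide-angle mapping above can be sketched in Python (an illustrative sketch of the conditions in this clause; the function name is an assumption and real division under Log2 is approximated with `math.log2`):

```python
from math import log2

def map_wide_angle(mode, n_w, n_h):
    """Wide-angle intra prediction mode mapping for non-square blocks.
    Returns the (possibly remapped) intra prediction mode."""
    if n_w == n_h:
        return mode
    wh_ratio = abs(int(log2(n_w / n_h)))
    upper = (8 + 2 * wh_ratio) if wh_ratio > 1 else 8
    lower = (60 - 2 * wh_ratio) if wh_ratio > 1 else 60
    if n_w > n_h and 2 <= mode < upper:
        return mode + 65           # map to wide angles above mode 66
    if n_h > n_w and lower < mode <= 66:
        return mode - 67           # map to wide angles below mode 2
    return mode

# A 16x4 block remaps low angular modes to wide angles above 66.
print(map_wide_angle(3, 16, 4))   # → 68
print(map_wide_angle(40, 16, 4))  # → 40 (unchanged)
```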
8.7.4 Transformation process for scaled transform coefficients
8.7.4.1 overview
The inputs to this process are:
a luma location (xTbY, yTbY) specifying the top-left sample of the current luma transform block relative to the top-left sample of the current picture,
a variable nTbW, specifying the width of the current transform block,
a variable nTbH specifying the height of the current transform block,
the variable cIdx, specifies the color component of the current block,
- an (nTbW) x (nTbH) array d[x][y] of scaled transform coefficients, where x = 0..nTbW - 1, y = 0..nTbH - 1.
The output of this process is the residual samples of the (nTbW) x (nTbH) array r [ x ] [ y ], where x=0..ntbw-1, y=0..ntbh-1.
When lfnst_idx [ xTbY ] [ yTbY ] is not equal to 0 and both nTbW and nTbH are greater than or equal to 4, then the following conditions apply:
The variables predModeIntra, nLfnstOutSize, log2LfnstSize, nLfnstSize and nonZeroSize are derived as follows:
predModeIntra=(cIdx==0)?IntraPredModeY[xTbY][yTbY]:IntraPredModeC[xTbY][yTbY](8-965)
nLfnstOutSize=(nTbW>=8&&nTbH>=8)?48:16 (8-966)
log2LfnstSize=(nTbW>=8&&nTbH>=8)?3:2 (8-967)
nLfnstSize=1<<log2LfnstSize (8-968)
nonZeroSize=((nTbW==4&&nTbH==4)||(nTbW==8&&nTbH==8))?8:16 (8-969)
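The size derivations in (8-966) through (8-969) can be sketched as follows (an illustrative Python sketch; the function name is an assumption):

```python
def derive_lfnst_sizes(n_tbw, n_tbh):
    """LFNST size variables per (8-966)..(8-969)."""
    big = n_tbw >= 8 and n_tbh >= 8
    n_lfnst_out_size = 48 if big else 16             # (8-966)
    log2_lfnst_size = 3 if big else 2                # (8-967)
    n_lfnst_size = 1 << log2_lfnst_size              # (8-968)
    non_zero_size = 8 if ((n_tbw == 4 and n_tbh == 4) or
                          (n_tbw == 8 and n_tbh == 8)) else 16  # (8-969)
    return n_lfnst_out_size, log2_lfnst_size, n_lfnst_size, non_zero_size

print(derive_lfnst_sizes(4, 4))    # → (16, 2, 4, 8)
print(derive_lfnst_sizes(16, 16))  # → (48, 3, 8, 16)
```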
when intra_mip_flag [ xTbComp ] [ yTbComp ] equals 1 and cIdx equals 0, predModeIntra is set equal to intra_planar.
When predModeintra is equal to INTRA_LT_CCLM, INTRA_L_CCLM or INTRA_T_CCLM, predModeintra is set equal to IntraPredModeY [ xTbY+nTbW/2] [ yTbY+nTbH/2].
-invoking the wide-angle intra prediction mode mapping process specified in section 8.4.5.2.6 with predModeIntra, nTbW, nTbH and cIdx as inputs and modified predModeIntra as output.
The values of the list u[x] (where x = 0..nonZeroSize - 1) are derived as follows:
xC=DiagScanOrder[2][2][x][0] (8-970)
yC=DiagScanOrder[2][2][x][1] (8-971)
u[x]=d[xC][yC] (8-972)
- The one-dimensional low frequency non-separable transformation process specified in section 8.7.4.2 is invoked with the input length of the scaled transform coefficients nonZeroSize, the transform output length nTrS set equal to nLfnstOutSize, the list of scaled non-zero transform coefficients u[x] (where x = 0..nonZeroSize - 1), the intra prediction mode for LFNST set selection predModeIntra, and the LFNST index for transform selection in the selected LFNST set lfnst_idx[xTbY][yTbY] as inputs, and the list v[x] (where x = 0..nLfnstOutSize - 1) as output.
- The array d[x][y] (where x = 0..nLfnstSize - 1, y = 0..nLfnstSize - 1) is derived as follows:
-if predModeIntra is less than or equal to 34, the following conditions apply:
d[x][y]=(y<4)?v[x+(y<<log2LfnstSize)]:((x<4)?v[32+x+((y-4)<<2)]:d[x][y]) (8-973)
otherwise, the following conditions apply:
d[x][y]=(x<4)?v[y+(x<<log2LfnstSize)]:((y<4)?v[32+y+((x-4)<<2)]:d[x][y]) (8-974)
The variable implicitMtsEnabled is derived as follows:
- If sps_mts_enabled_flag is equal to 1 and one of the following conditions is true, implicitMtsEnabled is set equal to 1:
- IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT
- cu_sbt_flag is equal to 1 and Max(nTbW, nTbH) is less than or equal to 32
- sps_explicit_mts_intra_enabled_flag is equal to 0, and CuPredMode[0][xTbY][yTbY] is equal to MODE_INTRA, and lfnst_idx[x0][y0] is equal to 0, and intra_mip_flag[x0][y0] is equal to 0
- Otherwise, implicitMtsEnabled is set equal to 0.
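The implicitMtsEnabled derivation can be sketched in Python (illustrative only; parameter names are shortened forms of the syntax elements above, and the constants are assumptions):

```python
ISP_NO_SPLIT, MODE_INTRA = 0, 0

def implicit_mts_enabled(sps_mts_enabled, isp_split_type, cu_sbt_flag,
                         n_tbw, n_tbh, sps_explicit_mts_intra,
                         cu_pred_mode, lfnst_idx, intra_mip_flag):
    """Return 1 when any of the three alternative conditions above holds."""
    if sps_mts_enabled != 1:
        return 0
    if isp_split_type != ISP_NO_SPLIT:
        return 1                                    # ISP-split block
    if cu_sbt_flag == 1 and max(n_tbw, n_tbh) <= 32:
        return 1                                    # sub-block transform
    if (sps_explicit_mts_intra == 0 and cu_pred_mode == MODE_INTRA
            and lfnst_idx == 0 and intra_mip_flag == 0):
        return 1                                    # implicit intra MTS
    return 0

print(implicit_mts_enabled(1, 2, 0, 8, 8, 1, MODE_INTRA, 0, 0))  # → 1 (ISP split)
```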
The variable trTypeHor specifying the horizontal transform kernel and the variable trTypeVer specifying the vertical transform kernel are derived as follows:
-if cIdx is greater than 0, then trTypeHor and trTypeVer are set equal to 0.
- Otherwise, if implicitMtsEnabled is equal to 1, the following applies:
- If IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT, or sps_explicit_mts_intra_enabled_flag is equal to 0 and CuPredMode[0][xTbY][yTbY] is equal to MODE_INTRA, trTypeHor and trTypeVer are derived as follows:
trTypeHor=(nTbW>=4&&nTbW<=16)?1:0 (8-975)
trTypeVer=(nTbH>=4&&nTbH<=16)?1:0 (8-976)
otherwise (cu_sbt_flag is equal to 1), trTypeHor and trTypeVer are specified in tables 8-15 according to the cu_sbt_horizontal_flag and the cu_sbt_pos_flag.
- Otherwise, trTypeHor and trTypeVer are specified in Table 8-14 according to tu_mts_idx[x0][y0].
The variables nonZeroW and nonZeroH are derived as follows:
-if lfnst_idx [ xTbY ] [ yTbY ] is not equal to 0 and nTbW is greater than or equal to 4 and nTbH is greater than or equal to 4, then the following condition applies:
nonZeroW=(nTbW==4||nTbH==4)?4:8 (8-977)
nonZeroH=(nTbW==4||nTbH==4)?4:8 (8-978)
otherwise, the following conditions apply:
nonZeroW=Min(nTbW,(trTypeHor>0)?16:32) (8-979)
nonZeroH=Min(nTbH,(trTypeVer>0)?16:32) (8-980)
the (nTbW) x (nTbH) array r of residual samples is derived as follows:
1. When nTbH is greater than 1, each (vertical) column of scaled transform coefficients d[x][y] (where x = 0..nonZeroW - 1, y = 0..nonZeroH - 1) is transformed to e[x][y] (where x = 0..nonZeroW - 1, y = 0..nTbH - 1) by invoking the one-dimensional transformation process specified in section 8.7.4.4 for each column x = 0..nonZeroW - 1 with the height of the transform block nTbH, the non-zero height of the scaled transform coefficients nonZeroH, the list d[x][y] (where y = 0..nonZeroH - 1), and the transform type variable trType set equal to trTypeVer as inputs, and the output is the list e[x][y] (where y = 0..nTbH - 1).
2. When both nTbH and nTbW are greater than 1, the intermediate sample value g [ x ] [ y ] (where x=0..non zerow-1, y=0..ntbh-1) is derived as follows:
g[x][y]=Clip3(CoeffMin,CoeffMax,(e[x][y]+64)>>7) (8-981)
3. When nTbW is greater than 1, each (horizontal) row of the resulting array g[x][y] (where x = 0..nonZeroW - 1, y = 0..nTbH - 1) is transformed to r[x][y] (where x = 0..nTbW - 1, y = 0..nTbH - 1) by invoking the one-dimensional transformation process specified in section 8.7.4.4 for each row y = 0..nTbH - 1 with the width of the transform block nTbW, the non-zero width of the resulting array nonZeroW, the list g[x][y] (where x = 0..nonZeroW - 1), and the transform type variable trType set equal to trTypeHor as inputs, and the output is the list r[x][y] (where x = 0..nTbW - 1).
4. When nTbW is equal to 1, r [ x ] [ y ] is set equal to e [ x ] [ y ] (where x=0..ntbw-1, y=0..ntbh-1).
TABLE 8-14 - Specification of trTypeHor and trTypeVer according to tu_mts_idx[x0][y0]
tu_mts_idx[x0][y0] 0 1 2 3 4
trTypeHor 0 1 2 1 2
trTypeVer 0 1 1 2 2
TABLE 8-15 - Specification of trTypeHor and trTypeVer according to cu_sbt_horizontal_flag and cu_sbt_pos_flag
cu_sbt_horizontal_flag cu_sbt_pos_flag trTypeHor trTypeVer
0 0 2 1
0 1 1 1
1 0 1 2
1 1 1 1
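Tables 8-14 and 8-15 can be expressed directly as lookup tables (an illustrative Python sketch; the dictionary names are assumptions):

```python
# Table 8-14: (trTypeHor, trTypeVer) indexed by tu_mts_idx[x0][y0]
TR_TYPE_MTS = {0: (0, 0), 1: (1, 1), 2: (2, 1), 3: (1, 2), 4: (2, 2)}

# Table 8-15: (trTypeHor, trTypeVer) indexed by
# (cu_sbt_horizontal_flag, cu_sbt_pos_flag)
TR_TYPE_SBT = {(0, 0): (2, 1), (0, 1): (1, 1), (1, 0): (1, 2), (1, 1): (1, 1)}

print(TR_TYPE_MTS[2])       # → (2, 1)
print(TR_TYPE_SBT[(1, 0)])  # → (1, 2)
```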
2. Examples of technical problems addressed by the technical solutions provided in this document.
Some example problems are listed below:
(1) In some scenarios, xPartIdx and yPartIdx are incremented by xPartInc and yPartInc before the intra-sample prediction process is invoked for the first TU. Thus, when ISP is applied (i.e., xPartInc or yPartInc is not equal to zero), the first part of the CU cannot be properly predicted.
(2) When ISP is applied, wide-angle intra prediction mode mapping is performed according to CU dimension instead of TU dimension.
(3) The delta QP for a CU encoded with ISP is signaled. However, there may be a delay if the delta QP is not signaled in the first TU of the ISP.
(4) The ISP codec blocks do not allow transform skipping.
(5) Intra prediction reference samples are extracted according to whether the current block is ISP encoded or not.
(6) When an ISP is applied, the implicit transform selection method does not consider the case where a TU is not a prediction unit.
(7) The deblocking filter needs to access the QP for encoding/decoding the codec block covering the samples at the edges. However, when one CU contains multiple TUs (e.g., when an ISP is enabled), the QP for the codec block (e.g., CU) is undefined.
3. List of example embodiments and techniques
The following list should be considered as examples to explain the general concepts. These items should not be interpreted in a narrow sense. Furthermore, these items may be combined in any manner.
In the following description, the term "ISP" may not be interpreted narrowly. Any type of tool that can divide a CU into multiple TUs/PUs can also be considered an ISP.
1. When using an ISP, the intra prediction process should be applied to each sub-partition (including the first sub-partition, whose top-left position is the same as that of the current CU).
a. In one example, the variables xPartIdx, yPartIdx and xPartPbIdx defined in section 8.4.5.1 (of the VVC standard) may be updated after the intra-prediction process and/or the scaling/transformation process and/or the reconstruction process.
2. When a specific codec is applied, the wide-angle intra prediction mode mapping will be based on the prediction unit dimension instead of the CU dimension.
a. Alternatively, when a specific codec is applied, wide-angle intra prediction is not applied.
i. In one example, the wide-angle intra prediction map is not invoked when a particular codec is applied.
ii. In one example, the wide-angle intra prediction mapping is an identity mapping when a particular codec tool is applied, e.g., after the mapping, any mode M remains M.
b. In one example, the particular codec tool may be an ISP.
3. When an ISP is applied, the delta QP is signaled only once for the entire CU.
a. In one example, the notification may be signaled in the first TU.
b. Alternatively, when an ISP is applied, the delta QP is always signaled in the last TU.
c. Alternatively, when an ISP is applied, the delta QP is not signaled.
d. In one example, the delta QP is signaled with a specific TU (first/last), whether or not the delta QP contains non-zero coefficients (luma block or luma and chroma blocks).
i. Alternatively, the delta QP is signaled with a specific TU (first/last) only when the delta QP contains non-zero coefficients (luma block or luma and chroma blocks).
1) Alternatively, in addition, if there are no non-zero coefficients, the delta QP is inferred to be 0.
4. It is suggested to define the QP of a CU as the QP associated with a TU within the CU.
a. In one example, the QP of a CU may be defined as the QP associated with the first/last TU in the CU.
b. Alternatively, the QP of a CU may be defined as the QP before the delta QP of a different TU in the current CU.
c. Alternatively, the QP of a CU may be defined as a QP derived from a function (e.g., average) in which QP of multiple TUs is applied with delta QP.
d. In one example, how the deblocking filter is applied may use the QP of the CU defined above.
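Item 4 can be sketched as a small Python helper (a hypothetical illustration only; the rule names "first", "last" and "average" are illustrative, not from any specification):

```python
def cu_qp_for_deblocking(tu_qps, rule="first"):
    """Define a single QP for a CU that contains multiple TUs (e.g. under ISP),
    for use by the deblocking filter, under one of several candidate rules."""
    if rule == "first":
        return tu_qps[0]              # QP of the first TU (item 4.a)
    if rule == "last":
        return tu_qps[-1]             # QP of the last TU (item 4.a)
    if rule == "average":
        return sum(tu_qps) // len(tu_qps)  # function of all TU QPs (item 4.c)
    raise ValueError(rule)

print(cu_qp_for_deblocking([30, 32, 34, 36], "average"))  # → 33
```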
5. The decision whether/how to apply a deblocking filter (e.g., luma/chroma block edges) may depend on the QP of the transform block/transform unit used to cover the corresponding samples instead of the codec unit.
a. Alternatively, when one block is encoded in ISP mode, the QP check of the CU may be modified to check the QP of the TU.
b. Alternatively, when one block is larger than the VPDU/maximum transform block size, the checking of QP of the CU may be modified to check QP of the TU.
6. Transform skipping may be used when applying an ISP.
a. In one example, whether transform skipping is used when applying an ISP may depend on the codec block/prediction block/transform block dimensions.
b. In one example, whether to use transform skipping when applying an ISP may depend on whether a vertical or horizontal ISP is applied.
7. The same intra prediction reference samples will be extracted regardless of whether the current block is ISP codec or not.
a. In one example, assuming the width and height of the current transform block are W and H, respectively, then when the current block is ISP-coded, 2*W above neighboring samples and 2*H left neighboring samples will be extracted.
8. Implicit transform selection is made in different ways depending on whether an ISP is used or not.
a. In one example, the horizontal transform and/or the vertical transform may be selected based on whether the transform block width is greater than K (K is an integer such as 1 or 2).
b. In one example, the horizontal transform and/or the vertical transform may be selected based on whether the transform block height is greater than K (K is an integer such as 1 or 2).
9. When lossless codec is applied, certain transforms may be restricted on ISP-coded blocks.
a. When lossless codec is applied, 4x4 transforms may be limited on the blocks of ISP codec.
b. In one example, when lossless codec is applied, transform size restriction, which is set to p×q (such as 4×4), may be applied to blocks of ISP codec.
i. In one example, if an M×N block is vertically divided into four M/4×N sub-partitions by the ISP mode, it may be inferred that each M/4×N sub-partition is divided into 4×4 transform blocks, and transform and quantization are performed for each sub-partition.
c. In one example, when lossless codec is applied, codec block size restrictions may be applied to ISP blocks.
i. In one example, the width of each ISP subdivision must not be less than 4.
ii. In one example, the height of each ISP subdivision must not be less than 4.
d. In one example, when lossless codec is applied, an ISP partition flag (such as intra_sub-partitions_split_flag) may depend on the codec block dimension.
i. In one example, a partitioning direction (horizontal or vertical) that results in an ISP sub-partition width or height less than 4 may not be allowed.
ii. In one example, for an 8×16 ISP-coded block, the partition direction may be inferred to be horizontal, so the ISP partition flag (e.g., intra_subpartitions_split_flag) is not signaled but inferred.
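Item 9.d can be sketched as follows (a hypothetical illustration of the restriction; the function name, the default minimum of 4, and the fixed four-way split are assumptions):

```python
def allowed_isp_directions(width, height, min_part=4, num_parts=4):
    """Return the ISP split directions that keep every sub-partition at least
    min_part samples in its split dimension; with exactly one allowed
    direction, the split flag need not be signaled (it can be inferred)."""
    allowed = []
    if width // num_parts >= min_part:
        allowed.append("vertical")
    if height // num_parts >= min_part:
        allowed.append("horizontal")
    return allowed

# For an 8x16 block, only the horizontal split keeps sub-partitions >= 4 wide,
# so the direction can be inferred without signaling.
print(allowed_isp_directions(8, 16))  # → ['horizontal']
```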
10. When lossless codec is applied, ISP may be disabled.
a. Alternatively, lossless codec may be disabled in the ISP codec block.
b. In one example, a current video unit (such as a current CU/CTU/VPDU/slice/picture/sequence) may not allow use of the ISP when the CU/CTU/VPDU/slice/picture/sequence level transform-quantization bypass enable flag is true.
11. It is allowed that all TUs partitioned by an ISP have no non-zero coefficients.
a. It is allowed that all TUs partitioned by an ISP have only zero coefficients.
b. The Cbf flag may be signaled for all TUs partitioned by the ISP.
4. Additional example embodiments
In the following examples, newly added portions are shown in bold italic underlined font, and deleted portions are enclosed in [[ ]].
4.1 Example modification of the general decoding process for intra blocks
8.4.5.1 General decoding process for intra blocks
The inputs to this process are:
a sample location (xTb0, yTb0) specifying the top-left sample of the current transform block relative to the top-left sample of the current picture,
a variable nTbW, specifying the width of the current transform block,
a variable nTbH specifying the height of the current transform block,
the variable predModeIntra, specifies the intra prediction mode,
The variable cIdx, specifies the color component of the current block.
The output of this process is a modified reconstructed picture before loop filtering.
The maximum transform block width maxTbWidth and height maxTbHeight are derived as follows:
maxTbWidth=(cIdx==0)?MaxTbSizeY:MaxTbSizeY/SubWidthC (8-41)
maxTbHeight=(cIdx==0)?MaxTbSizeY:MaxTbSizeY/SubHeightC (8-42)
the derivation of the luminance sample position is as follows:
(xTbY,yTbY)=(cIdx==0)?(xTb0,yTb0):(xTb0*SubWidthC,yTb0*SubHeightC) (8-43)
Depending on maxTbSize, the following applies:
- If IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT and nTbW is greater than maxTbWidth or nTbH is greater than maxTbHeight, the following ordered steps apply:
1. the variables newTbW and newTbH are derived as follows:
newTbW=(nTbW>maxTbWidth)?(nTbW/2):nTbW (8-44)
newTbH=(nTbH>maxTbHeight)?(nTbH/2):nTbH (8-45)
2. The general decoding process for intra blocks as specified in this section is invoked with the location (xTb0, yTb0), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra, and the variable cIdx as inputs, and the output is a modified reconstructed picture before loop filtering.
3. If nTbW is greater than maxTbWidth, the general decoding process for intra blocks as specified in this section is invoked with the location (xTb0, yTb0) set equal to (xTb0 + newTbW, yTb0), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra, and the variable cIdx as inputs, and the output is a modified reconstructed picture before loop filtering.
4. If nTbH is greater than maxTbHeight, the general decoding process for intra blocks as specified in this section is invoked with the location (xTb0, yTb0) set equal to (xTb0, yTb0 + newTbH), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra, and the variable cIdx as inputs, and the output is a modified reconstructed picture before loop filtering.
5. If nTbW is greater than maxTbWidth and nTbH is greater than maxTbHeight, the general decoding process for intra blocks as specified in this section is invoked with the location (xTb0, yTb0) set equal to (xTb0 + newTbW, yTb0 + newTbH), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra, and the variable cIdx as inputs, and the output is a modified reconstructed picture before loop filtering.
Otherwise, the following ordered steps apply:
The variables nW, nH, nPbW, pbFactor, xPartInc and yPartInc are derived as follows:
nW = IntraSubPartitionsSplitType == ISP_VER_SPLIT ? nTbW / NumIntraSubPartitions : nTbW (8-46)
nH = IntraSubPartitionsSplitType == ISP_HOR_SPLIT ? nTbH / NumIntraSubPartitions : nTbH (8-47)
xPartInc = IntraSubPartitionsSplitType == ISP_VER_SPLIT ? 1 : 0 (8-48)
yPartInc = IntraSubPartitionsSplitType == ISP_HOR_SPLIT ? 1 : 0 (8-49)
nPbW=Max(4,nW) (8-50)
pbFactor=nPbW/nW (8-51)
the variables xPartIdx and yPartIdx are set equal to 0.
For i = 0..NumIntraSubPartitions - 1, the following ordered steps apply:
[[The variables xPartIdx and yPartIdx are updated as follows:
xPartIdx=xPartIdx+xPartInc (8-52)
yPartIdx=yPartIdx+yPartInc (8-53)
xPartPbIdx=xPartIdx%pbFactor (8-54)]]
1. When xPartPbIdx is equal to 0, the intra sample prediction process specified in section 8.4.5.2 is invoked with the location (xTbCmp, yTbCmp) set equal to (xTb0 + nW * xPartIdx, yTb0 + nH * yPartIdx), the intra prediction mode predModeIntra, the transform block width nTbW and height nTbH set equal to nPbW and nH, the codec block width nCbW and height nCbH set equal to nTbW and nTbH, and the variable cIdx as inputs, and the output is the (nPbW) x (nH) array predSamples.
2. The scaling and transformation process specified in section 8.7.2 is invoked with the luma location (xTbY, yTbY) set equal to (xTbY + nW * xPartIdx, yTbY + nH * yPartIdx), the variable cIdx, the transform width nTbW and the transform height nTbH set equal to nW and nH as inputs, and the output is the (nW) x (nH) array resSamples.
3. The picture reconstruction process for a color component specified in section 8.7.5 is invoked with the transform block location (xTbComp, yTbComp) set equal to (xTb0 + nW * xPartIdx, yTb0 + nH * yPartIdx), the transform block width nTbW and transform block height nTbH set equal to nW and nH, the variable cIdx, the (nW) x (nH) array predSamples[x][y] (where x = xPartPbIdx * nW..(xPartPbIdx + 1) * nW - 1, y = 0..nH - 1), and the (nW) x (nH) array resSamples as inputs, and the output is a modified reconstructed picture before loop filtering.
4. Variables xPartIdx, yPartIdx and xPartPbIdx are updated as follows:
4.2 Example modification of wide-angle intra prediction mapping of intra blocks
8.4.5.2.6 Wide-angle intra prediction mode mapping process
The inputs to this process are:
the variable predModeIntra, specifies the intra prediction mode,
a variable nTbW, specifying the width of the transformed block,
a variable nTbH, specifying the height of the transform block,
the variable cIdx, specifies the color component of the current block.
The output of this process is the modified intra prediction mode predModeIntra.
The variables nW and nH are derived as follows:
- [[If IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT or cIdx is not equal to 0, the following applies:]]
nW=nTbW (8-123)
nH=nTbH (8-124)
- [[Otherwise (IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0), the following applies:
nW=nCbW (8-125)
nH=nCbH (8-126)]]
the variable whRatio is set equal to Abs (Log 2 (nW/nH)).
For non-square blocks (nW is not equal to nH), the intra prediction mode predModeIntra is modified as follows:
- If all of the following conditions are true, predModeIntra is set equal to (predModeIntra + 65):
-nW is greater than nH
-predModeIntra is greater than or equal to 2
- predModeIntra is less than (whRatio > 1) ? (8 + 2 * whRatio) : 8
- Otherwise, if all of the following conditions are true, predModeIntra is set equal to (predModeIntra - 67):
-nH is greater than nW
-predModeIntra is less than or equal to 66
- predModeIntra is greater than (whRatio > 1) ? (60 - 2 * whRatio) : 60
8.7.4 Transformation process for scaled transform coefficients
8.7.4.1 overview
The inputs to this process are:
a luma location (xTbY, yTbY) specifying the top-left sample of the current luma transform block relative to the top-left sample of the current picture,
a variable nTbW, specifying the width of the current transform block,
a variable nTbH specifying the height of the current transform block,
the variable cIdx, specifies the color component of the current block,
- an (nTbW) x (nTbH) array d[x][y] of scaled transform coefficients, where x = 0..nTbW - 1, y = 0..nTbH - 1.
The output of this process is the residual samples of the (nTbW) x (nTbH) array r [ x ] [ y ], where x=0..ntbw-1, y=0..ntbh-1.
When lfnst_idx [ xTbY ] [ yTbY ] is not equal to 0 and both nTbW and nTbH are greater than or equal to 4, then the following conditions apply:
The variables predModeIntra, nLfnstOutSize, log2LfnstSize, nLfnstSize and nonZeroSize are derived as follows:
predModeIntra=(cIdx==0)?IntraPredModeY[xTbY][yTbY]:IntraPredModeC[xTbY][yTbY] (8-965)
nLfnstOutSize=(nTbW>=8&&nTbH>=8)?48:16 (8-966)
log2LfnstSize=(nTbW>=8&&nTbH>=8)?3:2 (8-967)
nLfnstSize=1<<log2LfnstSize (8-968)
nonZeroSize=((nTbW==4&&nTbH==4)||(nTbW==8&&nTbH==8))?8:16 (8-969)
when intra_mip_flag [ xTbComp ] [ yTbComp ] equals 1 and cIdx equals 0, predModeIntra is set equal to intra_planar.
When predModeintra is equal to INTRA_LT_CCLM, INTRA_L_CCLM or INTRA_T_CCLM, predModeintra is set equal to IntraPredModeY [ xTbY+nTbW/2] [ yTbY+nTbH/2].
By predModeIntra,And cIdx as input and modified predModeIntra as output, invoking the wide-angle intra prediction mode mapping process specified in section 8.4.5.2.6.
–…
8.4.5.1 General decoding process for intra blocks
The inputs to this process are:
a sample point position (xTb, yTb 0) of a left sample point of the current transform block with respect to the left sample point of the current picture is specified,
a variable nTbW, specifying the width of the current transform block,
a variable nTbH specifying the height of the current transform block,
the variable predModeIntra, specifies the intra prediction mode,
the variable cIdx, specifies the color component of the current block.
The output of this process is a modified reconstructed picture before loop filtering.
The maximum transform block width maxTbWidth and height maxTbHeight are derived as follows:
maxTbWidth=(cIdx==0)?MaxTbSizeY:MaxTbSizeY/SubWidthC (8-41)
maxTbHeight=(cIdx==0)?MaxTbSizeY:MaxTbSizeY/SubHeightC (8-42)
The luminance sample position is derived as follows:
(xTbY,yTbY)=(cIdx==0)?(xTb0,yTb0):(xTb0*SubWidthC,yTb0*SubHeightC) (8-43)
Depending on maxTbSize, the following applies:
-if IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT and nTbW is greater than maxTbWidth or nTbH is greater than maxTbHeight, the following ordered steps apply:
1. The variables newTbW and newTbH are derived as follows:
newTbW=(nTbW>maxTbWidth)?(nTbW/2):nTbW (8-44)
newTbH=(nTbH>maxTbHeight)?(nTbH/2):nTbH (8-45)
2. The conventional decoding process of the intra block specified in this section is invoked with the position (xTb0, yTb0), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra, and the variable cIdx as inputs, and the output is a modified reconstructed picture before loop filtering.
3. If nTbW is greater than maxTbWidth, the conventional decoding process of the intra block specified in this section is invoked with the position (xTb0, yTb0) set equal to (xTb0+newTbW, yTb0), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra, and the variable cIdx as inputs, and the output is a modified reconstructed picture before loop filtering.
4. If nTbH is greater than maxTbHeight, the conventional decoding process of the intra block specified in this section is invoked with the position (xTb0, yTb0) set equal to (xTb0, yTb0+newTbH), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra, and the variable cIdx as inputs, and the output is a modified reconstructed picture before loop filtering.
5. If nTbW is greater than maxTbWidth and nTbH is greater than maxTbHeight, the conventional decoding process of the intra block specified in this section is invoked with the position (xTb0, yTb0) set equal to (xTb0+newTbW, yTb0+newTbH), the transform block width nTbW set equal to newTbW, the height nTbH set equal to newTbH, the intra prediction mode predModeIntra, and the variable cIdx as inputs, and the output is a modified reconstructed picture before loop filtering.
Otherwise, the following ordered steps apply:
the derivation of variables nW, nH, nPbW, pbFactor, xPartInc and yPartInc is as follows:
nW=(IntraSubPartitionsSplitType==ISP_VER_SPLIT)?nTbW/NumIntraSubPartitions:nTbW (8-46)
nH=(IntraSubPartitionsSplitType==ISP_HOR_SPLIT)?nTbH/NumIntraSubPartitions:nTbH (8-47)
xPartInc=(IntraSubPartitionsSplitType==ISP_VER_SPLIT)?1:0 (8-48)
yPartInc=(IntraSubPartitionsSplitType==ISP_HOR_SPLIT)?1:0 (8-49)
nPbW=Max(4,nW) (8-50)
pbFactor=nPbW/nW (8-51)
the variables xPartIdx and yPartIdx are set equal to 0.
For i=0..NumIntraSubPartitions-1, the following applies:
the variables xPartIdx and yPartIdx are updated as follows:
xPartIdx=xPartIdx+xPartInc (8-52)
yPartIdx=yPartIdx+yPartInc (8-53)
xPartPbIdx=xPartIdx%pbFactor (8-54)
1. When xPartPbIdx is equal to 0, the intra sample prediction process specified in section 8.4.5.2 is invoked with the position (xTbCmp, yTbCmp) set equal to (xTb0+nW*xPartIdx, yTb0+nH*yPartIdx), the intra prediction mode predModeIntra, the transform block width nTbW and height nTbH set equal to nW and nH, the codec block width nCbW and height nCbH set equal to nTbW and nTbH, and the variable cIdx as inputs, and the output is an (nW)x(nH) array predSamples.
2. The scaling and transformation process specified in section 8.7.2 is invoked with the luminance position (xTbY, yTbY) set equal to (xTbY+nW*xPartIdx, yTbY+nH*yPartIdx), the variable cIdx, the transform width nTbW and the transform height nTbH set equal to nW and nH as inputs, and the output is the (nW)x(nH) array resSamples.
3. The picture reconstruction process for a color component specified in section 8.7.5 is invoked with the transform block position (xTbComp, yTbComp) set equal to (xTb0+nW*xPartIdx, yTb0+nH*yPartIdx), the transform block width nTbW and transform block height nTbH set equal to nW and nH, the variable cIdx, the (nW)x(nH) array predSamples[x][y] (where x=xPartPbIdx*nW..(xPartPbIdx+1)*nW-1, y=0..nH-1), and the (nW)x(nH) array resSamples as inputs, and the output is a modified reconstructed picture before loop filtering.
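The sub-partition geometry of equations (8-46) to (8-53) can be sketched as follows (illustrative Python; following the modification described in this document, the partition indices are updated after each sub-partition is processed, so the first sub-partition shares the top-left corner of the block):

```python
def isp_partitions(nTbW, nTbH, split_type, num_parts):
    # Sketch of the ISP sub-partition loop: derive the sub-partition size
    # (8-46)/(8-47) and per-iteration index increments (8-48)/(8-49), then
    # enumerate (x, y, w, h) for each sub-partition.
    nW = nTbW // num_parts if split_type == 'VER' else nTbW
    nH = nTbH // num_parts if split_type == 'HOR' else nTbH
    xInc = 1 if split_type == 'VER' else 0
    yInc = 1 if split_type == 'HOR' else 0
    xIdx = yIdx = 0
    parts = []
    for _ in range(num_parts):
        parts.append((nW * xIdx, nH * yIdx, nW, nH))
        xIdx += xInc  # indices updated after the sub-partition is processed
        yIdx += yInc
    return parts
```

For a 16x16 block with vertical split into four parts, this yields four 4x16 columns starting at x = 0, 4, 8, 12.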
4.3 Example modification of delta QP
4.4 Example modification of the deblocking filter
8.8.3.6.1 Decision process for luma block edges
The inputs to this process are:
-a picture sample array recPicture,
-a position (xCb, yCb), specifying the top-left sample of the current codec block relative to the top-left sample of the current picture,
-a position (xBl, yBl), specifying the top-left sample of the current block relative to the top-left sample of the current codec block,
The variable edgeType, specifying whether to filter vertical (EDGE _ VER) or horizontal (EDGE _ HOR) EDGEs,
-a variable bS, specifying the boundary filter strength,
-a variable maxFilterLengthP, specifying the maximum filter length,
-a variable maxFilterLengthQ, specifying the maximum filter length.
The output of this process is:
-the variables dE, dEp and dEq containing the decisions,
-the modified filter length variables maxFilterLengthP and maxFilterLengthQ,
-the variable tC.
The sample values pi,k and qj,k (where i=0..maxFilterLengthP, j=0..maxFilterLengthQ, and k=0 and 3) are derived as follows:
-if the edgeType is equal to edge_ver, the following conditions apply:
q j,k =recPicture L [xCb+xBl+j][yCb+yBl+k] (8-1066)
p i,k =recPicture L [xCb+xBl-i-1][yCb+yBl+k] (8-1067)
otherwise (edgeType equals edge_hor), the following conditions apply:
q j,k =recPicture[xCb+xBl+k][yCb+yBl+j] (8-1068)
p i,k =recPicture[xCb+xBl+k][yCb+yBl-i-1] (8-1069)
the derivation of the variable qpOffset is as follows:
-if sps_ladf_enabled_flag is equal to 1, the following condition applies:
the variable lumaLevel of the reconstructed luminance level is derived as follows:
lumaLevel=((p 0,0 +p 0,3 +q 0,0 +q 0,3 )>>2), (8-1070)
the variable qpOffset is set equal to sps_ladf_lowest_interval_qp_offset and modified as follows:
for(i=0;i<sps_num_ladf_intervals_minus2+1;i++){
if(lumaLevel>SpsLadfIntervalLowerBound[i+1])
qpOffset=sps_ladf_qp_offset[i] (8-1071)
else
break
}
otherwise, qpOffset is set equal to 0.
The variables QpQ and QpP are set equal to the QpY values of the transform [[codec]] units that include the samples q0,0 and p0,0, respectively.
The derivation of the variable qP is as follows:
qP=((Qp Q +Qp P +1)>>1)+qpOffset (8-1072)
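A minimal sketch of the qpOffset lookup and the qP derivation in (8-1070) to (8-1072) (illustrative Python; the parameter names mirror the SPS syntax elements, but the function itself is hypothetical):

```python
def derive_qp(qp_q, qp_p, luma_level, ladf_enabled, lower_bounds, qp_offsets, lowest_offset):
    # lower_bounds mirrors SpsLadfIntervalLowerBound[], qp_offsets mirrors
    # sps_ladf_qp_offset[], lowest_offset mirrors
    # sps_ladf_lowest_interval_qp_offset.
    qp_offset = 0
    if ladf_enabled:
        qp_offset = lowest_offset
        for i in range(len(qp_offsets)):
            if luma_level > lower_bounds[i + 1]:
                qp_offset = qp_offsets[i]
            else:
                break
    # Equation (8-1072): average of the two side QPs plus the LADF offset.
    return ((qp_q + qp_p + 1) >> 1) + qp_offset
```

With LADF disabled this reduces to the rounded average of the two side QPs.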
As specified in Table 8-18,
Q=Clip3(0,63,qP+(slice_beta_offset_div2<<1)) (8-1073)
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 of the slice that contains the sample q0,0.
8.8.3.6.3 Decision process for chroma block edges
This process is invoked only when ChromaArrayType is not equal to 0.
The inputs to this process are:
-a chrominance picture sample array recPicture,
-a chrominance position (xCb, yCb), specifying the top-left sample of the current chrominance codec block relative to the top-left sample of the current picture,
-a chrominance position (xBl, yBl), specifying the top-left sample of the current chrominance block relative to the top-left sample of the current chrominance codec block,
the variable edgeType, specifying whether to filter vertical (EDGE _ VER) or horizontal (EDGE _ HOR) EDGEs,
-a variable cIdx, specifying the color component index,
-a variable cQpPicOffset, specifying the picture-level chroma quantization parameter offset,
-a variable bS, specifying the boundary filter strength,
-a variable maxFilterLengthCbCr.
The output of this process is:
-the modified variable maxFilterLengthCbCr,
-the variable tC.
The derivation of the variable maxK is as follows:
-if the edgeType is equal to edge_ver, the following conditions apply:
maxK=(SubHeightC==1)?3:1 (8-1124)
Otherwise (edgeType equals edge_hor), the following conditions apply:
maxK=(SubWidthC==1)?3:1 (8-1125)
The values of pi,k and qi,k (where i=0..maxFilterLengthCbCr and k=0..maxK) are derived as follows:
-if the edgeType is equal to edge_ver, the following conditions apply:
q i,k =recPicture[xCb+xBl+i][yCb+yBl+k] (8-1126)
p i,k =recPicture[xCb+xBl-i-1][yCb+yBl+k] (8-1127)
subSampleC=SubHeightC (8-1128)
otherwise (edgeType equals edge_hor), the following conditions apply:
q i,k =recPicture[xCb+xBl+k][yCb+yBl+i] (8-1129)
p i,k =recPicture[xCb+xBl+k][yCb+yBl-i-1] (8-1130)
subSampleC=SubWidthC (8-1131)
The variables QpQ and QpP are set equal to the QpY values of the transform [[codec]] units that include the samples q0,0 and p0,0, respectively.
The variable QpC is derived as follows:
qPi=Clip3(0,63,((Qp Q +Qp P +1)>>1)+cQpPicOffset) (8-1132)
Qp C =ChromaQpTable[cIdx-1][qPi] (8-1133)
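Equations (8-1132) and (8-1133) can be sketched as follows (illustrative Python; the chroma_qp_table argument stands in for ChromaQpTable, and the function name is hypothetical):

```python
def chroma_qp(qp_q, qp_p, c_qp_pic_offset, chroma_qp_table, c_idx):
    # Equation (8-1132): average the two luma QPs, add the picture-level
    # chroma offset, and clip to [0, 63].
    qpi = min(63, max(0, ((qp_q + qp_p + 1) >> 1) + c_qp_pic_offset))
    # Equation (8-1133): map the clipped index through the chroma QP table.
    return chroma_qp_table[c_idx - 1][qpi]
```

With an identity table, the result is simply the clipped offset average of the two side QPs.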
Note that the variable cQpPicOffset provides an adjustment for the value of pps_cb_qp_offset or pps_cr_qp_offset, according to whether the filtered chrominance component is the Cb or Cr component. However, to avoid the need to vary the amount of the adjustment within the picture, the filtering process does not include an adjustment for the value of slice_cb_qp_offset or slice_cr_qp_offset, nor (when cu_chroma_qp_offset_enabled_flag is equal to 1) for the values of CuQpOffsetCb, CuQpOffsetCr or CuQpOffsetCbCr.
As specified in tables 8-18,
Q=Clip3(0,63,Qp C +(slice_beta_offset_div2<<1)) (8-1134)
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 of the slice that contains the sample q0,0.
Fig. 3 is a block diagram of a video processing apparatus 300. The apparatus 300 may be used to implement one or more of the methods described herein. The apparatus 300 may be implemented in a smart phone, tablet, computer, internet of things (IoT) receiver, or the like. The apparatus 300 may include one or more processors 302, one or more memories 304, and video processing hardware 306. The processor 302 may be configured to implement one or more of the methods described in this document. Memory 304 may be used to store data and code for implementing the methods and techniques described herein. Video processing hardware 306 may be used to implement some of the techniques described herein in hardware circuitry. In some embodiments, hardware 306 may reside at least partially within processor 302, such as a graphics coprocessor.
In some embodiments, the following solutions may be implemented as preferred solutions.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 1).
1. A method of video processing (e.g., method 400 depicted in fig. 4), comprising: for a transition between a video unit comprising one or more sub-partitions and a codec representation of the video unit, determining (402) that the transition is using an intra sub-block partition mode; and performing (404) a conversion based on the determination such that the intra prediction process is used for conversion of each of the one or more sub-partitions.
2. The method according to solution 1, wherein the intra prediction process includes updating the x-partition index variable and the y-partition index variable at the end of the intra prediction process.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 2).
3. A method of video processing, comprising: determining whether to use a wide-angle intra-prediction mode mapping during a conversion between a video block and a codec representation of the video block based on an applicability of a codec tool and/or a size of a prediction unit of the video block, without using a codec unit size of the video block; and performing the conversion based on the determination.
4. The method of solution 3, wherein the determining is performed such that the wide-angle intra-prediction mapping is disabled because the codec tool is a specific codec tool.
5. The method according to solution 3, wherein the determination is performed such that the wide-angle intra prediction map is an equivalent map because the codec tool is a specific codec tool.
6. The method according to solutions 4-5, wherein the specific codec tool is an intra sub-segmentation tool.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 3).
7. A method of video processing, comprising: for a conversion between a video region including a codec unit and a codec representation of the video region, determining a delta quantization parameter (delta QP) applicable to all intra sub-block partitions of the codec unit, wherein the codec unit includes the intra sub-block partitions; and performing the conversion using the delta QP; wherein the delta QP is signaled for the codec unit in the codec representation.
8. The method according to solution 7, wherein the delta QP is signaled with the first transform unit of the video region.
9. The method according to solution 7, wherein the delta QP is signaled with the last transform unit of the video region.
10. The method according to solution 7, wherein the delta QP is signaled with a transform unit having a predetermined position within the video region.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 4).
11. A method of video processing, comprising: for a transition between a video region and a codec representation of the video region, determining a Quantization Parameter (QP) for the transition of a Codec Unit (CU) in the video region based on a QP of a Transform Unit (TU) in the video region; and performing conversion using the QP of the TU and/or the QP of the CU.
12. The method of solution 11, wherein the QP of the CU is determined to be equal to the QP of the TU that is the last or first TU of the video region.
13. The method of any of solutions 11-12, wherein determining the QP for the CU is the QP for the TU prior to adding the delta QP to the QP for the TU.
14. The method of any of solutions 11-13, wherein performing the conversion further comprises: the deblocking filter is selectively applied to video regions during transitions based on QP of the CU.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 5).
15. A method of video processing, comprising: for a conversion between a video region including one or more codec units and one or more transform units and a codec representation of the video region, determining whether to apply a deblocking filter to an edge of a video block based on the transform unit to which the edge belongs; and performing the conversion based on the determination.
16. The method according to solution 15, further comprising: the conversion of the video block is performed using an intra sub-division mode, and wherein the determination based on the transform unit is performed by checking a quantization parameter of the transform unit.
17. The method of solution 15, wherein, in a case where the size of the video block is larger than the size of a virtual pipeline data unit or the maximum transform block size, the determining is further based on a quantization parameter of the codec unit to which the edge belongs.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 6).
18. A method of video processing, comprising: for a transition between a video block using an intra sub-partition mode and a codec representation of the video block, determining whether a transform operation is skipped based on a dimension of the codec block or the prediction block or the transform block; and performing the conversion based on the determination.
19. The method of solution 18, wherein the intra sub-segmentation mode is a vertical intra sub-segmentation mode.
20. The method of solution 18, wherein the intra sub-segmentation mode is a horizontal intra sub-segmentation mode.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 7).
21. The method of any of the solutions 1-20, wherein the conversion using the intra sub-partition mode comprises using 2*W above-neighboring samples and 2*H left-neighboring samples for the conversion of a WxH transform block size.
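A sketch of the reference-sample extent described in solution 21 (illustrative only; the function and its return shape are assumptions, not part of the claimed subject matter):

```python
def isp_reference_samples(W, H):
    # For a WxH transform block coded with the intra sub-partition mode,
    # 2*W above-neighboring samples and 2*H left-neighboring samples are
    # available, the same extent as for regular intra prediction.
    return {'above': 2 * W, 'left': 2 * H}
```

For a narrow 4x16 sub-partition, this gives 8 above-neighboring and 32 left-neighboring samples.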
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., items 8 and 9).
22. A method of video processing, comprising: for a transition between a video block and a codec representation of the video block, determining a type of transform to apply based on whether an intra-sub-partition mode or a lossless codec mode is used for the transition; and performing the conversion according to the determination.
23. The method of solution 22, wherein the determining further uses a transform block width to determine a type of transform.
24. The method of any of the solutions 22-23, wherein the type of transformation is a horizontal transformation or a vertical transformation.
25. The method of any of the solutions 22-24, wherein, in case a lossless codec mode is used, determining the type of transform comprises determining to use a 4x4 transform.
26. The method of any of the solutions 22-24, wherein, in case an intra sub-partition mode and a lossless codec mode are used, the determining comprises: the type of transform is determined as a PxQ transform, where P and/or Q are integers depending on the video block size.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 10).
27. A method of video processing, comprising: the conversion is performed between the video block and the codec representation of the video block according to an exclusivity rule, due to which lossless codec mode is used for the conversion or intra-sub-split mode is used for the conversion, wherein the codec representation comprises an indication of using lossless codec mode or using intra-sub-split mode.
28. The method of solution 27, wherein the exclusivity rule further defines that the lossless codec mode is disabled in a case where a bypass enable flag is enabled for the video block at a codec unit, codec tree unit, virtual pipeline data unit, slice, picture, or sequence level.
The following solutions may be implemented with other techniques described in the items listed in the previous section (e.g., item 11).
29. The method according to any of the solutions 1-28, wherein a given transform unit divided due to the segmentation in the intra sub-segmentation tool is prohibited from having all zero coefficients.
30. The method of any of solutions 1-29, wherein the converting comprises encoding the video into the encoded representation.
31. The method of any of solutions 1-29, wherein the converting comprises decoding the encoded and decoded representation to generate pixel values of the video.
32. A video decoding apparatus comprising a processor configured to implement the method of one or more of the solutions 1-31.
33. A video codec device comprising a processor configured to implement the method of one or more of the solutions 1-31.
34. A computer program product having computer code stored thereon, which when executed by a processor causes the processor to implement the method of any of solutions 1-31.
35. Methods, apparatus, or systems described in this document.
In the above solutions, performing the conversion includes using the result of a previous decision step during the encoding or decoding operation to obtain the conversion result.
Fig. 5 is a block diagram illustrating an example video processing system 500 in which various techniques disclosed herein may be implemented. Various implementations may include some or all of the components of system 500. The system 500 may include an input 502 for receiving video content. The video content may be received in an original or uncompressed format (e.g., 8 or 10 bit multi-component pixel values), or may be received in a compressed or codec format. Input 502 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces such as ethernet, passive Optical Network (PON), and wireless interfaces such as Wi-Fi or cellular interfaces.
The system 500 can include a codec component 504 that can implement various codec or decoding methods described herein. The codec component 504 may reduce the average bit rate of the video from the input 502 to the output of the codec component 504 to produce a codec representation of the video. Thus, codec technology is sometimes referred to as video compression or video transcoding technology. The output of the codec component 504 may be stored or transmitted via a communication connection, as represented by the component 506. The component 508 may use the stored or communicated bitstream (or codec) representation of the video received at the input 502 to generate pixel values or displayable video that is sent to a display interface 510. The process of generating user-viewable video from a bitstream representation is sometimes referred to as video decompression. Further, while certain video processing operations are referred to as "codec" operations or tools, it should be understood that a codec tool or operation is used at the encoder and that a corresponding decoding tool or operation that inverts the codec results will be performed by the decoder.
Examples of the peripheral bus interface or the display interface may include a Universal Serial Bus (USB) or a High Definition Multimedia Interface (HDMI) or a display port, etc. Examples of storage interfaces include SATA (serial advanced technology attachment), PCI, IDE interfaces, and the like. The techniques described herein may be implemented in various electronic devices such as mobile phones, laptops, smartphones, or other devices capable of performing digital data processing and/or video display.
Fig. 6 is a flow chart representation of a method 600 for video processing in accordance with the present technique. The method 600 includes, at operation 610, performing a transition between a block of a current picture of a video and a codec representation of the video using an intra sub-block segmentation (ISP) mode. Using the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The block is partitioned into a plurality of sub-partitions including a first sub-partition having a same upper left corner position as the upper left corner position of the block. In some embodiments, the x-partition index variable and the y-partition index variable are updated after the reconstruction process is invoked for one sub-partition.
Fig. 7 is a flowchart representation of a method 700 for video processing in accordance with the present technique. The method 700 includes, at operation 710, determining, based on a rule, whether wide-angle intra-prediction mode mapping is enabled for a transition between a block of video and a codec representation of the video. The wide-angle prediction mode is a mode in which a reference sample and a sample to be predicted form an obtuse angle with respect to an upper left direction. The rules specify a determination using dimensions of a prediction unit with a codec tool enabled for the conversion of the block. The method 700 further includes, at operation 720, performing the conversion based on the determination.
In some embodiments, the wide-angle intra-prediction mode mapping is not used with the codec tool enabled for the conversion of the block. In some embodiments, the wide-angle intra-prediction map is an equivalent map with the codec tool enabled for the conversion of the block. In some embodiments, the codec tool includes an intra sub-block partitioning (ISP) mode in which intra prediction processing is used to determine a prediction for each sub-partition based on samples in the current picture.
Fig. 8 is a flowchart representation of a method 800 for video processing in accordance with the present technique. The method 800 includes, at operation 810, performing a conversion between a codec unit of a video region of a video and a codec representation of the video. The codec unit is partitioned into one or more partitions, and the codec unit is encoded in the codec representation using a quantized residual signal obtained by intra-prediction processing of each of the one or more partitions. The codec representation includes syntax elements indicating quantization parameters for quantization. For the codec unit, the codec representation includes the syntax element at most once, and the syntax element indicates a difference of a value of the quantization parameter and another quantization value of a previously processed codec unit based on the video.
In some embodiments, where intra sub-block segmentation processing based on the one or more segmentations is used, differences in values of the quantization parameter are omitted in the codec representation. In some embodiments, the difference in the values of the quantization parameter is signaled by a first transform unit of the video region. In some embodiments, the difference in the values of the quantization parameter is signaled by the last transform unit of the video region.
In some embodiments, the difference in the values of the quantization parameter is signaled by a particular transform unit, whether or not the particular transform unit includes non-zero coefficients. In some embodiments, in case a particular transform unit comprises non-zero coefficients, the difference of the values of the quantization parameter is signaled by the particular transform unit. In some embodiments, where a particular transform unit includes only zero coefficients, the difference in values of the quantization parameter defaults to 0. In some embodiments, the particular transform unit comprises a first transform unit or a last transform unit of the video region.
Fig. 9 is a flowchart representation of a method 900 for video processing in accordance with the present technique. Method 900 includes, at operation 910, for a transition between a video block comprising one or more partitions and a coded representation of the video using an intra-sub-block partition (ISP) mode, determining whether to skip a transform operation during encoding or whether to skip an inverse transform operation during decoding based on characteristics of the block or the ISP mode. Using the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The method 900 further includes, at operation 920, performing the conversion based on the determination.
In some embodiments, the characteristics of the block include a dimension of the block. In some embodiments, the block comprises a codec block, a prediction block, or a transform block. In some embodiments, the characteristics of the ISP mode include a direction in which the ISP is applied, the direction including a vertical direction or a horizontal direction. In some embodiments, the same reference samples are used for the conversion, whether or not the ISP mode is used. In some embodiments, the block comprises a transform block having a width W and a height H, and wherein 2*W neighboring samples above the block and 2*H neighboring samples to the left of the block are used for the conversion of the block.
Fig. 10 is a flowchart representation of a method 1000 for video processing in accordance with the present technique. Method 1000 includes, at operation 1010, for a transition between a coded representation of the video and a video block including one or more partitions, determining a type of transform to use during the transition based on whether an intra-sub-block partition (ISP) mode is to be used for the transition. Using the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The converting includes: the transform is applied during encoding prior to encoding in a codec representation or the inverse of the transform is applied to coefficient values parsed from the codec representation prior to reconstructing sample values of the block. The method 1000 also includes, at operation 1020, performing the conversion based on the determination.
In some embodiments, the type of transformation includes a horizontal transformation or a vertical transformation. In some embodiments, the determination is further based on whether the transform block width is greater than a threshold K, K being an integer of 1 or 2. In some embodiments, the determination is further based on whether the transform block height is greater than a threshold K, K being an integer of 1 or 2.
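One possible reading of the width/height threshold in method 1000, sketched in Python; the DST-7/DCT-2 transform names and the selection rule are assumptions for illustration, not stated in this document:

```python
def transform_type_for_isp(width, height, K=1):
    # Hypothetical sketch: choose the horizontal/vertical transform type for
    # an ISP-coded block by comparing each dimension against the threshold K
    # (K is 1 or 2 per the embodiments above). The transform names are assumed.
    horiz = 'DST-7' if width > K else 'DCT-2'
    vert = 'DST-7' if height > K else 'DCT-2'
    return horiz, vert
```

A dimension at or below the threshold falls back to the default transform in this sketch.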
Fig. 11 is a flow chart representation of a method 1100 for video processing in accordance with the present technique. Method 1100 includes, at operation 1110, determining, for a transition between a block of video containing one or more partitions and a codec representation of the video, a restriction of an intra-sub-block partition (ISP) mode based on whether lossless codec processing is applied to the block. Using the ISP mode, a prediction is determined for each sub-partition using an intra prediction process based on samples in the current picture. The method 1100 also includes, at operation 1120, performing the conversion based on the determination.
In some embodiments, where the lossless codec mode is applied to the block, the limiting includes implementing a transform size limit on the block using the ISP mode codec. In some embodiments, the transform size limit comprises a 4 x 4 transform size. In some embodiments, the block has dimensions m×n, including four partitions, each partition having dimensions (M/4) ×n, each partition being divided into 4×4 transform blocks for performing transform operations and/or quantization operations. In some embodiments, where the lossless codec mode is applied to the block, the limiting includes enforcing a codec block size limit on the block that is encoded using the ISP mode. In some embodiments, the block comprises one or more partitions, and wherein a width of each of the one or more partitions is equal to or greater than 4. In some embodiments, the block comprises one or more partitions, and wherein each of the one or more partitions has a height equal to or greater than 4.
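The 4x4 tiling described above can be sketched as follows (illustrative Python; assumes an MxN block with M divisible by 16 and N by 4, as in the embodiment):

```python
def lossless_isp_transform_blocks(M, N):
    # Sketch of the constraint above: an MxN block in lossless mode with ISP
    # is split into four (M/4)xN vertical partitions, each tiled into 4x4
    # transform blocks for the (bypassed) transform/quantization stages.
    parts = [(M // 4, N)] * 4
    blocks_per_part = (M // 4 // 4) * (N // 4)
    return parts, blocks_per_part
```

For a 16x16 block, each of the four 4x16 partitions contains four 4x4 transform blocks.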
In some embodiments, where the lossless codec mode is applied to the block, the restriction specifies that signaling of syntax elements in the codec representation depends on a single partitioned dimension. The syntax element specifies a direction in which the block is partitioned into one or more partitions. In some embodiments, in the event that the width or height of the single partition is less than 4, the direction specified by the syntax element is not allowed. In some embodiments, signaling of the syntax element is omitted in the codec representation, and wherein a value of the syntax element is derived based on a shape of the block.
In some embodiments, the restriction specifies that the ISP mode is disabled if the lossless codec process is applied to the block. In some embodiments, the limiting includes enabling the ISP mode if the lossless codec process is not applied to the block. In some embodiments, a syntax flag in the codec representation that enables transform quantization bypass indicates that the ISP mode is disabled for a video unit if the lossless codec process is enabled at the video unit level. The video unit includes a codec unit, a codec tree unit, a virtual pipeline data unit, a slice, a picture, or a sequence. In some embodiments, none of the transform units determined using the ISP mode include non-zero coefficients. In some embodiments, all transform units determined using the ISP mode include only zero coefficients. In some embodiments, syntax flags indicating non-zero transform coefficients are signaled for all transform units in the codec representation.
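The enable/disable gating described above reduces to a simple predicate. The sketch below is illustrative only; the flag names stand in for whatever syntax (e.g. a transform-quantization bypass flag at some video-unit level) signals the lossless codec process.

```python
def isp_allowed(lossless_coding, transquant_bypass_flag):
    """Sketch of the restriction above: ISP mode is disabled whenever
    the lossless codec process applies to the block, whether indicated
    directly or via a transform-quantization bypass flag signaled at a
    video-unit level (CU, CTU, VPDU, slice, picture, or sequence)."""
    return not (lossless_coding or transquant_bypass_flag)

print(isp_allowed(False, False))  # True: ISP mode may be used
print(isp_allowed(True, False))   # False: lossless coding disables ISP
```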
Fig. 12 is a flowchart representation of a method 1200 for video processing in accordance with the present technique. The method 1200 includes, at operation 1210, performing a conversion between a codec unit of a video region of a video and a codec representation of the video according to a rule. The codec unit is divided into a plurality of transform units. The rule specifies a relationship between a quantization parameter (QP) of the codec unit and the quantization parameters of one or more of the plurality of transform units.
In some embodiments, the QP for the codec unit is equal to a QP for a last transform unit or a first transform unit of the codec unit. In some embodiments, the QP of a previously processed codec unit of the video, prior to adding a difference between a quantization parameter value and another quantization value for the codec unit, is determined as the QP for at least one transform unit within the codec unit. In some embodiments, the QP for the codec unit is derived using a function of the QPs of one or more transform units and at least one applied delta QP, the delta QP being a difference between a quantization parameter value and another quantization value of a previously processed codec unit of the video. In some embodiments, performing the conversion further comprises selectively applying, during the conversion, a deblocking filter to the codec unit based on the QP of the codec unit.
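Two of the QP rules above can be sketched as small functions. These are illustrative helpers under the stated assumptions (the function and parameter names are not from any codec specification): the codec-unit QP taken from its last or first transform unit, and a transform-unit QP formed from a previously processed codec unit's QP plus a signaled difference.

```python
def derive_cu_qp(tu_qps, use_last=True):
    """Sketch of one rule above: the codec-unit QP equals the QP of its
    last (or first) transform unit. `tu_qps` lists the transform-unit
    QPs in coding order."""
    return tu_qps[-1] if use_last else tu_qps[0]

def derive_tu_qp(prev_cu_qp, delta_qp):
    """Sketch of the delta-QP rule above: a transform-unit QP is the
    previously processed codec unit's QP plus a signaled difference
    between a quantization parameter value and another quantization
    value."""
    return prev_cu_qp + delta_qp

print(derive_cu_qp([22, 24, 26]))  # 26 (QP of the last transform unit)
print(derive_tu_qp(26, -2))        # 24
```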
Fig. 13 is a flowchart representation of a method 1300 for video processing in accordance with the present technique. The method 1300 includes, at operation 1310, determining, for a conversion between a video region and a codec representation of the video region, whether and/or how to apply a deblocking filter to an edge based on quantization parameters (QPs) of transform units related to the edge, the video region including one or more codec units and one or more transform units. The method 1300 also includes performing the conversion based on the determination.
In some embodiments, the QP for the transform unit is used where an intra sub-block segmentation process is used for the conversion of the video region. In some embodiments, the QP for the transform unit is used instead of the QP for the codec unit in the event that the size of the video region is greater than the size of a virtual pipeline data unit or a maximum transform block size.
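The QP selection for deblocking described in the two embodiments above can be sketched as a single predicate. This is an illustrative sketch, assuming a scalar region size compared against the virtual pipeline data unit size and the maximum transform block size; all names are hypothetical.

```python
def qp_for_deblocking(cu_qp, tu_qp, isp_used, region_size,
                      vpdu_size, max_tb_size):
    """Select the QP used by the deblocking filter at an edge, per the
    embodiments above: the transform-unit QP replaces the codec-unit QP
    when intra sub-block segmentation is used for the region, or when
    the region exceeds the VPDU size or the maximum transform block
    size."""
    if isp_used or region_size > vpdu_size or region_size > max_tb_size:
        return tu_qp
    return cu_qp

# ISP in use: the transform-unit QP (28) is used instead of the CU QP (30).
print(qp_for_deblocking(30, 28, isp_used=True,
                        region_size=32, vpdu_size=64, max_tb_size=64))  # 28
```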
In some embodiments, the converting includes encoding the video into the codec representation. In some embodiments, the converting includes decoding the codec representation to generate pixel values of the video.
Some embodiments of the disclosed technology include making a decision or determination to enable a video processing tool or mode. In one example, when the video processing tool or mode is enabled, the codec will use or implement the tool or mode in the processing of blocks of video, but may not necessarily modify the resulting bitstream based on the usage of the tool or mode. That is, when the video processing tool or mode is enabled based on the decision or determination, the conversion from the blocks of video to the bitstream representation of the video will use the video processing tool or mode. In another example, when the video processing tool or mode is enabled, the decoder will process the bitstream with the knowledge that the bitstream has been modified based on the video processing tool or mode. That is, the conversion from the bitstream representation of the video to the blocks of video will be performed using the video processing tool or mode that was enabled based on the decision or determination.
Some embodiments of the disclosed technology include making a decision or determination to disable a video processing tool or mode. In one example, when the video processing tool or mode is disabled, the codec will not use the tool or mode in the conversion of the blocks of video to the bitstream representation of the video. In another example, when the video processing tool or mode is disabled, the decoder will process the bitstream with the knowledge that the bitstream has not been modified using the video processing tool or mode that was disabled based on the decision or determination.
The disclosed and other solutions, examples, embodiments, modules, and functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer-readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to a suitable receiver apparatus.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup-language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special-purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks. However, a computer need not have such devices. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special-purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular subject matter. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described, and other implementations, enhancements, and variations may be made based on what is described and illustrated in this patent document.

Claims (21)

1. A method of processing video data, comprising:
determining, for a conversion between a current video block of a video and a bitstream of the video, whether a segmentation mode is to be used for the current video block;
dividing the current video block into a plurality of sub-regions in a case that the segmentation mode is used for the current video block; and
performing the conversion based on the plurality of sub-regions,
wherein the plurality of sub-regions share a same intra-prediction mode, and the plurality of sub-regions includes a first sub-region having a same upper-left corner position as an upper-left corner position of the current video block.
2. The method of claim 1, wherein, in a case that the segmentation mode is used for the current video block and the current video block is a luma transform block, a variable indicating a ratio of width to height is derived based on a width of the current video block and a height of the current video block,
wherein the variable is used to derive an intra-prediction mode for each of the plurality of sub-regions in a wide-angle intra-prediction mode mapping process, and
wherein the wide-angle intra-prediction mode is a mode in which a reference sample and a sample to be predicted form an obtuse angle relative to an upper-left direction.
3. The method of claim 2, wherein the variable is derived based on the width of the current video block and the height of the current video block in a case that the segmentation mode is not used for the current video block.
4. The method of claim 1, wherein, in a case that the segmentation mode is used for the current video block, a syntax element for a sub-region containing non-zero coefficients is included in the bitstream,
wherein the syntax element indicates an absolute value of a difference between a quantization parameter of the sub-region and a prediction of the quantization parameter.
5. The method of claim 1, wherein, in a case that the segmentation mode is used for the current video block, the segmentation depends on a dimension of the current video block.
6. The method of claim 5, wherein a segmentation that results in a width or a height of a sub-region being less than a predefined value is not allowed.
7. The method of claim 5, wherein, in a case that the current video block is of size M×2M, the segmentation of the current video block is inferred to be different from the segmentation of a video block of a size other than M×2M,
wherein M is a predefined integer.
8. The method of claim 1, wherein the converting comprises encoding the current video block into the bitstream.
9. The method of claim 1, wherein the converting comprises decoding the current video block from the bitstream.
10. An apparatus for processing video data, comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to:
determine, for a conversion between a current video block of a video and a bitstream of the video, whether a segmentation mode is to be used for the current video block;
divide the current video block into a plurality of sub-regions in a case that the segmentation mode is used for the current video block; and
perform the conversion based on the plurality of sub-regions,
wherein the plurality of sub-regions share a same intra-prediction mode, and the plurality of sub-regions includes a first sub-region having a same upper-left corner position as an upper-left corner position of the current video block.
11. The apparatus of claim 10, wherein, in a case that the segmentation mode is used for the current video block and the current video block is a luma transform block, a variable indicating a ratio of width to height is derived based on a width of the current video block and a height of the current video block,
wherein the variable is used to derive an intra-prediction mode for each of the plurality of sub-regions in a wide-angle intra-prediction mode mapping process, and
wherein the wide-angle intra-prediction mode is a mode in which a reference sample and a sample to be predicted form an obtuse angle relative to an upper-left direction.
12. The apparatus of claim 11, wherein the variable is derived based on the width of the current video block and the height of the current video block in a case that the segmentation mode is not used for the current video block.
13. The apparatus of claim 10, wherein, in a case that the segmentation mode is used for the current video block, a syntax element for a sub-region containing non-zero coefficients is included in the bitstream,
wherein the syntax element indicates an absolute value of a difference between a quantization parameter of the sub-region and a prediction of the quantization parameter.
14. The apparatus of claim 10, wherein, in a case that the segmentation mode is used for the current video block, the segmentation depends on a dimension of the current video block.
15. The apparatus of claim 14, wherein a segmentation that results in a width or a height of a sub-region being less than a predefined value is not allowed.
16. The apparatus of claim 14, wherein, in a case that the current video block is of size M×2M, the segmentation of the current video block is inferred to be different from the segmentation of a video block of a size other than M×2M,
wherein M is a predefined integer.
17. A non-transitory computer-readable storage medium storing instructions that cause a processor to:
determine, for a conversion between a current video block of a video and a bitstream of the video, whether a segmentation mode is to be used for the current video block;
divide the current video block into a plurality of sub-regions in a case that the segmentation mode is used for the current video block; and
perform the conversion based on the plurality of sub-regions,
wherein the plurality of sub-regions share a same intra-prediction mode, and the plurality of sub-regions includes a first sub-region having a same upper-left corner position as an upper-left corner position of the current video block.
18. The non-transitory computer-readable storage medium of claim 17, wherein, in a case that the segmentation mode is used for the current video block and the current video block is a luma transform block, a variable indicating a ratio of width to height is derived based on a width of the current video block and a height of the current video block,
wherein the variable is used to derive an intra-prediction mode for each of the plurality of sub-regions in a wide-angle intra-prediction mode mapping process, and
wherein the wide-angle intra-prediction mode is a mode in which a reference sample and a sample to be predicted form an obtuse angle relative to an upper-left direction.
19. The non-transitory computer-readable storage medium of claim 18, wherein the variable is derived based on the width of the current video block and the height of the current video block in a case that the segmentation mode is not used for the current video block.
20. A non-transitory computer-readable recording medium storing a bitstream of a video generated by a method performed by a video processing apparatus, wherein the method comprises:
determining whether a segmentation mode is to be used for a current video block of the video;
dividing the current video block into a plurality of sub-regions in a case that the segmentation mode is used for the current video block; and
generating the bitstream based on the plurality of sub-regions,
wherein the plurality of sub-regions share a same intra-prediction mode, and the plurality of sub-regions includes a first sub-region having a same upper-left corner position as an upper-left corner position of the current video block.
21. A method for storing a bitstream of a video, comprising:
determining whether a segmentation mode is to be used for a current video block of the video;
dividing the current video block into a plurality of sub-regions in a case that the segmentation mode is used for the current video block;
generating the bitstream based on the plurality of sub-regions; and
storing the bitstream in a non-transitory computer-readable storage medium,
wherein the plurality of sub-regions share a same intra-prediction mode, and the plurality of sub-regions includes a first sub-region having a same upper-left corner position as an upper-left corner position of the current video block.
CN202311686113.7A 2019-08-30 2020-08-31 Sub-segmentation in intra-coding Pending CN117676167A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CNPCT/CN2019/103762 2019-08-30
CN2019103762 2019-08-30
CN202080061155.XA CN114303383B (en) 2019-08-30 2020-08-31 Sub-segmentation in intra-coding
PCT/CN2020/112425 WO2021037258A1 (en) 2019-08-30 2020-08-31 Sub-partitioning in intra coding

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202080061155.XA Division CN114303383B (en) 2019-08-30 2020-08-31 Sub-segmentation in intra-coding

Publications (1)

Publication Number Publication Date
CN117676167A true CN117676167A (en) 2024-03-08

Family

ID=74684261

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202080061155.XA Active CN114303383B (en) 2019-08-30 2020-08-31 Sub-segmentation in intra-coding
CN202311686113.7A Pending CN117676167A (en) 2019-08-30 2020-08-31 Sub-segmentation in intra-coding

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202080061155.XA Active CN114303383B (en) 2019-08-30 2020-08-31 Sub-segmentation in intra-coding

Country Status (5)

Country Link
US (2) US11924422B2 (en)
JP (2) JP7381720B2 (en)
KR (1) KR20220051340A (en)
CN (2) CN114303383B (en)
WO (1) WO2021037258A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024005547A1 (en) * 2022-06-29 2024-01-04 엘지전자 주식회사 Isp mode-based image encoding/decoding method and device, and recording medium for storing bitstream
WO2024140853A1 (en) * 2022-12-30 2024-07-04 Douyin Vision Co., Ltd. Method, apparatus, and medium for video processing

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101091792B1 (en) 2007-04-17 2011-12-08 노키아 코포레이션 Feedback based scalable video coding
US20120189052A1 (en) * 2011-01-24 2012-07-26 Qualcomm Incorporated Signaling quantization parameter changes for coded units in high efficiency video coding (hevc)
US9525861B2 (en) 2012-03-14 2016-12-20 Qualcomm Incorporated Disparity vector prediction in video coding
WO2014089727A1 (en) 2012-12-14 2014-06-19 Qualcomm Incorporated Inside view motion prediction among texture and depth view components with asymmetric spatial resolution
US9615090B2 (en) 2012-12-28 2017-04-04 Qualcomm Incorporated Parsing syntax elements in three-dimensional video coding
US9516306B2 (en) 2013-03-27 2016-12-06 Qualcomm Incorporated Depth coding modes signaling of depth data for 3D-HEVC
US9369708B2 (en) 2013-03-27 2016-06-14 Qualcomm Incorporated Depth coding modes signaling of depth data for 3D-HEVC
WO2016123792A1 (en) 2015-02-06 2016-08-11 Microsoft Technology Licensing, Llc Skipping evaluation stages during media encoding
CN114786009A (en) * 2016-03-16 2022-07-22 寰发股份有限公司 Method and apparatus for processing video data with limited block size in video coding
EP3453174A1 (en) 2016-05-06 2019-03-13 VID SCALE, Inc. Method and system for decoder-side intra mode derivation for block-based video coding
WO2019009590A1 (en) * 2017-07-03 2019-01-10 김기백 Method and device for decoding image by using partition unit including additional region
US10666943B2 (en) * 2017-09-15 2020-05-26 Futurewei Technologies, Inc. Block partition structure in video compression
CN111971959B (en) 2018-02-09 2024-06-14 弗劳恩霍夫应用研究促进协会 Partition-based intra coding concept
AU2019342803B2 (en) * 2018-09-21 2023-07-13 Huawei Technologies Co., Ltd. Apparatus and method for inverse quantization
WO2020167841A1 (en) * 2019-02-11 2020-08-20 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for intra sub-partition coding mode
WO2020192793A1 (en) * 2019-03-28 2020-10-01 Huawei Technologies Co., Ltd. Method and apparatus for intra smoothing
EP3723368A1 (en) * 2019-04-12 2020-10-14 InterDigital VC Holdings, Inc. Wide angle intra prediction with sub-partitions

Also Published As

Publication number Publication date
CN114303383A (en) 2022-04-08
JP2024010156A (en) 2024-01-23
KR20220051340A (en) 2022-04-26
US20220191490A1 (en) 2022-06-16
WO2021037258A1 (en) 2021-03-04
JP7381720B2 (en) 2023-11-15
US20240129466A1 (en) 2024-04-18
US11924422B2 (en) 2024-03-05
CN114303383B (en) 2024-07-05
JP2022546395A (en) 2022-11-04

Similar Documents

Publication Publication Date Title
CN113812162B (en) Context modeling for simplified quadratic transforms in video
CN113678453B (en) Matrix-based intra-prediction context determination
CN105359521B (en) Method and apparatus for emulating low fidelity coding in a high fidelity encoder
CN113812155B (en) Interaction between multiple inter-frame coding and decoding methods
CN114208190B (en) Matrix selection for downscaling secondary transforms in video coding
CN113728636B (en) Selective use of quadratic transforms in codec video
CN114641997B (en) Color component based grammar signaling and parsing
CN114365490B (en) Coefficient scaling for high precision image and video codecs
US20240129466A1 (en) Sub-partitioning in intra coding
CN117376550A (en) High precision transform and quantization for image and video coding
CN113728631B (en) Intra sub-block segmentation and multiple transform selection
CN113994696A (en) Use of block-size dependent quadratic transforms in codec video
CN110944174B (en) Method and device for selecting conversion of small-size blocks
CN113812152A (en) Filter selection for intra video coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination