WO2020228761A1 - Filter selection for intra video coding - Google Patents

Filter selection for intra video coding

Info

Publication number
WO2020228761A1
WO2020228761A1 (PCT/CN2020/090193; CN2020090193W)
Authority
WO
WIPO (PCT)
Prior art keywords
block
current video
intra
video block
prediction
Prior art date
Application number
PCT/CN2020/090193
Other languages
English (en)
French (fr)
Inventor
Zhipin DENG
Li Zhang
Hongbin Liu
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Priority to CN202080035214.6A (CN113812152B)
Publication of WO2020228761A1 publication Critical patent/WO2020228761A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter

Definitions

  • This patent document relates to video processing techniques, devices and systems.
  • Devices, systems and methods related to digital video coding and decoding using interpolation filters during intra coding are described.
  • a method of video processing includes generating, for a current video block of a video, an intra prediction block using an interpolation filter selected according to a rule; and performing a conversion between the current video block and a coded representation of the video using the prediction block, wherein the rule specifies to select the interpolation filter based on at least one of an intra angular prediction mode, a block dimension, or usage of a wide-angle mode, and without using a type of an intra prediction mode of the current video block.
  • another method of video processing includes determining, based on a rule, whether to apply a rounding operation in a shifting process during a derivation of prediction samples for a conversion between a current video block of a video and a coded representation of the video, wherein the current video block is coded using intra mode in the coded representation; and performing the conversion using the prediction samples based on the determining.
  • another method of video processing includes applying, for a conversion between a current video block of a video and a coded representation of the video, a uniquely defined rounding process for performing a bit-shifting used to generate an intermediate video block, wherein the current video block is coded using intra mode in the coded representation; and performing the conversion using the intermediate video block.
  • another method of video processing includes determining, based on a rule, to use a linear interpolation filter in a derivation of luma reference samples for an intra angular mode prediction for a current video block of a video; and performing, based on the decision, a conversion between the current video block and a coded representation of the current video block.
  • another method of video processing includes determining, based on a rule, to use a nearest neighbor sample of a current video block of a video to derive luma reference samples in an intra angular mode prediction for a current video block; and performing a conversion between the current video block and a coded representation of the video using the luma reference samples.
  • a video encoder apparatus comprising a processor configured to implement an above-described method is disclosed.
  • a video decoder apparatus comprising a processor configured to implement an above-described method is disclosed.
  • a computer readable medium has code for execution of one of above-described methods stored thereon.
  • FIG. 1 shows an example of 33 intra prediction directions.
  • FIG. 2 shows examples of new and old intra prediction modes.
  • FIG. 3 shows an example of intra mode index for 67 intra prediction modes.
  • FIG. 4A shows examples of sub-partitions for 4x8 and 8x4 CUs.
  • FIG. 4B shows examples of sub-partitions for CUs other than 4x8, 8x4 and 4x4.
  • FIG. 5 shows an example of intra modes.
  • FIG. 6 is a block diagram of an example of a video processing apparatus.
  • FIG. 7 is a block diagram of an example of a video processing system.
  • FIGS. 8A to 8E are flowcharts for example methods for video processing based on some implementations of the disclosed technology.
  • Section headings are used in the present document to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section.
  • while certain embodiments are described with reference to Versatile Video Coding or other specific video codecs, the disclosed techniques are applicable to other video coding technologies as well.
  • video processing encompasses video coding or compression, video decoding or decompression, and video transcoding in which video is converted from one compressed format to another compressed format or to a different compressed bitrate.
  • This document is related to video coding technologies. Specifically, it is related to the intra coding process in video coding. It may be applied to existing video coding standards such as HEVC, or to the Versatile Video Coding (VVC) standard being finalized. It may also be applicable to future video coding standards or video codecs.
  • Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
  • the ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards.
  • AVC: H.264/MPEG-4 Advanced Video Coding
  • H.265/HEVC: High Efficiency Video Coding
  • the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized.
  • the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015.
  • JEM: Joint Exploration Model
  • Intra prediction involves producing samples for a given TB (transform block) using samples previously reconstructed in the considered colour channel.
  • the intra prediction mode is separately signalled for the luma and chroma channels, with the chroma channel intra prediction mode optionally dependent on the luma channel intra prediction mode via the ‘DM_CHROMA’ mode.
  • the intra prediction mode is signalled at the PB (prediction block) level, the intra prediction process is applied at the TB level, in accordance with the residual quad-tree hierarchy for the CU, thereby allowing the coding of one TB to have an effect on the coding of the next TB within the CU, and therefore reducing the distance to the samples used as reference values.
  • HEVC includes 35 intra prediction modes: a DC mode, a planar mode and 33 directional, or ‘angular’, intra prediction modes.
  • the 33 angular intra prediction modes are illustrated in FIG. 1.
  • Figure 1 shows an example of 33 intra prediction directions.
  • the intra prediction mode is specified as either planar, DC, horizontal, vertical, ‘DM_CHROMA’ mode or sometimes diagonal mode ‘34’ .
  • the chroma PB may overlap two or four (respectively) luma PBs; in this case the luma direction for DM_CHROMA is taken from the top left of these luma PBs.
  • the DM_CHROMA mode indicates that the intra prediction mode of the luma colour channel PB is applied to the chroma colour channel PBs. Since this is relatively common, the most-probable-mode coding scheme of the intra_chroma_pred_mode is biased in favor of this mode being selected.
  • in VTM4, the number of directional intra modes is extended from 33, as used in HEVC, to 65.
  • the new directional modes not in HEVC are depicted as dotted arrows in FIG. 2 and the planar and DC modes remain the same.
  • examples of intra prediction modes and their associated intra prediction mode indices are: Planar (0), DC (1), Vertical (50), Horizontal (18), Top-left Mode (34), and Top-right Mode (66).
  • FIG. 2 shows examples of new and old intra prediction modes.
  • FIG. 3 shows an example of intra mode index for 67 intra prediction modes.
  • Conventional angular intra prediction directions are defined from 45 degrees to -135 degrees in clockwise direction.
  • in VTM4, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for non-square blocks.
  • the replaced modes are signalled using the original mode indexes, which are remapped to the indexes of wide angular modes after parsing.
  • the total number of intra prediction modes is unchanged, i.e., 67, and the intra mode coding method is unchanged.
  • ISP Intra sub-partitions
  • the Intra Sub-Partitions (ISP) tool divides luma intra-predicted blocks vertically or horizontally into 2 or 4 sub-partitions depending on the block size. For example, the minimum block size for ISP is 4x8 (or 8x4). If the block size is greater than 4x8 (or 8x4), the corresponding block is divided into 4 sub-partitions. FIGS. 4A-4B show examples of the two possibilities. All sub-partitions fulfill the condition of having at least 16 samples.
  • FIG. 4A shows examples of sub-partitions for 4x8 and 8x4 CUs.
  • FIG. 4B shows examples of sub-partitions for CUs other than 4x8, 8x4 and 4x4.
  • reconstructed samples are obtained by adding the residual signal to the prediction signal.
  • a residual signal is generated by processes such as entropy decoding, inverse quantization and inverse transform. Therefore, the reconstructed sample values of each sub-partition are available to generate the prediction of the next sub-partition, and the sub-partitions are processed sequentially.
  • the first sub-partition to be processed is the one containing the top-left sample of the CU, and processing then continues downwards (horizontal split) or rightwards (vertical split).
  • reference samples used to generate the sub-partition prediction signals are located only at the left and above sides of the lines. All sub-partitions share the same intra mode.
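  • As a hedged illustration of the ISP split rule described above, the following sketch divides a luma intra block into sub-partitions; the function name ispSubPartitions and its return convention are assumptions made for this example, not the VVC reference code.

      #include <utility>

      // Returns the (width, height) of one ISP sub-partition for the given block,
      // following the rule above: 4x8 and 8x4 blocks are split into 2 sub-partitions,
      // larger blocks into 4, and 4x4 blocks are not split.
      std::pair<int, int> ispSubPartitions(int width, int height, bool horizontalSplit) {
          int numParts;
          if (width == 4 && height == 4) {
              numParts = 1;                                   // ISP not applied to 4x4
          } else if ((width == 4 && height == 8) || (width == 8 && height == 4)) {
              numParts = 2;                                   // minimum ISP size: 2 sub-partitions
          } else {
              numParts = 4;                                   // all other sizes: 4 sub-partitions
          }
          int subW = horizontalSplit ? width : width / numParts;   // vertical split divides the width
          int subH = horizontalSplit ? height / numParts : height; // horizontal split divides the height
          return {subW, subH};                                     // each sub-partition has at least 16 samples
      }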
  • Four-tap intra interpolation filters are utilized to improve the directional intra prediction accuracy.
  • in HEVC, a two-tap bilinear interpolation filter has been used to generate the intra prediction block in the directional prediction modes (i.e., excluding the Planar and DC predictors).
  • in VTM4, a simplified 6-bit 4-tap Gaussian interpolation filter and a 6-bit 4-tap DCT-IF chroma filter are used only for directional intra modes.
  • the non-directional intra prediction process is unmodified.
  • the selection of the 4-tap filters is performed according to the MDIS condition for directional intra prediction modes that provide fractional displacements, i.e., for all the directional modes excluding the following: 2, HOR_IDX, DIA_IDX, VER_IDX, 66.
  • the directional intra-prediction mode is classified into one of the following groups:
  • a [1, 2, 1] reference sample filter may be applied (depending on the MDIS condition) to the reference samples, and these filtered values are then copied into the intra predictor according to the selected direction, but no interpolation filters are applied;
  • nTbS is set equal to (Log2 (nTbW) +Log2 (nTbH) ) >> 1.
  • nW and nH are derived as follows:
  • IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT or cIdx is not equal to 0, the following applies:
  • nW = nTbW (8-125)
  • nH = nTbH (8-126)
  • otherwise (IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0), the following applies:
  • nW = nCbW (8-127)
  • nH = nCbH (8-128)
  • the variable whRatio is set equal to Abs (Log2 (nW/nH) ) .
  • variable wideAngle is set equal to 0.
  • the intra prediction mode predModeIntra is modified as follows:
  • wideAngle is set equal to 1 and predModeIntra is set equal to (predModeIntra+65) .
  • nW is greater than nH
  • predModeIntra is greater than or equal to 2
  • predModeIntra is less than (whRatio>1) ? (8+2*whRatio) : 8
  • wideAngle is set equal to 1 and predModeIntra is set equal to (predModeIntra-67) .
  • predModeIntra is less than or equal to 66
  • predModeIntra is greater than (whRatio>1) ? (60-2*whRatio) : 60
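  • A minimal sketch of the wide-angle remapping conditions listed above, assuming power-of-two block dimensions nW and nH and a conventional mode index predModeIntra in the range 0..66; this is illustrative only and not the normative derivation.

      #include <cstdlib>

      static int floorLog2(int v) { int n = 0; while (v > 1) { v >>= 1; ++n; } return n; }

      // Remaps predModeIntra to a wide-angle mode for non-square blocks and sets wideAngle.
      int remapWideAngle(int predModeIntra, int nW, int nH, int& wideAngle) {
          wideAngle = 0;
          int whRatio = std::abs(floorLog2(nW) - floorLog2(nH));   // Abs(Log2(nW / nH))
          if (nW > nH && predModeIntra >= 2 &&
              predModeIntra < ((whRatio > 1) ? (8 + 2 * whRatio) : 8)) {
              wideAngle = 1;
              predModeIntra += 65;                                 // mapped beyond mode 66
          } else if (nH > nW && predModeIntra <= 66 &&
                     predModeIntra > ((whRatio > 1) ? (60 - 2 * whRatio) : 60)) {
              wideAngle = 1;
              predModeIntra -= 67;                                 // mapped to a negative wide-angle index
          }
          return predModeIntra;
      }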
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 0.
  • predModeIntra is equal to INTRA_ANGULAR2, INTRA_ANGULAR34 or INTRA_ANGULAR66
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0 and predModeIntra is greater than or equal to INTRA_ANGULAR34 and nW is greater than 8
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0 and predModeIntra is less than INTRA_ANGULAR34 and nH is greater than 8.
  • MinDistVerHor is set equal to Min (Abs (predModeIntra-50) , Abs (predModeIntra-18) ) .
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 1.
  • filterFlag is set equal to 0.
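  • The filterFlag derivation summarized above can be sketched as follows; the threshold table intraHorVerDistThres (Table 8-4 of JVET-M1001) is passed in rather than reproduced, and the function is a reading of the quoted text, not normative code.

      #include <algorithm>
      #include <cstdlib>

      // nTbS = (Log2(nTbW) + Log2(nTbH)) >> 1; ispSplit means IntraSubPartitionsSplitType != ISP_NO_SPLIT.
      int deriveFilterFlag(int predModeIntra, int nW, int nH, int nTbS,
                           bool ispSplit, int cIdx, const int intraHorVerDistThres[]) {
          // Non-fractional directions (2, 34, 66) and the listed ISP luma cases disable the smoothing filter.
          if (predModeIntra == 2 || predModeIntra == 34 || predModeIntra == 66 ||
              (ispSplit && cIdx == 0 && predModeIntra >= 34 && nW > 8) ||
              (ispSplit && cIdx == 0 && predModeIntra < 34 && nH > 8)) {
              return 0;
          }
          int minDistVerHor = std::min(std::abs(predModeIntra - 50), std::abs(predModeIntra - 18));
          return (minDistVerHor > intraHorVerDistThres[nTbS]) ? 1 : 0;   // 1 selects fG, 0 selects fC
      }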
  • Figure 8-1 (of VVC), which is FIG. 5 of the present application: Intra prediction directions (informative)
  • Figure 8-1 illustrates the 93 prediction directions, where the dashed directions are associated with the wide-angle modes that are only applied to non-square blocks.
  • Table 8-5 specifies the mapping table between predModeIntra and the angle parameter intraPredAngle.
  • the inverse angle parameter invAngle is derived based on intraPredAngle as follows:
  • predModeIntra is greater than or equal to 34, the following ordered steps apply:
  • the reference sample array ref [x] is specified as follows:
  • index variable iIdx and the multiplication factor iFact are derived as follows:
  • iIdx = ( ( ( y + 1 + refIdx ) * intraPredAngle ) >> 5 ) + refIdx (8-137)
  • fT [j] = filterFlag ? fG [iFact] [j] : fC [iFact] [j] (8-139)
  • the reference sample array ref [x] is specified as follows:
  • index variable iIdx and the multiplication factor iFact are derived as follows:
  • fT [j] = filterFlag ? fG [iFact] [j] : fC [iFact] [j] (8-152)
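  • The angular sample derivation quoted above can be illustrated with the following sketch, which computes one predicted sample from the reference array; the iFact derivation, the reference indexing and the omission of the final clipping are simplifications assumed from the working draft, and fG/fC stand for the 6-bit coefficient tables of Table 8-6.

      // ref points at the reference sample array; fG/fC are 32-phase, 4-tap coefficient tables.
      int predictAngularSample(const int ref[], int y, int refIdx, int intraPredAngle,
                               int filterFlag, const int fG[32][4], const int fC[32][4]) {
          int iIdx  = (((y + 1 + refIdx) * intraPredAngle) >> 5) + refIdx;  // integer displacement, as in (8-137)
          int iFact = ((y + 1 + refIdx) * intraPredAngle) & 31;             // 1/32-sample fractional phase
          int sum = 0;
          for (int j = 0; j < 4; j++) {
              int fT = filterFlag ? fG[iFact][j] : fC[iFact][j];            // filter choice as in (8-139)
              sum += fT * ref[iIdx + j];                                    // 4-tap blend of reference samples
          }
          return (sum + 32) >> 6;                                           // 6-bit normalization; clipping omitted
      }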
  • JVET-N0435 was adopted to harmonize WAIP with the usage of the MDIS and reference sample interpolation filters. Among the wide-angle modes, 8 modes represent non-fractional offsets, namely [-14, -12, -10, -6, 72, 76, 78, 80]. When a block is predicted by one of these modes, it is proposed to copy directly from the particular sample in the reference buffer without applying any interpolation filter. Instead, a reference filter is conditionally applied to these modes to smooth the predictor. With this modification, the number of samples that need to be smoothed is reduced. Moreover, it aligns the design of non-fractional modes between the conventional prediction modes and the wide-angle modes.
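  • A small sketch of the mode test implied by the JVET-N0435 behaviour above: the listed wide-angle modes have non-fractional offsets, so the predictor can copy a reference sample directly instead of interpolating (the function name is an assumption made for illustration).

      #include <algorithm>
      #include <iterator>

      bool isNonFractionalWideAngle(int predModeIntra) {
          static const int modes[] = { -14, -12, -10, -6, 72, 76, 78, 80 };
          return std::find(std::begin(modes), std::end(modes), predModeIntra) != std::end(modes);
      }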
  • variable cIdx specifying the colour component of the current block.
  • IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT or cIdx is not equal to 0, the following applies:
  • variable refIdx specifying the intra prediction reference line index is derived as follows:
  • nW and nH are derived as follows:
  • IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT or cIdx is not equal to 0, the following applies:
  • nW = nTbW (8-125)
  • nH = nTbH (8-126)
  • otherwise (IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0), the following applies:
  • nW = nCbW (8-127)
  • nH = nCbH (8-128)
  • the variable whRatio is set equal to Abs (Log2 (nW/nH) ) .
  • variable wideAngle is set equal to 0.
  • the intra prediction mode predModeIntra is modified as follows:
  • predModeIntra is set equal to (predModeIntra+65) .
  • nW is greater than nH
  • predModeIntra is greater than or equal to 2
  • predModeIntra is less than (whRatio>1) ? (8+2*whRatio) : 8
  • predModeIntra is set equal to (predModeIntra-67) .
  • predModeIntra is less than or equal to 66
  • predModeIntra is greater than (whRatio>1) ? (60-2*whRatio) : 60
  • predModeIntra is equal to INTRA_PLANAR, -14, -12, -10, -6, 2, 34, 66, 72, 76, 78 or 80, RefFilterFlag is set to 1
  • RefFilterFlag is set to 0
  • predModeIntra is equal to INTRA_PLANAR
  • the corresponding intra prediction mode process specified in clause 8.4.4.2.5 is invoked with the transform block width nTbW, and the transform block height nTbH, and the reference sample array p as inputs, and the output is the predicted sample array predSamples.
  • predModeIntra is equal to INTRA_DC
  • the corresponding intra prediction mode process specified in clause 8.4.4.2.6 is invoked with the transform block width nTbW, the transform block height nTbH, and the reference sample array p as inputs, and the output is the predicted sample array predSamples.
  • predModeIntra is equal to INTRA_LT_CCLM, INTRA_L_CCLM or INTRA_T_CCLM
  • the corresponding intra prediction mode process specified in clause 8.4.4.2.8 is invoked with the intra prediction mode predModeIntra, the sample location (xTbC, yTbC) set equal to (xTbCmp, yTbCmp) , the transform block width nTbW and height nTbH, and the reference sample array p as inputs, and the output is the predicted sample array predSamples.
  • the corresponding intra prediction mode process specified in clause 8.4.4.2.7 is invoked with the intra prediction mode predModeIntra, the intra prediction reference line index refIdx, the transform block width nTbW, the transform block height nTbH, the reference sample width refW, the reference sample height refH, the coding block width nCbW and height nCbH, the reference filter flag RefFilterFlag, the colour component index cIdx, and the reference sample array p as inputs, and the predicted sample array predSamples as outputs.
  • IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT or cIdx is not equal to 0
  • refIdx is equal to 0 or cIdx is not equal to 0
  • predModeIntra is equal to INTRA_PLANAR
  • predModeIntra is equal to INTRA_DC
  • predModeIntra is equal to INTRA_ANGULAR18
  • predModeIntra is equal to INTRA_ANGULAR50
  • predModeIntra is less than or equal to INTRA_ANGULAR10
  • predModeIntra is greater than or equal to INTRA_ANGULAR58
  • variable cIdx specifying the colour component of the current block.
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 1:
  • IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT
  • filterFlag is set equal to 0.
  • nTbS is set equal to (Log2 (nTbW) +Log2 (nTbH) ) >> 1.
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 0.
  • predModeIntra is equal to INTRA_ANGULAR2, INTRA_ANGULAR34 or INTRA_ANGULAR66
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0 and predModeIntra is greater than or equal to INTRA_ANGULAR34 and nW is greater than 8
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0 and predModeIntra is less than INTRA_ANGULAR34 and nH is greater than 8.
  • MinDistVerHor is set equal to Min (Abs (predModeIntra-50) , Abs (predModeIntra-18) ) .
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 1.
  • filterFlag is set equal to 0.
  • a 4-tap interpolation filter is used for all kinds of block sizes.
  • using a 4-tap interpolation filter may bring too much computational complexity.
  • the 6-bit 4-tap DCT-IF chroma filter FC is used for ISP-coded blocks with a specific block size and identified prediction mode.
  • filterFlag is set equal to 0 (which means the interpolation filter coefficients fC in Table 8-6 would be used) .
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0 and predModeIntra is greater than or equal to INTRA_ANGULAR34 and nW is greater than 8
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0 and predModeIntra is less than INTRA_ANGULAR34 and nH is greater than 8.
  • iIdx = ( ( ( y + 1 + refIdx ) * intraPredAngle ) >> 5 ) + refIdx
  • Fc denotes the 4-tap DCT-IF chroma filter
  • FG denotes the 4-tap Gaussian filter
  • both are specified in Table 8-6 of VVC working draft JVET-M1001-v7
  • bilinear/linear filter denotes the 2-tap filter as specified in equation (8-141) and equation (8-154) of VVC working draft JVET-M1001-v7.
  • Other variants of the DCT-IF/Gaussian/bilinear/linear filters may also be applicable.
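  • The 2-tap bilinear/linear filter referred to above has the general form of equations (8-141)/(8-154); the following sketch applies it at a fractional position, with the exact reference-index offsets treated as an assumption of this example.

      // iIdx/iFact are the integer and 1/32-fractional parts of the projected displacement.
      int linearInterpSample(const int ref[], int iIdx, int iFact) {
          return ((32 - iFact) * ref[iIdx + 1] + iFact * ref[iIdx + 2] + 16) >> 5;  // 2-tap blend, rounded
      }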
  • block may indicate CU/PU/TU as defined in VVC.
  • a block may contain different color components such as Y/U/V component, or R/G/B component, or just corresponds to one color component.
  • the methods may be applicable for either color component of a block, or all color components of a block.
  • the width and height of a block are denoted as W and H, respectively.
  • Multiple interpolation filters for intra prediction processes may be pre-defined.
  • indications of one or multiple sets of multiple interpolation filters may be signaled in sequence/picture/slice/other video unit level.
  • they may be signaled in SPS/VPS/PPS/picture header/slice header/APS/tile group header/tile header, etc.
  • Interpolation filter to be used may be changed from one video unit to another video unit.
  • a video unit may be a sequence/picture/view/slice/tile group/tile/brick/CTU row/CTU/CU/PU/TU/VPDU.
  • Selection of multiple interpolation filters may depend on block dimension.
  • Selection of multiple interpolation filters may depend on color component.
  • Selection of multiple interpolation filters may depend on coding methods.
  • the coding methods may include the normal intra prediction method, ISP, affine intra prediction method, MRL, etc.
  • 4-tap interpolation filter e.g., 4-tap cubic filter, or DCT-IF chroma filter, or gaussian filter, etc.
  • DCT-IF chroma filter or gaussian filter, etc.
  • whether 4-tap interpolation filter is used or not may depend on the width or height of a TU.
  • the 4-tap interpolation filter may be disabled for other blocks.
  • whether 4-tap interpolation filter is used or not may depend on the size of a TU.
  • 4-tap interpolation filter may be used when W * H is greater than a threshold T1.
  • T1 = 32, which means a 4x4/8x4/4x8/2x8/8x2/16x2/2x16/1x16/16x1/32x1/1x32 block may not use the 4-tap interpolation filter.
  • the 4-tap interpolation filter may be disabled for other blocks.
  • 4x4/4x8/8x8 block may not apply 4-tap interpolation filter.
  • the 4-tap interpolation filter may be disabled for other blocks.
  • 2-tap filter e.g., 2-tap bilinear filter
  • bilinear/linear filter may be used for small size TUs.
  • N = 32.
  • 4x4/8x4/4x8/2x8/8x2/16x2/2x16/1x16/16x1/32x1/1x32 TU may use bilinear filter.
  • the thresholds mentioned above may be the same for all color components.
  • the thresholds may be dependent on the color component.
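  • A minimal sketch of the size-dependent selection discussed above; the threshold T1 = 32 for enabling a 4-tap filter and the use of a 2-tap filter for small TUs follow the examples in the text, while the enum and function names are assumptions of this illustration.

      enum class IntraInterpFilter { Bilinear2Tap, Dctif4Tap, Gaussian4Tap };

      IntraInterpFilter selectFilterBySize(int width, int height, bool useSmoothing) {
          const int T1 = 32;                        // example threshold from the text
          if (width * height <= T1)                 // e.g. 4x4, 8x4, 4x8, 2x8, 8x2, ... blocks
              return IntraInterpFilter::Bilinear2Tap;
          return useSmoothing ? IntraInterpFilter::Gaussian4Tap
                              : IntraInterpFilter::Dctif4Tap;
      }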
  • whether the 4-tap filter FG (a.k.a. Gaussian filter) is used or not may depend on the prediction information and/or block size.
  • whether FG is used for an intra-coded (or ISP-coded) block may depend on the block size.
  • FG filter may be disabled.
  • FG filter may be enabled.
  • FG filter may be enabled.
  • FG filter may be disabled.
  • whether FG is used for an intra-coded (or ISP-coded) block may depend on the block width/height and the prediction mode.
  • FC (a.k.a. DCT-IF chroma filter)
  • cubic filter or bilinear filter
  • ISP-coded blocks e.g., CUs
  • how to select the interpolation filter or other filters applied to reference samples may depend on the dimension of the CU.
  • how to select the interpolation filter or other filters applied to reference samples may depend on the dimension of the sub-partition.
  • the reference sample used for interpolation may depend on the reference line index.
  • Proposed method may be applied to all or certain color components.
  • proposed method may be applied to luma component only.
  • proposed method may be applied to luma, Cb and Cr component.
  • proposed method may be applied to luma, Cb and Cr component for YCbCr 4: 4: 4 format.
  • proposed method may be applied to R, G and B component for RGB format.
  • the rounding operation may not be applied in the shifting process to derive the prediction samples.
  • a value A e.g., the summation of filter coefficients times reference samples
  • rs a positive integer value
  • offset is equal to 1 << (rs - 1)
  • rounding operation may not be applied during the DCT-IF (e.g., “Fc” as denoted in Table 8-6 of JVET-M1001, which is same as the DCT-IF chroma interpolation filter used in motion compensation of VVC) interpolation filter and/or bilinear interpolation filter, and/or gaussian interpolation filter processes to derive intra luma prediction samples.
  • DCT-IF e.g., “Fc” as denoted in Table 8-6 of JVET-M1001, which is same as the DCT-IF chroma interpolation filter used in motion compensation of VVC
  • whether the right shift process is used with or without a rounding operation may be aligned between the intra luma prediction sample derivation process with the DCT-IF chroma filter and the inter chroma motion compensation process.
  • both don’t use the rounding operation.
  • both use the rounding operation.
  • rounding operation may not be applied to derive the down-sampling or/and up-sampling samples in matrix based intra prediction.
  • a rounding process A = ( (A + offset - (A < 0) ) >> rs ) may be applied to derive the down-sampling and up-sampling samples in matrix based intra prediction.
  • a unique behavior of the rounding process may be used for different modules that require right shifting operations for residual/prediction and/or reconstruction sample derivation (e.g., matrix based intra prediction, intra prediction, interpolation filter, RST, etc.) in a codec.
  • residual/prediction and/or reconstruction sample derivation, e.g., matrix based intra prediction, intra prediction, interpolation filter, RST, etc.
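  • The two shifting behaviours discussed above can be contrasted with the following sketch; the sign-corrected variant is one reading of the rounding formula in the text and is marked as an assumption.

      int shiftNoRounding(int a, int rs)    { return a >> rs; }                        // no rounding offset applied
      int shiftWithRounding(int a, int rs)  { return (a + (1 << (rs - 1))) >> rs; }    // offset = 1 << (rs - 1)
      int shiftSignedRounding(int a, int rs) {
          int offset = 1 << (rs - 1);
          return (a + offset - (a < 0 ? 1 : 0)) >> rs;  // assumed reading of (A + offset - (A < 0)) >> rs
      }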
  • linear interpolation filter may be applied to derive luma reference samples in intra angular mode prediction.
  • linear interpolation filter may be conditionally applied depending on block dimensions.
  • the linear interpolation filter may be used depending on the relationship between WB, HB and two integers T1 and T2.
  • linear interpolation filter may be conditionally applied depending on intra prediction direction/color component/picture or slice type.
  • nearest neighbor may be used to derive luma reference samples in intra angular mode prediction.
  • the nearest neighbor may be defined as the nearest integer pixel position from the reference samples.
  • the nearest integer pixel position in the reference samples using nearest rounding may be used.
  • nearest (12.1) is equal to 12
  • nearest (12.6) is equal to 13.
  • the nearest integer pixel position in the reference samples using floor rounding may be used.
  • floor (12.1) is equal to 12
  • floor (12.6) is equal to 12.
  • the nearest integer pixel position in the reference samples using ceiling rounding may be used.
  • ceiling (12.1) is equal to 13
  • ceiling (12.6) is equal to 13.
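  • The three rounding choices above can be illustrated with the fractional positions 12.1 and 12.6; in an integer implementation the fraction would come from the 1/32-sample phase, but floating point is used here purely for illustration.

      #include <cmath>

      int nearestPos(double p) { return (int)std::lround(p); }  // nearest(12.1) = 12, nearest(12.6) = 13
      int floorPos(double p)   { return (int)std::floor(p); }   // floor(12.1) = 12,   floor(12.6) = 12
      int ceilPos(double p)    { return (int)std::ceil(p); }    // ceiling(12.1) = 13, ceiling(12.6) = 13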
  • nearest neighbor may be conditionally used depending on block dimensions.
  • nearest neighbor may be used depending on the relationship between WB, HB and two integers T1 and T2.
  • nearest neighbor may be conditionally used depending on intra prediction direction/color component/picture or slice type.
  • the intra block interpolation filter selection may depend on intra angular prediction mode, and/or block dimension (e.g., CU size, and/or TU size) , and/or wide angle, regardless of the intra prediction type used (e.g., regardless of whether it is ISP coded block or not) .
  • block dimension e.g., CU size, and/or TU size
  • wide angle regardless of the intra prediction type used (e.g., regardless of whether it is ISP coded block or not) .
  • the intra interpolation filter selection for an ISP block is the same as the intra interpolation filter selection for a normal intra block, without any additional condition check for the ISP block, e.g., the check of the usage of ISP mode and the check of its associated intra prediction mode are both removed.
  • the intra interpolation filter selection is decided by whether it is a wide-angle block (e.g., the wide angle defined in JVET-M1001-v7).
  • the intra interpolation filter selection is decided by whether the minDistVerHor is greater than intraHorVerDistThres [nTbS] , wherein nTbS is equal to (Log2 (nTbW) + Log2 (nTbH) ) >> 1, nTbW and nTbH denote the transform width and height of the current intra block, minDistVerHor is set equal to Min (Abs (predModeIntra -50) , Abs (predModeIntra -18) ) , and the array intraHorVerDistThres [] is defined in Table 8-4 in JVET-M1001-v7.
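  • A hedged sketch of the simplified selection described above: the ISP-specific condition checks are dropped, and the choice depends only on minDistVerHor versus intraHorVerDistThres[nTbS] (with a wide-angle block optionally forcing the smoothing filter); variable names follow the text, but the code is illustrative, not normative.

      #include <algorithm>
      #include <cstdlib>

      static int ilog2(int v) { int n = 0; while (v > 1) { v >>= 1; ++n; } return n; }

      int selectFilterFlagUnified(int predModeIntra, int nTbW, int nTbH,
                                  bool wideAngle, const int intraHorVerDistThres[]) {
          if (wideAngle)
              return 1;                              // example choice: wide-angle blocks use the smoothing filter
          int nTbS = (ilog2(nTbW) + ilog2(nTbH)) >> 1;
          int minDistVerHor = std::min(std::abs(predModeIntra - 50),
                                       std::abs(predModeIntra - 18));
          // No check of ISP usage or of its associated intra prediction mode is performed here.
          return (minDistVerHor > intraHorVerDistThres[nTbS]) ? 1 : 0;
      }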
  • predModeIntra is greater than or equal to 34, the following ordered steps apply:
  • the reference sample array ref [x] is specified as follows:
  • index variable iIdx and the multiplication factor iFact are derived as follows:
  • iIdx = ( ( ( y + 1 + refIdx ) * intraPredAngle ) >> 5 ) + refIdx (8-137)
  • fT [j] = filterFlag ? fG [iFact] [j] : fC [iFact] [j] (8-139)
  • the reference sample array ref [x] is specified as follows:
  • index variable iIdx and the multiplication factor iFact are derived as follows:
  • fT [j] = filterFlag ? fG [iFact] [j] : fC [iFact] [j] (8-152)
  • M is set to 32 or 16.
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 0.
  • predModeIntra is equal to INTRA_ANGULAR2, INTRA_ANGULAR34 or INTRA_ANGULAR66
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT, the following apply:
  • filterFlag is set equal to 1.
  • filterFlag is set equal to 1.
  • filterFlag is set equal to 0.
  • MinDistVerHor is set equal to Min (Abs (predModeIntra-50) , Abs (predModeIntra-18) ) .
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 1.
  • filterFlag is set equal to 0.
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 0.
  • predModeIntra is equal to [ [INTRA_ANGULAR2, INTRA_ANGULAR34 or INTRA_ANGULAR66] ] -14, -12, -10, -6, 2, 34, 66, 72, 76, 78 or 80
  • filterFlag is set equal to 1
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT [ [and cIdx is equal to 0] ] and predModeIntra is greater than or equal to INTRA_ANGULAR34 and [ [nW] ] nTbW is greater than 8
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT [ [and cIdx is equal to 0] ] and predModeIntra is less than INTRA_ANGULAR34 and [ [nH] ] nTbH is greater than 8
  • MinDistVerHor is set equal to Min (Abs (predModeIntra-50) , Abs (predModeIntra-18) ) .
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 1.
  • filterFlag is set equal to 0.
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 0.
  • predModeIntra is equal to [ [INTRA_ANGULAR2, INTRA_ANGULAR34 or INTRA_ANGULAR66] ] -14, -12, -10, -6, 2, 34, 66, 72, 76, 78 or 80
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT [ [and cIdx is equal to 0] ] and predModeIntra is greater than or equal to INTRA_ANGULAR34 and [ [nW] ] nTbW*nTbH is greater than [ [8] ] 32, filterFlag is set equal to 1
  • MinDistVerHor is set equal to Min (Abs (predModeIntra-50) , Abs (predModeIntra-18) ) .
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 1.
  • filterFlag is set equal to 0.
  • predModeIntra is greater than or equal to 34, the following ordered steps apply:
  • the reference sample array ref [x] is specified as follows:
  • index variable iIdx and the multiplication factor iFact are derived as follows:
  • iIdx = ( ( ( y + 1 + refIdx ) * intraPredAngle ) >> 5 ) [ [+ refIdx] ] (8-137)
  • fT [j] = filterFlag ? fG [iFact] [j] : fC [iFact] [j] (8-139)
  • predModeIntra is greater than or equal to 34, the following ordered steps apply:
  • the reference sample array ref [x] is specified as follows:
  • index variable iIdx and the multiplication factor iFact are derived as follows:
  • iIdx = ( ( ( y + 1 + refIdx ) * intraPredAngle ) >> 5 ) + refIdx (8-137)
  • fT [j] = filterFlag ? fG [iFact] [j] : fC [iFact] [j] (8-139) ] ]
  • the reference sample array ref [x] is specified as follows:
  • index variable iIdx and the multiplication factor iFact are derived as follows:
  • fT [j] = filterFlag ? fG [iFact] [j] : fC [iFact] [j] (8-152) ] ]
  • ISP may be treated in the same way as the normal intra prediction method during the selection of interpolation filter process.
  • selection of interpolation filter may be independent from the intra prediction mode.
  • selection of interpolation filter may be dependent on block dimension and/or color component. Double brackets are placed before and after the text deleted.
  • the intra prediction mode predModeIntra is modified as follows:
  • wideAngle is set equal to 1 and predModeIntra is set equal to (predModeIntra+65) .
  • nW is greater than nH
  • predModeIntra is greater than or equal to 2
  • predModeIntra is less than (whRatio>1) ? (8+2*whRatio) : 8
  • wideAngle is set equal to 1 and predModeIntra is set equal to (predModeIntra-67) .
  • predModeIntra is less than or equal to 66
  • predModeIntra is greater than (whRatio>1) ? (60-2*whRatio) : 60
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 0.
  • predModeIntra is equal to INTRA_ANGULAR2, INTRA_ANGULAR34 or INTRA_ANGULAR66
  • IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and cIdx is equal to 0 and predModeIntra is less than INTRA_ANGULAR34 and nH is greater than 8. ] ]
  • MinDistVerHor is set equal to Min (Abs (predModeIntra-50) , Abs (predModeIntra-18) ) .
  • variable filterFlag is derived as follows:
  • filterFlag is set equal to 1.
  • filterFlag is set equal to 0.
  • FIG. 6 is a block diagram of a video processing apparatus 600.
  • the apparatus 600 may be used to implement one or more of the methods described herein.
  • the apparatus 600 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 600 may include one or more processors 602, one or more memories 604 and video processing hardware 606.
  • the processor (s) 602 may be configured to implement one or more methods described in the present document.
  • the memory (memories) 604 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 606 may be used to implement, in hardware circuitry, some techniques described in the present document.
  • FIG. 7 is another example of a block diagram of a video processing system in which disclosed techniques may be implemented.
  • FIG. 7 is a block diagram showing an example video processing system 700 in which various techniques disclosed herein may be implemented.
  • the system 700 may include input 702 for receiving video content.
  • the video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format.
  • the input 702 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interface include wired interfaces such as Ethernet, passive optical network (PON) , etc. and wireless interfaces such as Wi-Fi or cellular interfaces.
  • PON passive optical network
  • the system 700 may include a coding component 704 that may implement the various coding or encoding methods described in the present document.
  • the coding component 704 may reduce the average bitrate of video from the input 702 to the output of the coding component 704 to produce a coded representation of the video.
  • the coding techniques are therefore sometimes called video compression or video transcoding techniques.
  • the output of the coding component 704 may be either stored, or transmitted via a communication connection, as represented by the component 706.
  • the stored or communicated bitstream (or coded) representation of the video received at the input 702 may be used by the component 708 for generating pixel values or displayable video that is sent to a display interface 710.
  • the process of generating user-viewable video from the bitstream representation is sometimes called video decompression.
  • certain video processing operations are referred to as “coding” operations or tools; it will be appreciated that the coding tools or operations are used at an encoder, and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
  • peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or Displayport, and so on.
  • storage interfaces include SATA (serial advanced technology attachment) , PCI, IDE interface, and the like.
  • the video processing methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 6 or 7.
  • FIG. 8A is a flowchart of an example method 810 of video processing.
  • the method 810 includes, at step 812, generating, for a current video block of a video, an intra prediction block using an interpolation filter selected according to a rule.
  • the method 810 further includes, at step 814, performing a conversion between the current video block and a coded representation of the video using the prediction block.
  • the rule specifies to select the interpolation filter based on at least one of an intra angular prediction mode, a block dimension, or usage of a wide-angle mode, and without using a type of an intra prediction mode of the current video block.
  • FIG. 8B is a flowchart of an example method 820 of video processing.
  • the method 820 includes, at step 822, determining, based on a rule, whether to apply a rounding operation in a shifting process during a derivation of prediction samples for a conversion between a current video block of a video and a coded representation of the video, wherein the current video block is coded using intra mode in the coded representation.
  • the method 820 further includes, at step 824, performing the conversion using the prediction samples based on the determining.
  • FIG. 8C is a flowchart of an example method 830 of video processing.
  • the method 830 includes, at step 832, applying, for a conversion between a current video block of a video and a coded representation of the video, a uniquely defined rounding process for performing a bit-shifting used to generate an intermediate video block, wherein the current video block is coded using intra mode in the coded representation.
  • the method 830 further includes, at step 834, performing the conversion using the intermediate video block.
  • various video decoding or encoding operations may have their own uniquely defined rounding processes that may be exclusively used for that operation only during video encoding or decoding.
  • Another benefit of this strategy is to allow shaping of quantization noise to be visually insignificant during video encoding or decoding.
  • Another benefit of this strategy is to make the encoded video more amenable to compaction using context coding. For example, a next operation’s unique rounding may be designed to flatten out a previous operation’s unique rounding by acting as an inverse filter (in the frequency domain) to the effect of that rounding, making the resulting data easier to encode using compact codes.
  • FIG. 8D is a flowchart of an example method 840 of video processing.
  • the method 840 includes, at step 842, determining, based on a rule, to use a linear interpolation filter in a derivation of luma reference samples for an intra angular mode prediction for a current video block of a video.
  • the method 840 further includes, at step 844, performing, based on the decision, a conversion between the current video block and a coded representation of the current video block.
  • FIG. 8E is a flowchart of an example method 850 of video processing.
  • the method 850 includes, at step 852, determining, based on a rule, to use a nearest neighbor sample of a current video block of a video to derive luma reference samples in an intra angular mode prediction for a current video block.
  • the method 850 further includes, at step 854, performing a conversion between the current video block and a coded representation of the video using the luma reference samples.
  • Some embodiments of the disclosed technology include making a decision or determination to enable a video processing tool or mode.
  • when the video processing tool or mode is enabled, the encoder will use or implement the tool or mode in the processing of a block of video, but may not necessarily modify the resulting bitstream based on the usage of the tool or mode. That is, a conversion from the block of video to the bitstream representation of the video will use the video processing tool or mode when it is enabled based on the decision or determination.
  • when the video processing tool or mode is enabled, the decoder will process the bitstream with the knowledge that the bitstream has been modified based on the video processing tool or mode. That is, a conversion from the bitstream representation of the video to the block of video will be performed using the video processing tool or mode that was enabled based on the decision or determination.
  • Some embodiments of the disclosed technology include making a decision or determination to disable a video processing tool or mode.
  • the encoder will not use the tool or mode in the conversion of the block of video to the bitstream representation of the video.
  • the decoder will process the bitstream with the knowledge that the bitstream has not been modified using the video processing tool or mode that was disabled based on the decision or determination.
  • video processing may refer to video encoding, video decoding, video compression or video decompression.
  • video compression algorithms may be applied during conversion from pixel representation of a video to a corresponding bitstream representation or vice versa.
  • the bitstream representation of a current video block may, for example, correspond to bits that are either co-located or spread in different places within the bitstream, as is defined by the syntax.
  • a macroblock may be encoded in terms of transformed and coded error residual values and also using bits in headers and other fields in the bitstream.
  • a video processing method comprising: determining, for a conversion between a current video block of a video and a bitstream representation of the current video block, one or more interpolation filters to use during the conversion, wherein the one or more interpolation filters are from multiple interpolation filters for the video; and performing the conversion using the one or more interpolation filters.
  • bitstream representation is configured to carry indications of the multiple interpolation filters.
  • bitstream representation carries the indications at a sequence parameter set level or a video parameter set level or a picture parameter set level or a picture header or a slice header or an adaptive parameter set level, or a tile group header or a tile header.
  • a video unit corresponds to a video sequence or a video picture or a video view or a video tile group or a video tile or a video brick or a video coding tree unit row or a video coding unit or a video prediction unit or a video transform unit or a VPDU.
  • a video processing method comprising: determining, based on a rule, whether or not to use a 4-tap interpolation filter in an intra prediction based conversion between a current video block of a video and a bitstream representation of the current video block; and performing the conversion based on the determining whether or not to use the 4-tap interpolation filter.
  • a video processing method comprising: determining, for a conversion between a current video block of a video and a bitstream representation of the current video block, an interpolation filter to use during the conversion; applying the interpolation filter to reference samples determined using a rule; and performing the conversion using a result of the applying.
  • a video processing method comprising making a decision, for a conversion between a current video block and a bitstream representation of the current video block, regarding a selective application of a rounding operation in a shifting process in a derivation of prediction samples for the current video block, and performing, based on the decision, the conversion.
  • a video processing method comprising making a decision, based on at least one dimension of a current video block, regarding a selective application of a linear interpolation filter in a derivation of luma reference samples for intra angular mode prediction, and performing, based on the decision, a conversion between the current video block and a bitstream representation of the current video block.
  • a video processing apparatus comprising a processor configured to implement one or more of clauses 1 to 38.
  • a computer-readable medium having code stored thereon, the code, when executed by a processor, causing the processor to implement a method recited in any one or more of clauses 1 to 38.
  • a video processing method comprising: generating, for a current video block of a video, an intra prediction block using an interpolation filter selected according to a rule; and performing a conversion between the current video block and a coded representation of the video using the prediction block, wherein the rule specifies to select the interpolation filter based on at least one of an intra angular prediction mode, a block dimension, or usage of a wide-angle mode, and without using a type of an intra prediction mode of the current video block.
  • a video processing method comprising: determining, based on a rule, whether to apply a rounding operation in a shifting process during a derivation of prediction samples for a conversion between a current video block of a video and a coded representation of the video, wherein the current video block is coded using intra mode in the coded representation; and performing the conversion using the prediction samples according to the determining.
  • a video processing method comprising: applying, for a conversion between a current video block of a video and a coded representation of the video, a uniquely defined rounding process for performing a bit-shifting used to generate an intermediate video block, wherein the current video block is coded using intra mode in the coded representation; and performing the conversion using the intermediate video block.
  • a video processing method comprising: determining, based on a rule, to use a linear interpolation filter in a derivation of luma reference samples for an intra angular mode prediction for a current video block of a video; and performing, based on the decision, a conversion between the current video block and a coded representation of the current video block.
  • a video processing method comprising: determining, based on a rule, to use a nearest neighbor sample of a current video block of a video to derive luma reference samples in an intra angular mode prediction for a current video block; and performing a conversion between the current video block and a coded representation of the video using the luma reference samples.
  • An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 58.
  • a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 58.
  • Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • the term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
  • semiconductor memory devices e.g., EPROM, EEPROM, and flash memory devices.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
PCT/CN2020/090193 2019-05-14 2020-05-14 Filter selection for intra video coding WO2020228761A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080035214.6A CN113812152B (zh) 2019-05-14 2020-05-14 Filter selection for intra video coding and decoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019086810 2019-05-14
CNPCT/CN2019/086810 2019-05-14

Publications (1)

Publication Number Publication Date
WO2020228761A1 true WO2020228761A1 (en) 2020-11-19

Family

ID=73289785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090193 WO2020228761A1 (en) 2019-05-14 2020-05-14 Filter selection for intra video coding

Country Status (2)

Country Link
CN (1) CN113812152B (zh)
WO (1) WO2020228761A1 (zh)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108632611A (zh) * 2012-06-29 2018-10-09 Electronics and Telecommunications Research Institute Video decoding method, video encoding method and computer-readable medium
CA2935301C (en) * 2014-01-02 2019-10-01 Hfi Innovation Inc. Method and apparatus for intra prediction coding with boundary filtering control
WO2015127581A1 (en) * 2014-02-25 2015-09-03 Mediatek Singapore Pte. Ltd. Methods for switching off the intra prediction filters
JP6339691B2 (ja) * 2014-02-26 2018-06-06 Dolby Laboratories Licensing Corporation Luminance-based coding tools for video compression
KR20160112810A (ko) * 2015-03-20 2016-09-28 Samsung Electronics Co., Ltd. Image processing method and electronic device thereof
US11463689B2 (en) * 2015-06-18 2022-10-04 Qualcomm Incorporated Intra prediction and intra mode coding
US20160373770A1 (en) * 2015-06-18 2016-12-22 Qualcomm Incorporated Intra prediction and intra mode coding
US20160373742A1 (en) * 2015-06-18 2016-12-22 Qualcomm Incorporated Intra prediction and intra mode coding
US10194170B2 (en) * 2015-11-20 2019-01-29 Mediatek Inc. Method and apparatus for video coding using filter coefficients determined based on pixel projection phase
EP3453173B1 (en) * 2016-05-05 2021-08-25 InterDigital Madison Patent Holdings, SAS Control-point based intra direction representation for intra coding

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060126962A1 (en) * 2001-03-26 2006-06-15 Sharp Laboratories Of America, Inc. Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding
CN101505425A (zh) * 2009-03-11 2009-08-12 北京中星微电子有限公司 一种宏块滤波方法及装置
US20120082224A1 (en) * 2010-10-01 2012-04-05 Qualcomm Incorporated Intra smoothing filter for video coding
US20150023405A1 (en) * 2013-07-19 2015-01-22 Qualcomm Incorporated Disabling intra prediction filtering
US20170332075A1 (en) * 2016-05-16 2017-11-16 Qualcomm Incorporated Confusion of multiple filters in adaptive loop filtering in video coding
US20180091825A1 (en) * 2016-09-28 2018-03-29 Qualcomm Incorporated Interpolation filters for intra prediction in video coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHIEN, WEI-JUNG ET AL.: "Methodology and reporting template for tool testing", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 31 January 2019 (2019-01-31) *

Also Published As

Publication number Publication date
CN113812152A (zh) 2021-12-17
CN113812152B (zh) 2023-10-03

Similar Documents

Publication Publication Date Title
WO2020221374A1 (en) Intra video coding using multiple reference filters
US11611779B2 (en) Multiple secondary transform matrices for video processing
JP2022533190A (ja) Matrix-based intra prediction using upsampling
JP7321364B2 (ja) Chroma quantization parameters in video coding
WO2021027928A1 (en) Weighting factors for prediction sample filtering in intra mode
WO2021110116A1 (en) Prediction from multiple cross-components
WO2021027925A1 (en) Position-dependent intra prediction sample filtering
WO2020228716A1 (en) Usage of transquant bypass mode for multiple color components
CN114270848A (zh) Use of boundary strength in deblocking filtering
WO2020228761A1 (en) Filter selection for intra video coding
WO2021136470A1 (en) Clustering based palette mode for video coding
WO2023020318A1 (en) Fusion mode for adaptive loop filter in video coding
WO2020228661A1 (en) Deblocking filter for video coding
KR20240091033A (ko) Intra video coding using multiple reference filters

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20805206

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20805206

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/03/2022)
