CN113965764B - Image encoding method, image decoding method and related device - Google Patents


Info

Publication number
CN113965764B
Authority
CN
China
Prior art keywords
block
boundary
pixel
target component
filter
Prior art date
Legal status
Active
Application number
CN202010707877.XA
Other languages
Chinese (zh)
Other versions
CN113965764A (en)
Inventor
谢志煌
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010707877.XA
Priority to TW110123867A
Publication of CN113965764A
Application granted
Publication of CN113965764B
Legal status: Active
Anticipated expiration

Classifications

    • H04N19/593 — predictive coding involving spatial prediction techniques
    • H04N19/117 — adaptive coding: filters, e.g. for pre-processing or post-processing
    • H04N19/176 — adaptive coding characterised by the coding unit, the unit being a block, e.g. a macroblock
    • H04N19/182 — adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/186 — adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/80 — details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    (all under H04N19/00, H electricity / H04N pictorial communication: methods or arrangements for coding, decoding, compressing or decompressing digital video signals)

Abstract

Embodiments of the present application disclose an image encoding method, an image decoding method, and a related device. The image encoding method comprises the following steps: dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block, wherein the target component comprises a luminance component or a chrominance component; determining a prediction block of the target component of the current coding block according to the intra-frame prediction mode of the target component; performing first filtering on reference pixels used for correcting the prediction block according to the intra-frame prediction mode of the target component to obtain filtered reference pixels; and performing second filtering on the prediction block of the target component according to the filtered reference pixels to obtain a corrected prediction block. In the embodiments of the present application, before the prediction block of the current pixel block is corrected using the spatial correlation between adjacent pixel blocks and the current pixel block, the boundary pixels of the adjacent pixel blocks of the current pixel block are filtered, so that sharpening is avoided and intra-frame prediction accuracy and coding efficiency are improved.

Description

Image encoding method, image decoding method and related device
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to an image encoding method, an image decoding method, and a related apparatus.
Background
Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, video conferencing devices, video streaming devices, and so forth.
Digital video devices implement video compression techniques, such as those described in MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), ITU-T H.265 High Efficiency Video Coding (HEVC), and extensions of these standards, to transmit and receive digital video information more efficiently. By implementing these video codec techniques, video devices can transmit, receive, encode, decode, and/or store digital video information more efficiently.
With the proliferation of Internet video, ever higher video compression ratios are required even as digital video compression technology continues to evolve.
Disclosure of Invention
Embodiments of the present application provide an image encoding method, an image decoding method, and a related device, so that before the prediction block of a current pixel block is corrected using the spatial correlation between adjacent pixel blocks and the current pixel block, the boundary pixels of the adjacent pixel blocks of the current pixel block are filtered, thereby avoiding sharpening and improving intra-frame prediction accuracy and coding efficiency.
In a first aspect, an embodiment of the present application provides an image encoding method, including:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block, wherein the target component comprises a luminance component or a chrominance component;
determining a prediction block of the target component of the current coding block according to the intra-frame prediction mode of the target component;
performing first filtering on reference pixels used for correcting the prediction block according to the intra-frame prediction mode of the target component to obtain filtered reference pixels;
and performing second filtering on the prediction block of the target component according to the filtered reference pixels to obtain a corrected prediction block.
Compared with the prior art, in the intra-frame prediction mode, before the prediction block of the current coding block is corrected using the spatial correlation between adjacent pixel blocks and the current coding block, the boundary pixels of the adjacent coding blocks of the current coding block are filtered, thereby avoiding sharpening and improving intra-frame prediction accuracy and coding efficiency.
In a second aspect, an embodiment of the present application provides an image decoding method, including:
parsing the code stream, and determining an intra-frame prediction mode of a target component of a current decoding block, wherein the target component comprises a luminance component or a chrominance component;
determining a prediction block of the target component of the current decoding block according to the intra-frame prediction mode of the target component;
performing first filtering on reference pixels used for correcting the prediction block according to the intra-frame prediction mode of the target component to obtain filtered reference pixels;
and performing second filtering on the prediction block of the target component according to the filtered reference pixels to obtain a corrected prediction block.
Compared with the prior art, in the intra-frame prediction mode, before the prediction block of the current decoding block is corrected using the spatial correlation between adjacent pixel blocks and the current decoding block, the boundary pixels of the adjacent decoding blocks of the current decoding block are filtered, thereby avoiding sharpening and improving intra-frame prediction accuracy and coding efficiency.
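The two filtering stages of the method can be made concrete with a minimal sketch. This is an illustration, not the patent's actual filters: the first filtering is modeled as a three-tap [1, 2, 1]/4 smoothing of the boundary reference pixels (a three-tap filter appears later in FIG. 12B, but these coefficients are an assumption), and the second filtering blends each predicted sample with the filtered reference pixel above it using illustrative weights.

```python
def filter_reference_pixels(ref):
    """First filtering: smooth the boundary reference pixels with a
    three-tap [1, 2, 1] / 4 filter (coefficients are an assumption;
    edge pixels reuse their own value as the missing neighbor)."""
    n = len(ref)
    out = []
    for i in range(n):
        left = ref[max(i - 1, 0)]
        right = ref[min(i + 1, n - 1)]
        out.append((left + 2 * ref[i] + right + 2) >> 2)  # +2 rounds
    return out

def correct_prediction_row(pred_row, top_ref, weight=2, shift=3):
    """Second filtering: blend each predicted sample with the filtered
    reference pixel above it (weight/shift values are illustrative)."""
    return [(p * ((1 << shift) - weight) + r * weight + (1 << (shift - 1))) >> shift
            for p, r in zip(pred_row, top_ref)]
```

Applying `filter_reference_pixels` first removes abrupt jumps in the reference row, so the blend in `correct_prediction_row` corrects the prediction block without re-introducing sharp boundary artifacts.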
In a third aspect, an embodiment of the present application provides an image encoding apparatus, including:
a first determining unit, configured to divide an image and determine an intra-frame prediction mode of a target component of a current coding block, wherein the target component comprises a luminance component or a chrominance component;
a second determining unit, configured to determine a prediction block of the target component of the current coding block according to the intra-frame prediction mode of the target component;
a first filtering unit, configured to perform first filtering on reference pixels used for correcting the prediction block according to the intra-frame prediction mode of the target component to obtain filtered reference pixels;
and a second filtering unit, configured to perform second filtering on the prediction block of the target component according to the filtered reference pixels to obtain a corrected prediction block.
In a fourth aspect, an embodiment of the present application provides an image decoding apparatus, including:
a first determining unit, configured to parse the code stream and determine an intra-frame prediction mode of a target component of a current decoding block, wherein the target component comprises a luminance component or a chrominance component;
a second determining unit, configured to determine a prediction block of the target component of the current decoding block according to the intra-frame prediction mode of the target component;
a first filtering unit, configured to perform first filtering on reference pixels used for correcting the prediction block according to the intra-frame prediction mode of the target component to obtain filtered reference pixels;
and a second filtering unit, configured to perform second filtering on the prediction block of the target component according to the filtered reference pixels to obtain a corrected prediction block.
In a fifth aspect, an embodiment of the present application provides an encoder, including: a processor and a memory coupled to the processor; the processor is configured to perform the method of the first aspect.
In a sixth aspect, an embodiment of the present application provides a decoder, including: a processor and a memory coupled to the processor; the processor is configured to perform the method of the second aspect.
In a seventh aspect, an embodiment of the present application provides a terminal, including: one or more processors, a memory, and a communication interface, wherein the memory and the communication interface are coupled to the one or more processors; the terminal communicates with other devices through the communication interface; and the memory is used for storing computer program code comprising instructions which, when executed by the one or more processors, cause the terminal to perform the method of the first or second aspect.
In an eighth aspect, the present invention provides a computer-readable storage medium, having stored therein instructions, which, when executed on a computer, cause the computer to perform the method of the first or second aspect.
In a ninth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first or second aspect.
Drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic block diagram of a coding tree unit in an embodiment of the present application;
FIG. 2 is a schematic block diagram of a CTU and a coding block CU in an embodiment of the present application;
FIG. 3 is a schematic block diagram of a color format in an embodiment of the present application;
FIG. 4A is a block diagram of a luma component intra prediction mode in an embodiment of the present application;
FIG. 4B is a diagram illustrating a chroma component normal intra prediction mode according to an embodiment of the present application;
FIG. 5 is a schematic block diagram of an associated pixel of a coding block in an embodiment of the present application;
FIG. 6 is a schematic block diagram of neighboring pixels used for calculation of coefficients of a linear model in an embodiment of the present application;
FIG. 7 is a schematic block diagram of a down-sampling filter in an embodiment of the present application;
FIG. 8 is a schematic block diagram of the change from a luminance component reconstruction block to a chrominance component prediction block in an embodiment of the present application;
FIG. 9 is a schematic block diagram of a video coding system in an embodiment of the present application;
FIG. 10 is a schematic block diagram of a video encoder in an embodiment of the present application;
FIG. 11 is a schematic block diagram of a video decoder in an embodiment of the present application;
FIG. 12A is a flowchart illustrating an image encoding method according to an embodiment of the present application;
FIG. 12B is a diagram illustrating a three-tap filter for filtering a reference pixel according to an embodiment of the present application;
FIG. 12C is a diagram illustrating a five-tap filter for filtering a reference pixel according to an embodiment of the present application;
FIG. 12D is a diagram illustrating a horizontal down-sampling process in an embodiment of the present application;
FIG. 12E is a diagram illustrating a vertical down-sampling process according to an embodiment of the present application;
FIG. 12F is a diagram illustrating a bidirectional down-sampling process in an embodiment of the present application;
FIG. 13 is a flowchart illustrating an image decoding method according to an embodiment of the present application;
FIG. 14 is a block diagram of a functional unit of an image encoding apparatus according to an embodiment of the present application;
FIG. 15 is a block diagram showing another functional unit of the image encoding apparatus according to the embodiment of the present application;
FIG. 16 is a block diagram of a functional unit of an image decoding apparatus according to an embodiment of the present application;
FIG. 17 is a block diagram of another functional unit of the image decoding apparatus in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly a second client may be referred to as a first client, without departing from the scope of the present invention. Both the first client and the second client are clients, but they are not the same client.
First, terms used in the embodiments of the present application will be described.
For the partitioning of images, in order to represent video content more flexibly, the High Efficiency Video Coding (HEVC) technology defines the Coding Tree Unit (CTU), Coding Unit (CU), Prediction Unit (PU), and Transform Unit (TU). CTUs, CUs, PUs, and TUs are all image blocks.
Coding tree unit (CTU): an image is composed of a plurality of CTUs. A CTU generally corresponds to a square image area containing the luminance pixels and chrominance pixels of that area (or only luminance pixels, or only chrominance pixels). The CTU also contains syntax elements that indicate how to divide the CTU into at least one coding unit (CU), and the method of decoding each coding block to obtain a reconstructed image. As shown in fig. 1, picture 10 is composed of a plurality of CTUs (including CTU A, CTU B, CTU C, and so on). The coding information corresponding to a CTU includes the luminance values and/or chrominance values of the pixels in its square image area, and may also contain syntax elements indicating how to divide the CTU into at least one CU and how to decode each CU to obtain the reconstructed image. The image area corresponding to one CTU may include 64 × 64, 128 × 128, or 256 × 256 pixels. In one example, a CTU of 64 × 64 pixels comprises a rectangular pixel lattice of 64 columns with 64 pixels in each column, each pixel comprising a luminance component and/or a chrominance component. A CTU may also correspond to a rectangular image area or an area of another shape, for example an area whose number of pixels in the horizontal direction differs from that in the vertical direction, such as 64 × 128 pixels.
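The tiling of a picture into CTUs described above can be sketched with a small hypothetical helper (not part of any standard API); CTUs at the right and bottom edges that extend past the picture boundary are simply counted by rounding up.

```python
import math

def ctu_grid(width, height, ctu_size=64):
    """Return (columns, rows, total) of CTUs needed to cover a picture.
    Edge CTUs may be partially outside the picture, hence the ceil."""
    cols = math.ceil(width / ctu_size)
    rows = math.ceil(height / ctu_size)
    return cols, rows, cols * rows
```

For example, a 1920 × 1080 picture with 64 × 64 CTUs needs 30 columns and 17 rows of CTUs.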
Coding unit (CU): as shown in fig. 2, a CTU may be further divided into coding blocks (CUs). A CU generally corresponds to an A × B rectangular area in the image containing A × B luminance pixels and/or their corresponding chrominance pixels, where A is the width of the rectangle and B its height; A and B may be the same or different, and usually take values that are integer powers of 2, such as 128, 64, 32, 16, 8, or 4. In the embodiments of the present application, width refers to the length along the X-axis direction (horizontal direction) of the two-dimensional rectangular coordinate system XoY shown in fig. 1, and height refers to the length along the Y-axis direction (vertical direction). The reconstructed image of a CU may be obtained by adding a predicted image and a residual image: the predicted image is generated by intra prediction or inter prediction and may be composed of one or more prediction blocks (PBs), and the residual image is generated by inverse quantization and inverse transform of transform coefficients and may be composed of one or more transform blocks (TBs). Specifically, a CU contains coding information, including the prediction mode, the transform coefficients, and the like; the corresponding prediction, inverse quantization, and inverse transform are performed on the CU according to this coding information to generate the reconstructed image of the CU. The relationship between a coding tree unit and its coding blocks is shown in fig. 2.
Prediction unit (PU): the basic unit of intra prediction and inter prediction. The motion information of an image block is defined to include the inter-frame prediction direction, the reference frame, the motion vector, and the like. An image block being encoded is called a current coding block (CCB), and an image block being decoded is called a current decoding block (CDB); for example, when an image block is undergoing prediction processing, the current coding block or current decoding block is a prediction block, and when it is undergoing residual processing, the current coding block or current decoding block is a transform block. The picture in which the current coding block or current decoding block is located is called the current frame. In the current frame, image blocks located to the left of or above the current block may already have been encoded/decoded, yielding reconstructed images; these are referred to as reconstructed blocks, and information such as their coding mode and reconstructed pixels is available. A frame whose encoding/decoding was completed before that of the current frame is called a reconstructed frame. When the current frame is a unidirectionally predicted frame (P frame) or a bidirectionally predicted frame (B frame), it has one or two reference frame lists, respectively; the two lists are denoted L0 and L1, and each contains at least one reconstructed frame, called a reference frame of the current frame. Reference frames provide reference pixels for inter-frame prediction of the current frame.
Transform unit (TU): processes the residual between the original image block and the predicted image block.
A pixel (also referred to as a pixel point) is a picture element in an image, such as a pixel in a coding block, a pixel in a luminance component pixel block (also called a luminance pixel), or a pixel in a chrominance component pixel block (also called a chrominance pixel).
A sample (also referred to as a pixel value or sample value) is the value of a pixel: in the luminance component domain, it is the luminance (i.e., the gray-scale value); in the chrominance component domain, it is the chrominance value (i.e., color and saturation). Depending on the processing stage, the sample of a pixel is specifically an original sample, a predicted sample, or a reconstructed sample.
Description of directions: the horizontal direction is, for example, along the X-axis of the two-dimensional rectangular coordinate system XoY shown in fig. 1; the vertical direction is, for example, along the negative Y-axis of the same coordinate system.
Intra-frame prediction: a predicted image of the current block is generated from the spatially adjacent pixels of the current block, and each intra prediction mode corresponds to one method of generating the predicted image. The division of an intra prediction unit includes the 2N × 2N division mode and the N × N division mode; the 2N × 2N mode does not divide the image block, while the N × N mode divides it into four sub-blocks of equal size.
In general, digital video compression techniques operate on video sequences whose color coding method is YCbCr, also referred to as YUV, with a color format such as 4:2:0, 4:2:2, or 4:4:4. Y denotes luminance (Luma), i.e., the gray-scale value; Cb denotes the blue chrominance component and Cr denotes the red chrominance component; U and V denote chrominance (Chroma), describing color and saturation. In the 4:2:0 color format, each chrominance component has half the resolution of the luminance component both horizontally and vertically; in 4:2:2, it has half the horizontal resolution only; and in 4:4:4, it has full resolution.
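The subsampling relationship between the formats can be made concrete with a small helper (illustrative only, not part of any standard API):

```python
def chroma_plane_size(luma_w, luma_h, fmt="4:2:0"):
    """Dimensions of each chroma (Cb/Cr) plane for the common
    YCbCr sampling formats described above."""
    if fmt == "4:4:4":        # full chroma resolution
        return luma_w, luma_h
    if fmt == "4:2:2":        # horizontal subsampling only
        return luma_w // 2, luma_h
    if fmt == "4:2:0":        # horizontal and vertical subsampling
        return luma_w // 2, luma_h // 2
    raise ValueError(f"unknown color format: {fmt}")
```

This matches the example shown later in fig. 5, where an 8 × 8 luminance block corresponds to a 4 × 4 chrominance block in the 4:2:0 format.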
In the digital video encoding process, the encoder reads pixels and encodes the original video sequence in one of these color formats. A typical digital encoder includes prediction, transform and quantization, inverse transform and inverse quantization, loop filtering, entropy coding, and so on, and is used to eliminate spatial, temporal, visual, and statistical redundancy. Since human eyes are more sensitive to changes in the luminance component and respond less strongly to changes in the chrominance components, the original video sequence is generally encoded in the YUV 4:2:0 color format. Meanwhile, the digital video encoder applies different prediction processes to the luminance and chrominance components in the intra coding part: prediction of the luminance component is finer and more complex, while prediction of the chrominance component is generally simpler. The cross-component prediction (CCP) mode is an existing digital video coding technique that acts between the luminance component and the chrominance components to increase the video compression ratio.
In the intra coding process, intra prediction is calculated separately for the luminance component and the chrominance components to obtain the intra prediction samples of the current coding block. The difference between the intra prediction samples and the original samples gives the residual samples of the current coding block, which then undergo transform and quantization, filtering, entropy coding, and the like; the resulting output code stream is finally transmitted to the decoding end.
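The residual computation described above is element-wise subtraction; a minimal sketch (transform, quantization, and entropy coding are omitted):

```python
def residual_block(original, predicted):
    """Residual samples of the current coding block: the element-wise
    difference between original samples and intra-prediction samples,
    for two equally sized 2-D blocks given as lists of rows."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]
```

The decoder reverses the process: it adds the (inverse-transformed, dequantized) residual back to the prediction to obtain the reconstructed block.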
In one specific implementation, in standards such as the Chinese Audio Video coding Standard (AVS) and the latest international Versatile Video Coding (VVC) standard, the intra-coded luminance and chrominance components are calculated separately. As shown in fig. 4A, up to 65 angular and non-angular prediction modes are evaluated for the luminance component of the current coding block, where DC denotes the mean mode, Plane the plane mode, and Bilinear the bilinear mode; the optimal luminance component prediction mode is selected according to the rate-distortion cost. Similarly, the intra chrominance component calculation traverses up to ten ordinary chroma and cross-component prediction modes and selects the optimal chrominance component prediction mode according to the rate-distortion cost, where the ordinary chroma intra prediction modes include an angular prediction mode derived from the luminance mode and some non-angular prediction modes, and the cross-component prediction modes include prediction across a single component and prediction across multiple components.
The intra luminance component prediction process is similar to ordinary chroma intra prediction: the current luminance coding block calculates its prediction block from the available information of the adjacent luminance boundary pixels (boundary pixels are the pixels in the left or upper adjacent block of the current coding block that are closest to the boundary of the current coding block) and the corresponding intra luminance prediction mode. Different prediction modes select different luminance boundary pixels: some modes use the luminance boundary pixels directly as reference pixels to calculate the prediction samples of the current luminance coding block, while others first interpolate the luminance boundary pixels and then select from them to calculate the prediction samples.
When ordinary chroma intra prediction is used, the current chrominance coding block calculates its prediction samples from the available information of the adjacent reconstructed blocks of the same component and the corresponding intra chrominance prediction mode. Different prediction modes select different reference pixels: some modes use the boundary pixels of adjacent coding blocks directly as reference pixels to calculate the prediction samples of the current chrominance coding block, while others first interpolate the boundary pixels of the adjacent coding blocks and then select reference pixels to calculate the prediction samples. Fig. 4B shows intra prediction modes for a coding block of size 8 × 8: (1) is the ordinary intra vertical angular prediction mode, which uses the boundary pixels of the upper adjacent coding block as reference pixels to calculate the prediction samples of the current coding block; (2) is the ordinary intra horizontal angular prediction mode, which uses the boundary pixels of the left adjacent coding block as reference pixels; and (3) is an ordinary intra non-angular prediction mode, which uses the boundary pixels of both the upper and left adjacent coding blocks as reference pixels.
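The three ordinary intra modes of fig. 4B can be sketched as follows. This is a minimal illustration under simplifying assumptions (square block, all reference pixels available); real codecs add reference-availability handling and per-standard rounding rules.

```python
def intra_predict(top_ref, left_ref, size, mode):
    """Minimal sketch of the three ordinary intra modes of fig. 4B:
    'vertical' copies the upper boundary pixels down each column,
    'horizontal' copies the left boundary pixels across each row,
    'dc' (a non-angular mode) averages both boundaries."""
    if mode == "vertical":
        return [[top_ref[x] for x in range(size)] for _ in range(size)]
    if mode == "horizontal":
        return [[left_ref[y]] * size for y in range(size)]
    if mode == "dc":
        mean = (sum(top_ref[:size]) + sum(left_ref[:size]) + size) // (2 * size)
        return [[mean] * size for _ in range(size)]
    raise ValueError(f"unknown mode: {mode}")
```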
When cross-component chroma intra prediction is used, the boundary pixels of the adjacent reconstructed luma blocks and the boundary pixels of the adjacent reconstructed chroma blocks of the current block are used to calculate the coefficients of the linear model. The boundary pixels of the adjacent luma reconstructed blocks comprise the boundary pixels of the upper and the left adjacent luma reconstructed blocks of the current luma component coding block; the boundary pixels of the adjacent chroma reconstructed blocks comprise the boundary pixels of the upper and the left adjacent chroma reconstructed blocks of the current chroma component block. When boundary pixels are selected as reference pixels, depending on the availability of the reconstructed samples of the boundary pixels of the adjacent reconstructed blocks, both the luma component and the chroma component may adopt a combination of two boundary pixels of the upper adjacent reconstructed block and two boundary pixels of the left adjacent reconstructed block, or four boundary pixels of the upper adjacent reconstructed block, or four boundary pixels of the left adjacent reconstructed block. Fig. 5 shows an example of the positional relationship of an 8×8 luma component pixel block and its boundary pixels (a) and a 4×4 chroma component pixel block and its boundary pixels (b) under the YUV4:2:0 color format.
Among the boundary pixels used for linear model coefficient calculation, as shown in Fig. 6: if the boundary pixels come from both sides of the current coding block, the upper side selects the leftmost and the rightmost pixels among the upper boundary pixels, and the left side selects the uppermost and the lowermost pixels among the left boundary pixels; if the boundary pixels for the linear model coefficient calculation come only from the upper side, pixels at four consecutive step positions among the upper boundary pixels are selected, with one quarter of the width of the original pixel block corresponding to the current coding block as the step length; if the boundary pixels come only from the left side, pixels at four consecutive step positions among the left boundary pixels are selected, with one quarter of the height of the original pixel block corresponding to the current coding block as the step length. That is, the four luma boundary pixels and the four chroma boundary pixels can be selected in three ways.
Method one: when two boundary pixels are selected from each of the upper side and the left side, the selected luma boundary pixels can be determined by the following formulas (the chroma boundary pixels are selected likewise and are not described again):
minStep=min(Width,Height);
TopIndex=(minStep–1)*Width/minStep;
LeftIndex=(minStep–1)*Height/minStep;
In the above equations, min(x, y) returns the smaller of x and y, Width is the width of the luma pixel block of the current coding block, Height is the height of the luma pixel block of the current coding block, TopIndex is the index of the luma boundary pixel other than the first one when the upper luma boundary pixels are selected, and LeftIndex is the index of the luma boundary pixel other than the first one when the left luma boundary pixels are selected.
Method two: when four luma boundary pixels are selected only from the upper adjacent coding block (the chroma boundary pixels are selected likewise and are not described again), starting from the leftmost luma boundary pixel and taking one quarter of the width of the luma pixel block of the current coding block as the step length, four luma boundary pixels are selected in total.
Method three: when four luma boundary pixels are selected only from the left adjacent coding block (the chroma boundary pixels are selected likewise and are not described again), starting from the topmost luma boundary pixel and taking one quarter of the height of the luma pixel block of the current coding block as the step length, four luma boundary pixels are selected in total.
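The three selection methods above can be summarized in a short sketch. The following Python code is illustrative only: the function name and the availability flags are assumptions, and the chroma positions follow the same rule as the luma positions.

```python
def select_boundary_positions(width, height, above_available, left_available):
    """Return (top_positions, left_positions): the column/row indices of the
    boundary pixels chosen for linear-model coefficient calculation."""
    if above_available and left_available:
        # Method one: two pixels per side; the second index follows
        # TopIndex = (minStep - 1) * Width / minStep (likewise for Height).
        min_step = min(width, height)
        return ([0, (min_step - 1) * width // min_step],
                [0, (min_step - 1) * height // min_step])
    if above_available:
        # Method two: four pixels from the upper row, step = Width / 4.
        step = width // 4
        return [i * step for i in range(4)], []
    if left_available:
        # Method three: four pixels from the left column, step = Height / 4.
        step = height // 4
        return [], [i * step for i in range(4)]
    return [], []
```

For an 8×8 block with both sides available this yields the leftmost and rightmost upper pixels (indices 0 and 7) and the uppermost and lowermost left pixels, consistent with Fig. 6.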
In the current coding block, a prediction sample of a pixel in a chroma component prediction block is obtained by performing linear model calculation and downsampling on a reconstructed sample of a pixel in a luminance component pixel block of the current coding block, wherein the linear model calculation process is represented as follows:
Pred_C(i, j) = α·Rec_L(i, j) + β    (1)
where (i, j) are the coordinates of the pixel: i is the abscissa within the prediction block (also called the chroma prediction block) of the chroma component of the current coding block, ranging over [0, Width − 1] with step 1, where Width is the width of the prediction block of the chroma component of the current coding block and may take the values 4, 8, 16 and 32; j is the ordinate within the prediction block of the chroma component of the current coding block, ranging over [0, Height − 1] with step 1, where Height is the height of the prediction block of the chroma component of the current coding block and may take the values 4, 8, 16 and 32. Rec_L is the reconstructed sample of a pixel in the reconstructed block of the luma component, Pred_C is the prediction sample of a pixel in the prediction block of the chroma component, and α, β are the coefficients of the linear model.
Wherein α and β can be calculated by the following equation:
α = (Y_Max − Y_Min) / (X_Max − X_Min)    (2)
β = Y_Min − α·X_Min    (3)
where Y_Max is the average of the two largest reconstructed samples among the reconstructed samples of the boundary pixels in the adjacent chroma reconstructed blocks (i.e., reconstructed blocks of the chroma component) used to calculate the linear model, and Y_Min is the average of the two smallest such samples; X_Max is the average of the two largest reconstructed samples among the reconstructed samples of the boundary pixels in the adjacent luma reconstructed blocks used to calculate the linear model, and X_Min is the average of the two smallest such samples.
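Equations (1) to (3) can be illustrated with a minimal Python sketch. The names are assumptions; note that, read literally, the text averages the two largest and two smallest samples of each component independently, whereas deployed CCLM/TSCPM implementations pair each extreme luma sample with its co-located chroma sample and use integer arithmetic.

```python
def linear_model_coeffs(luma_refs, chroma_refs):
    """Equations (2) and (3): alpha from the averaged extremes, then
    beta = Y_Min - alpha * X_Min."""
    xs, ys = sorted(luma_refs), sorted(chroma_refs)
    x_min, x_max = (xs[0] + xs[1]) / 2.0, (xs[-2] + xs[-1]) / 2.0
    y_min, y_max = (ys[0] + ys[1]) / 2.0, (ys[-2] + ys[-1]) / 2.0
    alpha = (y_max - y_min) / (x_max - x_min) if x_max != x_min else 0.0
    return alpha, y_min - alpha * x_min

def predict_chroma(alpha, beta, luma_rec):
    """Equation (1): Pred_C(i, j) = alpha * Rec_L(i, j) + beta."""
    return [[alpha * v + beta for v in row] for row in luma_rec]
```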
The cross-component linear model technique of VVC includes the LM, LM_L and LM_A modes. LM_L calculates the linear model using only the boundary pixels of the left adjacent pixel block, LM_A calculates it using only the boundary pixels of the upper adjacent pixel block, and LM calculates it using the boundary pixels of both the left and the upper adjacent pixel blocks.
According to the choice of boundary pixels, the cross-component chroma intra prediction modes are as follows. If the boundary pixels of the upper adjacent pixel blocks (luma and chroma pixel blocks) and of the left adjacent pixel blocks of the current coding block are both available and the coefficient calculation of the linear model uses both the upper and the left boundary pixels, or if only the upper boundary pixels of the current coding block are available and the coefficient calculation uses only the upper boundary pixels, or if only the left boundary pixels are available and the coefficient calculation uses only the left boundary pixels, the mode is the TSCPM or cross-component linear model prediction (CCLM) mode. If both the upper and the left boundary pixels of the current coding block are available but the coefficient calculation uses only the upper boundary pixels, the mode is the TSCPM_T or CCLM_A mode. If both the upper and the left boundary pixels of the current coding block are available but the coefficient calculation uses only the left boundary pixels, the mode is the TSCPM_L or CCLM_L mode.
In addition to the cross-single-component prediction modes described above, AVS3 also adopts the cross-multi-component prediction mode MCPM. In cross-multi-component chroma prediction, the prediction of one chroma component necessarily refers to both the luma component and the other chroma component: after the luma component has been predicted, cross-single-component prediction from the luma component yields the prediction samples of the first chroma component, and finally the reconstructed samples of the luma component and of the first chroma component are used to predict the second chroma component. In the specific example of AVS3, the cross-multi-component prediction mode MCPM is similar to the TSCPM prediction mode, with the MCPM mode corresponding to the TSCPM mode, the MCPM_T mode corresponding to the TSCPM_T mode, and the MCPM_L mode corresponding to the TSCPM_L mode. The prediction process of the chroma U component in the three MCPM prediction modes is identical to the TSCPM mode, and the prediction block of the chroma V component is obtained by subtracting the reconstructed block of the chroma U component from the temporary chroma prediction block. The specific formulas are as follows:
Pred_C(x, y) = α·Rec_L(x, y) + β    (6)
Pred_Cr(x, y) = Pred_C′(x, y) − Rec_Cb(x, y)    (7)
where Pred_C(x, y) is the prediction sample at pixel (x, y) in the reference prediction block of the chroma component, Rec_L(x, y) is the reconstructed sample at pixel (x, y) in the reconstructed block of the luma component, Pred_C′(x, y) is the prediction sample at pixel (x, y) in the downsampled prediction block of the chroma component, Rec_Cb(x, y) is the reconstructed sample of the U component at pixel (x, y) in the reconstructed block of the chroma component, Pred_Cr(x, y) is the prediction sample of the V component at pixel (x, y) in the prediction block of the chroma component, and α and β are the linear parameters of the U component and the V component respectively, calculated with reference to equations (2) and (3).
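A hedged sketch of the MCPM V-component step in equations (6) and (7), assuming the temporary cross-component prediction has already been downsampled to the chroma block size (the function and argument names are illustrative):

```python
def mcpm_v_prediction(alpha_v, beta_v, luma_rec, u_rec):
    """Equation (6) builds the temporary prediction from the luma
    reconstruction; equation (7) subtracts the reconstructed U block."""
    h, w = len(luma_rec), len(luma_rec[0])
    tmp = [[alpha_v * luma_rec[y][x] + beta_v for x in range(w)]
           for y in range(h)]
    return [[tmp[y][x] - u_rec[y][x] for x in range(w)] for y in range(h)]
```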
After the linear model is calculated, cross-component prediction of the chroma component is performed according to it, and the reconstructed block of the luma component of the current CU is used to generate the reference prediction block (chroma reference prediction pixel block) of the chroma component of the current coding block. Specifically, a reference prediction sample of the chroma component is calculated for each pixel of the current coding block according to equations (1), (2) and (3), and the resulting reference prediction block of the chroma component has the same size as the original pixel block of the luma component. In a specific example, the input digital video color format is typically the YUV4:2:0 format, i.e. the chroma component prediction block is one quarter the size of the original pixel block of the luma component. To obtain a chroma component prediction block of the correct size, the reference prediction block of the chroma component must be downsampled by half in both the horizontal and the vertical direction; the downsampled chroma prediction block is then one quarter of the original luma pixel block, satisfying the size constraint of the color format. The filters used to downsample the reference prediction block of the chroma component comprise a two-tap filter with equal coefficients for the left boundary pixel area of the reference prediction block and a six-tap filter with two distinct coefficient values for the other pixel areas.
The six-tap down-sampling filter with two distinct coefficient values is shown in equation (4).
Pred_C(x, y) = ( 2·P′_C(2x, 2y) + 2·P′_C(2x, 2y+1) + P′_C(2x−1, 2y) + P′_C(2x+1, 2y) + P′_C(2x−1, 2y+1) + P′_C(2x+1, 2y+1) + 4 ) >> 3    (4)

where x, y are the coordinates of the pixel, P′_C(·,·) are reference prediction samples of pixels in the reference prediction block of the chroma component, and Pred_C(x, y) is the prediction sample of the pixel in the prediction block of the chroma component.
The two-tap down-sampling filter with equal coefficients is shown in equation (5).
Pred_C(x, y) = ( P′_C(2x, 2y) + P′_C(2x, 2y+1) + 1 ) >> 1    (5)

where x, y are the coordinates of the pixel, P′_C(·,·) are reference prediction samples of pixels in the reference prediction block of the chroma component, and Pred_C(x, y) is the prediction sample of the pixel in the prediction block of the chroma component.
The down-sampling filters are shown in fig. 7, where ×1 denotes a tap coefficient of 1 and ×2 denotes a tap coefficient of 2. Fig. 8 shows a schematic diagram of the cross-component technique from the luma component reconstructed block to the chroma component prediction block, where the luma component reconstructed block of the coding block is 8×8, the corresponding chroma component reference prediction block is 8×8, and the filtered chroma component prediction block is 4×4.
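The two downsampling filters of equations (4) and (5) can be sketched as follows. This is an illustrative reading of the text, not reference code: the six-tap weights (1, 2, 1 over two rows) and the handling of the x = 0 column are assumptions consistent with the tap counts and coefficient descriptions above.

```python
def downsample_chroma_reference(ref):
    """Halve a 2H x 2W chroma reference prediction block in both directions:
    a two-tap vertical filter (eq. 5) on the left boundary column and a
    six-tap filter (eq. 4) for all other columns."""
    h, w = len(ref) // 2, len(ref[0]) // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x == 0:
                # Two equal taps, averaging vertically with rounding.
                out[y][x] = (ref[2*y][0] + ref[2*y + 1][0] + 1) >> 1
            else:
                # Six taps: weight 2 on the center column, 1 on neighbors.
                out[y][x] = (2 * ref[2*y][2*x] + 2 * ref[2*y + 1][2*x]
                             + ref[2*y][2*x - 1] + ref[2*y][2*x + 1]
                             + ref[2*y + 1][2*x - 1] + ref[2*y + 1][2*x + 1]
                             + 4) >> 3
    return out
```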
When the boundary pixels are selected as the reference pixels of the prediction mode in the existing intra-frame prediction mode, the relevance between some boundary pixels and the current coding block is usually ignored. The current coding block has spatial relevance with all boundary pixels, and if only the boundary pixels from the upper side are taken, the reference information from the boundary pixels on the left side is lacked; if only from the left border pixel, then there is no reference information from the upper border pixel. Thus, the prediction wastes a large amount of available reference information, the prediction sample of the current coding block cannot be calculated well, and the coding efficiency is lost. In view of this problem, those skilled in the art have proposed solving the above problem through the following design ideas.
And after the intra-frame prediction is finished to obtain a prediction sample, performing prediction correction on the prediction sample, including filtering a brightness component and filtering a chroma component.
The filtering process for the prediction samples of the luminance component includes the following implementation manners.
If the brightness component of the current coding block is predicted and only the upper brightness boundary pixel is selected as the reference pixel to calculate the prediction sample, the left brightness boundary pixel is adopted as the reference pixel to filter the prediction sample of the brightness component of the current coding block;
if the brightness component of the current coding block is predicted and only the left brightness boundary pixel is selected as the reference pixel to calculate the prediction sample, the upper brightness boundary pixel is adopted as the reference pixel to filter the prediction sample of the brightness component of the current coding block;
if the luminance component prediction of the current coding block adopts the upper side and the left side luminance boundary pixels as the reference pixels or the current luminance coding block is in a non-angle prediction mode, filtering prediction samples of the luminance component of the current coding block by adopting the upper side and the left side luminance boundary pixels as the reference pixels.
The filtering process of the prediction sample obtained by the calculation of the chroma component common intra-frame prediction mode comprises the following implementation modes:
If the current coding block only selects the upper side chroma boundary pixel as the reference pixel to calculate the prediction sample, the left side chroma boundary pixel is adopted as the reference pixel to filter the prediction sample of the chroma component of the current coding block;
if the current coding block only selects the left side chroma boundary pixel as the reference pixel to calculate the prediction sample, filtering the prediction sample of the chroma component of the current coding block by adopting the upper side chroma boundary pixel as the reference pixel;
and if the current coding block adopts a chroma component common intra-frame non-angle prediction mode, filtering a prediction sample of the chroma component of the current coding block by adopting an upper chroma boundary pixel and a left chroma boundary pixel as reference pixels.
The filtering process for the prediction samples calculated by the chroma component cross-component prediction mode comprises the following implementation modes:
if the current coding block only selects the upper side brightness boundary pixel and the upper side chroma boundary pixel to calculate the coefficient of the linear model, filtering the prediction sample of the chroma component of the current coding block by adopting the left side chroma boundary pixel as a reference pixel;
and if the current coding block only selects the left luminance boundary pixel and the left chrominance boundary pixel to calculate the coefficient of the linear model, filtering a prediction sample of the chrominance component of the current coding block by using the upper chrominance boundary pixel as a reference pixel.
Filtering the prediction samples of the current coding block comprises: using the distance between the current pixel and the reference pixel as the filter coefficient index, using the size of the current coding block as the filter coefficient group index, looking up the intra-frame prediction filter coefficient group according to the group index, finding the intra-frame prediction filter coefficient within that group according to the coefficient index, and calculating the corrected prediction sample from the found filter coefficient and the filter formula.
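The lookup-and-blend procedure can be sketched as below. The coefficient tables are not given in this text, so COEFF_GROUPS and the 6-bit blending formula here are hypothetical placeholders; only the indexing scheme (distance → coefficient, block size → coefficient group) follows the description above.

```python
# Hypothetical coefficient groups keyed by block size; the real tables are
# defined by the codec specification and are not reproduced in this text.
COEFF_GROUPS = {4: [24, 12, 6, 3], 8: [24, 12, 6, 3, 2, 1, 0, 0]}

def correct_prediction(pred, left_ref, top_ref, block_size):
    """Blend each prediction sample with the left/top boundary references,
    weighting by coefficients indexed by the pixel's distance to each side."""
    coeffs = COEFF_GROUPS[block_size]
    n = len(pred)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            fx, fy = coeffs[x], coeffs[y]  # distance to left / top boundary
            out[y][x] = (fx * left_ref[y] + fy * top_ref[x]
                         + (64 - fx - fy) * pred[y][x] + 32) >> 6
    return out
```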
Specifically, at the encoding end, the input video is divided into a plurality of coding tree units, each coding tree unit is further divided into a plurality of rectangular or square coding blocks, and the luma component and the chroma component of each independent coding block undergo the intra prediction process.
Intra prediction is performed on the luma component of the current coding block, and the prediction samples of the luma component of the current coding block are calculated according to the luma prediction mode and the availability of boundary pixels. If the intra-frame prediction filtering identification bit in the current mode is true, the prediction samples are filtered according to the method described above; if the identification bit is false, no additional operation is performed. The rate-distortion cost information is updated, and the optimal luma prediction mode is selected according to the rate-distortion cost.
The intra-frame prediction of the chroma component of the current coding block comprises a common prediction mode and a cross-component prediction mode:
the current chroma prediction mode is a common prediction mode, is similar to the brightness component, and obtains a prediction sample of the chroma component of the current coding block by calculation according to the chroma prediction mode and the availability of boundary pixels. If the intra-frame prediction filtering identification bit in the current mode is true, filtering the prediction sample according to the method; and if the intra-frame prediction filtering identification bit in the current mode is negative, no additional operation is performed, and the rate distortion cost information is updated.
The current chroma prediction mode is a cross-component prediction mode, a corresponding boundary pixel calculation linear model is selected according to the cross-component prediction mode, a reference prediction block (a reference prediction sample of each pixel) of a chroma component is calculated through the linear model, then, the prediction block of the chroma component is obtained through down-sampling, and after the prediction of the brightness component and the first chroma component is finished, a final prediction sample can be obtained through calculation in the cross-multi-component prediction mode. If the intra-frame prediction filtering identification bit in the current mode is true, filtering the prediction sample according to the method; and if the intra-frame prediction filtering identification bit in the current mode is negative, no additional operation is performed, and the rate distortion cost information is updated. And selecting the optimal chroma prediction mode according to the rate distortion cost.
The optimal prediction mode indices of the luma component and the chroma component are used as the prediction mode coding parameters of the current coding block and transmitted to the decoding end through the code stream. If intra-frame prediction filtering in the current prediction mode is true, an identification bit must be transmitted at the coding block level, indicating that the current coding block uses the intra-frame prediction filtering technique; if intra-frame prediction filtering in the current prediction mode is false, an identification bit must be transmitted at the coding block level, indicating that the current coding block does not use the intra-frame prediction filtering technique. The residual between the original video samples and the prediction samples is then calculated; one subsequent path of the residual forms the output code stream through transform, quantization, entropy coding and the like, while the other path forms reconstructed samples through inverse transform, inverse quantization, loop filtering and the like, to be used as reference information for subsequent video compression.
Specifically, at the decoding end, the input code stream is analyzed, inversely transformed and inversely quantized to obtain the luminance component and chrominance component prediction mode index of the current coding block and the intra-frame prediction filtering identification bit of the current coding block.
The luma component of the current coding block is predicted and reconstructed: boundary pixels are selected according to the luma component prediction mode obtained for the current coding block, and the prediction samples of the luma component of the current coding block are calculated. If the intra-frame prediction filtering identification bit obtained for the current coding block is true, the prediction samples of the luma component are filtered according to the luma prediction mode of the current coding block to obtain the final prediction samples; if the identification bit is false, no filtering operation is performed on the luma component and the prediction samples are used directly as the final prediction samples. The luma component residual of the current coding block is superposed on the final prediction samples to obtain the reconstructed samples of the luma component of the current coding block.
And predicting and reconstructing the chrominance component of the current coding block, selecting boundary pixels according to a chrominance component prediction mode obtained by the current coding block, calculating or not calculating a linear model, and performing prediction sample calculation on the chrominance component of the current coding block to obtain a prediction sample. If the intra-frame prediction filtering identification bit acquired by the current coding block is true, filtering a chroma component prediction sample of the current coding block according to a chroma prediction mode of the current coding block to obtain a final prediction sample; if the intra-frame prediction filtering identification bit acquired by the current coding block is false, filtering operation is not carried out on the chroma component of the current coding block to obtain a final prediction sample, and the final prediction sample is superposed with the chroma component residual error of the current coding block to obtain a reconstructed sample of the chroma component of the current coding block.
One path of the subsequent reconstructed samples is used as reference information for subsequent video decoding, and the other path is subjected to post-filtering processing to output the video signal.
In the intra-frame prediction technology, the filtering operation on the prediction samples can be effectively combined with the spatial correlation, and the accuracy of intra-frame prediction is improved. However, the intra-frame prediction filtering does not perform additional processing on the boundary pixels, and all the prediction filtering uses original reconstructed samples of the boundary pixels (including the boundary pixels of the prediction block of the luminance component and the boundary pixels of the prediction block of the chrominance component), so that some characteristics of intra-frame prediction modes are ignored, and for some coding blocks requiring smoothing processing, if the original reconstructed samples of the boundary pixels are used to perform filtering operation on the currently processed pixels, the currently processed pixels may be sharpened, so that the prediction result becomes worse.
In view of the above problems, the embodiments of the present application consider preprocessing the reconstructed samples of the boundary pixels in combination with the characteristics of the prediction mode, that is, taking into account not only the spatial correlation of intra coding but also the characteristics of the different reference pixels and intra prediction modes.
Specifically, in the intra-frame filtering prediction correction process, when prediction correction is performed on a prediction block of a luminance component obtained in the intra-frame prediction process, smoothing is performed on a reconstructed sample of a boundary pixel of an adjacent coding block on the upper side and/or the left side, which is taken as a reference pixel, and the smoothing includes the following four specific implementation mechanisms:
first, a filter is selected to perform a first filtering on a boundary pixel according to an intra prediction mode of a currently processed pixel and a distance between the currently processed pixel and the boundary pixel.
Secondly, according to the intra-frame prediction mode of the currently processed pixel, a filter is selected to carry out first filtering on the boundary pixel.
And thirdly, selecting a filter to perform first filtering on the boundary pixel according to the distance between the currently processed pixel and the boundary pixel.
Fourthly, selecting a filter to carry out first filtering on boundary pixels according to the number of rows and columns of the reference pixels in the prediction block of the target component.
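A toy sketch of the "first filtering" of the boundary reference samples. The tap values and the selection rule are illustrative assumptions; the four mechanisms above differ only in which inputs (intra prediction mode, pixel-to-boundary distance, number of reference rows/columns) drive the choice of filter.

```python
def smooth_boundary(refs, strong):
    """Smooth a line of boundary reference samples with a 3-tap [1, 2, 1]
    filter, or a 5-tap [1, 2, 2, 2, 1] filter where 'strong' smoothing is
    selected; end samples are left unfiltered."""
    n = len(refs)
    out = list(refs)
    for i in range(1, n - 1):
        if strong and 2 <= i <= n - 3:
            out[i] = (refs[i-2] + 2*refs[i-1] + 2*refs[i]
                      + 2*refs[i+1] + refs[i+2] + 4) >> 3
        else:
            out[i] = (refs[i-1] + 2*refs[i] + refs[i+1] + 2) >> 2
    return out

def select_strong_smoothing(is_angular_mode, distance_to_boundary):
    # Mechanism one (illustrative): combine the intra mode and the distance
    # between the currently processed pixel and the boundary pixel.
    return is_angular_mode and distance_to_boundary <= 1
```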
FIG. 9 is a block diagram of a video coding system 1 of one example described in an embodiment of the present application. As used herein, the term "video coder" generally refers to both video encoders and video decoders. In this application, the term "video coding" or "coding" may generally refer to video encoding or video decoding. The video encoder 100 and the video decoder 200 of the video coding system 1 are used to implement the cross-component prediction method proposed in the present application.
As shown in fig. 9, video coding system 1 includes a source device 10 and a destination device 20. Source device 10 generates encoded video data. Accordingly, source device 10 may be referred to as a video encoding device. Destination device 20 may decode the encoded video data generated by source device 10. Accordingly, the destination device 20 may be referred to as a video decoding device. Various implementations of source device 10, destination device 20, or both may include one or more processors and memory coupled to the one or more processors. The memory can include, but is not limited to, RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures that can be accessed by a computer, as described herein.
Source device 10 and destination device 20 may comprise a variety of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, in-vehicle computers, or the like.
Destination device 20 may receive encoded video data from source device 10 via link 30. Link 30 may comprise one or more media or devices capable of moving encoded video data from source device 10 to destination device 20. In one example, link 30 may comprise one or more communication media that enable source device 10 to transmit encoded video data directly to destination device 20 in real-time. In this example, source device 10 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to destination device 20. The one or more communication media may include wireless and/or wired communication media such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the internet). The one or more communication media may include a router, switch, base station, or other apparatus that facilitates communication from source device 10 to destination device 20. In another example, encoded data may be output from output interface 140 to storage device 40.
The image codec techniques of this application may be applied to video codecs to support a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions (e.g., via the internet), encoding for video data stored on a data storage medium, decoding of video data stored on a data storage medium, or other applications. In some examples, video coding system 1 may be used to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
The video coding system 1 illustrated in fig. 9 is merely an example, and the techniques of this application may be applied to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between an encoding device and a decoding device. In other examples, the data is retrieved from local storage, streamed over a network, and so forth. A video encoding device may encode and store data to a memory, and/or a video decoding device may retrieve and decode data from a memory. In many examples, the encoding and decoding are performed by devices that do not communicate with each other, but merely encode data to and/or retrieve data from memory and decode data.
In the example of fig. 9, source device 10 includes video source 120, video encoder 100, and output interface 140. In some examples, output interface 140 may include a modulator/demodulator (modem) and/or a transmitter. Video source 120 may comprise a video capture device (e.g., a video camera), a video archive containing previously captured video data, a video feed interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources of video data.
Video encoder 100 may encode video data from video source 120. In some examples, source device 10 transmits the encoded video data directly to destination device 20 via output interface 140. In other examples, the encoded video data may also be stored onto storage device 40 for later access by destination device 20 for decoding and/or playback.
In the example of fig. 9, destination device 20 includes input interface 240, video decoder 200, and display device 220. In some examples, input interface 240 includes a receiver and/or a modem. Input interface 240 may receive encoded video data via link 30 and/or from storage device 40. The display device 220 may be integrated with the destination device 20 or may be external to the destination device 20. In general, display device 220 displays the decoded video data. The display device 220 may include a variety of display devices, such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or other types of display devices.
Although not shown in fig. 9, in some aspects, video encoder 100 and video decoder 200 may each be integrated with an audio encoder and decoder, and may include appropriate multiplexer-demultiplexer units or other hardware and software to handle encoding of both audio and video in a common data stream or separate data streams.
Video encoder 100 and video decoder 200 may each be implemented as any of a variety of circuits such as: one or more microprocessors, Digital Signal Processors (DSPs), Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), discrete logic, hardware, or any combination thereof. If the present application is implemented in part in software, a device may store instructions for the software in a suitable non-volatile computer-readable storage medium and may execute the instructions in hardware using one or more processors to implement the techniques of the present application. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered one or more processors. Each of video encoder 100 and video decoder 200 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (codec) in a respective device.
Fig. 10 is a block diagram of an example of a video encoder 100 as described in embodiments of the present application. The video encoder 100 is used to output the video to the post-processing entity 41. Post-processing entity 41 represents an example of a video entity that may process the encoded video data from video encoder 100, such as a Media Aware Network Element (MANE) or a splicing/editing device. In some cases, post-processing entity 41 may be an instance of a network entity. In some video encoding systems, post-processing entity 41 and video encoder 100 may be parts of separate devices, while in other cases, the functionality described with respect to post-processing entity 41 may be performed by the same device that includes video encoder 100. In some examples, post-processing entity 41 is an example of storage device 40 of fig. 1.
In the example of fig. 10, the video encoder 100 includes a prediction processing unit 108, a filter unit 106, a memory 107, a summer 112, a transformer 101, a quantizer 102, and an entropy encoder 103. The prediction processing unit 108 includes an inter predictor 110 and an intra predictor 109. For image block reconstruction, the video encoder 100 further includes an inverse quantizer 104, an inverse transformer 105, and a summer 111. Filter unit 106 represents one or more loop filters, such as deblocking filters, adaptive Loop Filters (ALF), and Sample Adaptive Offset (SAO) filters. Although filter unit 106 is shown in fig. 10 as an in-loop filter, in other implementations, filter unit 106 may be implemented as a post-loop filter. In one example, the video encoder 100 may further include a video data memory, a partitioning unit (not shown).
Video encoder 100 receives video data and stores the video data in a video data memory. The partitioning unit partitions the video data into image blocks, and these image blocks may be further partitioned into smaller blocks, e.g. image block partitions based on a quadtree structure or a binary tree structure. Prediction processing unit 108 may select one of a plurality of possible coding modes for the current image block, such as one of a plurality of intra coding modes or one of a plurality of inter coding modes. Prediction processing unit 108 may provide the resulting intra- or inter-coded block to summer 112 to generate a residual block, and to summer 111 to reconstruct the encoded block used as a reference picture. An intra predictor 109 within prediction processing unit 108 may perform intra-predictive encoding of a current image block relative to one or more neighboring coding blocks in the same frame or slice as the current block to be encoded to remove spatial redundancy. Inter predictor 110 within prediction processing unit 108 may perform inter-predictive coding of the current image block relative to one or more prediction blocks in one or more reference pictures to remove temporal redundancy. The prediction processing unit 108 provides information indicating the selected intra or inter prediction mode of the current image block to the entropy encoder 103 so that the entropy encoder 103 encodes the information indicating the selected inter prediction mode.
After prediction processing unit 108 generates a prediction block for the current image block via inter/intra prediction, video encoder 100 forms a residual image block by subtracting the prediction block from the current image block to be encoded. Summer 112 represents one or more components that perform this subtraction operation. The residual video data in the residual block may be included in one or more TUs and applied to transformer 101. The transformer 101 transforms the residual video data into residual transform coefficients using a transform such as a Discrete Cosine Transform (DCT) or a conceptually similar transform. Transformer 101 may convert residual video data from a pixel value domain to a transform domain, such as the frequency domain.
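The subtraction performed by summer 112 can be sketched as follows (a minimal pure-Python illustration, not the patent's implementation; the block contents are hypothetical):

```python
def residual_block(original, prediction):
    """Per-pixel difference between the original image block and its prediction."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, prediction)]

# Hypothetical 2x2 luma block and its intra prediction.
orig = [[120, 130], [125, 128]]
pred = [[118, 129], [126, 128]]
res = residual_block(orig, pred)  # [[2, 1], [-1, 0]]
```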
The transformer 101 may send the resulting transform coefficients to the quantizer 102. Quantizer 102 quantizes the transform coefficients to further reduce the bit rate. In some examples, quantizer 102 may then perform a scan of a matrix that includes quantized transform coefficients. Alternatively, the entropy encoder 103 may perform a scan.
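The bit-rate reduction performed by quantizer 102 can be illustrated with a uniform scalar quantizer (a sketch only; the actual quantizer derives codec-specific step sizes from the quantization parameter, which this example does not model):

```python
def quantize(coeff, qstep):
    """Uniform scalar quantization with rounding to the nearest level."""
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) + qstep // 2) // qstep)

def dequantize(level, qstep):
    """Inverse quantization: scale the level back; precision is lost."""
    return level * qstep

c = 37
lvl = quantize(c, 8)      # 5
rec = dequantize(lvl, 8)  # 40 -- close to 37, but not exact
```

The irreversibility shown here (37 becomes 40 after the round trip) is exactly the lossy step the inverse quantizer 104 later undoes only approximately.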
After quantization, the entropy encoder 103 entropy encodes the quantized transform coefficients. For example, the entropy encoder 103 may perform context-adaptive variable-length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy encoding method or technique. After entropy encoding by the entropy encoder 103, the encoded codestream may be transmitted to the video decoder 200, or archived for later transmission or retrieval by the video decoder 200. The entropy encoder 103 may also entropy encode syntax elements of the current image block to be encoded.
Inverse quantizer 104 and inverse transformer 105 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block for a reference picture. The summer 111 adds the reconstructed residual block to the prediction block produced by the inter predictor 110 or the intra predictor 109 to produce a reconstructed image block. The filter unit 106 may be applied to the reconstructed image block to reduce distortion, such as blocking artifacts. The reconstructed image block is then stored in memory 107 as a reference block that may be used by inter predictor 110 for inter prediction of blocks in subsequent video frames or images.
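The reconstruction performed by summer 111 can be sketched as follows (an illustrative pure-Python version; the clipping to the valid pixel range is an assumption based on common codec practice, not spelled out in this paragraph):

```python
def reconstruct(residual, prediction, bit_depth=8):
    """Add the reconstructed residual to the prediction and clip to pixel range."""
    hi = (1 << bit_depth) - 1
    return [[max(0, min(hi, r + p)) for r, p in zip(rrow, prow)]
            for rrow, prow in zip(residual, prediction)]

rec = reconstruct([[2, 1], [-1, 3]], [[118, 129], [126, 254]])
# [[120, 130], [125, 255]] -- the last sample is clipped to 255
```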
The video encoder 100 divides the input video into a number of coding tree units, each of which is in turn divided into a number of rectangular or square coding blocks. When the current coding block is coded in an intra prediction mode, the encoder traverses a set of candidate prediction modes for the luma component of the current coding block and selects the optimal prediction mode by rate-distortion cost, and likewise traverses a set of candidate prediction modes for the chroma component and selects the optimal prediction mode by rate-distortion cost. The residual between the original video block and the prediction block is then computed; one path of the residual is transformed, quantized, and entropy coded to form the output codestream, while the other path is inverse transformed, inverse quantized, and loop filtered to form reconstructed samples used as reference information for subsequent video compression.
The present application is implemented in the video encoder 100 as follows:
In intra coding, the intra prediction filtering enable flag of the current sequence is obtained.
If the intra prediction filtering enable flag is true, the current sequence is allowed to use the intra prediction filtering technique;
if the intra prediction filtering enable flag is false, the current sequence is prohibited from using the intra prediction filtering technique.
Intra prediction mode traversal is performed on the luma and chroma components of the current coding unit to obtain the intra prediction filtering flag of the current coding unit.
If the intra prediction filtering flag of the current coding unit is true, the multi-row/multi-column reference pixel filtering flag of the current coding unit is obtained.
If the multi-row/multi-column reference pixel filtering flag is true, multi-row/multi-column reference pixel filtering is used, and the first filtering is applied to the reference pixels of the current coding unit to obtain filtered reference pixels;
if the multi-row/multi-column reference pixel filtering flag is false, single-row/single-column reference pixel filtering is used, and the first filtering is applied to the reference pixels of the current coding unit to obtain filtered reference pixels.
The second filtering is applied to the prediction samples of the pixels of the current coding unit using the filtered reference pixels to obtain the final prediction samples.
If the intra prediction filtering flag of the current coding unit is false, no additional operation is performed, and the prediction samples are the final prediction samples.
The optimal prediction mode is selected by rate-distortion cost from the final prediction samples obtained in the prediction process.
If the optimal case has the current coding unit using multi-row/multi-column reference pixel filtering, the index of the optimal prediction mode, the intra prediction filtering flag set to true, and the multi-row/multi-column reference pixel filtering flag set to true are transmitted to the decoding end in the codestream;
if the optimal case has the current coding unit using intra prediction filtering but not multi-row/multi-column reference pixel filtering, the index of the optimal prediction mode, the intra prediction filtering flag set to true, and the multi-row/multi-column reference pixel filtering flag set to false are transmitted to the decoding end in the codestream;
and if the optimal case has the current coding unit not using intra prediction filtering, the index of the optimal prediction mode and the intra prediction filtering flag set to false are transmitted to the decoding end in the codestream.
The final prediction samples are subtracted from the original samples to obtain the residual samples of the current coding unit, which are transformed, quantized, and entropy coded and transmitted to the decoding end in the form of a codestream.
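The three signalling cases above can be sketched as a single decision function (function and flag names are hypothetical illustrations, not the patent's syntax elements):

```python
def signal_flags(best_uses_ipf, best_uses_multi_rc):
    """Return the (ipf_flag, multi_rc_flag) pair written to the codestream.

    best_uses_ipf:      the optimal mode uses intra prediction filtering
    best_uses_multi_rc: the optimal mode also uses multi-row/multi-column
                        reference pixel filtering
    """
    if not best_uses_ipf:
        return (False, None)  # the multi-row/column flag is not transmitted
    return (True, best_uses_multi_rc)

assert signal_flags(True, True) == (True, True)
```

Note that the multi-row/multi-column flag is only meaningful (and only transmitted) when the intra prediction filtering flag is true, which the decoder-side parsing below mirrors.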
The intra predictor 109 may also provide information indicating the selected intra prediction mode of the current encoding block to the entropy encoder 103 so that the entropy encoder 103 encodes the information indicating the selected intra prediction mode.
Fig. 11 is an exemplary block diagram of a video decoder 200 described in the embodiments of the present application. In the example of fig. 11, the video decoder 200 includes an entropy decoder 203, a prediction processing unit 208, an inverse quantizer 204, an inverse transformer 205, a summer 211, a filter unit 206, and a memory 207. The prediction processing unit 208 may include an inter predictor 210 and an intra predictor 209. In some examples, video decoder 200 may perform a decoding process that is substantially reciprocal to the encoding process described with respect to video encoder 100 from fig. 10.
In the decoding process, video decoder 200 receives an encoded video bitstream representing an image block and associated syntax elements of an encoded video slice from video encoder 100. Video decoder 200 may receive video data from network entity 42 and, optionally, may store the video data in a video data store (not shown). The video data memory may store video data, such as an encoded video bitstream, to be decoded by components of video decoder 200. The video data stored in the video data memory may be obtained, for example, from storage device 40, from a local video source such as a camera, via wired or wireless network communication of video data, or by accessing a physical data storage medium. The video data memory may serve as a coded picture buffer (CPB) for storing encoded video data from the encoded video bitstream.
Network entity 42 may be, for example, a server, a MANE, a video editor/splicer, or other such device for implementing one or more of the techniques described above. Network entity 42 may or may not include a video encoder, such as video encoder 100. Network entity 42 may implement portions of the techniques described in this application before network entity 42 sends the encoded video bitstream to video decoder 200. In some video decoding systems, network entity 42 and video decoder 200 may be part of separate devices, while in other cases, the functionality described with respect to network entity 42 may be performed by the same device that includes video decoder 200.
The entropy decoder 203 of the video decoder 200 entropy decodes the code stream to generate quantized coefficients and some syntax elements. The entropy decoder 203 forwards the syntax elements to the prediction processing unit 208. Video decoder 200 may receive syntax elements at the video slice level and/or the picture block level. When a video slice is decoded as an intra-decoded (I) slice, intra predictor 209 of prediction processing unit 208 generates a prediction block for an image block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or picture. When a video slice is decoded as an inter-decoded (i.e., B or P) slice, the inter predictor 210 of the prediction processing unit 208 may determine an inter prediction mode for decoding a current image block of the current video slice based on the syntax elements received from the entropy decoder 203, and decode (e.g., perform inter prediction) the current image block based on the determined inter prediction mode.
The inverse quantizer 204 inversely quantizes, i.e., dequantizes, the quantized transform coefficients provided in the codestream and decoded by the entropy decoder 203. The inverse quantization process may include: using the quantization parameter calculated by the video encoder 100 for each image block in the video slice to determine the degree of quantization that was applied and, likewise, the degree of inverse quantization that should be applied. Inverse transformer 205 applies an inverse transform, such as an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to generate a residual block in the pixel domain.
After the inter predictor 210 generates a prediction block for the current image block or a sub-block of the current image block, the video decoder 200 obtains a reconstructed block, i.e., a decoded image block, by summing the residual block from the inverse transformer 205 with the corresponding prediction block generated by the inter predictor 210. Summer 211 represents the component that performs this summation operation. A loop filter (in or after the decoding loop) may also be used to smooth pixel transitions or otherwise improve video quality, if desired. Filter unit 206 may represent one or more loop filters, such as deblocking filters, adaptive Loop Filters (ALF), and Sample Adaptive Offset (SAO) filters. Although the filter unit 206 is shown in fig. 11 as an in-loop filter, in other implementations, the filter unit 206 may be implemented as a post-loop filter.
The image decoding method specifically performed by the video decoder 200 includes obtaining the prediction mode index of the current coding block after the input codestream is parsed, inverse transformed, and inverse quantized. If the prediction mode index of the chroma component of the current coding block indicates the enhanced two-step cross-component prediction mode, reconstructed samples are selected only from the upper or left neighboring pixels of the current coding block according to the index value to compute a linear model, a reference prediction block of the chroma component of the current coding block is computed from the linear model and down-sampled, and prediction correction based on the correlation of boundary-neighboring pixels in the orthogonal direction is applied to the down-sampled prediction block to obtain the final prediction block of the chroma component. One path of the subsequently reconstructed signal serves as reference information for subsequent video decoding, and the other path is post-filtered to output the video signal.
The present application is implemented in the video decoder 200 as follows:
and acquiring a code stream and analyzing to obtain an intra-frame prediction filtering enabling identification bit of the current sequence.
If the intra-frame prediction filtering enabling identification bit of the current sequence is true, the intra-frame prediction filtering identification bit of the decoding unit needs to be analyzed;
if the intra prediction filtering flag of the current sequence is false, the intra prediction filtering flag of the decoding unit does not need to be parsed.
The current decoding unit is parsed to obtain the prediction modes of its luma and chroma components.
If the intra prediction filtering enable flag is true, the intra prediction filtering flag of the current decoding unit is parsed;
if the intra prediction filtering enable flag is false, no additional operation is performed, and the current decoding unit is predicted according to the parsed prediction mode to obtain its prediction value.
If the intra prediction filtering enable flag of the current sequence is true and the intra prediction filtering flag of the current decoding unit is true, the multi-row/multi-column reference pixel filtering flag of the current decoding unit is parsed.
If the multi-row/multi-column reference pixel filtering flag of the current decoding unit is true, the first filtering is applied with a multi-row/multi-column filter to the multi-row/multi-column reference pixels of the neighboring reconstructed block of the current decoding unit;
if the multi-row/multi-column reference pixel filtering flag of the current decoding unit is false, the first filtering is applied with a single-row/single-column filter to the single-row/single-column reference pixels of the neighboring reconstructed block of the current decoding unit.
The second filtering is applied to the prediction sample of the currently processed pixel according to the filtered reference pixels to obtain the final prediction sample.
If the intra prediction filtering enable flag of the current sequence is true and the intra prediction filtering flag of the current decoding unit is false, or if the intra prediction filtering enable flag of the current sequence is false, no additional operation is performed, and the prediction samples of the current decoding unit are the final prediction samples.
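The flag-parsing logic described above can be sketched as follows (an illustrative mirror of the encoder-side signalling; `read_flag` is a hypothetical helper standing in for entropy-decoded bitstream reads):

```python
def parse_filter_flags(seq_ipf_enabled, read_flag):
    """Decide which filtering flags to parse for the current decoding unit.

    seq_ipf_enabled: the sequence-level intra prediction filtering enable flag.
    read_flag:       callable returning the next flag from the bitstream.
    """
    if not seq_ipf_enabled:
        return (False, False)  # nothing to parse; no extra filtering
    cu_ipf = read_flag()
    # The multi-row/multi-column flag is present only when the CU-level
    # intra prediction filtering flag is true.
    multi_rc = read_flag() if cu_ipf else False
    return (cu_ipf, multi_rc)

bits = iter([True, False])
flags = parse_filter_flags(True, lambda: next(bits))  # (True, False)
```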
The final prediction samples are added to the residual samples decoded for the current decoding unit to obtain the reconstructed samples of the current decoding unit; one path of the reconstructed samples is post-processed as the output signal, and the other path serves as reference information.
It should be understood that other structural variations of the video decoder 200 may be used to decode the encoded video stream. For example, the video decoder 200 may generate an output video stream without processing by the filter unit 206; alternatively, for some image blocks or image frames, the entropy decoder 203 of the video decoder 200 does not decode quantized coefficients and accordingly does not need to be processed by the inverse quantizer 204 and the inverse transformer 205.
Specifically, the intra predictor 209 may use the image decoding method described in the embodiment of the present application in the generation of the prediction block.
Fig. 12A is a flowchart illustrating an image encoding method in an embodiment of the present application, where the image encoding method can be applied to the source device 10 in the video coding system 1 shown in fig. 9 or the video encoder 100 shown in fig. 10. The flow shown in fig. 12A is described by taking the execution subject as the video encoder 100 shown in fig. 10 as an example. As shown in fig. 12A, the image encoding method provided in the embodiment of the present application includes:
Step 110, dividing the image, and determining the intra prediction mode of the target component of the current coding block, wherein the target component comprises a luminance component or a chrominance component.
Wherein the color format of the video to which the image belongs includes, but is not limited to, 4:2:0, 4:2:2, and 4:4:4.
For example, when the color format is 4:2:0, the chroma component of the image is sampled at half the luma resolution both horizontally and vertically.
As another example, when the color format is 4:2:2, the chroma component of the image is sampled at half the luma resolution horizontally and at full luma resolution vertically.
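The format list is truncated in this translation, but under the common YUV chroma-subsampling conventions the chroma block dimensions follow directly from the luma dimensions; a sketch:

```python
def chroma_size(luma_w, luma_h, fmt):
    """Chroma block size for common YUV subsampling formats."""
    if fmt == "4:2:0":
        return luma_w // 2, luma_h // 2  # half resolution in both directions
    if fmt == "4:2:2":
        return luma_w // 2, luma_h       # half horizontal resolution only
    if fmt == "4:4:4":
        return luma_w, luma_h            # same resolution as luma
    raise ValueError(f"unknown color format: {fmt}")

assert chroma_size(16, 16, "4:2:0") == (8, 8)
```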
Wherein the intra prediction mode of the target component includes a luminance component intra prediction mode and a chrominance component intra prediction mode.
And 120, determining a prediction block of the target component of the current coding block according to the intra-frame prediction mode of the target component.
In this possible example, the determination of the prediction block of the target component is detailed in the foregoing description of various intra prediction processes of the luminance component and the chrominance component, and will not be described again here.
Step 130, performing a first filtering on the reference pixel used for correcting the prediction block according to the intra-frame prediction mode of the target component to obtain a filtered reference pixel.
And 140, performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
The reference pixels are the boundary pixels in the left or upper neighboring coding block of the current coding block that are close and/or adjacent to the current coding block. The details are described below.
In the filtering of the prediction samples for the luminance component:
if the luma component prediction of the current coding block uses only the upper luma boundary pixels as reference pixels to compute the prediction samples, the left luma boundary pixels are used as reference pixels to filter the prediction samples of the luma component of the current coding block;
if the luma component prediction of the current coding block uses only the left luma boundary pixels as reference pixels to compute the prediction samples, the upper luma boundary pixels are used as reference pixels to filter the prediction samples of the luma component of the current coding block;
if the luma component prediction of the current coding block uses both the upper and left luma boundary pixels as reference pixels, or the current luma coding block is in a non-angular prediction mode, both the upper and left luma boundary pixels are used as reference pixels to filter the prediction samples of the luma component of the current coding block.
In the filtering of prediction samples computed by the chroma component normal intra prediction mode:
if the current coding block only selects the upper side chroma boundary pixel as the reference pixel to calculate the prediction sample, the left side chroma boundary pixel is used as the reference pixel to filter the prediction sample of the chroma component of the current coding block;
if the current coding block only selects the left chrominance boundary pixel as the reference pixel to calculate the prediction sample, filtering the prediction sample of the chrominance component of the current coding block by adopting the upper chrominance boundary pixel as the reference pixel;
and if the current coding block uses a non-angular normal chroma intra prediction mode, both the upper and left chroma boundary pixels are used as reference pixels to filter the prediction samples of the chroma component of the current coding block.
In the filtering of prediction samples computed by the chroma component cross-component prediction mode:
If the current coding block only selects the upper side brightness boundary pixel and the upper side chroma boundary pixel to calculate the coefficient of the linear model, filtering the prediction sample of the chroma component of the current coding block by adopting the left side chroma boundary pixel as a reference pixel;
and if the current coding block only selects the left luminance boundary pixel and the left chrominance boundary pixel to calculate the coefficient of the linear model, filtering the prediction sample of the chrominance component of the current coding block by using the upper chrominance boundary pixel as a reference pixel.
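The three case analyses above (luma, normal chroma intra, and cross-component prediction) all follow the same rule: the prediction samples are corrected using the boundary that was not (or not exclusively) used to build the prediction. A sketch with hypothetical names:

```python
def correction_reference(used_top, used_left, non_angular=False):
    """Which boundary pixels are used to filter the prediction samples.

    used_top/used_left: which boundaries were used to compute the
    prediction (or the linear-model coefficients) itself.
    """
    if non_angular or (used_top and used_left):
        return ("top", "left")  # correct with both boundaries
    if used_top:
        return ("left",)        # predicted from the top -> correct with left
    if used_left:
        return ("top",)         # predicted from the left -> correct with top
    return ()

assert correction_reference(used_top=True, used_left=False) == ("left",)
```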
In this possible example, the first filtering, according to the intra prediction mode of the target component, the reference pixels used for modifying the prediction block, includes: determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component; determining a filter according to association information of a prediction block of the target component, the association information including at least one of: the intra prediction mode of the target component, the distance between the reference pixel and the currently processed pixel, and the number of rows and columns of the reference pixel in a prediction block of the target component; first filtering the reference pixel using the filter.
In this possible example, the target component includes a luminance component; the associated information includes an intra prediction mode of the target component and a distance of the reference pixel from a currently processed pixel.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode of the luma component is a vertical-like intra angular prediction mode and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter X11.
In this possible example, the filter X11 is configured to filter a boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block, where the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a middle boundary pixel of the three consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter X11 comprises a first three-tap filter;
the first three-tap filter is:
P′ref(0, y) = (1 × Pref(0, y−1) + 2 × Pref(0, y) + 1 × Pref(0, y+1) + 2) >> 2
where y is the row coordinate of the current pixel and does not exceed the height of the current coding block; P′ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; Pref(0, y−1) is the original reconstructed sample of the boundary pixel in row y−1 of the left neighboring reconstructed block; Pref(0, y) is the original reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block; and Pref(0, y+1) is the original reconstructed sample of the boundary pixel in row y+1 of the left neighboring reconstructed block.
Fig. 12B is a schematic diagram of the first filtering of a reference pixel by a three-tap filter, where f0, f1, and f2 form the boundary pixel region, f1 is the reference pixel corresponding to the current predicted pixel, and the first distance range may be (0, 10).
As can be seen, in this example, for the intra vertical angle-like prediction mode of the luma component, when the distance between the reference pixel and the currently processed pixel is in the first distance range, the filtering of the reference pixel is implemented using a three-tap filter.
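The [1, 2, 1]/4 smoothing above can be sketched in a few lines of Python. This is an illustrative standalone sketch, not codec source: the function name and the sample values are hypothetical, and the +2 offset with the >> 2 shift implement integer division by 4 with rounding, exactly as in the formula.

```python
def three_tap_filter(ref, i):
    """[1, 2, 1] / 4 smoothing with rounding, per the formula above.

    `ref` holds the original reconstructed boundary samples of one row
    or column of the neighboring reconstructed block; `i` indexes the
    reference pixel in the middle of the three boundary pixels.
    """
    return (1 * ref[i - 1] + 2 * ref[i] + 1 * ref[i + 1] + 2) >> 2

# Hypothetical boundary samples P_ref(0, y) of a left neighboring block:
left_column = [100, 104, 120, 118]
# Filtered sample P'_ref(0, 1) for the reference pixel at y = 1:
print(three_tap_filter(left_column, 1))  # -> 107
```

Note that a flat region passes through unchanged (filtering three equal samples returns the same value), which is the expected behavior of a smoothing kernel whose weights sum to the divisor.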
In this possible example, the determining a filter according to the association information of the prediction block of the target component comprises: when the intra prediction mode of the luma component is an intra horizontal-like angle prediction mode and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter X12.
In this possible example, the filter X12 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a luma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter X12 comprises a second three-tap filter;
the second three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 2) >> 2
where x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current coding block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-1, 0), P_ref(x, 0), and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, and x+1, respectively, of the upper neighboring reconstructed block.
It can be seen that in this example, for the intra-frame horizontal-class angular prediction mode of the luminance component, and in the case where the distance between the reference pixel and the currently processed pixel is in the first distance range, filtering for the reference pixel can be implemented by using the three-tap filter.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode of the luma component is an intra vertical angle-like prediction mode and the distance between the reference pixel and the currently processed pixel is in the second distance range, the filter is set to filter X13.
Wherein the second distance range may be (10, 20), which is not limited herein.
In this possible example, the filter X13 is configured to filter a boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block, the boundary pixel region being a pixel region near a left boundary of the current coding block, the boundary pixel region including five consecutive, neighboring boundary pixels, and a middle boundary pixel of the five consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter X13 comprises a first five-tap filter;
the first five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2) + 4) >> 3
where y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current coding block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block.
Fig. 12C is a schematic diagram of the first filtering of a reference pixel by a five-tap filter, where f3, f1, f0, f2, and f4 form the boundary pixel region, f0 is the reference pixel corresponding to the current predicted pixel, and the second distance range may be Distance greater than or equal to 10.
It can be seen that in this example, for the intra-frame vertical angle-like prediction mode of the luminance component, and in the case that the distance between the reference pixel and the currently processed pixel is in the second distance range, the filtering for the reference pixel can be implemented by using the five-tap filter.
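The longer [1, 2, 2, 2, 1]/8 kernel used in the second distance range can be sketched the same way (again a hypothetical standalone sketch; the +4 offset and >> 3 shift implement integer division by 8 with rounding):

```python
def five_tap_filter(ref, i):
    """[1, 2, 2, 2, 1] / 8 smoothing with rounding, per the formula above.

    `ref` holds the original reconstructed boundary samples; `i` indexes
    the reference pixel in the middle of the five boundary pixels.
    """
    return (1 * ref[i - 2] + 2 * ref[i - 1] + 2 * ref[i] +
            2 * ref[i + 1] + 1 * ref[i + 2] + 4) >> 3

# Hypothetical boundary samples P_ref(0, y) of a left neighboring block:
left_column = [100, 104, 120, 118, 110]
# Filtered sample P'_ref(0, 2) for the reference pixel at y = 2:
print(five_tap_filter(left_column, 2))  # -> 112
```

The wider support gives stronger smoothing, which matches its use here for reference pixels farther from the currently processed pixel.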
In this possible example, the determining a filter according to the association information of the prediction block of the target component comprises: when the intra prediction mode of the luma component is an intra horizontal class angle prediction mode and the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter X14.
In this possible example, the filter X14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a luma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter X14 comprises a second five-tap filter;
the second five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0) + 4) >> 3
where x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current coding block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
It can be seen that in this example, for the intra-frame horizontal-class angular prediction mode of the luminance component, and the distance between the reference pixel and the currently processed pixel is in the second distance range, the filtering for the reference pixel can be implemented by using the five-tap filter.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode for the luma component is intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter X15.
In this possible example, the filter X15 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
Wherein the filter X15 comprises a first three-tap filter and a second three-tap filter;
the first three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 2) >> 2
where y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current coding block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0, y-1), P_ref(0, y), and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y, and y+1, respectively, of the left neighboring reconstructed block;
the second three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 2) >> 2
where x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current coding block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-1, 0), P_ref(x, 0), and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, and x+1, respectively, of the upper neighboring reconstructed block.
It can be seen that in this example, for the case of the intra non-angular prediction mode of the luminance component and the distance of the reference pixel from the currently processed pixel is in the first distance range, the first filtering for the reference pixel can be implemented by using the three-tap filter.
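In the non-angular case both boundaries are filtered with the same [1, 2, 1]/4 kernel; a minimal sketch of that (sample values and names hypothetical) is:

```python
def three_tap(ref, i):
    # [1, 2, 1] / 4 with rounding offset +2, per the formulas above
    return (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2

# Hypothetical reference samples of the two neighboring reconstructed blocks:
left_column = [100, 104, 120, 118]  # P_ref(0, y) samples, left boundary
top_row = [90, 96, 92, 88]          # P_ref(x, 0) samples, upper boundary

# Filter the first reference pixel on each boundary:
filtered_left = three_tap(left_column, 1)  # -> 107
filtered_top = three_tap(top_row, 1)       # -> 94
```

Both boundary regions are processed independently, each filtered sample depending only on its own row or column of original reconstructed samples.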
In this possible example, the determining a filter according to the association information of the prediction block of the target component comprises:
when the intra prediction mode for the luma component is intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter X16.
In this possible example, the filter X16 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
Wherein the filter X16 comprises a first five-tap filter and a second five-tap filter;
the first five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2) + 4) >> 3
where y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current coding block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block;
the second five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0) + 4) >> 3
where x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current coding block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
It can be seen that in this example, for the intra non-angular prediction mode of the luminance component, and for the case where the distance of the reference pixel from the currently processed pixel is in the second distance range, the first filtering for the reference pixel can be implemented using the five-tap filter.
In this possible example, the target component comprises a chroma component; the associated information includes an intra prediction mode of the target component and a distance of the reference pixel from a currently processed pixel.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes:
when the intra prediction mode of the chroma component is an intra vertical angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_T, or a cross-component linear model prediction CCLM_A, or a multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter Y11.
In this possible example, the filter Y11 is configured to filter a boundary pixel region of a left-side neighboring reconstructed block of a predicted block of chroma components of the current coding block, the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter Y11 comprises a third three-tap filter;
the third three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 2) >> 2
where y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current coding block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0, y-1), P_ref(0, y), and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y, and y+1, respectively, of the left neighboring reconstructed block.
It can be seen that in this example, for the intra vertical class angle prediction mode of the chroma component, or TSCPM_T, or CCLM_A, or MCPM_T, with the distance between the reference pixel and the currently processed pixel in the first distance range, the first filtering of the reference pixel can be implemented using the three-tap filter.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes:
when the intra prediction mode of the chroma component is an intra horizontal angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_L, or a cross-component linear model prediction CCLM_L, or a multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter Y12.
In this possible example, the filter Y12 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter Y12 comprises a fourth three-tap filter;
the fourth three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 2) >> 2
where x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current coding block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-1, 0), P_ref(x, 0), and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, and x+1, respectively, of the upper neighboring reconstructed block.
It can be seen that in this example, for the intra horizontal class angle prediction mode of the chroma component, or TSCPM_L, or CCLM_L, or MCPM_L, with the distance between the reference pixel and the currently processed pixel in the first distance range, the first filtering of the reference pixel can be implemented using the three-tap filter.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes:
When the intra prediction mode of the chroma component is an intra vertical angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_T, or a cross-component linear model prediction CCLM_A, or a multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in the second distance range, the filter is set to filter Y13.
In this possible example, the filter Y13 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter Y13 comprises a third five-tap filter;
the third five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2) + 4) >> 3
where y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current coding block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block.
It can be seen that in this example, for the intra vertical angle-like prediction mode of the chroma component, or TSCPM_T, or CCLM_A, or MCPM_T, with the distance between the reference pixel and the currently processed pixel in the second distance range, the first filtering of the reference pixel can be implemented using the five-tap filter.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes:
when the intra prediction mode of the chroma component is an intra horizontal angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_L, or a cross-component linear model prediction CCLM_L, or a multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a second distance range, the filter is set to filter Y14.
In this possible example, the filter Y14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter Y14 comprises a fourth five-tap filter;
the fourth five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0) + 4) >> 3
where x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current coding block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
It can be seen that in this example, for the intra horizontal class angle prediction mode of the chroma component, or TSCPM_L, or CCLM_L, or MCPM_L, with the distance between the reference pixel and the currently processed pixel in the second distance range, the first filtering of the reference pixel can be implemented using the five-tap filter.
In this possible example, the determining a filter according to the association information of the prediction block of the target component comprises:
when the intra prediction mode for the chroma component is normal intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter Y15.
In this possible example, the filter Y15 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter Y15 comprises a third three-tap filter and a fourth three-tap filter;
the third three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 2) >> 2
where y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current coding block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0, y-1), P_ref(0, y), and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y, and y+1, respectively, of the left neighboring reconstructed block;
the fourth three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 2) >> 2
where x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current coding block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-1, 0), P_ref(x, 0), and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, and x+1, respectively, of the upper neighboring reconstructed block.
It can be seen that in this example, for the case of the normal intra non-angular prediction mode of the chroma component and the distance between the reference pixel and the currently processed pixel is in the first distance range, the first filtering for the reference pixel can be implemented by using the three-tap filter.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes:
when the intra prediction mode for the chroma component is normal intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter Y16.
In this possible example, the filter Y16 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter Y16 comprises a third five-tap filter and a fourth five-tap filter;
the third five-tap filter includes:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2) + 4) >> 3
where y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current coding block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block;
the fourth five-tap filter includes:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0) + 4) >> 3
where x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current coding block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
It can be seen that in this example, for the case where the chroma component uses the normal intra non-angular prediction mode and the distance between the reference pixel and the currently processed pixel is in the second distance range, the first filtering for the reference pixel can be implemented by using the five-tap filter.
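The five-tap formulas above can be sketched as plain integer arithmetic on a one-dimensional line of reference samples; the function name and list-based interface here are illustrative, not part of the patent:

```python
def five_tap(samples, i):
    """Five-tap reference filter with weights 1, 2, 2, 2, 1, rounding
    offset 4 and right shift 3, as in the formulas above."""
    total = (1 * samples[i - 2] + 2 * samples[i - 1] + 2 * samples[i]
             + 2 * samples[i + 1] + 1 * samples[i + 2] + 4)
    return total >> 3  # the weights sum to 8, so >> 3 normalizes
```

Since the weights are symmetric and sum to a power of two, a flat line of samples passes through unchanged, which is the expected behavior of a smoothing filter.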
In this possible example, the target component includes a luminance component; the associated information includes an intra prediction mode of the target component.
In this possible example, when the intra prediction mode of the luminance component is an intra vertical-class angular prediction mode, the filter is set to the filter X21.
In this possible example, the filter X21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a predicted block of a luma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter X21 comprises a first four-tap filter;
the first four-tap filter comprises:
P′_ref(0,y) = (23×P_ref(0,y-1) + 82×P_ref(0,y) + 21×P_ref(0,y+1) + 2×P_ref(0,y+2)) >> 7
where y is the row coordinate of the current pixel, and the value of y does not exceed the height range of the current coding block; P′_ref(0,y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0,y-1), P_ref(0,y), P_ref(0,y+1) and P_ref(0,y+2) are the original reconstructed samples of the boundary pixels in rows y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
It can be seen that in this example, for the case of intra-frame vertical-class angular prediction mode for the luma component, the first filtering for the reference pixel can be achieved using a four-tap filter.
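The asymmetric four-tap filter above can be sketched as follows; the function name and argument order are illustrative assumptions, the weights come from the formula:

```python
def four_tap_x21(p_prev, p0, p1, p2):
    """Four-tap filter with weights 23, 82, 21, 2 (sum 128), as in the
    first four-tap filter formula above; >> 7 divides by 128."""
    return (23 * p_prev + 82 * p0 + 21 * p1 + 2 * p2) >> 7
```

The weights sum to 128, so a flat neighborhood is preserved exactly; because the formula carries no rounding offset, non-flat neighborhoods are rounded toward zero.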
In this possible example, when the intra prediction mode of the luminance component is an intra horizontal-class angular prediction mode, the filter is set to the filter X22.
In this possible example, the filter X22 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a luma component of the current coding block, where the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter X22 comprises a second four-tap filter;
the second four-tap filter comprises:
P′_ref(x,0) = (11×P_ref(x-1,0) + 75×P_ref(x,0) + 40×P_ref(x+1,0) + 2×P_ref(x+2,0)) >> 7
where x is the column coordinate of the current pixel, and the value of x does not exceed the width range of the current coding block; P′_ref(x,0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-1,0), P_ref(x,0), P_ref(x+1,0) and P_ref(x+2,0) are the original reconstructed samples of the boundary pixels in columns x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, for the case of intra-horizontal class angular prediction mode for the luma component, the first filtering for the reference pixel can be achieved with a four-tap filter.
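The horizontal-class four-tap filter differs from the vertical-class one only in its weights; a sketch with an assumed function name:

```python
def four_tap_x22(p_prev, p0, p1, p2):
    """Four-tap filter with weights 11, 75, 40, 2 (sum 128) from the
    second four-tap filter formula above; >> 7 divides by 128."""
    return (11 * p_prev + 75 * p0 + 40 * p1 + 2 * p2) >> 7
```

Compared with the 23/82/21/2 filter, more weight is shifted to the following sample (40 instead of 21), so the smoothing leans in the direction of prediction.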
In this possible example, when the intra prediction mode of the luminance component is an intra non-angular prediction mode, the filter is set to the filter X23.
In this possible example, the filter X23 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position among the four consecutive, neighboring first boundary pixels is a first reference pixel among the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current coding block, and the second boundary pixel region includes four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position among the four consecutive, neighboring second boundary pixels is a second reference pixel among the reference pixels.
Wherein the filter X23 comprises a third four-tap filter and a fourth four-tap filter;
the third four-tap filter comprises:
P′_ref(0,y) = (32×P_ref(0,y-1) + 64×P_ref(0,y) + 32×P_ref(0,y+1) + 0×P_ref(0,y+2)) >> 7
where y is the row coordinate of the current pixel, and the value of y does not exceed the height range of the current coding block; P′_ref(0,y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0,y-1), P_ref(0,y), P_ref(0,y+1) and P_ref(0,y+2) are the original reconstructed samples of the boundary pixels in rows y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
The fourth four-tap filter comprises:
P′_ref(x,0) = (32×P_ref(x-1,0) + 64×P_ref(x,0) + 32×P_ref(x+1,0) + 0×P_ref(x+2,0)) >> 7
where x is the column coordinate of the current pixel, and the value of x does not exceed the width range of the current coding block; P′_ref(x,0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-1,0), P_ref(x,0), P_ref(x+1,0) and P_ref(x+2,0) are the original reconstructed samples of the boundary pixels in columns x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, the first filtering for the reference pixel can be achieved with a four-tap filter for the case of intra non-angular prediction mode for the luma component.
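Because the non-angular filter's last weight is zero, the 32/64/32/0 four-tap filter reduces to a symmetric 1-2-1 smoother. A sketch of applying it along one reference line (the line-based interface and the decision to leave boundary samples without a full neighborhood unfiltered are assumptions, not stated in the patent):

```python
def x23_filter_line(line):
    """Apply the 32/64/32/0 four-tap filter along a reference line.
    Samples lacking a full neighborhood keep their original values."""
    out = list(line)
    for i in range(1, len(line) - 2):
        out[i] = (32 * line[i - 1] + 64 * line[i]
                  + 32 * line[i + 1] + 0 * line[i + 2]) >> 7
    return out
```

On a linear ramp the symmetric weights reproduce the input, while an isolated spike is spread onto its neighbors, which is the intended low-pass effect.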
In this possible example, the target component comprises a chrominance component; the associated information includes an intra prediction mode of the target component.
In this possible example, when the intra prediction mode of the chrominance component is an intra vertical-class angular prediction mode, the filter is set to the filter Y21.
In this possible example, the filter Y21 is configured to filter, in the same manner as the filter X21, a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position in the four consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter Y21 comprises a fifth four-tap filter;
the fifth four-tap filter comprises:
P′_ref(0,y) = (23×P_ref(0,y-1) + 82×P_ref(0,y) + 21×P_ref(0,y+1) + 2×P_ref(0,y+2)) >> 7
where y is the row coordinate of the current pixel, and the value of y does not exceed the height range of the current coding block; P′_ref(0,y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0,y-1), P_ref(0,y), P_ref(0,y+1) and P_ref(0,y+2) are the original reconstructed samples of the boundary pixels in rows y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
It can be seen that in this example, for the case of intra vertical class angular prediction mode for chroma components, the first filtering for the reference pixel can be achieved with a four-tap filter.
In this possible example, when the intra prediction mode of the chrominance component is an intra horizontal-class angular prediction mode, the filter is set to the filter Y22.
In this possible example, the filter Y22 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter Y22 comprises a sixth four-tap filter;
the sixth four-tap filter comprises:
P′_ref(x,0) = (11×P_ref(x-1,0) + 75×P_ref(x,0) + 40×P_ref(x+1,0) + 2×P_ref(x+2,0)) >> 7
where x is the column coordinate of the current pixel, and the value of x does not exceed the width range of the current coding block; P′_ref(x,0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-1,0), P_ref(x,0), P_ref(x+1,0) and P_ref(x+2,0) are the original reconstructed samples of the boundary pixels in columns x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, for the case of intra-horizontal class angular prediction mode for chroma components, the first filtering for the reference pixel can be achieved with a four-tap filter.
In this possible example, when the intra prediction mode of the chrominance component is the intra non-angular prediction mode, the filter is set to the filter Y23.
In this possible example, the filter Y23 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, the first boundary pixel region includes four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position among the four consecutive, neighboring first boundary pixels is a first reference pixel among the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current coding block, and the second boundary pixel region includes four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position among the four consecutive, neighboring second boundary pixels is a second reference pixel among the reference pixels.
Wherein the filter Y23 comprises a seventh four-tap filter and an eighth four-tap filter;
the seventh four-tap filter comprises:
P′_ref(0,y) = (32×P_ref(0,y-1) + 64×P_ref(0,y) + 32×P_ref(0,y+1) + 0×P_ref(0,y+2)) >> 7
where y is the row coordinate of the current pixel, and the value of y does not exceed the height range of the current coding block; P′_ref(0,y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0,y-1), P_ref(0,y), P_ref(0,y+1) and P_ref(0,y+2) are the original reconstructed samples of the boundary pixels in rows y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
The eighth four-tap filter comprises:
P′_ref(x,0) = (32×P_ref(x-1,0) + 64×P_ref(x,0) + 32×P_ref(x+1,0) + 0×P_ref(x+2,0)) >> 7
where x is the column coordinate of the current pixel, and the value of x does not exceed the width range of the current coding block; P′_ref(x,0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-1,0), P_ref(x,0), P_ref(x+1,0) and P_ref(x+2,0) are the original reconstructed samples of the boundary pixels in columns x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, the first filtering for the reference pixel can be achieved with a four-tap filter for the case of intra non-angular prediction mode for the chroma components.
In this possible example, the target component comprises a luminance component; the associated information includes a distance of the reference pixel from the currently processed pixel.
In this possible example, the filter is set to filter X31 when the distance of the reference pixel from the currently processed pixel is in a first distance range.
In this possible example, the filter X31 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
Wherein the filter X31 comprises a first three-tap filter and a second three-tap filter;
the first three-tap filter comprises:
P′_ref(0,y) = (1×P_ref(0,y-1) + 2×P_ref(0,y) + 1×P_ref(0,y+1)) >> 2
where y is the row coordinate of the current pixel, and the value of y does not exceed the height range of the current coding block; P′_ref(0,y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0,y-1), P_ref(0,y) and P_ref(0,y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, respectively.
The second three-tap filter comprises:
P′_ref(x,0) = (1×P_ref(x-1,0) + 2×P_ref(x,0) + 1×P_ref(x+1,0)) >> 2
where x is the column coordinate of the current pixel, and the value of x does not exceed the width range of the current coding block; P′_ref(x,0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-1,0), P_ref(x,0) and P_ref(x+1,0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, for the case where the distance between the reference pixel and the currently processed pixel is in the first distance range, the first filtering for the reference pixel can be implemented by using the three-tap filter.
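The 1-2-1 three-tap filter above is the weakest smoother in the scheme; a minimal sketch with an assumed function name:

```python
def three_tap(p_prev, p0, p1):
    """Three-tap filter with weights 1, 2, 1 and right shift 2, as in the
    first and second three-tap filter formulas above."""
    return (1 * p_prev + 2 * p0 + 1 * p1) >> 2
```

The weights sum to 4, so a flat neighborhood passes through unchanged, while a spike is halved at its center.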
In this possible example, the filter is set to filter X32 when the distance of the reference pixel from the currently processed pixel is in the second distance range.
In this possible example, the filter X32 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region including five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region including five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
Wherein the filter X32 comprises a first five-tap filter and a second five-tap filter;
the first five-tap filter comprises:
P′_ref(0,y) = (1×P_ref(0,y-2) + 2×P_ref(0,y-1) + 2×P_ref(0,y) + 2×P_ref(0,y+1) + 1×P_ref(0,y+2) + 4) >> 3
where y is the row coordinate of the current pixel, and the value of y does not exceed the height range of the current coding block; P′_ref(0,y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0,y-2), P_ref(0,y-1), P_ref(0,y), P_ref(0,y+1) and P_ref(0,y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
The second five-tap filter includes:
P′_ref(x,0) = (1×P_ref(x-2,0) + 2×P_ref(x-1,0) + 2×P_ref(x,0) + 2×P_ref(x+1,0) + 1×P_ref(x+2,0) + 4) >> 3
where x is the column coordinate of the current pixel, and the value of x does not exceed the width range of the current coding block; P′_ref(x,0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-2,0), P_ref(x-1,0), P_ref(x,0), P_ref(x+1,0) and P_ref(x+2,0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, for the case where the distance between the reference pixel and the currently processed pixel is in the second distance range, the first filtering for the reference pixel can be implemented by using the five-tap filter.
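The distance-dependent choice between the two filter strengths can be sketched as a simple dispatch; the boolean flag stands in for the patent's unspecified distance thresholds, and the function shape is an assumption. The +4 rounding offset follows the five-tap chroma formula given earlier in this section:

```python
def filter_by_distance(samples, i, in_first_range):
    """Hypothetical dispatch: a weak 1-2-1 three-tap filter when the
    reference pixel is close to the currently processed pixel (first
    distance range), a stronger five-tap filter when it is farther
    away (second distance range)."""
    if in_first_range:
        return (samples[i - 1] + 2 * samples[i] + samples[i + 1]) >> 2
    return (samples[i - 2] + 2 * samples[i - 1] + 2 * samples[i]
            + 2 * samples[i + 1] + samples[i + 2] + 4) >> 3
```

Using a longer tap length for more distant reference pixels smooths more aggressively where the prediction is least reliable.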
In this possible example, the target component comprises a chrominance component; the associated information includes a distance of the reference pixel from the currently processed pixel.
In this possible example, the filter is set to filter Y31 when the distance of the reference pixel from the currently processed pixel is in a first distance range.
In this possible example, the filter Y31 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter Y31 comprises a third three-tap filter and a fourth three-tap filter;
the third three-tap filter comprises:
P′_ref(0,y) = (1×P_ref(0,y-1) + 2×P_ref(0,y) + 1×P_ref(0,y+1)) >> 2
where y is the row coordinate of the current pixel, and the value of y does not exceed the height range of the current coding block; P′_ref(0,y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0,y-1), P_ref(0,y) and P_ref(0,y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, respectively.
The fourth three-tap filter comprises:
P′_ref(x,0) = (1×P_ref(x-1,0) + 2×P_ref(x,0) + 1×P_ref(x+1,0)) >> 2
where x is the column coordinate of the current pixel, and the value of x does not exceed the width range of the current coding block; P′_ref(x,0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-1,0), P_ref(x,0) and P_ref(x+1,0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, for the case where the distance between the reference pixel and the currently processed pixel is in the first distance range, the first filtering for the reference pixel can be implemented by using the three-tap filter.
In this possible example, the filter is set to filter Y32 when the distance of the reference pixel from the currently processed pixel is in the second distance range.
In this possible example, the filter Y32 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
The filter Y32 comprises a third five-tap filter and a fourth five-tap filter;
the third five-tap filter comprises:
P′_ref(0,y) = (1×P_ref(0,y-2) + 2×P_ref(0,y-1) + 2×P_ref(0,y) + 2×P_ref(0,y+1) + 1×P_ref(0,y+2) + 4) >> 3
where y is the row coordinate of the current pixel, and the value of y does not exceed the height range of the current coding block; P′_ref(0,y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0,y-2), P_ref(0,y-1), P_ref(0,y), P_ref(0,y+1) and P_ref(0,y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
The fourth five-tap filter comprises:
P′_ref(x,0) = (1×P_ref(x-2,0) + 2×P_ref(x-1,0) + 2×P_ref(x,0) + 2×P_ref(x+1,0) + 1×P_ref(x+2,0) + 4) >> 3
where x is the column coordinate of the current pixel, and the value of x does not exceed the width range of the current coding block; P′_ref(x,0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-2,0), P_ref(x-1,0), P_ref(x,0), P_ref(x+1,0) and P_ref(x+2,0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, for the case where the distance between the reference pixel and the currently processed pixel is in the second distance range, the first filtering for the reference pixel can be implemented by using the five-tap filter.
In this possible example, the target component comprises a luminance component; the associated information includes a number of rows and a number of columns of the reference pixel in a prediction block of the target component.
In this possible example, when the number of rows and columns of the reference pixel in the prediction block of the target component is 1, the filter is set to filter X41.
In this possible example, the filter X41 is configured to filter a first boundary pixel region of a left-side adjacent reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side adjacent reconstructed block of the predicted block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes three consecutive, adjacent first boundary pixels, and a first boundary pixel in the middle of the three consecutive, adjacent first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes three consecutive, adjacent second boundary pixels, and a second boundary pixel in the middle of the three consecutive, adjacent second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter X41 comprises a first three-tap filter and a second three-tap filter;
the first three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1)) >> 2
wherein y is the coordinate of the current pixel and the value of y does not exceed the height range of the current coding block; P′_ref(0, y) is the final filtered reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, respectively.
The second three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0)) >> 2
wherein x is the coordinate of the current pixel and the value of x does not exceed the width range of the current coding block; P′_ref(x, 0) is the final filtered reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block, respectively.
It can be seen that in this example, for the luma component, when the number of rows and columns of the reference pixel in the prediction block of the target component is 1, the first filtering of the reference pixel can be implemented with a three-tap filter.
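As an illustration only, the sketch below applies the three-tap [1, 2, 1] >> 2 filter from the formulas above to a column of left-boundary reference samples. The function name and the clamping of the first and last samples are assumptions made for this example; the text does not specify how the ends of the boundary column are padded.

```python
# Hypothetical sketch of the three-tap first filtering: each boundary pixel is replaced
# by (1*above + 2*center + 1*below) >> 2, mirroring P'_ref(0, y) above.
def filter_left_boundary_3tap(ref_col):
    n = len(ref_col)
    out = []
    for y in range(n):
        above = ref_col[max(y - 1, 0)]        # P_ref(0, y-1), clamped at the top edge
        below = ref_col[min(y + 1, n - 1)]    # P_ref(0, y+1), clamped at the bottom edge
        out.append((1 * above + 2 * ref_col[y] + 1 * below) >> 2)
    return out
```

A flat column passes through unchanged, while an isolated peak is smoothed toward its neighbors, which is the intended low-pass behavior of the [1, 2, 1] window.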
In this possible example, when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the filter is set to the filter X42.
In this possible example, the filter X42 is configured to filter a first boundary pixel region of a left adjacent reconstructed block of a prediction block of a luma component of the current coding block and a second boundary pixel region of an upper adjacent reconstructed block of the prediction block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes six adjacent first boundary pixels of 3 rows and 2 columns, and a first boundary pixel in the middle of a 2 nd column of the six adjacent first boundary pixels is a first reference pixel of the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current coding block, and the second boundary pixel region includes six adjacent second boundary pixels of 2 rows and 3 columns, and a second boundary pixel in the middle of a second row of the six adjacent second boundary pixels is a second reference pixel of the reference pixels.
Wherein the filter X42 comprises a first six-tap filter and a second six-tap filter;
the first six-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 1×P_ref(-1, y-1) + 2×P_ref(-1, y) + 1×P_ref(-1, y+1)) >> 3
wherein y is the coordinate of the current pixel and the value of y does not exceed the height range of the current coding block; P′_ref(0, y) is the final filtered reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block; P_ref(-1, y-1), P_ref(-1, y) and P_ref(-1, y+1) are the original reconstructed samples of the row sub-boundary pixels in rows y-1, y and y+1, where a row sub-boundary pixel is the left neighboring pixel of the corresponding boundary pixel.
The second six-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 1×P_ref(x-1, -1) + 2×P_ref(x, -1) + 1×P_ref(x+1, -1)) >> 3
wherein x is the coordinate of the current pixel and the value of x does not exceed the width range of the current coding block; P′_ref(x, 0) is the final filtered reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current coding block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block; P_ref(x-1, -1), P_ref(x, -1) and P_ref(x+1, -1) are the original reconstructed samples of the column sub-boundary pixels in columns x-1, x and x+1, where a column sub-boundary pixel is the upper neighboring pixel of the corresponding boundary pixel.
It can be seen that in this example, for the luma component, when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the first filtering of the reference pixel can be implemented with a six-tap filter.
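For illustration, the six-tap variant above combines the [1, 2, 1] window over the boundary column P_ref(0, y) with the same window over the row sub-boundary column P_ref(-1, y); the six weights sum to 8, hence the shift by 3. The function name and end-clamping below are assumptions for this sketch, not part of the described method.

```python
# Hypothetical sketch of the six-tap first filtering of the left boundary:
# P'_ref(0,y) = (col0[y-1] + 2*col0[y] + col0[y+1]
#                + colm1[y-1] + 2*colm1[y] + colm1[y+1]) >> 3
def filter_left_boundary_6tap(col0, col_minus1):
    n = len(col0)
    out = []
    for y in range(n):
        ym1, yp1 = max(y - 1, 0), min(y + 1, n - 1)
        s = (1 * col0[ym1] + 2 * col0[y] + 1 * col0[yp1]                      # boundary column
             + 1 * col_minus1[ym1] + 2 * col_minus1[y] + 1 * col_minus1[yp1]) # sub-boundary column
        out.append(s >> 3)
    return out
```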
In this possible example, the target component comprises a chrominance component; the correlation information comprises the number of rows and the number of columns of the reference pixel in a prediction block of the target component.
In this possible example, when the number of rows and columns of the reference pixel in the prediction block of the target component is 1, the filter is set to filter Y41.
In this possible example, the filter Y41 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
Wherein the filter Y41 comprises a third three-tap filter and a fourth three-tap filter;
the third three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0)) >> 2
wherein x is the coordinate of the current pixel and the value of x does not exceed the width range of the current coding block; P′_ref(x, 0) is the final filtered reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block, respectively.
The fourth three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1)) >> 2
wherein y is the coordinate of the current pixel and the value of y does not exceed the height range of the current coding block; P′_ref(0, y) is the final filtered reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, respectively.
It can be seen that in this example, for the chroma component, when the number of rows and columns of the reference pixel in the prediction block of the target component is 1, the first filtering of the reference pixel can be implemented with a three-tap filter.
In this possible example, when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the filter is set to filter Y42.
In this possible example, the filter Y42 is configured to filter a first boundary pixel region of a left adjacent reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper adjacent reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes six adjacent first boundary pixels of 3 rows and 2 columns, and a first boundary pixel in the middle of a 2 nd column of the six adjacent first boundary pixels is a first reference pixel of the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current coding block, and the second boundary pixel region includes six adjacent second boundary pixels of 2 rows and 3 columns, and a second boundary pixel in the middle of a second row of the six adjacent second boundary pixels is a second reference pixel of the reference pixels.
Wherein the filter Y42 comprises a third six-tap filter and a fourth six-tap filter;
the third six-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 1×P_ref(x-1, -1) + 2×P_ref(x, -1) + 1×P_ref(x+1, -1)) >> 3
wherein x is the coordinate of the current pixel and the value of x does not exceed the width range of the current coding block; P′_ref(x, 0) is the final filtered reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block; P_ref(x-1, -1), P_ref(x, -1) and P_ref(x+1, -1) are the original reconstructed samples of the column sub-boundary pixels in columns x-1, x and x+1, where a column sub-boundary pixel is the upper neighboring pixel of the corresponding boundary pixel.
The fourth six-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 1×P_ref(-1, y-1) + 2×P_ref(-1, y) + 1×P_ref(-1, y+1)) >> 3
wherein y is the coordinate of the current pixel and the value of y does not exceed the height range of the current coding block; P′_ref(0, y) is the final filtered reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current coding block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block; P_ref(-1, y-1), P_ref(-1, y) and P_ref(-1, y+1) are the original reconstructed samples of the row sub-boundary pixels in rows y-1, y and y+1, where a row sub-boundary pixel is the left neighboring pixel of the corresponding boundary pixel.
It can be seen that in this example, for the chroma component, when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the first filtering of the reference pixel can be implemented with a six-tap filter.
In an alternative example, applied in the scenario of a vertical-class intra prediction direction of the target component, the second filtering of the prediction block of the target component according to the filtered reference pixel comprises:
Second filtering the prediction block of the target component using a two-tap filter,
P′(x,y)=f(x)·P(-1,y)+(1-f(x))·P(x,y)
wherein x and y are the coordinates of the current pixel, x does not exceed the width range of the current coding block and y does not exceed the height range of the current coding block; P'(x, y) is the final prediction sample of pixel (x, y) of the prediction block of the target component of the current coding block; P(-1, y) is the first-filtered final reconstructed sample of the boundary pixel in row y; f(x) is the horizontal filter coefficient of pixel (x, y) relative to the reference pixel P(-1, y); and P(x, y) is the original prediction sample of pixel (x, y).
For example, as shown in fig. 12D, in the 4 × 4 pixel array, taking the pixels in the prediction block of the chroma component and the left-boundary adjacent pixels as an example: first, pixel A and left-boundary adjacent pixel 1 are filtered using the first two-tap filter to form pixel a of the corrected prediction block of the chroma component; in the horizontal direction, pixel B and left-boundary adjacent pixel 1 are filtered using the first two-tap filter to form pixel b of the corrected prediction block; and so on for the other columns, until pixel P and left-boundary adjacent pixel 4 are filtered using the first two-tap filter to form pixel p of the corrected prediction block of the chroma component.
Wherein the horizontal filter coefficients are determined by a first set of parameters comprising a size of a prediction block of the target component and a distance between pixel (x, y) and pixel P (-1, y).
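For illustration, the vertical-class correction above blends each prediction sample with the first-filtered left-boundary sample of its row, weighted by the horizontal coefficient f(x). The sketch below is a minimal assumption-laden rendering: the coefficient array `f` holds placeholder values, since the real coefficients come from Table 1 and depend on block size and distance.

```python
# Hypothetical sketch of P'(x,y) = f(x)*P(-1,y) + (1 - f(x))*P(x,y).
# pred[y][x]: original prediction samples; left_ref[y]: first-filtered P(-1, y);
# f[x]: placeholder horizontal filter coefficients (not the normative Table 1 values).
def correct_vertical_class(pred, left_ref, f):
    h, w = len(pred), len(pred[0])
    return [[f[x] * left_ref[y] + (1 - f[x]) * pred[y][x]
             for x in range(w)]
            for y in range(h)]
```

With f(x) = 1 a column is replaced by the boundary sample, and with f(x) = 0 it is left untouched, matching the intuition that the correction strength decays with distance from the left boundary.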
In an alternative example, applied in the scenario of a horizontal-class intra prediction direction of the luma component, the second filtering of the prediction block of the target component according to the filtered reference pixels comprises:
the prediction block of the luminance component is second filtered using a two-tap filter,
P′(x,y)=f(y)·P(x,-1)+(1-f(y))·P(x,y)
wherein x, y is the coordinate of the current pixel, x does not exceed the wide range of the current coding block, y does not exceed the high range of the current coding block, P' (x, y) is the final prediction sample of the pixel (x, y) of the prediction block of the luma component of the current coding block, P (x, -1) is the final reconstructed sample of the boundary pixel located in x columns after the first filtering, f (y) is the vertical filtering coefficient of the pixel (x, y) relative to the reference pixel P (x, -1), and P (x, y) is the original prediction sample of the pixel (x, y).
For example, as shown in fig. 12E, in the 4 × 4 pixel array, taking the pixels in the prediction block of the chroma component and the upper-boundary adjacent pixels as an example: first, pixel A and upper-boundary adjacent pixel 1 are filtered using the second two-tap filter to form pixel a of the corrected prediction block of the chroma component; in the vertical direction, pixel E and upper-boundary adjacent pixel 1 are filtered using the second two-tap filter to form pixel e of the corrected prediction block; and so on for the other columns, until pixel P and upper-boundary adjacent pixel 4 are filtered using the second two-tap filter to form pixel p of the corrected prediction block of the chroma component.
Wherein the vertical filter coefficients are determined by a second set of parameters comprising a size of a prediction block of the chroma component and a distance between pixel (x, y) and pixel P (x, -1).
In an alternative example, applied in the scenario of a non-angular intra prediction direction of the luma component, the second filtering of the prediction block of the target component according to the filtered reference pixels comprises:
second filtering the prediction block of the target component using a three-tap filter,
P′(x,y)=f(x)·P(-1,y)+f(y)·P(x,-1)+(1-f(x)-f(y))·P(x,y)
where x, y is coordinates of the current pixel, x does not exceed a wide range of values of the current coding block, y does not exceed a high range of values of the current coding block, P' (x, y) is a final predicted sample of a pixel (x, y) of a prediction block of a luma component of the current coding block, P (-1, y) is a first filtered final reconstructed sample of a boundary pixel located in a row y, P (x, -1) is a first filtered final reconstructed sample of a boundary pixel located in a column x, f (x) is a horizontal filter coefficient of the pixel (x, y) with respect to a reference pixel P (-1, y), f (y) is a vertical filter coefficient of the pixel (x, y) with respect to the reference pixel P (x, -1), and P (x, y) is an original predicted sample of the pixel (x, y).
For example, as shown in fig. 12F, in the 4 × 4 pixel array, taking the pixels in the prediction block of the chroma component, the upper-boundary adjacent pixels and the left-boundary adjacent pixels as an example: first, pixel A, upper-boundary adjacent pixel 1 and left-boundary adjacent pixel 5 are filtered using the first three-tap filter to form pixel a of the corrected prediction block of the chroma component; in the vertical direction, pixel E, upper-boundary adjacent pixel 1 and left-boundary adjacent pixel 6 are filtered using the first three-tap filter to form pixel e of the corrected prediction block; and so on for the other columns, until pixel P, upper-boundary adjacent pixel 4 and left-boundary adjacent pixel 8 are filtered using the first three-tap filter to form pixel p of the corrected prediction block of the chroma component.
Wherein the horizontal filter coefficients are determined by a first set of parameters comprising a size of a prediction block of the target component and a distance between pixel (x, y) and pixel P (-1, y); the vertical filter coefficients are determined by a second set of parameters comprising the size of a prediction block of the target component and the distance between pixel (x, y) and pixel P (x, -1).
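For illustration, the non-angular case blends each prediction sample with both the left-boundary sample of its row and the upper-boundary sample of its column. The sketch below uses placeholder coefficient arrays `fx` and `fy` standing in for the Table 1 values; the function name is an assumption for this example.

```python
# Hypothetical sketch of the non-angular three-tap second filtering:
# P'(x,y) = f(x)*P(-1,y) + f(y)*P(x,-1) + (1 - f(x) - f(y))*P(x,y)
def correct_non_angular(pred, left_ref, top_ref, fx, fy):
    h, w = len(pred), len(pred[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            row.append(fx[x] * left_ref[y] + fy[y] * top_ref[x]
                       + (1 - fx[x] - fy[y]) * pred[y][x])
        out.append(row)
    return out
```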
The value of the filter coefficient is related to the size of a prediction block of a target component of a current coding block and the distance between a prediction pixel and a reference pixel in the prediction block of the target component.
Specifically, the prediction blocks of the target components of the coding blocks are divided into different filter coefficient groups according to the sizes of the prediction blocks, the corresponding filter coefficient group is selected according to the size of the prediction block of the target component of the current coding block, the distance from the current predicted pixel to the reference pixel is used as an index value, and the corresponding filter coefficient is selected from the corresponding filter coefficient group. The intra target component prediction filter coefficients are specifically shown in table 1, and it should be noted that all the coefficients in the table can be amplified and shifted in a specific encoding process to reduce the computational complexity.
TABLE 1 Intra target component prediction filter coefficients
In addition, the filter coefficients of this technique can reduce coefficient storage by coefficient truncation: all pixels whose distance from the reference pixel is greater than 10 share the same filter coefficient.
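For illustration, coefficient selection with truncation can be sketched as a lookup into a per-block-size coefficient group, indexed by the distance between the predicted pixel and the reference pixel, with every distance beyond 10 reusing the last stored entry. The function name and the group contents are placeholders, not the Table 1 values.

```python
# Hypothetical sketch of coefficient selection with truncation at distance > 10.
# coeff_group: one filter-coefficient group, already chosen by prediction-block size;
# index 0..10 covers distances 0..10, and larger distances collapse onto index 10.
def select_coefficient(coeff_group, distance):
    return coeff_group[min(distance, 10)]
```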
In one possible example, before the second filtering of the prediction block of the target component according to the filtered reference pixels, the method further comprises: calculating a first rate distortion cost of the current coding block under the unmodified condition and calculating a second rate distortion cost of the current coding block under the modified condition; determining that the first rate-distortion cost is greater than the second rate-distortion cost.
In this possible example, the method further comprises: determining that the first rate-distortion cost is less than or equal to the second rate-distortion cost, and setting a chroma correction flag bit to a second value, wherein the second value indicates that prediction correction is not needed.
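The encoder-side decision above can be sketched as follows. The Lagrangian cost model J = D + λ·R is an assumption for this example (the text only requires comparing the two costs); the flag is set to the "apply correction" value only when the uncorrected cost is strictly greater than the corrected cost.

```python
# Hypothetical sketch of the rate-distortion decision for the chroma correction flag.
def rd_cost(distortion, rate, lam):
    # Classic Lagrangian cost J = D + lambda * R (an assumed cost model).
    return distortion + lam * rate

def chroma_correction_flag(d_plain, r_plain, d_corr, r_corr, lam):
    # 1 -> apply prediction correction; 0 -> the "second value", correction not needed.
    return 1 if rd_cost(d_plain, r_plain, lam) > rd_cost(d_corr, r_corr, lam) else 0
```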
In this possible example, the chroma correction flag is shared with the luma correction flag.
In this possible example, the chroma correction flag is used independently.
In a specific implementation, after the prediction block of the current coding block after the correction of the chroma component is determined, the device may further calculate a reconstructed block of the chroma component, and determine a reconstructed image block of the current coding block according to the reconstructed block of the chroma component and the reconstructed block of the luma component.
In one possible example, the linear model applied to the whole current coding block may be replaced by a linear model applied line by line.
In one possible example, a flag-bit representation is added to the existing protocol: each chroma component is indicated by its own flag bit as to whether the prediction correction technique is used.
In one possible example, the prediction correction technique for the chroma components of the present application may be used for only some of the chroma component prediction modes.
In one possible example, whether to skip the prediction correction technique in advance or to use it directly may be determined according to the prediction-correction usage information of the neighboring coding blocks of the current coding block.
Therefore, compared with the prior art, the prediction sample of the chroma component of the current coding block is corrected by using the spatial correlation between the adjacent coding block and the current coding block in the scheme of the application, so that the prediction accuracy and the coding efficiency are improved.
Fig. 13 is a flowchart illustrating an image decoding method according to an embodiment of the present application, corresponding to the image encoding method illustrated in fig. 12A. The image decoding method can be applied to the destination device 20 in the video decoding system 1 illustrated in fig. 9 or the video decoder 200 illustrated in fig. 11. The flow shown in fig. 13 is described taking the video decoder 200 shown in fig. 11 as the execution subject. As shown in fig. 13, the image decoding method provided in the embodiment of the present application includes:
step 210, parsing the code stream, and determining an intra prediction mode of a target component of the current decoding block, where the target component includes a luminance component or a chrominance component.
In this possible example, the determination of the prediction block of the target component is detailed in the foregoing description of various intra prediction processes of the luminance component and the chrominance component, and will not be described again here.
Step 220, determining a prediction block of the target component of the current decoded block according to the intra prediction mode of the target component.
Step 230, performing a first filtering on the reference pixel used for correcting the prediction block according to the intra prediction mode of the target component to obtain a filtered reference pixel.
And 240, performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
Wherein, the reference pixel refers to a boundary pixel close to and/or next to the current decoding block in a left or upper side adjacent decoding block of the current decoding block. The details will be described below.
In the filtering of the prediction samples for the luminance component:
if the luminance component prediction of the current decoding block only selects the upper luminance boundary pixel as the reference pixel to calculate the prediction sample, filtering the prediction sample of the luminance component of the current decoding block by using the left luminance boundary pixel as the reference pixel;
if the luminance component prediction of the current decoding block only selects the left luminance boundary pixel as the reference pixel to calculate the prediction sample, filtering the prediction sample of the luminance component of the current decoding block by using the upper luminance boundary pixel as the reference pixel;
And if the luminance component prediction of the current decoding block adopts the upper side and the left side luminance boundary pixels as the reference pixels or the current luminance decoding block is in a non-angle prediction mode, filtering the prediction sample of the luminance component of the current decoding block by adopting the upper side and the left side luminance boundary pixels as the reference pixels.
In the filtering of prediction samples computed by the chroma component normal intra prediction mode:
if the current decoding block only selects the upper side chroma boundary pixel as the reference pixel to calculate the prediction sample, filtering the prediction sample of the chroma component of the current decoding block by adopting the left side chroma boundary pixel as the reference pixel;
if the current decoding block only selects the left side chroma boundary pixel as the reference pixel to calculate the prediction sample, filtering the prediction sample of the chroma component of the current decoding block by adopting the upper side chroma boundary pixel as the reference pixel;
and if the current decoding block adopts a chroma component common intra-frame non-angle prediction mode, filtering prediction samples of the chroma component of the current decoding block by adopting an upper chroma boundary pixel and a left chroma boundary pixel as reference pixels.
In the filtering of prediction samples computed by the chroma component cross-component prediction mode:
If the current decoding block only selects the upper-side luma boundary pixels and the upper-side chroma boundary pixels to calculate the coefficients of the linear model, filtering the prediction samples of the chroma component of the current decoding block by adopting the left-side chroma boundary pixels as reference pixels;
and if the current decoding block only selects the left luminance boundary pixel and the left chrominance boundary pixel to calculate the coefficient of the linear model, filtering the prediction sample of the chrominance component of the current decoding block by adopting the upper chrominance boundary pixel as a reference pixel.
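The selection rules listed above (for luma, chroma common intra, and chroma cross-component prediction alike) share one pattern: the prediction samples are filtered from the boundary side(s) the prediction itself did not exclusively use, and from both sides when both were used or the mode is non-angular. A minimal sketch of that dispatch, with the function name and return form assumed for this example:

```python
# Hypothetical sketch of the reference-side selection rules above.
# used_top / used_left: which boundary sides the prediction (or linear-model fit) used.
def reference_sides_for_filtering(used_top, used_left, non_angular=False):
    if non_angular or (used_top and used_left):
        return ("top", "left")   # filter with both upper and left boundary pixels
    if used_top:
        return ("left",)         # prediction used only the top -> filter from the left
    if used_left:
        return ("top",)          # prediction used only the left -> filter from the top
    return ()
```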
In this possible example, the first filtering, according to the intra prediction mode of the target component, the reference pixel used for modifying the prediction block to obtain a filtered reference pixel includes: determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component; determining a filter according to association information of a prediction block of the target component, the association information including at least one of: the intra-prediction mode of the target component, the distance between the reference pixel and the currently processed pixel, and the number of rows and columns of the reference pixel in a prediction block of the target component; first filtering the reference pixel using the filter.
In this possible example, the target component comprises a luminance component; the associated information includes an intra prediction mode of the target component and a distance of the reference pixel from a currently processed pixel.
In this possible example, the determining a filter according to the association information of the prediction block of the target component comprises: when the intra prediction mode of the luma component is an intra vertical angle-like prediction mode and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter X11.
In this possible example, the filter X11 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a luma component of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, the boundary pixel region including three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter X11 comprises a first three-tap filter;
the first three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 2) >> 2
wherein y is the coordinate of the current pixel and the value of y does not exceed the height range of the current decoding block; P′_ref(0, y) is the final filtered reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the currently decoded block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, respectively.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode of the luma component is an intra horizontal-like angle prediction mode and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter X12.
In this possible example, the filter X12 is configured to filter a boundary pixel region of an upper neighboring reconstruction block of a prediction block of a luma component of the current decoded block, the boundary pixel region being a pixel region near an upper boundary of the current decoded block, the boundary pixel region including three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter X12 comprises a second three-tap filter;
the second three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 2) >> 2
wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0), and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, and x+1, respectively, of the upper neighboring reconstructed block.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode of the luma component is an intra vertical angle-like prediction mode and the distance between the reference pixel and the currently processed pixel is in the second distance range, the filter is set to filter X13.
In this possible example, the filter X13 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a luma component of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, and the boundary pixel region including five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter X13 comprises a first five-tap filter;
the first five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2) + 4) >> 3
wherein y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode of the luma component is an intra horizontal-like angle prediction mode and the distance between the reference pixel and the currently processed pixel is in the second distance range, the filter is set to filter X14.
In this possible example, the filter X14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a luma component of the current decoded block, the boundary pixel region being a pixel region near an upper boundary of the current decoded block, the boundary pixel region including five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter X14 comprises a second five-tap filter;
the second five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0) + 4) >> 3
wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
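The five-tap variant used for the wider (second) distance range can likewise be sketched in integer arithmetic; the array name `top_row`, holding the row of reconstructed boundary samples, is hypothetical:

```python
def five_tap_filter(top_row, x):
    """[1, 2, 2, 2, 1] / 8 smoothing of the boundary pixel at column x,
    with a rounding offset of 4 followed by >> 3 (divide by 8)."""
    return (1 * top_row[x - 2] + 2 * top_row[x - 1] + 2 * top_row[x]
            + 2 * top_row[x + 1] + 1 * top_row[x + 2] + 4) >> 3

# Example: the wider support smooths a step more gradually than the three-tap filter.
row = [100, 100, 100, 200, 200, 200]
print(five_tap_filter(row, 2))  # (100 + 200 + 200 + 400 + 200 + 4) >> 3 = 138
```

The broader kernel is consistent with the text's use of a stronger filter when the reference pixel is farther from the currently processed pixel.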
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode for the luma component is intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter X15.
In this possible example, the filter X15 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, and the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter X15 comprises a first three-tap filter and a second three-tap filter;
the first three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 2) >> 2
wherein y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-1), P_ref(0, y), and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y, and y+1, respectively, of the left neighboring reconstructed block;
the second three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 2) >> 2
wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0), and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, and x+1, respectively, of the upper neighboring reconstructed block.
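For the non-angular case the filter X15 applies the first three-tap filter down the left boundary column and the second three-tap filter along the upper boundary row. A minimal sketch, assuming hypothetical array names and (as an assumption not specified here) that the endpoint pixels of each boundary are left unfiltered:

```python
def filter_both_boundaries(left_col, top_row):
    """Apply the [1, 2, 1] / 4 filter to every interior boundary pixel of
    the left column and the top row; the endpoints are copied through
    unfiltered (an assumption for this sketch)."""
    f = lambda a, i: (a[i - 1] + 2 * a[i] + a[i + 1] + 2) >> 2
    new_left = left_col[:1] + [f(left_col, i) for i in range(1, len(left_col) - 1)] + left_col[-1:]
    new_top = top_row[:1] + [f(top_row, i) for i in range(1, len(top_row) - 1)] + top_row[-1:]
    return new_left, new_top

left, top = filter_both_boundaries([100, 100, 200, 200], [50, 50, 50, 50])
print(left)  # [100, 125, 175, 200]
print(top)   # [50, 50, 50, 50]  (constant input is unchanged)
```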
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode for the luma component is intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a second distance range, the filter is set to filter X16.
In this possible example, the filter X16 is configured to filter a first boundary pixel region of a left-side neighboring reconstruction block of a prediction block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstruction block of the prediction block of the luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
Wherein the filter X16 comprises a first five-tap filter and a second five-tap filter;
the first five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2) + 4) >> 3
wherein y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block;
the second five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0) + 4) >> 3
wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
In this possible example, the target component comprises a chrominance component; the associated information includes an intra prediction mode of the target component and a distance of the reference pixel from a currently processed pixel.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode of the chroma component is an intra vertical-like angle prediction mode, or the two-step cross-component prediction mode TSCPM_T, or the cross-component linear model prediction mode CCLM_A, or the multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter Y11.
In this possible example, the filter Y11 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a chroma component of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, the boundary pixel region including three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter Y11 comprises a third three-tap filter;
the third three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 2) >> 2
wherein y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-1), P_ref(0, y), and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y, and y+1, respectively, of the left neighboring reconstructed block.
In this possible example, the determining a filter according to the association information of the prediction block of the target component comprises: when the intra prediction mode of the chroma component is an intra horizontal-like angle prediction mode, or the two-step cross-component prediction mode TSCPM_L, or the cross-component linear model prediction mode CCLM_L, or the multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter Y12.
In this possible example, the filter Y12 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current decoded block, where the boundary pixel region is a pixel region near an upper boundary of the current decoded block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter Y12 comprises a fourth three-tap filter;
the fourth three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 2) >> 2
wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0), and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, and x+1, respectively, of the upper neighboring reconstructed block.
In this possible example, the determining a filter according to the association information of the prediction block of the target component comprises: when the intra prediction mode of the chroma component is an intra vertical-like angle prediction mode, or the two-step cross-component prediction mode TSCPM_T, or the cross-component linear model prediction mode CCLM_A, or the multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a second distance range, the filter is set to filter Y13.
In this possible example, the filter Y13 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of chroma components of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, and the boundary pixel region including five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter Y13 comprises a third five-tap filter;
the third five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2) + 4) >> 3
wherein y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode of the chroma component is an intra horizontal-like angle prediction mode, or the two-step cross-component prediction mode TSCPM_L, or the cross-component linear model prediction mode CCLM_L, or the multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a second distance range, the filter is set to filter Y14.
In this possible example, the filter Y14 is configured to filter a boundary pixel region of an upper neighboring reconstruction block of a prediction block of chroma components of the current decoded block, the boundary pixel region being a pixel region near an upper boundary of the current decoded block, the boundary pixel region including five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter Y14 comprises a fourth five-tap filter;
the fourth five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0) + 4) >> 3
Wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
In this possible example, the determining a filter according to the association information of the prediction block of the target component comprises: when the intra prediction mode for the chroma component is normal intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter Y15.
In this possible example, the filter Y15 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter Y15 comprises a third three-tap filter and a fourth three-tap filter;
the third three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 2) >> 2
wherein y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-1), P_ref(0, y), and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y, and y+1, respectively, of the left neighboring reconstructed block;
the fourth three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 2) >> 2
wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0), and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, and x+1, respectively, of the upper neighboring reconstructed block.
In this possible example, the determining a filter according to the correlation information of the prediction block of the target component includes: when the intra prediction mode for the chroma component is normal intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter Y16.
In this possible example, the filter Y16 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter Y16 comprises a third five-tap filter and a fourth five-tap filter;
the third five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2) + 4) >> 3
wherein y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block;
the fourth five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0) + 4) >> 3
wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
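Taken together, the preceding examples map the chroma intra prediction mode and the distance range to one of the filters Y11 through Y16. A minimal sketch of that dispatch; the function name and the plain mode strings ("VERTICAL", "HORIZONTAL", "DC") are illustrative stand-ins, not normative identifiers:

```python
def select_chroma_filter(mode, distance_range):
    """Return the filter label described in the text for a given chroma
    intra prediction mode and distance range (1 = first, 2 = second)."""
    vertical_like = {"VERTICAL", "TSCPM_T", "CCLM_A", "MCPM_T"}
    horizontal_like = {"HORIZONTAL", "TSCPM_L", "CCLM_L", "MCPM_L"}
    if mode in vertical_like:
        return "Y11" if distance_range == 1 else "Y13"
    if mode in horizontal_like:
        return "Y12" if distance_range == 1 else "Y14"
    # non-angular modes filter both the left and upper boundaries
    return "Y15" if distance_range == 1 else "Y16"

print(select_chroma_filter("TSCPM_T", 1))  # Y11
print(select_chroma_filter("CCLM_L", 2))   # Y14
```

The first distance range selects the three-tap filters and the second distance range selects the five-tap filters, reflecting the stronger smoothing applied at larger reference distances.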
In this possible example, the target component includes a luminance component; the associated information includes an intra prediction mode of the target component.
In this possible example, when the intra prediction mode of the luminance component is the intra vertical type angle prediction mode, the filter is set to the filter X21.
In this possible example, the filter X21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a luma component of the current decoded block, where the boundary pixel region is a pixel region near an upper boundary of the current decoded block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter X21 comprises a first four-tap filter;
the first four-tap filter comprises:
P′_ref(0, y) = (23×P_ref(0, y-1) + 82×P_ref(0, y) + 21×P_ref(0, y+1) + 2×P_ref(0, y+2)) >> 7
wherein y is the row coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample, after filtering, of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1), and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-1, y, y+1, and y+2, respectively, of the left neighboring reconstructed block.
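Unlike the symmetric three- and five-tap filters, this four-tap filter uses asymmetric weights that sum to 128, so the >> 7 shift renormalizes the result without an explicit rounding offset. A sketch with a hypothetical `samples` array:

```python
def four_tap_filter(samples, y, weights=(23, 82, 21, 2)):
    """Weighted four-tap filter over samples at y-1, y, y+1, y+2.
    The weights sum to 128, so >> 7 renormalizes the weighted sum."""
    assert sum(weights) == 128
    taps = (samples[y - 1], samples[y], samples[y + 1], samples[y + 2])
    return sum(w * s for w, s in zip(weights, taps)) >> 7

# A constant signal passes through unchanged, confirming the weights normalize.
print(four_tap_filter([100] * 5, 1))            # 100
print(four_tap_filter([100, 100, 200, 200], 1)) # 117
```

The same function with `weights=(11, 75, 40, 2)` would model the second four-tap filter given below.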
In this possible example, when the intra prediction mode of the luminance component is an intra horizontal class angle prediction mode, the filter is set to the filter X22.
In this possible example, the filter X22 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a luma component of the current decoded block, where the boundary pixel region is a pixel region near a left boundary of the current decoded block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
Wherein the filter X22 comprises a second four-tap filter;
the second four-tap filter comprises:
P′_ref(x, 0) = (11×P_ref(x-1, 0) + 75×P_ref(x, 0) + 40×P_ref(x+1, 0) + 2×P_ref(x+2, 0)) >> 7
wherein x is the column coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample, after filtering, of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0), and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, x+1, and x+2, respectively, of the upper neighboring reconstructed block.
In this possible example, when the intra prediction mode of the luminance component is an intra non-angular prediction mode, the filter is set to the filter X23.
In this possible example, the filter X23 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position among the four consecutive, neighboring first boundary pixels is a first reference pixel among the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position among the four consecutive, neighboring second boundary pixels is a second reference pixel among the reference pixels.
Wherein the filter X23 comprises a third four-tap filter and a fourth four-tap filter;
the third four-tap filter comprises:
P′_ref(0, y) = (32×P_ref(0, y-1) + 64×P_ref(0, y) + 32×P_ref(0, y+1) + 0×P_ref(0, y+2)) >> 7
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1) and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
The fourth four-tap filter comprises:
P′_ref(x, 0) = (32×P_ref(x-1, 0) + 64×P_ref(x, 0) + 32×P_ref(x+1, 0) + 0×P_ref(x+2, 0)) >> 7
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0) and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
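Because the fourth weight of the non-angular filter above is zero, the (32, 64, 32, 0) >> 7 filter is arithmetically identical to a symmetric [1, 2, 1] / 4 smoother. A small sketch of this equivalence follows; the helper names and the clamped edge handling are illustrative assumptions, not from the patent.

```python
def non_angular_filter(samples, i):
    """Four-tap (32, 64, 32, 0) filter of X23/Y23, normalized by >> 7."""
    n = len(samples)
    s = lambda d: samples[min(max(i + d, 0), n - 1)]  # clamp at the edges (assumed)
    return (32 * s(-1) + 64 * s(0) + 32 * s(1) + 0 * s(2)) >> 7

def three_tap_121(samples, i):
    """Equivalent [1, 2, 1] / 4 form: 32*(a + 2b + c) >> 7 == (a + 2b + c) >> 2."""
    n = len(samples)
    s = lambda d: samples[min(max(i + d, 0), n - 1)]
    return (s(-1) + 2 * s(0) + s(1)) >> 2
```

The equality holds exactly for integers, since 32·(a + 2b + c) >> 7 factors the common 32 out of the shift by 7.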
In this possible example, the target component comprises a chrominance component; the associated information includes an intra prediction mode of the target component.
In this possible example, when the intra prediction mode of the chroma component is the intra vertical-type angular prediction mode, the filter is set to the filter Y21.
In this possible example, the filter Y21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block, where the boundary pixel region is a pixel region near the upper boundary of the current decoded block, the boundary pixel region includes four consecutive neighboring boundary pixels, and the boundary pixel at the second position among the four consecutive neighboring boundary pixels is the reference pixel.
Wherein the filter Y21 comprises a fifth four-tap filter;
the fifth four-tap filter comprises:
P′_ref(0, y) = (23×P_ref(0, y-1) + 82×P_ref(0, y) + 21×P_ref(0, y+1) + 2×P_ref(0, y+2)) >> 7
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1) and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
In this possible example, when the intra prediction mode of the chroma component is an intra horizontal-type angle prediction mode, the filter is set to the filter Y22.
In this possible example, the filter Y22 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a chroma component of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, the boundary pixel region including four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels being the reference pixel.
Wherein the filter Y22 comprises a sixth four-tap filter;
the sixth four-tap filter comprises:
P′_ref(x, 0) = (11×P_ref(x-1, 0) + 75×P_ref(x, 0) + 40×P_ref(x+1, 0) + 2×P_ref(x+2, 0)) >> 7
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0) and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
In this possible example, when the intra prediction mode of the chrominance component is the intra non-angular prediction mode, the filter is set to the filter Y23.
In this possible example, the filter Y23 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position among the four consecutive, neighboring first boundary pixels is a first reference pixel among the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position among the four consecutive, neighboring second boundary pixels is a second reference pixel among the reference pixels.
Wherein the filter Y23 comprises a seventh four-tap filter and an eighth four-tap filter;
the seventh four-tap filter comprises:
P′_ref(0, y) = (32×P_ref(0, y-1) + 64×P_ref(0, y) + 32×P_ref(0, y+1) + 0×P_ref(0, y+2)) >> 7
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1) and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
The eighth four-tap filter comprises:
P′_ref(x, 0) = (32×P_ref(x-1, 0) + 64×P_ref(x, 0) + 32×P_ref(x+1, 0) + 0×P_ref(x+2, 0)) >> 7
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0) and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
In this possible example, the target component includes a luminance component; the correlation information includes a distance of the reference pixel from a currently processed pixel.
In this possible example, the filter is set to filter X31 when the distance of the reference pixel from the currently processed pixel is in a first distance range.
In this possible example, the filter X31 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter X31 comprises a first three-tap filter and a second three-tap filter;
the first three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1)) >> 2
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the filtered final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, respectively.
The second three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0)) >> 2
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block, respectively.
In this possible example, the filter is set to filter X32 when the distance of the reference pixel from the currently processed pixel is in the second distance range.
In this possible example, the filter X32 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
Wherein the filter X32 comprises a first five-tap filter and a second five-tap filter;
the first five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2)) >> 3
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1) and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
The second five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0)) >> 3
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0) and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
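The distance-dependent choice between the 3-tap (1, 2, 1) / 4 filter (X31, first distance range) and the 5-tap (1, 2, 2, 2, 1) / 8 filter (X32, second distance range) can be sketched as below. The numeric boundary between the two distance ranges is not given in this passage, so the `threshold` value is purely hypothetical, as are the function name and the clamped edge handling.

```python
def boundary_filter(samples, i, distance, threshold=4):
    """Select the 3-tap or 5-tap boundary filter by reference-pixel distance.

    `threshold` separating the first and second distance ranges is a
    hypothetical value; the text does not state the numeric ranges.
    """
    n = len(samples)
    s = lambda d: samples[min(max(i + d, 0), n - 1)]  # clamp at the edges (assumed)
    if distance < threshold:
        # first distance range -> 3-tap (1, 2, 1) >> 2
        return (s(-1) + 2 * s(0) + s(1)) >> 2
    # second distance range -> 5-tap (1, 2, 2, 2, 1) >> 3
    return (s(-2) + 2 * s(-1) + 2 * s(0) + 2 * s(1) + s(2)) >> 3
```

Both coefficient sets sum to a power of two (4 and 8), so each branch is again a weighted average; the 5-tap branch simply smooths over a wider support for more distant reference pixels.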
In this possible example, the target component comprises a chrominance component; the associated information includes a distance of the reference pixel from a currently processed pixel.
In this possible example, the filter is set to filter Y31 when the distance of the reference pixel from the currently processed pixel is in a first distance range.
In this possible example, the filter Y31 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
Wherein the filter Y31 comprises a third three-tap filter and a fourth three-tap filter;
the third three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1)) >> 2
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the filtered final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, respectively.
The fourth three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0)) >> 2
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block, respectively.
In this possible example, the filter is set to filter Y32 when the distance of the reference pixel from the currently processed pixel is in the second distance range.
In this possible example, the filter Y32 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
The filter Y32 comprises a third five-tap filter and a fourth five-tap filter;
the third five-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-2) + 2×P_ref(0, y-1) + 2×P_ref(0, y) + 2×P_ref(0, y+1) + 1×P_ref(0, y+2)) >> 3
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-2), P_ref(0, y-1), P_ref(0, y), P_ref(0, y+1) and P_ref(0, y+2) are the original reconstructed samples of the boundary pixels in rows y-2, y-1, y, y+1 and y+2 of the left neighboring reconstructed block, respectively.
The fourth five-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-2, 0) + 2×P_ref(x-1, 0) + 2×P_ref(x, 0) + 2×P_ref(x+1, 0) + 1×P_ref(x+2, 0)) >> 3
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-2, 0), P_ref(x-1, 0), P_ref(x, 0), P_ref(x+1, 0) and P_ref(x+2, 0) are the original reconstructed samples of the boundary pixels in columns x-2, x-1, x, x+1 and x+2 of the upper neighboring reconstructed block, respectively.
In this possible example, the target component comprises a luminance component; the correlation information comprises a number of rows and a number of columns of the reference pixel in a prediction block of the target component.
In this possible example, when the number of rows and columns of the reference pixel in the prediction block of the target component is 1, the filter is set to filter X41.
In this possible example, the filter X41 is configured to filter a first boundary pixel region of a left-side neighboring reconstruction block of a prediction block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstruction block of the prediction block of the luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
Wherein the filter X41 comprises a first three-tap filter and a second three-tap filter;
the first three-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1)) >> 2
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the filtered final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, respectively.
The second three-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0)) >> 2
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block, respectively.
In this possible example, when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the filter is set to the filter X42.
In this possible example, the filter X42 is configured to filter a first boundary pixel region of a left neighboring reconstructed block of the prediction block of the luma component of the current decoded block and a second boundary pixel region of an upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block. The first boundary pixel region is a pixel region near the left boundary of the current decoded block and includes six neighboring first boundary pixels arranged in 3 rows and 2 columns, where the middle first boundary pixel of the second column among the six neighboring first boundary pixels is a first reference pixel among the reference pixels. The second boundary pixel region is a pixel region near the upper boundary of the current decoded block and includes six neighboring second boundary pixels arranged in 2 rows and 3 columns, where the middle second boundary pixel of the second row among the six neighboring second boundary pixels is a second reference pixel among the reference pixels.
Wherein the filter X42 comprises a first six-tap filter and a second six-tap filter;
the first six-tap filter comprises:
P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 1×P_ref(-1, y-1) + 2×P_ref(-1, y) + 1×P_ref(-1, y+1)) >> 3
wherein y is the coordinate of the current pixel, and the value of y does not exceed the height of the current decoded block; P′_ref(0, y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block; P_ref(-1, y-1), P_ref(-1, y) and P_ref(-1, y+1) are the original reconstructed samples of the row sub-boundary pixels in rows y-1, y and y+1 of the left neighboring reconstructed block, where a row sub-boundary pixel is the left neighboring pixel of the corresponding boundary pixel.
The second six-tap filter comprises:
P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 1×P_ref(x-1, -1) + 2×P_ref(x, -1) + 1×P_ref(x+1, -1)) >> 3
wherein x is the coordinate of the current pixel, and the value of x does not exceed the width of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the luma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block; P_ref(x-1, -1), P_ref(x, -1) and P_ref(x+1, -1) are the original reconstructed samples of the column sub-boundary pixels in columns x-1, x and x+1 of the upper neighboring reconstructed block, where a column sub-boundary pixel is the upper neighboring pixel of the corresponding boundary pixel.
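The six-tap filter of X42 can be read as a [1, 2, 1] response on the boundary line plus a [1, 2, 1] response on the adjacent sub-boundary line, with the sum of weights (8) absorbed by the shift of 3. A minimal sketch follows; representing the two lines as separate equal-length lists, the function name, and the clamped edge handling are assumptions for illustration.

```python
def six_tap_filter(boundary, sub_boundary, i):
    """Six-tap filter of X42: [1, 2, 1] over the boundary line plus
    [1, 2, 1] over the adjacent sub-boundary line, normalized by >> 3.

    boundary, sub_boundary: equal-length lists of original reconstructed
    samples; edge neighbors are clamped to the nearest valid index (assumed).
    """
    n = len(boundary)
    s = lambda line, d: line[min(max(i + d, 0), n - 1)]
    return (s(boundary, -1) + 2 * s(boundary, 0) + s(boundary, 1)
            + s(sub_boundary, -1) + 2 * s(sub_boundary, 0) + s(sub_boundary, 1)) >> 3
```

If both lines carry the same constant value, the output equals that value; otherwise the second reference line contributes half of the total weight, averaging across the two rows or columns.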
In this possible example, the target component comprises a chroma component; the association information comprises a number of rows and a number of columns of the reference pixel in a prediction block of the target component.
In this possible example, when the number of rows and columns of the reference pixel in the prediction block of the target component is 1, the filter is set to filter Y41.
In this possible example, the filter Y41 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
Wherein the filter Y41 comprises a third three-tap filter and a fourth three-tap filter;

the third three-tap filter comprises:

P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0)) >> 2

wherein x is the coordinate of the current pixel, the value of x not exceeding the width range of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels of the upper neighboring reconstructed block in columns x-1, x and x+1, respectively.
The fourth three-tap filter comprises:

P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1)) >> 2

wherein y is the coordinate of the current pixel, the value of y not exceeding the height range of the current decoded block; P′_ref(0, y) is the filtered final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels of the left neighboring reconstructed block in rows y-1, y and y+1, respectively.
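The 1-2-1 three-tap smoothing used by filter Y41 may be sketched as follows; the function name and argument naming are assumptions for illustration only:

```python
def three_tap_filter(left, center, right):
    """1-2-1 smoothing filter normalized by >> 2 (the weights sum to 4),
    applied to three consecutive boundary pixels; `center` is the
    reference pixel whose filtered value replaces the original sample."""
    return (1 * left + 2 * center + 1 * right) >> 2
```

As with the six-tap variant, a flat input passes through unchanged, while an isolated spike at `center` is attenuated toward its neighbors.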
In this possible example, when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the filter is set to filter Y42.
In this possible example, the filter Y42 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes six neighboring first boundary pixels of 3 rows and 2 columns, and a first boundary pixel in the middle of a 2 nd column of the six neighboring first boundary pixels is a first reference pixel of the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes six neighboring second boundary pixels of 2 rows and 3 columns, and a second boundary pixel in the middle of a second row of the six neighboring second boundary pixels is a second reference pixel of the reference pixels.
Wherein the filter Y42 comprises a third six-tap filter and a fourth six-tap filter;
the third six-tap filter comprises:

P′_ref(x, 0) = (1×P_ref(x-1, 0) + 2×P_ref(x, 0) + 1×P_ref(x+1, 0) + 1×P_ref(x-1, -1) + 2×P_ref(x, -1) + 1×P_ref(x+1, -1)) >> 3

wherein x is the coordinate of the current pixel, the value of x not exceeding the width range of the current decoded block; P′_ref(x, 0) is the final reconstructed sample of the boundary pixel in column x of the upper neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(x-1, 0), P_ref(x, 0) and P_ref(x+1, 0) are the original reconstructed samples of the boundary pixels of the upper neighboring reconstructed block in columns x-1, x and x+1, respectively; P_ref(x-1, -1), P_ref(x, -1) and P_ref(x+1, -1) are the original reconstructed samples of the column sub-boundary pixels of the upper neighboring reconstructed block in columns x-1, x and x+1, respectively, a column sub-boundary pixel being the upper-side neighboring pixel of the corresponding boundary pixel.
The fourth six-tap filter comprises:

P′_ref(0, y) = (1×P_ref(0, y-1) + 2×P_ref(0, y) + 1×P_ref(0, y+1) + 1×P_ref(-1, y-1) + 2×P_ref(-1, y) + 1×P_ref(-1, y+1)) >> 3

wherein y is the coordinate of the current pixel, the value of y not exceeding the height range of the current decoded block; P′_ref(0, y) is the final reconstructed sample of the boundary pixel in row y of the left neighboring reconstructed block of the prediction block of the chroma component of the current decoded block; P_ref(0, y-1), P_ref(0, y) and P_ref(0, y+1) are the original reconstructed samples of the boundary pixels of the left neighboring reconstructed block in rows y-1, y and y+1, respectively; P_ref(-1, y-1), P_ref(-1, y) and P_ref(-1, y+1) are the original reconstructed samples of the row sub-boundary pixels of the left neighboring reconstructed block in rows y-1, y and y+1, respectively, a row sub-boundary pixel being the left-side neighboring pixel of the corresponding boundary pixel.
In an alternative example, applicable to a vertical-class intra prediction direction of the target component, the second filtering of the prediction block of the target component according to the filtered reference pixel comprises:
second filtering the prediction block of the target component using a two-tap filter,
P′(x,y)=f(x)·P(-1,y)+(1-f(x))·P(x,y)
wherein x, y are the coordinates of the current pixel, x not exceeding the width range and y not exceeding the height range of the current decoded block; P′(x, y) is the final prediction sample of pixel (x, y) of the prediction block of the target component of the current decoded block; P(-1, y) is the first-filtered final reconstructed sample of the boundary pixel in row y; f(x) is the horizontal filter coefficient of pixel (x, y) with respect to the reference pixel P(-1, y); and P(x, y) is the original prediction sample of pixel (x, y).
Wherein the horizontal filter coefficients are determined by a first set of parameters comprising a size of a prediction block of the target component and a distance between pixel (x, y) and pixel P (-1, y).
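The two-tap correction above is a convex blend of the prediction sample and the filtered left boundary sample. A minimal sketch follows; the function name is an assumption, and the concrete form of f(x) (which the text only constrains to depend on block size and distance) is left to the caller:

```python
def correct_vertical(p, p_left, f):
    """Two-tap second filtering for a vertical-class intra prediction
    direction: blend the original prediction sample P(x, y) with the
    first-filtered left boundary sample P(-1, y) using the horizontal
    filter coefficient f = f(x)."""
    return f * p_left + (1 - f) * p
```

With f = 0 the prediction sample is left untouched, which matches the expectation that the coefficient decays with distance from the left boundary.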
In an alternative example, applicable to a horizontal-class intra prediction direction of the luma component, the second filtering of the prediction block of the target component according to the filtered reference pixels comprises:
the prediction block of the luminance component is second filtered using a two-tap filter,
P′(x,y)=f(y)·P(x,-1)+(1-f(y))·P(x,y)
wherein x, y are the coordinates of the current pixel, x not exceeding the width range and y not exceeding the height range of the current decoded block; P′(x, y) is the final prediction sample of pixel (x, y) of the prediction block of the luma component of the current decoded block; P(x, -1) is the first-filtered final reconstructed sample of the boundary pixel in column x; f(y) is the vertical filter coefficient of pixel (x, y) with respect to the reference pixel P(x, -1); and P(x, y) is the original prediction sample of pixel (x, y).

Wherein the vertical filter coefficients are determined by a second set of parameters comprising the size of the prediction block of the luma component and the distance between pixel (x, y) and pixel P(x, -1).
In an alternative example, applicable to a non-angular intra prediction direction of the luma component, the second filtering of the prediction block of the target component according to the filtered reference pixels comprises:
second filtering the prediction block of the target component using a three-tap filter,
P′(x,y)=f(x)·P(-1,y)+f(y)·P(x,-1)+(1-f(x)-f(y))·P(x,y)
wherein x, y are the coordinates of the current pixel, x not exceeding the width range and y not exceeding the height range of the current decoded block; P′(x, y) is the final prediction sample of pixel (x, y) of the prediction block of the luma component of the current decoded block; P(-1, y) is the final reconstructed sample of the boundary pixel in row y; P(x, -1) is the final reconstructed sample of the boundary pixel in column x; f(x) is the horizontal filter coefficient of pixel (x, y) with respect to the reference pixel P(-1, y); f(y) is the vertical filter coefficient of pixel (x, y) with respect to the reference pixel P(x, -1); and P(x, y) is the original prediction sample of pixel (x, y).
Wherein the horizontal filter coefficients are determined by a first set of parameters comprising a size of a prediction block of the target component and a distance between pixel (x, y) and pixel P (-1, y); the vertical filter coefficients are determined by a second set of parameters comprising the size of a prediction block of the target component and the distance between pixel (x, y) and pixel P (x, -1).
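The non-angular case blends the prediction sample with both filtered boundary samples. A minimal sketch, with the function name assumed and the coefficients fx = f(x), fy = f(y) supplied by the caller:

```python
def correct_non_angular(p, p_left, p_top, fx, fy):
    """Three-tap second filtering for a non-angular intra prediction
    direction: fx weights the left boundary sample P(-1, y), fy the
    upper boundary sample P(x, -1), and the remaining weight
    (1 - fx - fy) stays on the original prediction sample P(x, y)."""
    return fx * p_left + fy * p_top + (1 - fx - fy) * p
```

Note that with fy = 0 this degenerates to the vertical-class two-tap filter, and with fx = 0 to the horizontal-class one, so the three formulas form one family.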
In one possible example, before the second filtering of the prediction block of the target component according to the filtered reference pixel, the method further includes: calculating a first rate-distortion cost of the current decoded block in the uncorrected case and a second rate-distortion cost of the current decoded block in the corrected case; and determining that the first rate-distortion cost is greater than the second rate-distortion cost.
In this possible example, the method further comprises: determining that the first rate-distortion cost is less than or equal to the second rate-distortion cost, and setting a chroma correction identification bit to a second value, where the second value indicates that prediction correction is not needed.
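The rate-distortion decision above can be sketched as follows; the function name and the concrete bit values 1/0 for "correction used" / "correction not needed" are assumptions for illustration:

```python
def chroma_correction_flag(first_cost, second_cost):
    """Rate-distortion decision sketch: first_cost is the cost of the
    current block without prediction correction, second_cost the cost
    with correction. The identification bit is set to 1 (use
    prediction correction) only when the uncorrected cost is strictly
    greater; otherwise it is set to the second value 0 (prediction
    correction not needed), covering the less-than-or-equal case."""
    return 1 if first_cost > second_cost else 0
```

The tie case (equal costs) deliberately falls to "not needed", matching the "less than or equal to" condition in the text.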
In this possible example, the chroma correction flag is shared with the luma correction flag.
In this possible example, the chroma correction flag is used independently.
In a specific implementation, after the corrected prediction block of the chroma component of the current decoded block is determined, the device may further calculate a reconstructed block of the chroma component, and determine a reconstructed image block of the current decoded block according to the reconstructed block of the chroma component and the reconstructed block of the luma component.
In one possible example, the linear model applied to the current decoded block described above may be replaced with a linear model applied row by row.
In one possible example, the identification bit representation in the existing protocol is extended so that each chrominance component has its own identification bit indicating whether the prediction correction technique is used.
In one possible example, the prediction correction technique for the chrominance components of the present application may be used only for certain ones of the chrominance component prediction modes.
In one possible example, whether to skip in advance or directly use the prediction correction technique may be determined based on the prediction-correction usage information of the neighboring decoded blocks of the current decoded block.
It can be seen that, in the embodiment of the present application, in the intra prediction mode, the boundary pixels of the neighboring decoded blocks of the current decoded block are filtered before the prediction block of the current decoded block is corrected using the spatial correlation between the neighboring pixel blocks and the current decoded block, so that sharpening effects are avoided and the intra prediction accuracy and the coding efficiency are improved.
The technology provided by the application is implemented on the AVS reference software HPM7.0. A 1-second sequence test was carried out in All Intra mode and Random Access mode under the general test conditions and video sequences, with the specific scheme signalling an identification bit to indicate whether the prediction filtering correction technique is used; the specific performance is shown in Tables 2 and 3.
TABLE 2 All Intra test results
(The table data of Table 2 is reproduced as an image in the original publication.)
TABLE 3 Random Access test results
(The table data of Table 3 is reproduced as an image in the original publication.)
As can be seen from tables 2 and 3, the UV component of the test sequence has a relatively significant performance gain on average, the U component has an encoding performance improvement of 0.71% on average under the AI test condition, and the V component has an encoding performance improvement of 1.35% on average under the RA test condition.
The embodiment of the application provides an image encoding device, which may be a video encoder or a video decoder. In particular, the image encoding device is used for executing the steps executed by the video encoder in the above encoding method. The image encoding device provided by the embodiment of the application may comprise modules corresponding to the corresponding steps.
In the present embodiment, the functional modules of the image encoding apparatus may be divided according to the above method; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. The division of the modules in the embodiment of the present application is schematic and is only a logical function division; other division manners are possible in actual implementation.
Fig. 14 shows a schematic diagram of a possible structure of the image encoding apparatus according to the above embodiment, in the case of employing a division of each functional module corresponding to each function. As shown in fig. 14, the image encoding device 14 includes a first determination unit 140, a second determination unit 141, a first filtering unit 142, and a second filtering unit 143.
A first determining unit 140, configured to divide an image, and determine an intra prediction mode of a target component of a current coding block, where the target component includes a luminance component or a chrominance component;
a second determining unit 141, configured to determine a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
a first filtering unit 142, configured to perform a first filtering on the reference pixel used for correcting the prediction block according to the intra prediction mode of the target component, so as to obtain a filtered reference pixel;
the second filtering unit 143 is configured to perform second filtering on the prediction block of the target component according to the filtered reference pixel, so as to obtain a modified prediction block.
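The four units above form a simple pipeline: determine the mode, build the prediction block, first-filter the reference pixels, then second-filter the prediction. A hypothetical sketch of that flow, with all function names and the callable-based decomposition being illustrative assumptions rather than the device's actual interfaces:

```python
def encode_block(block, determine_mode, predict, first_filter, second_filter):
    """Mirror of units 140-143: each stage's output feeds the next."""
    mode = determine_mode(block)        # first determining unit 140
    pred = predict(block, mode)         # second determining unit 141
    ref = first_filter(block, mode)     # first filtering unit 142
    return second_filter(pred, ref)     # second filtering unit 143
```

The point of the decomposition is that the first filtering depends only on the mode and neighboring reconstructed samples, while the second filtering consumes both the prediction block and the filtered references.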
In this possible example, in terms of the first filtering of the reference pixels for modifying the prediction block according to the intra prediction mode of the target component, the first filtering unit 142 is specifically configured to: determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component; determining a filter according to association information of a prediction block of the target component, the association information including at least one of: the intra prediction mode of the target component, the distance between the reference pixel and the currently processed pixel, and the number of rows and columns of the reference pixel in a prediction block of the target component; first filtering the reference pixel using the filter.
In this possible example, the target component includes a luminance component; the associated information includes an intra prediction mode of the target component and a distance of the reference pixel from a currently processed pixel.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the luma component is an intra vertical angle-like prediction mode and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter X11.
In this possible example, the filter X11 is configured to filter a boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block, where the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a middle boundary pixel of the three consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the luma component is an intra horizontal class angle prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter X12.
In this possible example, the filter X12 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a predicted block of a luma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the luma component is an intra vertical angle-like prediction mode and the distance between the reference pixel and the currently processed pixel is in the second distance range, the filter is set to filter X13.
In this possible example, the filter X13 is configured to filter a boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block, the boundary pixel region being a pixel region near a left boundary of the current coding block, the boundary pixel region including five consecutive, neighboring boundary pixels, and a middle boundary pixel of the five consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the luma component is an intra horizontal class angle prediction mode and the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter X14.
In this possible example, the filter X14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a luma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode for the luma component is intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter X15.
In this possible example, the filter X15 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode for the luma component is intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter X16.
In this possible example, the filter X16 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
In this possible example, the target component comprises a chrominance component; the associated information includes an intra prediction mode of the target component and a distance of the reference pixel from a currently processed pixel.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra-frame prediction mode of the chrominance component is an intra-frame vertical angle-like prediction mode, or the two-step cross-component prediction mode TSCPM_T, or cross-component linear model prediction CCLM_A, or the multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set as a filter Y11.
In this possible example, the filter Y11 is configured to filter a boundary pixel region of a left-side neighboring reconstructed block of a predicted block of chroma components of the current coding block, the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra-frame prediction mode of the chrominance component is an intra-frame horizontal angle-like prediction mode, or the two-step cross-component prediction mode TSCPM_L, or cross-component linear model prediction CCLM_L, or the multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set as a filter Y12.
In this possible example, the filter Y12 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra-frame prediction mode of the chroma component is an intra-frame vertical angle-like prediction mode, or the two-step cross-component prediction mode TSCPM_T, or cross-component linear model prediction CCLM_A, or the multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a second distance range, the filter is set as a filter Y13.
In this possible example, the filter Y13 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the chroma component is an intra horizontal angle-like prediction mode, or the two-step cross-component prediction mode TSCPM_L, or cross-component linear model prediction CCLM_L, or the multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a second distance range, the filter is set to filter Y14.
In this possible example, the filter Y14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current coding block, the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode for the chroma component is normal intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter Y15.
In this possible example, the filter Y15 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
In this possible example, in terms of said determining a filter according to the correlation information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode for the chroma component is normal intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a second distance range, the filter is set to filter Y16.
In this possible example, the filter Y16 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
In this possible example, the target component includes a luminance component; the associated information includes an intra prediction mode of the target component.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the luminance component is an intra vertical-class angular prediction mode, the filter is set to filter X21.
In this possible example, the filter X21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a luma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the luminance component is an intra horizontal-class angular prediction mode, the filter is set to filter X22.
In this possible example, the filter X22 is configured to filter a boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block, where the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the luminance component is the intra non-angular prediction mode, the filter is set to the filter X23.
In this possible example, the filter X23 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position among the four consecutive, neighboring first boundary pixels is a first reference pixel among the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current coding block, and the second boundary pixel region includes four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position among the four consecutive, neighboring second boundary pixels is a second reference pixel among the reference pixels.
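As a minimal illustration of the mode-dependent choice just described (filters X21 to X23), the following Python sketch maps a luma intra prediction mode class to the boundary region(s) that are filtered; the mode-class strings and the function name are hypothetical labels for illustration, not identifiers from the embodiment:

```python
def select_boundary_regions(intra_mode_class):
    """Return which neighboring reconstructed-block boundary regions are
    filtered, per the mode-dependent rule described above.

    Mode-class names are illustrative placeholders.
    """
    if intra_mode_class == "vertical_angular":
        return ["upper"]          # filter X21: upper-side neighboring block
    if intra_mode_class == "horizontal_angular":
        return ["left"]           # filter X22: left-side neighboring block
    if intra_mode_class == "non_angular":
        return ["left", "upper"]  # filter X23: both boundary regions
    raise ValueError(f"unknown intra prediction mode class: {intra_mode_class}")
```

A usage example: for a DC-like (non-angular) luma mode, both the left and upper boundary pixel regions are filtered, matching the description of filter X23.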
In this possible example, the target component comprises a chroma component; the associated information includes an intra prediction mode of the target component.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the chrominance component is an intra vertical-class angular prediction mode, the filter is set to filter Y21.
In this possible example, the filter Y21 is configured to filter a boundary pixel region of an upper-side adjacent reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region includes four consecutive, adjacent boundary pixels, and a boundary pixel at a second position in the four consecutive, adjacent boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the chroma component is an intra horizontal-class angular prediction mode, the filter is set to filter Y22.
In this possible example, the filter Y22 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a chroma component of the current coding block, where the boundary pixel region is a pixel region near a left boundary of the current coding block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the intra prediction mode of the chroma component is an intra non-angular prediction mode, the filter is set to filter Y23.
In this possible example, the filter Y23 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, the first boundary pixel region includes four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position among the four consecutive, neighboring first boundary pixels is a first reference pixel among the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current coding block, and the second boundary pixel region includes four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position among the four consecutive, neighboring second boundary pixels is a second reference pixel among the reference pixels.
In this possible example, the target component comprises a luminance component; the associated information includes a distance of the reference pixel from a currently processed pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the distance of the reference pixel from the currently processed pixel is in the first distance range, the filter is set to filter X31.
In this possible example, the filter X31 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter X32.
In this possible example, the filter X32 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region including five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region including five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
In this possible example, the target component comprises a chrominance component; the associated information includes a distance of the reference pixel from a currently processed pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter Y31.
In this possible example, the filter Y31 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter Y32.
In this possible example, the filter Y32 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
In this possible example, the target component includes a luminance component; the associated information includes the number of rows and the number of columns of the reference pixel in a prediction block of the target component.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the number of rows and columns of the reference pixel in the prediction block of the target component is 1, the filter is set to filter X41.
In this possible example, the filter X41 is configured to filter a first boundary pixel region of a left-side adjacent reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper-side adjacent reconstructed block of the predicted block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes three consecutive, adjacent first boundary pixels, and a first boundary pixel in the middle of the three consecutive, adjacent first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region includes three consecutive, adjacent second boundary pixels, and a second boundary pixel in the middle of the three consecutive, adjacent second boundary pixels is a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the filter is set to filter X42.
In this possible example, the filter X42 is configured to filter a first boundary pixel region of a left adjacent reconstructed block of a prediction block of a luma component of the current coding block and a second boundary pixel region of an upper adjacent reconstructed block of the prediction block of the luma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes six adjacent first boundary pixels in 3 rows and 2 columns, and a first boundary pixel in the middle of the second column of the six adjacent first boundary pixels is a first reference pixel of the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current coding block, and the second boundary pixel region includes six adjacent second boundary pixels in 2 rows and 3 columns, and a second boundary pixel in the middle of the second row of the six adjacent second boundary pixels is a second reference pixel of the reference pixels.
In this possible example, the target component comprises a chrominance component; the associated information includes the number of rows and the number of columns of the reference pixel in a prediction block of the target component.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the number of rows and columns of the reference pixel in a prediction block of the target component is 1, the filter is set to filter Y41.
In this possible example, the filter Y41 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 142 is specifically configured to: when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the filter is set to filter Y42.
In this possible example, the filter Y42 is configured to filter a first boundary pixel region of a left adjacent reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper adjacent reconstructed block of the predicted block of the chroma component of the current coding block, where the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region includes six adjacent first boundary pixels in 3 rows and 2 columns, and a first boundary pixel in the middle of the second column of the six adjacent first boundary pixels is a first reference pixel of the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current coding block, and the second boundary pixel region includes six adjacent second boundary pixels in 2 rows and 3 columns, and a second boundary pixel in the middle of the second row of the six adjacent second boundary pixels is a second reference pixel of the reference pixels.
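The boundary-window shapes used when the reference pixel occupies 1 or 2 rows/columns of the prediction block (filters X41/Y41 and X42/Y42) can be summarized in a small helper; the function name and side labels are illustrative, and only the two cases described above are covered:

```python
def boundary_window_shape(num_rows_cols, side):
    """Return the (rows, cols) shape of the boundary pixel window,
    following the description of filters X41/Y41 (N = 1) and
    X42/Y42 (N = 2) for the left-side and upper-side neighboring
    reconstructed blocks. Names are illustrative placeholders.
    """
    if num_rows_cols == 1:
        # three consecutive pixels along the boundary direction
        return (3, 1) if side == "left" else (1, 3)
    if num_rows_cols == 2:
        # six pixels: 3 rows x 2 columns on the left, 2 rows x 3 columns above
        return (3, 2) if side == "left" else (2, 3)
    raise ValueError("this passage only describes N = 1 and N = 2")
```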
All relevant contents of each step related to the above method embodiment may refer to the functional descriptions of the corresponding functional modules, and are not described herein again. Of course, the image encoding apparatus provided in the embodiments of the present application includes, but is not limited to, the above modules; for example, the image encoding apparatus may further include a storage unit 144. The storage unit 144 may be used to store the program code and data of the image encoding apparatus.
In the case of using an integrated unit, a schematic structural diagram of an image encoding device provided in an embodiment of the present application is shown in fig. 15. In fig. 15, the image encoding device 15 includes: a processing module 150 and a communication module 151. The processing module 150 is used for controlling and managing actions of the image encoding apparatus, for example, performing steps performed by the first determining unit 140, the second determining unit 141, the first filtering unit 142, the second filtering unit 143, and/or other processes for performing the techniques described herein. The communication module 151 is used to support interaction between the image encoding apparatus and other devices. As shown in fig. 15, the image encoding apparatus may further include a storage module 152, and the storage module 152 is used for storing program codes and data of the image encoding apparatus, for example, contents stored in the storage unit 144.
The processing module 150 may be a processor or a controller, for example, a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 151 may be a transceiver, an RF circuit, a communication interface, or the like. The storage module 152 may be a memory.
All relevant contents of each scenario related to the above method embodiment may refer to the functional descriptions of the corresponding functional modules, and are not described herein again. The image encoding apparatus may perform the image encoding method shown in fig. 12A, and may specifically be a video image encoding apparatus or another device having a video encoding function.
The application also provides a video encoder, which comprises a nonvolatile storage medium and a central processing unit, wherein the nonvolatile storage medium stores an executable program, and the central processing unit is connected with the nonvolatile storage medium and executes the executable program to realize the image encoding method of the embodiment of the application.
The embodiment of the present application provides an image decoding apparatus, which may be a video decoder or a video decoding device. Specifically, the image decoding apparatus is configured to perform the steps performed by the video decoder in the above decoding method. The image decoding apparatus provided by the embodiment of the present application may include modules corresponding to the corresponding steps.
In the embodiment of the present application, the image decoding apparatus may be divided into functional modules according to the above method. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module. The division of the modules in the embodiment of the present application is schematic and is merely a logical function division; there may be other division manners in actual implementation.
Fig. 16 is a schematic diagram showing a possible configuration of the image decoding apparatus according to the above embodiment, in a case where each functional module is divided in correspondence with each function. As shown in fig. 16, the image decoding device 16 includes a first determining unit 160, a second determining unit 161, a first filtering unit 162, and a second filtering unit 163.
A first determining unit 160, configured to parse the code stream, and determine an intra prediction mode of a target component of a current decoded block, where the target component includes a luminance component or a chrominance component;
a second determining unit 161 for determining a prediction block of the target component of the current decoded block according to the intra prediction mode of the target component;
a first filtering unit 162, configured to perform first filtering on a reference pixel used for modifying the prediction block according to an intra prediction mode of the target component, so as to obtain a filtered reference pixel;
the second filtering unit 163 is configured to perform second filtering on the prediction block of the target component according to the filtered reference pixel, so as to obtain a modified prediction block.
In this possible example, in terms of performing, according to the intra prediction mode of the target component, the first filtering on the reference pixel used for modifying the prediction block to obtain a filtered reference pixel, the first filtering unit 162 is specifically configured to: determine reference pixels for modifying the prediction block according to the intra prediction mode of the target component; determine a filter according to associated information of the prediction block of the target component, the associated information including at least one of: the intra prediction mode of the target component, the distance between the reference pixel and the currently processed pixel, and the number of rows and the number of columns of the reference pixel in a prediction block of the target component; and perform the first filtering on the reference pixel using the filter.
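The three-step procedure of the first filtering unit 162 can be sketched as follows; all callables passed in are hypothetical stand-ins for steps this embodiment leaves abstract (reference-pixel derivation and filter selection), so this is a structural sketch rather than a definitive implementation:

```python
def first_filtering(prediction_block, intra_mode, get_reference_pixels,
                    select_filter, associated_info):
    """Sketch of the decoder-side first filtering: (1) determine the
    reference pixels from the intra prediction mode, (2) pick a filter
    from the associated information, (3) filter each reference pixel.

    get_reference_pixels and select_filter are hypothetical callables.
    """
    reference_pixels = get_reference_pixels(prediction_block, intra_mode)
    filt = select_filter(associated_info)          # e.g. X11..X16, Y11..Y16
    return [filt(p) for p in reference_pixels]     # filtered reference pixels
```

The filtered reference pixels are then passed to the second filtering unit 163, which modifies the prediction block accordingly.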
In this possible example, the target component includes a luminance component; the associated information includes an intra prediction mode of the target component and a distance of the reference pixel from a currently processed pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luma component is an intra vertical-class angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter X11.
In this possible example, the filter X11 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a luma component of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, the boundary pixel region including three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luma component is an intra horizontal-class angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter X12.
In this possible example, the filter X12 is configured to filter a boundary pixel region of an upper neighboring reconstruction block of a prediction block of a luma component of the current decoded block, the boundary pixel region being a pixel region near an upper boundary of the current decoded block, the boundary pixel region including three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luma component is an intra vertical-class angular prediction mode and the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter X13.
In this possible example, the filter X13 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a luma component of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, and the boundary pixel region including five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luma component is an intra horizontal-class angular prediction mode and the distance of the reference pixel from the currently processed pixel is in the second distance range, the filter is set to filter X14.
In this possible example, the filter X14 is configured to filter a boundary pixel region of an upper neighboring reconstruction block of a prediction block of a luma component of the current decoded block, the boundary pixel region being a pixel region near an upper boundary of the current decoded block, the boundary pixel region including five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luma component is an intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter X15.
In this possible example, the filter X15 is configured to filter a first boundary pixel region of a left-side neighboring reconstruction block of a prediction block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstruction block of the prediction block of the luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luma component is an intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a second distance range, the filter is set to filter X16.
In this possible example, the filter X16 is configured to filter a first boundary pixel region of a left-side neighboring reconstruction block of a prediction block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstruction block of the prediction block of the luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
In this possible example, the target component comprises a chrominance component; the associated information includes an intra prediction mode of the target component and a distance of the reference pixel from a currently processed pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chroma component is an intra vertical angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_T, or a cross-component linear model prediction mode CCLM_A, or a multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter Y11.
In this possible example, the filter Y11 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a chroma component of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, the boundary pixel region including three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chroma component is an intra horizontal angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_L, or a cross-component linear model prediction mode CCLM_L, or a multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a first distance range, the filter is set to filter Y12.
In this possible example, the filter Y12 is configured to filter a boundary pixel region of an upper neighboring reconstruction block of a prediction block of chroma components of the current decoded block, the boundary pixel region being a pixel region near an upper boundary of the current decoded block, the boundary pixel region including three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chroma component is an intra vertical angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_T, or a cross-component linear model prediction mode CCLM_A, or a multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a second distance range, the filter is set to filter Y13.
In this possible example, the filter Y13 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of chroma components of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, and the boundary pixel region including five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chroma component is an intra horizontal angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_L, or a cross-component linear model prediction mode CCLM_L, or a multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a second distance range, the filter is set to filter Y14.
In this possible example, the filter Y14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current decoded block, where the boundary pixel region is a pixel region near an upper boundary of the current decoded block, the boundary pixel region includes five consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chroma component is a normal intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter Y15.
In this possible example, the filter Y15 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chroma component is a normal intra non-angular prediction mode and the distance of the reference pixel from the currently processed pixel is in a second distance range, the filter is set to filter Y16.
In this possible example, the filter Y16 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
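The selection rules for filters Y11 through Y16 above reduce to a lookup on the chroma intra prediction mode and the distance range. A minimal sketch follows; the mode-name strings and the numeric encoding of the two distance ranges are illustrative placeholders, not identifiers from any standard:

```python
def select_chroma_filter(intra_mode, distance_range):
    """Map the chroma intra prediction mode and the reference-pixel distance
    range to one of the filters Y11..Y16 described above (selection logic
    only; the mode-name strings below are placeholder labels)."""
    vertical_like = {"vertical_angle", "TSCPM_T", "CCLM_A", "MCPM_T"}
    horizontal_like = {"horizontal_angle", "TSCPM_L", "CCLM_L", "MCPM_L"}
    if intra_mode in vertical_like:
        # Vertical-like modes filter only the left-boundary region.
        return "Y11" if distance_range == 1 else "Y13"
    if intra_mode in horizontal_like:
        # Horizontal-like modes filter only the upper-boundary region.
        return "Y12" if distance_range == 1 else "Y14"
    # Non-angular modes filter both the left and upper boundary regions.
    return "Y15" if distance_range == 1 else "Y16"
```

The first distance range selects the 3-tap variants (Y11, Y12, Y15), the second the 5-tap variants (Y13, Y14, Y16).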
In this possible example, the target component comprises a luminance component; the associated information includes an intra prediction mode of the target component.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luminance component is an intra vertical type angle prediction mode, the filter is set to a filter X21.
In this possible example, the filter X21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a luma component of the current decoded block, where the boundary pixel region is a pixel region near an upper boundary of the current decoded block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luminance component is an intra horizontal type angle prediction mode, the filter is set to filter X22.
In this possible example, the filter X22 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a luma component of the current decoded block, where the boundary pixel region is a pixel region near a left boundary of the current decoded block, the boundary pixel region includes four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the luminance component is an intra non-angular prediction mode, the filter is set to filter X23.
In this possible example, the filter X23 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position among the four consecutive, neighboring first boundary pixels is a first reference pixel among the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position among the four consecutive, neighboring second boundary pixels is a second reference pixel among the reference pixels.
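Filters X21, X22, and X23 share a four-pixel window in which the reference pixel occupies the second position. A sketch under assumed weights (the excerpt fixes the window shape and the reference position, but not the coefficients):

```python
def filter_second_position(pixels, ref_idx):
    """Filter a four-pixel boundary window in which the reference pixel sits
    at the second position (indices ref_idx-1 .. ref_idx+2), as in filters
    X21, X22, and X23. The [1, 3, 3, 1] weights are an assumed illustration;
    the excerpt fixes only the window shape and the reference position."""
    weights = [1, 3, 3, 1]  # assumed weights, sum = 8
    acc = sum(w * pixels[min(max(ref_idx - 1 + k, 0), len(pixels) - 1)]
              for k, w in enumerate(weights))
    return (acc + 4) // 8  # integer result with rounding
```

Index clamping keeps the asymmetric window inside the boundary region when the reference pixel lies near either end.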
In this possible example, the target component comprises a chrominance component; the associated information includes an intra prediction mode of the target component.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chrominance component is an intra vertical type angle prediction mode, the filter is set to a filter Y21.
In this possible example, the filter Y21 is configured to filter a boundary pixel region of an upper-side neighboring reconstruction block of a prediction block of a chroma component of the current decoded block, the boundary pixel region being a pixel region near an upper boundary of the current decoded block, the boundary pixel region including four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chroma component is an intra horizontal type angle prediction mode, the filter is set to filter Y22.
In this possible example, the filter Y22 is configured to filter a boundary pixel region of a left neighboring reconstruction block of a prediction block of a chroma component of the current decoded block, the boundary pixel region being a pixel region near a left boundary of the current decoded block, the boundary pixel region including four consecutive, neighboring boundary pixels, and a boundary pixel at a second position among the four consecutive, neighboring boundary pixels being the reference pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the intra prediction mode of the chroma component is an intra non-angular prediction mode, the filter is set to filter Y23.
In this possible example, the filter Y23 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, where the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and the first boundary pixel region includes four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position among the four consecutive, neighboring first boundary pixels is a first reference pixel among the reference pixels, the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region includes four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position among the four consecutive, neighboring second boundary pixels is a second reference pixel among the reference pixels.
In this possible example, the target component includes a luminance component; the associated information includes a distance of the reference pixel from a currently processed pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter X31.
In this possible example, the filter X31 is configured to filter a first boundary pixel region of a left-side neighboring reconstruction block of a prediction block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstruction block of the prediction block of the luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the distance of the reference pixel from the currently processed pixel is in a second distance range, the filter is set to filter X32.
In this possible example, the filter X32 is configured to filter a first boundary pixel region of a left-side neighboring reconstruction block of a prediction block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstruction block of the prediction block of the luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
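Here the choice between the 3-tap filter X31 and the 5-tap filter X32 depends only on the reference-pixel distance. A minimal sketch, in which the boundary between the two distance ranges (`first_range_max`) is an assumed parameter, since the excerpt only names a "first" and a "second" range:

```python
def select_luma_filter_by_distance(distance, first_range_max=2):
    """Choose between the 3-tap filter X31 and the 5-tap filter X32 purely
    from the distance between the reference pixel and the currently
    processed pixel. `first_range_max` (the boundary between the first and
    second distance ranges) is an assumed parameter."""
    name = "X31" if distance <= first_range_max else "X32"
    taps = 3 if name == "X31" else 5
    return name, taps
```

Both filters act on the left-boundary and upper-boundary regions; only the window length differs between the two ranges.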
In this possible example, the target component comprises a chrominance component; the associated information includes a distance of the reference pixel from a currently processed pixel.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the distance of the reference pixel from the currently processed pixel is in a first distance range, the filter is set to filter Y31.
In this possible example, the filter Y31 is configured to filter a first boundary pixel region of a left-side neighboring reconstruction block of a prediction block of chroma components of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstruction block of the prediction block of chroma components of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the distance of the reference pixel from the currently processed pixel is in a second distance range, the filter is set to filter Y32.
In this possible example, the filter Y32 is configured to filter a first boundary pixel region of a left-side neighboring reconstruction block of the prediction block of the chroma components of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstruction block of the prediction block of the chroma components of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including five consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the five consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including five consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the five consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
In this possible example, the target component includes a luminance component; the associated information comprises a number of rows and a number of columns of the reference pixel in a prediction block of the target component.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the number of rows and columns of the reference pixel in the prediction block of the target component is 1, the filter is set to filter X41.
In this possible example, the filter X41 is configured to filter a first boundary pixel region of a left-side neighboring reconstructed block of a predicted block of a luma component of the current decoded block and a second boundary pixel region of an upper-side neighboring reconstructed block of the predicted block of the luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the number of rows and columns of the reference pixel in the prediction block of the target component is 2, the filter is set to filter X42.
In this possible example, the filter X42 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a luma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including six neighboring first boundary pixels of 3 rows and 2 columns, and a first boundary pixel in the middle of a 2 nd column of the six neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region including six neighboring second boundary pixels of 2 rows and 3 columns, and a second boundary pixel in the middle of a second row of the six neighboring second boundary pixels being a second reference pixel of the reference pixels.
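When the reference row/column index is 2, filter X42 operates on a six-pixel two-dimensional window: 3 rows by 2 columns at the left boundary, 2 rows by 3 columns at the upper boundary, with the reference pixel in the middle of the second column or row respectively. A sketch with assumed uniform weights (the excerpt defines only the window shape, not its coefficients):

```python
def filter_six_pixel_region(region):
    """Filter a six-pixel boundary region as used by filter X42: a list of
    rows forming either a 3x2 left-boundary window or a 2x3 upper-boundary
    window. Uniform weights are an assumption; the excerpt specifies only
    the shape of the window and the position of the reference pixel."""
    flat = [p for row in region for p in row]
    assert len(flat) == 6, "region must contain exactly six pixels"
    return (sum(flat) + 3) // 6  # rounded average of the six neighbors
```

The same routine handles both orientations, since only the arrangement of the six pixels differs between the left and upper windows.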
In this possible example, the target component comprises a chrominance component; the associated information comprises a number of rows and a number of columns of the reference pixel in a prediction block of the target component.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the number of rows and columns of the reference pixel in a prediction block of the target component is 1, the filter is set to filter Y41.
In this possible example, the filter Y41 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near the upper boundary of the current decoded block, and the second boundary pixel region including three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
In this possible example, in terms of said determining a filter according to the associated information of the prediction block of the target component, the first filtering unit 162 is specifically configured to: when the number of rows and columns of the reference pixel in a prediction block of the target component is 2, the filter is set to filter Y42.
In this possible example, the filter Y42 is configured to filter a first boundary pixel region of a left adjacent reconstruction block of a prediction block of chroma components of the current decoded block and a second boundary pixel region of an upper adjacent reconstruction block of the prediction block of chroma components of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region including six adjacent first boundary pixels of 3 rows and 2 columns, and a first boundary pixel of the six adjacent first boundary pixels in the middle of the 2 nd column being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region including six adjacent second boundary pixels of 2 rows and 3 columns, and a second boundary pixel of the six adjacent second boundary pixels in the middle of the second row being a second reference pixel of the reference pixels.
For all relevant details of the steps in the above method embodiment, reference may be made to the functional descriptions of the corresponding functional modules; details are not repeated here. Of course, the image decoding apparatus provided in the embodiments of the present application includes, but is not limited to, the above modules; for example, the image decoding apparatus may further include a storage unit 164. The storage unit 164 may be used to store program codes and data of the image decoding apparatus.
In the case of using an integrated unit, a schematic structural diagram of an image decoding apparatus provided in an embodiment of the present application is shown in fig. 17. In fig. 17, the image decoding device 17 includes: a processing module 170 and a communication module 171. The processing module 170 is used to control and manage the actions of the image decoding apparatus, for example, execute the steps performed by the first determining unit 160, the second determining unit 161, the first filtering unit 162, the second filtering unit 163, and/or other processes for performing the techniques described herein. The communication module 171 is used to support interaction between the image decoding apparatus and other devices. As shown in fig. 17, the image decoding apparatus may further include a storage module 172, and the storage module 172 is used for storing program codes and data of the image decoding apparatus, for example, contents stored in the storage unit 164.
The processing module 170 may be a processor or a controller, for example, a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 171 may be a transceiver, an RF circuit, a communication interface, or the like. The storage module 172 may be a memory.
For all relevant details of the scenarios in the above method embodiment, reference may be made to the functional descriptions of the corresponding functional modules; details are not repeated here. The image decoding apparatus may perform the image decoding method shown in fig. 13, and may specifically be a video image decoding apparatus or another device having a video decoding function.
The application also provides a video decoder, which comprises a nonvolatile storage medium and a central processing unit, wherein the nonvolatile storage medium stores an executable program, and the central processing unit is connected with the nonvolatile storage medium and executes the executable program to realize the image decoding method of the embodiment of the application.
The present application further provides a terminal, including: one or more processors, a memory, and a communication interface. The memory and the communication interface are coupled to the one or more processors; the memory is used for storing computer program code, and the computer program code includes instructions which, when executed by the one or more processors, cause the terminal to perform the image encoding and/or image decoding methods of the embodiments of the present application. The terminal may be a video display device, a smartphone, a portable computer, or another device capable of processing or playing video.
Another embodiment of the present application further provides a computer-readable storage medium including one or more program codes, the one or more program codes including instructions. When a processor in a decoding apparatus executes the program codes, the decoding apparatus performs the image encoding method and the image decoding method of the embodiments of the present application.
In another embodiment of the present application, there is also provided a computer program product comprising computer executable instructions stored in a computer readable storage medium; the at least one processor of the decoding device may read the computer executable instructions from the computer readable storage medium, and the execution of the computer executable instructions by the at least one processor causes the terminal to implement the image encoding method and the image decoding method of the embodiments of the present application.
In the above embodiments, the implementation may be realized, in whole or in part, by software, hardware, firmware, or any combination thereof. When a software program is used for implementation, the implementation may take the form of a computer program product, in whole or in part. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced, in whole or in part.
The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) manner.
The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Through the above description of the embodiments, those skilled in the art will clearly understand that the foregoing division into functional modules is merely an example given for convenience and brevity of description; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the described device embodiments are merely illustrative: the division into modules or units is only one kind of logical functional division, and other divisions are possible in actual implementation; a plurality of units or components may be combined or integrated into another device, and some features may be omitted or not performed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and a part displayed as a unit may be one physical unit or a plurality of physical units, that is, it may be located in one place or distributed over a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The above description is only an embodiment of the present application, but the protection scope of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (111)

1. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that a filter is a filter X11 according to associated information of a prediction block of the target component, wherein the target component comprises a brightness component, the associated information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the brightness component is an intra-frame vertical angle-like prediction mode, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter X11 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
2. The method of claim 1, wherein the filter X11 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a luma component of the current coding block, wherein the boundary pixel region is a pixel region near a left boundary of the current coding block, wherein the boundary pixel region comprises three consecutive, neighboring boundary pixels, and wherein a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
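Claims 1 and 2 describe selecting a 3-tap filter (X11) whose support is three consecutive boundary pixels centered on the reference pixel. A minimal sketch of such reference-pixel smoothing follows; the [1, 2, 1]/4 weights and the function name are assumptions for illustration, since the claims fix only the 3-pixel support, not the coefficients:

```python
def filter_reference_3tap(boundary, i):
    """Smooth boundary pixel i with its two neighbors (clamped at edges).

    Illustrative [1, 2, 1]/4 weights; filter X11 is defined in the claims
    only by its 3-pixel support centered on the reference pixel.
    """
    left = boundary[max(i - 1, 0)]
    mid = boundary[i]
    right = boundary[min(i + 1, len(boundary) - 1)]
    # Rounded integer weighted average: (left + 2*mid + right + 2) / 4
    return (left + 2 * mid + right + 2) >> 2
```

Each reference pixel is thus replaced by a rounded weighted average of itself and its immediate neighbors along the boundary, which is the "first filtering" step of the claim.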
3. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter X12 according to associated information of a prediction block of the target component, wherein the target component comprises a brightness component, the associated information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the brightness component is an intra-frame horizontal angle-like prediction mode, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter X12 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
4. The method according to claim 3, wherein the filter X12 is used for filtering a boundary pixel region of an upper side neighboring reconstructed block of a predicted block of the luma component of the current coding block, the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region comprises three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
5. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that a filter is a filter X13 according to the associated information of the prediction block of the target component, wherein the target component comprises a brightness component, the associated information comprises an intra-frame prediction mode of the target component and the distance between the reference pixel and the currently processed pixel, the intra-frame prediction mode of the brightness component is an intra-frame vertical angle-like prediction mode, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter X13 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
6. The method of claim 5, wherein the filter X13 is used to filter a boundary pixel region of a left neighboring reconstructed block of a predicted block of a luma component of the current coding block, wherein the boundary pixel region is a pixel region near a left boundary of the current coding block, wherein the boundary pixel region comprises five consecutive, neighboring boundary pixels, and wherein a middle boundary pixel of the five consecutive, neighboring boundary pixels is the reference pixel.
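Claims 5 and 6 widen the support to five consecutive boundary pixels (filter X13) when the reference pixel falls in the second, farther distance range. A sketch under assumed [1, 2, 2, 2, 1]/8 weights; the coefficients are hypothetical, as the claims specify only the 5-pixel support centered on the reference pixel:

```python
def filter_reference_5tap(boundary, i):
    """5-tap smoothing centered on boundary pixel i, clamped at the edges.

    The [1, 2, 2, 2, 1]/8 weights are illustrative; the claim fixes only
    the 5-pixel support of filter X13.
    """
    n = len(boundary)
    taps = [boundary[min(max(i + d, 0), n - 1)] for d in (-2, -1, 0, 1, 2)]
    weights = (1, 2, 2, 2, 1)
    # Rounded integer weighted average with weight sum 8
    return (sum(t * w for t, w in zip(taps, weights)) + 4) >> 3
```

A wider support smooths more strongly, which matches using it for reference pixels farther from the currently processed pixel.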
7. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that the filter is a filter X14 according to the associated information of the prediction block of the target component, wherein the target component comprises a brightness component, the associated information comprises an intra-frame prediction mode of the target component and the distance between the reference pixel and the currently processed pixel, the intra-frame prediction mode of the brightness component is an intra-frame horizontal angle-like prediction mode, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter X14 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
8. The method of claim 7, wherein the filter X14 is used for filtering a boundary pixel region of an upper side neighboring reconstructed block of a predicted block of a luma component of the current coding block, wherein the boundary pixel region is a pixel region near an upper boundary of the current coding block, wherein the boundary pixel region comprises five consecutive, neighboring boundary pixels, and wherein a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
9. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that a filter is a filter X15 according to associated information of a prediction block of the target component, wherein the target component comprises a brightness component, the associated information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the brightness component is an intra-frame non-angle prediction mode, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter X15 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
10. The method of claim 9, wherein the filter X15 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the luma component of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprises three consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the three consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region comprises three consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the three consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
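For non-angular modes (claims 9 and 10), filtered reference pixels from both the left and upper boundaries feed the "second filtering" that corrects the prediction block. The following sketch shows one plausible distance-weighted correction; the decaying weights and the `strength` parameter are assumptions, since the claims state only that the prediction block is second-filtered using the filtered reference pixels:

```python
def correct_prediction(pred, left_ref, top_ref, strength=4):
    """Blend each predicted pixel with its filtered left/top reference pixels.

    Weights decay with distance from each boundary (an assumption); pixels
    nearer a boundary are corrected more strongly. `pred` is a 2-D list,
    `left_ref[y]` / `top_ref[x]` are the filtered boundary reference pixels.
    """
    h, w = len(pred), len(pred[0])
    out = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            wl = 32 >> min(x // strength, 5)   # weight of left reference
            wt = 32 >> min(y // strength, 5)   # weight of top reference
            wp = 64 - wl - wt                  # remaining weight on prediction
            out[y][x] = (wl * left_ref[y] + wt * top_ref[x]
                         + wp * pred[y][x] + 32) >> 6
    return out
```

The integer weights sum to 64, so a flat block with matching references passes through unchanged.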
11. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter X16 according to associated information of a prediction block of the target component, wherein the target component comprises a brightness component, the associated information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the brightness component is an intra-frame non-angle prediction mode, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter X16 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
12. The method of claim 11, wherein the filter X16 is used to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the luma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprising five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region comprising five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
13. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter Y11 according to association information of a prediction block of the target component, wherein the target component comprises a chroma component, the association information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the chroma component is an intra-frame vertical angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_T, or a cross-component linear model prediction CCLM_A, or a multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter Y11 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
14. The method of claim 13, wherein the filter Y11 is used to filter a boundary pixel region of a left neighboring reconstructed block of a predicted block of chroma components of the current coding block, the boundary pixel region being a pixel region near a left boundary of the current coding block, and the boundary pixel region comprising three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels being the reference pixel.
15. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter Y12 according to association information of a prediction block of the target component, wherein the target component comprises a chroma component, the association information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the chroma component is an intra-frame horizontal angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_L, or a cross-component linear model prediction CCLM_L, or a multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter Y12 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
16. The method of claim 15, wherein the filter Y12 is configured to filter a boundary pixel region of an upper side neighboring reconstructed block of a prediction block of a chroma component of the current coding block, the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region comprises three consecutive, neighboring boundary pixels, and a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
17. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that a filter is a filter Y13 according to association information of a prediction block of the target component, wherein the target component comprises a chroma component, the association information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the chroma component is an intra-frame vertical angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_T, or a cross-component linear model prediction CCLM_A, or a multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter Y13 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
18. The method of claim 17, wherein the filter Y13 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a chroma component of the current coding block, wherein the boundary pixel region is a pixel region near a left boundary of the current coding block, wherein the boundary pixel region comprises five consecutive, neighboring boundary pixels, and wherein a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
19. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that a filter is a filter Y14 according to association information of a prediction block of the target component, wherein the target component comprises a chroma component, the association information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the chroma component is an intra-frame horizontal angle-like prediction mode, or a two-step cross-component prediction mode TSCPM_L, or a cross-component linear model prediction CCLM_L, or a multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter Y14 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
20. The method of claim 19, wherein the filter Y14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current coding block, wherein the boundary pixel region is a pixel region near an upper boundary of the current coding block, wherein the boundary pixel region comprises five consecutive, neighboring boundary pixels, and wherein a middle boundary pixel of the five consecutive, neighboring boundary pixels is the reference pixel.
21. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter Y15 according to associated information of a prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises an intra-frame prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra-frame prediction mode of the chroma component is a normal intra-frame non-angle prediction mode, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter Y15 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
22. The method of claim 21, wherein the filter Y15 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a chroma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the chroma component of the current coding block, the first boundary pixel region being a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprising three consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the three consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current coding block, and the second boundary pixel region comprising three consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the three consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
23. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that a filter is a filter Y16 according to the associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises an intra-frame prediction mode of the target component and the distance between the reference pixel and the currently processed pixel, the intra-frame prediction mode of the chroma component is a normal intra-frame non-angle prediction mode, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter Y16 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
24. The method of claim 23, wherein the filter Y16 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of chroma components of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of chroma components of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprises five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and wherein the second boundary pixel region comprises five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
25. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that a filter is a filter X21 according to associated information of a prediction block of the target component, wherein the target component comprises a brightness component, the associated information comprises an intra-frame prediction mode of the target component, and the intra-frame prediction mode of the brightness component is an intra-frame vertical angle-like prediction mode;
performing first filtering on the reference pixel by using the filter X21 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
26. The method of claim 25, wherein the filter X21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a predicted block of a luma component of the current coding block, wherein the boundary pixel region is a pixel region near an upper boundary of the current coding block, wherein the boundary pixel region comprises four consecutive, neighboring boundary pixels, and wherein a boundary pixel at a second position of the four consecutive, neighboring boundary pixels is the reference pixel.
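Claims 25 and 26 use an asymmetric 4-tap filter (X21) whose support is four consecutive boundary pixels with the reference pixel at the second position. A sketch with assumed [2, 3, 2, 1]/8 weights; the coefficients are hypothetical, as the claims specify only the 4-pixel support and the reference pixel's position within it:

```python
def filter_reference_4tap(boundary, i):
    """4-tap filter with the reference pixel second in its 4-pixel support.

    Taps cover offsets -1..+2 around reference pixel i (clamped at edges);
    the [2, 3, 2, 1]/8 weights are illustrative only.
    """
    n = len(boundary)
    taps = [boundary[min(max(i + d, 0), n - 1)] for d in (-1, 0, 1, 2)]
    weights = (2, 3, 2, 1)
    # Rounded integer weighted average with weight sum 8
    return (sum(t * w for t, w in zip(taps, weights)) + 4) >> 3
```

Placing the reference pixel off-center biases the support toward pixels deeper inside the neighboring reconstructed block, which differs from the centered 3- and 5-tap variants of the earlier claims.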
27. An image encoding method, comprising:
dividing an image, and determining an intra-frame prediction mode of a target component of a current coding block;
determining a prediction block of a target component of the current coding block according to an intra-frame prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining that a filter is a filter X22 according to associated information of a prediction block of the target component, wherein the target component comprises a brightness component, the associated information comprises an intra-frame prediction mode of the target component, and the intra-frame prediction mode of the brightness component is an intra-frame horizontal angle-like prediction mode;
performing first filtering on the reference pixel by using the filter X22 to obtain a filtered reference pixel;
and carrying out second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
28. The method of claim 27, wherein the filter X22 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a predicted block of a luma component of the current coding block, wherein the boundary pixel region is a pixel region near a left boundary of the current coding block, wherein the boundary pixel region comprises four consecutive, neighboring boundary pixels, and wherein a boundary pixel at a second position of the four consecutive, neighboring boundary pixels is the reference pixel.
29. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X23 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the luma component is an intra non-angular prediction mode;
performing first filtering on the reference pixel by using the filter X23 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
30. The method of claim 29, wherein the filter X23 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the luma component of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprises four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position of the four consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and wherein the second boundary pixel region comprises four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position of the four consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
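For the non-angular case of claims 29–30, both the left and the upper boundary regions are filtered; each region holds four consecutive boundary pixels, and the pixel at the second position of the four is the reference pixel. A sketch under assumed 4-tap weights (the claims fix the region geometry but not the coefficients):

```python
# Sketch of filtering a four-pixel boundary region where the pixel at the
# second position is the reference pixel (claims 29-30). The weights are
# illustrative assumptions that emphasise the reference position.

def filter_four(region):
    # region: four consecutive, neighbouring boundary pixels;
    # region[1] (second position) is the reference pixel.
    w = (2, 3, 2, 1)  # hypothetical weights
    return sum(wi * p for wi, p in zip(w, region)) / sum(w)

left_region = [100, 104, 108, 112]   # pixels near the left boundary
upper_region = [96, 100, 104, 108]   # pixels near the upper boundary
first_ref = filter_four(left_region)     # first reference pixel
second_ref = filter_four(upper_region)   # second reference pixel
```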
31. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y21 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the chroma component is an intra vertical angular prediction mode;
performing first filtering on the reference pixel by using the filter Y21 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
32. The method of claim 31, wherein the filter Y21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a chroma component of the current coding block, the boundary pixel region is a pixel region near an upper boundary of the current coding block, the boundary pixel region comprises four consecutive, neighboring boundary pixels, and a boundary pixel at a second position in the four consecutive, neighboring boundary pixels is the reference pixel.
33. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y22 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the chroma component is an intra horizontal-like angular prediction mode;
performing first filtering on the reference pixel by using the filter Y22 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
34. The method of claim 33, wherein the filter Y22 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a chroma component of the current coding block, wherein the boundary pixel region is a pixel region near a left boundary of the current coding block, wherein the boundary pixel region comprises four consecutive, neighboring boundary pixels, and wherein a boundary pixel at a second position among the four consecutive, neighboring boundary pixels is the reference pixel.
35. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y23 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the chroma component is an intra non-angular prediction mode;
performing first filtering on the reference pixel by using the filter Y23 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
36. The method of claim 35, wherein the filter Y23 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of chroma components of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of chroma components of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprises four consecutive, neighboring first boundary pixels, and a first boundary pixel at a second position of the four consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and wherein the second boundary pixel region comprises four consecutive, neighboring second boundary pixels, and a second boundary pixel at a second position of the four consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
37. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X31 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises a distance between the reference pixel and a currently processed pixel, and the distance between the reference pixel and the currently processed pixel is within a first distance range;
performing first filtering on the reference pixel by using the filter X31 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
38. The method of claim 37, wherein the filter X31 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the luma component of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and wherein the first boundary pixel region comprises three consecutive, neighboring first boundary pixels, and wherein a middle first boundary pixel of the three consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and wherein the second boundary pixel region comprises three consecutive, neighboring second boundary pixels, and wherein a middle second boundary pixel of the three consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
39. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X32 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises a distance between the reference pixel and a currently processed pixel, and the distance between the reference pixel and the currently processed pixel is within a second distance range;
performing first filtering on the reference pixel by using the filter X32 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
40. The method of claim 39, wherein the filter X32 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the luma component of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and wherein the first boundary pixel region comprises five consecutive, neighboring first boundary pixels, and wherein a middle first boundary pixel of the five consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and wherein the second boundary pixel region comprises five consecutive, neighboring second boundary pixels, and wherein a middle second boundary pixel of the five consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
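Claims 37–40 select the filter length by the distance between the reference pixel and the currently processed pixel: three consecutive boundary pixels with the middle pixel as the reference for the first distance range (claim 38), five for the second (claim 40). A sketch under assumed coefficients and an assumed distance threshold, neither of which the claims specify:

```python
# Sketch of distance-dependent reference-pixel filtering: a short 3-tap
# filter when the reference pixel is close to the processed pixel, a longer
# 5-tap filter when it is farther away. Threshold and tap values are
# illustrative assumptions only.

FIRST_RANGE_MAX = 4  # hypothetical boundary between the two distance ranges

def filter_reference(boundary, idx, distance):
    if distance <= FIRST_RANGE_MAX:
        taps = (1, 2, 1)          # X31/Y31-style: 3 pixels, middle is the reference
        lo = idx - 1
    else:
        taps = (1, 2, 2, 2, 1)    # X32/Y32-style: 5 pixels, middle is the reference
        lo = idx - 2
    window = boundary[lo:lo + len(taps)]
    return sum(t * p for t, p in zip(taps, window)) / sum(taps)

boundary = [96, 100, 104, 108, 112]
near = filter_reference(boundary, 2, distance=2)   # falls in the first range
far = filter_reference(boundary, 2, distance=8)    # falls in the second range
```

The longer tap set smooths over a wider neighbourhood, which is the apparent rationale for switching by distance; on this linear ramp both filters return the centre value.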
41. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y31 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises a distance between the reference pixel and a currently processed pixel, and the distance between the reference pixel and the currently processed pixel is within a first distance range;
performing first filtering on the reference pixel by using the filter Y31 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
42. The method of claim 41, wherein the filter Y31 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of chroma components of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of chroma components of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and wherein the first boundary pixel region comprises three consecutive, neighboring first boundary pixels, and wherein a middle first boundary pixel of the three consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and wherein the second boundary pixel region comprises three consecutive, neighboring second boundary pixels, and wherein a middle second boundary pixel of the three consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
43. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y32 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises a distance between the reference pixel and a currently processed pixel, and the distance between the reference pixel and the currently processed pixel is within a second distance range;
performing first filtering on the reference pixel by using the filter Y32 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
44. The method of claim 43, wherein the filter Y32 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of chroma components of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of chroma components of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprises five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region comprises five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
45. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X41 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises a row and column number of the reference pixel in the prediction block of the target component, and the row and column number of the reference pixel in the prediction block of the target component is 1;
performing first filtering on the reference pixel by using the filter X41 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
46. The method of claim 45, wherein the filter X41 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of a luma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of the luma component of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and wherein the first boundary pixel region comprises three consecutive, neighboring first boundary pixels, and wherein a middle first boundary pixel of the three consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and wherein the second boundary pixel region comprises three consecutive, neighboring second boundary pixels, and wherein a middle second boundary pixel of the three consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
47. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X42 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises a row and column number of the reference pixel in the prediction block of the target component, and the row and column number of the reference pixel in the prediction block of the target component is 2;
performing first filtering on the reference pixel by using the filter X42 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
48. The method of claim 47, wherein the filter X42 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a prediction block of a luma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the prediction block of the luma component of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprises six neighboring first boundary pixels arranged in 3 rows and 2 columns, and the first boundary pixel in the middle of the second column of the six neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region comprises six neighboring second boundary pixels arranged in 2 rows and 3 columns, and the second boundary pixel in the middle of the second row of the six neighboring second boundary pixels is a second reference pixel of the reference pixels.
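When the reference pixel sits in the second row or column of the prediction block (claims 47–48), the filter window becomes two-dimensional: 2 rows by 3 columns for the upper boundary region, with the reference pixel in the middle of the second row. A sketch with assumed weights (the claims specify only the window geometry, not the coefficients):

```python
# Sketch of two-dimensional boundary filtering over a 2-row x 3-column
# region of the upper neighbouring reconstructed block (claim 48); the
# reference pixel is the middle pixel of the second row. The weight matrix
# is an illustrative assumption.

def filter_2x3(region):
    # region: 2 rows x 3 columns of boundary pixels; region[1][1] is the
    # reference pixel (middle of the second row).
    weights = [
        [1, 2, 1],
        [2, 4, 2],
    ]
    num = sum(w * p for wrow, prow in zip(weights, region)
              for w, p in zip(wrow, prow))
    den = sum(w for wrow in weights for w in wrow)
    return num / den

region = [
    [100, 100, 100],  # first boundary row
    [104, 108, 104],  # second boundary row; 108 is the reference pixel
]
filtered = filter_2x3(region)
```

The 3-row by 2-column left-boundary case of the same claim is the transpose of this window.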
49. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y41 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises a row and column number of the reference pixel in the prediction block of the target component, and the row and column number of the reference pixel in the prediction block of the target component is 1;
performing first filtering on the reference pixel by using the filter Y41 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
50. The method of claim 49, wherein the filter Y41 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of chroma components of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of chroma components of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and wherein the first boundary pixel region comprises three consecutive, neighboring first boundary pixels, and wherein a middle first boundary pixel of the three consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and wherein the second boundary pixel region comprises three consecutive, neighboring second boundary pixels, and wherein a middle second boundary pixel of the three consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
51. An image encoding method, comprising:
partitioning an image, and determining an intra prediction mode of a target component of a current coding block;
determining a prediction block of the target component of the current coding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y42 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises a row and column number of the reference pixel in the prediction block of the target component, and the row and column number of the reference pixel in the prediction block of the target component is 2;
performing first filtering on the reference pixel by using the filter Y42 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
52. The method of claim 51, wherein the filter Y42 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a prediction block of a chroma component of the current coding block and a second boundary pixel region of an upper side neighboring reconstructed block of the prediction block of the chroma component of the current coding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current coding block, and the first boundary pixel region comprises six neighboring first boundary pixels arranged in 3 rows and 2 columns, and the first boundary pixel in the middle of the second column of the six neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near an upper boundary of the current coding block, and the second boundary pixel region comprises six neighboring second boundary pixels arranged in 2 rows and 3 columns, and the second boundary pixel in the middle of the second row of the six neighboring second boundary pixels is a second reference pixel of the reference pixels.
53. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X11 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the luma component is an intra vertical-like angular prediction mode, and the distance between the reference pixel and the currently processed pixel is within a first distance range;
performing first filtering on the reference pixel by using the filter X11 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
54. The method of claim 53, wherein the filter X11 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a luma component of the current decoded block, wherein the boundary pixel region is a pixel region near a left boundary of the current decoded block, wherein the boundary pixel region comprises three consecutive, neighboring boundary pixels, and wherein a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
55. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X12 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the luma component is an intra horizontal-like angular prediction mode, and the distance between the reference pixel and the currently processed pixel is within a first distance range;
performing first filtering on the reference pixel by using the filter X12 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
56. The method of claim 55, wherein the filter X12 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of a prediction block of a luma component of the current decoded block, wherein the boundary pixel region is a pixel region near an upper boundary of the current decoded block, wherein the boundary pixel region comprises three consecutive, neighboring boundary pixels, and wherein a boundary pixel in the middle of the three consecutive, neighboring boundary pixels is the reference pixel.
57. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X13 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the luma component is an intra vertical-like angular prediction mode, and the distance between the reference pixel and the currently processed pixel is within a second distance range;
performing first filtering on the reference pixel by using the filter X13 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
58. The method of claim 57, wherein the filter X13 is configured to filter a boundary pixel region of a left neighboring reconstructed block of a prediction block of a luma component of the current decoded block, wherein the boundary pixel region is a pixel region near a left boundary of the current decoded block, wherein the boundary pixel region comprises five consecutive, neighboring boundary pixels, and wherein a boundary pixel in the middle of the five consecutive, neighboring boundary pixels is the reference pixel.
59. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X14 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the luma component is an intra horizontal-like angular prediction mode, and the distance between the reference pixel and the currently processed pixel is within a second distance range;
performing first filtering on the reference pixel by using the filter X14 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
60. The method of claim 59, wherein the filter X14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of the prediction block of the luma component of the current decoding block, wherein the boundary pixel region is a pixel region near an upper boundary of the current decoding block, the boundary pixel region comprises five consecutive neighboring boundary pixels, and the middle boundary pixel of the five consecutive neighboring boundary pixels is the reference pixel.
61. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X15 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the luma component is an intra non-angle prediction mode, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter X15 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
62. The method of claim 61, wherein the filter X15 is configured to filter a first boundary pixel region of a left neighboring reconstructed block of the prediction block of the luma component of the current decoding block and a second boundary pixel region of an upper neighboring reconstructed block of the prediction block of the luma component of the current decoding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoding block, the first boundary pixel region comprises three consecutive neighboring first boundary pixels, and the middle first boundary pixel of the three consecutive neighboring first boundary pixels is a first reference pixel of the reference pixels; and the second boundary pixel region is a pixel region near an upper boundary of the current decoding block, the second boundary pixel region comprises three consecutive neighboring second boundary pixels, and the middle second boundary pixel of the three consecutive neighboring second boundary pixels is a second reference pixel of the reference pixels.
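For the non-angle case above, one reference pixel is taken from each of the two neighboring reconstructed blocks, each smoothed over a three-pixel boundary region centered on it. A minimal sketch follows; the [1, 2, 1]/4 weights and function names are assumptions, since the claims define only the region geometry.

```python
# Hypothetical 3-tap sketch of the filter X15 behavior: for the first
# distance range, each reference pixel is smoothed with its two neighbors
# along the boundary. Coefficients are assumed, not from the patent.

def filter_3tap(boundary, pos):
    """Smooth the boundary pixel at `pos` with its two neighbors
    (clamped at the ends), using assumed [1, 2, 1]/4 weights."""
    n = len(boundary)
    left = boundary[max(pos - 1, 0)]
    right = boundary[min(pos + 1, n - 1)]
    return (left + 2 * boundary[pos] + right + 2) >> 2

def filter_x15(left_column, top_row, y, x):
    """Return the pair (first reference pixel, second reference pixel)
    for the currently processed pixel at (y, x): the first from the
    column left of the block, the second from the row above it."""
    return filter_3tap(left_column, y), filter_3tap(top_row, x)
```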
63. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X16 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the luma component is an intra non-angle prediction mode, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter X16 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
64. The method of claim 63, wherein the filter X16 is configured to filter a first boundary pixel region of a left neighboring reconstructed block of the prediction block of the luma component of the current decoding block and a second boundary pixel region of an upper neighboring reconstructed block of the prediction block of the luma component of the current decoding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoding block, the first boundary pixel region comprises five consecutive neighboring first boundary pixels, and the middle first boundary pixel of the five consecutive neighboring first boundary pixels is a first reference pixel of the reference pixels; and the second boundary pixel region is a pixel region near an upper boundary of the current decoding block, the second boundary pixel region comprises five consecutive neighboring second boundary pixels, and the middle second boundary pixel of the five consecutive neighboring second boundary pixels is a second reference pixel of the reference pixels.
65. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y11 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the chroma component is an intra vertical-like angle prediction mode, a two-step cross-component prediction mode TSCPM_T, a cross-component linear model prediction mode CCLM_A, or a multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter Y11 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
66. The method of claim 65, wherein the filter Y11 is configured to filter a boundary pixel region of a left neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the boundary pixel region is a pixel region near a left boundary of the current decoding block, the boundary pixel region comprises three consecutive neighboring boundary pixels, and the middle boundary pixel of the three consecutive neighboring boundary pixels is the reference pixel.
67. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y12 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the chroma component is an intra horizontal-like angle prediction mode, a two-step cross-component prediction mode TSCPM_L, a cross-component linear model prediction mode CCLM_L, or a multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter Y12 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
68. The method of claim 67, wherein the filter Y12 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the boundary pixel region is a pixel region near an upper boundary of the current decoding block, the boundary pixel region comprises three consecutive neighboring boundary pixels, and the middle boundary pixel of the three consecutive neighboring boundary pixels is the reference pixel.
69. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y13 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the chroma component is an intra vertical-like angle prediction mode, a two-step cross-component prediction mode TSCPM_T, a cross-component linear model prediction mode CCLM_A, or a multi-step cross-component prediction mode MCPM_T, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter Y13 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
70. The method of claim 69, wherein the filter Y13 is configured to filter a boundary pixel region of a left neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the boundary pixel region is a pixel region near a left boundary of the current decoding block, the boundary pixel region comprises five consecutive neighboring boundary pixels, and the middle boundary pixel of the five consecutive neighboring boundary pixels is the reference pixel.
71. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y14 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the chroma component is an intra horizontal-like angle prediction mode, a two-step cross-component prediction mode TSCPM_L, a cross-component linear model prediction mode CCLM_L, or a multi-step cross-component prediction mode MCPM_L, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter Y14 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
72. The method of claim 71, wherein the filter Y14 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the boundary pixel region is a pixel region near an upper boundary of the current decoding block, the boundary pixel region comprises five consecutive neighboring boundary pixels, and the middle boundary pixel of the five consecutive neighboring boundary pixels is the reference pixel.
73. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y15 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the chroma component is a normal intra non-angle prediction mode, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter Y15 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
74. The method of claim 73, wherein the filter Y15 is configured to filter a first boundary pixel region of a left neighboring reconstructed block of the prediction block of the chroma component of the current decoding block and a second boundary pixel region of an upper neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoding block, the first boundary pixel region comprises three consecutive neighboring first boundary pixels, and the middle first boundary pixel of the three consecutive neighboring first boundary pixels is a first reference pixel of the reference pixels; and the second boundary pixel region is a pixel region near an upper boundary of the current decoding block, the second boundary pixel region comprises three consecutive neighboring second boundary pixels, and the middle second boundary pixel of the three consecutive neighboring second boundary pixels is a second reference pixel of the reference pixels.
75. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y16 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the chroma component is a normal intra non-angle prediction mode, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter Y16 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
76. The method of claim 75, wherein the filter Y16 is configured to filter a first boundary pixel region of a left neighboring reconstructed block of the prediction block of the chroma component of the current decoding block and a second boundary pixel region of an upper neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoding block, the first boundary pixel region comprises five consecutive neighboring first boundary pixels, and the middle first boundary pixel of the five consecutive neighboring first boundary pixels is a first reference pixel of the reference pixels; and the second boundary pixel region is a pixel region near an upper boundary of the current decoding block, the second boundary pixel region comprises five consecutive neighboring second boundary pixels, and the middle second boundary pixel of the five consecutive neighboring second boundary pixels is a second reference pixel of the reference pixels.
77. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X21 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the luma component is an intra vertical-like angle prediction mode;
performing first filtering on the reference pixel by using the filter X21 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
78. The method of claim 77, wherein the filter X21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of the prediction block of the luma component of the current decoding block, wherein the boundary pixel region is a pixel region near an upper boundary of the current decoding block, the boundary pixel region comprises four consecutive neighboring boundary pixels, and the boundary pixel at the second position of the four consecutive neighboring boundary pixels is the reference pixel.
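Claim 78 describes a four-pixel boundary region in which the reference pixel occupies the second position, so the support is asymmetric: one pixel before the reference and two after it. A sketch of that geometry follows; the tap weights and the function name are assumptions, since the claim specifies only the region and the reference position.

```python
# Sketch of the 4-pixel boundary region described for filter X21. The
# reference pixel sits at the second position of four consecutive
# neighboring boundary pixels. The weights are hypothetical.

def filter_x21(top_row, x):
    """Filter the reference pixel at column x of the row adjacent to the
    upper boundary, using the 4-pixel window [x-1, x, x+1, x+2] in which
    the reference occupies the second position (indices clamped)."""
    n = len(top_row)
    window = [top_row[min(max(x + k, 0), n - 1)] for k in (-1, 0, 1, 2)]
    taps = (2, 6, 6, 2)  # assumed weights, sum = 16
    acc = sum(w * p for w, p in zip(taps, window))
    return (acc + 8) >> 4  # rounded division by 16
```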
79. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X22 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the luma component is an intra horizontal-like angle prediction mode;
performing first filtering on the reference pixel by using the filter X22 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
80. The method of claim 79, wherein the filter X22 is configured to filter a boundary pixel region of a left neighboring reconstructed block of the prediction block of the luma component of the current decoding block, wherein the boundary pixel region is a pixel region near a left boundary of the current decoding block, the boundary pixel region comprises four consecutive neighboring boundary pixels, and the boundary pixel at the second position of the four consecutive neighboring boundary pixels is the reference pixel.
81. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X23 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the luma component is an intra non-angle prediction mode;
performing first filtering on the reference pixel by using the filter X23 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
82. The method of claim 81, wherein the filter X23 is configured to filter a first boundary pixel region of a left neighboring reconstructed block of the prediction block of the luma component of the current decoding block and a second boundary pixel region of an upper neighboring reconstructed block of the prediction block of the luma component of the current decoding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoding block, the first boundary pixel region comprises four consecutive neighboring first boundary pixels, and the first boundary pixel at the second position of the four consecutive neighboring first boundary pixels is a first reference pixel of the reference pixels; and the second boundary pixel region is a pixel region near an upper boundary of the current decoding block, the second boundary pixel region comprises four consecutive neighboring second boundary pixels, and the second boundary pixel at the second position of the four consecutive neighboring second boundary pixels is a second reference pixel of the reference pixels.
83. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y21 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the chroma component is an intra vertical-like angle prediction mode;
performing first filtering on the reference pixel by using the filter Y21 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
84. The method of claim 83, wherein the filter Y21 is configured to filter a boundary pixel region of an upper neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the boundary pixel region is a pixel region near an upper boundary of the current decoding block, the boundary pixel region comprises four consecutive neighboring boundary pixels, and the boundary pixel at the second position of the four consecutive neighboring boundary pixels is the reference pixel.
85. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y22 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the chroma component is an intra horizontal-like angle prediction mode;
performing first filtering on the reference pixel by using the filter Y22 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
86. The method of claim 85, wherein the filter Y22 is configured to filter a boundary pixel region of a left neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the boundary pixel region is a pixel region near a left boundary of the current decoding block, the boundary pixel region comprises four consecutive neighboring boundary pixels, and the boundary pixel at the second position of the four consecutive neighboring boundary pixels is the reference pixel.
87. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter Y23 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the intra prediction mode of the target component, and the intra prediction mode of the chroma component is an intra non-angle prediction mode;
performing first filtering on the reference pixel by using the filter Y23 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
88. The method of claim 87, wherein the filter Y23 is configured to filter a first boundary pixel region of a left neighboring reconstructed block of the prediction block of the chroma component of the current decoding block and a second boundary pixel region of an upper neighboring reconstructed block of the prediction block of the chroma component of the current decoding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoding block, the first boundary pixel region comprises four consecutive neighboring first boundary pixels, and the first boundary pixel at the second position of the four consecutive neighboring first boundary pixels is a first reference pixel of the reference pixels; and the second boundary pixel region is a pixel region near an upper boundary of the current decoding block, the second boundary pixel region comprises four consecutive neighboring second boundary pixels, and the second boundary pixel at the second position of the four consecutive neighboring second boundary pixels is a second reference pixel of the reference pixels.
89. An image decoding method, comprising:
parsing a bitstream to determine an intra prediction mode of a target component of a current decoding block;
determining a prediction block of the target component of the current decoding block according to the intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to the intra prediction mode of the target component;
determining that a filter is a filter X31 according to associated information of the prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises a distance between the reference pixel and a currently processed pixel, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter X31 to obtain a filtered reference pixel; and
performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
90. The method of claim 89, wherein the filter X31 is configured to filter a first boundary pixel region of a left neighboring reconstructed block of the prediction block of the luma component of the current decoding block and a second boundary pixel region of an upper neighboring reconstructed block of the prediction block of the luma component of the current decoding block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoding block, the first boundary pixel region comprises three consecutive neighboring first boundary pixels, and the middle first boundary pixel of the three consecutive neighboring first boundary pixels is a first reference pixel of the reference pixels; and the second boundary pixel region is a pixel region near an upper boundary of the current decoding block, the second boundary pixel region comprises three consecutive neighboring second boundary pixels, and the middle second boundary pixel of the three consecutive neighboring second boundary pixels is a second reference pixel of the reference pixels.
91. An image decoding method, comprising:
analyzing the code stream, and determining an intra-frame prediction mode of a target component of a current decoding block;
determining a prediction block of a target component of the currently decoded block according to an intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter X32 according to associated information of a prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises a distance between the reference pixel and a currently processed pixel, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter X32 to obtain a filtered reference pixel;
and performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
92. The method of claim 91, wherein the filter X32 is configured to filter a first boundary pixel region of a left side neighboring reconstruction block of a prediction block of a luma component of the current decoded block and a second boundary pixel region of an upper side neighboring reconstruction block of a prediction block of a luma component of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region comprising five consecutive, neighboring first boundary pixels, and a middle first boundary pixel of the five consecutive, neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region comprising five consecutive, neighboring second boundary pixels, and a middle second boundary pixel of the five consecutive, neighboring second boundary pixels being a second reference pixel of the reference pixels.
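Claims 89–92 pair a 3-tap filter (X31) with a first distance range and a 5-tap filter (X32) with a second distance range. One way such a distance-driven selection could look is sketched below; the tap weights, the range threshold, and the clamping are all assumptions for illustration.

```python
def filter_by_distance(pixels, idx, distance, threshold=4):
    """Select the reference-pixel filter by the distance between the
    reference pixel and the currently processed pixel: 3 taps for the
    first distance range (X31), 5 taps for the second (X32)."""
    n = len(pixels)
    if distance <= threshold:
        taps, offsets = (1, 2, 1), (-1, 0, 1)               # first range
    else:
        taps, offsets = (1, 2, 2, 2, 1), (-2, -1, 0, 1, 2)  # second range
    acc = sum(t * pixels[min(max(idx + o, 0), n - 1)]
              for t, o in zip(taps, offsets))
    s = sum(taps)
    return (acc + s // 2) // s
```

On a linear ramp both filters return the center value, so the two branches can be checked against each other.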
93. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoded block;
determining a prediction block of a target component of the currently decoded block according to an intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter Y31 according to associated information of a prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises a distance between the reference pixel and a currently processed pixel, and the distance between the reference pixel and the currently processed pixel is in a first distance range;
performing first filtering on the reference pixel by using the filter Y31 to obtain a filtered reference pixel;
and performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
94. The method of claim 93, wherein the filter Y31 is configured to filter a first boundary pixel region of a left side neighboring reconstruction block of a prediction block of chroma components of the current decoded block and a second boundary pixel region of an upper side neighboring reconstruction block of the prediction block of chroma components of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region comprising three consecutive, neighboring first boundary pixels, and a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels being a first reference pixel in the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region comprising three consecutive, neighboring second boundary pixels, and a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels being a second reference pixel in the reference pixels.
95. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoded block;
determining a prediction block of a target component of the current decoded block according to an intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter Y32 according to associated information of a prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises a distance between the reference pixel and a currently processed pixel, and the distance between the reference pixel and the currently processed pixel is in a second distance range;
performing first filtering on the reference pixel by using the filter Y32 to obtain a filtered reference pixel;
and performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
96. The method of claim 95, wherein the filter Y32 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of chroma components of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of chroma components of the current decoded block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and wherein the first boundary pixel region comprises five consecutive, neighboring first boundary pixels, and wherein a middle first boundary pixel of the five consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and wherein the second boundary pixel region comprises five consecutive, neighboring second boundary pixels, and wherein a middle second boundary pixel of the five consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
97. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoded block;
determining a prediction block of a target component of the currently decoded block according to an intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter X41 according to the associated information of the prediction block of the target component, wherein the target component comprises a luminance component, the associated information comprises the number of rows and columns of the reference pixel in the prediction block of the target component, and the number of rows and columns of the reference pixel in the prediction block of the target component is 1;
performing first filtering on the reference pixel by using the filter X41 to obtain a filtered reference pixel;
and performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
98. The method of claim 97, wherein the filter X41 is configured to filter a first boundary pixel region of a left neighboring reconstruction block of a prediction block of a luma component of the current decoded block and a second boundary pixel region of an upper neighboring reconstruction block of the prediction block of the luma component of the current decoded block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and wherein the first boundary pixel region comprises three consecutive, neighboring first boundary pixels, and wherein a first boundary pixel in the middle of the three consecutive, neighboring first boundary pixels is a first reference pixel in the reference pixels, wherein the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and wherein the second boundary pixel region comprises three consecutive, neighboring second boundary pixels, and wherein a second boundary pixel in the middle of the three consecutive, neighboring second boundary pixels is a second reference pixel in the reference pixels.
99. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoded block;
determining a prediction block of a target component of the currently decoded block according to an intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter X42 according to the associated information of the prediction block of the target component, wherein the target component comprises a luminance component, the associated information comprises the number of rows and columns of the reference pixel in the prediction block of the target component, and the number of rows and columns of the reference pixel in the prediction block of the target component is 2;
performing first filtering on the reference pixel by using the filter X42 to obtain a filtered reference pixel;
and performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
100. The method of claim 99, wherein the filter X42 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of luma components of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of a predicted block of luma components of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region comprising six neighboring first boundary pixels in 3 rows and 2 columns, and a first boundary pixel in the middle of a 2 nd column of the six neighboring first boundary pixels being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region comprising six neighboring second boundary pixels in 2 rows and 3 columns, and a second boundary pixel in the middle of a second row of the six neighboring second boundary pixels being a second reference pixel of the reference pixels.
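When two reference-pixel columns are used (claim 100), the left-boundary region becomes a 3-row by 2-column block whose reference pixel is the middle pixel of column 2. A sketch of such a two-dimensional filter follows; the tap matrix is an assumption, since the claim specifies only the region geometry.

```python
import numpy as np

def filter_left_boundary_3x2(cols, y):
    """Filter the 3-row x 2-column left-boundary region; `cols` holds
    the left neighbor's two rightmost columns as an (H, 2) array, and
    the middle pixel of column 2 at row `y` is the reference pixel."""
    taps = np.array([[1, 1],
                     [2, 4],   # heaviest (assumed) tap on the reference pixel
                     [1, 1]])
    h = cols.shape[0]
    rows = [max(y - 1, 0), y, min(y + 1, h - 1)]  # clamp vertically
    region = cols[rows, :]                        # 3 rows x 2 columns
    s = int(taps.sum())
    return (int((taps * region).sum()) + s // 2) // s
```

The upper-boundary case (2 rows by 3 columns) is the transpose of the same idea.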
101. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoded block;
determining a prediction block of a target component of the current decoded block according to an intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter Y41 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the number of rows and columns of the reference pixel in the prediction block of the target component, and the number of rows and columns of the reference pixel in the prediction block of the target component is 1;
performing first filtering on the reference pixel by using the filter Y41 to obtain a filtered reference pixel;
and performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
102. The method of claim 101, wherein the filter Y41 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of chroma components of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of chroma components of the current decoded block, wherein the first boundary pixel region is a pixel region near a left boundary of the current decoded block, and wherein the first boundary pixel region comprises three consecutive, neighboring first boundary pixels, and wherein a middle first boundary pixel of the three consecutive, neighboring first boundary pixels is a first reference pixel of the reference pixels, wherein the second boundary pixel region is a pixel region near the upper boundary of the current decoded block, and wherein the second boundary pixel region comprises three consecutive, neighboring second boundary pixels, and wherein a middle second boundary pixel of the three consecutive, neighboring second boundary pixels is a second reference pixel of the reference pixels.
103. An image decoding method, comprising:
parsing a bitstream, and determining an intra prediction mode of a target component of a current decoded block;
determining a prediction block of a target component of the currently decoded block according to an intra prediction mode of the target component;
determining reference pixels for modifying the prediction block according to an intra prediction mode of the target component;
determining a filter as a filter Y42 according to associated information of the prediction block of the target component, wherein the target component comprises a chroma component, the associated information comprises the number of rows and columns of the reference pixel in the prediction block of the target component, and the number of rows and columns of the reference pixel in the prediction block of the target component is 2;
performing first filtering on the reference pixel by using the filter Y42 to obtain a filtered reference pixel;
and performing second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
104. The method of claim 103, wherein the filter Y42 is configured to filter a first boundary pixel region of a left side neighboring reconstructed block of a predicted block of chroma components of the current decoded block and a second boundary pixel region of an upper side neighboring reconstructed block of the predicted block of chroma components of the current decoded block, the first boundary pixel region being a pixel region near a left boundary of the current decoded block, and the first boundary pixel region comprising six neighboring first boundary pixels of 3 rows and 2 columns, and a first boundary pixel of the six neighboring first boundary pixels in the middle of column 2 being a first reference pixel of the reference pixels, the second boundary pixel region being a pixel region near an upper boundary of the current decoded block, and the second boundary pixel region comprising six neighboring second boundary pixels of 2 rows and 3 columns, and a second boundary pixel of the six neighboring second boundary pixels in the middle of a second row being a second reference pixel of the reference pixels.
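Read together, claims 97–104 amount to a small selection table: the component (luma or chroma) and the number of reference-pixel rows/columns (1 or 2) jointly pick the filter. A sketch of that dispatch (names mirror the claim labels; the table itself is the only content taken from the claims):

```python
def choose_filter(component, num_ref_lines):
    """Map the associated information of claims 97-104 to a filter
    label: luma/chroma x one or two reference rows/columns."""
    table = {('luma', 1): 'X41', ('luma', 2): 'X42',
             ('chroma', 1): 'Y41', ('chroma', 2): 'Y42'}
    return table[(component, num_ref_lines)]
```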
105. An image encoding apparatus, characterized by comprising:
a first determining unit, configured to divide an image and determine an intra prediction mode of a target component of a current coding block;
a second determining unit, configured to determine a prediction block of a target component of the current coding block according to an intra prediction mode of the target component;
a first filtering unit, configured to determine reference pixels for modifying the prediction block according to an intra prediction mode of the target component; determine a filter as a filter X11 according to associated information of a prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the luma component is an intra vertical-like angular prediction mode, and the distance between the reference pixel and the currently processed pixel is in a first distance range; and perform first filtering on the reference pixel by using the filter X11 to obtain a filtered reference pixel;
and a second filtering unit, configured to perform second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
106. An image decoding apparatus, comprising:
a first determining unit, configured to parse a bitstream and determine an intra prediction mode of a target component of a current decoded block;
a second determining unit for determining a prediction block of a target component of the current decoded block according to an intra prediction mode of the target component;
a first filtering unit, configured to determine reference pixels for modifying the prediction block according to an intra prediction mode of the target component; determine a filter as a filter X11 according to associated information of a prediction block of the target component, wherein the target component comprises a luma component, the associated information comprises the intra prediction mode of the target component and a distance between the reference pixel and a currently processed pixel, the intra prediction mode of the luma component is an intra vertical-like angular prediction mode, and the distance between the reference pixel and the currently processed pixel is in a first distance range; and perform first filtering on the reference pixel by using the filter X11 to obtain a filtered reference pixel;
and a second filtering unit, configured to perform second filtering on the prediction block of the target component according to the filtered reference pixel to obtain a corrected prediction block.
107. An encoder, comprising a non-volatile storage medium and a central processing unit, wherein the non-volatile storage medium stores an executable program, the central processing unit is coupled to the non-volatile storage medium, and when the executable program is executed by the central processing unit, the encoder performs the image encoding method as recited in any one of claims 1-52.
108. A decoder, comprising a non-volatile storage medium and a central processing unit, wherein the non-volatile storage medium stores an executable program, the central processing unit is coupled to the non-volatile storage medium, and when the executable program is executed by the central processing unit, the decoder performs the image decoding method as recited in any one of claims 53-104.
109. A terminal, characterized in that the terminal comprises: one or more processors, a memory, and a communication interface, wherein the memory and the communication interface are coupled to the one or more processors; the terminal communicates with other devices via the communication interface; and the memory is configured to store computer program code, the computer program code comprising instructions which, when executed by the one or more processors, cause the terminal to perform the method of any one of claims 1-104.
110. A computer program product, characterized in that it comprises instructions which, when run on a terminal, cause the terminal to carry out the method according to any one of claims 1-104.
111. A computer-readable storage medium comprising instructions that, when executed on a terminal, cause the terminal to perform the method of any one of claims 1-104.
CN202010707877.XA 2020-07-21 2020-07-21 Image encoding method, image decoding method and related device Active CN113965764B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010707877.XA CN113965764B (en) 2020-07-21 2020-07-21 Image encoding method, image decoding method and related device
TW110123867A TW202209880A (en) 2020-07-21 2021-06-29 Image encoding method, image decoding method and related devices capable of improving the accuracy of intra-frame prediction and encoding efficiency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010707877.XA CN113965764B (en) 2020-07-21 2020-07-21 Image encoding method, image decoding method and related device

Publications (2)

Publication Number Publication Date
CN113965764A CN113965764A (en) 2022-01-21
CN113965764B true CN113965764B (en) 2023-04-07

Family

ID=79460063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010707877.XA Active CN113965764B (en) 2020-07-21 2020-07-21 Image encoding method, image decoding method and related device

Country Status (2)

Country Link
CN (1) CN113965764B (en)
TW (1) TW202209880A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1589028A (en) * 2004-07-29 2005-03-02 联合信源数字音视频技术(北京)有限公司 Predicting device and method based on pixel flowing frame
CN1720747A (en) * 2003-01-10 2006-01-11 汤姆森特许公司 Defining interpolation filters for error concealment in a coded image
WO2011163517A1 (en) * 2010-06-25 2011-12-29 Qualcomm Incorporated Intra prediction mode signaling for finer spatial prediction directions
CN103081467A (en) * 2010-09-01 2013-05-01 高通股份有限公司 Filter description signaling for multi-filter adaptive filtering
CN103141100A (en) * 2010-10-01 2013-06-05 高通股份有限公司 Intra smoothing filter for video coding
JP2013138395A (en) * 2011-11-04 2013-07-11 Sharp Corp Image filtering device, image decoding device, image encoding device and data structure
JP2013150178A (en) * 2012-01-19 2013-08-01 Sharp Corp Image decoding apparatus and image encoding apparatus
WO2017084577A1 (en) * 2015-11-18 2017-05-26 Mediatek Inc. Method and apparatus for intra prediction mode using intra prediction filter in video and image compression
EP3211895A1 (en) * 2009-04-24 2017-08-30 Sony Corporation Image processing with intra prediction having fractional pixel precision
CN107302700A (en) * 2016-04-15 2017-10-27 谷歌公司 Adaptive direction loop filter
CN108293130A (en) * 2015-11-27 2018-07-17 联发科技股份有限公司 Pass through the device and method of the coding and decoding video of intra prediction
WO2019010267A1 (en) * 2017-07-05 2019-01-10 Arris Enterprises Llc Post-filtering for weighted angular prediction
EP3570543A1 (en) * 2017-01-12 2019-11-20 Sony Corporation Image processing device and image processing method
WO2020013583A1 (en) * 2018-07-13 2020-01-16 엘지전자 주식회사 Image decoding method and device according to intra prediction in image coding system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8472527B2 (en) * 2006-09-13 2013-06-25 Texas Instruments Incorporated Hierarchical motion estimation using original frame for sub-sampled reference
MX355896B (en) * 2010-12-07 2018-05-04 Sony Corp Image processing device and image processing method.
US10425648B2 (en) * 2015-09-29 2019-09-24 Qualcomm Incorporated Video intra-prediction using position-dependent prediction combination for video coding
US10225578B2 (en) * 2017-05-09 2019-03-05 Google Llc Intra-prediction edge filtering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Decision-tree-based reference pixel filtering algorithm for 360-degree video coding" (基于决策树的360度视频编码参考像素滤波算法); Kang Jianyuan; Industrial Technology Innovation (工业技术创新); 2020-04-25; full text *

Also Published As

Publication number Publication date
CN113965764A (en) 2022-01-21
TW202209880A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
JP7055745B2 (en) Geometric transformations for filters for video coding
WO2021238540A1 (en) Image encoding method, image decoding method, and related apparatuses
WO2021185257A1 (en) Image coding method, image decoding method and related apparatuses
US11516477B2 (en) Intra block copy scratch frame buffer
AU2023202986A1 (en) Method and apparatus for intra prediction
JP2024029063A (en) Position dependent spatial varying transform for video coding
WO2021244197A1 (en) Image encoding method, image decoding method, and related apparatuses
CN114071161B (en) Image encoding method, image decoding method and related devices
WO2021135856A1 (en) Video coding method and apparatus, video decoding method and apparatus, device, and storage medium
WO2022022622A1 (en) Image coding method, image decoding method, and related apparatus
WO2022037300A1 (en) Encoding method, decoding method, and related devices
CN113965764B (en) Image encoding method, image decoding method and related device
US20240214561A1 (en) Methods and devices for decoder-side intra mode derivation
WO2023154359A1 (en) Methods and devices for multi-hypothesis-based prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant