WO2024098263A1 - Encoding and decoding method, bitstream, encoder, decoder and storage medium - Google Patents

Encoding and decoding method, bitstream, encoder, decoder and storage medium

Info

Publication number
WO2024098263A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
prediction mode
block
intra
mode
Prior art date
Application number
PCT/CN2022/130727
Other languages
English (en)
French (fr)
Inventor
霍俊彦
马彦卓
杨付正
张振尧
李明
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to PCT/CN2022/130727
Publication of WO2024098263A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction

Definitions

  • the embodiments of the present application relate to the field of video coding and decoding technology, and in particular, to a coding and decoding method, a bit stream, an encoder, a decoder, and a storage medium.
  • JVET Joint Video Exploration Team
  • VVC Versatile Video Coding
  • cross-component prediction technology mainly includes the Cross-Component Linear Model (CCLM) mode.
  • CCLM Cross-Component Linear Model
  • the candidate list construction process is incomplete, resulting in poor intra-frame chrominance prediction and reduced encoding and decoding efficiency.
  • the embodiments of the present application provide a coding and decoding method, a bit stream, an encoder, a decoder and a storage medium, which can not only improve the accuracy of intra-frame chrominance prediction, but also improve the coding and decoding efficiency, thereby improving the coding and decoding performance.
  • an embodiment of the present application provides a decoding method, including:
  • a prediction value of a second color component of the current block is determined according to the reference intra-frame prediction mode parameter.
  • an embodiment of the present application provides an encoding method, including:
  • a predicted difference value of the second color component of the current block is determined.
  • an embodiment of the present application provides a code stream, which is generated by bit encoding according to information to be encoded; wherein the information to be encoded includes at least one of the following:
  • the predicted difference value of the second color component of the current block, the mode index number, and the first parameter.
  • an encoder comprising a first determination unit and a first prediction unit; wherein:
  • a first determination unit is configured to determine a reference block of the current block; wherein the reference block is a neighboring block of the current block; and when the prediction mode of the second color component of the reference block satisfies a first condition, determine a reference intra-frame prediction mode parameter according to the reference block;
  • a first prediction unit configured to determine a prediction value of a second color component of a current block according to a reference intra-frame prediction mode parameter
  • the first determination unit is further configured to determine a predicted difference value of the second color component of the current block according to the predicted value of the second color component of the current block.
  • an encoder comprising a first memory and a first processor; wherein:
  • a first memory for storing a computer program that can be run on the first processor
  • the first processor is used to execute the method described in the second aspect when running the computer program.
  • an embodiment of the present application provides a decoder, comprising a second determination unit and a second prediction unit; wherein:
  • a second determination unit is configured to determine a reference block of the current block; wherein the reference block is a neighboring block of the current block; and when the prediction mode of the second color component of the reference block satisfies the first condition, determine the reference intra-frame prediction mode parameter according to the reference block;
  • the second prediction unit is configured to determine a prediction value of a second color component of the current block according to a reference intra-frame prediction mode parameter.
  • an embodiment of the present application provides a decoder, including a second memory and a second processor; wherein:
  • a second memory for storing a computer program that can be run on a second processor
  • the second processor is used to execute the method described in the first aspect when running the computer program.
  • an embodiment of the present application provides a computer-readable storage medium, which stores a computer program.
  • when the computer program is executed, it implements the method described in the first aspect, or implements the method described in the second aspect.
  • the embodiment of the present application provides a coding and decoding method, a code stream, an encoder, a decoder and a storage medium. Whether it is an encoding end or a decoding end, a reference block of a current block is determined, and the reference block is an adjacent block of the current block; when the prediction mode of the second color component of the reference block meets the first condition, the reference intra-frame prediction mode parameter is determined according to the reference block; and the prediction value of the second color component of the current block is determined according to the reference intra-frame prediction mode parameter.
  • the prediction difference value of the second color component of the current block can be determined according to the prediction value of the second color component of the current block; so that at the decoding end, the reconstruction value of the second color component of the current block can be determined according to the prediction value of the second color component of the current block.
  • the reference intra-frame prediction mode parameters of non-CCLM can be determined; according to the reference intra-frame prediction mode parameters, the completeness and diversity of the intra-frame chrominance prediction mode can be improved, thereby improving the accuracy of the intra-frame chrominance prediction, and also improving the coding and decoding efficiency, thereby improving the coding and decoding performance.
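The shared derivation flow described above (determine a neighbouring reference block, check whether its second-color-component prediction mode satisfies the first condition, derive a non-CCLM reference intra-frame prediction mode parameter, then predict the chroma component) can be sketched as follows. This is a minimal, hypothetical sketch; every function and variable name here is an assumption for illustration, not taken from the application text.

```python
def derive_chroma_prediction(current_block, neighbours,
                             satisfies_first_condition,
                             derive_ref_intra_mode, predict):
    """Scan adjacent reference blocks; when one satisfies the first
    condition (e.g. its chroma prediction mode is CCLM), derive a
    non-CCLM reference intra prediction mode from it and use that
    mode to predict the second colour component of the current block."""
    for ref in neighbours:                    # adjacent reference blocks
        if satisfies_first_condition(ref):    # the "first condition" above
            mode = derive_ref_intra_mode(ref) # reference intra mode parameter
            return predict(current_block, mode)
    return None  # otherwise fall back to the existing list construction
```

The callbacks are injected so that the same skeleton reads naturally for both the encoding and decoding ends, mirroring the symmetry the application describes.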
  • FIG1 is a schematic diagram of the positional relationship between a luminance CU and a chrominance CU provided in an embodiment of the present application;
  • FIG2 is a schematic diagram of another positional relationship between a luminance CU and a chrominance CU provided in an embodiment of the present application;
  • FIG3 is a schematic diagram of the position relationship between another luminance CU and chrominance CU provided in an embodiment of the present application;
  • FIG4 is a schematic diagram of the position relationship between another luminance CU and chrominance CU provided in an embodiment of the present application;
  • FIG5 is a schematic diagram of the distribution of reference chrominance pixel positions adjacent to a current block provided by an embodiment of the present application.
  • FIG6 is a schematic block diagram of a composition of an encoder provided in an embodiment of the present application.
  • FIG7 is a schematic block diagram of a decoder provided in an embodiment of the present application.
  • FIG8 is a schematic diagram of a network architecture of a coding and decoding system provided in an embodiment of the present application.
  • FIG9 is a first flowchart of a decoding method provided in an embodiment of the present application.
  • FIG10A is a first schematic diagram of the position distribution of reference chroma pixels provided in an embodiment of the present application.
  • FIG10B is a first schematic diagram of the position distribution of reference luma pixels provided in an embodiment of the present application.
  • FIG11A is a second schematic diagram of the position distribution of reference chroma pixels provided in an embodiment of the present application.
  • FIG11B is a second schematic diagram of the position distribution of reference luma pixels provided in an embodiment of the present application.
  • FIG12A is a third schematic diagram of the position distribution of reference chroma pixels provided in an embodiment of the present application.
  • FIG12B is a third schematic diagram of the position distribution of reference luma pixels provided in an embodiment of the present application.
  • FIG13A is a fourth schematic diagram of the position distribution of reference chroma pixels provided in an embodiment of the present application.
  • FIG13B is a fourth schematic diagram of the position distribution of reference luma pixels provided in an embodiment of the present application.
  • FIG14 is a schematic histogram of gradient intensity values corresponding to an intra-frame prediction mode provided in an embodiment of the present application.
  • FIG15 is a second flow chart of a decoding method provided in an embodiment of the present application.
  • FIG16 is a third flowchart of a decoding method provided in an embodiment of the present application.
  • FIG17 is a schematic diagram of a flow chart of an encoding method provided in an embodiment of the present application.
  • FIG18 is a schematic diagram of the structure of an encoder provided in an embodiment of the present application.
  • FIG19 is a schematic diagram of a specific hardware structure of an encoder provided in an embodiment of the present application.
  • FIG20 is a schematic diagram of the composition structure of a decoder provided in an embodiment of the present application.
  • FIG21 is a schematic diagram of a specific hardware structure of a decoder provided in an embodiment of the present application.
  • FIG22 is a schematic diagram of the composition structure of a coding and decoding system provided in an embodiment of the present application.
  • the terms "first\second\third" involved in the embodiments of the present application are only used to distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that "first\second\third" can be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here.
  • the first color component, the second color component and the third color component are generally used to represent a coding block (Coding Block, CB); wherein the three color components are a luminance component, a blue chrominance component and a red chrominance component, respectively.
  • the luminance component is usually represented by the symbol Y
  • the blue chrominance component is usually represented by the symbol Cb or U
  • the red chrominance component is usually represented by the symbol Cr or V; in this way, the video image can be represented in the YCbCr format or in the YUV format.
  • the cross-component prediction technology mainly includes the cross-component linear model (CCLM) prediction mode and the multi-directional linear model (MDLM) prediction mode.
  • CCLM cross-component linear model
  • MDLM multi-directional linear model
  • the corresponding prediction model can realize prediction from the first color component to the second color component, from the second color component to the first color component, from the first color component to the third color component, from the third color component to the first color component, from the second color component to the third color component, or from the third color component to the second color component.
  • taking a prediction model of the form Pred_C(i,j) = α·Rec_L(i,j) + β as an example, wherein:
  • i,j represent the position coordinates of the pixel to be predicted in the coding block, i represents the horizontal direction, and j represents the vertical direction;
  • Pred_C(i,j) represents the chrominance prediction value corresponding to the pixel to be predicted at the position coordinates (i,j) in the coding block;
  • Rec_L(i,j) represents the luminance reconstruction value (after downsampling) corresponding to the pixel to be predicted at the position coordinates (i,j) in the same coding block;
  • α and β represent model factors, which can be derived from reference pixels.
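As an illustrative sketch of this linear model, the following derives α and β by a simple least-squares fit over the reference pixels and then applies Pred_C(i,j) = α·Rec_L(i,j) + β per sample. Note that the least-squares derivation is an assumption for demonstration; practical codecs typically derive the model factors from selected min/max reference samples instead.

```python
def derive_cclm_params(ref_luma, ref_chroma):
    """Fit chroma ≈ alpha * luma + beta over the reference pixels
    (a simplified stand-in for the normative model-factor derivation)."""
    n = len(ref_luma)
    mean_l = sum(ref_luma) / n
    mean_c = sum(ref_chroma) / n
    num = sum((l - mean_l) * (c - mean_c)
              for l, c in zip(ref_luma, ref_chroma))
    den = sum((l - mean_l) ** 2 for l in ref_luma)
    alpha = num / den if den else 0.0
    beta = mean_c - alpha * mean_l
    return alpha, beta

def cclm_predict(rec_luma, alpha, beta):
    # Pred_C(i, j) = alpha * Rec_L(i, j) + beta, applied to every
    # (downsampled) reconstructed luma sample of the block.
    return [[alpha * v + beta for v in row] for row in rec_luma]
```

For example, reference pairs (100, 60), (120, 70), (140, 80), (160, 90) yield α = 0.5 and β = 10.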
  • intra-frame chroma prediction modes such as the INTRA_LT_CCLM, INTRA_L_CCLM and INTRA_T_CCLM modes are also called cross-component linear model (CCLM) modes; the PLANAR, DC, ANGULAR18, ANGULAR50 and DM (Direct Mode) modes, these five intra-frame chroma prediction modes, are also called non-CCLM modes.
  • CCLM cross-component linear model
  • the intra-frame chroma prediction mode can be set to be equal to the intra-frame luminance prediction mode.
  • different values of cclm_mode_flag may correspond to different intra-frame chroma prediction modes (chroma intra modes). For example, when cclm_mode_flag is equal to 0, the intra-frame chroma prediction mode is a non-CCLM mode; when cclm_mode_flag is equal to 1, the intra-frame chroma prediction mode is a CCLM mode.
  • the DM mode in the embodiment of the present application, it refers to the case where cclm_mode_flag is equal to 0 and intra_chroma_pred_mode is equal to 4, that is, the intra-frame chroma prediction mode index number is directly set to be equal to the intra-frame luminance prediction mode index number.
  • when the intra_chroma_pred_mode value is equal to one of the four index numbers 0-3, the intra-frame chroma prediction mode can also be determined according to the intra-frame luminance prediction mode index number.
  • the difference from the "DM mode" is that the correspondence is not one-to-one.
  • the luminance component of the current block can be referred to as a luminance block
  • the chrominance component of the current block can be referred to as a chrominance block.
  • At least one coding unit (Coding Unit, CU) can be divided in the luminance block, which can be referred to as a "luminance CU” in the embodiment of the present application; at least one coding unit can also be divided in the chrominance block, which can be referred to as a "chrominance CU” in the embodiment of the present application.
  • the DM mode refers to directly using the luminance prediction mode information of the corresponding position.
  • the luminance block (the entire diagonal filling area in the left figure)
  • the chrominance block (the entire diagonal filling area in the right figure)
  • the luminance component at the position corresponding to the chrominance CU may contain multiple luminance CUs, as shown in FIG1.
  • in this case, the chrominance CU inherits the intra-frame prediction mode of the CU at the center position of the corresponding luminance block.
  • the luminance block (the entire diagonal filling area in the left figure) and the chrominance block (the entire diagonal filling area in the right figure) use the same block partitioning structure.
  • in this case, the luminance component at the position corresponding to the chrominance CU contains only one luminance CU, as shown in FIG2.
  • MaxChromaCandidateListNum refers to a preset number, which indicates the maximum number of modes that can be stored in the chroma candidate list.
  • in one case, the encoder needs to construct the complete list, while the decoder only needs to construct the list up to the mode corresponding to the list index transmitted in the bitstream;
  • in another case, both the encoder and the decoder need to construct the complete list, and the decoder then selects the mode corresponding to the list index transmitted in the bitstream;
  • for the encoder in the first case, a complete list needs to be constructed for rate-distortion optimization, but there may be a fast algorithm (including but not limited to performing rate-distortion optimization on only the first few modes), in which case the encoder does not need to construct the entire mode list.
  • the intra-frame luminance prediction modes of the CUs where the co-located luminance pixels C, TL, TR, BL and BR corresponding to the current chrominance coding block are located are added in sequence.
  • the position of the co-located luminance pixel corresponding to the upper left corner of the current block relative to the luminance pixel in the upper left corner of the image (that is, the position of the luminance pixel TL) is (xCb, yCb), and the width of the co-located luminance area corresponding to the current block (the filling area of the entire diagonal line in the left figure) is cbWidth, and the height is cbHeight.
  • the coordinates of the position of the luminance pixel C are (xCb+cbWidth/2, yCb+cbHeight/2);
  • the coordinates of the position of the luminance pixel TL are (xCb, yCb);
  • the coordinates of the position of the luminance pixel TR are (xCb+cbWidth-1, yCb);
  • the coordinates of the position of the luminance pixel BL are (xCb, yCb+cbHeight-1);
  • the coordinates of the position of the luminance pixel BR are (xCb+cbWidth-1, yCb+cbHeight-1).
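The five co-located luma positions enumerated above can be computed directly from (xCb, yCb) and the block dimensions; the small helper below just restates those formulas (integer division is used for the centre, matching sample-grid coordinates):

```python
def colocated_luma_positions(xCb, yCb, cbWidth, cbHeight):
    """Positions C, TL, TR, BL, BR inside the co-located luma area;
    (xCb, yCb) is the top-left luma sample of that area."""
    return {
        "C":  (xCb + cbWidth // 2, yCb + cbHeight // 2),  # centre
        "TL": (xCb, yCb),                                  # top-left
        "TR": (xCb + cbWidth - 1, yCb),                    # top-right
        "BL": (xCb, yCb + cbHeight - 1),                   # bottom-left
        "BR": (xCb + cbWidth - 1, yCb + cbHeight - 1),     # bottom-right
    }
```

For instance, a 16×8 co-located area at (64, 32) has its centre sample C at (72, 36).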
  • if the partition tree type treeType is a single tree (SINGLE_TREE), as shown in FIG4, the process of adding the luminance prediction mode of the CU where C is located in step (1) is performed;
  • if the partition tree type treeType is a dual tree (DUAL_TREE), as shown in FIG3, the following operations are performed:
  • the derivation process of the prediction mode lumaIntraPredMode of the corresponding co-located luma block is as follows:
  • the center position refers to the luminance sample point with coordinates (xCb+cbWidth/2, yCb+cbHeight/2).
  • if CuPredMode[0][xCb+cbWidth/2][yCb+cbHeight/2] is MODE_IBC or MODE_PLT,
  • then lumaIntraPredMode is set equal to INTRA_DC;
  • the array CuPredMode[chType][x][y] refers to the prediction mode used by the luminance block or chrominance block containing the pixel point at coordinates (x, y)
  • chType is 0 for the luminance component
  • chType is 1 for the chrominance component.
  • otherwise, lumaIntraPredMode is set equal to IntraPredModeY[xCb+cbWidth/2][yCb+cbHeight/2]; wherein the array IntraPredModeY[x][y] refers to the intra prediction mode used by the block containing the pixel point with coordinates (x, y).
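The derivation rule above can be sketched as follows. For brevity the arrays CuPredMode[0][x][y] and IntraPredModeY[x][y] are represented as dictionaries keyed by (x, y); that representation, and the INTRA_DC index value, are assumptions for illustration.

```python
INTRA_DC = 1  # assumed index of the DC intra prediction mode

def derive_luma_intra_pred_mode(cu_pred_mode, intra_pred_mode_y,
                                xCb, yCb, cbWidth, cbHeight):
    """If the CU covering the centre luma sample uses IBC or palette
    mode (MODE_IBC / MODE_PLT), fall back to INTRA_DC; otherwise
    inherit IntraPredModeY at the centre position."""
    centre = (xCb + cbWidth // 2, yCb + cbHeight // 2)
    if cu_pred_mode[centre] in ("MODE_IBC", "MODE_PLT"):
        return INTRA_DC
    return intra_pred_mode_y[centre]
```

The fallback is needed because IBC and palette CUs carry no angular intra mode that a chroma block could meaningfully inherit.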
  • Table 2 shows the mapping relationship between the chroma subsampling format of the digital video and sps_chroma_format_idc.
  • the specific derivation rules for converting the luminance prediction mode of the same-position luminance block to the chrominance prediction mode are as follows:
  • mode Y of the intra chroma prediction mode may be derived using mode X of the intra luma prediction mode lumaIntraPredMode as specified in Table 3;
  • the intra chroma prediction mode is equal to the intra luma prediction mode lumaIntraPredMode.
  • Table 3 shows the mapping relationship between the intra-frame luma prediction mode X and the intra-frame chroma prediction mode Y, which is specifically as follows.
  • the intra-frame chroma prediction modes of the coded chroma blocks where the reference chroma pixels 0, 1, 2, 3, and 4 adjacent to the current block are located are added in sequence.
  • the position information of chroma pixel 0 is (xCb-1, yCb+cbHeight-1);
  • the position information of chroma pixel 1 is (xCb+cbWidth-1, yCb-1);
  • the position information of chroma pixel 2 is (xCb-1, yCb+cbHeight);
  • the position information of chroma pixel 3 is (xCb+cbWidth, yCb-1);
  • the position information of chroma pixel 4 is (xCb-1, yCb-1).
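The five neighbouring reference positions listed above, in scan order 0 through 4, can be computed as:

```python
def neighbour_chroma_positions(xCb, yCb, cbWidth, cbHeight):
    """Reference chroma pixels 0..4 adjacent to the current block,
    in the order they are scanned when filling the candidate list."""
    return [
        (xCb - 1, yCb + cbHeight - 1),  # 0: left column, bottom row
        (xCb + cbWidth - 1, yCb - 1),   # 1: above row, rightmost column
        (xCb - 1, yCb + cbHeight),      # 2: below-left
        (xCb + cbWidth, yCb - 1),       # 3: above-right
        (xCb - 1, yCb - 1),             # 4: above-left corner
    ]
```

For a 4×4 block at (8, 8) this yields (7,11), (11,7), (7,12), (12,7) and (7,7).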
  • the intra chroma prediction mode of the neighboring coded chroma blocks is directly added.
  • the first way is to determine whether the mode is an angular mode. If it is an angular mode, add the angular modes obtained by offsetting the angular mode clockwise and counterclockwise by one minimum angle unit. If the mode is not an angular mode, do nothing.
  • the second way is to directly add 1 or -1 to the corresponding mode index value: if the mode index value is 0, only add the mode with index value 1; if the mode index value is the maximum mode index, only add the mode whose index value is the maximum mode index minus 1. In addition, if there is only one mode, only the two offset angular modes of that mode are added; if it is not an angular mode, step (3) is not executed.
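The index-based "second way" can be sketched as follows; the function name and the treatment of the edge indices simply restate the rule above, while the choice of returning a list is an assumption for illustration.

```python
def offset_modes(mode_idx, max_mode_idx):
    """Return the candidate modes obtained by offsetting mode_idx by
    +1 / -1, clamped at the two ends of the mode index range."""
    if mode_idx == 0:
        return [1]                    # only the +1 neighbour exists
    if mode_idx == max_mode_idx:
        return [max_mode_idx - 1]     # only the -1 neighbour exists
    return [mode_idx + 1, mode_idx - 1]
```

With a maximum mode index of 66, mode 50 contributes modes 51 and 49, while modes 0 and 66 contribute only one neighbour each.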
  • the default list includes but is not limited to the modes described later.
  • the modes in the list are PLANAR_IDX, VER_IDX, HOR_IDX, DC_IDX, VDIA_IDX, VER_IDX-4, VER_IDX+4, HOR_IDX-4, and HOR_IDX+4.
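Filling the candidate list with these default modes can be sketched as below. Skipping duplicates and stopping at MaxChromaCandidateListNum are assumptions about the filling behaviour, inferred from the list-size limit described above rather than stated normatively.

```python
# Default modes in the order given above; the symbolic names stand in
# for their numeric mode indices.
DEFAULT_MODES = ["PLANAR_IDX", "VER_IDX", "HOR_IDX", "DC_IDX", "VDIA_IDX",
                 "VER_IDX-4", "VER_IDX+4", "HOR_IDX-4", "HOR_IDX+4"]

def fill_candidate_list(candidates, max_num):
    """Append default modes (skipping duplicates) until the chroma
    candidate list reaches max_num (MaxChromaCandidateListNum) entries."""
    out = list(candidates)
    for m in DEFAULT_MODES:
        if len(out) >= max_num:
            break
        if m not in out:
            out.append(m)
    return out[:max_num]
```

For example, a list already containing DC_IDX and a limit of 4 is completed with PLANAR_IDX, VER_IDX and HOR_IDX.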
  • a list of non-CCLM intra-frame prediction modes is used to predict the chrominance component of the current block, and the CCLM modes of the adjacent coded chrominance blocks are ignored.
  • this approach has certain defects.
  • since the CCLM mode, which uses the linear relationship between components for prediction, achieves a better prediction effect on coded blocks with different content characteristics, many chrominance coded blocks in one frame of an image use the CCLM mode.
  • the construction process of the non-CCLM intra-frame prediction mode list ignores the CCLM modes of the adjacent coded chrominance blocks, so some spatial correlation is lost; that is, the acquisition and construction process of the existing non-CCLM chrominance prediction mode list is incomplete, resulting in a poor chrominance prediction effect.
  • an embodiment of the present application provides a decoding method: determining a reference block of a current block, wherein the reference block is an adjacent block of the current block; when the prediction mode of the second color component of the reference block satisfies a first condition, determining a reference intra-frame prediction mode parameter according to the reference block; and determining a prediction value of the second color component of the current block according to the reference intra-frame prediction mode parameter.
  • An embodiment of the present application also provides an encoding method for determining a reference block of a current block; wherein the reference block is an adjacent block of the current block; when the prediction mode of the second color component of the reference block satisfies a first condition, determining a reference intra-frame prediction mode parameter according to the reference block; determining a prediction value of the second color component of the current block according to the reference intra-frame prediction mode parameter; and determining a prediction difference value of the second color component of the current block according to the prediction value of the second color component of the current block.
  • both the encoding end and the decoding end can determine the reference intra-frame prediction mode parameters of the non-CCLM by analyzing the relevant parameters of the reference blocks adjacent to the current block; based on the reference intra-frame prediction mode parameters, the completeness and diversity of the intra-frame chrominance prediction mode can be improved, thereby improving the accuracy of the intra-frame chrominance prediction, and also improving the encoding and decoding efficiency, thereby improving the encoding and decoding performance.
  • the encoder 100 may include a transform and quantization unit 101, an intra-frame estimation unit 102, an intra-frame prediction unit 103, a motion compensation unit 104, a motion estimation unit 105, an inverse transform and inverse quantization unit 106, a filter control analysis unit 107, a filtering unit 108, an encoding unit 109 and a decoded image cache unit 110, etc.
  • the filtering unit 108 can implement deblocking filtering and sample adaptive offset (Sample Adaptive Offset, SAO) filtering
  • the encoding unit 109 can implement header information encoding and context-based adaptive binary arithmetic coding (Context-based Adaptive Binary Arithmetic Coding, CABAC).
  • a video coding block can be obtained by dividing a coding tree unit (CTU); then, the residual pixel information obtained after intra-frame or inter-frame prediction is processed by the transform and quantization unit 101, which transforms the residual information from the pixel domain to the transform domain and quantizes the resulting transform coefficients to further reduce the bit rate;
  • the intra-frame estimation unit 102 and the intra-frame prediction unit 103 are used to perform intra-frame prediction on the video coding block; specifically, the intra-frame estimation unit 102 and the intra-frame prediction unit 103 are used to determine the intra-frame prediction mode to be used to encode the video coding block;
  • the motion compensation unit 104 and the motion estimation unit 105 are used to perform inter-frame prediction coding of the received video coding block relative to one or more blocks in one or more reference frames to provide temporal prediction information;
  • the motion estimation performed by the motion estimation unit 105 is a process of generating a motion vector, which can estimate the motion of the video coding block; the motion compensation unit 104 then performs motion compensation based on the motion vector determined by the motion estimation unit 105;
  • after determining the intra-frame prediction mode, the intra-frame prediction unit 103 is also used to provide the selected intra-frame prediction data to the encoding unit 109, and the motion estimation unit 105 also sends the calculated motion vector data to the encoding unit 109; in addition, the inverse transform and inverse quantization unit 106 is used to reconstruct the video coding block, reconstructing the residual block in the pixel domain, and block artifacts of the reconstructed residual block are removed by the filter control analysis unit 107 and the filtering unit 108.
  • the encoding unit 109 is used to encode various coding parameters and quantized transform coefficients.
  • the context content can be based on adjacent coding blocks and can be used to encode information indicating the determined intra-frame prediction mode, outputting the bit stream of the video signal; the decoded image buffer unit 110 is used to store the reconstructed video coding blocks for prediction reference. As video image encoding proceeds, new reconstructed video coding blocks are continuously generated, and these reconstructed video coding blocks are stored in the decoded image buffer unit 110.
  • the decoder 200 includes a decoding unit 201, an inverse transform and inverse quantization unit 202, an intra-frame prediction unit 203, a motion compensation unit 204, a filtering unit 205, and a decoded image cache unit 206, etc., wherein the decoding unit 201 can implement header information decoding and CABAC decoding, and the filtering unit 205 can implement deblocking filtering and SAO filtering.
  • after the bit stream of the video signal is output, the bit stream is input into the decoder 200 and first passes through the decoding unit 201 to obtain the decoded transform coefficients; the transform coefficients are processed by the inverse transform and inverse quantization unit 202 to generate residual blocks in the pixel domain; the intra-frame prediction unit 203 can be used to generate prediction data of the current video decoding block based on the determined intra-frame prediction mode and data of previously decoded blocks from the current frame or picture; the motion compensation unit 204 determines the prediction information for the video decoding block by analyzing the motion vector and other associated syntax elements, and uses the prediction information to generate a predictive block of the video decoding block being decoded; a decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 202 and the corresponding predictive block generated by the intra-frame prediction unit 203 or the motion compensation unit 204; the decoded video signal then passes through the filtering unit 205 to remove blocking artifacts.
  • the embodiment of the present application also provides a network architecture of a codec system including an encoder and a decoder, wherein FIG8 shows a schematic diagram of a network architecture of a codec system provided by an embodiment of the present application.
  • the network architecture includes one or more electronic devices 13 to 1N and a communication network 01, wherein the electronic devices 13 to 1N can perform video interaction through the communication network 01.
  • the electronic device can be various types of devices with video codec functions during implementation, for example, the electronic device can include a smart phone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television, a sensor device, a server, etc., and the embodiment of the present application is not specifically limited.
  • the decoder or encoder described in the embodiment of the present application can be the above-mentioned electronic device.
  • the method of the embodiment of the present application is mainly applied to the intra-frame prediction unit 103 part shown in Figure 6 and the intra-frame prediction unit 203 part shown in Figure 7.
  • the embodiment of the present application can be applied to both the encoder and the decoder, and can even be applied to both the encoder and the decoder at the same time, but the embodiment of the present application is not specifically limited.
  • when applied to the intra-frame prediction unit 103, the "current block" specifically refers to the coding block currently to be intra-frame predicted; when applied to the intra-frame prediction unit 203, the "current block" specifically refers to the decoding block currently to be intra-frame predicted.
  • referring to FIG9, a first flowchart of a decoding method provided by an embodiment of the present application, applied to a decoder, is shown. As shown in FIG9, the method may include:
  • the decoding method of the embodiment of the present application is applied to a decoding apparatus, or a device integrated with the decoding apparatus (also referred to as a "decoder").
  • the decoding method of the embodiment of the present application may specifically refer to an intra-frame prediction method. Assuming that the first color component is a luminance component and the second color component is a chrominance component, then more specifically, this is a method for deriving an intra-frame chrominance prediction mode.
  • the current block includes at least a first color component and a second color component.
  • when the first color component is predicted, the block at this time can be simply referred to as a first color component block; and when the first color component is a luminance component, the first color component block can also be referred to as a luminance block.
  • when the second color component is predicted, the block at this time can be simply referred to as a second color component block; and when the second color component is a chrominance component, the second color component block can also be referred to as a chrominance block.
  • the current block may refer to a decoded block in a video image that is currently to be intra-predicted.
  • the reference block is an adjacent block of the current block; and the "adjacent" here may refer to spatial adjacent, temporal adjacent, etc., without specific limitation.
  • the reference block of the current block may be an adjacent decoded block of the current block.
  • the reference chroma block is an adjacent decoded chroma block of the current block.
  • the reference intra-frame prediction mode parameters can be derived according to the reference block.
  • the first condition may include: the prediction mode of the second color component of the reference block is a first preset mode.
  • the first preset mode may be a non-angular prediction mode.
  • the first preset mode may include at least one of the following: an inter-component prediction mode, an IBC mode, a MIP mode, and a Palette mode.
  • the inter-component prediction mode may be a CCLM mode.
  • the first preset mode may be an inter-frame prediction mode.
  • the prediction mode of the second color component of the reference block is the first preset mode, such as the CCLM mode
  • the reference intra-frame prediction mode parameters can be derived according to the reference block of the current block.
  • the first condition may include: the prediction mode of the second color component of the reference block is not a second preset mode.
  • the second preset mode may be an angle prediction mode.
  • the second preset mode may be a traditional prediction mode.
  • the second preset mode may be a DC mode or a Planar mode.
  • the reference intra-frame prediction mode parameters can also be derived based on the reference block of the current block.
  • the DC mode may also be represented by INTRA_DC
  • the Planar mode may also be represented by INTRA_PLANAR.
  • the first condition may include: decoding a bitstream and determining a first parameter; wherein the first parameter indicates that the reference intra-frame prediction mode parameters are determined according to the reference block.
  • the first parameter can also be written into the bitstream by the encoding end; the decoding end then determines the first parameter by decoding the bitstream, and the first parameter indicates that the reference intra-frame prediction mode parameters need to be determined based on the reference block.
  • determining the reference intra-frame prediction mode parameters according to the reference block may include: determining the reference pixel according to the reference block; determining the first parameter according to the reconstructed sample value of the reference pixel; and determining the reference intra-frame prediction mode parameters according to the first parameter.
  • determining a reference pixel according to a reference block may include: determining the reference pixel according to pixels in an adjacent area of the reference block; wherein the adjacent area includes at least one of the following: a left adjacent area, an upper adjacent area, and an upper-left adjacent area.
  • the left adjacent area of the reference block, the upper adjacent area of the reference block, and the upper left adjacent area of the reference block as adjacent areas can all be referred to as adjacent areas of the reference block.
  • the current block is a chroma block
  • the reference block of the current block is an adjacent decoded chroma block
  • the adjacent area of the adjacent decoded chroma block can be a chroma area composed of a plurality of dot pixels
  • the reference block is a co-located luminance block of an adjacent decoded chroma block
  • the adjacent area of the co-located luminance block of the adjacent decoded chroma block can be a luminance area composed of a plurality of dot pixels.
  • determining the reference pixel according to the reference block may include: determining the reference pixel according to a pixel in the reference block.
  • the current block is a chroma block
  • the reference block of the current block is an adjacent decoded chroma block
  • the reference pixel is a pixel in the adjacent decoded chroma block
  • the reference block is a co-located luminance block of an adjacent decoded chroma block
  • the reference pixel is a pixel in the co-located luminance block of the adjacent decoded chroma block.
  • determining the first parameter based on the reconstructed sample value of the reference pixel may include: performing gradient calculation on the reconstructed sample value of the reference pixel to determine the horizontal gradient value and the vertical gradient value of the reference pixel; performing angle mapping based on the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one intra-frame prediction mode corresponding to the reference pixel; performing gradient strength calculation based on the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one gradient strength value corresponding to the reference pixel; determining the first parameter based on at least one intra-frame prediction mode and at least one gradient strength value corresponding to the reference pixel.
  • the reference pixel may include at least one candidate pixel, and each candidate pixel corresponds to an intra-frame prediction mode and a gradient strength value.
  • performing angle mapping based on the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one intra-frame prediction mode corresponding to the reference pixel may include: determining the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; performing angle mapping based on the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine the initial mode index value of the candidate pixel; and determining an intra-frame prediction mode corresponding to the candidate pixel based on the initial mode index value of the candidate pixel.
  • the horizontal gradient value of the candidate pixel can be represented by gVer[x][y]
  • the vertical gradient value of the candidate pixel can be represented by gHor[x][y]
  • the horizontal gradient absolute value of the candidate pixel can be represented by abs(gVer[x][y])
  • the vertical gradient absolute value of the candidate pixel can be represented by abs(gHor[x][y]).
  • the initial mode index value of the candidate pixel can be determined, represented by angIdx[x][y]; then according to the value of angIdx[x][y], an intra-frame prediction mode corresponding to the candidate pixel can be determined.
  • determining an intra-frame prediction mode corresponding to the candidate pixel based on an initial mode index value of the candidate pixel can include: compensating the initial mode index value according to a preset angle compensation value to determine a target mode index value of the candidate pixel; determining an intra-frame prediction mode corresponding to the candidate pixel based on the target mode index value of the candidate pixel.
  • the preset angle compensation value can be represented by angOffset[region[x][y]]
  • the target mode index value of the candidate pixel can be represented by ipm[x][y]
  • the value of ipm[x][y] is equal to the sum of angOffset[region[x][y]] and angIdx[x][y]. Then, based on the value of ipm[x][y], the corresponding intra-frame prediction mode can be determined.
  • the method may also include: determining the target quadrant value of the candidate pixel; determining the value corresponding to the target quadrant value under a preset mapping relationship; and setting the preset angle compensation value to be equal to the value.
  • determining the target quadrant value of a candidate pixel may include: determining a first sign value based on the horizontal gradient value of the candidate pixel, and determining a second sign value based on the vertical gradient value of the candidate pixel; determining a comparison value of the candidate pixel based on a comparison result of the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; and performing quadrant mapping based on the comparison value, the first sign value, and the second sign value to determine the target quadrant value corresponding to the candidate pixel.
  • the first sign value can be expressed as signV[x][y], which is used to characterize whether gVer[x][y] is greater than 0 or less than 0;
  • the second sign value can be expressed as signH[x][y], which is used to characterize whether gHor[x][y] is greater than 0 or less than 0,
  • the comparison value can be expressed as HgV[x][y], which is used to characterize whether abs(gHor[x][y]) is greater than abs(gVer[x][y]); in this way, the target quadrant value region[x][y] can be determined according to signV[x][y], signH[x][y] and HgV[x][y].
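The quadrant mapping above can be sketched as follows. The text does not spell out how signV, signH, and HgV index the mapping tables, so the 0/1 sign coding and the table-indexing convention below are assumptions; the mapHgV and mapVgH values follow the tables listed later in this section.

```python
# Hypothetical sketch of the quadrant mapping: region[x][y] is looked up from
# the sign values and the comparison value. Indexing convention is an assumption.
mapHgV = [[2, 1], [1, 2]]  # used when abs(gHor) >  abs(gVer)
mapVgH = [[3, 4], [4, 3]]  # used when abs(gHor) <= abs(gVer)

def quadrant(gVer, gHor):
    signV = 1 if gVer > 0 else 0              # first sign value (assumed 0/1 coding)
    signH = 1 if gHor > 0 else 0              # second sign value
    HgV = 1 if abs(gHor) > abs(gVer) else 0   # comparison value
    # target quadrant value region[x][y]
    return mapHgV[signH][signV] if HgV else mapVgH[signH][signV]
```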
  • performing gradient strength calculation based on the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one gradient strength value corresponding to the reference pixel may include: performing an addition calculation based on the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine a gradient strength value corresponding to the candidate pixel.
  • a gradient intensity value corresponding to a candidate pixel can be represented by iAmp[x][y], whose value is equal to the sum of abs(gVer[x][y]) and abs(gHor[x][y]).
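The gradient strength and target mode index computations above amount to the following minimal sketch; the angOffset table passed in is a hypothetical placeholder, since its contents are not given in this excerpt.

```python
def gradient_strength(gVer, gHor):
    # iAmp[x][y] = abs(gVer[x][y]) + abs(gHor[x][y])
    return abs(gVer) + abs(gHor)

def target_mode_index(angIdx, region, angOffset):
    # ipm[x][y] = angOffset[region[x][y]] + angIdx[x][y]
    return angOffset[region] + angIdx
```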
  • the reconstructed sample value of the reference pixel includes at least one of the following: the reconstructed sample value of the first color component of the reference pixel; the reconstructed sample value of the second color component of the reference pixel.
  • determining the first parameter according to the reconstructed sample value of the reference pixel may include: when the reconstructed sample value of the reference pixel is the reconstructed sample value of the first color component of the reference pixel, determining at least one intra-frame prediction mode and at least one gradient strength value corresponding to the first color component of the reference pixel; when the reconstructed sample value of the reference pixel is the reconstructed sample value of the second color component of the reference pixel, determining at least one intra-frame prediction mode and at least one gradient strength value corresponding to the second color component of the reference pixel; forming a first set according to the at least one intra-frame prediction mode corresponding to the first color component of the reference pixel and the at least one intra-frame prediction mode corresponding to the second color component of the reference pixel, the first set including at least one mutually different reference intra-frame prediction mode; and forming a second set according to the at least one gradient strength value corresponding to the first color component of the reference pixel and the at least one gradient strength value corresponding to the second color component of the reference pixel.
  • exemplary descriptions of determining at least one intra prediction mode and at least one gradient strength value corresponding to a reference pixel are respectively given below.
  • a reference pixel is determined based on pixels in an adjacent area of a reference block, and then based on the reference pixel, at least one intra-frame prediction mode and at least one gradient strength value corresponding to the reference pixel are determined, wherein the reference pixel includes at least one candidate pixel.
  • the current block is a chroma block
  • the reference block is an adjacent decoded chroma block.
  • the width of the adjacent decoded chroma block is CbNbWidth
  • the height is CbNbHeight.
  • the coordinate information of the candidate chroma pixel is pC[x][y]
  • then x ∈ [-3, CbNbWidth] and y ∈ [-3, -1], or x ∈ [-3, -1] and y ∈ [0, CbNbHeight]
  • where the origin [0][0] is the pixel coordinate information of the upper left corner of the adjacent decoded chroma block, and the candidate chroma pixel is located in the chroma area composed of multiple dots in FIG. 10A.
  • the width of the co-located luminance block of the adjacent decoded chroma block is 2×CbNbWidth, and the height is 2×CbNbHeight.
  • the coordinate information of the candidate luminance pixel is pY[x][y]
  • then x ∈ [-3, 2×CbNbWidth] and y ∈ [-3, -1], or x ∈ [-3, -1] and y ∈ [0, 2×CbNbHeight]
  • where the origin [0][0] is the pixel coordinate information of the upper left corner of the co-located luminance block of the adjacent decoded chroma block
  • the candidate luminance pixel is located in the luminance area composed of multiple dots in Figure 10B.
  • angOffset is a preset angle compensation value
  • gHor[x][y] is a vertical gradient value
  • gVer[x][y] is a horizontal gradient value
  • signH[x][y] is a second sign value
  • signV[x][y] is a first sign value
  • region[x][y] is a target quadrant value
  • ipm[x][y] is an intra-frame prediction mode
  • iAmp[x][y] is a gradient intensity value.
  • mapHgV = {{2,1},{1,2}}
  • mapVgH = {{3,4},{4,3}}
  • angTable = {0,2048,4096,6144,8192,12288,16384,20480,24576,28672,32768,36864,40960,47104,53248,59392,65536};
  • grad[x][y] = round(grad[x][y] * (1 << 16));
  • grad[x][y] = round(grad[x][y] * (1 << 16));
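The grad[x][y] = round(grad[x][y] * (1 << 16)) step converts a floating-point gradient into Q16 fixed-point form; a minimal sketch, assuming that reading:

```python
def to_fixed_q16(value):
    # Scale to Q16 fixed point: grad = round(grad * (1 << 16))
    return round(value * (1 << 16))
```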
  • a reference pixel is determined based on all pixels in a reference block, and then based on the reference pixel, at least one intra-frame prediction mode and at least one gradient strength value corresponding to the reference pixel are determined, wherein the reference pixel includes at least one candidate pixel.
  • the current block is a chroma block
  • the reference block is an adjacent decoded chroma block.
  • the width of the adjacent decoded chroma block is CbNbWidth
  • the height is CbNbHeight.
  • the coordinate information of the candidate chroma pixel is pC[x][y]
  • then x ∈ [0, CbNbWidth-1] and y ∈ [0, CbNbHeight-1], where the origin [0][0] is the pixel coordinate information of the upper left corner of the adjacent decoded chroma block
  • the candidate chroma pixel is located in the chroma area composed of multiple dots in FIG11A .
  • the width of the co-located luminance block of the adjacent decoded chroma block is 2×CbNbWidth
  • the height is 2×CbNbHeight.
  • the coordinate information of the candidate luminance pixel is pY[x][y]
  • the origin [0][0] is the pixel coordinate information of the upper left corner of the co-located luminance block of the adjacent decoded chroma block
  • the candidate luminance pixel is located in the luminance area composed of multiple dots in Figure 11B.
  • angOffset is a preset angle compensation value
  • gHor[x][y] is a vertical gradient value
  • gVer[x][y] is a horizontal gradient value
  • signH[x][y] is a second sign value
  • signV[x][y] is a first sign value
  • region[x][y] is a target quadrant value
  • ipm[x][y] is an intra-frame prediction mode
  • iAmp[x][y] is a gradient intensity value.
  • mapHgV = {{2,1},{1,2}}
  • mapVgH = {{3,4},{4,3}}
  • angTable = {0,2048,4096,6144,8192,12288,16384,20480,24576,28672,32768,36864,40960,47104,53248,59392,65536};
  • Step (1) For the coordinate information pC[x][y] of the candidate chroma pixel, that is, x ∈ [1, CbNbWidth-2], y ∈ [1, CbNbHeight-2], calculate the following values.
  • grad[x][y] = round(grad[x][y] * (1 << 16));
  • Step (2) For the coordinate information pY[x][y] of the candidate luminance pixel, that is, x ∈ [1, 2×CbNbWidth-2], y ∈ [1, 2×CbNbHeight-2], calculate the following values.
  • grad[x][y] = round(grad[x][y] * (1 << 16));
  • the reference pixel may be determined based on some pixels in the reference block, and then based on the reference pixel, at least one intra-frame prediction mode and at least one gradient strength value corresponding to the reference pixel may be determined, wherein the reference pixel includes at least one candidate pixel.
  • the current block is a chroma block
  • the reference block is an adjacent decoded chroma block.
  • the width of the adjacent decoded chroma block is CbNbWidth
  • the height is CbNbHeight.
  • the coordinate information of the candidate chroma pixel is pC[x][y]
  • the origin [0][0] is the pixel coordinate information of the upper left corner of the adjacent decoded chroma block
  • the candidate chroma pixel is located in the chroma area composed of multiple dots in FIG12A.
  • the width of the co-located luminance block of the adjacent decoded chroma block is 2×CbNbWidth
  • the height is 2×CbNbHeight.
  • the coordinate information of the candidate luminance pixel is pY[x][y]
  • the origin [0][0] is the pixel coordinate information of the upper left corner of the co-located luminance block of the adjacent decoded chroma block
  • the candidate luminance pixel is located in the luminance area composed of multiple dots in Figure 12B.
  • angOffset is a preset angle compensation value
  • gHor[x][y] is a vertical gradient value
  • gVer[x][y] is a horizontal gradient value
  • signH[x][y] is a second sign value
  • signV[x][y] is a first sign value
  • region[x][y] is a target quadrant value
  • ipm[x][y] is an intra-frame prediction mode
  • iAmp[x][y] is a gradient intensity value.
  • mapHgV = {{2,1},{1,2}}
  • mapVgH = {{3,4},{4,3}}
  • angTable = {0,2048,4096,6144,8192,12288,16384,20480,24576,28672,32768,36864,40960,47104,53248,59392,65536};
  • grad[x][y] = round(grad[x][y] * (1 << 16));
  • grad[x][y] = round(grad[x][y] * (1 << 16));
  • the reference pixel may be determined based on some pixels in the reference block, and then based on the reference pixel, at least one intra-frame prediction mode and at least one gradient intensity value corresponding to the reference pixel may be determined, wherein the reference pixel includes at least one candidate pixel.
  • the current block is a chroma block
  • the reference block is an adjacent decoded chroma block.
  • the width of the adjacent decoded chroma block is CbNbWidth
  • the height is CbNbHeight.
  • the coordinate information of the candidate chroma pixel is pC[x][y], then x ∈ [0, CbNbWidth-1], y ∈ [CbNbHeight-3, CbNbHeight-1], where the origin [0][0] is the pixel coordinate information of the upper left corner of the adjacent decoded chroma block, and the candidate chroma pixel is located in the chroma area composed of multiple dots in FIG. 13A.
  • the width of the co-located luminance block of the adjacent decoded chroma block is 2×CbNbWidth
  • the height is 2×CbNbHeight.
  • the coordinate information of the candidate luminance pixel is pY[x][y]
  • the origin [0][0] is the pixel coordinate information of the upper left corner of the co-located luminance block of the adjacent decoded chroma block
  • the candidate luminance pixel is located in the luminance area composed of multiple dots in FIG. 13B.
  • angOffset is a preset angle compensation value
  • gHor[x][y] is a vertical gradient value
  • gVer[x][y] is a horizontal gradient value
  • signH[x][y] is a second sign value
  • signV[x][y] is a first sign value
  • region[x][y] is a target quadrant value
  • ipm[x][y] is an intra-frame prediction mode
  • iAmp[x][y] is a gradient intensity value.
  • mapHgV = {{2,1},{1,2}}
  • mapVgH = {{3,4},{4,3}}
  • angTable = {0,2048,4096,6144,8192,12288,16384,20480,24576,28672,32768,36864,40960,47104,53248,59392,65536};
  • grad[x][y] = round(grad[x][y] * (1 << 16));
  • grad[x][y] = round(grad[x][y] * (1 << 16));
  • the initial mode index value is the index value of the intra-frame prediction mode closest to the calculated gradient direction; after compensating it with the preset angle compensation value angOffset[region[x][y]], the final calculated intra-frame prediction mode can be determined.
  • FIG14 shows a schematic histogram of gradient intensity values corresponding to at least one intra-frame prediction mode provided in an embodiment of the present application.
  • the gradient values iAmp of steps (1) and (2) in any of the aforementioned implementations may be accumulated according to the corresponding intra-frame prediction mode ipm, and a histogram may be established with the intra-frame prediction mode ipm as the horizontal coordinate and the gradient intensity value iAmp as the vertical coordinate.
  • the histogram may include gradient intensity values corresponding to at least one intra-frame prediction mode, and the mode index interval range of the at least one intra-frame prediction mode is [0,66].
  • determining the reference intra-frame prediction mode parameters according to the first parameter may include: forming a second set according to the gradient strength values corresponding to at least one reference intra-frame prediction mode; if the gradient strength values in the second set are all zero, determining the reference intra-frame prediction mode parameters according to the PLANAR mode; if there are non-zero items in the gradient strength values in the second set, determining the maximum gradient strength value from the second set, and determining the reference intra-frame prediction mode parameters according to the intra-frame prediction mode corresponding to the maximum gradient strength value.
  • the second set may include gradient strength values corresponding to at least one intra-frame prediction mode. From FIG. 14, the maximum gradient strength value among the gradient strength values corresponding to the at least one intra-frame prediction mode, and the intra-frame prediction mode corresponding to that maximum gradient strength value, can be obtained intuitively.
  • in some embodiments, the maximum gradient strength value in the second set is set to -1 to obtain a third set; if the intra-frame prediction mode corresponding to the maximum gradient strength value is the same as the first color component prediction mode of the co-located first color component block of the reference block, a new maximum gradient strength value is determined from the third set, and the reference intra-frame prediction mode parameters are determined according to the intra-frame prediction mode corresponding to the new maximum gradient strength value.
  • otherwise, the reference intra-frame prediction mode parameters may be determined according to the DC mode.
  • the reference intra-frame prediction mode parameter derived from the histogram shown in FIG. 14 is represented by IntraPredModeD, and its corresponding mode index interval range is [0,66].
  • IntraPredModeD = INTRA_PLANAR.
  • IntraPredModeD = argmax_i(HoG[i]), and HoG[IntraPredModeD] is set to -1;
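The PLANAR fallback and argmax selection described above can be sketched as follows; the numeric index 0 for INTRA_PLANAR is an assumption matching common codec conventions.

```python
INTRA_PLANAR = 0  # assumed mode index for the PLANAR mode

def derive_intra_pred_mode_d(HoG):
    # HoG[i] accumulates gradient strength for intra prediction mode i (i in [0, 66]).
    if all(v == 0 for v in HoG):
        # All gradient strength values are zero: fall back to PLANAR.
        return INTRA_PLANAR
    # Pick the mode with the maximum accumulated gradient strength,
    # then set that histogram entry to -1 as described above.
    best = max(range(len(HoG)), key=lambda i: HoG[i])
    HoG[best] = -1
    return best
```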
  • determining the predicted value of the second color component of the current block according to the reference intra-frame prediction mode parameters may include: constructing a mode candidate list of the second color component of the current block according to the reference intra-frame prediction mode parameters; and determining the predicted value of the second color component of the current block according to the mode candidate list.
  • determining the predicted value of the second color component of the current block according to the mode candidate list may include: parsing the bitstream to determine the mode index number of the second color component of the current block; determining the target prediction mode corresponding to the mode index number according to the mode candidate list; and using the target prediction mode to perform prediction processing on the second color component of the current block to determine the predicted value of the second color component of the current block.
  • the decoding end can determine the target prediction mode through the mode index number obtained by decoding; then use the target prediction mode to predict the second color component of the current block to determine the predicted value of the second color component of the current block.
  • the method may also include: parsing the bitstream to determine the prediction difference value of the second color component of the current block; and determining the reconstructed value of the second color component of the current block based on the predicted value of the second color component of the current block and the prediction difference value of the second color component of the current block.
  • the decoding end can add the predicted value of the second color component of the current block and the predicted difference value of the second color component of the current block after obtaining the predicted difference value through decoding, thereby obtaining the reconstructed value of the second color component of the current block.
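The reconstruction step above (predicted value plus decoded prediction difference) can be sketched as follows; clipping to the sample range is an added assumption, and the 10-bit depth is hypothetical.

```python
def reconstruct(pred, resid, bit_depth=10):
    # Reconstructed value = predicted value + prediction difference value,
    # clipped to the valid sample range (clipping is an assumption here).
    lo, hi = 0, (1 << bit_depth) - 1
    return [min(max(p + r, lo), hi) for p, r in zip(pred, resid)]
```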
  • a mode candidate list of the chroma component of the current block is constructed according to the reference intra-frame prediction mode parameters.
  • the content characteristics of the adjacent decoded reference blocks are fully utilized to perform texture analysis, and a gradient histogram corresponding to multiple angle mode entries is constructed.
  • a reference intra-frame prediction mode parameter can be determined and added to the mode candidate list, which can improve the diversity of intra-frame chroma prediction modes; and according to the mode candidate list, a more accurate chroma prediction value can also be obtained, which improves the accuracy of intra-frame chroma prediction.
  • determining the reference block of the current block may also include: determining at least one target pixel adjacent to the current block; determining at least one first target block based on the blocks where the at least one target pixel is located; and determining the reference block of the current block based on the at least one first target block.
  • the current block (the entire filled area of the diagonal line) is a chroma block
  • the target pixel adjacent to the current block may be pixel 0
  • the block where pixel 0 is located may be the first target block, which is the reference block of the current block.
  • the at least one first target block may be used as the reference block in sequence according to a first preset order, so as to determine the reference intra-frame prediction mode parameters of the at least one first target block; and according to the reference intra-frame prediction mode parameters of the at least one first target block, a mode candidate list of the second color component of the current block is constructed.
  • the first preset order may be set manually or according to a certain rule in a specific scenario, and the embodiments of the present application are not limited to this.
  • the current block (the entire filled area of the diagonal line) is the chrominance block, and 0, 1, 2, 3, and 4 are target pixels respectively.
  • the coordinate information of the upper left corner of the current block relative to the upper left chrominance pixel of the image is (xCb, yCb)
  • the width of the current block is cbWidth
  • the height is cbHeight
  • the coordinate information of 0, 1, 2, 3, and 4 is as follows.
  • the coordinate information of target pixel 0 is (xCb-1, yCb+cbHeight-1);
  • the coordinate information of target pixel 1 is (xCb+cbWidth-1, yCb-1);
  • the coordinate information of target pixel 2 is (xCb-1, yCb+cbHeight);
  • the coordinate information of target pixel 3 is (xCb+cbWidth, yCb-1);
  • the coordinate information of the target pixel 4 is (xCb-1, yCb-1).
  • the blocks where 0, 1, 2, 3, and 4 are located can be determined, and the blocks where 0, 1, 2, 3, and 4 are located can be used as five first target blocks respectively.
  • the five first target blocks can be used as reference blocks in turn, so that the method described above can be referred to in order to determine the reference intra-frame prediction mode parameters of the five first target blocks; then, according to the reference intra-frame prediction mode parameters of the five first target blocks, a mode candidate list for the second color component of the current block can be constructed.
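The five target-pixel coordinates listed above can be computed as:

```python
def target_pixels(xCb, yCb, cbWidth, cbHeight):
    # Coordinates of target pixels 0..4 adjacent to the current chroma block,
    # relative to the image's upper-left chroma pixel, as listed above.
    return [
        (xCb - 1,           yCb + cbHeight - 1),  # 0
        (xCb + cbWidth - 1, yCb - 1),             # 1
        (xCb - 1,           yCb + cbHeight),      # 2
        (xCb + cbWidth,     yCb - 1),             # 3
        (xCb - 1,           yCb - 1),             # 4
    ]
```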
  • the method may further include: determining a prediction mode of the second color component of the reference block; when the prediction mode of the second color component of the reference block does not satisfy the first condition, directly adding the prediction mode of the second color component of the reference block to the mode candidate list.
  • a mode candidate list of the second color component of the current block can be constructed based on the prediction mode of the second color component of the reference block, that is, the prediction mode of the second color component of the reference block is directly added to the mode candidate list.
  • the embodiment of the present application provides a decoding method. After determining the reference block of the current block, when the prediction mode of the second color component of the reference block meets the first condition, the reference intra-frame prediction mode parameters can be determined according to the reference block; a mode candidate list of the second color component of the current block can be constructed according to the reference intra-frame prediction mode parameters; and the prediction value of the second color component of the current block can be determined according to the mode candidate list.
  • the prediction mode of the adjacent decoded reference block is taken into account; in this way, not only the completeness and diversity of the intra-frame chroma prediction mode can be improved, but also the accuracy of the intra-frame chroma prediction can be improved, thereby improving the decoding efficiency and further improving the decoding performance.
  • Figure 15 shows a second flow chart of a decoding method provided by an embodiment of the present application.
  • the method may include:
  • the current block is the chrominance block (the entire area filled with diagonal lines in the right part of FIG. 3 or FIG. 4), and the co-located first color component area of the current block is the luminance block (the entire area filled with diagonal lines in the left part of FIG. 3 or FIG. 4).
  • S1520 Determine at least one second target block at a preset position from at least one block into which the first color component area is divided.
  • the first color component region is divided into one block
  • and the second target block at the preset position is the C block.
  • the first color component region is divided into a plurality of blocks, and there are five second target blocks at preset positions, which are respectively recorded as TL block, TR block, C block, BL block, and BR block.
  • the position coordinates of the five second target blocks can be recorded as follows:
  • the coordinate information of block C is (xCb+cbWidth/2, yCb+cbHeight/2);
  • the coordinate information of the TL block is (xCb, yCb);
  • the coordinate information of the TR block is (xCb+cbWidth-1, yCb);
  • the coordinate information of the BL block is (xCb, yCb+cbHeight-1);
  • the coordinate information of the BR block is (xCb+cbWidth-1, yCb+cbHeight-1).
  • the five positions of the second target blocks shown in FIG. 3 are exemplary; the embodiments of the present application are not limited to these five positions, and the second target blocks may be located at a plurality of different positions. The embodiments of the present application do not limit the number or the specific positions of the second target blocks.
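The five second-target-block positions listed above can be computed as:

```python
def second_target_blocks(xCb, yCb, cbWidth, cbHeight):
    # Positions of the C, TL, TR, BL, and BR blocks in the co-located
    # first color component area, as listed above.
    return {
        "C":  (xCb + cbWidth // 2,  yCb + cbHeight // 2),
        "TL": (xCb,                 yCb),
        "TR": (xCb + cbWidth - 1,   yCb),
        "BL": (xCb,                 yCb + cbHeight - 1),
        "BR": (xCb + cbWidth - 1,   yCb + cbHeight - 1),
    }
```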
  • S1530 Determine a reference block of the current block according to at least one second target block.
  • one of the at least one second target block may be used as a reference block of the current block.
  • block C in FIG4 may be used as a reference block of the current block.
  • multiple second target blocks may be used as reference blocks of the current block.
  • the C block, TL block, TR block, BL block, and BR block in FIG3 may be used as reference blocks of the current block in sequence.
  • the method may also include: determining, in sequence, the first color component prediction mode parameter of each of the at least one second target block based on a preset order of the at least one second target block; and constructing a mode candidate list for the second color component of the current block according to the first color component prediction mode parameters of the at least one second target block.
  • the preset order of the second target blocks is not limited to the above.
  • the second preset order can be set manually or according to a certain rule in a specific scenario, and the present application embodiment does not limit this.
  • the preset order of the second target block can include but is not limited to the following order: C->TL->TR->BL->BR.
  • the first color component prediction mode parameter of the second target block may be the first color component prediction mode of the C block.
  • determining the first color component prediction mode parameters of block C in FIG. 3 can be divided into the following steps.
  • the first step is to determine whether the C block in FIG3 uses the MIP mode.
  • the first color component prediction mode parameter of the C block in Figure 3 is the PLANAR mode. It can be expressed in pseudo code as follows: If IntraMipFlag[xCb+cbWidth/2][yCb+cbHeight/2] is equal to 1, lumaIntraPredMode is set to INTRA_PLANAR.
  • (xCb+cbWidth/2, yCb+cbHeight/2) is the coordinate information of the C block
  • IntraMipFlag[xCb+cbWidth/2][yCb+cbHeight/2] is an array used to indicate whether the C block uses the MIP mode
  • lumaIntraPredMode is the first color component prediction mode parameter
  • INTRA_PLANAR is the PLANAR mode.
  • In the second step, if CuPredMode[0][xCb+cbWidth/2][yCb+cbHeight/2] is the IBC mode or the PLT mode, the first color component prediction mode parameter of block C is the DC mode, where (xCb+cbWidth/2, yCb+cbHeight/2) is the coordinate information of block C, and [0] indicates the first color component.
  • In the third step, otherwise, the first color component prediction mode parameter of the C block is set to IntraPredModeY[xCb+cbWidth/2][yCb+cbHeight/2], where (xCb+cbWidth/2, yCb+cbHeight/2) is the coordinate information of the C block.
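The three-step derivation above can be sketched as follows; the numeric constants and the string mode names (`MODE_IBC`, `MODE_PLT`) are illustrative assumptions, not values taken from the source:

```python
INTRA_PLANAR = 0  # assumed index for the PLANAR mode (illustration only)
INTRA_DC = 1      # assumed index for the DC mode (illustration only)

def derive_luma_intra_pred_mode(is_mip, cu_pred_mode, intra_pred_mode_y):
    """Derive lumaIntraPredMode for a co-located luma block:
    step 1: MIP blocks map to PLANAR;
    step 2: IBC/PLT blocks map to DC;
    step 3: otherwise the stored luma intra prediction mode is used."""
    if is_mip:
        return INTRA_PLANAR
    if cu_pred_mode in ("MODE_IBC", "MODE_PLT"):
        return INTRA_DC
    return intra_pred_mode_y
```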
  • determining the first color component prediction mode parameters of other blocks (such as TL block and TR block) except C block in FIG. 3 is similar to determining the first color component prediction mode parameters of C block, which will not be described in detail here.
  • a mode candidate list of the second color component of the current block is constructed, and there are two possible implementation methods:
  • the first color component prediction mode parameter of at least one second target block is added to the mode candidate list of the second color component of the current block.
  • the first color component prediction mode of the C block may be added to the mode candidate list of the second color component of the current block.
  • the PLANAR mode can be added to the mode candidate list of the second color component of the current block.
  • the first color component prediction mode parameters of at least one second target block may need to be converted before being added to the mode candidate list of the second color component of the current block.
  • the first color component prediction mode parameters of at least one second target block are converted using the preset rules specified in Table 3 to obtain the converted first color component prediction mode parameters of at least one second target block, and the converted first color component prediction mode parameters of the second target blocks are added to the mode candidate list of the second color component of the current block.
  • when sps_chroma_format_idc is 1 or 3, the first color component prediction mode parameters of at least one second target block can be directly added to the mode candidate list of the second color component of the current block.
  • the decoding method shown in Figure 9 can construct a mode candidate list for the second color component of the current block based on the reference frame intra-prediction mode parameters of at least one first target block
  • the decoding method shown in Figure 15 can construct a mode candidate list for the second color component of the current block based on the first color component prediction mode parameters of at least one second target block.
  • the mode candidate list for the second color component of the current block can be constructed based on the reference frame intra-prediction mode parameters of at least one first target block and the first color component prediction mode parameters of at least one second target block.
  • Figure 16 shows a flowchart diagram 3 of a decoding method provided by an embodiment of the present application. As shown in Figure 16, the method may include:
  • the mode candidate list can be constructed based on the reference frame intra-prediction mode parameters of at least one first target block, or based on the first color component prediction mode parameters of at least one second target block, or jointly constructed based on the reference frame intra-prediction mode parameters of at least one first target block and the first color component prediction mode parameters of at least one second target block.
  • first, it is determined whether the prediction mode is an angle mode. If the prediction mode is an angle mode, the angle mapped by the angle mode is offset clockwise or counterclockwise by a minimum angle unit to obtain the mapped angle mode, and the mapped angle mode is used as the new intra-frame prediction mode. If the prediction mode is not an angle mode, no new intra-frame prediction mode is determined.
  • the mode index number of the prediction mode is directly increased or decreased by 1.
  • the mode candidate list can be constructed by the new intra-frame prediction mode, or by the new intra-frame prediction mode and the reference intra-frame prediction mode parameters of at least one first target block, or by the new intra-frame prediction mode and the first color component prediction mode parameters of at least one second target block, or by the new intra-frame prediction mode, the reference intra-frame prediction mode parameters of at least one first target block and the first color component prediction mode parameters of at least one second target block.
  • the embodiment of the present application is not limited to this.
  • the embodiment of the present application can determine at least one new intra-frame prediction mode by offsetting the mode index numbers of the first two prediction modes in the mode candidate list, and place the at least one new intra-frame prediction mode in the mode candidate list. In this way, not only the completeness and diversity of the intra-frame chroma prediction mode can be improved, but also the accuracy of the intra-frame chroma prediction can be improved, thereby improving the decoding efficiency and thus the decoding performance.
  • the embodiment of the present application may also use a preset intra-frame prediction mode to construct a mode candidate list.
  • the preset intra-frame prediction mode may be at least one of the following: PLANAR_IDX, VER_IDX, HOR_IDX, DC_IDX, VDIA_IDX, VER_IDX-4, VER_IDX+4, HOR_IDX-4, HOR_IDX+4.
  • the mode candidate list can also be constructed jointly: through a preset intra-frame prediction mode, through a new intra-frame prediction mode, through the reference intra-frame prediction mode parameters of at least one first target block, and through the first color component prediction mode of at least one second target block.
  • the prediction modes in the mode candidate list can be adjusted in order.
  • the intra-frame prediction modes in the mode candidate list are mutually different.
  • the embodiment of the present application provides a decoding method, which can construct a mode candidate list through a preset intra-frame prediction mode, or through a new intra-frame prediction mode, or through the reference intra-frame prediction mode parameters of at least one first target block, or through the first color component prediction mode of at least one second target block. And these four methods of constructing a mode candidate list can be used alone or in combination. In this way, not only the completeness and diversity of the intra-frame chroma prediction mode can be improved, but also the accuracy of the intra-frame chroma prediction can be improved, thereby improving the decoding efficiency and further improving the decoding performance.
  • FIG. 17 shows a schematic flowchart of an encoding method provided by an embodiment of the present application, which is applied to an encoder. As shown in FIG. 17, the method may include:
  • the encoding method of the embodiment of the present application is applied to an encoding apparatus, or a device integrated with the encoding apparatus (also referred to as an "encoder").
  • the encoding method of the embodiment of the present application may specifically refer to an intra-frame prediction method. Assuming that the first color component is a luminance component and the second color component is a chrominance component, this is more specifically a method for deriving an intra-frame chrominance prediction mode.
  • the current block may refer to the coding block in the video image that is currently to be intra-predicted.
  • the reference block is an adjacent block of the current block; and "adjacent" here may refer to spatially adjacent, temporally adjacent, etc., without specific limitation.
  • the reference block of the current block may be an adjacent coded block of the current block.
  • the reference chroma block is an adjacent encoded chroma block of the current block.
  • the reference frame intra prediction mode parameters can be derived according to the reference block.
  • the first condition may include: the prediction mode of the second color component of the reference block is a first preset mode.
  • the first preset mode may be a non-angular prediction mode.
  • the first preset mode includes at least one of the following: an inter-component prediction mode, an IBC mode, a MIP mode, and a Palette mode.
  • the inter-component prediction mode can be the CCLM mode.
  • the first preset mode is an inter-frame prediction mode.
  • the prediction mode of the second color component of the reference block is the first preset mode, such as the CCLM mode
  • the reference intra-frame prediction mode parameters can be derived according to the reference block of the current block.
  • the first condition may include: the prediction mode of the second color component of the reference block is not a second preset mode.
  • the second preset mode is an angle prediction mode.
  • the second preset mode is a traditional prediction mode.
  • the second preset mode may be a DC mode or a Planar mode.
  • the reference frame intra-prediction mode parameters can also be derived based on the reference block of the current block.
  • the first condition may include determining a first parameter, the first parameter indicating determining a reference intra-frame prediction mode parameter according to a reference block; wherein the method further includes: encoding the first parameter and writing the obtained encoded bits into a bitstream.
  • the first parameter can also be written into the bitstream, and then the decoding end determines the first parameter by decoding the bitstream, and the first parameter indicates that the reference frame intra prediction mode parameter needs to be determined based on the reference block.
  • determining the reference intra-frame prediction mode parameters according to the reference block may include: determining the reference pixel according to the reference block; determining the first parameter according to the reconstructed sample value of the reference pixel; and determining the reference intra-frame prediction mode parameters according to the first parameter.
  • determining a reference pixel according to a reference block may include: determining the reference pixel according to pixels in an adjacent area of the reference block; wherein the adjacent area includes at least one of the following: a left adjacent area, an upper adjacent area, and an upper-left adjacent area.
  • determining the reference pixel according to the reference block may include: determining the reference pixel according to a pixel in the reference block.
  • determining the first parameter based on the reconstructed sample value of the reference pixel may include: performing gradient calculation on the reconstructed sample value of the reference pixel to determine the horizontal gradient value and the vertical gradient value of the reference pixel; performing angle mapping based on the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one intra-frame prediction mode corresponding to the reference pixel; performing gradient strength calculation based on the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one gradient strength value corresponding to the reference pixel; determining the first parameter based on at least one intra-frame prediction mode and at least one gradient strength value corresponding to the reference pixel.
  • the reference pixel includes at least one candidate pixel, and each candidate pixel corresponds to an intra-frame prediction mode and a gradient strength value.
  • performing angle mapping according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one intra-frame prediction mode corresponding to the reference pixel may include: determining the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; performing angle mapping according to the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine the initial mode index value of the candidate pixel; and determining an intra-frame prediction mode corresponding to the candidate pixel according to the initial mode index value of the candidate pixel.
  • determining an intra-frame prediction mode corresponding to the candidate pixel based on an initial mode index value of the candidate pixel can include: compensating the initial mode index value according to a preset angle compensation value to determine a target mode index value of the candidate pixel; determining an intra-frame prediction mode corresponding to the candidate pixel based on the target mode index value of the candidate pixel.
  • the method further includes: determining a target quadrant value of the candidate pixel; determining a value corresponding to the target quadrant value under a preset mapping relationship; and setting a preset angle compensation value to be equal to the value.
  • determining the target quadrant value of the candidate pixel may include: determining a first sign value based on the horizontal gradient value of the candidate pixel, and determining a second sign value based on the vertical gradient value of the candidate pixel; determining a comparison value of the candidate pixel based on a comparison result of the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; and performing quadrant mapping based on the comparison value, the first sign value, and the second sign value to determine the target quadrant value corresponding to the candidate pixel.
  • performing gradient strength calculation based on the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one gradient strength value corresponding to the reference pixel may include: performing an addition calculation based on the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine a gradient strength value corresponding to the candidate pixel.
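A simplified sketch of the gradient strength and angle mapping steps described above, assuming a VVC-style numbering with 65 angular modes (2 to 66); the real derivation uses integer lookup tables and a quadrant-based angle compensation value, so the atan2-based mapping here is only an illustrative stand-in:

```python
import math

NUM_ANGULAR_MODES = 65  # assumed angular mode count, modes 2..66

def gradient_strength(gh, gv):
    """Gradient strength as described above: |Gh| + |Gv|."""
    return abs(gh) + abs(gv)

def map_gradient_to_mode(gh, gv):
    """Map a (horizontal, vertical) gradient pair to an intra prediction
    mode index. A zero gradient falls back to PLANAR (mode 0, assumed)."""
    if gh == 0 and gv == 0:
        return 0  # PLANAR fallback (assumption)
    angle = math.atan2(gv, gh)      # orientation of the gradient
    if angle < 0:                   # an intra direction has no sign,
        angle += math.pi            # so fold into [0, pi)
    # spread [0, pi) over modes 2..66 (assumed numbering)
    return 2 + int(round(angle / math.pi * (NUM_ANGULAR_MODES - 1)))
```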
  • the reconstructed sample value of the reference pixel includes at least one of the following: the reconstructed sample value of the first color component of the reference pixel; the reconstructed sample value of the second color component of the reference pixel.
  • determining the first parameter according to the reconstructed sample value of the reference pixel may include: when the reconstructed sample value of the reference pixel is the reconstructed sample value of the first color component of the reference pixel, determining at least one intra-frame prediction mode and at least one gradient intensity value corresponding to the first color component of the reference pixel; when the reconstructed sample value of the reference pixel is the reconstructed sample value of the second color component of the reference pixel, determining at least one intra-frame prediction mode and at least one gradient intensity value corresponding to the second color component of the reference pixel; forming a first set according to the at least one intra-frame prediction mode corresponding to the first color component of the reference pixel and the at least one intra-frame prediction mode corresponding to the second color component of the reference pixel, the first set including at least one reference intra-frame prediction mode with mutually different characteristics; and accumulating the gradient intensity values belonging to the same reference intra-frame prediction mode according to the at least one gradient intensity value corresponding to the first color component of the reference pixel and the at least one gradient intensity value corresponding to the second color component of the reference pixel, so as to determine the gradient intensity value corresponding to each reference intra-frame prediction mode, from which the first parameter is determined.
  • the exemplary description of determining at least one intra-frame prediction mode and at least one gradient strength value corresponding to a reference pixel is similar to that on the decoder side and will not be repeated here.
  • determining the reference intra-frame prediction mode parameters according to the first parameter may include: forming a second set according to the gradient strength values corresponding to at least one reference intra-frame prediction mode; if the gradient strength values in the second set are all zero, determining the reference intra-frame prediction mode parameters according to the PLANAR mode; if there are non-zero items in the gradient strength values in the second set, determining the maximum gradient strength value from the second set, and determining the reference intra-frame prediction mode parameters according to the intra-frame prediction mode corresponding to the maximum gradient strength value.
  • the method may further include: setting the maximum gradient intensity value in the second set to -1 to obtain a third set; if the intra-frame prediction mode corresponding to the maximum gradient intensity value is the same as the first color component prediction mode of the first color component block at the same position as the reference block, determining a new maximum gradient intensity value from the third set, and determining the reference intra-frame prediction mode parameters according to the intra-frame prediction mode corresponding to the new maximum gradient intensity value.
  • the method may further include: if the intra-frame prediction mode corresponding to the new maximum gradient intensity value is the same as the first color component prediction mode of the first color component block at the same position of the reference block, then determining the reference intra-frame prediction mode parameters according to the DC mode.
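The selection rules above (PLANAR fallback for an all-zero second set, maximum gradient strength otherwise, the third set when the strongest mode collides with the co-located luma mode, and the final DC fallback) can be sketched as follows; the mode index constants are assumptions:

```python
PLANAR, DC = 0, 1  # assumed mode indices for illustration

def select_reference_mode(hist, colocated_luma_mode):
    """Select the reference intra prediction mode from a map of
    mode -> accumulated gradient strength (the "second set")."""
    if not hist or all(v == 0 for v in hist.values()):
        return PLANAR                       # all strengths zero -> PLANAR
    best = max(hist, key=hist.get)          # strongest mode
    if best != colocated_luma_mode:
        return best
    third = dict(hist)                      # "third set": strongest set to -1
    third[best] = -1
    second_best = max(third, key=third.get)
    if second_best != colocated_luma_mode:
        return second_best
    return DC                               # still colliding -> DC fallback
```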
  • determining the predicted value of the second color component of the current block according to the reference intra-frame prediction mode parameters may include: constructing a mode candidate list of the second color component of the current block according to the reference intra-frame prediction mode parameters; and determining the predicted value of the second color component of the current block according to the mode candidate list.
  • determining a reference block of a current block may include: determining at least one target pixel adjacent to the current block; determining at least one first target block based on the blocks where the at least one target pixel is located; and determining a reference block of the current block based on the at least one first target block.
  • the method may also include: taking the at least one first target block as reference blocks in sequence according to a first preset order, and determining the reference intra-frame prediction mode parameters of each of the at least one first target block; and constructing a mode candidate list for the second color component of the current block based on the reference intra-frame prediction mode parameters of the at least one first target block.
  • the first preset order may be set manually or according to a certain rule in a specific scenario, and the embodiments of the present application are not limited to this.
  • determining a reference block of the current block may also include: determining a first color component area at the same position as the current block; determining at least one second target block at a preset position from a plurality of blocks divided from the first color component area; and determining a reference block of the current block based on the at least one second target block.
  • the method may also include: based on a preset order of at least one second target block, determining in sequence the first color component prediction mode parameters of each of the at least one second target block; and constructing a mode candidate list for the second color component of the current block according to the reference intra-frame prediction mode parameters of the at least one first target block and the first color component prediction mode parameters of the at least one second target block.
  • the method may also include: determining the first two prediction modes in the mode candidate list; performing an offset operation on the mode index numbers of the first two prediction modes to determine at least one new intra-frame prediction mode; and placing at least one new intra-frame prediction mode in the mode candidate list.
  • the method may further include: adjusting the order of the prediction modes in the mode candidate list.
  • determining the predicted value of the second color component of the current block according to the mode candidate list may include: determining a target prediction mode of the second color component of the current block according to the mode candidate list; and performing prediction processing on the second color component of the current block using the target prediction mode to determine the predicted value of the second color component of the current block.
  • determining a target prediction mode for a second color component of a current block according to a mode candidate list may include: pre-encoding the second color component of the current block according to at least one candidate prediction mode in the mode candidate list, and determining a pre-encoding result of each of the at least one candidate prediction mode; determining a rate-distortion cost value of each of the at least one candidate prediction mode according to the pre-encoding result of each of the at least one candidate prediction mode; determining a minimum rate-distortion cost value from the rate-distortion cost values of each of the at least one candidate prediction mode, and determining the candidate prediction mode corresponding to the minimum rate-distortion cost value as the target prediction mode for the second color component of the current block.
  • the distortion value of each of the at least one candidate prediction mode can be determined.
  • it can be determined according to the cost result of Rate Distortion Optimization (RDO), or according to the cost result of Sum of Absolute Difference (SAD), or even according to the cost result of Sum of Absolute Transformed Difference (SATD), but no limitation is made here.
  • RDO Rate Distortion Optimization
  • SAD Sum of Absolute Difference
  • SATD Sum of Absolute Transformed Difference
  • the rate-distortion cost value of each of the at least one candidate prediction mode can be determined according to the pre-encoding result of each of the at least one candidate prediction mode; then the minimum rate-distortion cost value is selected therefrom, and the candidate prediction mode corresponding to the minimum rate-distortion cost value is determined as the target prediction mode (i.e., the optimal prediction mode), thereby improving the encoding efficiency of the second color component.
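The minimum-cost selection above can be sketched as follows, with `cost_fn` standing in for the pre-encoding plus cost computation (RDO, SAD, or SATD, per the source); the function name is illustrative:

```python
def select_target_mode(candidate_modes, cost_fn):
    """Pre-encode the second color component with each candidate mode and
    keep the candidate whose rate-distortion cost is minimal."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        cost = cost_fn(mode)           # pre-encoding result -> RD cost
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost
```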
  • the method may further include: determining a mode index number corresponding to the target prediction mode according to the mode candidate list; encoding the mode index number, and writing the obtained encoded bits into a bitstream.
  • each mode index number can be encoded using either a context model or bypass coding.
  • determining the predicted difference value of the second color component of the current block based on the predicted value of the second color component of the current block may include: determining the predicted difference value of the second color component of the current block based on the original value of the second color component of the current block and the predicted value of the second color component of the current block.
  • the method may further include: encoding the predicted difference value of the second color component of the current block, and writing the obtained encoded bits into the bitstream.
  • a subtraction operation can be performed on the original value of the second color component of the current block and the predicted value of the second color component of the current block to obtain the predicted difference value of the second color component of the current block, which is then written into the bitstream.
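The subtraction above can be sketched sample-wise (a minimal illustration over 2-D lists of samples):

```python
def prediction_difference(original, predicted):
    """Sample-wise subtraction: residual = original - prediction.
    The residual is what gets encoded into the bitstream."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]
```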
  • an embodiment of the present application also provides a code stream, which is generated by bit encoding based on the information to be encoded; wherein the information to be encoded may include at least one of the following: a predicted difference value of the second color component of the current block, a mode index number and a first parameter.
  • after determining the predicted difference value, the mode index number, and the first parameter of the second color component of the current block, the encoding end can encode this information and write it into the bitstream, transmitting it to the decoding end through the bitstream. In this way, the decoding end can directly determine the predicted difference value, the mode index number, and the first parameter of the second color component of the current block by decoding the bitstream, thereby improving the decoding efficiency.
  • the embodiment of the present application provides a coding method that can take into account adjacent coded chroma block prediction modes when determining the reference intra-frame prediction mode parameters of non-CCLM. In this way, not only the completeness and diversity of intra-frame chroma prediction modes can be improved, but also the accuracy of intra-frame chroma prediction can be improved, thereby improving coding efficiency and further improving coding performance.
  • the present application embodiment proposes a derivation technique for the LM mode (Linear Model–Derived Mode, LM-DM) using this reconstruction information.
  • LM-DM Linear Model–Derived Mode
  • multiple prediction modes can be added through a derivation process to supplement the completeness of the existing optional chroma prediction mode.
  • MaxChromaCandidateListNum refers to a preset number, indicating the maximum number of modes that can be stored in the chroma candidate list.
  • a chroma candidate list with a length of MaxChromaCandidateListNum can be established first, and then each mode can be added to the chroma candidate list in sequence according to a preset order.
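A sketch of building a chroma candidate list of bounded length, assuming duplicate modes are skipped (the concrete value of MaxChromaCandidateListNum is not given in this excerpt, so the value below is illustrative):

```python
MAX_CHROMA_CANDIDATE_LIST_NUM = 8  # illustrative; the text only says it is
                                   # a preset maximum list length

def build_chroma_candidate_list(modes_in_preset_order,
                                max_num=MAX_CHROMA_CANDIDATE_LIST_NUM):
    """Add candidate modes in the preset order, skipping duplicates,
    until the list reaches its maximum length."""
    candidate_list = []
    for mode in modes_in_preset_order:
        if mode not in candidate_list:
            candidate_list.append(mode)
        if len(candidate_list) == max_num:
            break
    return candidate_list
```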
  • the chroma candidate list may be adjusted after it is built (including order and mode adjustments). It can be divided into three situations:
  • only the encoder needs to construct the chroma candidate list, and the decoder only needs to determine the mode from the chroma candidate list index transmitted in the bitstream;
  • both the encoder and the decoder need to construct a complete chroma candidate list, and after the adjustment, the decoder selects the mode corresponding to the chroma candidate list index transmitted in the bitstream;
  • MaxChromaCandidateListNum chroma prediction modes may be added in sequence according to the following preset order.
  • the intra-frame luminance prediction modes of the CUs where the co-located luminance blocks C, TL, TR, BL, and BR (in the diagonally filled area in the left figure) of the current chrominance decoding block (the diagonally filled area in the right figure) are located are added to the chrominance candidate list in order.
  • the position of the co-located luminance block corresponding to the upper-left corner of the current chrominance decoding block, relative to the luminance block at the upper-left corner of the image (that is, the position of luminance block TL), is (xCb, yCb); the width of the co-located luminance block corresponding to the current chrominance decoding block (the diagonally filled area in the left figure) is cbWidth, and the height is cbHeight.
  • the coordinate information of block C is (xCb+cbWidth/2, yCb+cbHeight/2);
  • the coordinate information of the TL block is (xCb, yCb);
  • the coordinate information of the TR block is (xCb+cbWidth-1, yCb);
  • the coordinate information of the BL block is (xCb, yCb+cbHeight-1);
  • the coordinate information of the BR block is (xCb+cbWidth-1, yCb+cbHeight-1).
  • the luminance prediction mode of the CU where C is located can be added to the chrominance candidate list.
  • the position of the co-located luma block corresponding to the upper-left corner of the current chroma decoding block, relative to the luma block at the upper-left corner of the image (i.e., the position of luma block TL), is (xCb, yCb); the width of the co-located luma block corresponding to the current chroma decoding block is cbWidth, and the height is cbHeight.
  • the luma prediction mode lumaIntraPredMode of block C is derived as follows:
  • the position information of the C block is (xCb+cbWidth/2, yCb+cbHeight/2).
  • the array IntraMipFlag[x][y] refers to whether the decoded block containing the coordinates (x, y) uses the MIP mode.
  • the array CuPredMode[chType][x][y] refers to the intra-frame prediction mode used by the luminance or chrominance decoding block containing the coordinates (x, y).
  • chType is 0 for luminance and chType is 1 for chrominance.
  • lumaIntraPredMode is set equal to IntraPredModeY[xCb+cbWidth/2][yCb+cbHeight/2].
  • the array IntraPredModeY[x][y] refers to the intra prediction mode used by the decoded block containing the coordinates (x, y).
  • the specific derivation rules for converting the luma prediction mode of the co-located luma block into the chroma prediction mode are as follows:
  • when the intra-frame luminance prediction mode lumaIntraPredMode is a mode X specified in Table 3, the corresponding mode Y is derived as the intra-frame chrominance prediction mode;
  • otherwise, the intra chroma prediction mode is set equal to the intra luma prediction mode lumaIntraPredMode.
• the intra-frame chroma prediction modes of the decoded chroma blocks covering positions 0, 1, 2, 3, and 4 of the chroma pixels adjacent to the current chroma decoding block are added in sequence.
  • the position of the upper left chroma pixel of the current chroma decoding block relative to the upper left chroma pixel of the image is (xCb, yCb)
  • the width of the current chroma decoding block is cbWidth
  • the height is cbHeight.
  • the position information of chroma pixel 0 is (xCb-1, yCb+cbHeight-1);
  • the position information of chroma pixel 1 is (xCb+cbWidth-1,yCb-1);
  • the position information of chroma pixel 2 is (xCb-1, yCb+cbHeight);
  • the position information of chroma pixel 3 is (xCb+cbWidth, yCb-1);
  • the position information of the chrominance pixel 4 is (xCb-1, yCb-1).
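The five neighbouring pixel positions listed above can be sketched as follows; the function name and the return order (position 0 through position 4) are assumptions for illustration:

```python
def neighbor_chroma_positions(x_cb, y_cb, cb_width, cb_height):
    """Positions 0..4 of the adjacent chroma pixels whose covering
    decoded blocks contribute candidate chroma prediction modes."""
    return [
        (x_cb - 1, y_cb + cb_height - 1),  # 0: left
        (x_cb + cb_width - 1, y_cb - 1),   # 1: above
        (x_cb - 1, y_cb + cb_height),      # 2: below-left
        (x_cb + cb_width, y_cb - 1),       # 3: above-right
        (x_cb - 1, y_cb - 1),              # 4: above-left
    ]
```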
  • the specific rules for converting the chroma prediction mode of the adjacent decoded chroma block to the traditional chroma prediction mode are as follows:
• LM mode (Linear Model-Derived Mode)
  • the intra chroma prediction mode of the neighboring decoded chroma blocks is directly added.
• the first is to determine whether the mode is an angle mode; if it is, add the angle modes obtained by offsetting it clockwise and counterclockwise by one minimum angle unit; if it is not an angle mode, do nothing.
• the second is to directly add 1 or -1 to the corresponding mode index value: if the mode index value is 0, only the mode with index 1 is added; if the mode index value is the maximum mode index, only the mode with index one less than the maximum is added; if there is only one mode, only the two angle modes offset from that mode are added; if it is not an angle mode, step (3) is not executed.
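The second rule above (mode index ± 1 with clamping at the ends of the index range) can be sketched as follows; the function name and the assumed index range [0, 66] are illustrative:

```python
def offset_modes(mode_idx, max_idx=66):
    """Derived offset modes per the second rule: mode_idx +/- 1,
    keeping only in-range indices at the two ends of [0, max_idx]."""
    if mode_idx == 0:
        return [1]                       # only the +1 neighbour exists
    if mode_idx == max_idx:
        return [max_idx - 1]             # only the -1 neighbour exists
    return [mode_idx + 1, mode_idx - 1]  # both neighbours
```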
• a preset set of default non-CCLM-mode chroma candidates is added (the default chroma candidate list includes but is not limited to the form described below); the modes in the chroma candidate list are PLANAR_IDX, VER_IDX, HOR_IDX, DC_IDX, VDIA_IDX, VER_IDX-4, VER_IDX+4, HOR_IDX-4, and HOR_IDX+4.
• when the chroma prediction mode of the adjacent decoded chroma block is the CCLM mode, the LM-DM derivation process is executed. The derivation mainly includes three steps: first, determine the template area and obtain the reconstructed pixels of the template area; second, compute the gradients of those pixels, map each gradient to an angle mode, and accumulate it into a histogram; third, derive the LM-DM mode.
  • the width of the chroma pixel size is CbNbWidth
  • the height of the chroma pixel size is CbNbHeight.
  • sps_chroma_format_idc is 1, that is, the YUV420 format
• the width of the luma pixel size is 2×CbNbWidth
• the height of the luma pixel size is 2×CbNbHeight.
  • Step 1 Determine the template area and obtain the reconstructed pixels (i.e., input) of the template area.
• the chroma pixel of the adjacent area of the adjacent decoded chroma block is pC[x][y], where x ∈ [-3, CbNbWidth], y ∈ [-3, -1] and x ∈ [-3, -1], y ∈ [0, CbNbHeight], and the origin [0][0] is the chroma pixel coordinate of the upper left corner of the block.
• the co-located luma pixel of the adjacent area of the adjacent decoded chroma block is pY[x][y], where x ∈ [-3, 2×CbNbWidth], y ∈ [-3, -1] and x ∈ [-3, -1], y ∈ [0, 2×CbNbHeight], and the origin [0][0] is the luma pixel coordinate corresponding to the chroma pixel in the upper left corner of the block.
  • the specific positions are shown as the dot pixels in Figure 10A and the dot pixels in Figure 10B.
• Step 2: Compute the gradients of the pixels obtained in Step 1, map them to angle modes, and accumulate them into the histogram.
• For the specific pseudocode, refer to the description of the above content.
  • the gradient intensity values iAmp of step (1) and step (2) are accumulated according to the corresponding intra-frame prediction mode ipm, and a histogram HOG is established with the intra-frame prediction mode ipm as the horizontal coordinate and the gradient intensity value iAmp as the vertical coordinate, as shown in Figure 14.
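A minimal sketch of Step 2, assuming 3×3 Sobel filters and a simplified uniform mapping from gradient angle to the angular modes 2..66; the exact angle-to-mode lookup tables of the derivation are not reproduced here, and the function name is hypothetical:

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
NUM_MODES = 67  # mode index range [0, 66]

def build_hog(pixels):
    """Accumulate a gradient histogram HoG over a 2-D template `pixels`.
    Each interior pixel contributes its gradient strength iAmp = |gx|+|gy|
    to the bin of the angle mode its gradient direction maps to."""
    h, w = len(pixels), len(pixels[0])
    hog = [0] * NUM_MODES
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * pixels[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * pixels[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            if gx == 0 and gy == 0:
                continue                          # no direction to map
            angle = math.atan2(gy, gx) % math.pi  # fold into [0, pi)
            ipm = 2 + round(angle / math.pi * 64) % 65  # modes 2..66
            hog[ipm] += abs(gx) + abs(gy)         # accumulate iAmp
    return hog
```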
• Step 3: Derive the mode (i.e., output) of LM-DM.
• the intra-frame chrominance prediction mode IntraPredModeD can be derived from LM-DM, and the mode index range is [0, 66].
• IntraPredModeD = INTRA_PLANAR.
• IntraPredModeD = argmax_i(HoG[i]), and HoG[IntraPredModeD] is set to -1;
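The two assignments above (PLANAR fallback and histogram argmax) can be sketched as follows; treating an all-zero histogram as the trigger for the PLANAR fallback is an assumption from context, and the function name is hypothetical:

```python
INTRA_PLANAR = 0  # assumed index of the PLANAR mode

def derive_lm_dm(hog):
    """Step 3: pick the LM-DM mode from the histogram. If every bin is
    zero, fall back to PLANAR; otherwise take the argmax and mark that
    bin consumed (set to -1), as described above."""
    if all(v == 0 for v in hog):
        return INTRA_PLANAR
    mode = max(range(len(hog)), key=lambda i: hog[i])
    hog[mode] = -1
    return mode
```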
  • the template pixels in the adjacent area are still used for analysis.
  • the embodiment of the present application can also use all the internal pixels of the adjacent decoded blocks to derive the LM-DM mode.
  • the width of the chroma pixel size is CbNbWidth
  • the height of the chroma pixel size is CbNbHeight.
  • sps_chroma_format_idc is 1, that is, the YUV420 format
• the width of the luma pixel size is 2×CbNbWidth
• the height of the luma pixel size is 2×CbNbHeight.
  • Step 1 Determine the template area and obtain the reconstructed pixels (i.e., input) of the template area.
• the chroma pixel of the neighboring decoded chroma block is pC[x][y], where x ∈ [0, CbNbWidth-1], y ∈ [0, CbNbHeight-1], and the origin [0][0] is the chroma pixel coordinate of the upper left corner of the block.
• the co-located luma pixel of the neighboring decoded chroma block is pY[x][y], where x ∈ [0, 2×CbNbWidth-1], y ∈ [0, 2×CbNbHeight-1], and the origin [0][0] is the luma pixel coordinate corresponding to the chroma pixel in the upper left corner of the block.
  • the specific positions are shown as the dot pixels in Figure 11A and the dot pixels in Figure 11B.
• Step 2: Compute the gradients of the pixels obtained in Step 1, map them to angle modes, and accumulate them into the histogram.
  • the specific pseudo code can be found in the description of the above content, which will not be repeated here.
• Step 3: Derive the mode (i.e., output) of LM-DM.
• the intra-frame chrominance prediction mode IntraPredModeD can be derived from LM-DM, and the mode index range is [0, 66].
• IntraPredModeD = INTRA_PLANAR.
• IntraPredModeD = argmax_i(HoG[i]), and HoG[IntraPredModeD] is set to -1;
  • the embodiments of the present application may also use some internal pixels of the adjacent decoded blocks for analysis.
  • Step 1 Determine the template area and obtain the reconstructed pixels (i.e., input) of the template area.
• the chroma pixel of the neighboring decoded chroma block is pC[x][y], where x ∈ [CbNbWidth-3, CbNbWidth-1], y ∈ [0, CbNbHeight-1], and the origin [0][0] is the chroma pixel coordinate of the upper left corner of the block.
• the co-located luma pixel of the neighboring decoded chroma block is pY[x][y], where x ∈ [2×CbNbWidth-3, 2×CbNbWidth-1], y ∈ [0, 2×CbNbHeight-1], and the origin [0][0] is the luma pixel coordinate corresponding to the chroma pixel in the upper left corner of the block.
  • the specific positions are shown as the dot pixels in Figure 12A and the dot pixels in Figure 12B.
• Step 2: Compute the gradients of the pixels obtained in Step 1, map them to angle modes, and accumulate them into the histogram.
  • the specific pseudo code can be found in the above description, which will not be repeated here.
• Step 3: Derive the mode (i.e., output) of LM-DM.
  • Step 1 Determine the template area and obtain the reconstructed pixels (i.e., input) of the template area.
• the chroma pixel of the neighboring decoded chroma block is pC[x][y], where x ∈ [0, CbNbWidth-1], y ∈ [CbNbHeight-3, CbNbHeight-1], and the origin [0][0] is the chroma pixel coordinate of the upper left corner of the block.
• the co-located luma pixel of the neighboring decoded chroma block is pY[x][y], where x ∈ [0, 2×CbNbWidth-1], y ∈ [2×CbNbHeight-3, 2×CbNbHeight-1], and the origin [0][0] is the luma pixel coordinate corresponding to the chroma pixel in the upper left corner of the block.
  • the specific positions are shown as the dot pixels in Figure 13A and the dot pixels in Figure 13B.
• Step 2: Compute the gradients of the pixels obtained in Step 1, map them to angle modes, and accumulate them into the histogram.
  • the specific pseudo code can be found in the description of the above content, which will not be repeated here.
• Step 3: Derive the mode (i.e., output) of LM-DM.
• the intra-frame chrominance prediction mode IntraPredModeD can be derived from LM-DM, and the mode index range is [0, 66].
• IntraPredModeD = INTRA_PLANAR.
• IntraPredModeD = argmax_i(HoG[i]), and HoG[IntraPredModeD] is set to -1;
  • the embodiment of the present application utilizes the above-mentioned reconstruction information to derive the LM-DM mode.
  • the derivation method of the non-CCLM mode of the intra-frame chroma block of the LM-DM mode may include:
  • the embodiments of the present application can improve the completeness of the intra-frame chroma prediction mode.
• a gradient histogram with entries for multiple angle modes is constructed; the horizontal and vertical gradient intensities of the template area are calculated using horizontal and vertical Sobel filters, respectively; the angle mode is determined and its amplitude is updated to obtain the best non-CCLM mode.
  • the embodiments of the present application can further improve the diversity of the intra-frame chroma prediction mode, thereby obtaining a more accurate chroma prediction value.
  • FIG18 shows a schematic diagram of the structure of an encoder provided by an embodiment of the present application.
  • the encoder 1800 may include: a first determination unit 1810 and a first prediction unit 1820; wherein,
  • the first determining unit 1810 is configured to determine a reference block of the current block; wherein the reference block is a neighboring block of the current block; and when the prediction mode of the second color component of the reference block satisfies the first condition, determine the reference intra-frame prediction mode parameter according to the reference block;
  • a first prediction unit 1820 is configured to determine a prediction value of a second color component of a current block according to a reference intra-frame prediction mode parameter
  • the first determining unit 1810 is further configured to determine a predicted difference value of the second color component of the current block according to the predicted value of the second color component of the current block.
  • the first condition includes: the prediction mode of the second color component of the reference block is a first preset mode.
  • the first preset mode includes at least one of the following: inter-component prediction mode, IBC mode, MIP mode, and Palette mode.
  • the inter-component prediction mode is CCLM mode.
  • the first preset mode is an inter prediction mode.
  • the first condition includes: the prediction mode of the second color component of the reference block is not a second preset mode.
  • the second preset mode is an angle prediction mode.
  • the second preset mode is a DC mode or a Planar mode.
  • the first condition comprises determining a first parameter, the first parameter indicating determining a reference intra prediction mode parameter according to the reference block.
  • the encoder 1800 may further include an encoding unit 1830 configured to encode the first parameter and write the obtained encoded bits into a bitstream.
  • the encoder 1800 may further include a first construction unit 1840 ; wherein,
  • a first constructing unit 1840 is configured to construct a mode candidate list of a second color component of a current block according to a reference intra-frame prediction mode parameter;
  • the first prediction unit 1820 is further configured to determine a prediction value of a second color component of the current block according to the mode candidate list.
  • the first determination unit 1810 is further configured to determine a reference pixel according to a reference block; determine a first parameter according to a reconstructed sample value of the reference pixel; and determine a reference intra-frame prediction mode parameter according to the first parameter.
  • the first determination unit 1810 is further configured to determine the reference pixel based on pixels in an adjacent area of the reference block; wherein the adjacent area includes at least one of the following: a left adjacent area, an upper adjacent area, and an upper-left adjacent area.
  • the first determining unit 1810 is further configured to determine the reference pixel according to the pixels in the reference block.
  • the first determination unit 1810 is further configured to perform gradient calculation on the reconstructed sample value of the reference pixel to determine the horizontal gradient value and the vertical gradient value of the reference pixel; perform angle mapping according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one intra-frame prediction mode corresponding to the reference pixel; perform gradient strength calculation according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one gradient strength value corresponding to the reference pixel; determine the first parameter according to at least one intra-frame prediction mode and at least one gradient strength value corresponding to the reference pixel.
  • the reference pixel includes at least one candidate pixel, and each candidate pixel corresponds to an intra-frame prediction mode and a gradient strength value; the first determination unit 1810 is further configured to determine the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; perform angle mapping based on the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine the initial mode index value of the candidate pixel; determine an intra-frame prediction mode corresponding to the candidate pixel based on the initial mode index value of the candidate pixel.
  • the first determination unit 1810 is further configured to compensate the initial mode index value according to a preset angle compensation value to determine the target mode index value of the candidate pixel; and determine an intra-frame prediction mode corresponding to the candidate pixel according to the target mode index value of the candidate pixel.
  • the first determination unit 1810 is further configured to determine a target quadrant value of the candidate pixel; determine a value corresponding to the target quadrant value under a preset mapping relationship; and set the preset angle compensation value to be equal to the value.
• the first determination unit 1810 is further configured to determine a first sign value based on the horizontal gradient value of the candidate pixel and a second sign value based on the vertical gradient value of the candidate pixel; determine a comparison value of the candidate pixel based on a comparison of the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; and perform quadrant mapping based on the comparison value, the first sign value, and the second sign value to determine the target quadrant value corresponding to the candidate pixel.
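One possible reading of the quadrant mapping described above, with a hypothetical packing of the comparison value and the two sign values into a single quadrant index; the actual table-driven mapping in the specification may differ:

```python
def target_quadrant(gx, gy):
    """Illustrative quadrant mapping from the two gradient sign values
    and the |gx| vs |gy| comparison value (eight half-quadrants)."""
    sign_x = 0 if gx >= 0 else 1              # first sign value
    sign_y = 0 if gy >= 0 else 1              # second sign value
    cmp_val = 0 if abs(gx) >= abs(gy) else 1  # comparison value
    return (cmp_val << 2) | (sign_x << 1) | sign_y
```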
  • the first determining unit 1810 is further configured to perform an addition calculation based on the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine a gradient intensity value corresponding to the candidate pixel.
  • the reconstructed sample value of the reference pixel includes at least one of the following: the reconstructed sample value of the first color component of the reference pixel; the reconstructed sample value of the second color component of the reference pixel.
  • the first determination unit 1810 is further configured to, when the reconstructed sample value of the reference pixel is the reconstructed sample value of the first color component of the reference pixel, determine at least one intra-frame prediction mode and at least one gradient intensity value corresponding to the first color component of the reference pixel; when the reconstructed sample value of the reference pixel is the reconstructed sample value of the second color component of the reference pixel, determine at least one intra-frame prediction mode and at least one gradient intensity value corresponding to the second color component of the reference pixel;
  • the first constructing unit 1840 is further configured to form a first set according to at least one intra-frame prediction mode corresponding to the first color component of the reference pixel and at least one intra-frame prediction mode corresponding to the second color component of the reference pixel, and the first set includes at least one reference intra-frame prediction mode with mutually different characteristics;
• the first determination unit 1810 is further configured to perform cumulative calculation on the gradient intensity values belonging to the same reference intra-frame prediction mode, based on at least one gradient intensity value corresponding to the first color component of the reference pixel and at least one gradient intensity value corresponding to the second color component of the reference pixel, to determine the gradient intensity value corresponding to at least one reference intra-frame prediction mode; and determine the first parameter based on the at least one reference intra-frame prediction mode and the gradient intensity value corresponding to the at least one reference intra-frame prediction mode.
  • the first determination unit 1810 is further configured to form a second set based on gradient strength values corresponding to at least one reference intra-frame prediction mode; and if the gradient strength values in the second set are all zero, the reference intra-frame prediction mode parameters are determined according to the PLANAR mode; if there are non-zero items in the gradient strength values in the second set, the maximum gradient strength value is determined from the second set, and the reference intra-frame prediction mode parameters are determined according to the intra-frame prediction mode corresponding to the maximum gradient strength value.
  • the first determination unit 1810 is further configured to assign the maximum gradient strength value in the second set to -1 to determine the third set; if the intra-frame prediction mode corresponding to the maximum gradient strength value is the same as the first color component prediction mode of the first color component block at the same position of the reference block, then a new maximum gradient strength value is determined from the third set, and the reference intra-frame prediction mode parameters are determined according to the intra-frame prediction mode corresponding to the new maximum gradient strength value.
  • the first determination unit 1810 is further configured to determine the reference intra-frame prediction mode parameters according to the DC mode if the intra-frame prediction mode corresponding to the new maximum gradient intensity value is the same as the first color component prediction mode of the first color component block at the same position of the reference block.
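The selection chain described in the last three paragraphs (all-zero histogram → PLANAR; otherwise the strongest mode; skip a mode duplicating the first color component prediction mode of the co-located block; final fallback DC) can be sketched as follows; the function name and the PLANAR/DC index values are assumptions:

```python
def select_reference_mode(hog, colocated_mode, planar=0, dc=1):
    """Sketch of the reference-mode selection: second set = histogram,
    third set = histogram with the maximum bin replaced by -1."""
    if all(v == 0 for v in hog):
        return planar                  # all-zero second set
    best = max(range(len(hog)), key=lambda i: hog[i])
    if best != colocated_mode:
        return best                    # strongest mode is usable
    hog = list(hog)
    hog[best] = -1                     # form the third set
    second = max(range(len(hog)), key=lambda i: hog[i])
    if second != colocated_mode:
        return second                  # new maximum is usable
    return dc                          # final fallback
```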
  • the first determination unit 1810 is further configured to determine at least one target pixel adjacent to the current block; determine at least one first target block based on the block where the at least one target pixel is located; and determine a reference block of the current block based on the at least one first target block.
  • the first determination unit 1810 is further configured to determine the reference intra prediction mode parameter of the at least one first target block based on the at least one first target block, which is sequentially used as a reference block in a first preset order;
  • the first constructing unit 1840 is further configured to construct a mode candidate list of the second color component of the current block according to the reference intra-frame prediction mode parameters of the at least one first target block.
  • the first determination unit 1810 is further configured to determine a first color component area at the same position as the current block; determine at least one second target block at a preset position from at least one block divided from the first color component area; and determine a reference block of the current block based on the at least one second target block.
• the first determination unit 1810 is further configured to determine, in sequence, the first color component prediction mode parameters of at least one second target block based on a preset order of the at least one second target block; the first construction unit 1840 is further configured to construct a mode candidate list for the second color component of the current block based on the reference intra-frame prediction mode parameters of at least one first target block and the first color component prediction mode parameters of at least one second target block.
  • the first determination unit 1810 is further configured to determine the first two prediction modes in the mode candidate list; perform an offset operation on the mode index numbers of the first two prediction modes to determine at least one new intra-frame prediction mode; and place at least one new intra-frame prediction mode in the mode candidate list.
  • the encoder 1800 may further include a first adjustment unit 1850 configured to sequentially adjust the prediction modes in the mode candidate list.
  • the first determination unit 1810 is further configured to determine a target prediction mode for the second color component of the current block based on the mode candidate list; the first prediction unit 1820 is further configured to perform prediction processing on the second color component of the current block using the target prediction mode to determine a predicted value of the second color component of the current block.
  • the encoding unit 1830 is further configured to pre-encode the second color component of the current block according to at least one candidate prediction mode in the mode candidate list, and determine a pre-encoding result of each of the at least one candidate prediction mode;
  • the first determination unit 1810 is further configured to determine a rate-distortion cost value of at least one candidate prediction mode according to a precoding result of each candidate prediction mode; determine a minimum rate-distortion cost value from the rate-distortion cost values of each candidate prediction mode, and determine the candidate prediction mode corresponding to the minimum rate-distortion cost value as the target prediction mode for the second color component of the current block.
  • the first determining unit 1810 is further configured to determine the mode index number corresponding to the target prediction mode according to the mode candidate list;
  • the encoding unit 1830 is further configured to encode the mode index sequence number and write the obtained encoded bits into the bit stream.
  • the first determining unit 1810 is further configured to determine a predicted difference value of the second color component of the current block according to an original value of the second color component of the current block and a predicted value of the second color component of the current block;
  • the encoding unit 1830 is further configured to encode the predicted difference value of the second color component of the current block, and write the obtained encoded bits into the bitstream.
  • a "unit” may be a part of a circuit, a part of a processor, a part of a program or software, etc., and of course, it may be a module, or it may be non-modular.
  • the components in the present embodiment may be integrated into a processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of a software functional module.
• if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
• the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the method described in this embodiment.
• the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • an embodiment of the present application provides a computer-readable storage medium, which is applied to the encoder 1800.
  • the computer-readable storage medium stores a computer program, and when the computer program is executed by the first processor, the method described in any one of the aforementioned embodiments is implemented.
  • the encoder 1800 may include: a first communication interface 1910, a first memory 1920 and a first processor 1930; the various components are coupled together through a first bus system 1940. It can be understood that the first bus system 1940 is used to achieve connection and communication between these components.
  • the first bus system 1940 also includes a power bus, a control bus and a status signal bus.
• the various buses are labeled as the first bus system 1940 in Figure 19, where:
  • the first communication interface 1910 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • a first memory 1920 used to store a computer program that can be run on the first processor 1930;
  • the first processor 1930 is configured to, when running the computer program, execute:
• determine a reference block of the current block, wherein the reference block is an adjacent block of the current block; when the prediction mode of the second color component of the reference block meets the first condition, determine the reference intra-frame prediction mode parameters according to the reference block; determine the prediction value of the second color component of the current block according to the reference intra-frame prediction mode parameters; and determine the prediction difference value of the second color component of the current block according to the prediction value of the second color component of the current block.
  • the first memory 1920 in the embodiment of the present application can be a volatile memory or a non-volatile memory, or can include both volatile and non-volatile memories.
  • the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a random access memory (RAM), which is used as an external cache.
• many forms of RAM are available, for example: static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct RAM bus RAM (DRRAM).
  • the first processor 1930 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by the hardware integrated logic circuit or software instructions in the first processor 1930.
  • the above-mentioned first processor 1930 can be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • the methods, steps and logic block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor can be a microprocessor or the processor can also be any conventional processor, etc.
• the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, etc.
  • the storage medium is located in the first memory 1920, and the first processor 1930 reads the information in the first memory 1920 and completes the steps of the above method in combination with its hardware.
  • the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application or a combination thereof.
  • the technology described in this application can be implemented by a module (such as a process, function, etc.) that performs the functions described in this application.
  • the software code can be stored in a memory and executed by a processor.
  • the memory can be implemented in a processor or outside a processor.
  • the first processor 1930 is further configured to execute the method described in any one of the aforementioned embodiments when running the computer program.
  • the present embodiment provides an encoder, in which the reference intra-frame prediction mode parameters of non-CCLM are determined by performing relevant parameter analysis on the reference blocks adjacent to the current block; based on the reference intra-frame prediction mode parameters, the completeness and diversity of the intra-frame chrominance prediction mode can be improved, thereby improving the accuracy of the intra-frame chrominance prediction, and also improving the coding efficiency, thereby improving the coding performance.
  • FIG. 20 shows a schematic diagram of the structure of a decoder provided in an embodiment of the present application.
  • the decoder 2000 may include: a second determination unit 2010 and a second prediction unit 2020; wherein,
  • the second determination unit 2010 is configured to determine a reference block of the current block; wherein the reference block is a neighboring block of the current block; and when the prediction mode of the second color component of the reference block satisfies the first condition, determine the reference intra-frame prediction mode parameter according to the reference block;
  • the second prediction unit 2020 is configured to determine a prediction value of a second color component of the current block according to a reference intra-frame prediction mode parameter.
  • the first condition includes: the prediction mode of the second color component of the reference block is a first preset mode.
  • the first preset mode includes at least one of the following: inter-component prediction mode, IBC mode, MIP mode, and Palette mode.
  • the inter-component prediction mode is CCLM mode.
  • the first preset mode is an inter prediction mode.
  • the first condition includes: the prediction mode of the second color component of the reference block is not a second preset mode.
  • the second preset mode is an angle prediction mode.
  • the second preset mode is a DC mode or a Planar mode.
  • the first condition includes: decoding a code stream and determining a first parameter; wherein the first parameter indicates determining a reference intra-frame prediction mode parameter according to a reference block.
  • the decoder 2000 may further include a second construction unit 2030, wherein:
  • a second constructing unit 2030 is configured to construct a mode candidate list of a second color component of the current block according to the reference intra-frame prediction mode parameter;
  • the second prediction unit 2020 is further configured to determine a prediction value of a second color component of the current block according to the mode candidate list.
  • the second determination unit 2010 is further configured to determine a reference pixel according to a reference block; determine a first parameter according to a reconstructed sample value of the reference pixel; and determine a reference intra-frame prediction mode parameter according to the first parameter.
  • the second determination unit 2010 is further configured to determine the reference pixel based on pixels in an adjacent area of the reference block; wherein the adjacent area includes at least one of the following: a left adjacent area, an upper adjacent area, and an upper-left adjacent area.
  • the second determining unit 2010 is further configured to determine the reference pixel according to the pixels in the reference block.
  • the second determination unit 2010 is further configured to perform gradient calculation on the reconstructed sample value of the reference pixel to determine the horizontal gradient value and the vertical gradient value of the reference pixel; perform angle mapping according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one intra-frame prediction mode corresponding to the reference pixel; perform gradient intensity calculation according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one gradient intensity value corresponding to the reference pixel; and determine the first parameter according to at least one intra-frame prediction mode and at least one gradient intensity value corresponding to the reference pixel.
  • the reference pixel includes at least one candidate pixel, and each candidate pixel corresponds to an intra-frame prediction mode and a gradient strength value; the second determination unit 2010 is further configured to determine the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; perform angle mapping based on the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine the initial mode index value of the candidate pixel; determine an intra-frame prediction mode corresponding to the candidate pixel based on the initial mode index value of the candidate pixel.
  • the second determination unit 2010 is further configured to compensate the initial mode index value according to a preset angle compensation value to determine the target mode index value of the candidate pixel; and determine an intra-frame prediction mode corresponding to the candidate pixel according to the target mode index value of the candidate pixel.
  • the second determination unit 2010 is further configured to determine a target quadrant value of the candidate pixel; determine a value corresponding to the target quadrant value under a preset mapping relationship; and set the preset angle compensation value to be equal to the value.
  • the second determination unit 2010 is further configured to determine a first symbol value based on the horizontal gradient value of the candidate pixel; and determine a second symbol value based on the vertical gradient value of the candidate pixel; determine a comparison value of the candidate pixel based on a comparison result of the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; and perform quadrant mapping based on the comparison value, the first symbol value, and the second symbol value to determine the target quadrant value corresponding to the candidate pixel.
  • the second determination unit 2010 is further configured to perform an addition calculation based on the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine a gradient intensity value corresponding to the candidate pixel.
  • the reconstructed sample value of the reference pixel includes at least one of the following: the reconstructed sample value of the first color component of the reference pixel; the reconstructed sample value of the second color component of the reference pixel.
  • the second determination unit 2010 is further configured to, when the reconstructed sample value of the reference pixel is the reconstructed sample value of the first color component of the reference pixel, determine at least one intra-frame prediction mode and at least one gradient intensity value corresponding to the first color component of the reference pixel; when the reconstructed sample value of the reference pixel is the reconstructed sample value of the second color component of the reference pixel, determine at least one intra-frame prediction mode and at least one gradient intensity value corresponding to the second color component of the reference pixel;
  • the second constructing unit 2030 is further configured to form a first set according to at least one intra-frame prediction mode corresponding to the first color component of the reference pixel and at least one intra-frame prediction mode corresponding to the second color component of the reference pixel, and the first set includes at least one reference intra-frame prediction mode with mutually different characteristics;
  • the second determining unit 2010 is further configured to perform cumulative calculation on the gradient intensity values belonging to the same reference intra-frame prediction mode, according to at least one gradient intensity value corresponding to the first color component of the reference pixel and at least one gradient intensity value corresponding to the second color component of the reference pixel, to determine a gradient intensity value corresponding to at least one reference intra-frame prediction mode;
  • the second determining unit 2010 is further configured to determine a first parameter according to at least one reference intra-frame prediction mode and a gradient strength value corresponding to the at least one reference intra-frame prediction mode.
  • the second determination unit 2010 is further configured to form a second set according to the gradient strength values corresponding to at least one reference intra-frame prediction mode; and if the gradient strength values in the second set are all zero, the reference intra-frame prediction mode parameters are determined according to the PLANAR mode; if there are non-zero items in the gradient strength values in the second set, the maximum gradient strength value is determined from the second set, and the reference intra-frame prediction mode parameters are determined according to the intra-frame prediction mode corresponding to the maximum gradient strength value.
  • the second determination unit 2010 is further configured to assign the maximum gradient strength value in the second set to -1 to determine the third set; if the intra-frame prediction mode corresponding to the maximum gradient strength value is the same as the first color component prediction mode of the first color component block at the same position of the reference block, then a new maximum gradient strength value is determined from the third set, and the reference intra-frame prediction mode parameters are determined according to the intra-frame prediction mode corresponding to the new maximum gradient strength value.
  • the second determination unit 2010 is further configured to determine the reference intra prediction mode parameters according to the DC mode if the intra prediction mode corresponding to the new maximum gradient strength value is the same as the first color component prediction mode of the first color component block at the same position of the reference block.
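The selection rule described in the three items above can be sketched as follows. This is a minimal illustration only: the list-based histogram representation and the PLANAR_IDX/DC_IDX index values (0 and 1, following the usual VVC convention) are assumptions, not taken from the application.

```python
# Hedged sketch: pick the intra mode with the largest accumulated gradient
# intensity; fall back to PLANAR when all intensities are zero, and avoid the
# first color component prediction mode of the co-located block as described.
PLANAR_IDX, DC_IDX = 0, 1  # assumed index values


def select_reference_mode(hist, colocated_luma_mode):
    """hist: list of gradient intensities indexed by intra prediction mode."""
    if all(v == 0 for v in hist):
        return PLANAR_IDX
    best = max(range(len(hist)), key=lambda m: hist[m])
    if best != colocated_luma_mode:
        return best
    reduced = list(hist)
    reduced[best] = -1  # assign the colliding maximum to -1, as described
    second = max(range(len(reduced)), key=lambda m: reduced[m])
    return second if second != colocated_luma_mode else DC_IDX
```

For example, with intensities [0, 0, 5, 9] and a co-located luma mode of 3, the maximum (mode 3) collides, so the sketch excludes it and returns mode 2.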
  • the second determination unit 2010 is further configured to determine at least one target pixel adjacent to the current block; determine at least one first target block based on the block where the at least one target pixel is located; and determine a reference block of the current block based on the at least one first target block.
  • the second determination unit 2010 is further configured to take the at least one first target block as reference blocks in sequence according to a first preset order and determine the reference intra-frame prediction mode parameter of each first target block; the second construction unit 2030 is further configured to construct a mode candidate list for the second color component of the current block based on the reference intra-frame prediction mode parameters of the at least one first target block.
  • the second determination unit 2010 is further configured to determine a first color component area at the same position as the current block; determine at least one second target block at a preset position from at least one block divided from the first color component area; and determine a reference block of the current block based on the at least one second target block.
  • the second determination unit 2010 is further configured to sequentially determine the first color component prediction mode parameter of each of the at least one second target block based on a preset order of the at least one second target block;
  • the second construction unit 2030 is further configured to construct a mode candidate list of the second color component of the current block according to the reference intra-frame prediction mode parameters of at least one first target block and the first color component prediction mode parameters of at least one second target block.
  • the second determination unit 2010 is further configured to determine the first two prediction modes in the mode candidate list; perform an offset operation on the mode index numbers of the first two prediction modes to determine at least one new intra-frame prediction mode; and place at least one new intra-frame prediction mode in the mode candidate list.
  • the decoder 2000 may further include a second adjustment unit 2040 configured to sequentially adjust the prediction modes in the mode candidate list.
  • the decoder 2000 may further include a decoding unit 2050 configured to parse the bitstream and determine the mode index number of the second color component of the current block;
  • the second determining unit 2010 is further configured to determine the target prediction mode corresponding to the mode index number according to the mode candidate list;
  • the second prediction unit 2020 is further configured to perform prediction processing on the second color component of the current block using the target prediction mode to determine a predicted value of the second color component of the current block.
  • the decoding unit 2050 is further configured to parse the bitstream to determine a prediction difference value of a second color component of the current block;
  • the second determination unit 2010 is further configured to determine a reconstructed value of the second color component of the current block according to the predicted value of the second color component of the current block and the predicted difference value of the second color component of the current block.
  • a "unit" can be a part of a circuit, a part of a processor, a part of a program or software, etc., and of course it can also be a module, or it can be non-modular.
  • the components in this embodiment can be integrated into a processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional module.
  • if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • this embodiment provides a computer-readable storage medium, which is applied to the decoder 2000, and the computer-readable storage medium stores a computer program. When the computer program is executed by the second processor, the method described in any one of the above embodiments is implemented.
  • the decoder 2000 may include: a second communication interface 2110, a second memory 2120 and a second processor 2130; each component is coupled together through a second bus system 2140.
  • the second bus system 2140 is used to achieve connection and communication between these components.
  • the second bus system 2140 also includes a power bus, a control bus and a status signal bus.
  • for clarity, the various buses are labeled as the second bus system 2140 in Figure 21.
  • the second communication interface 2110 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the second memory 2120 is used to store a computer program that can be run on the second processor 2130;
  • the second processor 2130 is configured to execute the foregoing decoding method when running the computer program;
  • the second processor 2130 is further configured to execute any one of the methods described in the foregoing embodiments when running the computer program.
  • the present embodiment provides a decoder in which relevant parameter analysis is performed on reference blocks adjacent to the current block, thereby determining reference intra-frame prediction mode parameters of non-CCLM; based on the reference intra-frame prediction mode parameters, the completeness and diversity of the intra-frame chrominance prediction mode can be improved, thereby improving the accuracy of the intra-frame chrominance prediction, and also improving the decoding efficiency, thereby improving the decoding performance.
  • a schematic diagram of the composition structure of a coding and decoding system provided in an embodiment of the present application is shown.
  • a coding and decoding system 2200 may include an encoder 2210 and a decoder 2220 .
  • the encoder 2210 may be the encoder described in any one of the aforementioned embodiments
  • the decoder 2220 may be the decoder described in any one of the aforementioned embodiments.
  • a reference block of the current block is determined; wherein the reference block is an adjacent block of the current block; when the prediction mode of the second color component of the reference block meets the first condition, the reference intra-frame prediction mode parameters are determined according to the reference block; and the prediction value of the second color component of the current block is determined according to the reference intra-frame prediction mode parameters.
  • a reference block of the current block is determined; wherein the reference block is an adjacent block of the current block; when the prediction mode of the second color component of the reference block meets the first condition, the reference intra-frame prediction mode parameters are determined according to the reference block; and the prediction value of the second color component of the current block is determined according to the reference intra-frame prediction mode parameters; and the prediction difference of the second color component of the current block is determined according to the prediction value of the second color component of the current block.
  • the reference intra-frame prediction mode parameters of the non-CCLM can be determined; according to the reference intra-frame prediction mode parameters, the completeness and diversity of the intra-frame chrominance prediction mode can be improved, thereby improving the accuracy of the intra-frame chrominance prediction, and also improving the encoding and decoding efficiency, thereby improving the encoding and decoding performance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present application disclose a coding/decoding method, a bitstream, an encoder, a decoder, and a storage medium. The method includes: determining a reference block of the current block, where the reference block is a neighboring block of the current block; when the prediction mode of the second color component of the reference block satisfies a first condition, determining a reference intra prediction mode parameter according to the reference block; and determining a prediction value of the second color component of the current block according to the reference intra prediction mode parameter. In this way, based on the reference intra prediction mode parameter, the completeness and diversity of intra chroma prediction modes can be improved, thereby improving the accuracy of intra chroma prediction as well as the coding/decoding efficiency, and in turn the coding/decoding performance.

Description

Coding and decoding method, bitstream, encoder, decoder, and storage medium

Technical Field
The embodiments of the present application relate to the field of video coding and decoding technology, and in particular to a coding and decoding method, a bitstream, an encoder, a decoder, and a storage medium.
Background
As people's requirements for video display quality increase, new video application forms such as high-definition and ultra-high-definition video have emerged. The Joint Video Exploration Team (JVET) of the international standards organizations ISO/IEC and ITU-T has formulated the video coding standard H.266/Versatile Video Coding (VVC).
In H.266/VVC, cross-component prediction technology mainly includes the Cross-Component Linear Model (CCLM) mode. However, for modes other than CCLM (i.e., non-CCLM modes), the construction process of the candidate list is incomplete, resulting in poor intra chroma prediction and reduced coding/decoding efficiency.
Summary
Embodiments of the present application provide a coding/decoding method, a bitstream, an encoder, a decoder, and a storage medium, which can not only improve the accuracy of intra chroma prediction but also improve coding/decoding efficiency, thereby improving coding/decoding performance.
The technical solutions of the embodiments of the present application can be implemented as follows:
In a first aspect, an embodiment of the present application provides a decoding method, including:
determining a reference block of the current block, where the reference block is a neighboring block of the current block;
when the prediction mode of the second color component of the reference block satisfies a first condition, determining a reference intra prediction mode parameter according to the reference block;
determining a prediction value of the second color component of the current block according to the reference intra prediction mode parameter.
In a second aspect, an embodiment of the present application provides an encoding method, including:
determining a reference block of the current block, where the reference block is a neighboring block of the current block;
when the prediction mode of the second color component of the reference block satisfies a first condition, determining a reference intra prediction mode parameter according to the reference block;
determining a prediction value of the second color component of the current block according to the reference intra prediction mode parameter;
determining a prediction difference value of the second color component of the current block according to the prediction value of the second color component of the current block.
In a third aspect, an embodiment of the present application provides a bitstream, which is generated by bit-encoding information to be encoded, where the information to be encoded includes at least one of the following:
the prediction difference value of the second color component of the current block, a mode index number, and a first parameter.
In a fourth aspect, an embodiment of the present application provides an encoder, including a first determination unit and a first prediction unit, wherein:
the first determination unit is configured to determine a reference block of the current block, where the reference block is a neighboring block of the current block, and, when the prediction mode of the second color component of the reference block satisfies a first condition, to determine a reference intra prediction mode parameter according to the reference block;
the first prediction unit is configured to determine a prediction value of the second color component of the current block according to the reference intra prediction mode parameter;
the first determination unit is further configured to determine a prediction difference value of the second color component of the current block according to the prediction value of the second color component of the current block.
In a fifth aspect, an embodiment of the present application provides an encoder, including a first memory and a first processor, wherein:
the first memory is used to store a computer program capable of running on the first processor;
the first processor is used to execute the method described in the second aspect when running the computer program.
In a sixth aspect, an embodiment of the present application provides a decoder, including a second determination unit and a second prediction unit, wherein:
the second determination unit is configured to determine a reference block of the current block, where the reference block is a neighboring block of the current block, and, when the prediction mode of the second color component of the reference block satisfies a first condition, to determine a reference intra prediction mode parameter according to the reference block;
the second prediction unit is configured to determine a prediction value of the second color component of the current block according to the reference intra prediction mode parameter.
In a seventh aspect, an embodiment of the present application provides a decoder, including a second memory and a second processor, wherein:
the second memory is used to store a computer program capable of running on the second processor;
the second processor is used to execute the method described in the first aspect when running the computer program.
In an eighth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed, implements the method described in the first aspect or the method described in the second aspect.
Embodiments of the present application provide a coding/decoding method, a bitstream, an encoder, a decoder, and a storage medium. Whether at the encoding end or the decoding end, a reference block of the current block is determined, the reference block being a neighboring block of the current block; when the prediction mode of the second color component of the reference block satisfies the first condition, the reference intra prediction mode parameter is determined according to the reference block; and the prediction value of the second color component of the current block is determined according to the reference intra prediction mode parameter. In this way, at the encoding end, the prediction difference value of the second color component of the current block can be determined according to its prediction value; at the decoding end, the reconstructed value of the second color component of the current block can be determined according to its prediction value. That is, by analyzing the relevant parameters of the reference blocks neighboring the current block, the non-CCLM reference intra prediction mode parameters can be determined; according to these reference intra prediction mode parameters, the completeness and diversity of intra chroma prediction modes can be improved, thereby improving the accuracy of intra chroma prediction, as well as the coding/decoding efficiency and in turn the coding/decoding performance.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a positional relationship between a luma CU and a chroma CU provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of another positional relationship between a luma CU and a chroma CU provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of yet another positional relationship between a luma CU and a chroma CU provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of yet another positional relationship between a luma CU and a chroma CU provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of the position distribution of reference chroma pixels neighboring the current block provided in an embodiment of the present application;
FIG. 6 is a schematic block diagram of an encoder provided in an embodiment of the present application;
FIG. 7 is a schematic block diagram of a decoder provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a network architecture of a coding/decoding system provided in an embodiment of the present application;
FIG. 9 is a first schematic flowchart of a decoding method provided in an embodiment of the present application;
FIG. 10A is a first schematic diagram of the position distribution of reference chroma pixels provided in an embodiment of the present application;
FIG. 10B is a first schematic diagram of the position distribution of reference luma pixels provided in an embodiment of the present application;
FIG. 11A is a second schematic diagram of the position distribution of reference chroma pixels provided in an embodiment of the present application;
FIG. 11B is a second schematic diagram of the position distribution of reference luma pixels provided in an embodiment of the present application;
FIG. 12A is a third schematic diagram of the position distribution of reference chroma pixels provided in an embodiment of the present application;
FIG. 12B is a third schematic diagram of the position distribution of reference luma pixels provided in an embodiment of the present application;
FIG. 13A is a fourth schematic diagram of the position distribution of reference chroma pixels provided in an embodiment of the present application;
FIG. 13B is a fourth schematic diagram of the position distribution of reference luma pixels provided in an embodiment of the present application;
FIG. 14 is a schematic histogram of gradient intensity values corresponding to intra prediction modes provided in an embodiment of the present application;
FIG. 15 is a second schematic flowchart of a decoding method provided in an embodiment of the present application;
FIG. 16 is a third schematic flowchart of a decoding method provided in an embodiment of the present application;
FIG. 17 is a schematic flowchart of an encoding method provided in an embodiment of the present application;
FIG. 18 is a schematic diagram of the composition structure of an encoder provided in an embodiment of the present application;
FIG. 19 is a schematic diagram of the specific hardware structure of an encoder provided in an embodiment of the present application;
FIG. 20 is a schematic diagram of the composition structure of a decoder provided in an embodiment of the present application;
FIG. 21 is a schematic diagram of the specific hardware structure of a decoder provided in an embodiment of the present application;
FIG. 22 is a schematic diagram of the composition structure of a coding/decoding system provided in an embodiment of the present application.
Detailed Description
To understand the features and technical content of the embodiments of the present application in more detail, the implementation of the embodiments of the present application is described in detail below with reference to the accompanying drawings, which are for reference and illustration only and are not intended to limit the embodiments of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments and may be combined with each other without conflict.
It should also be pointed out that the terms "first/second/third" in the embodiments of the present application are only used to distinguish similar objects and do not imply a specific ordering of the objects. Understandably, "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments described here can be implemented in an order other than that illustrated or described here.
In a video picture, a coding block (CB) is generally characterized by a first color component, a second color component, and a third color component. These three color components are a luma component, a blue chroma component, and a red chroma component, respectively. Specifically, the luma component is usually denoted by the symbol Y, the blue chroma component by Cb or U, and the red chroma component by Cr or V; thus, a video picture can be represented in YCbCr format or in YUV format.
It can be understood that in current video picture or video coding/decoding processes, cross-component prediction technology mainly includes the Cross-component Linear Model (CCLM) prediction mode and the Multi-Directional Linear Model (MDLM) prediction mode. Whether the model factors are derived from the CCLM prediction mode or from the MDLM prediction mode, the corresponding prediction model can realize prediction between color components, such as from the first color component to the second, the second to the first, the first to the third, the third to the first, the second to the third, or the third to the second.
Taking prediction from the first color component to the second color component as an example, assume the first color component is the luma component and the second color component is the chroma component. To reduce the redundancy between the luma component and the chroma component, the CCLM prediction mode is used in VVC, i.e., the chroma prediction value is constructed from the luma reconstruction value of the same coding block, e.g.: Pred_C(i,j) = α·Rec_L(i,j) + β.
Here, i and j denote the position coordinates of the pixel to be predicted in the coding block, i in the horizontal direction and j in the vertical direction; Pred_C(i,j) denotes the chroma prediction value of the pixel at position (i,j) in the coding block, and Rec_L(i,j) denotes the (down-sampled) luma reconstruction value of the pixel at position (i,j) in the same coding block. In addition, α and β are model factors, which can be derived from reference pixels.
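Applying the linear model Pred_C(i,j) = α·Rec_L(i,j) + β once α and β have been derived can be sketched as follows. This is a minimal illustration under assumptions (function and variable names are illustrative, and clipping to the sample range is added for completeness; it is not part of the formula above):

```python
# Hedged sketch: apply the CCLM linear model to a down-sampled luma
# reconstruction block to produce a chroma prediction block.
def cclm_predict(rec_luma_ds, alpha, beta, bit_depth=10):
    """rec_luma_ds: 2-D list of down-sampled luma reconstruction samples."""
    max_val = (1 << bit_depth) - 1
    pred_chroma = []
    for row in rec_luma_ds:
        # Pred_C = alpha * Rec_L + beta, clipped to the valid sample range.
        pred_chroma.append(
            [min(max_val, max(0, int(alpha * v + beta))) for v in row]
        )
    return pred_chroma
```

For example, with α = 0.5 and β = 64, a luma sample of 512 maps to a predicted chroma sample of 320.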
In H.266/VVC there can be multiple intra chroma prediction modes, for example the INTRA_LT_CCLM, INTRA_L_CCLM and INTRA_T_CCLM modes, also called cross-component linear model (CCLM) modes; and, for example, the PLANAR mode, DC mode, ANGULAR18 mode, ANGULAR50 mode, and DM (direct mode) mode, which five intra chroma prediction modes are also called non-CCLM modes. In the DM mode, the intra chroma prediction mode can be set equal to the intra luma prediction mode.
For example, in ITU-T H.266, referring to Table 1, different values of cclm_mode_flag can correspond to different intra chroma prediction modes (chroma intra mode). For example, when cclm_mode_flag equals 0, the intra chroma prediction mode is a non-CCLM mode; when cclm_mode_flag equals 1, the intra chroma prediction mode is a CCLM mode.
Table 1
Figure PCTCN2022130727-appb-000001
It should be noted that the DM mode in the embodiments of the present application refers to the case where cclm_mode_flag equals 0 and intra_chroma_pred_mode equals 4, i.e., the intra chroma prediction mode index number is set directly equal to the intra luma prediction mode index number. When cclm_mode_flag equals 0 and intra_chroma_pred_mode takes a value of 0-3, the intra chroma prediction mode index numbers of these four ways can also be determined according to the intra luma prediction mode index number; the difference from the "DM mode" is that the mapping is not one-to-one.
It should also be noted that in the embodiments of the present application, the luma component of the current block may be referred to simply as the luma block, and the chroma component of the current block simply as the chroma block. At least one coding unit (CU) can be partitioned in the luma block, referred to herein as a "luma CU"; at least one coding unit can also be partitioned in the chroma block, referred to herein as a "chroma CU".
In addition, the DM mode refers to directly using the luma prediction mode information of the corresponding position. When an I frame uses dual-tree partitioning, the luma block (the entire hatched area in the left figure) and the chroma block (the entire hatched area in the right figure) are allowed to use independent block partitioning structures. In this case the luma component at the position corresponding to the chroma CU may contain multiple luma CUs, as shown in FIG. 1; in H.266/VVC, the chroma CU inherits the intra prediction mode of the CU at the center position of the corresponding luma block. When single-tree partitioning is used, the luma block (the entire hatched area in the left figure) and the chroma block (the entire hatched area in the right figure) use the same block partitioning structure, and the luma component at the position corresponding to the chroma CU contains only one luma CU, as shown in FIG. 2.
In a possible embodiment, to better predict the chroma component, it is proposed here to add multiple traditional prediction modes through a certain derivation process to supplement the completeness of the selectable modes of existing chroma prediction. The overall flow of this technical solution is introduced below.
Here, MaxChromaCandidateListNum chroma prediction modes are used to replace the original five non-CCLM modes. These MaxChromaCandidateListNum chroma prediction modes are added in sequence according to the following flow, and the mutual distinctness of the modes is guaranteed. MaxChromaCandidateListNum is a preset number representing the maximum number of modes that the chroma candidate list can store.
It should be noted that the adding operations involved in the following steps first establish a list of length MaxChromaCandidateListNum, and each mode is added to the list in the following order; after the list is built, it may also be adjusted (including adjustments in order and in modes). Three cases can be distinguished. In the first case, if the list is constructed in the order of the following steps and needs no adjustment after construction, the encoder needs to build the list, while the decoder only needs to construct the mode indicated by the list index transmitted in the bitstream. In the second case, if the list is constructed in the order of the following steps and needs adjustment after construction, both the encoder and the decoder need to build the complete list, and after adjustment the decoder selects the mode corresponding to the list index transmitted in the bitstream. In the third case, for the encoder of the first case, a complete list needs to be built for rate-distortion optimization, but a fast algorithm may exist, including but not limited to performing rate-distortion optimization only for the first few modes, in which case the encoder does not need to build the full mode list.
It should also be noted that the adding order of the following steps and the position scanning order within each step include but are not limited to the orders described below. The following steps are given only as examples.
(1) Taking dual-tree partitioning as an example, according to the positions shown in FIG. 3, the intra luma prediction modes of the CUs in which C, TL, TR, BL and BR of the co-located luma block corresponding to the current chroma coding block are located are added in order.
The detailed position derivation of C, TL, TR, BL and BR is given below:
Assume that the position of the co-located luma pixel corresponding to the top-left corner of the current block, relative to the top-left luma pixel of the picture (i.e., the position of luma pixel TL), is (xCb, yCb), and the co-located luma region corresponding to the current block (the entire hatched area in the left figure) has width cbWidth and height cbHeight.
The coordinates of luma pixel C are (xCb + cbWidth/2, yCb + cbHeight/2);
the coordinates of luma pixel TL are (xCb, yCb);
the coordinates of luma pixel TR are (xCb + cbWidth - 1, yCb);
the coordinates of luma pixel BL are (xCb, yCb + cbHeight - 1);
the coordinates of luma pixel BR are (xCb + cbWidth - 1, yCb + cbHeight - 1).
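The five position formulas above can be sketched as a small helper (the function name is illustrative; the coordinate arithmetic follows the formulas above directly):

```python
# Hedged sketch: derive the co-located luma positions C, TL, TR, BL, BR from
# the top-left luma position (xCb, yCb) and the region size cbWidth x cbHeight.
def colocated_luma_positions(xCb, yCb, cbWidth, cbHeight):
    return {
        "C":  (xCb + cbWidth // 2, yCb + cbHeight // 2),
        "TL": (xCb, yCb),
        "TR": (xCb + cbWidth - 1, yCb),
        "BL": (xCb, yCb + cbHeight - 1),
        "BR": (xCb + cbWidth - 1, yCb + cbHeight - 1),
    }
```

For example, for a 16x8 co-located luma region whose top-left corner is at (64, 32), the center position C is (72, 36).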
It can be understood that the specific derivation rules of the luma prediction mode of the co-located luma block are as follows:
If the partition tree type treeType is single tree (SINGLE_TREE), as shown in FIG. 4, the process of adding the luma prediction mode of the CU in which C of step (1) is located is performed;
if the partition tree type treeType is dual tree (DUAL_TREE), as shown in FIG. 3, the following operations are performed:
Taking the derivation of the luma prediction mode of the CU at the position of luma pixel C as an example, assume that the position of the co-located luma pixel corresponding to the top-left corner of the current block, relative to the top-left luma pixel of the picture (i.e., the position of luma pixel TL), is (xCb, yCb), and the co-located luma region corresponding to the current block (the entire hatched area in the left figure) has width cbWidth and height cbHeight. The derivation of the prediction mode lumaIntraPredMode of the corresponding co-located luma block is as follows:
- Determine whether the CU in which the luma sample at the center position of the co-located luma region in FIG. 3 (i.e., C) is located uses MIP mode. Here, the center position refers to the luma sample with coordinates (xCb + cbWidth/2, yCb + cbHeight/2).
If IntraMipFlag[xCb + cbWidth/2][yCb + cbHeight/2] is 1, then lumaIntraPredMode = INTRA_PLANAR; the array IntraMipFlag[x][y] indicates whether the current block containing the pixel at coordinates (x, y) uses MIP mode.
- Otherwise, if CuPredMode[0][xCb + cbWidth/2][yCb + cbHeight/2] is MODE_IBC or MODE_PLT, then lumaIntraPredMode = INTRA_DC; the array CuPredMode[chType][x][y] indicates the prediction mode used by the luma block or chroma block containing the pixel at coordinates (x, y), where chType of 0 refers to the luma component and chType of 1 refers to the chroma component.
- Otherwise, lumaIntraPredMode = IntraPredModeY[xCb + cbWidth/2][yCb + cbHeight/2]; the array IntraPredModeY[x][y] indicates the intra prediction mode used by the current block containing the pixel at coordinates (x, y). For example, Table 2 shows the mapping between the chroma subsampling formats of digital video and sps_chroma_format_idc.
Table 2
sps_chroma_format_idc    Chroma subsampling format
0                        Monochrome
1                        4:2:0
2                        4:2:2
3                        4:4:4
The specific derivation rules for converting the luma prediction mode of the co-located luma block into a chroma prediction mode are as follows:
when sps_chroma_format_idc is 0, the intra chroma prediction mode is not needed, so this derivation rule does not exist;
when sps_chroma_format_idc is 2, mode Y of the intra chroma prediction mode can be derived from mode X of the intra luma prediction mode lumaIntraPredMode as specified in Table 3;
otherwise, the intra chroma prediction mode is equal to the intra luma prediction mode lumaIntraPredMode.
For example, Table 3 shows the mapping between intra luma prediction mode X and intra chroma prediction mode Y, as follows.
Table 3
Figure PCTCN2022130727-appb-000002
(2) According to the positions shown in FIG. 5, the intra chroma prediction modes of the already coded/decoded chroma blocks at the positions of the reference chroma pixels 0, 1, 2, 3 and 4 neighboring the current block are added in order.
The detailed position derivation of the neighboring chroma pixels 0, 1, 2, 3 and 4 is explained below:
Assume that the position of the top-left chroma pixel of the current chroma block (the entire hatched area) relative to the top-left chroma pixel of the picture is (xCb, yCb), and the current chroma block has width cbWidth and height cbHeight. Then:
the position of chroma pixel 0 is (xCb - 1, yCb + cbHeight - 1);
the position of chroma pixel 1 is (xCb + cbWidth - 1, yCb - 1);
the position of chroma pixel 2 is (xCb - 1, yCb + cbHeight);
the position of chroma pixel 3 is (xCb + cbWidth, yCb - 1);
the position of chroma pixel 4 is (xCb - 1, yCb - 1).
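The five neighboring pixel positions above can be sketched as follows (the function name is illustrative; the arithmetic follows the formulas above directly):

```python
# Hedged sketch: neighboring chroma pixel positions 0..4 relative to the
# current chroma block's top-left pixel (xCb, yCb).
def neighbor_chroma_positions(xCb, yCb, cbWidth, cbHeight):
    return [
        (xCb - 1, yCb + cbHeight - 1),  # pixel 0: left of the bottom row
        (xCb + cbWidth - 1, yCb - 1),   # pixel 1: above the right column
        (xCb - 1, yCb + cbHeight),      # pixel 2: below-left
        (xCb + cbWidth, yCb - 1),       # pixel 3: above-right
        (xCb - 1, yCb - 1),             # pixel 4: above-left
    ]
```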
It can be understood that the specific rules for converting the chroma prediction mode of a neighboring already coded/decoded chroma block into a traditional chroma prediction mode are as follows:
when the prediction mode of the neighboring coded/decoded chroma block is an inter mode, no adding operation is performed;
when the chroma prediction mode of the neighboring coded/decoded chroma block is a CCLM mode, no adding operation is performed;
otherwise, the intra chroma prediction mode of the neighboring coded/decoded chroma block is added directly.
(3) The modes whose mode index values are +1 or -1 relative to the first two modes added in steps (1) and (2) above are added. Two ways exist here. The first way is to determine whether the mode is an angular mode; if it is, add the angular modes obtained by offsetting the angle mapped by this angular mode clockwise or counterclockwise by one minimum angle unit; if it is not an angular mode, do nothing. The second way is to directly add +1 or -1 to the corresponding mode index value: if the mode index value is 0, only the mode with mode index value 1 is added; if the mode index value is the maximum mode index, only the mode with mode index value equal to the maximum mode index value minus 1 is added. In addition, if only one mode exists, only the two angular modes offset from this mode are added; if it is a non-angular mode, step (3) is not performed.
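The second variant of step (3), offsetting a mode index by +1/-1 with the boundary handling described above, can be sketched as follows (a minimal illustration; the function name and the max_mode_idx parameter are assumptions):

```python
# Hedged sketch: derive extra candidate modes by offsetting a mode index by
# +1 / -1, with the boundary cases handled as described in step (3).
def offset_modes(mode_idx, max_mode_idx):
    if mode_idx == 0:
        return [1]                 # lower boundary: only add index 1
    if mode_idx == max_mode_idx:
        return [max_mode_idx - 1]  # upper boundary: only add max - 1
    return [mode_idx - 1, mode_idx + 1]
```

For example, with a maximum mode index of 66, mode 18 yields the two candidates 17 and 19, while mode 0 yields only mode 1.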
(4) A preset default non-CCLM mode list is added (the default list includes but is not limited to the modes described later). For example, the modes in the list are PLANAR_IDX, VER_IDX, HOR_IDX, DC_IDX, VDIA_IDX, VER_IDX - 4, VER_IDX + 4, HOR_IDX - 4, and HOR_IDX + 4.
In short, in the related art, a set of non-CCLM intra prediction modes is used to predict the chroma component of the current block, and the CCLM modes of neighboring already coded/decoded chroma blocks are ignored. This has some defects. For example, the CCLM mode, which uses the linear relationship between components for prediction, performs well in coding/decoding blocks of different content characteristics, so many chroma coding/decoding blocks in one picture use CCLM modes. However, the construction process of the non-CCLM intra prediction mode list ignores the CCLM modes of the neighboring coded/decoded chroma blocks and loses part of the spatial correlation; that is, the construction process of the existing non-CCLM chroma prediction mode list is incomplete, resulting in poor chroma prediction.
On this basis, an embodiment of the present application provides a decoding method: determining a reference block of the current block, where the reference block is a neighboring block of the current block; when the prediction mode of the second color component of the reference block satisfies a first condition, determining a reference intra prediction mode parameter according to the reference block; and determining a prediction value of the second color component of the current block according to the reference intra prediction mode parameter.
An embodiment of the present application further provides an encoding method: determining a reference block of the current block, where the reference block is a neighboring block of the current block; when the prediction mode of the second color component of the reference block satisfies a first condition, determining a reference intra prediction mode parameter according to the reference block; determining a prediction value of the second color component of the current block according to the reference intra prediction mode parameter; and determining a prediction difference value of the second color component of the current block according to the prediction value of the second color component of the current block.
In this way, at both the encoding end and the decoding end, by analyzing the relevant parameters of the reference blocks neighboring the current block, the non-CCLM reference intra prediction mode parameters can be determined; according to these reference intra prediction mode parameters, the completeness and diversity of intra chroma prediction modes can be improved, thereby improving the accuracy of intra chroma prediction as well as the coding/decoding efficiency, and in turn the coding/decoding performance.
The embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to FIG. 6, a schematic block diagram of an encoder provided in an embodiment of the present application is shown. As shown in FIG. 6, the encoder (specifically a "video encoder") 100 may include a transform and quantization unit 101, an intra estimation unit 102, an intra prediction unit 103, a motion compensation unit 104, a motion estimation unit 105, an inverse transform and inverse quantization unit 106, a filter control analysis unit 107, a filtering unit 108, a coding unit 109 and a decoded picture buffer unit 110, etc. The filtering unit 108 can implement deblocking filtering and sample adaptive offset (SAO) filtering, and the coding unit 109 can implement header information coding and context-based adaptive binary arithmetic coding (CABAC). For an input original video signal, a video coding block can be obtained through the partitioning of coding tree units (CTU); the residual pixel information obtained after intra or inter prediction is then transformed by the transform and quantization unit 101, including transforming the residual information from the pixel domain to the transform domain, and the resulting transform coefficients are quantized to further reduce the bit rate. The intra estimation unit 102 and the intra prediction unit 103 are used to intra-predict the video coding block; specifically, they are used to determine the intra prediction mode to be used to encode the video coding block. The motion compensation unit 104 and the motion estimation unit 105 are used to perform inter prediction coding of the received video coding block relative to one or more blocks in one or more reference frames to provide temporal prediction information; the motion estimation performed by the motion estimation unit 105 is the process of generating motion vectors that can estimate the motion of the video coding block, after which the motion compensation unit 104 performs motion compensation based on the motion vectors determined by the motion estimation unit 105. After determining the intra prediction mode, the intra prediction unit 103 is also used to provide the selected intra prediction data to the coding unit 109, and the motion estimation unit 105 also sends the calculated motion vector data to the coding unit 109. In addition, the inverse transform and inverse quantization unit 106 is used for reconstruction of the video coding block, reconstructing the residual block in the pixel domain; blocking artifacts of the reconstructed residual block are removed through the filter control analysis unit 107 and the filtering unit 108, and the reconstructed residual block is then added to a predictive block in a frame of the decoded picture buffer unit 110 to generate the reconstructed video coding block. The coding unit 109 is used to encode various coding parameters and the quantized transform coefficients; in the CABAC-based coding algorithm, the context content can be based on neighboring coding blocks and can be used to encode information indicating the determined intra prediction mode, outputting the bitstream of the video signal. The decoded picture buffer unit 110 is used to store the reconstructed video coding blocks for prediction reference. As video picture coding proceeds, new reconstructed video coding blocks are continuously generated, and all of these reconstructed video coding blocks are stored in the decoded picture buffer unit 110.
Referring to FIG. 7, a schematic block diagram of a decoder provided in an embodiment of the present application is shown. As shown in FIG. 7, the decoder (specifically a "video decoder") 200 includes a decoding unit 201, an inverse transform and inverse quantization unit 202, an intra prediction unit 203, a motion compensation unit 204, a filtering unit 205 and a decoded picture buffer unit 206, etc. The decoding unit 201 can implement header information decoding and CABAC decoding, and the filtering unit 205 can implement deblocking filtering and SAO filtering. After the input video signal undergoes the encoding process of FIG. 6, the bitstream of the video signal is output; the bitstream is input into the decoder 200 and first passes through the decoding unit 201 to obtain the decoded transform coefficients. The transform coefficients are processed by the inverse transform and inverse quantization unit 202 so as to generate a residual block in the pixel domain. The intra prediction unit 203 can be used to generate prediction data of the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture. The motion compensation unit 204 determines the prediction information for the video decoding block by parsing motion vectors and other associated syntax elements, and uses the prediction information to generate the predictive block of the video decoding block being decoded. A decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 202 and the corresponding predictive block generated by the intra prediction unit 203 or the motion compensation unit 204. The decoded video signal passes through the filtering unit 205 to remove blocking artifacts, which can improve video quality; the decoded video blocks are then stored in the decoded picture buffer unit 206, which stores reference pictures for subsequent intra prediction or motion compensation and is also used for output of the video signal, i.e., the restored original video signal is obtained.
Further, an embodiment of the present application also provides a network architecture of a coding/decoding system including an encoder and a decoder, where FIG. 8 shows a schematic diagram of the network architecture of a coding/decoding system provided in an embodiment of the present application. As shown in FIG. 8, the network architecture includes one or more electronic devices 13 to 1N and a communication network 01, where the electronic devices 13 to 1N can perform video interaction through the communication network 01. In implementation, the electronic device can be any of various types of devices with video coding/decoding functions; for example, the electronic device can include a smartphone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital telephone, a video telephone, a television set, a sensing device, a server, etc., which is not specifically limited in the embodiments of the present application. Here, the decoder or encoder described in the embodiments of the present application may be the above electronic device.
It should be noted that the method of the embodiments of the present application is mainly applied to the intra prediction unit 103 shown in FIG. 6 and the intra prediction unit 203 shown in FIG. 7. That is, the embodiments of the present application can be applied to an encoder, to a decoder, or even to both an encoder and a decoder at the same time, which is not specifically limited.
It should also be noted that when applied to the intra prediction unit 103, the "current block" specifically refers to the coding block currently to be intra-predicted; when applied to the intra prediction unit 203, the "current block" specifically refers to the decoding block currently to be intra-predicted.
In an embodiment of the present application, referring to FIG. 9, which is applied to a decoder and shows a first schematic flowchart of a decoding method provided in an embodiment of the present application. As shown in FIG. 9, the method may include:
S910: determining a reference block of the current block.
It should be noted that the decoding method of the embodiment of the present application is applied to a decoding apparatus, or a decoding device integrating the decoding apparatus (also simply called a "decoder"). In addition, the decoding method of the embodiment of the present application may specifically refer to an intra prediction method. Assuming the first color component is the luma component and the second color component is the chroma component, more specifically this is a derivation method of an intra chroma prediction mode.
It should also be noted that in the embodiments of the present application, the current block includes at least a first color component and a second color component. For the first color component of the current block, the block at this time may be simply called the first color component block; and when the first color component is the luma component, the first color component block may also be called the luma block. Similarly, for the second color component of the current block, the block may be simply called the second color component block; and when the second color component is the chroma component, the second color component block may also be called the chroma block.
It should also be noted that in the embodiments of the present application, the current block may refer to the decoding block currently to be intra-predicted in the video picture. The reference block is a neighboring block of the current block; "neighboring" here may refer to spatial neighboring, temporal neighboring, etc., which is not specifically limited. Thus, when the method is applied to a decoder, the reference block of the current block may be a neighboring already-decoded block of the current block.
For example, in the embodiments of the present application, when the method is applied to a decoder, taking chroma component prediction as an example, the reference chroma block is a neighboring already-decoded chroma block of the current block.
S920: when the prediction mode of the second color component of the reference block satisfies a first condition, determining a reference intra prediction mode parameter according to the reference block.
It should be noted that in the embodiments of the present application, if the prediction mode of the second color component of the reference block satisfies the first condition, the reference intra prediction mode parameter can be derived from the reference block.
In some embodiments, the first condition may include: the prediction mode of the second color component of the reference block is a first preset mode.
In one possible implementation, the first preset mode may be a non-angular prediction mode. For example, the first preset mode may include at least one of the following: an inter-component prediction mode, IBC mode, MIP mode, and Palette mode.
It should be understood that in the embodiments of the present application, the inter-component prediction mode may be the CCLM mode.
In another possible implementation, the first preset mode may be an inter prediction mode.
That is, in the embodiments of the present application, if the prediction mode of the second color component of the reference block is the first preset mode, for example the CCLM mode, the reference intra prediction mode parameter can be derived from the reference block of the current block.
In some embodiments, the first condition may include: the prediction mode of the second color component of the reference block is not a second preset mode.
In yet another possible implementation, the second preset mode may be an angular prediction mode.
In yet another possible implementation, the second preset mode may be a traditional prediction mode. For example, the second preset mode may be the DC mode or the Planar mode.
That is, in the embodiments of the present application, if the prediction mode of the second color component of the reference block is not the second preset mode, for example an angular prediction mode, the DC mode or the Planar mode, the reference intra prediction mode parameter can also be derived from the reference block of the current block.
In addition, regarding the intra prediction modes, taking the DC mode and the Planar mode as examples, the DC mode may also be denoted INTRA_DC, and the Planar mode may also be denoted INTRA_PLANAR.
In some embodiments, the first condition may include: decoding the bitstream and determining a first parameter, where the first parameter indicates determining the reference intra prediction mode parameter according to the reference block.
It should also be understood that in the embodiments of the present application, the first parameter may also be written into the bitstream; the decoding end then determines the first parameter by decoding the bitstream, and the first parameter indicates that the reference intra prediction mode parameter needs to be determined according to the reference block.
In some embodiments, determining the reference intra prediction mode parameter according to the reference block may include: determining a reference pixel according to the reference block; determining the first parameter according to the reconstructed sample value of the reference pixel; and determining the reference intra prediction mode parameter according to the first parameter.
In a specific embodiment, determining the reference pixel according to the reference block may include: determining the reference pixel according to pixels in a neighboring area of the reference block, where the neighboring area includes at least one of the following: a left neighboring area, an upper neighboring area, and an upper-left neighboring area.
For example, the left neighboring area of the reference block, the upper neighboring area of the reference block and the upper-left neighboring area of the reference block can all be called neighboring areas of the reference block. Referring to FIG. 10A, the current block is the chroma block, the reference block of the current block is the neighboring already-decoded chroma block, and the neighboring area of this neighboring already-decoded chroma block may be the chroma area composed of multiple dot pixels; referring to FIG. 10B, the reference block is the co-located luma block of the neighboring already-decoded chroma block, and the neighboring area of this co-located luma block may be the luma area composed of multiple dot pixels.
In another specific embodiment, determining the reference pixel according to the reference block may include: determining the reference pixel according to pixels in the reference block.
For example, referring to FIG. 11A, the current block is the chroma block, the reference block of the current block is the neighboring already-decoded chroma block, and the reference pixels are the pixels in this neighboring already-decoded chroma block; referring to FIG. 11B, the reference block is the co-located luma block of the neighboring already-decoded chroma block, and the reference pixels are the pixels in this co-located luma block.
Further, regarding the first parameter, in some embodiments, determining the first parameter according to the reconstructed sample value of the reference pixel may include: performing gradient calculation on the reconstructed sample value of the reference pixel to determine the horizontal gradient value and the vertical gradient value of the reference pixel; performing angle mapping according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one intra prediction mode corresponding to the reference pixel; performing gradient intensity calculation according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one gradient intensity value corresponding to the reference pixel; and determining the first parameter according to the at least one intra prediction mode and the at least one gradient intensity value corresponding to the reference pixel.
Understandably, the reference pixel may include at least one candidate pixel, and each candidate pixel corresponds to one intra prediction mode and one gradient intensity value. Taking any candidate pixel as an example, in some embodiments, performing angle mapping according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one intra prediction mode corresponding to the reference pixel may include: determining the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; performing angle mapping according to the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine the initial mode index value of the candidate pixel; and determining an intra prediction mode corresponding to the candidate pixel according to the initial mode index value of the candidate pixel.
It should be noted that in the embodiments of the present application, the horizontal gradient value of a candidate pixel may be denoted gVer[x][y], and the vertical gradient value gHor[x][y]; the horizontal gradient absolute value may then be denoted abs(gVer[x][y]), and the vertical gradient absolute value abs(gHor[x][y]). Thus, performing angle mapping according to abs(gVer[x][y]) and abs(gHor[x][y]) can determine the initial mode index value of the candidate pixel, denoted angIdx[x][y]; then, according to the value of angIdx[x][y], an intra prediction mode corresponding to the candidate pixel can be determined.
Further, in some embodiments, determining an intra prediction mode corresponding to the candidate pixel according to its initial mode index value may include: compensating the initial mode index value according to a preset angle compensation value to determine the target mode index value of the candidate pixel; and determining an intra prediction mode corresponding to the candidate pixel according to the target mode index value of the candidate pixel.
It should also be noted that in the embodiments of the present application, the preset angle compensation value may be denoted angOffset[region[x][y]], and the target mode index value of the candidate pixel (i.e., the corresponding intra prediction mode) may be denoted ipm[x][y]. Here, the value of ipm[x][y] equals the sum of angOffset[region[x][y]] and angIdx[x][y]. Then, according to the value of ipm[x][y], the corresponding intra prediction mode can be determined.
Understandably, regarding angOffset[region[x][y]], in some embodiments, the method may further include: determining the target quadrant value of the candidate pixel; determining the value corresponding to the target quadrant value under a preset mapping relationship; and setting the preset angle compensation value equal to this value.
It should be noted that in the embodiments of the present application, the target quadrant value of the candidate pixel may be denoted region[x][y], and the preset mapping relationship may be angOffset = {18, 18, 50, 50}. That is, if the target quadrant value equals 0 or 1, the preset angle compensation value may be 18; if the target quadrant value equals 2 or 3, the preset angle compensation value may be 50.
Further, in some embodiments, determining the target quadrant value of the candidate pixel may include: determining a first sign value according to the horizontal gradient value of the candidate pixel; determining a second sign value according to the vertical gradient value of the candidate pixel; determining a comparison value of the candidate pixel according to the comparison result of the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel; and performing quadrant mapping according to the comparison value, the first sign value and the second sign value to determine the target quadrant value corresponding to the candidate pixel.
It should also be noted that in the embodiments of the present application, the first sign value may be expressed as signV[x][y], characterizing whether gVer[x][y] is greater than 0 or less than 0; the second sign value may be expressed as signH[x][y], characterizing whether gHor[x][y] is greater than 0 or less than 0; and the comparison value may be expressed as HgV[x][y], characterizing whether abs(gHor[x][y]) is greater than abs(gVer[x][y]). Thus, the target quadrant value region[x][y] can be determined according to signV[x][y], signH[x][y] and HgV[x][y].
Understandably, in some embodiments, performing gradient intensity calculation according to the horizontal gradient value and the vertical gradient value of the reference pixel to determine at least one gradient intensity value corresponding to the reference pixel may include: performing addition calculation according to the horizontal gradient absolute value and the vertical gradient absolute value of the candidate pixel to determine one gradient intensity value corresponding to the candidate pixel.
In the embodiments of the present application, the gradient intensity value corresponding to a candidate pixel may be denoted iAmp[x][y], and its value equals the sum of abs(gVer[x][y]) and abs(gHor[x][y]).
在本申请实施例中,参考像素的重建样值至少包括下述其中一项:参考像素的第一颜色分量的重建样值;参考像素的第二颜色分量的重建样值。
在一种具体的实施例中,根据参考像素的重建样值,确定第一参数,可以包括:在参考像素的重建样值为参考像素的第一颜色分量的重建样值时,确定参考像素的第一颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;在参考像素的重建样值为参考像素的第二颜色分量的重建样值时,确定参考像素的第二颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;根据参考像素的第一颜色分量对应的至少一种帧内预测模式和参考像素的第二颜色分量对应的至少一种帧内预测模式,组成第一集合,且第一集合包括具有互异特性的至少一种参考帧内预测模式;根据参考像素的第一颜色分量对应的至少一个梯度强度值和参考像素的第二颜色分量对应的至少一个梯度强度值,对归属于同一参考帧内预测模式的梯度强度值进行累加计算,确定至少一种参考帧内预测模式对应的梯度强度值;根据至少一种参考帧内预测模式和至少一种参考帧内预测模式对应的梯度强度值,确定第一参数。
下面结合图10A至图13B,分别进行确定参考像素对应的至少一种帧内预测模式和至少一个梯度强度值的示例性说明。
一种可能的实现方式中,根据参考块相邻区域中的像素,确定参考像素,再根据该参考像素,确定该参考像素对应的至少一种帧内预测模式和至少一个梯度强度值,其中,参考像素包括至少一个候选像素。
示例性地,参见图10A,当前块即为色度块,参考块即为邻近已解码色度块,假设邻近已解码色度块的宽为CbNbWidth,高为CbNbHeight。假设候选色度像素的坐标信息为pC[x][y],则有x∈[-3,CbNbWidth],y∈[-3,-1]且有x∈[-3,-1],y∈[0,CbNbHeight],其中原点[0][0]为邻近已解码色度块内左上角的像素坐标信息,候选色度像素即位于图10A中多个圆点组成的色度区域。参见图10B,以YUV420格式为例,则邻近已解码色度块的同位亮度块的宽为2×CbNbWidth,高为2×CbNbHeight。假设候选亮度像素的坐标信息为pY[x][y],则有x∈[-3,2×CbNbWidth],y∈[-3,-1]且有x∈[-3,-1],y∈[0,2×CbNbHeight],其中原点[0][0]为邻近已解码色度块的同位亮度块左上角的像素坐标信息,候选亮度像素即位于图10B中多个圆点组成的亮度区域。
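上述相邻区域的坐标范围可以用一小段代码直观表示。下面给出一个示意性的Python片段(函数名与参数均为说明用的假设命名),按正文给出的范围枚举邻近已解码色度块相邻区域中的候选像素坐标:

```python
def template_coords(w, h):
    """枚举相邻区域候选像素坐标: x∈[-3,w],y∈[-3,-1] 以及 x∈[-3,-1],y∈[0,h]。
    原点(0,0)为邻近已解码色度块内左上角像素。"""
    # 上侧相邻区域(含左上角延伸): 3行, 每行w+4个坐标
    coords = [(x, y) for y in range(-3, 0) for x in range(-3, w + 1)]
    # 左侧相邻区域: h+1行, 每行3个坐标
    coords += [(x, y) for y in range(0, h + 1) for x in range(-3, 0)]
    return coords
```

对宽w=CbNbWidth、高h=CbNbHeight的色度块,上侧区域共3×(w+4)个坐标,左侧区域共3×(h+1)个坐标;同位亮度块只需将w、h替换为2×CbNbWidth、2×CbNbHeight。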
下面给出确定候选像素对应的一种帧内预测模式和一个梯度强度值的伪代码。在下述伪代码中,angOffset为预设角度补偿值,gHor[x][y]为垂直梯度值,gVer[x][y]为水平梯度值,signH[x][y]为第二符号值,signV[x][y]为第一符号值,region[x][y]为目标象限值,ipm[x][y]为帧内预测模式,iAmp[x][y]为梯度强度值。
令mapHgV={{2,1},{1,2}},mapVgH={{3,4},{4,3}},angTable={0,2048,4096,6144,8192,12288,16384,20480,24576,28672,32768,36864,40960,47104,53248,59392,65536};
令预设角度补偿值angOffset={18,18,50,50};
步骤(1):对于候选色度像素的坐标信息pC[x][y],即x∈[-3,CbNbWidth],y=-2且有x=-2,y∈[0,CbNbHeight],对以下数值进行计算。
垂直梯度值:gHor[x][y]=pC[x-1][y-1]+2×pC[x-1][y]+pC[x-1][y+1]-pC[x+1][y-1]-2×pC[x+1][y]-pC[x+1][y+1];
水平梯度值:gVer[x][y]=pC[x-1][y-1]+2×pC[x][y-1]+pC[x+1][y-1]-pC[x-1][y+1]-2×pC[x][y+1]-pC[x+1][y+1];
第二符号值:signH[x][y]=gHor[x][y]<0?1:0;
第一符号值:signV[x][y]=gVer[x][y]<0?1:0;
比较值:HgV[x][y]=(abs(gHor[x][y])>abs(gVer[x][y])?1:0);
目标象限值:region[x][y]=(HgV[x][y]==1?mapHgV[signH[x][y]][signV[x][y]]:mapVgH[signH[x][y]][signV[x][y]]);
grad[x][y]=(HgV[x][y]==1?abs(gVer[x][y])/abs(gHor[x][y]):abs(gHor[x][y])/abs(gVer[x][y]));
grad[x][y]=round(grad[x][y]*(1<<16));
初始模式索引值:angIdx[x][y]=argmin_i(abs(angTable[i]-grad[x][y]));
帧内预测模式:ipm[x][y]=angOffset[region[x][y]]+angIdx[x][y];
梯度强度值:iAmp[x][y]=abs(gHor[x][y])+abs(gVer[x][y])。
步骤(2):对于候选亮度像素的坐标信息pY[x][y],即x∈[-3,2×CbNbWidth],y=-2且有x=-2,y∈[0,2×CbNbHeight],对以下数值进行计算。
垂直梯度值:gHor[x][y]=pY[x-1][y-1]+2×pY[x-1][y]+pY[x-1][y+1]-pY[x+1][y-1]-2×pY[x+1][y]-pY[x+1][y+1];
水平梯度值:gVer[x][y]=pY[x-1][y-1]+2×pY[x][y-1]+pY[x+1][y-1]-pY[x-1][y+1]-2×pY[x][y+1]-pY[x+1][y+1];
第二符号值:signH[x][y]=gHor[x][y]<0?1:0;
第一符号值:signV[x][y]=gVer[x][y]<0?1:0;
比较值:HgV[x][y]=(abs(gHor[x][y])>abs(gVer[x][y])?1:0);
目标象限值:region[x][y]=(HgV[x][y]==1?mapHgV[signH[x][y]][signV[x][y]]:mapVgH[signH[x][y]][signV[x][y]]);
grad[x][y]=(HgV[x][y]==1?abs(gVer[x][y])/abs(gHor[x][y]):abs(gHor[x][y])/abs(gVer[x][y]));
grad[x][y]=round(grad[x][y]*(1<<16));
初始模式索引值:angIdx[x][y]=argmin_i(abs(angTable[i]-grad[x][y]));
帧内预测模式:ipm[x][y]=angOffset[region[x][y]]+angIdx[x][y];
梯度强度值:iAmp[x][y]=abs(gHor[x][y])+abs(gVer[x][y])。
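上述伪代码可以整理为如下可运行的示意性Python实现(p按行列二维索引,对应正文中pC/pY的[x][y]索引;其中对angOffset按region-1取值,是为使映射表给出的1~4象限值落入4项补偿值表而作的假设,正文伪代码直接写作angOffset[region[x][y]]):

```python
# 正文伪代码中的常量
mapHgV = [[2, 1], [1, 2]]
mapVgH = [[3, 4], [4, 3]]
angTable = [0, 2048, 4096, 6144, 8192, 12288, 16384, 20480, 24576,
            28672, 32768, 36864, 40960, 47104, 53248, 59392, 65536]
angOffset = [18, 18, 50, 50]  # 预设角度补偿值

def derive_ipm_and_amp(p, x, y):
    """对重建样值二维数组p中位置(x,y)的候选像素,
    计算其对应的帧内预测模式ipm与梯度强度值iAmp;平坦区域返回None。"""
    # 3x3 Sobel滤波, 与正文gHor/gVer公式逐项对应(p[行][列])
    gHor = (p[y-1][x-1] + 2*p[y][x-1] + p[y+1][x-1]
            - p[y-1][x+1] - 2*p[y][x+1] - p[y+1][x+1])
    gVer = (p[y-1][x-1] + 2*p[y-1][x] + p[y-1][x+1]
            - p[y+1][x-1] - 2*p[y+1][x] - p[y+1][x+1])
    if gHor == 0 and gVer == 0:
        return None  # 无方向信息
    signH = 1 if gHor < 0 else 0        # 第二符号值
    signV = 1 if gVer < 0 else 0        # 第一符号值
    HgV = 1 if abs(gHor) > abs(gVer) else 0  # 比较值
    region = mapHgV[signH][signV] if HgV == 1 else mapVgH[signH][signV]
    # 较小梯度/较大梯度, 定点化到16位小数
    ratio = abs(gVer) / abs(gHor) if HgV == 1 else abs(gHor) / abs(gVer)
    grad = round(ratio * (1 << 16))
    # 在angTable中找最接近的表项, 得到初始模式索引值
    angIdx = min(range(len(angTable)), key=lambda i: abs(angTable[i] - grad))
    ipm = angOffset[region - 1] + angIdx   # 补偿后的目标模式索引值(假设region-1索引)
    iAmp = abs(gHor) + abs(gVer)           # 梯度强度值
    return ipm, iAmp
```

例如,对水平条纹纹理(样值仅随y变化)的3×3邻域,gHor为0、所有强度都集中在gVer上,补偿后得到一个接近垂直方向的模式索引。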
在另一种可能的实现方式中,根据参考块中的全部像素,确定参考像素,再根据该参考像素,确定该参考像素对应的至少一种帧内预测模式和至少一个梯度强度值,其中,参考像素包括至少一个候选像素。
示例性地,参见图11A,当前块即为色度块,参考块即为邻近已解码色度块。假设邻近已解码色度块的宽为CbNbWidth,高为CbNbHeight。假设候选色度像素的坐标信息为pC[x][y],则有x∈[0,CbNbWidth-1],y∈[0,CbNbHeight-1],其中原点[0][0]为邻近已解码色度块内左上角的像素坐标信息,候选色度像素即位于图11A中多个圆点组成的色度区域。参见图11B,以YUV420格式为例,则邻近已解码色度块的同位亮度块的宽为2×CbNbWidth,高为2×CbNbHeight。假设候选亮度像素的坐标信息为pY[x][y],则有x∈[0,2×CbNbWidth-1],y∈[0,2×CbNbHeight-1],其中原点[0][0]为邻近已解码色度块的同位亮度块左上角的像素坐标信息,候选亮度像素即位于图11B中多个圆点组成的亮度区域。
下面给出确定候选像素对应的一种帧内预测模式和一个梯度强度值的伪代码。在下述伪代码中,angOffset为预设角度补偿值,gHor[x][y]为垂直梯度值,gVer[x][y]为水平梯度值,signH[x][y]为第二符号值,signV[x][y]为第一符号值,region[x][y]为目标象限值,ipm[x][y]为帧内预测模式,iAmp[x][y]为梯度强度值。
令mapHgV={{2,1},{1,2}},mapVgH={{3,4},{4,3}},angTable={0,2048,4096,6144,8192,12288,16384,20480,24576,28672,32768,36864,40960,47104,53248,59392,65536};
令预设角度补偿值angOffset={18,18,50,50};
步骤(1):对于候选色度像素的坐标信息pC[x][y],即x∈[1,CbNbWidth-2],y∈[1,CbNbHeight-2],对以下数值进行计算。
垂直梯度值:gHor[x][y]=pC[x-1][y-1]+2×pC[x-1][y]+pC[x-1][y+1]-pC[x+1][y-1]-2×pC[x+1][y]-pC[x+1][y+1];
水平梯度值:gVer[x][y]=pC[x-1][y-1]+2×pC[x][y-1]+pC[x+1][y-1]-pC[x-1][y+1]-2×pC[x][y+1]-pC[x+1][y+1];
第二符号值:signH[x][y]=gHor[x][y]<0?1:0;
第一符号值:signV[x][y]=gVer[x][y]<0?1:0;
比较值:HgV[x][y]=(abs(gHor[x][y])>abs(gVer[x][y])?1:0);
目标象限值:region[x][y]=(HgV[x][y]==1?mapHgV[signH[x][y]][signV[x][y]]:mapVgH[signH[x][y]][signV[x][y]]);
grad[x][y]=(HgV[x][y]==1?abs(gVer[x][y])/abs(gHor[x][y]):abs(gHor[x][y])/abs(gVer[x][y]));
grad[x][y]=round(grad[x][y]*(1<<16));
初始模式索引值:angIdx[x][y]=argmin_i(abs(angTable[i]-grad[x][y]));
帧内预测模式:ipm[x][y]=angOffset[region[x][y]]+angIdx[x][y];
梯度强度值:iAmp[x][y]=abs(gHor[x][y])+abs(gVer[x][y])。
步骤(2):对于候选亮度像素的坐标信息pY[x][y],即x∈[1,2×CbNbWidth-2],y∈[1,2×CbNbHeight-2],对以下数值进行计算。
垂直梯度值:gHor[x][y]=pY[x-1][y-1]+2×pY[x-1][y]+pY[x-1][y+1]-pY[x+1][y-1]-2×pY[x+1][y]-pY[x+1][y+1];
水平梯度值:gVer[x][y]=pY[x-1][y-1]+2×pY[x][y-1]+pY[x+1][y-1]-pY[x-1][y+1]-2×pY[x][y+1]-pY[x+1][y+1];
第二符号值:signH[x][y]=gHor[x][y]<0?1:0;
第一符号值:signV[x][y]=gVer[x][y]<0?1:0;
比较值:HgV[x][y]=(abs(gHor[x][y])>abs(gVer[x][y])?1:0);
目标象限值:region[x][y]=(HgV[x][y]==1?mapHgV[signH[x][y]][signV[x][y]]:mapVgH[signH[x][y]][signV[x][y]]);
grad[x][y]=(HgV[x][y]==1?abs(gVer[x][y])/abs(gHor[x][y]):abs(gHor[x][y])/abs(gVer[x][y]));
grad[x][y]=round(grad[x][y]*(1<<16));
初始模式索引值:angIdx[x][y]=argmin_i(abs(angTable[i]-grad[x][y]));
帧内预测模式:ipm[x][y]=angOffset[region[x][y]]+angIdx[x][y];
梯度强度值:iAmp[x][y]=abs(gHor[x][y])+abs(gVer[x][y])。
在又一种可能的实现方式中,考虑到根据参考块中的全部像素确定参考像素会增加计算复杂度,因此,也可以根据参考块中的部分像素,确定参考像素,再根据该参考像素,确定该参考像素对应的至少一种帧内预测模式和至少一个梯度强度值,其中,参考像素包括至少一个候选像素。
示例性地,参见图12A,当前块即为色度块,参考块即为邻近已解码色度块。假设邻近已解码色度块的宽为CbNbWidth,高为CbNbHeight。假设候选色度像素的坐标信息为pC[x][y],则有x∈[CbNbWidth-3,CbNbWidth-1],y∈[0,CbNbHeight-1],其中原点[0][0]为邻近已解码色度块内左上角的像素坐标信息,候选色度像素即位于图12A中多个圆点组成的色度区域。参见图12B,以YUV420格式为例,则邻近已解码色度块的同位亮度块的宽为2×CbNbWidth,高为2×CbNbHeight。假设候选亮度像素的坐标信息为pY[x][y],则有x∈[2×CbNbWidth-3,2×CbNbWidth-1],y∈[0,2×CbNbHeight-1],其中原点[0][0]为邻近已解码色度块的同位亮度块左上角的像素坐标信息,候选亮度像素即位于图12B中多个圆点组成的亮度区域。
下面给出确定候选像素对应的一种帧内预测模式和一个梯度强度值的伪代码。在下述伪代码中,angOffset为预设角度补偿值,gHor[x][y]为垂直梯度值,gVer[x][y]为水平梯度值,signH[x][y]为第二符号值,signV[x][y]为第一符号值,region[x][y]为目标象限值,ipm[x][y]为帧内预测模式,iAmp[x][y]为梯度强度值。
令mapHgV={{2,1},{1,2}},mapVgH={{3,4},{4,3}},angTable={0,2048,4096,6144,8192,12288,16384,20480,24576,28672,32768,36864,40960,47104,53248,59392,65536};
令预设角度补偿值angOffset={18,18,50,50};
步骤(1):对于候选色度像素的坐标信息pC[x][y],即x=CbNbWidth-2,y∈[1,CbNbHeight-2],对以下数值进行计算。
垂直梯度值:gHor[x][y]=pC[x-1][y-1]+2×pC[x-1][y]+pC[x-1][y+1]-pC[x+1][y-1]-2×pC[x+1][y]-pC[x+1][y+1];
水平梯度值:gVer[x][y]=pC[x-1][y-1]+2×pC[x][y-1]+pC[x+1][y-1]-pC[x-1][y+1]-2×pC[x][y+1]-pC[x+1][y+1];
第二符号值:signH[x][y]=gHor[x][y]<0?1:0;
第一符号值:signV[x][y]=gVer[x][y]<0?1:0;
比较值:HgV[x][y]=(abs(gHor[x][y])>abs(gVer[x][y])?1:0);
目标象限值:region[x][y]=(HgV[x][y]==1?mapHgV[signH[x][y]][signV[x][y]]:mapVgH[signH[x][y]][signV[x][y]]);
grad[x][y]=(HgV[x][y]==1?abs(gVer[x][y])/abs(gHor[x][y]):abs(gHor[x][y])/abs(gVer[x][y]));
grad[x][y]=round(grad[x][y]*(1<<16));
初始模式索引值:angIdx[x][y]=argmin_i(abs(angTable[i]-grad[x][y]));
帧内预测模式:ipm[x][y]=angOffset[region[x][y]]+angIdx[x][y];
梯度强度值:iAmp[x][y]=abs(gHor[x][y])+abs(gVer[x][y])。
步骤(2):对于候选亮度像素的坐标信息pY[x][y],即x=2×CbNbWidth-2,y∈[1,2×CbNbHeight-2],对以下数值进行计算。
垂直梯度值:gHor[x][y]=pY[x-1][y-1]+2×pY[x-1][y]+pY[x-1][y+1]-pY[x+1][y-1]-2×pY[x+1][y]-pY[x+1][y+1];
水平梯度值:gVer[x][y]=pY[x-1][y-1]+2×pY[x][y-1]+pY[x+1][y-1]-pY[x-1][y+1]-2×pY[x][y+1]-pY[x+1][y+1];
第二符号值:signH[x][y]=gHor[x][y]<0?1:0;
第一符号值:signV[x][y]=gVer[x][y]<0?1:0;
比较值:HgV[x][y]=(abs(gHor[x][y])>abs(gVer[x][y])?1:0);
目标象限值:region[x][y]=(HgV[x][y]==1?mapHgV[signH[x][y]][signV[x][y]]:mapVgH[signH[x][y]][signV[x][y]]);
grad[x][y]=(HgV[x][y]==1?abs(gVer[x][y])/abs(gHor[x][y]):abs(gHor[x][y])/abs(gVer[x][y]));
grad[x][y]=round(grad[x][y]*(1<<16));
初始模式索引值:angIdx[x][y]=argmin_i(abs(angTable[i]-grad[x][y]));
帧内预测模式:ipm[x][y]=angOffset[region[x][y]]+angIdx[x][y];
梯度强度值:iAmp[x][y]=abs(gHor[x][y])+abs(gVer[x][y])。
在又一种可能的实现方式中,以另一参考像素位置为例,同时考虑到根据参考块中的全部像素确定参考像素会增加计算复杂度,因此,也可以根据参考块中的部分像素,确定参考像素,再根据该参考像素,确定该参考像素对应的至少一种帧内预测模式和至少一个梯度强度值,其中,参考像素包括至少一个候选像素。
示例性地,参见图13A,当前块即为色度块,参考块即为邻近已解码色度块。假设邻近已解码色度块的宽为CbNbWidth,高为CbNbHeight。假设候选色度像素的坐标信息为pC[x][y],则有x∈[0,CbNbWidth-1],y∈[CbNbHeight-3,CbNbHeight-1],其中原点[0][0]为邻近已解码色度块内左上角的像素坐标信息,候选色度像素即位于图13A中多个圆点组成的色度区域。参见图13B,以YUV420格式为例,则邻近已解码色度块的同位亮度块的宽为2×CbNbWidth,高为2×CbNbHeight。假设候选亮度像素的坐标信息为pY[x][y],则有x∈[0,2×CbNbWidth-1],y∈[2×CbNbHeight-3,2×CbNbHeight-1],其中原点[0][0]为邻近已解码色度块的同位亮度块左上角的像素坐标信息,候选亮度像素即位于图13B中多个圆点组成的亮度区域。
下面给出确定候选像素对应的一种帧内预测模式和一个梯度强度值的伪代码。在下述伪代码中,angOffset为预设角度补偿值,gHor[x][y]为垂直梯度值,gVer[x][y]为水平梯度值,signH[x][y]为第二符号值,signV[x][y]为第一符号值,region[x][y]为目标象限值,ipm[x][y]为帧内预测模式,iAmp[x][y]为梯度强度值。
令mapHgV={{2,1},{1,2}},mapVgH={{3,4},{4,3}},angTable={0,2048,4096,6144,8192,12288,16384,20480,24576,28672,32768,36864,40960,47104,53248,59392,65536};
令预设角度补偿值angOffset={18,18,50,50};
步骤(1):对于候选色度像素的坐标信息pC[x][y],即x∈[0,CbNbWidth-1],y=CbNbHeight-2,对以下数值进行计算。
垂直梯度值:gHor[x][y]=pC[x-1][y-1]+2×pC[x-1][y]+pC[x-1][y+1]-pC[x+1][y-1]-2×pC[x+1][y]-pC[x+1][y+1];
水平梯度值:gVer[x][y]=pC[x-1][y-1]+2×pC[x][y-1]+pC[x+1][y-1]-pC[x-1][y+1]-2×pC[x][y+1]-pC[x+1][y+1];
第二符号值:signH[x][y]=gHor[x][y]<0?1:0;
第一符号值:signV[x][y]=gVer[x][y]<0?1:0;
比较值:HgV[x][y]=(abs(gHor[x][y])>abs(gVer[x][y])?1:0);
目标象限值:region[x][y]=(HgV[x][y]==1?mapHgV[signH[x][y]][signV[x][y]]:mapVgH[signH[x][y]][signV[x][y]]);
grad[x][y]=(HgV[x][y]==1?abs(gVer[x][y])/abs(gHor[x][y]):abs(gHor[x][y])/abs(gVer[x][y]));
grad[x][y]=round(grad[x][y]*(1<<16));
初始模式索引值:angIdx[x][y]=argmin_i(abs(angTable[i]-grad[x][y]));
帧内预测模式:ipm[x][y]=angOffset[region[x][y]]+angIdx[x][y];
梯度强度值:iAmp[x][y]=abs(gHor[x][y])+abs(gVer[x][y])。
步骤(2):对于候选亮度像素的坐标信息pY[x][y],即x∈[0,2×CbNbWidth-1],y=2×CbNbHeight-2,对以下数值进行计算。
垂直梯度值:gHor[x][y]=pY[x-1][y-1]+2×pY[x-1][y]+pY[x-1][y+1]-pY[x+1][y-1]-2×pY[x+1][y]-pY[x+1][y+1];
水平梯度值:gVer[x][y]=pY[x-1][y-1]+2×pY[x][y-1]+pY[x+1][y-1]-pY[x-1][y+1]-2×pY[x][y+1]-pY[x+1][y+1];
第二符号值:signH[x][y]=gHor[x][y]<0?1:0;
第一符号值:signV[x][y]=gVer[x][y]<0?1:0;
比较值:HgV[x][y]=(abs(gHor[x][y])>abs(gVer[x][y])?1:0);
目标象限值:region[x][y]=(HgV[x][y]==1?mapHgV[signH[x][y]][signV[x][y]]:mapVgH[signH[x][y]][signV[x][y]]);
grad[x][y]=(HgV[x][y]==1?abs(gVer[x][y])/abs(gHor[x][y]):abs(gHor[x][y])/abs(gVer[x][y]));
grad[x][y]=round(grad[x][y]*(1<<16));
初始模式索引值:angIdx[x][y]=argmin_i(abs(angTable[i]-grad[x][y]));
帧内预测模式:ipm[x][y]=angOffset[region[x][y]]+angIdx[x][y];
梯度强度值:iAmp[x][y]=abs(gHor[x][y])+abs(gVer[x][y])。
需要注意的是,初始模式索引值为计算得到最接近的帧内预测模式索引值,利用预设角度补偿值angOffset[region[x][y]]对其进行补偿处理之后,即可确定所计算出的帧内预测模式。
示例性地,图14示出了本申请实施例提供的至少一种帧内预测模式对应的梯度强度值的示意直方图。如图14所示,可以将前述的任意一种实现方式中步骤(1)和步骤(2)的梯度强度值iAmp按照对应的帧内预测模式ipm累加,以帧内预测模式ipm为横坐标,以梯度强度值iAmp为纵坐标建立直方图,该直方图包含至少一种帧内预测模式对应的梯度强度值,且这至少一种帧内预测模式的模式索引区间范围为[0,66]。
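上述按模式累加梯度强度值的过程可以用如下示意代码表达(假设各候选像素的计算结果以(ipm, iAmp)对的形式给出):

```python
def build_hog(entries, num_modes=67):
    """将归属同一帧内预测模式ipm的梯度强度值iAmp累加, 得到梯度直方图HoG。
    模式索引区间为[0,66], 故直方图长度取67。"""
    hog = [0] * num_modes
    for ipm, amp in entries:
        hog[ipm] += amp  # 同一模式的强度值累加
    return hog
```

亮度候选像素与色度候选像素的(ipm, iAmp)条目可以合并到同一个列表中再累加,即正文中"对归属于同一参考帧内预测模式的梯度强度值进行累加计算"。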
在一些实施例中,根据第一参数,确定参考帧内预测模式参数,可以包括:根据至少一种参考帧内预测模式对应的梯度强度值,组成第二集合;若第二集合中的梯度强度值均为零,则根据PLANAR模式确定参考帧内预测模式参数;若第二集合中的梯度强度值存在非零项,则从第二集合中确定最大梯度强度值,根据最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数。
示例性地,如图14所示,第二集合可以为包含至少一种帧内预测模式对应的梯度强度值,从图14可以直观得到,这至少一种帧内预测模式对应的梯度强度值中的最大梯度值,以及最大梯度强度值对应的帧内预测模式。
进一步地,在一些实施例中,将第二集合中的最大梯度强度值赋值为-1,确定第三集合;若最大梯度强度值对应的帧内预测模式与参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则从第三集合中确定新的最大梯度强度值,根据新的最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数。
进一步地,在一些实施例中,若新的最大梯度强度值对应的帧内预测模式与参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则根据DC模式确定参考帧内预测模式参数。
在本申请实施例中,根据图14所示的直方图所推导得到的参考帧内预测模式参数用IntraPredModeD表示,其对应的模式索引区间范围为[0,66]。
示例性地,若直方图中不含非零项,则IntraPredModeD=INTRA_PLANAR。
否则,将IntraPredModeD=argmax_i(HoG[i]),将HoG[IntraPredModeD]设置为-1;
若IntraPredModeD与(xCbNb,yCbNb)所在色度块的DM模式相等时,则再次重新搜索,将IntraPredModeD=argmax_i(HoG[i]);
若IntraPredModeD继续与(xCbNb,yCbNb)所在色度块的DM模式相等,则将IntraPredModeD=INTRA_DC。
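上述IntraPredModeD的推导流程可以概括为如下示意代码(其中INTRA_PLANAR、INTRA_DC取索引值0和1为假设取值):

```python
INTRA_PLANAR, INTRA_DC = 0, 1  # 假设PLANAR与DC的模式索引分别为0和1

def derive_intra_pred_mode_d(hog, dm_mode):
    """按正文规则从直方图HoG推导参考帧内预测模式参数IntraPredModeD:
    直方图全零取PLANAR; 否则取最大梯度强度值对应的模式;
    若与同位色度块的DM模式相同, 置-1后重新搜索; 仍相同则取DC。"""
    if not any(hog):
        return INTRA_PLANAR
    mode = max(range(len(hog)), key=lambda i: hog[i])
    if mode != dm_mode:
        return mode
    hog = list(hog)
    hog[mode] = -1  # 排除首次搜索结果后再次搜索
    mode = max(range(len(hog)), key=lambda i: hog[i])
    return mode if mode != dm_mode else INTRA_DC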
S930,根据参考帧内预测模式参数,确定当前块的第二颜色分量的预测值。
需要说明的是,在本申请实施例中,根据参考帧内预测模式参数,确定当前块的第二颜色分量的预测值,可以包括:根据参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表;根据模式候选列表,确定当前块的第二颜色分量的预测值。
在一些实施例中,根据模式候选列表,确定当前块的第二颜色分量的预测值,可以包括:解析码流,确定当前块的第二颜色分量的模式索引序号;根据模式候选列表,确定模式索引序号对应的目标预测模式;利用目标预测模式对当前块的第二颜色分量进行预测处理,确定当前块的第二颜色分量的预测值。
需要说明的是,在本申请实施例中,解码端在建立模式候选列表之后,通过解码获得的模式索引序号,可以确定出目标预测模式;然后利用目标预测模式对当前块的第二颜色分量进行预测处理,确定当前块的第二颜色分量的预测值。
进一步地,在一些实施例中,该方法还可以包括:解析码流,确定当前块的第二颜色分量的预测差值;根据当前块的第二颜色分量的预测值和当前块的第二颜色分量的预测差值,确定当前块的第二颜色分量的重建值。
还需要说明的是,在本申请实施例中,由于编码端已经确定出当前块的第二颜色分量的预测差值并将其写入码流中,那么解码端通过解码获得预测差值之后,可以对当前块的第二颜色分量的预测值和当前块的第二颜色分量的预测差值进行加法运算,从而就能够得到当前块的第二颜色分量的重建值。
也就是说,以色度分量为例,根据参考帧内预测模式参数构建当前块的色度分量的模式候选列表。由于在构建过程中,充分利用了相邻的已解码参考块的内容特性执行纹理分析,构建具有对应于多个角度模式条目的梯度直方图,通过使用水平梯度值和垂直梯度值来确定梯度强度值,进而可以确定出一种参考帧内预测模式参数并添加到模式候选列表中,能够完善帧内色度预测模式的多样性;并且根据该模式候选列表,还可以得到更加精确的色度预测值,提高了帧内色度预测的准确度。
还应理解,在本申请实施例中,对于当前块的参考块而言,在一些实施例中,确定当前块的参考块,还可以包括:确定当前块邻近的至少一个目标像素;根据至少一个目标像素各自所在的块,确定至少一个第一目标块;根据至少一个第一目标块,确定当前块的参考块。
示例性地,参见图5,当前块(整个斜线的填充区域)即为色度块,当前块邻近的目标像素可以为像素0,像素0所在的块可以作为第一目标块,该第一目标块即为当前块的参考块。
进一步地,在一些实施例中,基于至少一个第一目标块,按照第一预设顺序依次作为参考块,确定至少一个第一目标块各自的参考帧内预测模式参数;根据至少一个第一目标块各自的参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表。
应理解,第一预设顺序可以是人为设定的,也可以是在特定场景下按照某种规则设定的,本申请实施例对此不作限定。
示例性地,参见图5,当前块(整个斜线的填充区域)即为色度块,0、1、2、3、4分别为目标像素,假设当前块左上角相对于图像左上角色度像素的坐标信息为(xCb,yCb),当前块的宽为cbWidth,高为cbHeight,则0、1、2、3、4的坐标信息如下所示。
目标像素0的坐标信息为(xCb-1,yCb+cbHeight-1);
目标像素1的坐标信息为(xCb+cbWidth-1,yCb-1);
目标像素2的坐标信息为(xCb-1,yCb+cbHeight);
目标像素3的坐标信息为(xCb+cbWidth,yCb-1);
目标像素4的坐标信息为(xCb-1,yCb-1)。
根据0、1、2、3、4的坐标信息,可以确定0、1、2、3、4各自所在的块,并将0、1、2、3、4各自所在的块分别作为五个第一目标块,可以将该五个第一目标块依次作为参考块,从而可以参考前文所述的方法,确定五个第一目标块各自的参考帧内预测模式参数,进而能够根据该五个第一目标块各自的参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表。
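上述五个目标像素的坐标计算可以概括为如下示意函数(函数名为说明用的假设命名):

```python
def neighbor_target_pixels(xCb, yCb, cbWidth, cbHeight):
    """返回当前块邻近的目标像素0~4的坐标, 与正文给出的五个位置一一对应。"""
    return [
        (xCb - 1,           yCb + cbHeight - 1),  # 目标像素0: 左侧
        (xCb + cbWidth - 1, yCb - 1),             # 目标像素1: 上侧
        (xCb - 1,           yCb + cbHeight),      # 目标像素2: 左下
        (xCb + cbWidth,     yCb - 1),             # 目标像素3: 右上
        (xCb - 1,           yCb - 1),             # 目标像素4: 左上
    ]
```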
除此之外,在一些实施例中,该方法还可以包括:确定参考块的第二颜色分量的预测模式;在参考块的第二颜色分量的预测模式不满足第一条件时,将参考块的第二颜色分量的预测模式直接添加到模式候选列表中。
示例性地,在本申请实施例中,如果参考块的第二颜色分量的预测模式为除帧间预测模式和CCLM模式之外的其他帧内预测模式,那么可以根据参考块的第二颜色分量的预测模式来构建当前块的第二颜色分量的模式候选列表,即将参考块的第二颜色分量的预测模式直接添加到模式候选列表中。
本申请实施例提供了一种解码方法,在确定当前块的参考块之后,可以在参考块的第二颜色分量的预测模式满足第一条件时,根据参考块,确定参考帧内预测模式参数;根据参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表;根据模式候选列表,确定当前块的第二颜色分量的预测值。这样,在确定non-CCLM模式的参考帧内预测模式参数时,考虑了相邻已解码参考块的预测模式;如此,不仅能够提高帧内色度预测模式的完备性和多样性,而且还能够提高帧内色度预测的准确性,从而能够提高解码效率,进而提升解码性能。
在本申请的另一实施例中,基于前述实施例所述的解码方法,参见图15,其示出了本申请实施例提供的一种解码方法的流程示意图二。如图15所示,该方法可以包括:
S1510,确定当前块的同位置的第一颜色分量区域。
示例性地,参见图3或图4,当前块即为色度块(图3或图4中右图整个斜线的填充区域),当前块的同位置的第一颜色分量区域即为亮度块(图3或图4中左图整个斜线的填充区域)。
S1520,从第一颜色分量区域划分的至少一个块中,确定处于预设位置的至少一个第二目标块。
示例性地,在单树模式下,参见图4,第一颜色分量区域划分了一个块,此时处于预设位置的第二目标块为C块。
示例性地,在双树模式下,参见图3,第一颜色分量区域划分了多个块,此时处于预设位置的第二目标块有五个,分别记为TL块、TR块、C块、BL块、BR块。
如图3所示,假设当前块左上角位置对应的同位亮度块相对于图像左上角亮度块的位置(即亮度块TL的位置)为(xCb,yCb),当前块对应的同位亮度区域(左图中整个斜线的填充区域)的宽度为cbWidth,高度为cbHeight,则五个第二目标块的位置坐标可以分别记为如下所示:
C块的坐标信息为(xCb+cbWidth/2,yCb+cbHeight/2);
TL块的坐标信息为(xCb,yCb);
TR块的坐标信息为(xCb+cbWidth-1,yCb);
BL块的坐标信息为(xCb,yCb+cbHeight-1);
BR块的坐标信息为(xCb+cbWidth-1,yCb+cbHeight-1)。
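上述五个第二目标块位置的计算可以概括为如下示意函数(函数名为说明用的假设命名):

```python
def second_target_positions(xCb, yCb, cbWidth, cbHeight):
    """双树模式下五个第二目标块(C、TL、TR、BL、BR)的坐标, 按正文公式计算。"""
    return {
        "C":  (xCb + cbWidth // 2, yCb + cbHeight // 2),  # 中心
        "TL": (xCb, yCb),                                 # 左上
        "TR": (xCb + cbWidth - 1, yCb),                   # 右上
        "BL": (xCb, yCb + cbHeight - 1),                  # 左下
        "BR": (xCb + cbWidth - 1, yCb + cbHeight - 1),    # 右下
    }
```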
应理解,图3所示的第二目标块的五个位置为示例性说明,本申请实施例并不局限于该五个位置,可以是多个不同位置,本申请实施例对第二目标块的数量以及具体位置不作限定。
S1530,根据至少一个第二目标块,确定当前块的参考块。
示例性地,可以将至少一个第二目标块中的其中之一作为当前块的参考块。例如,将图4中的C块作为当前块的参考块。
示例性地,可以将多个第二目标块作为当前块的参考块。例如,将图3中的C块、TL块、TR块、BL块、BR块依次作为当前块的参考块。
在一些实施例中,该方法还可以包括:基于至少一个第二目标块的预设顺序,依次确定至少一个第二目标块各自的第一颜色分量预测模式参数;根据至少一个第二目标块各自的第一颜色分量预测模式参数,构建当前块的第二颜色分量的模式候选列表。
应理解,当第二目标块有一个时,第二目标块的预设顺序可以不作参考。
还应理解,第二预设顺序可以是人为设定的,也可以是在特定场景下按照某种规则设定的,本申请实施例对此不作限定。例如,参见图3,第二目标块的预设顺序可以包括但不局限于以下顺序:C->TL->TR->BL->BR。
示例性地,下面给出确定至少一个第二目标块各自的第一颜色分量预测模式参数的示例性说明。
一示例中,在单树模式下,参见图4,第二目标块的第一颜色分量预测模式参数可以为C块的第一颜色分量预测模式。
另一示例中,在双树模式下,参见图3,假设第二目标块为图3中的C块,确定图3中C块的第一颜色分量预测模式参数可以分为以下几个步骤。
第一步,判断图3中的C块是否使用MIP模式。
如果图3中的C块使用MIP模式,则图3中C块的第一颜色分量预测模式参数为PLANAR模式。用伪代码可以表示为:如果IntraMipFlag[xCb+cbWidth/2][yCb+cbHeight/2]等于1,则lumaIntraPredMode设置为INTRA_PLANAR。其中,(xCb+cbWidth/2,yCb+cbHeight/2)为C块的坐标信息;IntraMipFlag[xCb+cbWidth/2][yCb+cbHeight/2]为数组,用于表示C块是否使用MIP模式;lumaIntraPredMode为第一颜色分量预测模式参数,INTRA_PLANAR为PLANAR模式。
否则,如果图3中的C块不使用MIP模式,则执行第二步。
第二步,如果CuPredMode[0][xCb+cbWidth/2][yCb+cbHeight/2]为IBC模式或者PLT模式,则C块的第一颜色分量预测模式参数为DC模式。其中,(xCb+cbWidth/2,yCb+cbHeight/2)为C块的坐标信息,[0]为第一颜色分量。
否则,如果CuPredMode[0][xCb+cbWidth/2][yCb+cbHeight/2]不为IBC模式或者PLT模式,则执行第三步。
第三步,将C块的第一颜色分量预测模式参数设置为IntraPredModeY[xCb+cbWidth/2][yCb+cbHeight/2],其中,(xCb+cbWidth/2,yCb+cbHeight/2)为C块的坐标信息。
应理解,确定图3中除C块的其他块(如TL块,又如TR块)的第一颜色分量预测模式参数与确定C块的第一颜色分量预测模式参数类似,在此不再进行赘述。
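上述三步判断可以概括为如下示意函数(接口形式为说明用的假设,分别对应正文中的IntraMipFlag、CuPredMode与IntraPredModeY):

```python
INTRA_PLANAR, INTRA_DC = 0, 1  # 假设PLANAR与DC的模式索引分别为0和1

def derive_luma_mode(is_mip, cu_pred_mode, intra_pred_mode_y):
    """推导第二目标块的第一颜色分量(亮度)预测模式参数:
    MIP块取PLANAR; IBC或PLT块取DC; 否则取其帧内亮度预测模式。"""
    if is_mip:                                   # 第一步: 是否使用MIP模式
        return INTRA_PLANAR
    if cu_pred_mode in ("MODE_IBC", "MODE_PLT"):  # 第二步: IBC或PLT模式
        return INTRA_DC
    return intra_pred_mode_y                      # 第三步: 直接取亮度帧内预测模式
```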
进一步地,在一些实施例中,根据至少一个第二目标块各自的第一颜色分量预测模式参数,构建当前块的第二颜色分量的模式候选列表,可以有两种可能的实现方式:
在一种可能的实现方式中,将至少一个第二目标块各自的第一颜色分量预测模式参数添加到当前块的第二颜色分量的模式候选列表中。
示例性地,在单树模式下,参见图4,可以将C块的第一颜色分量预测模式添加到当前块的第二颜色分量的模式候选列表中。
示例性地,在双树模式下,参见图3,假设第二目标块为图3中的C块,且C块的第一颜色分量预测模式参数为PLANAR模式,则可以将PLANAR模式添加到当前块的第二颜色分量的模式候选列表中。
在另一种可能的实现方式中,参见前述的表2,在色度亚采样格式中,至少一个第二目标块各自的第一颜色分量预测模式参数可能需要转换,方可添加到当前块的第二颜色分量的模式候选列表中。
示例性地,当sps_chroma_format_idc为0时,无需转换至少一个第二目标块各自的第一颜色分量预测模式参数,此时该至少一个第二目标块各自的第一颜色分量预测模式参数不用添加至当前块的第二颜色分量的模式候选列表中。
示例性地,当sps_chroma_format_idc为2时,参见前述的表3,使用表3规定的预设规则转换至少一个第二目标块各自的第一颜色分量预测模式参数,得到转换后的至少一个第二目标块各自的第一颜色分量预测模式参数,并将转换后的第二目标块各自的第一颜色分量预测模式参数添加到当前块的第二颜色分量的模式候选列表中。
示例性地,当sps_chroma_format_idc为1或3时,无需转换至少一个第二目标块各自的第一颜色分量预测模式参数,此时该至少一个第二目标块各自的第一颜色分量预测模式参数可以直接添加到当前块的第二颜色分量的模式候选列表中。
需要说明的是,图9所示的解码方法可以根据至少一个第一目标块各自的参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表,图15所示的解码方法可以根据至少一个第二目标块各自的第一颜色分量预测模式参数,构建当前块的第二颜色分量的模式候选列表,基于此,可以根据至少一个第一目标块各自的参考帧内预测模式参数和至少一个第二目标块各自的第一颜色分量预测模式参数,构建当前块的第二颜色分量的模式候选列表。
在本申请的又一实施例中,基于前述实施例所述的解码方法,参见图16,其示出了本申请实施例提供的一种解码方法的流程示意图三。如图16所示,该方法可以包括:
S1610,确定模式候选列表中的前两个预测模式。
其中,该模式候选列表可以是根据至少一个第一目标块各自的参考帧内预测模式参数构建的,也可以是根据至少一个第二目标块各自的第一颜色分量预测模式参数构建的,也可以是根据至少一个第一目标块各自的参考帧内预测模式参数和至少一个第二目标块各自的第一颜色分量预测模式参数共同构建的。
S1620,对前两个预测模式的模式索引序号进行偏移操作,确定至少一个新的帧内预测模式。
需要说明的是,对于模式候选列表中的前两个预测模式而言,在一种可能的实现方式中,首先判断预测模式是否为角度模式。若预测模式为角度模式,则将该角度模式映射的角度顺时针或逆时针偏移一个最小角度单位,得到映射后的角度模式,并将该映射后的角度模式作为新的帧内预测模式。若预测模式不为角度模式,则不确定新的帧内预测模式。
在另一种可能的实现方式中,直接将预测模式的模式索引序号加1或者减1。该场景下存在以下特殊情况:若模式索引序号为0,则将模式索引序号为1的预测模式作为新的帧内预测模式;若模式索引序号为最大模式索引序号,则将该最大模式索引序号减1后对应的预测模式作为新的帧内预测模式。若存在单个预测模式,则判断该单个模式是否为角度模式。若该单个模式为角度模式,则将该角度模式作为新的帧内预测模式;若该单个模式不为角度模式,则不确定新的帧内预测模式。
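第二种实现方式中±1偏移的特殊情况处理可以写成如下示意函数(最大模式索引取66,对应正文[0,66]的模式索引区间):

```python
def offset_modes(mode_idx, max_idx=66):
    """对一个预测模式索引做±1偏移, 返回由此得到的新候选模式索引列表:
    索引为0时只取1, 索引为最大值时只取最大值-1, 否则同时取±1两个索引。"""
    if mode_idx == 0:
        return [1]
    if mode_idx == max_idx:
        return [max_idx - 1]
    return [mode_idx - 1, mode_idx + 1]
```

得到的新模式在放入模式候选列表前,仍需按正文要求去重,保证列表中帧内预测模式的互异特性。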
S1630,将至少一个新的帧内预测模式放置于模式候选列表中。
在本申请实施例中,将至少一个新的帧内预测模式放置于模式候选列表中时,需要保证模式候选列表中的帧内预测模式具有互异特性。
在本申请实施例中,将至少一个新的帧内预测模式放置于模式候选列表后,该模式候选列表可以是通过该新的帧内预测模式构建的,也可以是通过新的帧内预测模式和至少一个第一目标块各自的参考帧内预测模式参数构建的,也可以是通过新的帧内预测模式和至少一个第二目标块各自的第一颜色分量预测模式参数构建的,也可以是通过新的帧内预测模式、至少一个第一目标块各自的参考帧内预测模式参数和至少一个第二目标块各自的第一颜色分量预测模式参数构建的,本申请实施例对此不作限定。
需要说明的是,本申请实施例可以通过对模式候选列表中的前两个预测模式的模式索引序号进行偏移操作,确定至少一个新的帧内预测模式,并将至少一个新的帧内预测模式放置于模式候选列表中。如此,不仅能够提高帧内色度预测模式的完备性和多样性,而且还能够提高帧内色度预测的准确性,从而能够提高解码效率,进而提升解码性能。
还需要说明的是,本申请实施例也可以使用预先设定的帧内预测模式构建模式候选列表。其中,该预先设定的帧内预测模式可以为下述至少一项:PLANAR_IDX,VER_IDX,HOR_IDX,DC_IDX,VDIA_IDX,VER_IDX-4,VER_IDX+4,HOR_IDX-4,HOR_IDX+4。
在本申请实施例中,通过预先设定的帧内预测模式构建模式候选列表、通过新的帧内预测模式构建模式候选列表、通过至少一个第一目标块各自的参考帧内预测模式参数构建模式候选列表,以及通过至少一个第二目标块各自的第一颜色分量预测模式构建候选列表,这四种构建模式候选列表的方法可以单独使用其中一种,也可以将其组合使用,本申请实施例对此不作限定。
应理解,在本申请实施例中,可以对模式候选列表中的预测模式进行顺序调整。
还应理解,在本申请实施例中,模式候选列表中的帧内预测模式具有互异特性。
本申请实施例提供了一种解码方法,可以通过预先设定的帧内预测模式构建模式候选列表,也可以通过新的帧内预测模式构建模式候选列表,也可以通过至少一个第一目标块各自的参考帧内预测模式参数构建模式候选列表,也可以通过至少一个第二目标块各自的第一颜色分量预测模式构建候选列表。且这四种构建模式候选列表的方法可以单独使用其中一种,也可以将其组合使用。如此,不仅能够提高帧内色度预测模式的完备性和多样性,而且还能够提高帧内色度预测的准确性,从而能够提高解码效率,进而提升解码性能。
在本申请的又一实施例中,参见图17,应用于编码器,其示出了本申请实施例提供的一种编码方法的流程示意图。如图17所示,该方法可以包括:
S1710,确定当前块的参考块。
需要说明的是,本申请实施例的编码方法应用于编码装置,或者集成有该编码装置的编码设备(也可简称为“编码器”)。另外,本申请实施例的编码方法具体可以是指一种帧内预测方法。其中,假定第一颜色分量为亮度分量,第二颜色分量为色度分量,那么更具体地,这里是一种帧内色度预测模式的推导方法。
还需要说明的是,在本申请实施例中,当前块可以是指视频图像中当前待进行帧内预测的编码块。 其中,参考块是当前块的相邻块;而且这里的“相邻”可以是指空间相邻、时域相邻等,对此不作具体限定。如此,当该方法应用于编码器时,当前块的参考块可以是当前块的相邻已编码块。
示例性地,在本申请实施例中,当该方法应用于编码器时,以色度分量预测为例,参考色度块即为当前块的相邻已编码色度块。
S1720,在参考块的第二颜色分量的预测模式满足第一条件时,根据参考块,确定参考帧内预测模式参数。
需要说明的是,在本申请实施例中,如果参考块的第二颜色分量的预测模式满足第一条件,那么可以根据参考块来推导出参考帧内预测模式参数。
在一些实施例中,第一条件可以包括:参考块的第二颜色分量的预测模式是第一预设模式。
在一种可能的实现方式中,第一预设模式可以是非角度预测模式。示例性地,第一预设模式包括下述至少之一:分量间预测模式、IBC模式、MIP模式和Palette模式。
其中,分量间预测模式可以是CCLM模式。
在另一种可能的实现方式中,第一预设模式是帧间预测模式。
也就是说,在本申请实施例中,如果参考块的第二颜色分量的预测模式是第一预设模式,例如CCLM模式,那么根据当前块的参考块可以推导出参考帧内预测模式参数。
在一些实施例中,第一条件可以包括:参考块的第二颜色分量的预测模式不是第二预设模式。
在又一种可能的实现方式中,第二预设模式是角度预测模式。
在又一种可能的实现方式中,第二预设模式是传统预测模式。示例性地,第二预设模式可以是DC模式或Planar模式。
也就是说,在本申请实施例中,如果参考块的第二颜色分量的预测模式不是第二预设模式,例如角度预测模式、DC模式或Planar模式等,那么根据当前块的参考块也可以推导出参考帧内预测模式参数。
在一些实施例中,第一条件可以包括确定第一参数,第一参数指示根据参考块确定参考帧内预测模式参数;其中,方法还包括:对第一参数进行编码,将所得到的编码比特写入码流。
还应理解,在本申请实施例中,还可以在码流中写入第一参数,然后解码端通过解码码流来确定第一参数,而第一参数指示需要根据参考块,确定参考帧内预测模式参数。
在一些实施例中,根据参考块,确定参考帧内预测模式参数,可以包括:根据参考块,确定参考像素;根据参考像素的重建样值,确定第一参数;根据第一参数,确定参考帧内预测模式参数。
在一种具体的实施例中,根据参考块,确定参考像素,可以包括:根据参考块的相邻区域中的像素,确定参考像素;其中,相邻区域包括下述至少之一:左侧相邻区域、上侧相邻区域和左上侧相邻区域。
在另一种具体的实施例中,根据参考块,确定参考像素,可以包括:根据参考块中的像素,确定参考像素。
进一步地,对于第一参数而言,在一些实施例中,根据参考像素的重建样值,确定第一参数,可以包括:对参考像素的重建样值进行梯度计算,确定参考像素的水平梯度值和垂直梯度值;根据参考像素的水平梯度值和垂直梯度值进行角度映射,确定参考像素对应的至少一种帧内预测模式;根据参考像素的水平梯度值和垂直梯度值进行梯度强度计算,确定参考像素对应的至少一个梯度强度值;根据参考像素对应的至少一种帧内预测模式以及至少一个梯度强度值,确定第一参数。
在本申请实施例中,参考像素包括至少一个候选像素,且每一个候选像素对应一种帧内预测模式和一个梯度强度值。以任意一个候选像素为例,在一些实施例中,根据参考像素的水平梯度值和垂直梯度值进行角度映射,确定参考像素对应的至少一种帧内预测模式,可以包括:确定候选像素的水平梯度绝对值和垂直梯度绝对值;根据候选像素的水平梯度绝对值和垂直梯度绝对值进行角度映射,确定候选像素的初始模式索引值;根据候选像素的初始模式索引值,确定候选像素对应的一种帧内预测模式。
进一步地,在一些实施例中,根据候选像素的初始模式索引值,确定候选像素对应的一种帧内预测模式,可以包括:根据预设角度补偿值对初始模式索引值进行补偿处理,确定候选像素的目标模式索引值;根据候选像素的目标模式索引值,确定候选像素对应的一种帧内预测模式。
进一步地,在一些实施例中,该方法还包括:确定候选像素的目标象限值;确定在预设映射关系下目标象限值对应的取值;将预设角度补偿值设置为等于取值。
进一步地,在一些实施例中,确定候选像素的目标象限值,可以包括:根据候选像素的水平梯度值,确定第一符号值;以及根据候选像素的垂直梯度值,确定第二符号值;根据候选像素的水平梯度绝对值和垂直梯度绝对值的比较结果,确定候选像素的比较值;根据比较值、第一符号值和第二符号值进行象限映射,确定候选像素对应的目标象限值。
进一步地,在一些实施例中,根据参考像素的水平梯度值和垂直梯度值进行梯度强度计算,确定参考像素对应的至少一个梯度强度值,可以包括:根据候选像素的水平梯度绝对值和垂直梯度绝对值进行加法计算,确定候选像素对应的一个梯度强度值。
在本申请实施例中,参考像素的重建样值至少包括下述其中一项:参考像素的第一颜色分量的重建样值;参考像素的第二颜色分量的重建样值。
这样,对于第一参数而言,在一种具体的实施例中,根据参考像素的重建样值,确定第一参数,可以包括:在参考像素的重建样值为参考像素的第一颜色分量的重建样值时,确定参考像素的第一颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;在参考像素的重建样值为参考像素的第二颜色分量的重建样值时,确定参考像素的第二颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;根据参考像素的第一颜色分量对应的至少一种帧内预测模式和参考像素的第二颜色分量对应的至少一种帧内预测模式,组成第一集合,且第一集合包括具有互异特性的至少一种参考帧内预测模式;根据参考像素的第一颜色分量对应的至少一个梯度强度值和参考像素的第二颜色分量对应的至少一个梯度强度值,对归属于同一参考帧内预测模式的梯度强度值进行累加计算,确定至少一种参考帧内预测模式对应的梯度强度值;根据至少一种参考帧内预测模式和至少一种参考帧内预测模式对应的梯度强度值,确定第一参数。
示例性地,在本申请中的实施例应用于编码器时,确定参考像素对应的至少一种帧内预测模式和至少一个梯度强度值的示例性说明与解码器侧类似,在此不再进行赘述。
在一些实施例中,根据第一参数,确定参考帧内预测模式参数,可以包括:根据至少一种参考帧内预测模式对应的梯度强度值,组成第二集合;若第二集合中的梯度强度值均为零,则根据PLANAR模式确定参考帧内预测模式参数;若第二集合中的梯度强度值存在非零项,则从第二集合中确定最大梯度强度值,根据最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数。
需要说明的是,在根据最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数之后,该方法还可以包括:将第二集合中的最大梯度强度值赋值为-1,确定第三集合;若最大梯度强度值对应的帧内预测模式与参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则从第三集合中确定新的最大梯度强度值,根据新的最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数。
还需要说明的是,在根据新的最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数之后,该方法还可以包括:若新的最大梯度强度值对应的帧内预测模式与参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则根据DC模式确定参考帧内预测模式参数。
S1730,根据参考帧内预测模式参数,确定当前块的第二颜色分量的预测值。
需要说明的是,在本申请实施例中,根据参考帧内预测模式参数,确定当前块的第二颜色分量的预测值,可以包括:根据参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表;根据模式候选列表,确定当前块的第二颜色分量的预测值。
在一些实施例中,确定当前块的参考块,可以包括:确定当前块邻近的至少一个目标像素;根据至少一个目标像素各自所在的块,确定至少一个第一目标块;根据至少一个第一目标块,确定当前块的参考块。
需要说明的是,在本申请实施例中,对于这些第一目标块,该方法还可以包括:基于至少一个第一目标块,按照第一预设顺序依次作为参考块,确定至少一个第一目标块各自的参考帧内预测模式参数;根据至少一个第一目标块各自的参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表。
应理解,第一预设顺序可以是人为设定的,也可以是在特定场景下按照某种规则设定的,本申请实施例对此不作限定。
在一些实施例中,确定当前块的参考块,还可以包括:确定当前块的同位置的第一颜色分量区域;从第一颜色分量区域划分的多个块中,确定处于预设位置的至少一个第二目标块;根据至少一个第二目标块,确定当前块的参考块。
还需要说明的是,在本申请实施例中,对于这些第二目标块,该方法还可以包括:基于至少一个第二目标块的预设顺序,依次确定至少一个第二目标块各自的第一颜色分量预测模式参数;根据至少一个第一目标块各自的参考帧内预测模式参数和至少一个第二目标块各自的第一颜色分量预测模式参数,构建当前块的第二颜色分量的模式候选列表。
进一步地,在确定模式候选列表之后,在一些实施例中,该方法还可以包括:确定模式候选列表中的前两个预测模式;对前两个预测模式的模式索引序号进行偏移操作,确定至少一个新的帧内预测模式;将至少一个新的帧内预测模式放置于模式候选列表中。
进一步地,在确定模式候选列表之后,在一些实施例中,该方法还可以包括:对模式候选列表中的预测模式进行顺序调整。
进一步地,在一些实施例中,根据模式候选列表,确定当前块的第二颜色分量的预测值,可以包括:根据模式候选列表,确定当前块的第二颜色分量的目标预测模式;利用目标预测模式对当前块的第二颜色分量进行预测处理,确定当前块的第二颜色分量的预测值。
在一种具体的实施例中,根据模式候选列表,确定当前块的第二颜色分量的目标预测模式,可以包括:根据模式候选列表中的至少一种候选预测模式对当前块的第二颜色分量进行预编码,确定至少一种候选预测模式各自的预编码结果;根据至少一种候选预测模式各自的预编码结果,确定至少一种候选预测模式各自的率失真代价值;从至少一种候选预测模式各自的率失真代价值中确定最小率失真代价值,将最小率失真代价值对应的候选预测模式确定为当前块的第二颜色分量的目标预测模式。
在本申请实施例中,针对模式候选列表中的至少一种候选预测模式,可以确定这至少一种候选预测模式各自的失真值大小。在一种具体的实施例中,可以根据率失真优化(Rate Distortion Optimization,RDO)的代价结果进行确定,也可以根据绝对误差和(Sum of Absolute Difference,SAD)的代价结果进行确定,甚至也可以根据绝对变换差和(Sum of Absolute Transformed Difference,SATD)的代价结果进行确定,但是这里并不作任何限定。
示例性地,以率失真代价值为例,可以根据至少一种候选预测模式各自的预编码结果,确定这至少一种候选预测模式各自的率失真代价值;然后从中选取最小率失真代价值,并将最小率失真代价值对应的候选预测模式确定为目标预测模式(即最佳预测模式),从而能够提升第二颜色分量的编码效率。
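从候选列表中选取最小代价对应模式的过程可以概括如下(其中cost_fn为假设的代价评估接口,实际中可代入RDO、SAD或SATD的代价结果):

```python
def select_target_mode(candidate_modes, cost_fn):
    """对模式候选列表中每个候选预测模式求代价值,
    返回最小代价对应的目标预测模式(即最佳预测模式)。"""
    return min(candidate_modes, key=cost_fn)
```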
在一些实施例中,该方法还可以包括:根据模式候选列表,确定目标预测模式对应的模式索引序号;对模式索引序号进行编码,将所得到编码比特写入码流。
示例性地,参见表4,使用截断一元码进行二值化,每个模式索引序号既可以使用上下文模型进行编码,也可以使用旁路编码。
表4
(表4以图像形式给出,内容为各模式索引序号经截断一元码二值化后的码字。)
S1740,根据当前块的第二颜色分量的预测值,确定当前块的第二颜色分量的预测差值。
需要说明的是,根据当前块的第二颜色分量的预测值,确定当前块的第二颜色分量的预测差值,可以包括:根据当前块的第二颜色分量的原始值和当前块的第二颜色分量的预测值,确定当前块的第二颜色分量的预测差值。
进一步地,在一些实施例中,该方法还可以包括:对当前块的第二颜色分量的预测差值进行编码,将所得到编码比特写入码流。
在本申请实施例中,在确定出当前块的第二颜色分量的预测值之后,可以对当前块的第二颜色分量的原始值和当前块的第二颜色分量的预测值进行减法运算,得到当前块的第二颜色分量的预测差值,然后将其写入码流中。
还需要说明的是,本申请实施例还提供了一种码流,该码流是根据待编码信息进行比特编码生成的;其中,待编码信息可以包括下述至少一项:当前块的第二颜色分量的预测差值、模式索引序号和第一参数。
在本申请实施例中,编码端在确定当前块的第二颜色分量的预测差值、模式索引序号和第一参数之后,可以将这些信息进行编码并写入码流中,通过码流传输到解码端。这样,后续在解码端,通过解码码流就可以直接确定出当前块的第二颜色分量的预测差值、模式索引序号和第一参数等信息,从而能够提高解码效率。
本申请实施例提供了一种编码方法,可以在确定non-CCLM的参考帧内预测模式参数时,考虑了相邻已编码的色度块预测模式。如此,不仅能够提高帧内色度预测模式的完备性和多样性,而且还能够提高帧内色度预测的准确性,从而能够提高编码效率,进而提升编码性能。
在本申请的再一实施例中,当邻近已编码块的色度预测模式为CCLM模式时,此邻近已编码块仍存在自身的纹理内容特性以及与当前块的空间相关性,并且邻近已编码块的重建亮度信息、重建色度信息都是已编码的重建信息,因此,本申请实施例提出了一种利用这些重建信息的LM模式的推导技术 (Linear Model–Derived Mode,LM-DM)。在这里,为了更好地预测色度,可以通过一种推导流程添加多种预测模式,以补充已有可供选择的色度预测模式的完备性。
示例性地,使用MaxChromaCandidateListNum种色度预测模式替换原有的五种non-CCLM模式,该MaxChromaCandidateListNum种色度预测模式可以按照预设顺序进行依次添加,并保证各个色度预测模式的互异性。其中,MaxChromaCandidateListNum指的是预设数量,表示该色度候选列表可以存储的模式数量的最大值。例如,可以先建立一个长度为MaxChromaCandidateListNum的色度候选列表,然后按照预设顺序依次将每个模式添加到该色度候选列表中。
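按预设顺序添加并保证互异性、同时受最大数量约束的列表构建过程,可以用如下示意函数表达:

```python
def build_candidate_list(modes_in_order, max_num):
    """按预设顺序依次添加色度预测模式, 保证各模式互异,
    并截断到最大数量max_num(对应正文中的MaxChromaCandidateListNum)。"""
    out = []
    for m in modes_in_order:
        if m not in out:   # 互异性: 已存在的模式不重复添加
            out.append(m)
        if len(out) == max_num:
            break          # 达到列表可存储模式数量的最大值
    return out
```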
此外,色度候选列表构建完成后也可能会有调整(包括顺序和模式上的调整)。具体可以分为三种情况:
(1)若该色度候选列表按照预设顺序进行构造,并且构建完成后不需要进行调整,则编码端需要构建色度候选列表,对于解码端只需要构建得到码流传输的色度候选列表索引的模式即可;
(2)若该色度候选列表按照预设顺序进行构造,并且构建完成后需要进行调整,则编码端和解码端均需要构建完整色度候选列表,并在调整后,解码端选择码流传输的色度候选列表索引对应的模式;
(3)对于(1)中的编码端,需要构建出完整的色度候选列表进行率失真优化,但可以存在快速算法,包括但不限于只进行前几种模式的率失真优化,则此时编码端不需要构建包含全部模式的色度候选列表。
在一种具体的实施例中,以解码端为例,MaxChromaCandidateListNum种色度预测模式可以按照以下预设顺序进行依次添加。
应理解,以下预设顺序仅为示例性说明,添加色度预测模式的预设顺序包括但不限于以下描述的顺序。
(1)按照如图3所示的位置,按顺序依次将当前色度解码块(右图中整个斜线的填充区域)对应的同位亮度块(左图中整个斜线的填充区域)的C、TL、TR、BL、BR所在的CU的帧内亮度预测模式添加至色度候选列表中。
下面给出C、TL、TR、BL、BR的详细位置推导过程:
假设当前色度解码块左上角对应的同位亮度块相对于图像左上角亮度块的位置(即亮度块TL的位置)为(xCb,yCb),当前色度解码块对应的同位亮度块(左图中整个斜线的填充区域)的宽为cbWidth,高为cbHeight。
C块的坐标信息为(xCb+cbWidth/2,yCb+cbHeight/2);
TL块的坐标信息为(xCb,yCb);
TR块的坐标信息为(xCb+cbWidth-1,yCb);
BL块的坐标信息为(xCb,yCb+cbHeight-1);
BR块的坐标信息为(xCb+cbWidth-1,yCb+cbHeight-1)。
同位亮度块的亮度预测模式的具体推导规则如下:
若划分树类型为单树类型,参见图4,可以将C所在的CU的亮度预测模式添加至色度候选列表中。
若划分树类型为双树类型,参见图3,执行如下操作:
以推导C块的亮度预测模式举例,假设当前色度解码块左上角对应的同位亮度块相对于图像左上角亮度块的位置(即亮度块TL的位置)为(xCb,yCb),当前色度解码块对应的同位亮度块(左图中整个斜线的填充区域)的宽为cbWidth,高为cbHeight。则C块的亮度预测模式lumaIntraPredMode推导过程如下:
首先,判断图3中的C块是否使用MIP模式。其中,C块的位置信息为(xCb+cbWidth/2,yCb+cbHeight/2)。
如果IntraMipFlag[xCb+cbWidth/2][yCb+cbHeight/2]为1,则lumaIntraPredMode=INTRA_PLANAR。
其中,数组IntraMipFlag[x][y]指的是包含坐标(x,y)的解码块是否使用MIP模式。
其次,如果CuPredMode[0][xCb+cbWidth/2][yCb+cbHeight/2]为MODE_IBC或者MODE_PLT,则lumaIntraPredMode=INTRA_DC。
其中,数组CuPredMode[chType][x][y]指的是包含坐标(x,y)的亮度或者色度解码块使用的帧内预测模式,chType为0指的是亮度,chType为1指的是色度。
否则,
lumaIntraPredMode=IntraPredModeY[xCb+cbWidth/2][yCb+cbHeight/2]。
其中,数组IntraPredModeY[x][y]指的是包含坐标(x,y)的解码块使用的帧内预测模式。
在一些实施例中,参见表2,同位亮度块的亮度预测模式转换为色度预测模式的具体推导规则如下:
当sps_chroma_format_idc为0时,不需要使用色度帧内预测模式,因此不存在该推导规则;
当sps_chroma_format_idc为2时,使用表3规定的帧内亮度预测模式lumaIntraPredMode的mode X导出帧内色度预测模式的mode Y;
否则,帧内色度预测模式等于帧内亮度预测模式lumaIntraPredMode。
(2)按照如图5所示的位置,按顺序依次添加当前色度解码块邻近的色度像素0、1、2、3、4位置所在的已解码色度块的帧内色度预测模式。
下面解释邻近色度像素0、1、2、3、4的详细位置推导过程:
假设当前色度解码块左上角色度像素相对于图像左上角色度像素的位置为(xCb,yCb),当前色度解码块的宽为cbWidth,高为cbHeight。
色度像素0的位置信息为(xCb-1,yCb+cbHeight-1);
色度像素1的位置信息为(xCb+cbWidth-1,yCb-1);
色度像素2的位置信息为(xCb-1,yCb+cbHeight);
色度像素3的位置信息为(xCb+cbWidth,yCb-1);
色度像素4的位置信息为(xCb-1,yCb-1)。
在一些实施例中,邻近已解码色度块的色度预测模式转换为传统色度预测模式的具体规则如下:
当邻近已解码色度块的预测模式为帧间模式时,不执行任何添加操作;
当邻近已解码色度块的色度预测模式为CCLM模式时,执行LM模式的推导过程(Linear Model-Derived Mode,LM-DM);
否则,直接添加邻近已解码色度块的帧内色度预测模式。
(3)对上述(1)、(2)步骤所添加模式中的前2个模式,将其模式索引+1或者-1后得到的模式进行添加。这里存在两种方式:第一种为判断该模式是否为角度模式,若为角度模式,则添加将该角度模式映射的角度顺时针或逆时针偏移一个最小角度单位后所映射的角度模式;若该模式不为角度模式,则不做任何操作。第二种为直接将对应的模式索引值+1或-1:若模式索引值为0,则只添加模式索引为1的模式;若模式索引值为最大模式索引,则只添加模式索引为最大模式索引-1的模式。若只存在1个模式,则只添加该模式偏移的两个角度模式;若为非角度模式,则不执行步骤(3)。
(4)添加预先设定的一组默认non-CCLM模式色度候选列表(默认色度候选列表包括但不限于后续描述形式),色度候选列表中的模式分别为PLANAR_IDX,VER_IDX,HOR_IDX,DC_IDX,VDIA_IDX,VER_IDX–4,VER_IDX+4,HOR_IDX–4,HOR_IDX+4。
在本申请实施例中,当邻近已解码色度块的色度预测模式为CCLM模式时,执行LM-DM的推导过程,该推导过程主要包括以下三个步骤:第一步为确定模板区域并获取模板区域的重建像素,第二步为对第一步所获取的像素计算梯度映射为角度模式并将其统计在直方图中,第三步为推导LM-DM的模式。
假设邻近已解码色度块的左上角色度像素位置相对于图像左上角色度像素位置的坐标为(xCbNb,yCbNb),色度像素大小的宽为CbNbWidth,色度像素大小的高为CbNbHeight。以sps_chroma_format_idc为1举例,即YUV420格式,因此,亮度像素大小的宽为2×CbNbWidth,亮度像素大小的高为2×CbNbHeight。
第一步:确定模板区域并获取模板区域的重建像素(即输入)。
输入:该邻近已解码色度块的相邻区域的色度像素为pC[x][y],其中,x∈[-3,CbNbWidth],y∈[-3,-1]且有x∈[-3,-1],y∈[0,CbNbHeight],原点[0][0]是块内左上角的色度像素坐标。该邻近已解码色度块的相邻区域的同位亮度像素为pY[x][y],其中,x∈[-3,2×CbNbWidth],y∈[-3,-1]且有x∈[-3,-1],y∈[0,2×CbNbHeight],原点[0][0]是块内左上角的色度像素所对应的亮度像素坐标。具体位置如图10A中的圆点像素和图10B中的圆点像素所示。
第二步:对第一步所获取的像素计算梯度映射为角度模式并将其统计在直方图中。具体伪代码可以参见前述内容的描述。
以其中一种实现方式为例,将步骤(1)和步骤(2)的梯度强度值iAmp按照对应的帧内预测模式ipm累加,以帧内预测模式ipm为横坐标,以梯度强度值iAmp为纵坐标建立直方图HOG,如图14所示。
第三步:推导LM-DM的模式(即输出)。
输出:由LM-DM推导可以得到帧内色度预测模式IntraPredModeD,模式索引区间范围为[0,66]。
若直方图HOG中不含非零项,则IntraPredModeD=INTRA_PLANAR。
否则,将IntraPredModeD=argmax_i(HoG[i]),将HoG[IntraPredModeD]设置为-1;
若IntraPredModeD与(xCbNb,yCbNb)所在色度块的DM模式相等时,则再次重新搜索,将IntraPredModeD=argmax_i(HoG[i])。
若IntraPredModeD继续与(xCbNb,yCbNb)所在色度块的DM模式相等,则将IntraPredModeD=INTRA_DC。
在一些实施例中,对于邻近已解码块,仍使用其相邻区域的模板像素进行分析。但是,因为其块内部的亮度像素和色度像素也已重建完成,因此,本申请实施例也可以使用邻近已解码块的全部内部像素对LM-DM的模式进行推导。
LM-DM的推导过程的详细步骤如下:
假设邻近已解码色度块的左上角像素位置相对于图像左上角像素位置的坐标为(xCbNb,yCbNb),色度像素大小的宽为CbNbWidth,色度像素大小的高为CbNbHeight。以sps_chroma_format_idc为1举例,即YUV420格式,因此,亮度像素大小的宽为2×CbNbWidth,亮度像素大小的高为2×CbNbHeight。
第一步:确定模板区域并获取模板区域的重建像素(即输入)。
输入:该邻近已解码色度块的色度像素为pC[x][y],其中,x∈[0,CbNbWidth-1],y∈[0,CbNbHeight-1],原点[0][0]是块内左上角的色度像素坐标。该邻近已解码色度块的同位亮度像素为pY[x][y],其中,x∈[0,2×CbNbWidth-1],y∈[0,2×CbNbHeight-1],原点[0][0]是块内左上角的色度像素所对应的亮度像素坐标。具体位置如图11A中的圆点像素和图11B中的圆点像素所示。
第二步:对第一步所获取的像素计算梯度映射为角度模式并将其统计在直方图中。具体伪代码可以参见前述内容的描述,在此不再进行赘述。
第三步:推导LM-DM的模式(即输出)。
输出:由LM-DM推导可以得到帧内色度预测模式IntraPredModeD,模式索引区间范围为[0,66]。
若直方图HOG中不含非零项,则IntraPredModeD=INTRA_PLANAR。
否则,将IntraPredModeD=argmax_i(HoG[i]),将HoG[IntraPredModeD]设置为-1;
若IntraPredModeD与(xCbNb,yCbNb)所在色度块的DM模式相等时,则再次重新搜索,将IntraPredModeD=argmax_i(HoG[i])。
若IntraPredModeD继续与(xCbNb,yCbNb)所在色度块的DM模式相等,则将IntraPredModeD=INTRA_DC。
在另一些实施例中,使用邻近已解码块的全部内部像素对LM-DM的模式进行推导时,计算复杂度也会随其块内部的亮度像素和色度像素的数量增多而增加。因此,为了减少复杂度,本申请实施例也可以使用邻近已解码块的部分内部像素进行分析。
这里,分别以邻近色度像素位于图5中的色度像素0和色度像素1为例进行说明。
(i)以邻近色度像素位于图5中的色度像素0为例,LM-DM的推导过程的详细步骤如下:
第一步:确定模板区域并获取模板区域的重建像素(即输入)。
输入:该邻近已解码色度块的色度像素为pC[x][y],其中,x∈[CbNbWidth-3,CbNbWidth-1],y∈[0,CbNbHeight-1],原点[0][0]是块内左上角的色度像素坐标。该邻近已解码色度块的同位亮度像素为pY[x][y],其中,x∈[2×CbNbWidth-3,2×CbNbWidth-1],y∈[0,2×CbNbHeight-1],原点[0][0]是块内左上角的色度像素所对应的亮度像素坐标。具体位置如图12A中的圆点像素和图12B中的圆点像素所示。
第二步:对第一步所获取的像素计算梯度映射为角度模式并将其统计在直方图中。具体伪代码可以参见前述描述,在此不再进行赘述。
第三步:推导LM-DM的模式(即输出)。
(ii)以邻近色度像素位于图5中的色度像素1为例,LM-DM的推导过程的详细步骤如下:
第一步:确定模板区域并获取模板区域的重建像素(即输入)。
输入:该邻近已解码色度块的色度像素为pC[x][y],其中,x∈[0,CbNbWidth-1],y∈[CbNbHeight-3,CbNbHeight-1],原点[0][0]是块内左上角的色度像素坐标。该邻近已解码色度块的同位亮度像素为pY[x][y],其中,x∈[0,2×CbNbWidth-1],y∈[2×CbNbHeight-3,2×CbNbHeight-1],原点[0][0]是块内左上角的色度像素所对应的亮度像素坐标。具体位置如图13A中的圆点像素和图13B中的圆点像素所示。
第二步:对第一步所获取的像素计算梯度映射为角度模式并将其统计在直方图中。具体伪代码可以参见前述内容的描述,在此不再进行赘述。
第三步:推导LM-DM的模式(即输出)。
输出:由LM-DM推导可以得到帧内色度预测模式IntraPredModeD,模式索引区间范围为[0,66]。
若直方图HOG中不含非零项,则IntraPredModeD=INTRA_PLANAR。
否则,将IntraPredModeD=argmax_i(HoG[i]),将HoG[IntraPredModeD]设置为-1;
若IntraPredModeD与(xCbNb,yCbNb)所在色度块的DM模式相等时,则再次重新搜索,将IntraPredModeD=argmax_i(HoG[i])。
若IntraPredModeD继续与(xCbNb,yCbNb)所在色度块的DM模式相等,则将IntraPredModeD=INTRA_DC。
简单来说,在本申请实施例中,当邻近已编解码色度块的色度预测模式为CCLM模式时,该邻近已编解码色度块仍存在自身的纹理内容特性以及与当前编解码块的空间相关性,并且邻近编解码块的重建亮度信息、重建色度信息都是已编解码的重建信息,因此,本申请实施例利用了上述重建信息进行LM-DM模式的推导。
还需要说明的是,在本申请实施例中,LM-DM模式的帧内色度块的non-CCLM模式的推导方式可以包括:
(1)充分利用已编解码块的内容特性,执行纹理梯度分析;
(2)充分利用亮度与色度的高度相关性,同时通过亮度与色度推导non-CCLM模式。
在本申请实施例中,通过上述实施例对前述实施例的具体实现进行详细阐述,从中可以看出,根据前述实施例的技术方案,本申请实施例可以提高帧内色度预测模式的完备性。其中,通过对邻近已编解码色度块和亮度块执行纹理梯度分析,构建具有对应于多个角度模式条目的梯度直方图,通过使用水平和垂直Sobel滤波器分别计算模板区域的纯水平和垂直方向的强度,确定角度模式并更新振幅,得到最佳的non-CCLM模式。在当前编解码块内容特性不同的情况下,本申请实施例可以进一步完善帧内色度预测模式的多样性,进而得到更加精确的色度预测值。
在本申请的再一实施例中,参见图18,其示出了本申请实施例提供的一种编码器的组成结构示意图。如图18所示,该编码器1800可以包括:第一确定单元1810和第一预测单元1820;其中,
第一确定单元1810,配置为确定当前块的参考块;其中,参考块是当前块的相邻块;以及在参考块的第二颜色分量的预测模式满足第一条件时,根据参考块,确定参考帧内预测模式参数;
第一预测单元1820,配置为根据参考帧内预测模式参数,确定当前块的第二颜色分量的预测值;
第一确定单元1810,还配置为根据当前块的第二颜色分量的预测值,确定当前块的第二颜色分量的预测差值。
在一些实施例中,第一条件包括:参考块的第二颜色分量的预测模式是第一预设模式。
在一些实施例中,第一预设模式包括下述至少之一:分量间预测模式、IBC模式、MIP模式和Palette模式。
在一些实施例中,分量间预测模式是CCLM模式。
在一些实施例中,第一预设模式是帧间预测模式。
在一些实施例中,第一条件包括:参考块的第二颜色分量的预测模式不是第二预设模式。
在一些实施例中,第二预设模式是角度预测模式。
在一些实施例中,第二预设模式是DC模式或Planar模式。
在一些实施例中,第一条件包括确定第一参数,第一参数指示根据参考块确定参考帧内预测模式参数。
在一些实施例中,参见图18,编码器1800还可以包括编码单元1830,配置为对第一参数进行编码,将所得到的编码比特写入码流。
在一些实施例中,参见图18,编码器1800还可以包括第一构建单元1840;其中,
第一构建单元1840,配置为根据参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表;
第一预测单元1820,还配置为根据模式候选列表,确定当前块的第二颜色分量的预测值。
在一些实施例中,第一确定单元1810,还配置为根据参考块,确定参考像素;根据参考像素的重建样值,确定第一参数;根据第一参数,确定参考帧内预测模式参数。
在一些实施例中,第一确定单元1810,还配置为根据参考块的相邻区域中的像素,确定参考像素;其中,相邻区域包括下述至少之一:左侧相邻区域、上侧相邻区域和左上侧相邻区域。
在一些实施例中,第一确定单元1810,还配置为根据参考块中的像素,确定参考像素。
在一些实施例中,第一确定单元1810,还配置为对参考像素的重建样值进行梯度计算,确定参考像素的水平梯度值和垂直梯度值;根据参考像素的水平梯度值和垂直梯度值进行角度映射,确定参考像素对应的至少一种帧内预测模式;根据参考像素的水平梯度值和垂直梯度值进行梯度强度计算,确定参考像素对应的至少一个梯度强度值;根据参考像素对应的至少一种帧内预测模式以及至少一个梯度强度值,确定第一参数。
在一些实施例中，参考像素包括至少一个候选像素，且每一个候选像素对应一种帧内预测模式和一个梯度强度值；第一确定单元1810，还配置为确定候选像素的水平梯度绝对值和垂直梯度绝对值；根据候选像素的水平梯度绝对值和垂直梯度绝对值进行角度映射，确定候选像素的初始模式索引值；根据候选像素的初始模式索引值，确定候选像素对应的一种帧内预测模式。
在一些实施例中,第一确定单元1810,还配置为根据预设角度补偿值对初始模式索引值进行补偿处理,确定候选像素的目标模式索引值;根据候选像素的目标模式索引值,确定候选像素对应的一种帧内预测模式。
在一些实施例中,第一确定单元1810,还配置为确定候选像素的目标象限值;确定在预设映射关系下目标象限值对应的取值;将预设角度补偿值设置为等于取值。
在一些实施例中,第一确定单元1810,还配置为根据候选像素的水平梯度值,确定第一符号值;以及根据候选像素的垂直梯度值,确定第二符号值;根据候选像素的水平梯度绝对值和垂直梯度绝对值的比较结果,确定候选像素的比较值;根据比较值、第一符号值和第二符号值进行象限映射,确定候选像素对应的目标象限值。
在一些实施例中,第一确定单元1810,还配置为根据候选像素的水平梯度绝对值和垂直梯度绝对值进行加法计算,确定候选像素对应的一个梯度强度值。
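以3×3 Sobel滤波为例，上述梯度计算、符号值/象限映射与梯度强度计算可示意如下（假设性示意代码：窗口布局、象限编号方案以及函数名sobel_gradients、quadrant、gradient_intensity均为本文为说明所作的假设，并非规范实现）：

```python
def sobel_gradients(p):
    """p: 以候选像素为中心的3x3重建样值窗口, 返回(水平梯度gx, 垂直梯度gy)。"""
    gx = (p[0][2] + 2 * p[1][2] + p[2][2]) - (p[0][0] + 2 * p[1][0] + p[2][0])
    gy = (p[2][0] + 2 * p[2][1] + p[2][2]) - (p[0][0] + 2 * p[0][1] + p[0][2])
    return gx, gy

def quadrant(gx, gy):
    """由两个符号值与|gx|、|gy|的比较值进行象限映射(示意编号0..3)。"""
    s1 = 0 if gx >= 0 else 1                # 第一符号值
    s2 = 0 if gy >= 0 else 1                # 第二符号值
    cmp_val = 0 if abs(gx) >= abs(gy) else 1  # 比较值
    return (s1 ^ s2) * 2 + cmp_val          # 示意性的象限映射规则

def gradient_intensity(gx, gy):
    """梯度强度 = 水平梯度绝对值与垂直梯度绝对值之和。"""
    return abs(gx) + abs(gy)
```

目标象限值随后可按预设映射关系查出对应的角度补偿值，对初始模式索引值进行补偿得到目标模式索引值。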
在一些实施例中,参考像素的重建样值至少包括下述其中一项:参考像素的第一颜色分量的重建样值;参考像素的第二颜色分量的重建样值。
在一些实施例中,第一确定单元1810,还配置为在参考像素的重建样值为参考像素的第一颜色分量的重建样值时,确定参考像素的第一颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;在参考像素的重建样值为参考像素的第二颜色分量的重建样值时,确定参考像素的第二颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;
第一构建单元1840,还配置为根据参考像素的第一颜色分量对应的至少一种帧内预测模式和参考像素的第二颜色分量对应的至少一种帧内预测模式,组成第一集合,且第一集合包括具有互异特性的至少一种参考帧内预测模式;
第一确定单元1810,还配置为根据参考像素的第一颜色分量对应的至少一个梯度强度值和参考像素的第二颜色分量对应的至少一个梯度强度值,对归属于同一参考帧内预测模式的梯度强度值进行累加计算,确定至少一种参考帧内预测模式对应的梯度强度值;根据至少一种参考帧内预测模式和至少一种参考帧内预测模式对应的梯度强度值,确定第一参数。
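亮度与色度两路得到的(模式, 梯度强度)对按归属于同一参考帧内预测模式累加的过程，可示意如下（假设性示意代码：直方图长度67对应模式索引区间[0,66]，函数名accumulate_hog为本文所设）：

```python
def accumulate_hog(pairs, num_modes=67):
    """pairs: 可迭代的(模式索引, 梯度强度)对序列, 返回各模式累加后的强度直方图。"""
    hog = [0] * num_modes
    for mode, amplitude in pairs:
        hog[mode] += amplitude          # 归属同一参考帧内预测模式的强度值累加
    return hog
```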
在一些实施例中,第一确定单元1810,还配置为根据至少一种参考帧内预测模式对应的梯度强度值,组成第二集合;以及若第二集合中的梯度强度值均为零,则根据PLANAR模式确定参考帧内预测模式参数;若第二集合中的梯度强度值存在非零项,则从第二集合中确定最大梯度强度值,根据最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数。
在一些实施例中,第一确定单元1810,还配置为将第二集合中的最大梯度强度值赋值为-1,确定第三集合;若最大梯度强度值对应的帧内预测模式与参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则从第三集合中确定新的最大梯度强度值,根据新的最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数。
在一些实施例中,第一确定单元1810,还配置为若新的最大梯度强度值对应的帧内预测模式与参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则根据DC模式确定参考帧内预测模式参数。
在一些实施例中,第一确定单元1810,还配置为确定当前块邻近的至少一个目标像素;根据至少一个目标像素各自所在的块,确定至少一个第一目标块;根据至少一个第一目标块,确定当前块的参考块。
在一些实施例中,第一确定单元1810,还配置为基于至少一个第一目标块,按照第一预设顺序依次作为参考块,确定至少一个第一目标块各自的参考帧内预测模式参数;
第一构建单元1840,还配置为根据至少一个第一目标块各自的参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表。
在一些实施例中,第一确定单元1810,还配置为确定当前块的同位置的第一颜色分量区域;从第一颜色分量区域划分的至少一个块中,确定处于预设位置的至少一个第二目标块;根据至少一个第二目标块,确定当前块的参考块。
在一些实施例中,第一确定单元1810,还配置为基于至少一个第二目标块的预设顺序,依次确定至少一个第二目标块各自的第一颜色分量预测模式参数;第一构建单元1840,还配置为根据至少一个第一目标块各自的参考帧内预测模式参数和至少一个第二目标块各自的第一颜色分量预测模式参数,构建当前块的第二颜色分量的模式候选列表。
在一些实施例中,第一确定单元1810,还配置为确定模式候选列表中的前两个预测模式;对前两个预测模式的模式索引序号进行偏移操作,确定至少一个新的帧内预测模式;将至少一个新的帧内预测模式放置于模式候选列表中。
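对模式候选列表前两个预测模式的模式索引序号做偏移以派生新模式的操作，可示意如下（假设性示意代码：偏移量取±1、按67取模以及去重策略均为本文的假设，正文并未限定具体取值）：

```python
def extend_candidates(cand, num_modes=67, offsets=(-1, 1)):
    """对候选列表cand的前两个模式索引做偏移, 将不重复的新模式追加到列表尾部。"""
    out = list(cand)
    for mode in cand[:2]:               # 仅取列表中的前两个预测模式
        for d in offsets:
            new_mode = (mode + d) % num_modes
            if new_mode not in out:     # 避免重复放入候选列表
                out.append(new_mode)
    return out
```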
在一些实施例中,参见图18,编码器1800还可以包括第一调整单元1850,配置为对模式候选列表中的预测模式进行顺序调整。
在一些实施例中,第一确定单元1810,还配置为根据模式候选列表,确定当前块的第二颜色分量的目标预测模式;第一预测单元1820,还配置为利用目标预测模式对当前块的第二颜色分量进行预测处理,确定当前块的第二颜色分量的预测值。
在一些实施例中,编码单元1830,还配置为根据模式候选列表中的至少一种候选预测模式对当前块的第二颜色分量进行预编码,确定至少一种候选预测模式各自的预编码结果;
第一确定单元1810,还配置为根据至少一种候选预测模式各自的预编码结果,确定至少一种候选预测模式各自的率失真代价值;从至少一种候选预测模式各自的率失真代价值中确定最小率失真代价值,将最小率失真代价值对应的候选预测模式确定为当前块的第二颜色分量的目标预测模式。
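基于率失真代价从候选预测模式中选取目标预测模式的流程可示意如下（假设性示意代码：precode为假设的预编码接口，代价取常见的J = D + λ·R形式，并非规范实现）：

```python
def select_best_mode(candidates, precode, lam):
    """precode(mode) -> (失真D, 比特数R); 返回率失真代价值最小的候选模式。"""
    best_mode, best_cost = None, float("inf")
    for mode in candidates:
        d, r = precode(mode)            # 对该候选模式进行预编码
        cost = d + lam * r              # 率失真代价值 J = D + lambda * R
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```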
在一些实施例中,第一确定单元1810,还配置为根据模式候选列表,确定目标预测模式对应的模式索引序号;
编码单元1830，还配置为对模式索引序号进行编码，将所得到的编码比特写入码流。
在一些实施例中,第一确定单元1810,还配置为根据当前块的第二颜色分量的原始值和当前块的第二颜色分量的预测值,确定当前块的第二颜色分量的预测差值;
编码单元1830，还配置为对当前块的第二颜色分量的预测差值进行编码，将所得到的编码比特写入码流。
可以理解地,在本申请实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
因此,本申请实施例提供了一种计算机可读存储介质,应用于编码器1800,该计算机可读存储介质存储有计算机程序,所述计算机程序被第一处理器执行时实现前述实施例中任一项所述的方法。
基于编码器1800的组成以及计算机可读存储介质，参见图19，其示出了本申请实施例提供的编码器1800的具体硬件结构示意图。如图19所示，编码器1800可以包括：第一通信接口1910、第一存储器1920和第一处理器1930；各个组件通过第一总线系统1940耦合在一起。可理解，第一总线系统1940用于实现这些组件之间的连接通信。第一总线系统1940除包括数据总线之外，还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见，在图19中将各种总线都标为第一总线系统1940。其中，
第一通信接口1910,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第一存储器1920,用于存储能够在第一处理器1930上运行的计算机程序;
第一处理器1930,用于在运行所述计算机程序时,执行:
确定当前块的参考块;其中,参考块是当前块的相邻块;在参考块的第二颜色分量的预测模式满足第一条件时,根据参考块,确定参考帧内预测模式参数;根据参考帧内预测模式参数,确定当前块的第二颜色分量的预测值;根据当前块的第二颜色分量的预测值,确定当前块的第二颜色分量的预测差值。
可以理解，本申请实施例中的第一存储器1920可以是易失性存储器或非易失性存储器，或可包括易失性和非易失性存储器两者。其中，非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM)，其用作外部高速缓存。通过示例性但不是限制性说明，许多形式的RAM可用，例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请描述的系统和方法的第一存储器1920旨在包括但不限于这些和任意其它适合类型的存储器。
而第一处理器1930可能是一种集成电路芯片，具有信号的处理能力。在实现过程中，上述方法的各步骤可以通过第一处理器1930中的硬件的集成逻辑电路或者软件形式的指令完成。上述的第一处理器1930可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于第一存储器1920，第一处理器1930读取第一存储器1920中的信息，结合其硬件完成上述方法的步骤。
可以理解的是,本申请描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。对于软件实现,可通过执行本申请所述功能的模块(例如过程、函数等)来实现本申请所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
可选地,作为另一个实施例,第一处理器1930还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
本实施例提供了一种编码器，在该编码器中，通过对当前块相邻的参考块进行相关参数分析，从而能够确定出non-CCLM的参考帧内预测模式参数；根据该参考帧内预测模式参数，可以提高帧内色度预测模式的完备性和多样性，从而能够提高帧内色度预测的准确性，而且还能够提高编码效率，进而提升编码性能。
在本申请的再一实施例中,参见图20,其示出了本申请实施例提供的一种解码器的组成结构示意图。如图20所示,该解码器2000可以包括:第二确定单元2010和第二预测单元2020;其中,
第二确定单元2010,配置为确定当前块的参考块;其中,参考块是当前块的相邻块;以及在参考块的第二颜色分量的预测模式满足第一条件时,根据参考块,确定参考帧内预测模式参数;
第二预测单元2020,配置为根据参考帧内预测模式参数,确定当前块的第二颜色分量的预测值。
在一些实施例中,第一条件包括:参考块的第二颜色分量的预测模式是第一预设模式。
在一些实施例中,第一预设模式包括下述至少之一:分量间预测模式、IBC模式、MIP模式和Palette模式。
在一些实施例中,分量间预测模式是CCLM模式。
在一些实施例中,第一预设模式是帧间预测模式。
在一些实施例中,第一条件包括:参考块的第二颜色分量的预测模式不是第二预设模式。
在一些实施例中,第二预设模式是角度预测模式。
在一些实施例中,第二预设模式是DC模式或Planar模式。
在一些实施例中,第一条件包括:解码码流,确定第一参数;其中,第一参数指示根据参考块确定参考帧内预测模式参数。
在一些实施例中,参见图20,解码器2000还可以包括第二构建单元2030,其中:
第二构建单元2030,配置为根据参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表;
第二预测单元2020,还配置为根据模式候选列表,确定当前块的第二颜色分量的预测值。
在一些实施例中,第二确定单元2010,还配置为根据参考块,确定参考像素;根据参考像素的重建样值,确定第一参数;根据第一参数,确定参考帧内预测模式参数。
在一些实施例中,第二确定单元2010,还配置为根据参考块的相邻区域中的像素,确定参考像素;其中,相邻区域包括下述至少之一:左侧相邻区域、上侧相邻区域和左上侧相邻区域。
在一些实施例中,第二确定单元2010,还配置为根据参考块中的像素,确定参考像素。
在一些实施例中，第二确定单元2010，还配置为对参考像素的重建样值进行梯度计算，确定参考像素的水平梯度值和垂直梯度值；根据参考像素的水平梯度值和垂直梯度值进行角度映射，确定参考像素对应的至少一种帧内预测模式；根据参考像素的水平梯度值和垂直梯度值进行梯度强度计算，确定参考像素对应的至少一个梯度强度值；根据参考像素对应的至少一种帧内预测模式以及至少一个梯度强度值，确定第一参数。
在一些实施例中,参考像素包括至少一个候选像素,且每一个候选像素对应一种帧内预测模式和一个梯度强度值;第二确定单元2010,还配置为确定候选像素的水平梯度绝对值和垂直梯度绝对值;根据候选像素的水平梯度绝对值和垂直梯度绝对值进行角度映射,确定候选像素的初始模式索引值;根据候选像素的初始模式索引值,确定候选像素对应的一种帧内预测模式。
在一些实施例中,第二确定单元2010,还配置为根据预设角度补偿值对初始模式索引值进行补偿处理,确定候选像素的目标模式索引值;根据候选像素的目标模式索引值,确定候选像素对应的一种帧内预测模式。
在一些实施例中,第二确定单元2010,还配置为确定候选像素的目标象限值;确定在预设映射关系下目标象限值对应的取值;将预设角度补偿值设置为等于取值。
在一些实施例中,第二确定单元2010,还配置为根据候选像素的水平梯度值,确定第一符号值;以及根据候选像素的垂直梯度值,确定第二符号值;根据候选像素的水平梯度绝对值和垂直梯度绝对值的比较结果,确定候选像素的比较值;根据比较值、第一符号值和第二符号值进行象限映射,确定候选像素对应的目标象限值。
在一些实施例中,第二确定单元2010,还配置为根据候选像素的水平梯度绝对值和垂直梯度绝对值进行加法计算,确定候选像素对应的一个梯度强度值。
在一些实施例中,参考像素的重建样值至少包括下述其中一项:参考像素的第一颜色分量的重建样值;参考像素的第二颜色分量的重建样值。
在一些实施例中,第二确定单元2010,还配置为在参考像素的重建样值为参考像素的第一颜色分量的重建样值时,确定参考像素的第一颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;在参考像素的重建样值为参考像素的第二颜色分量的重建样值时,确定参考像素的第二颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;
第二构建单元2030,还配置为根据参考像素的第一颜色分量对应的至少一种帧内预测模式和参考像素的第二颜色分量对应的至少一种帧内预测模式,组成第一集合,且第一集合包括具有互异特性的至少一种参考帧内预测模式;
第二确定单元2010,还配置为根据参考像素的第一颜色分量对应的至少一个梯度强度值和参考像素的第二颜色分量对应的至少一个梯度强度值,对归属于同一参考帧内预测模式的梯度强度值进行累加计算,确定至少一种参考帧内预测模式对应的梯度强度值;
第二确定单元2010,还配置为根据至少一种参考帧内预测模式和至少一种参考帧内预测模式对应的梯度强度值,确定第一参数。
在一些实施例中,第二确定单元2010,还配置为根据至少一种参考帧内预测模式对应的梯度强度值,组成第二集合;以及若第二集合中的梯度强度值均为零,则根据PLANAR模式确定参考帧内预测模式参数;若第二集合中的梯度强度值存在非零项,则从第二集合中确定最大梯度强度值,根据最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数。
在一些实施例中,第二确定单元2010,还配置为将第二集合中的最大梯度强度值赋值为-1,确定第三集合;若最大梯度强度值对应的帧内预测模式与参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则从第三集合中确定新的最大梯度强度值,根据新的最大梯度强度值对应的帧内预测模式确定参考帧内预测模式参数。
在一些实施例中,第二确定单元2010,还配置为若新的最大梯度强度值对应的帧内预测模式与参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则根据DC模式确定参考帧内预测模式参数。
在一些实施例中,第二确定单元2010,还配置为确定当前块邻近的至少一个目标像素;根据至少一个目标像素各自所在的块,确定至少一个第一目标块;根据至少一个第一目标块,确定当前块的参考块。
在一些实施例中,第二确定单元2010,还配置为基于至少一个第一目标块,按照第一预设顺序依次作为参考块,确定至少一个第一目标块各自的参考帧内预测模式参数;第二构建单元2030,还配置为根据至少一个第一目标块各自的参考帧内预测模式参数,构建当前块的第二颜色分量的模式候选列表。
在一些实施例中,第二确定单元2010,还配置为确定当前块的同位置的第一颜色分量区域;从第一颜色分量区域划分的至少一个块中,确定处于预设位置的至少一个第二目标块;根据至少一个第二目标块,确定当前块的参考块。
在一些实施例中,第二确定单元2010,还配置为基于至少一个第二目标块的预设顺序,依次确定至少一个第二目标块各自的第一颜色分量预测模式参数;
第二构建单元2030,还配置为根据至少一个第一目标块各自的参考帧内预测模式参数和至少一个第二目标块各自的第一颜色分量预测模式参数,构建当前块的第二颜色分量的模式候选列表。
在一些实施例中,第二确定单元2010,还配置为确定模式候选列表中的前两个预测模式;对前两个预测模式的模式索引序号进行偏移操作,确定至少一个新的帧内预测模式;将至少一个新的帧内预测模式放置于模式候选列表中。
在一些实施例中,参见图20,解码器2000还可以包括第二调整单元2040,配置为对模式候选列表中的预测模式进行顺序调整。
在一些实施例中,参见图20,解码器2000还可以包括解码单元2050,配置为解析码流,确定当前块的第二颜色分量的模式索引序号;
第二确定单元2010,还配置为根据模式候选列表,确定模式索引序号对应的目标预测模式;
第二预测单元2020,还配置为利用目标预测模式对当前块的第二颜色分量进行预测处理,确定当前块的第二颜色分量的预测值。
在一些实施例中,解码单元2050,还配置为解析码流,确定当前块的第二颜色分量的预测差值;
第二确定单元2010,还配置为根据当前块的第二颜色分量的预测值和当前块的第二颜色分量的预测差值,确定当前块的第二颜色分量的重建值。
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本实施例提供了一种计算机可读存储介质,应用于解码器2000,该计算机可读存储介质存储有计算机程序,所述计算机程序被第二处理器执行时实现前述实施例中任一项所述的方法。
基于解码器2000的组成以及计算机可读存储介质，参见图21，其示出了本申请实施例提供的解码器2000的具体硬件结构示意图。如图21所示，解码器2000可以包括：第二通信接口2110、第二存储器2120和第二处理器2130；各个组件通过第二总线系统2140耦合在一起。可理解，第二总线系统2140用于实现这些组件之间的连接通信。第二总线系统2140除包括数据总线之外，还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见，在图21中将各种总线都标为第二总线系统2140。其中，
第二通信接口2110,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第二存储器2120,用于存储能够在第二处理器2130上运行的计算机程序;
第二处理器2130,用于在运行所述计算机程序时,执行:
确定当前块的参考块;其中,参考块是当前块的相邻块;在参考块的第二颜色分量的预测模式满足第一条件时,根据参考块,确定参考帧内预测模式参数;根据参考帧内预测模式参数,确定当前块的第二颜色分量的预测值。
可选地,作为另一个实施例,第二处理器2130还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
可以理解,第二存储器2120与第一存储器1920的硬件功能类似,第二处理器2130与第一处理器1930的硬件功能类似;这里不再详述。
本实施例提供了一种解码器，在该解码器中，通过对当前块相邻的参考块进行相关参数分析，从而能够确定出non-CCLM的参考帧内预测模式参数；根据该参考帧内预测模式参数，可以提高帧内色度预测模式的完备性和多样性，从而能够提高帧内色度预测的准确性，而且还能够提高解码效率，进而提升解码性能。
在本申请的再一实施例中，参见图22，其示出了本申请实施例提供的一种编解码系统的组成结构示意图。如图22所示，编解码系统2200可以包括编码器2210和解码器2220。
在本申请实施例中,编码器2210可以为前述实施例中任一项所述的编码器,解码器2220可以为前述实施例中任一项所述的解码器。
需要说明的是，在本申请中，术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下，由语句“包括一个……”限定的要素，并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
本申请实施例中，在解码端，确定当前块的参考块；其中，参考块是当前块的相邻块；在参考块的第二颜色分量的预测模式满足第一条件时，根据参考块，确定参考帧内预测模式参数；根据参考帧内预测模式参数，确定当前块的第二颜色分量的预测值。在编码端，确定当前块的参考块；其中，参考块是当前块的相邻块；在参考块的第二颜色分量的预测模式满足第一条件时，根据参考块，确定参考帧内预测模式参数；根据参考帧内预测模式参数，确定当前块的第二颜色分量的预测值；根据当前块的第二颜色分量的预测值，确定当前块的第二颜色分量的预测差值。这样，通过对当前块相邻的参考块进行相关参数分析，从而能够确定出non-CCLM的参考帧内预测模式参数；根据该参考帧内预测模式参数，可以提高帧内色度预测模式的完备性和多样性，从而能够提高帧内色度预测的准确性，而且还能够提高编解码效率，进而提升编解码性能。

Claims (72)

  1. 一种解码方法,应用于解码器,所述方法包括:
    确定当前块的参考块;其中,所述参考块是所述当前块的相邻块;
    在所述参考块的第二颜色分量的预测模式满足第一条件时,根据所述参考块,确定参考帧内预测模式参数;
    根据所述参考帧内预测模式参数,确定所述当前块的第二颜色分量的预测值。
  2. 根据权利要求1所述的方法,其中,所述第一条件包括:所述参考块的第二颜色分量的预测模式是第一预设模式。
  3. 根据权利要求2所述的方法,其中,所述第一预设模式包括下述至少之一:分量间预测模式、IBC模式、MIP模式和Palette模式。
  4. 根据权利要求3所述的方法,其中,所述分量间预测模式是CCLM模式。
  5. 根据权利要求2所述的方法,其中,所述第一预设模式是帧间预测模式。
  6. 根据权利要求1所述的方法,其中,所述第一条件包括:所述参考块的第二颜色分量的预测模式不是第二预设模式。
  7. 根据权利要求6所述的方法,其中,所述第二预设模式是角度预测模式。
  8. 根据权利要求6所述的方法,其中,所述第二预设模式是DC模式或Planar模式。
  9. 根据权利要求1所述的方法,其中,所述第一条件包括:解码码流,确定第一参数;其中,所述第一参数指示根据所述参考块确定参考帧内预测模式参数。
  10. 根据权利要求1所述的方法,其中,所述根据所述参考帧内预测模式参数,确定所述当前块的第二颜色分量的预测值,包括:
    根据所述参考帧内预测模式参数,构建所述当前块的第二颜色分量的模式候选列表;
    根据所述模式候选列表,确定所述当前块的第二颜色分量的预测值。
  11. 根据权利要求1所述的方法,其中,所述根据所述参考块,确定参考帧内预测模式参数,包括:
    根据所述参考块,确定参考像素;
    根据所述参考像素的重建样值,确定第一参数;
    根据所述第一参数,确定参考帧内预测模式参数。
  12. 根据权利要求11所述的方法,其中,所述根据所述参考块,确定参考像素,包括:
    根据所述参考块的相邻区域中的像素,确定所述参考像素;
    其中,所述相邻区域包括下述至少之一:左侧相邻区域、上侧相邻区域和左上侧相邻区域。
  13. 根据权利要求11所述的方法,其中,所述根据所述参考块,确定参考像素,包括:
    根据所述参考块中的像素,确定所述参考像素。
  14. 根据权利要求11所述的方法,其中,所述根据所述参考像素的重建样值,确定第一参数,包括:
    对所述参考像素的重建样值进行梯度计算,确定所述参考像素的水平梯度值和垂直梯度值;
    根据所述参考像素的水平梯度值和垂直梯度值进行角度映射,确定所述参考像素对应的至少一种帧内预测模式;
    根据所述参考像素的水平梯度值和垂直梯度值进行梯度强度计算,确定所述参考像素对应的至少一个梯度强度值;
    根据所述参考像素对应的至少一种帧内预测模式以及至少一个梯度强度值,确定所述第一参数。
  15. 根据权利要求14所述的方法,其中,所述参考像素包括至少一个候选像素,且每一个候选像素对应一种帧内预测模式和一个梯度强度值;
    所述根据所述参考像素的水平梯度值和垂直梯度值进行角度映射,确定所述参考像素对应的至少一种帧内预测模式,包括:
    确定所述候选像素的水平梯度绝对值和垂直梯度绝对值;
    根据所述候选像素的水平梯度绝对值和垂直梯度绝对值进行角度映射,确定所述候选像素的初始模式索引值;
    根据所述候选像素的初始模式索引值,确定所述候选像素对应的一种帧内预测模式。
  16. 根据权利要求15所述的方法,其中,所述根据所述候选像素的初始模式索引值,确定所述候选像素对应的一种帧内预测模式,包括:
    根据预设角度补偿值对所述初始模式索引值进行补偿处理,确定所述候选像素的目标模式索引值;
    根据所述候选像素的目标模式索引值,确定所述候选像素对应的一种帧内预测模式。
  17. 根据权利要求16所述的方法,其中,所述方法还包括:
    确定所述候选像素的目标象限值;
    确定在预设映射关系下所述目标象限值对应的取值;
    将所述预设角度补偿值设置为等于所述取值。
  18. 根据权利要求17所述的方法,其中,所述确定所述候选像素的目标象限值,包括:
    根据所述候选像素的水平梯度值,确定第一符号值;以及根据所述候选像素的垂直梯度值,确定第二符号值;
    根据所述候选像素的水平梯度绝对值和垂直梯度绝对值的比较结果,确定所述候选像素的比较值;
    根据所述比较值、所述第一符号值和所述第二符号值进行象限映射,确定所述候选像素对应的目标象限值。
  19. 根据权利要求15所述的方法,其中,所述根据所述参考像素的水平梯度值和垂直梯度值进行梯度强度计算,确定所述参考像素对应的至少一个梯度强度值,包括:
    根据所述候选像素的水平梯度绝对值和垂直梯度绝对值进行加法计算,确定所述候选像素对应的一个梯度强度值。
  20. 根据权利要求14所述的方法,其中,所述参考像素的重建样值至少包括下述其中一项:
    所述参考像素的第一颜色分量的重建样值;
    所述参考像素的第二颜色分量的重建样值。
  21. 根据权利要求20所述的方法,其中,所述根据所述参考像素的重建样值,确定第一参数,包括:
    在所述参考像素的重建样值为所述参考像素的第一颜色分量的重建样值时,确定所述参考像素的第一颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;
    在所述参考像素的重建样值为所述参考像素的第二颜色分量的重建样值时,确定所述参考像素的第二颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;
    根据所述参考像素的第一颜色分量对应的至少一种帧内预测模式和所述参考像素的第二颜色分量对应的至少一种帧内预测模式,组成第一集合,且所述第一集合包括具有互异特性的至少一种参考帧内预测模式;
    根据所述参考像素的第一颜色分量对应的至少一个梯度强度值和所述参考像素的第二颜色分量对应的至少一个梯度强度值,对归属于同一参考帧内预测模式的梯度强度值进行累加计算,确定所述至少一种参考帧内预测模式对应的梯度强度值;
    根据所述至少一种参考帧内预测模式和所述至少一种参考帧内预测模式对应的梯度强度值,确定所述第一参数。
  22. 根据权利要求21所述的方法,其中,所述根据所述第一参数,确定参考帧内预测模式参数,包括:
    根据所述至少一种参考帧内预测模式对应的梯度强度值,组成第二集合;
    若所述第二集合中的梯度强度值均为零,则根据PLANAR模式确定所述参考帧内预测模式参数;
    若所述第二集合中的梯度强度值存在非零项,则从所述第二集合中确定最大梯度强度值,根据所述最大梯度强度值对应的帧内预测模式确定所述参考帧内预测模式参数。
  23. 根据权利要求22所述的方法,其中,所述方法还包括:
    将所述第二集合中的所述最大梯度强度值赋值为-1,确定第三集合;
    若所述最大梯度强度值对应的帧内预测模式与所述参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则从所述第三集合中确定新的最大梯度强度值,根据所述新的最大梯度强度值对应的帧内预测模式确定所述参考帧内预测模式参数。
  24. 根据权利要求23所述的方法,其中,所述方法还包括:
    若所述新的最大梯度强度值对应的帧内预测模式与所述参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则根据DC模式确定所述参考帧内预测模式参数。
  25. 根据权利要求10所述的方法,其中,所述确定当前块的参考块,包括:
    确定所述当前块邻近的至少一个目标像素;
    根据所述至少一个目标像素各自所在的块,确定至少一个第一目标块;
    根据所述至少一个第一目标块,确定所述当前块的参考块。
  26. 根据权利要求25所述的方法,其中,所述方法还包括:
    基于所述至少一个第一目标块，按照第一预设顺序依次作为所述参考块，确定所述至少一个第一目标块各自的参考帧内预测模式参数；
    根据所述至少一个第一目标块各自的参考帧内预测模式参数,构建所述当前块的第二颜色分量的模式候选列表。
  27. 根据权利要求26所述的方法,其中,所述确定当前块的参考块,还包括:
    确定所述当前块的同位置的第一颜色分量区域;
    从所述第一颜色分量区域划分的至少一个块中,确定处于预设位置的至少一个第二目标块;
    根据所述至少一个第二目标块,确定所述当前块的参考块。
  28. 根据权利要求27所述的方法,其中,所述方法还包括:
    基于所述至少一个第二目标块的预设顺序,依次确定所述至少一个第二目标块各自的第一颜色分量预测模式参数;
    根据所述至少一个第一目标块各自的参考帧内预测模式参数和所述至少一个第二目标块各自的第一颜色分量预测模式参数,构建所述当前块的第二颜色分量的模式候选列表。
  29. 根据权利要求28所述的方法,其中,所述方法还包括:
    确定所述模式候选列表中的前两个预测模式;
    对所述前两个预测模式的模式索引序号进行偏移操作,确定至少一个新的帧内预测模式;
    将所述至少一个新的帧内预测模式放置于所述模式候选列表中。
  30. 根据权利要求28所述的方法,其中,所述方法还包括:
    对所述模式候选列表中的预测模式进行顺序调整。
  31. 根据权利要求10所述的方法,其中,所述根据所述模式候选列表,确定所述当前块的第二颜色分量的预测值,包括:
    解析码流,确定所述当前块的第二颜色分量的模式索引序号;
    根据所述模式候选列表,确定所述模式索引序号对应的目标预测模式;
    利用所述目标预测模式对所述当前块的第二颜色分量进行预测处理,确定所述当前块的第二颜色分量的预测值。
  32. 根据权利要求31所述的方法,其中,所述方法还包括:
    解析码流,确定所述当前块的第二颜色分量的预测差值;
    根据所述当前块的第二颜色分量的预测值和所述当前块的第二颜色分量的预测差值,确定所述当前块的第二颜色分量的重建值。
  33. 一种编码方法,应用于编码器,所述方法包括:
    确定当前块的参考块;其中,所述参考块是所述当前块的相邻块;
    在所述参考块的第二颜色分量的预测模式满足第一条件时,根据所述参考块,确定参考帧内预测模式参数;
    根据所述参考帧内预测模式参数,确定所述当前块的第二颜色分量的预测值;
    根据所述当前块的第二颜色分量的预测值,确定所述当前块的第二颜色分量的预测差值。
  34. 根据权利要求33所述的方法,其中,所述第一条件包括:所述参考块的第二颜色分量的预测模式是第一预设模式。
  35. 根据权利要求34所述的方法,其中,所述第一预设模式包括下述至少之一:分量间预测模式、IBC模式、MIP模式和Palette模式。
  36. 根据权利要求35所述的方法,其中,所述分量间预测模式是CCLM模式。
  37. 根据权利要求34所述的方法,其中,所述第一预设模式是帧间预测模式。
  38. 根据权利要求33所述的方法,其中,所述第一条件包括:所述参考块的第二颜色分量的预测模式不是第二预设模式。
  39. 根据权利要求38所述的方法,其中,所述第二预设模式是角度预测模式。
  40. 根据权利要求38所述的方法,其中,所述第二预设模式是DC模式或Planar模式。
  41. 根据权利要求33所述的方法,其中,所述第一条件包括确定第一参数,所述第一参数指示根据所述参考块确定参考帧内预测模式参数;
    其中,所述方法还包括:
    对所述第一参数进行编码,将所得到的编码比特写入码流。
  42. 根据权利要求33所述的方法,其中,所述根据所述参考帧内预测模式参数,确定所述当前块的第二颜色分量的预测值,包括:
    根据所述参考帧内预测模式参数,构建所述当前块的第二颜色分量的模式候选列表;
    根据所述模式候选列表,确定所述当前块的第二颜色分量的预测值。
  43. 根据权利要求33所述的方法,其中,所述根据所述参考块,确定参考帧内预测模式参数,包括:
    根据所述参考块,确定参考像素;
    根据所述参考像素的重建样值,确定第一参数;
    根据所述第一参数,确定参考帧内预测模式参数。
  44. 根据权利要求43所述的方法,其中,所述根据所述参考块,确定参考像素,包括:
    根据所述参考块的相邻区域中的像素,确定所述参考像素;
    其中,所述相邻区域包括下述至少之一:左侧相邻区域、上侧相邻区域和左上侧相邻区域。
  45. 根据权利要求43所述的方法,其中,所述根据所述参考块,确定参考像素,包括:
    根据所述参考块中的像素,确定所述参考像素。
  46. 根据权利要求43所述的方法,其中,所述根据所述参考像素的重建样值,确定第一参数,包括:
    对所述参考像素的重建样值进行梯度计算,确定所述参考像素的水平梯度值和垂直梯度值;
    根据所述参考像素的水平梯度值和垂直梯度值进行角度映射,确定所述参考像素对应的至少一种帧内预测模式;
    根据所述参考像素的水平梯度值和垂直梯度值进行梯度强度计算,确定所述参考像素对应的至少一个梯度强度值;
    根据所述参考像素对应的至少一种帧内预测模式以及至少一个梯度强度值,确定所述第一参数。
  47. 根据权利要求46所述的方法,其中,所述参考像素包括至少一个候选像素,且每一个候选像素对应一种帧内预测模式和一个梯度强度值;
    所述根据所述参考像素的水平梯度值和垂直梯度值进行角度映射,确定所述参考像素对应的至少一种帧内预测模式,包括:
    确定所述候选像素的水平梯度绝对值和垂直梯度绝对值;
    根据所述候选像素的水平梯度绝对值和垂直梯度绝对值进行角度映射,确定所述候选像素的初始模式索引值;
    根据所述候选像素的初始模式索引值,确定所述候选像素对应的一种帧内预测模式。
  48. 根据权利要求47所述的方法,其中,所述根据所述候选像素的初始模式索引值,确定所述候选像素对应的一种帧内预测模式,包括:
    根据预设角度补偿值对所述初始模式索引值进行补偿处理,确定所述候选像素的目标模式索引值;
    根据所述候选像素的目标模式索引值,确定所述候选像素对应的一种帧内预测模式。
  49. 根据权利要求48所述的方法,其中,所述方法还包括:
    确定所述候选像素的目标象限值;
    确定在预设映射关系下所述目标象限值对应的取值;
    将所述预设角度补偿值设置为等于所述取值。
  50. 根据权利要求49所述的方法,其中,所述确定所述候选像素的目标象限值,包括:
    根据所述候选像素的水平梯度值,确定第一符号值;以及根据所述候选像素的垂直梯度值,确定第二符号值;
    根据所述候选像素的水平梯度绝对值和垂直梯度绝对值的比较结果,确定所述候选像素的比较值;
    根据所述比较值、所述第一符号值和所述第二符号值进行象限映射,确定所述候选像素对应的目标象限值。
  51. 根据权利要求47所述的方法,其中,所述根据所述参考像素的水平梯度值和垂直梯度值进行梯度强度计算,确定所述参考像素对应的至少一个梯度强度值,包括:
    根据所述候选像素的水平梯度绝对值和垂直梯度绝对值进行加法计算,确定所述候选像素对应的一个梯度强度值。
  52. 根据权利要求46所述的方法,其中,所述参考像素的重建样值至少包括下述其中一项:
    所述参考像素的第一颜色分量的重建样值;
    所述参考像素的第二颜色分量的重建样值。
  53. 根据权利要求52所述的方法,其中,所述根据所述参考像素的重建样值,确定第一参数,包括:
    在所述参考像素的重建样值为所述参考像素的第一颜色分量的重建样值时,确定所述参考像素的第一颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值;
    在所述参考像素的重建样值为所述参考像素的第二颜色分量的重建样值时，确定所述参考像素的第二颜色分量对应的至少一种帧内预测模式以及至少一个梯度强度值；
    根据所述参考像素的第一颜色分量对应的至少一种帧内预测模式和所述参考像素的第二颜色分量对应的至少一种帧内预测模式,组成第一集合,且所述第一集合包括具有互异特性的至少一种参考帧内预测模式;
    根据所述参考像素的第一颜色分量对应的至少一个梯度强度值和所述参考像素的第二颜色分量对应的至少一个梯度强度值,对归属于同一参考帧内预测模式的梯度强度值进行累加计算,确定所述至少一种参考帧内预测模式对应的梯度强度值;
    根据所述至少一种参考帧内预测模式和所述至少一种参考帧内预测模式对应的梯度强度值,确定所述第一参数。
  54. 根据权利要求53所述的方法,其中,所述根据所述第一参数,确定参考帧内预测模式参数,包括:
    根据所述至少一种参考帧内预测模式对应的梯度强度值,组成第二集合;
    若所述第二集合中的梯度强度值均为零,则根据PLANAR模式确定所述参考帧内预测模式参数;
    若所述第二集合中的梯度强度值存在非零项,则从所述第二集合中确定最大梯度强度值,根据所述最大梯度强度值对应的帧内预测模式确定所述参考帧内预测模式参数。
  55. 根据权利要求54所述的方法,其中,所述方法还包括:
    将所述第二集合中的所述最大梯度强度值赋值为-1,确定第三集合;
    若所述最大梯度强度值对应的帧内预测模式与所述参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则从所述第三集合中确定新的最大梯度强度值,根据所述新的最大梯度强度值对应的帧内预测模式确定所述参考帧内预测模式参数。
  56. 根据权利要求55所述的方法,其中,所述方法还包括:
    若所述新的最大梯度强度值对应的帧内预测模式与所述参考块同位置的第一颜色分量块的第一颜色分量预测模式相同,则根据DC模式确定所述参考帧内预测模式参数。
  57. 根据权利要求42所述的方法,其中,所述确定当前块的参考块,包括:
    确定所述当前块邻近的至少一个目标像素;
    根据所述至少一个目标像素各自所在的块,确定至少一个第一目标块;
    根据所述至少一个第一目标块,确定所述当前块的参考块。
  58. 根据权利要求57所述的方法,其中,所述方法还包括:
    基于所述至少一个第一目标块,按照第一预设顺序依次作为所述参考块,确定所述至少一个第一目标块各自的参考帧内预测模式参数;
    根据所述至少一个第一目标块各自的参考帧内预测模式参数,构建所述当前块的第二颜色分量的模式候选列表。
  59. 根据权利要求58所述的方法,其中,所述确定当前块的参考块,还包括:
    确定所述当前块的同位置的第一颜色分量区域;
    从所述第一颜色分量区域划分的至少一个块中,确定处于预设位置的至少一个第二目标块;
    根据所述至少一个第二目标块,确定所述当前块的参考块。
  60. 根据权利要求59所述的方法,其中,所述方法还包括:
    基于所述至少一个第二目标块的预设顺序,依次确定所述至少一个第二目标块各自的第一颜色分量预测模式参数;
    根据所述至少一个第一目标块各自的参考帧内预测模式参数和所述至少一个第二目标块各自的第一颜色分量预测模式参数,构建所述当前块的第二颜色分量的模式候选列表。
  61. 根据权利要求60所述的方法,其中,所述方法还包括:
    确定所述模式候选列表中的前两个预测模式;
    对所述前两个预测模式的模式索引序号进行偏移操作,确定至少一个新的帧内预测模式;
    将所述至少一个新的帧内预测模式放置于所述模式候选列表中。
  62. 根据权利要求60所述的方法,其中,所述方法还包括:
    对所述模式候选列表中的预测模式进行顺序调整。
  63. 根据权利要求42所述的方法,其中,所述根据所述模式候选列表,确定所述当前块的第二颜色分量的预测值,包括:
    根据所述模式候选列表,确定所述当前块的第二颜色分量的目标预测模式;
    利用所述目标预测模式对所述当前块的第二颜色分量进行预测处理,确定所述当前块的第二颜色分量的预测值。
  64. 根据权利要求63所述的方法,其中,所述根据所述模式候选列表,确定所述当前块的第二颜色分量的目标预测模式,包括:
    根据所述模式候选列表中的至少一种候选预测模式对所述当前块的第二颜色分量进行预编码,确定所述至少一种候选预测模式各自的预编码结果;
    根据所述至少一种候选预测模式各自的预编码结果,确定所述至少一种候选预测模式各自的率失真代价值;
    从所述至少一种候选预测模式各自的率失真代价值中确定最小率失真代价值,将所述最小率失真代价值对应的候选预测模式确定为所述当前块的第二颜色分量的目标预测模式。
  65. 根据权利要求63所述的方法,其中,所述方法还包括:
    根据所述模式候选列表,确定所述目标预测模式对应的模式索引序号;
    对所述模式索引序号进行编码，将所得到的编码比特写入码流。
  66. 根据权利要求33至65任一项所述的方法,其中,所述根据所述当前块的第二颜色分量的预测值,确定所述当前块的第二颜色分量的预测差值,包括:
    根据所述当前块的第二颜色分量的原始值和所述当前块的第二颜色分量的预测值,确定所述当前块的第二颜色分量的预测差值;
    其中,所述方法还包括:
    对所述当前块的第二颜色分量的预测差值进行编码，将所得到的编码比特写入码流。
  67. 一种码流,其中,所述码流是根据待编码信息进行比特编码生成的;其中,待编码信息包括下述至少一项:
    当前块的第二颜色分量的预测差值、模式索引序号和第一参数。
  68. 一种编码器,包括第一确定单元和第一预测单元;其中,
    所述第一确定单元,配置为确定当前块的参考块;其中,所述参考块是所述当前块的相邻块;以及在所述参考块的第二颜色分量的预测模式满足第一条件时,根据所述参考块,确定参考帧内预测模式参数;
    所述第一预测单元,配置为根据所述参考帧内预测模式参数,确定所述当前块的第二颜色分量的预测值;
    所述第一确定单元,还配置为根据所述当前块的第二颜色分量的预测值,确定所述当前块的第二颜色分量的预测差值。
  69. 一种编码器,所述编码器包括第一存储器和第一处理器;其中,
    所述第一存储器,用于存储能够在所述第一处理器上运行的计算机程序;
    所述第一处理器,用于在运行所述计算机程序时,执行如权利要求33至66任一项所述的方法。
  70. 一种解码器,包括第二确定单元和第二预测单元;其中,
    所述第二确定单元,配置为确定当前块的参考块;其中,所述参考块是所述当前块的相邻块;以及在所述参考块的第二颜色分量的预测模式满足第一条件时,根据所述参考块,确定参考帧内预测模式参数;
    所述第二预测单元,配置为根据所述参考帧内预测模式参数,确定所述当前块的第二颜色分量的预测值。
  71. 一种解码器,所述解码器包括第二存储器和第二处理器;其中,
    所述第二存储器,用于存储能够在所述第二处理器上运行的计算机程序;
    所述第二处理器,用于在运行所述计算机程序时,执行如权利要求1至32任一项所述的方法。
  72. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序被执行时实现如权利要求1至32任一项所述的方法、或者实现如权利要求33至66任一项所述的方法。
PCT/CN2022/130727 2022-11-08 2022-11-08 编解码方法、码流、编码器、解码器以及存储介质 WO2024098263A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/130727 WO2024098263A1 (zh) 2022-11-08 2022-11-08 编解码方法、码流、编码器、解码器以及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/130727 WO2024098263A1 (zh) 2022-11-08 2022-11-08 编解码方法、码流、编码器、解码器以及存储介质

Publications (1)

Publication Number Publication Date
WO2024098263A1 true WO2024098263A1 (zh) 2024-05-16

Family

ID=91031755

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/130727 WO2024098263A1 (zh) 2022-11-08 2022-11-08 编解码方法、码流、编码器、解码器以及存储介质

Country Status (1)

Country Link
WO (1) WO2024098263A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020251420A2 (en) * 2019-10-05 2020-12-17 Huawei Technologies Co., Ltd. Removing blocking artifacts inside coding unit predicted by intra block copy
KR20210003604A (ko) * 2019-07-02 2021-01-12 한국전자통신연구원 화면내 예측 방법 및 장치
CN114145017A (zh) * 2019-06-13 2022-03-04 Lg 电子株式会社 基于帧内预测模式转换的图像编码/解码方法和设备,以及发送比特流的方法

Similar Documents

Publication Publication Date Title
US10440396B2 (en) Filter information sharing among color components
TWI701942B (zh) 用於視訊寫碼中可適性顏色轉換之量化參數推導及偏移
JP7369124B2 (ja) 結合されたピクセル/変換ベースの量子化を用いたビデオコーディングのための量子化パラメータ制御
WO2020262396A1 (en) Systems and methods for reducing a reconstruction error in video coding based on a cross-component correlation
WO2021004152A1 (zh) 图像分量的预测方法、编码器、解码器以及存储介质
US11843781B2 (en) Encoding method, decoding method, and decoder
US20200288126A1 (en) Reshaping filter average calculation for video coding
CN114007067B (zh) 对视频信号进行解码的方法、设备和介质
CN116418993A (zh) 视频编码的方法、装置和介质
WO2021049126A1 (en) Systems and methods for reducing a reconstruction error in video coding based on a cross-component correlation
TW202112135A (zh) 用於視訊寫碼之色度內預測單元
EP3959878A1 (en) Chroma coding enhancement in cross-component correlation
CN113068026B (zh) 编码预测方法、装置及计算机存储介质
US20240146930A1 (en) Video or image coding based on luma mapping and chroma scaling
WO2021070427A1 (en) Systems and methods for reducing a reconstruction error in video coding based on a cross-component correlation
WO2024098263A1 (zh) 编解码方法、码流、编码器、解码器以及存储介质
CN111758255A (zh) 用于视频编解码的位置相关空间变化变换
WO2024077569A1 (zh) 编解码方法、码流、编码器、解码器以及存储介质
CN114598873B (zh) 量化参数的解码方法和装置
WO2024119521A1 (zh) 编解码方法、码流、编码器、解码器以及存储介质
WO2023197229A1 (zh) 视频编解码方法、装置、设备、***及存储介质
TWI843809B (zh) 用於視訊寫碼中具有運動向量差之合併模式之信令傳輸
CN116962684A (zh) 视频编解码方法与***、及视频编码器与视频解码器
WO2023154359A1 (en) Methods and devices for multi-hypothesis-based prediction
TW202404358A (zh) 編解碼方法、裝置、編碼設備、解碼設備以及儲存媒介

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22964738

Country of ref document: EP

Kind code of ref document: A1