WO2020192180A1 - Image component prediction method, encoder, decoder and computer storage medium - Google Patents

Image component prediction method, encoder, decoder and computer storage medium

Info

Publication number
WO2020192180A1
WO2020192180A1 (PCT/CN2019/124114)
Authority
WO
WIPO (PCT)
Prior art keywords: value, component, chrominance component, predicted value, chrominance
Prior art date
Application number
PCT/CN2019/124114
Other languages
English (en)
French (fr)
Inventor
Junyan HUO
Shuai WAN
Yanzhuo MA
Wei ZHANG
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to CN201980091480.8A (CN113412621A)
Priority to CN202111093518.0A (CN113840144B)
Priority to JP2021556939A (JP2022528333A)
Priority to KR1020217034441A (KR20210141683A)
Priority to EP19920890.1A (EP3930329A4)
Publication of WO2020192180A1
Priority to US17/477,193 (US20220007042A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/176: characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/186: characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 19/196: characterised by the adaptation method, being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/59: using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution

Definitions

  • The embodiments of the present application relate to intra-frame prediction technology in the field of video coding, and in particular to a prediction method for image components, an encoder, a decoder, and a computer storage medium.
  • CCLM: cross-component linear model prediction.
  • In the related art, the luminance component used for prediction is first down-sampled to the same resolution as the chrominance component to be predicted, and the prediction is then performed at that common resolution, so that the luminance component is used to predict the chrominance component.
  • However, because the luminance component has a rich texture while the chrominance component is relatively flat, using the luminance component to predict the chrominance component causes a large deviation between the predicted chrominance component and the true chrominance value, lowering the accuracy of the predicted value and thereby affecting the efficiency of encoding and decoding.
  • The embodiments of the present application provide a prediction method for image components, an encoder, a decoder, and a computer storage medium, which can implement predictive coding from the chrominance component to the luminance component, improve the accuracy of luminance component prediction, and make the predicted value of the luminance component closer to the real pixel value of the luminance component.
  • In a first aspect, an embodiment of the present application provides a method for predicting image components, the method including: in the encoding of an image component, encoding the chrominance component; and obtaining the predicted value of the luminance component according to the encoded chrominance component.
  • In a second aspect, an embodiment of the present application provides a method for predicting image components, the method including: in the decoding of an image component, decoding the chrominance component; and obtaining the predicted value of the luminance component according to the decoded chrominance component.
  • In a third aspect, an embodiment of the present application provides an encoder, the encoder including: an encoding module configured to encode the chrominance component in the encoding of the image component; and a first acquiring module configured to acquire the predicted value of the luminance component according to the encoded chrominance component.
  • In a fourth aspect, an embodiment of the present application provides a decoder, the decoder including: a decoding module configured to decode the chrominance component in the decoding of the image component; and a second acquiring module configured to acquire the predicted value of the luminance component according to the decoded chrominance component.
  • In a fifth aspect, an embodiment of the present application provides an encoder, and the encoder includes:
  • In a sixth aspect, an embodiment of the present application provides a decoder, and the decoder includes:
  • An embodiment of the present application provides a computer-readable storage medium in which executable instructions are stored; when the executable instructions are executed by one or more processors, the processors execute the method for predicting image components described in one or more of the above embodiments.
  • The embodiments of the present application provide a method for predicting image components, an encoder, a decoder, and a computer storage medium. The method includes: in the encoding of the image component, encoding the chrominance component, and obtaining the predicted value of the luminance component according to the encoded chrominance component. That is to say, in the embodiments of the present application, the chrominance component is encoded and decoded first, and the luminance component is then predicted from the chrominance component obtained through encoding and decoding. In other words, the relatively flat chrominance component is encoded and decoded first, and the luminance component with rich texture is then predicted based on the chrominance component obtained by encoding and decoding, which can improve the accuracy of luminance component prediction, so that the predicted value of the luminance component is closer to the real pixel value of the luminance component.
  • FIG. 1 is a schematic flowchart of an optional image component prediction method provided by an embodiment of the application;
  • FIG. 2 is a schematic structural diagram of a video encoding system;
  • FIG. 3 is a schematic structural diagram of a video decoding system;
  • FIG. 4 is a schematic flowchart of another optional image component prediction method provided by an embodiment of the application;
  • FIG. 5 is a schematic structural diagram of an optional encoder proposed in an embodiment of the application;
  • FIG. 6 is a schematic structural diagram of an optional decoder proposed in an embodiment of the application;
  • FIG. 7 is a schematic structural diagram of another optional encoder proposed in an embodiment of the application;
  • FIG. 8 is a schematic structural diagram of another optional decoder proposed in an embodiment of the application.
  • The first image component, the second image component, and the third image component are generally used to characterize image blocks; among them, the three image components include one luminance component and two chrominance components.
  • the luminance component is usually represented by the symbol Y
  • the chrominance component is usually represented by the symbols Cb and Cr, where Cb is the blue chrominance component, and Cr is the red chrominance component.
  • In the embodiments of the present application, the first image component, the second image component, and the third image component may be the luminance component Y, the blue chrominance component Cb, and the red chrominance component Cr; for example, the first image component may be the luminance component Y, the second image component may be the red chrominance component Cr, and the third image component may be the blue chrominance component Cb, which is not specifically limited in the embodiments of the present application.
  • The commonly used sampling format in which the luminance component and the chrominance component are separately represented is also referred to as the YCbCr format, where the YCbCr format may include the 4:4:4 format, the 4:2:2 format, and the 4:2:0 format.
  • If the video image adopts the YCbCr 4:2:0 format, and the luminance component of the video image is a current image block of size 2N×2N, then the corresponding chrominance component is a current image block of size N×N, where N is the side length of the current image block.
  • the 4:2:0 format will be used as an example for description, but the technical solutions of the embodiments of the present application are also applicable to other sampling formats.
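To make the sampling relationships above concrete, here is a minimal sketch (an illustrative helper, not part of the application) that maps a luma block size to the corresponding chroma block size for the three YCbCr formats mentioned:

```python
def chroma_block_size(luma_w: int, luma_h: int, fmt: str = "4:2:0") -> tuple:
    """Return the chroma block dimensions for a given luma block and sampling format."""
    if fmt == "4:4:4":   # chroma at full resolution
        return luma_w, luma_h
    if fmt == "4:2:2":   # chroma halved horizontally only
        return luma_w // 2, luma_h
    if fmt == "4:2:0":   # chroma halved in both directions
        return luma_w // 2, luma_h // 2
    raise ValueError(f"unknown sampling format: {fmt}")

# A 2Nx2N luma block (here N = 8) corresponds to an NxN chroma block under 4:2:0.
print(chroma_block_size(16, 16))  # (8, 8)
```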
  • CCLM: Cross-component Linear Model Prediction.
  • In the related-art CCLM, the luminance component of the current image block is predicted first, and the chrominance component is then predicted from the luminance component. Since the luminance component has rich texture and the chrominance component is relatively flat, using the luminance component to predict the chrominance component causes a large deviation between the predicted chrominance component and the true chrominance value, resulting in lower accuracy of the predicted value and thereby affecting the efficiency of coding and decoding.
  • FIG. 1 is a schematic flowchart of an optional method for predicting image components provided by an embodiment of this application. As shown in FIG. 1, the method may include:
  • S101: In the encoding of an image component, encode the chrominance component.
  • S102: Obtain a predicted value of the luminance component according to the encoded chrominance component.
  • FIG. 2 is a schematic structural diagram of a video encoding system.
  • The video encoding system 200 includes a transform and quantization unit 201, an intra-frame estimation unit 202, an intra-frame prediction unit 203, a motion compensation unit 204, a motion estimation unit 205, an inverse transform and inverse quantization unit 206, a filter control analysis unit 207, a filtering unit 208, an encoding unit 209, a decoded image buffer unit 210, and so on.
  • Among them, the filtering unit 208 can implement deblocking filtering and Sample Adaptive Offset (SAO) filtering, and the encoding unit 209 can implement header information coding and Context-based Adaptive Binary Arithmetic Coding (CABAC).
  • A video coding block can be obtained by dividing a coding tree unit (CTU); then the residual pixel information obtained after intra-frame or inter-frame prediction is transformed by the transform and quantization unit 201, including transforming the residual information from the pixel domain to the transform domain, and the resulting transform coefficients are quantized to further reduce the bit rate.
  • The intra-frame estimation unit 202 and the intra-frame prediction unit 203 are used to perform intra prediction on the video coding block; specifically, they are used to determine the intra prediction mode to be used to encode the video coding block.
  • The motion compensation unit 204 and the motion estimation unit 205 are used to perform inter-frame prediction coding of the received video coding block with respect to one or more blocks in one or more reference frames to provide temporal prediction information; the motion estimation performed by the motion estimation unit 205 is the process of generating a motion vector, and the motion vector can estimate the motion of the video coding block.
  • The context content can be based on adjacent coding blocks and can be used to encode the information indicating the determined intra prediction mode, outputting the code stream of the video signal; the decoded image buffer unit 210 is used to store reconstructed video coding blocks for prediction reference. As the video image encoding progresses, new reconstructed video coding blocks are continuously generated, and these reconstructed video coding blocks are all stored in the decoded image buffer unit 210.
  • FIG. 3 is a schematic structural diagram of a video decoding system.
  • The video decoding system 300 includes a decoding unit 301, an inverse transform and inverse quantization unit 302, an intra prediction unit 303, a motion compensation unit 304, a filtering unit 305, and a decoded image buffer unit.
  • After the video signal is encoded, the code stream of the video signal is output; the code stream is input into the video decoding system 300 and first passes through the decoding unit 301 to obtain the decoded transform coefficients.
  • The inverse transform and inverse quantization unit 302 processes these transform coefficients to generate a residual block in the pixel domain.
  • The intra prediction unit 303 can be used to generate the prediction data of the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture.
  • The motion compensation unit 304 determines the prediction information for the video decoding block by analyzing the motion vector and other associated syntax elements, and uses the prediction information to generate the predictive block of the video decoding block being decoded.
  • A decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 302 and the corresponding predictive block generated by the intra prediction unit 303 or the motion compensation unit 304; the filtering unit 305 is used to remove blocking artifacts from the decoded video signal to improve the video quality.
  • S101 and S102 in the embodiment of this application are mainly applied to the intra prediction unit 203 shown in FIG. 2 and the intra prediction unit 303 shown in FIG. 3; that is, the embodiments of this application can act on both the encoder and the decoder at the same time, which is not specifically limited in the embodiments of the present application.
  • It should be noted that, in the encoding of image components, one chrominance component can be encoded first and then used to predict the luminance component; or one chrominance component can be encoded first and then used to predict the other chrominance component; or two chrominance components can be encoded first, and then one of them, or both, can be used to predict the luminance component. The embodiments of the present application do not specifically limit this.
  • S102 may include:
  • performing prediction based on a first chrominance component and a second chrominance component in the encoded chrominance components to obtain the predicted value of the luminance component; where, when the first chrominance component is the blue chrominance component, the second chrominance component is the red chrominance component, and when the first chrominance component is the red chrominance component, the second chrominance component is the blue chrominance component.
  • That is, the first chrominance component and the second chrominance component in the encoded chrominance components are used to predict the predicted value of the luminance component.
  • For example, if the first chrominance component is Cb, the second chrominance component is Cr, and the luminance component is Y, then Y can be predicted from Cb and Cr; alternatively, if the first chrominance component is Cr and the second chrominance component is Cb, Y can likewise be predicted from Cb and Cr.
  • Optionally, performing prediction based on the first chrominance component in the encoded chrominance components and the second chrominance component in the encoded chrominance components to obtain the predicted value of the luminance component includes the following steps, by which the predicted value of the luminance component is determined.
  • Among them, the current image block is the current image block to be encoded; the first chrominance component of the adjacent image blocks of the current image block is the adjacent reference value of the first chrominance component, the second chrominance component of the adjacent image blocks is the adjacent reference value of the second chrominance component, and the luminance component of the adjacent image blocks is the adjacent reference value of the luminance component. It should be noted that the adjacent image blocks of the current image block are the row of image blocks above and the column of image blocks to the left of the current image block.
  • Specifically, the first predicted value can be predicted based on the reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component, and the adjacent reference value of the luminance component; and the second predicted value can be predicted based on the reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component.
  • The predicted value of the luminance component of the current image block is then obtained according to the first predicted value and the second predicted value; in this way, one predicted value is predicted from each chrominance component, and finally the two predicted values are fused to obtain the predicted value of the luminance component of the current image block.
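The two-predictor fusion step described above can be sketched as follows; the text does not fix the fusion rule at this point, so a rounded average is assumed purely for illustration, and the function and variable names are hypothetical:

```python
def fuse_predictions(pred1, pred2):
    """Fuse two luma predictions (one derived from each chroma component) into
    a single predicted value per sample. A rounded average is assumed here;
    the application leaves the exact fusion rule open at this point."""
    return [(p1 + p2 + 1) >> 1 for p1, p2 in zip(pred1, pred2)]

pred_from_cb = [100, 102, 98]   # first predicted value (from the first chroma component)
pred_from_cr = [104, 100, 96]   # second predicted value (from the second chroma component)
print(fuse_predictions(pred_from_cb, pred_from_cr))  # [102, 101, 97]
```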
  • Optionally, performing prediction based on the reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component, and the adjacent reference value of the luminance component to obtain the first predicted value of the luminance component includes: calling a prediction model to obtain the first predicted value.
  • The image component prediction model used above can be a linear model or a nonlinear model for predicting the luminance component, which is not specifically limited in the embodiment of the present application.
  • For example, next-generation video coding standard encoders, such as the H.266/VVC early test model (Joint Exploration Model, JEM) or the VVC test model (VVC Test Model, VTM), use a cross-component linear model prediction mode.
  • In the embodiment of the present application, the reconstructed chrominance value of the same coding block is used to construct the predicted value of the luminance component, as shown in formula (1):

        Pred_L[i,j] = α × Rec_C[i,j] + β    (1)

  • where i and j represent the position coordinates of a sampling point in the current image block, i in the horizontal direction and j in the vertical direction; Pred_L[i,j] represents the predicted value of the luminance component of the sampling point with position coordinates [i,j] in the current image block; and Rec_C[i,j] represents the reconstructed value of the chrominance component of the sampling point with position coordinates [i,j] in the current image block.
  • α and β are the scale factors of the linear model, which can be derived by minimizing the regression error between the adjacent reference values of the chrominance component and the adjacent reference values of the luminance component, as shown in formula (2):

        α = ( N × Σ(C(n) × L(n)) - ΣC(n) × ΣL(n) ) / ( N × Σ(C(n) × C(n)) - ΣC(n) × ΣC(n) )
        β = ( ΣL(n) - α × ΣC(n) ) / N    (2)

  • where L(n) represents the adjacent reference values of the luminance component (for example, on the left side and the upper side), C(n) represents the adjacent reference values of the chrominance component (for example, on the left side and the upper side), and N is the number of adjacent reference values; α and β are also calculated by formula (2) in the decoder.
  • In this way, α and β can be calculated for the current image block according to formula (2); then the reconstructed value of the chrominance component of the current image block is substituted into the linear model of formula (1), and the predicted value of the luminance component of the current image block can be calculated.
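The least-squares derivation of α and β in formula (2) and the prediction in formula (1) can be sketched as below; the function names and toy reference values are illustrative, and floating-point arithmetic is used for clarity where a real codec would use integer arithmetic:

```python
def derive_linear_model(c_ref, l_ref):
    """Derive scale factor alpha and offset beta by least-squares regression of
    the neighbouring luma references on the neighbouring chroma references
    (here chroma predicts luma, the reverse of classic CCLM)."""
    n = len(c_ref)
    sum_c = sum(c_ref)
    sum_l = sum(l_ref)
    sum_cl = sum(c * l for c, l in zip(c_ref, l_ref))
    sum_cc = sum(c * c for c in c_ref)
    denom = n * sum_cc - sum_c * sum_c
    alpha = (n * sum_cl - sum_c * sum_l) / denom if denom else 0.0
    beta = (sum_l - alpha * sum_c) / n
    return alpha, beta

def predict_luma(rec_c, alpha, beta):
    """Apply formula (1): Pred_L[i,j] = alpha * Rec_C[i,j] + beta."""
    return [alpha * c + beta for c in rec_c]

# Toy neighbouring references: luma happens to follow 2*chroma + 10 exactly.
alpha, beta = derive_linear_model([20, 30, 40, 50], [50, 70, 90, 110])
print(alpha, beta)                           # 2.0 10.0
print(predict_luma([25, 35], alpha, beta))   # [60.0, 80.0]
```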
  • In addition to a linear model, the prediction model can also adopt a nonlinear model, and the embodiment of the present application also proposes a nonlinear model calculation method.
  • In the nonlinear model calculation method, the correlation and similarity between the chrominance component reconstruction values of the current image block and the adjacent reference values of the chrominance component are taken into account when deriving α and β of the existing linear model, so that the resulting model fits the current chrominance reconstruction values more closely, further improving the predicted value of the luminance component of the current image block.
  • In the nonlinear model, the adjacent reference values of the chrominance component and of the luminance component of the current image block are divided into two groups, and each group can be used separately as a training set for deriving the linear model parameters; that is, a set of parameters can be derived for each group. This overcomes the defect that the linear model deviates from the expected model when the adjacent reference values of the chrominance component deviate greatly from the parameters corresponding to the current image block, and when the luminance component of the current image block is predicted according to such grouped models, the prediction accuracy of the predicted value of the luminance component can be greatly improved.
  • Specifically, the adjacent reference values can be divided into two groups by setting a threshold, and the nonlinear model can be established according to the two groups of adjacent reference values and the reconstruction values.
  • Among them, the threshold is the classification basis for the adjacent reference values of the chrominance component and of the luminance component of the current image block, as well as the classification basis for the reconstructed values of the chrominance component of the current image block. The threshold indicates the setting value on which multiple calculation models are established, and its size is related to the chrominance component reconstruction values of all sampling points of the current image block. Specifically, it can be obtained by calculating the average of the chrominance component reconstruction values of all sampling points of the current image block, or by calculating their median value, which is not specifically limited in this embodiment of the application.
  • For example, the mean value Mean can be calculated from the reconstructed values of the chrominance components of all sampling points of the current image block, according to formula (3):

        Mean = Σ Rec_C[i,j] / M    (3)

  • where Mean represents the average of the reconstructed values of the chrominance components of all sampling points of the current image block, Σ Rec_C[i,j] represents the sum of those reconstructed values, and M represents the number of sampling points of the current image block.
  • In one implementation, the calculated mean value Mean is directly used as the threshold, and two calculation models can be established by using this threshold; however, the embodiment of the present application is not limited to establishing only two calculation models.
  • That is, the average is calculated from the sum Σ Rec_C[i,j] of the reconstructed values of all sampling points of the current image block to obtain the mean value Mean; if two calculation models are established, Mean can be directly used as the threshold, and the adjacent reference values of the chrominance component of the current image block can be divided into two parts, indicating that two calculation models can be established subsequently.
  • Alternatively, (the minimum chrominance component reconstruction value + Mean + 1) >> 1 is used as the first threshold, and (the maximum chrominance component reconstruction value + Mean + 1) >> 1 is used as the second threshold; according to these two thresholds, the adjacent reference values of the chrominance component of the current image block can be divided into three parts, indicating that three calculation models can be established subsequently. In the following, establishing two calculation models with the calculated mean value as the threshold is described as an example.
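The threshold constructions described above can be sketched as follows; the helper names are hypothetical, and using an integer (floored) mean is an assumption for illustration, since the text does not fix the rounding of formula (3):

```python
def mean_threshold(rec_c):
    """Formula (3): average of the chroma reconstruction values, used as the
    single threshold when two calculation models are established.
    Integer division is an assumption here."""
    return sum(rec_c) // len(rec_c)

def two_thresholds(rec_c):
    """Alternative split into three groups, as described in the text:
    t1 = (min + Mean + 1) >> 1, t2 = (max + Mean + 1) >> 1."""
    m = mean_threshold(rec_c)
    return (min(rec_c) + m + 1) >> 1, (max(rec_c) + m + 1) >> 1

def split_references(c_ref, l_ref, threshold):
    """Group neighbouring reference pairs: group 1 where C(n) <= threshold,
    group 2 where C(n) > threshold."""
    g1 = [(c, l) for c, l in zip(c_ref, l_ref) if c <= threshold]
    g2 = [(c, l) for c, l in zip(c_ref, l_ref) if c > threshold]
    return g1, g2

rec_c = [10, 20, 30, 40]
print(mean_threshold(rec_c))   # 25
print(two_thresholds(rec_c))   # (18, 33)
print(split_references([10, 20, 30, 40], [15, 25, 35, 45], 25))
```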
  • If an adjacent reference value of the chrominance component of the current image block is not greater than at least one threshold, the adjacent reference value C(m) of the chrominance component and the adjacent reference value L(m) of the luminance component of the first group are obtained; if an adjacent reference value of the chrominance component of the current image block is greater than at least one threshold, the adjacent reference value C(k) of the chrominance component and the adjacent reference value L(k) of the luminance component of the second group are obtained.
  • In this way, the adjacent reference values of the chrominance component of the current image block can be divided into two parts, namely C(m) and C(k); correspondingly, the adjacent reference values of the luminance component of the current image block can also be divided into two parts, namely L(m) and L(k).
  • In this way, the adjacent reference values of each group of chrominance components and the corresponding adjacent reference values of the luminance component can be used as a separate training set, that is, each group can be trained to obtain its own model parameters. Therefore, in the foregoing implementation manner, establishing at least two calculation models based on the adjacent reference values of at least two groups of chrominance components and the adjacent reference values of the luminance component specifically includes: deriving α1 and β1 from the first group according to formula (4), deriving α2 and β2 from the second group according to formula (5), and establishing the first calculation model Pred_1L[i,j] and the second calculation model Pred_2L[i,j] as shown in formula (6):

        α1 = ( M × Σ(C(m) × L(m)) - ΣC(m) × ΣL(m) ) / ( M × Σ(C(m) × C(m)) - ΣC(m) × ΣC(m) )
        β1 = ( ΣL(m) - α1 × ΣC(m) ) / M    (4)

        α2 = ( K × Σ(C(k) × L(k)) - ΣC(k) × ΣL(k) ) / ( K × Σ(C(k) × C(k)) - ΣC(k) × ΣC(k) )
        β2 = ( ΣL(k) - α2 × ΣC(k) ) / K    (5)

        Pred_1L[i,j] = α1 × Rec_C[i,j] + β1, if Rec_C[i,j] ≤ Threshold
        Pred_2L[i,j] = α2 × Rec_C[i,j] + β2, if Rec_C[i,j] > Threshold    (6)

  • where M represents the number of adjacent reference values C(m) of the chrominance component, or equivalently of adjacent reference values L(m) of the luminance component, in the first group; K represents the number of adjacent reference values C(k) of the chrominance component, or of adjacent reference values L(k) of the luminance component, in the second group; [i,j] represents the position coordinates of a sampling point in the current image block, i in the horizontal direction and j in the vertical direction; and Threshold represents the preset threshold, obtained based on the reconstructed values of the chrominance components of all sampling points in the current image block. Rec_C[i,j] represents the reconstructed value of the chrominance component of the sampling point with position coordinates [i,j] in the current image block; Pred_1L[i,j] and Pred_2L[i,j] represent the predicted values of the luminance component of the sampling point with position coordinates [i,j] in the current image block.
  • the threshold is first calculated according to the reconstruction value of the first chrominance component, and the adjacent reference value of the first chrominance component and the adjacent reference value of the luminance component are classified according to the threshold.
  • Two calculation models are taken as an example; that is, α1 and β1 are obtained according to formula (4), α2 and β2 are obtained according to formula (5), and then the reconstructed value of the first chrominance component is substituted into the above formula (6) to obtain the first predicted value.
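  • The two-model scheme above can be sketched in code. This is a minimal illustration, not the document's exact formulas (4)-(6): an ordinary least-squares fit stands in for the parameter derivation, and the threshold is taken here as the mean of the adjacent chrominance references (an assumption; the document derives it from the reconstructed chrominance of the current block).

```python
import numpy as np

def fit_linear(ref_c, ref_l):
    """Least-squares fit of L ~ alpha * C + beta over one group of
    adjacent reference samples (stand-in for formulas (4)/(5))."""
    alpha, beta = np.polyfit(ref_c, ref_l, 1)
    return alpha, beta

def predict_luma_two_models(rec_c, ref_c, ref_l):
    """Split the references into two groups by a threshold, fit one
    linear model per group, and apply the matching model to each
    reconstructed chroma sample (stand-in for formula (6))."""
    threshold = ref_c.mean()        # assumption: mean of references
    lo = ref_c <= threshold         # first group: C(m), L(m)
    hi = ~lo                        # second group: C(k), L(k)
    a1, b1 = fit_linear(ref_c[lo], ref_l[lo])
    a2, b2 = fit_linear(ref_c[hi], ref_l[hi])
    return np.where(rec_c <= threshold,
                    a1 * rec_c + b1,
                    a2 * rec_c + b2)
```

  • For example, with references that follow two distinct linear segments, each reconstructed chroma sample is predicted by the model of the segment it falls into.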
  • prediction is performed based on the reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component to obtain the second predicted value of the luminance component, which includes:
  • the prediction model is called to obtain the second predicted value.
  • the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are substituted into formula (2) to obtain α and β, and then the reconstructed value of the second chrominance component is substituted into formula (1) to obtain the second predicted value; in this way, the second predicted value can be obtained using the linear model.
  • the threshold is first calculated according to the reconstruction value of the second chrominance component, and the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are classified according to the threshold.
  • determining the predicted value of the luminance component according to the first predicted value and the second predicted value includes:
  • the first predicted value and the second predicted value are weighted and summed to obtain the predicted value of the luminance component.
  • the first predicted value and the second predicted value may be fused, and the fused value may be determined as the predicted value of the luminance component.
  • the weight value of the first predicted value and the weight value of the second predicted value can be obtained first, and then the first predicted value and the second predicted value are weighted and summed to obtain the predicted value of the luminance component.
  • the weight value of the first predicted value and the weight value of the second predicted value are obtained.
  • there are multiple sets of weight values preset in the encoder, such as (0.5, 0.5), (0.2, 0.8), (0.3, 0.7) and (0.1, 0.9); one set can be selected from these preset weight values. If (0.5, 0.5) is selected, the weight value of the first predicted value is 0.5 and the weight value of the second predicted value is 0.5. In this way, the weight value of the first predicted value and the weight value of the second predicted value can be determined.
  • the selected result can be identified from the code stream with corresponding syntax elements, so as to facilitate the decoder side to predict the predicted value of the luminance component.
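  • The weighted fusion of the two predicted values can be sketched as follows. The preset weight pairs are taken from the examples above; `weight_index` is a hypothetical stand-in for the syntax element that would signal the chosen pair in the code stream.

```python
# Preset weight pairs, as listed in the example above.
PRESET_WEIGHTS = [(0.5, 0.5), (0.2, 0.8), (0.3, 0.7), (0.1, 0.9)]

def fuse_predictions(pred1, pred2, weight_index=0):
    """Weighted sum of the two luma predictions; weight_index stands in
    for the signalled selection of a preset weight pair."""
    w1, w2 = PRESET_WEIGHTS[weight_index]
    return [w1 * p1 + w2 * p2 for p1, p2 in zip(pred1, pred2)]
```

  • For example, with the pair (0.5, 0.5), each fused sample is simply the average of the two predictions.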
  • obtaining the weight value of the first predicted value and the weight value of the second predicted value may include:
  • the first weight value is determined as the weight value of the first predicted value
  • the second weight value is determined as the weight value of the second predicted value
  • the weight value of the first predicted value and the weight value of the second predicted value can be determined in advance through user input, so that the encoder side receives the first weight value and the second weight value; the first weight value is then determined as the weight value of the first predicted value, and the second weight value is determined as the weight value of the second predicted value. At the same time, the selected result is identified by the corresponding syntax element in the code stream, so that the decoder side can receive the weight values and predict the predicted value of the luminance component.
  • the resolution of the chrominance component of the current image block is less than the resolution of the luminance component.
  • the method further includes:
  • the reconstructed value of the first chrominance component and the reconstructed value of the second chrominance component are respectively up-sampled to obtain the processed reconstructed value of the first chrominance component and the processed reconstructed value of the second chrominance component;
  • the adjacent reference value of the first chrominance component and the adjacent reference value of the luminance component are predicted to obtain the first predicted value of the luminance component, including:
  • the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are predicted to obtain the second predicted value of the luminance component, including:
  • the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are predicted to obtain the second predicted value of the luminance component.
  • the reconstruction value of the first chrominance component and the reconstruction value of the second chrominance component are obtained.
  • the reconstructed value of the first chrominance component and the reconstructed value of the second chrominance component need to be up-sampled respectively, so that the resolution of the processed reconstructed value of the first chrominance component and the resolution of the processed reconstructed value of the second chrominance component are each the same as the resolution of the luminance component, to improve the prediction accuracy.
  • prediction can then be performed according to the processed reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component, and the adjacent reference value of the luminance component to obtain the first predicted value of the luminance component, and according to the processed reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component to obtain the second predicted value of the luminance component; the first predicted value and the second predicted value obtained in this way are used to determine the predicted value of the luminance component of the current image block.
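  • The up-sampling step can be sketched as follows, assuming 4:2:0 sampling (chroma at half the luma resolution in each direction) and a nearest-neighbour filter; the document does not specify the actual interpolation filter, so this is an illustrative assumption.

```python
import numpy as np

def upsample_2x(plane):
    """Nearest-neighbour 2x up-sampling of a chroma plane, e.g. to bring
    a 4:2:0 chroma reconstruction to luma resolution. Each chroma sample
    is replicated into a 2x2 block."""
    return np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
```

  • A 2x2 chroma block thus becomes a 4x4 block at luma resolution, after which the prediction models above can be applied sample-by-sample.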
  • the method before determining the predicted value of the luminance component according to the first predicted value and the second predicted value, the method also includes:
  • the first predicted value and the second predicted value are respectively up-sampled to obtain the processed first predicted value and the processed second predicted value;
  • determining the predicted value of the luminance component according to the first predicted value and the second predicted value includes:
  • the predicted value of the luminance component is determined.
  • when the resolution of the chrominance component is smaller than that of the luminance component, the first predicted value and the second predicted value are respectively up-sampled to obtain the processed first predicted value and the processed second predicted value;
  • the predicted value of the luminance component is determined according to the processed first predicted value and the processed second predicted value.
  • the method further includes:
  • the predicted value of the luminance component is up-sampled to obtain the predicted value of the processed luminance component.
  • the predicted value of the luminance component is up-sampled so that the resolution of the predicted value of the processed luminance component is the same as the resolution of the luminance component.
  • FIG. 4 is a schematic flowchart of an optional method for predicting image components provided by an embodiment of this application. As shown in FIG. 4, the method may include:
  • S401 and S402 in the embodiment of this application are mainly applied to the intra prediction unit 203 shown in FIG. 2 and the intra prediction unit 303 shown in FIG. 3; that is, the embodiment of this application can act on the encoder and the decoder simultaneously, which is not specifically limited in the embodiment of the present application.
  • in the decoding of image components, one chrominance component can be decoded first and then used to predict the luminance component; one chrominance component can be decoded and then used to predict another chrominance component; or two chrominance components can be decoded first, and then one chrominance component is used to predict the luminance component, or both chrominance components are used to predict the luminance component. The embodiment of the present application does not specifically limit this.
  • S102 may include:
  • when the first chrominance component is a blue chrominance component, the second chrominance component is a red chrominance component; when the first chrominance component is a red chrominance component, the second chrominance component is a blue chrominance component.
  • the first chrominance component and the second chrominance component in the decoded chrominance components are used to predict the predicted value of the luminance component.
  • for example, if the first chrominance component is Cb, the second chrominance component is Cr, and the luminance component is Y, then Y can be predicted by Cb and Cr; or, if the first chrominance component is Cr, the second chrominance component is Cb, and the luminance component is Y, then Y can likewise be predicted by Cb and Cr.
  • the prediction is performed based on the first chrominance component in the decoded chrominance components and the second chrominance component in the decoded chrominance components to obtain the predicted value of the luminance component, including:
  • the predicted value of the luminance component is determined.
  • the current image block is the current image block to be decoded.
  • the first chrominance component of the adjacent image block of the current image block is the adjacent reference value of the first chrominance component; the second chrominance component of the adjacent image block of the current image block is the adjacent reference value of the second chrominance component; the luminance component of the adjacent image block of the current image block is the adjacent reference value of the luminance component. It should be noted that the adjacent image blocks of the current image block are the image blocks in the row above and in the column to the left of the current image block.
  • the first predicted value can be predicted based on the reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component, and the adjacent reference value of the luminance component, and the second predicted value can be predicted based on the reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component.
  • the predicted value of the luminance component of the current image block is then obtained according to the first predicted value and the second predicted value; in this way, one predicted value is predicted from each chrominance component, and finally the two predicted values are fused to obtain the predicted value of the luminance component of the current image block.
  • prediction is performed based on the reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component, and the adjacent reference value of the luminance component to obtain the first predicted value of the luminance component, which includes:
  • the prediction model is called to obtain the first predicted value.
  • the image component prediction model used above can be a linear model or a non-linear model to predict the luminance component.
  • the embodiment of the present application does not specifically limit it here.
  • the prediction model can also adopt a nonlinear model.
  • the embodiment of the present application also proposes a nonlinear model calculation method for the image component prediction model.
  • the threshold is first calculated according to the reconstruction value of the first chrominance component, and the adjacent reference value of the first chrominance component and the adjacent reference value of the luminance component are classified according to the threshold.
  • Two calculation models are taken as an example; that is, α1 and β1 are obtained according to formula (4), α2 and β2 are obtained according to formula (5), and then the reconstructed value of the first chrominance component is substituted into the above formula (6) to obtain the first predicted value.
  • prediction is performed based on the reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component to obtain the second predicted value of the luminance component, which includes:
  • the prediction model is called to obtain the second predicted value.
  • the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are substituted into formula (2) to obtain α and β, and then the reconstructed value of the second chrominance component is substituted into formula (1) to obtain the second predicted value; in this way, the second predicted value can be obtained using the linear model.
  • the threshold is first calculated according to the reconstruction value of the second chrominance component, and the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are classified according to the threshold.
  • determining the predicted value of the luminance component according to the first predicted value and the second predicted value includes:
  • the first predicted value and the second predicted value are weighted and summed to obtain the predicted value of the luminance component.
  • the first predicted value and the second predicted value may be fused, and the fused value may be determined as the predicted value of the luminance component.
  • the weight value of the first predicted value and the weight value of the second predicted value can be obtained first, and then the first predicted value and the second predicted value are weighted and summed to obtain the predicted value of the luminance component.
  • the weight value of the first predicted value and the weight value of the second predicted value are obtained.
  • there are multiple sets of weight values preset in the decoder, such as (0.5, 0.5), (0.2, 0.8), (0.3, 0.7) and (0.1, 0.9); one set can be selected from these preset weight values. If (0.5, 0.5) is selected, the weight value of the first predicted value is 0.5 and the weight value of the second predicted value is 0.5. In this way, the weight value of the first predicted value and the weight value of the second predicted value can be determined.
  • the selected result can be identified from the code stream with corresponding syntax elements, so as to facilitate the decoder side to predict the predicted value of the luminance component.
  • obtaining the weight value of the first predicted value and the weight value of the second predicted value may include:
  • the first weight value is determined as the weight value of the first predicted value
  • the second weight value is determined as the weight value of the second predicted value
  • the weight value of the first predicted value and the weight value of the second predicted value can be determined in advance through user input, so that the encoder side receives the first weight value and the second weight value; the first weight value is then determined as the weight value of the first predicted value, and the second weight value is determined as the weight value of the second predicted value. At the same time, the selected result is identified by the corresponding syntax element in the code stream, so that the decoder can receive the weight values and predict the predicted value of the luminance component.
  • the resolution of the chrominance component of the current image block is less than the resolution of the luminance component.
  • the method further includes:
  • the reconstructed value of the first chrominance component and the reconstructed value of the second chrominance component are respectively up-sampled to obtain the processed reconstructed value of the first chrominance component and the processed reconstructed value of the second chrominance component;
  • the adjacent reference value of the first chrominance component and the adjacent reference value of the luminance component are predicted to obtain the first predicted value of the luminance component, including:
  • the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are predicted to obtain the second predicted value of the luminance component, including:
  • the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are predicted to obtain the second predicted value of the luminance component.
  • the reconstruction value of the first chrominance component and the reconstruction value of the second chrominance component are obtained.
  • the reconstructed value of the first chrominance component and the reconstructed value of the second chrominance component need to be up-sampled respectively, so that the resolution of the processed reconstructed value of the first chrominance component and the resolution of the processed reconstructed value of the second chrominance component are each the same as the resolution of the luminance component, to improve the prediction accuracy.
  • prediction can then be performed according to the processed reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component, and the adjacent reference value of the luminance component to obtain the first predicted value of the luminance component, and according to the processed reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component to obtain the second predicted value of the luminance component; the first predicted value and the second predicted value obtained in this way are used to determine the predicted value of the luminance component of the current image block.
  • the method before determining the predicted value of the luminance component according to the first predicted value and the second predicted value, the method also includes:
  • the first predicted value and the second predicted value are respectively up-sampled to obtain the processed first predicted value and the processed second predicted value;
  • determining the predicted value of the luminance component according to the first predicted value and the second predicted value includes:
  • the predicted value of the luminance component is determined.
  • when the resolution of the chrominance component is smaller than that of the luminance component, the first predicted value and the second predicted value are respectively up-sampled to obtain the processed first predicted value and the processed second predicted value;
  • the predicted value of the luminance component is determined according to the processed first predicted value and the processed second predicted value.
  • the method further includes:
  • the predicted value of the luminance component is up-sampled to obtain the predicted value of the processed luminance component.
  • the predicted value of the luminance component is up-sampled so that the resolution of the predicted value of the processed luminance component is the same as the resolution of the luminance component.
  • the embodiment of the present application provides a method for predicting image components. The method includes: encoding the chrominance component in the encoding of the image component, and obtaining the predicted value of the luminance component according to the encoded chrominance component. That is, in the embodiment of this application, the chrominance component is encoded and decoded first, and the luminance component is predicted from the chrominance component obtained by the encoding and decoding. In this way, the relatively flat chrominance component is encoded and decoded first, and then the luminance component with rich texture can be predicted based on the chrominance component obtained by the encoding and decoding, which can improve the accuracy of luminance component prediction, making the predicted value of the luminance component closer to the real pixel values of the luminance component.
  • FIG. 5 is a schematic structural diagram of an optional encoder proposed in an embodiment of this application.
  • the encoder proposed in an embodiment of this application may include an encoding module 51 and a first acquisition module 52; wherein,
  • the encoding module 51 is configured to encode the chrominance component in the encoding of the image component
  • the first obtaining module 52 is configured to obtain the predicted value of the luminance component according to the encoded chrominance component.
  • first obtaining module 52 is specifically configured as follows:
  • when the first chrominance component is a blue chrominance component, the second chrominance component is a red chrominance component; when the first chrominance component is a red chrominance component, the second chrominance component is a blue chrominance component.
  • the first obtaining module 52 performs prediction based on the first chroma component in the coded chroma component and the second chroma component in the coded chroma component, and the predicted value of the luma component obtained includes:
  • the predicted value of the luminance component is determined.
  • the first obtaining module 52 predicts according to the reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component, and the adjacent reference value of the luminance component, and the first predicted value of the luminance component obtained includes :
  • the prediction model is called to obtain the first predicted value.
  • the first acquisition module 52 performs prediction based on the reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component, and the second predicted value of the luminance component obtained includes :
  • the prediction model is called to obtain the second predicted value.
  • the first obtaining module 52 determines the predicted value of the luminance component according to the first predicted value and the second predicted value, including:
  • the first predicted value and the second predicted value are weighted and summed to obtain the predicted value of the luminance component.
  • the first obtaining module 52 obtaining the weight value of the first predicted value and the weight value of the second predicted value includes:
  • first obtaining module 52 is specifically configured as follows:
  • for the current image block, obtain the reconstructed value of the first chrominance component, the reconstructed value of the second chrominance component, the adjacent reference value of the first chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component; when the resolution of the chrominance component is less than the resolution of the luminance component, the reconstructed value of the first chrominance component and the reconstructed value of the second chrominance component are respectively up-sampled to obtain the processed reconstructed value of the first chrominance component and the processed reconstructed value of the second chrominance component.
  • the first acquisition module 52 predicts according to the reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component and the adjacent reference value of the luminance component, and the first predicted value of the luminance component is obtained, including :
  • the first obtaining module 52 predicts according to the reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component, and the second predicted value of the luminance component is obtained, including :
  • the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are predicted to obtain the second predicted value of the luminance component.
  • the first obtaining module 52 is specifically configured to:
  • the first predicted value and the second predicted value are respectively up-sampled to obtain the processed first predicted value and the processed second predicted value;
  • the first obtaining module 52 determines the predicted value of the luminance component according to the first predicted value and the second predicted value, including:
  • the predicted value of the luminance component is determined.
  • the first obtaining module 52 is specifically configured to:
  • the predicted value of the luminance component is up-sampled to obtain the predicted value of the processed luminance component.
  • FIG. 6 is a schematic structural diagram of an optional decoder proposed in an embodiment of this application.
  • the decoder proposed in an embodiment of this application may include a decoding module 61 and a second acquisition module 62; wherein,
  • the decoding module 61 is configured to decode the chrominance component in the decoding of the image component
  • the second obtaining module 62 is configured to obtain the predicted value of the luminance component according to the decoded chrominance component.
  • the second obtaining module 62 is specifically configured as follows:
  • when the first chrominance component is a blue chrominance component, the second chrominance component is a red chrominance component; when the first chrominance component is a red chrominance component, the second chrominance component is a blue chrominance component.
  • the second acquisition module 62 performs prediction based on the first chrominance component in the decoded chrominance components and the second chrominance component in the decoded chrominance components, and the predicted value of the luminance component obtained includes:
  • the predicted value of the luminance component is determined.
  • the second acquisition module 62 predicts according to the reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component and the adjacent reference value of the luminance component, and the first predicted value of the luminance component is obtained, including: :
  • the prediction model is called to obtain the first predicted value.
  • the second acquisition module 62 predicts according to the reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component, and the second predicted value of the luminance component obtained includes :
  • the prediction model is called to obtain the second predicted value.
  • the second obtaining module 62 determines the predicted value of the luminance component according to the first predicted value and the second predicted value, including:
  • the first predicted value and the second predicted value are weighted and summed to obtain the predicted value of the luminance component.
  • the second obtaining module 62 obtaining the weight value of the first predicted value and the weight value of the second predicted value includes:
  • the second obtaining module 62 is specifically configured as follows:
  • for the current image block, obtain the reconstructed value of the first chrominance component, the reconstructed value of the second chrominance component, the adjacent reference value of the first chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component; when the resolution of the chrominance component is less than the resolution of the luminance component, the reconstructed value of the first chrominance component and the reconstructed value of the second chrominance component are respectively up-sampled to obtain the processed reconstructed value of the first chrominance component and the processed reconstructed value of the second chrominance component.
  • the second acquisition module 62 predicts according to the reconstructed value of the first chrominance component, the adjacent reference value of the first chrominance component and the adjacent reference value of the luminance component, and the first predicted value of the luminance component is obtained, including :
  • the second acquiring module 62 predicts according to the reconstructed value of the second chrominance component, the adjacent reference value of the second chrominance component, and the adjacent reference value of the luminance component, and the second predicted value of the luminance component obtained includes :
  • the adjacent reference value of the second chrominance component and the adjacent reference value of the luminance component are predicted to obtain the second predicted value of the luminance component.
  • the second obtaining module 62 is specifically configured to:
  • the first predicted value and the second predicted value are respectively up-sampled to obtain the processed first predicted value and the processed second predicted value;
  • the second acquisition module 62 determines the predicted value of the luminance component according to the first predicted value and the second predicted value, including:
  • the predicted value of the luminance component is determined.
  • the second obtaining module 62 is specifically configured to:
  • the predicted value of the luminance component is up-sampled to obtain the predicted value of the processed luminance component.
  • FIG. 7 is a schematic structural diagram of another optional encoder proposed in an embodiment of the application.
  • the encoder 700 proposed in an embodiment of the application may further include a processor 71 and a storage medium 72 storing instructions executable by the processor 71.
  • the storage medium 72 relies on the processor 71 to perform operations through the communication bus 73.
  • when the instructions are executed by the processor 71, the image component prediction method described in one or more of the above embodiments is executed.
  • the communication bus 73 is used to implement connection and communication between these components.
  • the communication bus 73 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the communication bus 73 in FIG. 7.
  • FIG. 8 is a schematic structural diagram of another optional decoder proposed in an embodiment of the application.
  • the decoder 800 proposed in an embodiment of the application may further include a processor 81 and a storage medium 82 storing instructions executable by the processor 81.
  • the storage medium 82 relies on the processor 81 to perform operations through the communication bus 83.
  • when the instructions are executed by the processor 81, the image component prediction method described in one or more of the above embodiments is executed.
  • the communication bus 83 is used to implement connection and communication between these components.
  • the communication bus 83 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the communication bus 83 in FIG. 8.
  • An embodiment of the present application provides a computer storage medium that stores executable instructions.
  • when the executable instructions are executed by one or more processors, the processors perform the image component prediction method described in one or more of the above embodiments.
  • the memory in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
  • SRAM static random access memory
  • DRAM dynamic random access memory
  • SDRAM synchronous dynamic random access memory
  • DDRSDRAM double data rate synchronous dynamic random access memory
  • ESDRAM enhanced synchronous dynamic random access memory
  • SLDRAM synchlink dynamic random access memory
  • DRRAM direct Rambus random access memory
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • the steps of the above method can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the aforementioned processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, or discrete hardware components.
  • DSP Digital Signal Processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the embodiments described herein can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
  • the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof.
  • ASIC Application Specific Integrated Circuits
  • DSP Digital Signal Processing
  • DSP Device Digital Signal Processing Equipment
  • PLD Programmable Logic Device
  • FPGA Field-Programmable Gate Array
  • the technology described herein can be implemented through modules (such as procedures, functions, etc.) that perform the functions described herein.
  • the software codes can be stored in the memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.
  • the method of the above embodiments can be implemented by means of software plus the necessary general hardware platform. Of course, it can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of this application, in essence or the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions to enable a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method described in each embodiment of the present application.
  • the embodiments of the present application provide a prediction method for an image component, an encoder, and a computer storage medium, including: in the encoding of the image component, encoding the chrominance component, and obtaining the predicted value of the luminance component according to the encoded chrominance component, which can improve the accuracy of luminance component prediction, so that the predicted value of the luminance component is closer to the real pixel value of the luminance component.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Color Television Systems (AREA)

Abstract

本申请实施例公开了一种图像分量的预测方法、编码器以及计算机存储介质,该方法包括:在图像分量的编码中,对色度分量进行编码,根据编码的色度分量,获取亮度分量的预测值。

Description

图像分量的预测方法、编码器、解码器及计算机存储介质 技术领域
本申请实施例涉及视频编码领域的帧内预测技术,尤其涉及一种图像分量的预测方法、编码器、解码器及计算机存储介质。
背景技术
在下一代视频编码标准H.266或多功能视频编码(Versatile Video Coding,VVC)中,可以通过颜色分量间线性模型预测方法(Cross-component Linear Model Prediction,CCLM)实现跨分量预测,通过跨分量预测,基于分量之间的依赖性,可以通过亮度分量来预测色度分量。
目前,用于预测的亮度分量需要进行下采样,下采样到和需要预测的色度分量相同的分辨率,然后在亮度和色度之间以相同的分辨率执行预测,从而实现亮度分量到其中一个色度分量的预测。
然而,由于亮度分量具有丰富的纹理,而色度分量比较平坦,使用亮度分量预测色度分量,使得预测出的色度分量与真实色度值之间存在较大的偏差,导致预测值的准确性较低,从而影响编解码的效率。
发明内容
本申请实施例提供一种图像分量的预测方法、编码器、解码器及计算机存储介质,能够实现色度分量到亮度分量的预测编码,提高亮度分量预测的准确性,使得亮度分量的预测值更加接近真实的亮度分量的像素值。
本申请实施例的技术方案可以如下实现:
第一方面,本申请实施例提供一种图像分量的预测方法,所述方法包括:
在图像分量的编码中,对色度分量进行编码;
根据编码的色度分量,获取亮度分量的预测值。
第二方面,本申请实施例提供一种图像分量的预测方法,所述方法包括:
在图像分量的解码中,对色度分量进行解码;
根据解码的色度分量,获取亮度分量的预测值。
第三方面,本申请实施例提供一种编码器,所述编码器包括:
编码模块,配置为在图像分量的编码中,对色度分量进行编码;
第一获取模块,配置为根据编码的色度分量,获取亮度分量的预测值。
第四方面,本申请实施例提供一种解码器,所述解码器包括:
解码模块,配置为在图像分量的解码中,对色度分量进行解码;
第二获取模块,配置为根据解码的色度分量,获取亮度分量的预测值。
第五方面,本申请实施例提供一种编码器,所述编码器包括:
处理器以及存储有所述处理器可执行指令的存储介质,所述存储介质通过通信总线依赖所述处理器执行操作,当所述指令被所述处理器执行时,执行上述一个或多个实施例所述的图像分量的预测方法。
第六方面,本申请实施例提供一种解码器,所述解码器包括:
处理器以及存储有所述处理器可执行指令的存储介质,所述存储介质通过通信总线依赖所述处理器执行操作,当所述指令被所述处理器执行时,执行上述一个或多个实施例所述的图像分量的预测方法。
第七方面,本申请实施例提供一种计算机可读存储介质,其中,存储有可执行指令,当所述可执行指令被一个或多个处理器执行的时候,所述处理器执行上述一个或多个实施例所述的图像分量的预测的方法。
本申请实施例提供了一种图像分量的预测方法、编码器、解码器及计算机存储介质,该方法包括:在图像分量的编码中,对色度分量进行编码,根据编码的色度分量,获取亮度分量的预测值;也就是说,在本申请实施例中,通过先对色度分量进行编解码,在通过编解码出的色度分量来预测亮度分量,这样,通过编解码得到的色度分量来预测亮度分量,即先对较平坦的色度分量进行编解码,再基于编解码得到的色度分量来预测具有丰富纹理的亮度分量,能够提高对亮度分量预测的准确性,使得亮度分量的预测值更加接近真实的亮度分量的像素值。
附图说明
图1为本申请实施例提供的一种可选的图像分量的预测方法的流程示意图;
图2为视频编码系统的结构示意图;
图3为视频解码系统的结构示意图;
图4为本申请实施例提供的另一种可选的图像分量的预测方法的流程示意图;
图5为本申请实施例提出的一种可选的编码器的结构示意图;
图6为本申请实施例提出的一种可选的解码器的结构示意图;
图7为本申请实施例提出的另一种可选的编码器的结构示意图;
图8为本申请实施例提出的另一种可选的解码器的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。可以理解的是,此处所描述的具体实施例仅仅用于解释相关申请,而非对该申请的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与有关申请相关的部分。
实施例一
在视频图像中,一般采用第一图像分量、第二图像分量和第三图像分量来表征图像块;其中,第一图像分量、第二图像分量和第三图像分量可以包括一个亮度分量与两个色度分量。具体地,亮度分量通常使用符号Y表示,色度分量通常使用符号Cb、Cr表示,其中,Cb表示为蓝色色度分量,Cr表示为红色色度分量。
需要说明的是,在本申请的实施例中,第一图像分量、第二图像分量和第三图像分量可以分别为亮度分量Y、蓝色色度分量Cb以及红色色度分量Cr,例如,第一图像分量可以为亮度分量Y,第二图像分量可以为红色色度分量Cr,第三图像分量可以为蓝色色度分量Cb,本申请实施例对此并不作具体限定。
进一步地,在本申请的实施例中,常用的亮度分量和色度分量分开表示的采样格式也称为YCbCr格式,其中,YCbCr格式可以包括4:4:4格式;4:2:2格式以及4:2:0格式。
在视频图像采用YCbCr为4:2:0格式的情况下,若视频图像的亮度分量为2N×2N 大小的当前图像块,则对应的色度分量为N×N大小的当前图像块,其中N为当前图像块的边长。在本申请实施例中,将以4:2:0格式为例进行描述,但是本申请实施例的技术方案同样适用于其他采样格式。
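上述亮度块与色度块尺寸的对应关系,可以用如下示意性Python代码表达(仅为示例草图,函数名为本文假设,并非标准实现):

```python
def chroma_block_size(luma_size, sampling="4:2:0"):
    """根据采样格式,由亮度块尺寸计算对应的色度块尺寸(示意)。"""
    w, h = luma_size
    if sampling == "4:2:0":
        return (w // 2, h // 2)   # 水平、竖直方向均下采样一半
    if sampling == "4:2:2":
        return (w // 2, h)        # 仅水平方向下采样
    if sampling == "4:4:4":
        return (w, h)             # 不下采样
    raise ValueError("未知采样格式: " + sampling)

# 亮度分量为2N×2N的当前图像块时,4:2:0下对应色度分量为N×N
print(chroma_block_size((16, 16)))  # (8, 8)
```

例如亮度块为16×16(即2N×2N,N=8)时,4:2:0格式下色度块为8×8,与正文描述一致。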
在H.266中,为了进一步提升编码性能和编码效率,针对分量间预测(CCP,Cross-component Prediction)进行了扩展改进,提出了分量间线性模型预测(CCLM,Cross-component Linear Model Prediction)。在H.266中,CCLM实现了第一图像分量到第二图像分量、第一图像分量到第三图像分量以及第二图像分量与第三图像分量之间的预测。
然而,针对CCLM来说,都是先预测当前图像块的亮度分量,然后通过亮度分量预测色度分量,然而,由于亮度分量具有丰富的纹理,而色度分量比较平坦,使用亮度分量预测色度分量,使得预测出的色度分量与真实色度值之间存在较大的偏差,导致预测值的准确性较低,从而影响编解码的效率。
本申请实施例提供一种图像分量的预测方法,图1为本申请实施例提供的一种可选的图像分量的预测方法的流程示意图,如图1所示,该方法可以包括:
S101:在图像分量的编码中,对色度分量进行编码;
S102:根据编码的色度分量,获取亮度分量的预测值。
其中,本申请实施例提供的图像分量的预测方法可以应用于编码器或者解码器中,图2为视频编码系统的结构示意图,如图2所示,该视频编码系统200包括变换与量化单元201、帧内估计单元202、帧内预测单元203、运动补偿单元204、运动估计单元205、反变换与反量化单元206、滤波器控制分析单元207、滤波单元208、编码单元209和解码图像缓存单元210等,其中,滤波单元208可以实现去方块滤波及样本自适应偏移(Sample Adaptive Offset,SAO)滤波,编码单元209可以实现头信息编码及基于上下文的自适应二进制算术编码(Context-based Adaptive Binary Arithmetic Coding,CABAC)。针对输入的原始视频信号,通过编码树单元(Coding Tree Unit,CTU)的划分可以得到一个视频编码块,然后对经过帧内或帧间预测后得到的残差像素信息通过变换与量化单元201对该视频编码块进行变换,包括将残差信息从像素域变换到变换域,并对所得的变换系数进行量化,用以进一步减少比特率;帧内估计单元202和帧内预测单元203是用于对该视频编码块进行帧内预测;明确地说,帧内估计单元202和帧内预测单元203用于确定待用以编码该视频编码块的帧内预测模式;运动补偿单元204和运动估计单元205用于执行所接收的视频编码块相对于一或多个参考帧中的一或多个块的帧间预测编码以提供时间预测信息;由运动估计单元205执行的运动估计为产生运动向量的过程,所述运动向量可以估计该视频编码块的运动,然后由运动补偿单元204基于由运动估计单元205所确定的运动向量执行运动补偿;在确定帧内预测模式之后,帧内预测单元203还用于将所选择的帧内预测数据提供到编码单元209,而且运动估计单元205将所计算确定的运动向量数据也发送到编码单元209;此外,反变换与反量化单元206是用于该视频编码块的重构建,在像素域中重构建残差块,该重构建残差块通过滤波器控制分析单元207和滤波单元208去除方块效应伪影,然后将该重构残差块添加到解码图像缓存单元210的帧中的一个预测性块,用以产生经重构建的视频编码块;编码单元209是用于编码各种编码参数及量化后的变换系数,在基于CABAC的编码算法中,上下文内容可基于相邻编码块,可用于编码指示所确定的帧内预测模式的信息,输出该视频信号的码流;而解码图像缓存单元210是用于存放重构建的视频编码块,用于预测参考。随着视频图像编码的进行,会不断生成新的重构建的视频编码块,这些重构建的视频编码块都会被存放在解码图像缓存单元210中。
图3为视频解码系统的结构示意图,如图3所示,该视频解码系统300包括解码单元301、反变换与反量化单元302、帧内预测单元303、运动补偿单元304、滤波单元305和解码图像缓存单元306等,其中,解码单元301可以实现头信息解码以及CABAC解码,滤波单元305可以实现去方块滤波以及SAO滤波。输入的视频信号经过图2的编码处理之后,输出该视频信号的码流;该码流输入视频解码系统300中,首先经过解码单元301,用于得到解码后的变换系数;针对该变换系数通过反变换与反量化单元302进行处理,以便在像素域中产生残差块;帧内预测单元303可用于基于所确定的帧内预测模式和来自当前帧或图片的先前经解码块的数据而产生当前视频解码块的预测数据;运动补偿单元304是通过剖析运动向量和其他关联语法元素来确定用于视频解码块的预测信息,并使用该预测信息以产生正被解码的视频解码块的预测性块;通过对来自反变换与反量化单元302的残差块与由帧内预测单元303或运动补偿单元304产生的对应预测性块进行求和,而形成解码的视频块;该解码的视频信号通过滤波单元305以便去除方块效应伪影,可以改善视频质量;然后将经解码的视频块存储于解码图像缓存单元306中,解码图像缓存单元306存储用于后续帧内预测或运动补偿的参考图像,同时也用于视频信号的输出,即得到了所恢复的原始视频信号。
需要说明的是,本申请实施例中的S101和S102主要应用在如图2所示的帧内预测单元203部分和如图3所示的帧内预测单元303部分;也就是说,本申请实施例对于编码器和解码器可以同时作用,本申请实施例对此不作具体限定。
另外,在S101和S102中,在图像分量的编码中,可以是对一个色度分量进行编码,然后使用该色度分量来预测亮度分量,也可以是对一个色度分量进行编码,然后使用该色度分量来预测另一个色度分量,还可以是先对两个色度分量进行编码,然后使用一个色度分量来预测亮度分量,或者使用两个色度分量来预测亮度分量,这里,本申请实施例对此不作具体限定。
为了得到亮度分量的预测值,在一种可选的实施例中,S102可以包括:
根据编码的色度分量中的第一色度分量和编码的色度分量中的第二色度分量进行预测,得到亮度分量的预测值;
其中,当第一色度分量为蓝色色度分量时,第二色度分量为红色色度分量;当第一色度分量为红色色度分量时,第二色度分量为蓝色色度分量。
这里,采用编码的色度分量中的第一色度分量和编码的第二色度分量来预测亮度分量的预测值,例如,第一色度分量为Cb,第二色度分量为Cr,亮度分量为Y,那么,可以通过Cb和Cr来预测Y,或者,第一色度分量为Cr,第二色度分量为Cb,亮度分量为Y,那么,可以通过Cb和Cr来预测Y。
进一步地,为了预测出亮度分量的预测值,在一种可选的实施例中,根据编码的色度分量中的第一色度分量和编码的色度分量中的第二色度分量进行预测,得到亮度分量的预测值,包括:
针对当前图像块,获取第一色度分量的重建值、第二色度分量的重建值、第一色度分量的相邻参考值、第二色度分量的相邻参考值以及亮度分量的相邻参考值;
根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值;
根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值;
根据第一预测值和第二预测值,确定亮度分量的预测值。
其中,当前图像块为当前待编码图像块,这里,针对当前图像块,需要获取当前图像块的第一色度分量的重建值,当前图像块的第二色度分量的重建值,当前图像块的相邻图像块的第一色度分量即为上述第一色度分量的相邻参考值,当前图像块的相邻图像 块的第二色度分量即为上述第二色度分量的相邻参考值,当前图像块的相邻图像块的亮度分量即为亮度分量的相邻参考值;需要说明的是,上述当前图像块的相邻图像块为当前图像块的上一行图像块和左一列图像块。
那么,在获取到上述值之后,可以根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值来预测出第一预测值,根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值来预测出第二预测值,然后,再根据第一预测值和第二预测值得到当前图像块的亮度分量的预测值;这样,分别通过一种色度分量来预测一个预测值,最后将预测出的两个预测值进行融合得到当前图像块的亮度分量的预测值。
为了得到第一预测值,在一种可选的实施例中,根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值,包括:
根据第一色度分量的相邻参考值和亮度分量的相邻参考值,确定预测模型的参数;
根据第一色度分量的重建值,调用预测模型,得到第一预测值。
在得到第一色度分量的重建值和第一色度分量的相邻参考值之后,上述所采用的图像分量的预测模型可以为线型模型,也可以为非线性模型,来预测亮度分量,以得到第一预测值,这里,本申请实施例不作具体限定。
具体来说,针对线型模型,在下一代视频编码标准的编码器,例如H.266/VVC早期测试模型(Joint Exploration Model,JEM)或VVC测试模型(VVC Test model,VTM)中使用分量间线性模型预测模式。例如根据公式(1),利用同一编码块的重建色度值构造亮度分量的预测值:
$$Pred_L[i,j]=\alpha\cdot Rec_C[i,j]+\beta \qquad (1)$$
其中,i,j表示当前图像块中采样点的位置坐标,i表示水平方向,j表示竖直方向,Pred L[i,j]表示编码块中位置坐标为i,j的采样点的亮度分量预测值,Rec C[i,j]表示当前图像块中位置坐标为i,j的采样点的色度分量重建值,α和β是线性模型的比例因子,可以通过最小化色度分量相邻参考值和亮度分量相邻参考值的回归误差推导得到,如下公式(2):
$$\alpha=\frac{N\cdot\sum C(n)\cdot L(n)-\sum C(n)\cdot\sum L(n)}{N\cdot\sum C(n)\cdot C(n)-\left(\sum C(n)\right)^{2}},\qquad \beta=\frac{\sum L(n)-\alpha\cdot\sum C(n)}{N} \qquad (2)$$
其中,L(n)表示亮度分量的相邻参考值(例如左侧和上侧),C(n)表示色度分量的相邻参考值(例如左侧和上侧),N为相邻参考值的个数。在解码器中也通过式(2)计算得到α和β。
其中,利用当前图像块亮度分量相邻参考值L(n)和色度分量相邻参考值C(n),根据式(2)可以计算得出α和β;然后将当前图像块的色度分量重建值代入式(1)所述的线性模型中,可以计算得到当前图像块的亮度分量预测值。
在实际应用中,将第一色度分量的相邻参考值和亮度分量的相邻参考值代入公式(2)得到α和β,然后,将第一色度分量的重建值代入公式(1)得到第一预测值;这样,利用线型模型可以得到第一预测值。
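上述线性模型的参数推导(式(2))与预测(式(1))可以用如下示意性Python代码表达(仅为示例草图,函数名为本文假设,退化情形的处理方式也是假设性的):

```python
def derive_linear_params(C, L):
    """按式(2)用相邻参考值最小二乘推导线性模型参数α、β。
    C: 色度分量相邻参考值列表, L: 亮度分量相邻参考值列表。"""
    N = len(C)
    s_c, s_l = sum(C), sum(L)
    s_cl = sum(c * l for c, l in zip(C, L))
    s_cc = sum(c * c for c in C)
    denom = N * s_cc - s_c * s_c
    if denom == 0:                 # 参考值全相同时的退化处理(假设)
        return 0.0, s_l / N
    alpha = (N * s_cl - s_c * s_l) / denom
    beta = (s_l - alpha * s_c) / N
    return alpha, beta

def predict_luma(rec_c, alpha, beta):
    """按式(1)由色度分量重建值得到亮度分量预测值。"""
    return [alpha * c + beta for c in rec_c]

# 相邻参考值恰好满足 L = 2*C + 1 时,模型应精确恢复该线性关系
a, b = derive_linear_params([10, 20, 30, 40], [21, 41, 61, 81])
print(a, b)                           # 2.0 1.0
print(predict_luma([15, 25], a, b))   # [31.0, 51.0]
```

注意这里的预测方向与传统CCLM相反:色度重建值是自变量,亮度是被预测的分量,与正文一致。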
预测模型除了采用上述线型模型之外,还可以采用非线性模型,在上述线型模型的基础上,图像分量的预测模型还提出了非线性模型计算方法。
具体地,在计算线性模型的参数时,不仅仅考虑色度分量相邻参考值、亮度分量相邻参考值,还考虑当前图像块的色度分量重建值和色度分量相邻参考值之间的相关性和相似程度,以得到现有的线型模型中的α和β,从而得到的线型模型更贴合当前色度分量重建值,进一步得到当前图像块的亮度分量预测值。
例如,在非线性模型中,将当前图像块的色度分量相邻参考值和亮度分量相邻参考值分成两组,每一组均可以单独作为推导线性模型参数的训练集,即每一个分组都能推导出一组参数。因此,同样可以克服色度分量相邻参考值与当前图像块对应的参数存在较大偏离时,或者,另一个色度分量相邻参考值与当前图像块对应的参数存在较大偏离时,线性模型与期望模型偏离的缺陷,进而在根据该线性模型对当前图像块的亮度分量进行预测时,能够大大提高亮度分量预测值的预测精度。
针对非线性模型来说,可以通过设置一个阈值将相邻参考值和中间值分为两组,根据两组相邻参考值和重建值建立非线型模型。
其中,阈值是当前图像块的色度分量相邻参考值和亮度分量相邻参考值的分类依据,同时也是当前图像块的色度分量重建值的分类依据。其中,阈值是用于指示建立多个计算模型所依据的设定值,阈值的大小与当前图像块所有采样点的色度分量重建值有关。具体地,可以通过计算当前图像块所有采样点的色度分量重建值的均值得到,也可以通过计算当前图像块所有采样点的色度分量重建值的中值得到,本申请实施例对此不作具体限定。
在本申请实施例中,首先,可以根据当前图像块所有采样点的色度分量的重建值以及式(3),计算得出均值Mean:
$$Mean=\frac{\sum Rec_C[i,j]}{M} \qquad (3)$$
其中,Mean表示当前图像块所有采样点的色度分量的重建值的均值,∑Rec C[i,j]表示当前图像块所有采样点的色度分量的重建值之和,M表示当前图像块所有采样点的色度分量的重建值的采样个数。
其次,将计算得出的均值Mean直接作为阈值,利用该阈值可以建立两个计算模型;但是本申请实施例并不限于只建立两个计算模型。举例来说,根据当前图像块所有采样点的色度分量的重建值之和∑Rec C[i,j]求取平均值,进而得到均值Mean。如果建立两个计算模型,则可以将Mean直接作为阈值,根据该阈值可以将当前图像块的色度分量的相邻参考值分为两部分,预示着后续可以建立两个计算模型;如果建立三个计算模型,则将(色度分量最小重建值+Mean+1)>>1作为第一个阈值,将(色度分量最大重建值+Mean+1)>>1作为第二个阈值,根据这两个阈值可以将当前图像块的色度分量的相邻参考值分为三部分,预示着后续可以建立三个计算模型;下述将以计算得出的均值Mean作为阈值来建立两个计算模型为例进行描述。
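均值阈值(式(3))以及建立三个计算模型时的两个阈值,可以用如下示意性Python代码表达(仅为示例草图,整数取整方式为本文假设;正文亦指出阈值也可取中值):

```python
def mean_threshold(rec_c):
    """按式(3)取当前块色度分量重建值的均值作为单一阈值(建立两个模型)。"""
    return sum(rec_c) // len(rec_c)    # 整数实现,与正文的>>移位风格一致(假设)

def three_model_thresholds(rec_c):
    """建立三个模型时的两个阈值:
    (色度分量最小重建值+Mean+1)>>1 与 (色度分量最大重建值+Mean+1)>>1。"""
    mean = mean_threshold(rec_c)
    t1 = (min(rec_c) + mean + 1) >> 1
    t2 = (max(rec_c) + mean + 1) >> 1
    return t1, t2

rec = [60, 64, 68, 72]
print(mean_threshold(rec))           # 66
print(three_model_thresholds(rec))   # (63, 69)
```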
在一些实施例中,若当前图像块的色度分量的相邻参考值不大于至少一个阈值,则获得第一组的色度分量的相邻参考值C(m)和亮度分量的相邻参考值L(m);
若当前图像块的色度分量的相邻参考值大于至少一个阈值,则获得第二组的色度分量的相邻参考值C(k)和亮度分量的相邻参考值L(k)。
需要说明的是,以上述计算得出的均值Mean作为阈值,可以将当前图像块的色度分量的相邻参考值分为两部分,分别为C(m)和C(k);相应地,也可以将当前图像块的亮度分量的相邻参考值分为两部分,分别为L(m)和L(k)。
可以理解地,在得到第一组的色度分量的相邻参考值C(m)和亮度分量的相邻参考值L(m)以及第二组的色度分量的相邻参考值C(k)和亮度分量的相邻参考值L(k)之后, 可以将每一组的色度分量的相邻参考值和亮度分量的相邻参考值作为单独的训练集,即每一个分组都能训练出一组模型参数;因此,在上述实现方式中,具体地,根据至少两组色度分量的相邻参考值和亮度分量的相邻参考值,建立至少两个计算模型,包括:
根据L(m)、C(m)以及式(4),计算得出第一计算模型的第一参数α1和第一计算模型的第二参数β1:
$$\alpha_1=\frac{M\cdot\sum C(m)\cdot L(m)-\sum C(m)\cdot\sum L(m)}{M\cdot\sum C(m)\cdot C(m)-\left(\sum C(m)\right)^{2}},\qquad \beta_1=\frac{\sum L(m)-\alpha_1\cdot\sum C(m)}{M} \qquad (4)$$
根据L(k)、C(k)以及式(5),计算得出第二计算模型的第一参数α2和第二计算模型的第二参数β2:
$$\alpha_2=\frac{K\cdot\sum C(k)\cdot L(k)-\sum C(k)\cdot\sum L(k)}{K\cdot\sum C(k)\cdot C(k)-\left(\sum C(k)\right)^{2}},\qquad \beta_2=\frac{\sum L(k)-\alpha_2\cdot\sum C(k)}{K} \qquad (5)$$
根据第一计算模型的第一参数α1和第一计算模型的第二参数β1、第二计算模型的第一参数α2和第二计算模型的第二参数β2以及式(6),建立第一计算模型Pred 1L[i,j]和第二计算模型Pred 2L[i,j]:
$$Pred_L[i,j]=\begin{cases}\alpha_1\cdot Rec_C[i,j]+\beta_1, & Rec_C[i,j]\le Threshold\\ \alpha_2\cdot Rec_C[i,j]+\beta_2, & Rec_C[i,j]> Threshold\end{cases} \qquad (6)$$
其中,M表示第一组的色度分量的相邻参考值C(m)或亮度分量的相邻参考值L(m)的个数,K表示第二组的色度分量的相邻参考值C(k)或亮度分量的相邻参考值L(k)的个数,[i,j]表示当前图像块中采样点的位置坐标,i表示水平方向,j表示竖直方向;Threshold表示预设阈值,预设阈值是根据当前图像块所有采样点的色度分量的重建值得到的;Rec C[i,j]表示当前图像块中位置坐标为[i,j]的采样点的色度分量的重建值;Pred 1L[i,j]和Pred 2L[i,j]表示当前图像块中位置坐标为[i,j]的采样点的亮度分量的预测值。
在实际应用中,先根据第一色度分量的重建值计算阈值,根据阈值对第一色度分量的相邻参考值和亮度分量的相邻参考值进行分类,以两个计算模型为例来说,根据公式(4)得到α1和β1,根据公式(5)得到α2和β2,然后将第一色度分量的重建值代入至上述公式(6)就可以得到第一预测值。
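上述两个计算模型的分组训练与分段预测(式(4)至式(6))可以用如下示意性Python代码表达(仅为示例草图,函数名为本文假设,分母为零时的退化处理也是假设性的):

```python
def lsq(C, L):
    # 最小二乘推导一组线性参数(α, β),与式(4)/式(5)同形
    n = len(C)
    sc, sl = sum(C), sum(L)
    scl = sum(c * l for c, l in zip(C, L))
    scc = sum(c * c for c in C)
    d = n * scc - sc * sc
    a = (n * scl - sc * sl) / d if d else 0.0
    return a, (sl - a * sc) / n

def two_model_predict(rec_c, nbr_c, nbr_l, threshold):
    """按阈值把相邻参考值分成两组,各自训练参数后按式(6)分段预测亮度。"""
    g1 = [(c, l) for c, l in zip(nbr_c, nbr_l) if c <= threshold]
    g2 = [(c, l) for c, l in zip(nbr_c, nbr_l) if c > threshold]
    a1, b1 = lsq([c for c, _ in g1], [l for _, l in g1])
    a2, b2 = lsq([c for c, _ in g2], [l for _, l in g2])
    return [a1 * c + b1 if c <= threshold else a2 * c + b2 for c in rec_c]

# 两组参考值各自满足不同线性关系:低段 L=2C,高段 L=C+50
nbr_c = [10, 20, 60, 70]
nbr_l = [20, 40, 110, 120]
print(two_model_predict([15, 65], nbr_c, nbr_l, threshold=40))  # [30.0, 115.0]
```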
为了得到第二预测值,在一种可选的实施例中,根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值,包括:
根据第二色度分量的相邻参考值和亮度分量的相邻参考值,确定预测模型的参数;
根据第二色度分量的重建值,调用预测模型,得到第二预测值。
同理,与计算第一预测值的方式相同,针对线型模型来说,在实际应用中,将第二色度分量的相邻参考值和亮度分量的相邻参考值代入公式(2)得到α和β,然后,将第二色度分量的重建值代入公式(1)得到第二预测值;这样,利用线型模型可以得到第二预测值。
针对非线性模型来说,在实际应用中,先根据第二色度分量的重建值计算阈值,根 据阈值对第二色度分量的相邻参考值和亮度分量的相邻参考值进行分类,以两个计算模型为例来说,根据公式(4)得到α1和β1,根据公式(5)得到α2和β2,然后将第二色度分量的重建值代入至上述公式(6)就可以得到第二预测值。
这样,便可以得到第一预测值和第二预测值。
为了确定出亮度分量的预测值,在一种可选的实施例中,根据第一预测值和第二预测值,确定亮度分量的预测值,包括:
获取第一预测值的权重值和第二预测值的权重值;
根据第一预测值的权重值和第二预测值的权重值,对第一预测值和第二预测值加权求和,得到亮度分量的预测值。
在确定出第一预测值和第二预测值之后,可以对第一预测值和第二预测值进行融合处理,将融合处理后的值确定为亮度分量的预测值。
具体来说,可以先获取第一预测值的权重值和第二预测值的权重值,然后通过加权求和的方式对第一预测值和第二预测值进行加权求和,得到亮度分量的预测值。
其中,获取第一预测值的权重值和第二预测值的权重值的方式有多种,在一种可选的实施例中,获取第一预测值的权重值和第二预测值的权重值,可以包括:
从预设的权重值组中选取出一组权重值,将一组权重值中的其中一个值确定为第一预测值的权重值,将一组权重值中的另一个值确定为第二预测值的权重值。
也就是说,在编码器中预先设置有多组权重值,例如(0.5,0.5),(0.2,0.8),(0.3,0.7)和(0.1,0.9)等等,可以从预设的权重值组中选取出一组,若选取出的是(0.5,0.5),那么第一预测值的权重值为0.5,第二预测值的权重值为0.5,这样,便可以确定出第一预测值的权重值和第二预测值的权重值。
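两个预测值的加权融合可以用如下示意性Python代码表达(仅为示例草图,函数名与候选权重组均为本文假设的示例取值):

```python
def fuse_predictions(pred1, pred2, weights=(0.5, 0.5)):
    """按选定的一组权重值对两个亮度预测值逐点加权求和。"""
    w1, w2 = weights
    return [w1 * p + w2 * q for p, q in zip(pred1, pred2)]

# 预设的权重值组,例如(0.5,0.5)、(0.2,0.8)等;选择结果可在码流中以语法元素标识
candidates = [(0.5, 0.5), (0.2, 0.8), (0.3, 0.7), (0.1, 0.9)]
print(fuse_predictions([100, 120], [80, 100], candidates[0]))  # [90.0, 110.0]
```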
那么,在编码器端,可以在码流中对选择的结果以相应的语法元素进行标识,以方便解码器端预测出亮度分量的预测值。
在一种可选的实施例中,获取第一预测值的权重值和第二预测值的权重值,可以包括:
接收第一权重值和第二权重值;
将第一权重值确定为第一预测值的权重值,将第二权重值确定为第二预测值的权重值。
这里,可以预先通过用户输入的方式确定第一预测值的权重值和第二预测值的权重值,这样使得编码器端接收到第一权重值和第二权重值,那么,可以将第一权重值确定为第一预测值的权重值,将第二权重值确定为第二预测值的权重值;同时在码流中对选择的结果以相应的语法元素进行标识,以方便解码器端接收到权重值并预测出亮度分量的预测值。
另外,在实际应用中,当前图像块的色度分量的分辨率小于亮度分量的分辨率,可以选择对第一色度分量的重建值和第二色度分量的重建值进行上采样,也可以选择对第一预测值和第二预测值进行上采样,还可以在得到亮度分量的预测值之后对亮度分量的预测值进行上采样处理,这里,本申请实施例对此不作具体限定。
在对第一色度分量的重建值和第二色度分量的重建值的上采样中,在一种可选的实施例中,在针对当前图像块,获取第一色度分量的重建值,第二色度分量的重建值,第一色度分量的相邻参考值,第二色度分量的相邻参考值以及亮度分量的相邻参考值之后,该方法还包括:
当色度分量的分辨率小于亮度分量的分辨率时,分别对第一色度分量的重建值和第二色度分量的重建值进行上采样处理,得到第一色度分量处理后的重建值和第二色度分量处理后的重建值;
相应地,根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值,包括:
根据第一色度分量处理后的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值;
相应地,根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值,包括:
根据第二色度分量处理后的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值。
具体来说,由于当前图像块的色度分量的分辨率小于亮度分量的分辨率,那么为了提高预测精度,这里,在获取到第一色度分量的重建值和第二色度分量的重建值之后,需要分别对第一色度分量的重建值和第二色度分量的重建值进行上采样,使得第一色度分量的处理后的重建值的分辨率和第二色度分量处理后的重建值的分辨率分别与亮度分量的分辨率相同,以提高预测精度。
那么,在得到第一色度分量的处理后的重建值和第二色度分量处理后的重建值之后,可以根据第一色度分量处理后的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值,根据第二色度分量处理后的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值,以此得到的第一预测值和第二预测值来确定当前图像块的亮度分量的预测值。
这里,在对第一预测值和第二预测值的上采样中,在一种可选的实施例中,在根据第一预测值和第二预测值,确定亮度分量的预测值之前,该方法还包括:
当色度分量的分辨率小于亮度分量的分辨率时,分别对第一预测值和第二预测值进行上采样处理,得到处理后的第一预测值和处理后的第二预测值;
相应地,根据第一预测值和所述第二预测值,确定亮度分量的预测值,包括:
根据处理后的第一预测值和处理后的第二预测值,确定亮度分量的预测值。
这里,在色度分量的分辨率小于亮度分量的分辨率时,为了适应亮度分量的分辨率,在得到第一预测值和第二预测值之后,分别对第一预测值和第二预测值进行上采样,得到处理后的第一预测值和处理后的第二预测值,最后,再根据处理后的第一预测值和处理后的第二预测值,确定亮度分量的预测值。
在对亮度分量的预测值进行上采样中,在一种可选的实施例中,在根据第一预测值和第二预测值,确定亮度分量的预测值之后,该方法还包括:
当色度分量的分辨率小于亮度分量的分辨率时,对亮度分量的预测值上采样处理,得到处理后的亮度分量的预测值。
也就是说,选择在得到亮度分量的预测值之后,再对亮度分量的预测值进行上采样,以使得处理后的亮度分量的预测值的分辨率与亮度分量的分辨率相同。
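上述分辨率对齐的上采样可以用如下示意性Python代码表达(仅为示例草图:这里用最近邻复制演示把N×N的值放大到2N×2N,实际实现可采用插值滤波器,滤波器选择不在本文限定范围内):

```python
def upsample_2x(block):
    """最近邻2倍上采样(示意):把N×N的预测值放大到2N×2N的亮度分辨率。"""
    out = []
    for row in block:
        wide = [v for v in row for _ in (0, 1)]  # 水平方向逐点复制一次
        out.append(wide)
        out.append(list(wide))                   # 竖直方向复制整行
    return out

pred = [[1, 2], [3, 4]]
print(upsample_2x(pred))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```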
本申请实施例提供一种图像分量的预测方法,图4为本申请实施例提供的一种可选的图像分量的预测方法的流程示意图,如图4所示,该方法可以包括:
S401:在图像分量的解码中,对色度分量进行解码;
S402:根据解码的色度分量,获取亮度分量的预测值。
需要说明的是,本申请实施例中的S401和S402主要应用在如图2所示的帧内预测单元203部分和如图3所示的帧内预测单元303部分;也就是说,本申请实施例对于编码器和解码器可以同时作用,本申请实施例对此不作具体限定。
另外,在S401和S402中,在图像分量的解码中,可以是对一个色度分量进行解码,然后使用该色度分量来预测亮度分量,也可以是对一个色度分量进行解码,然后使用该色度分量来预测另一个色度分量,还可以是先对两个色度分量进行解码,然后使用一个 色度分量来预测亮度分量,或者使用两个色度分量来预测亮度分量,这里,本申请实施例对此不作具体限定。
为了得到亮度分量的预测值,在一种可选的实施例中,S402可以包括:
根据解码的色度分量中的第一色度分量和解码的色度分量中的第二色度分量进行预测,得到亮度分量的预测值;
其中,当第一色度分量为蓝色色度分量时,第二色度分量为红色色度分量;当第一色度分量为红色色度分量时,第二色度分量为蓝色色度分量。
这里,采用解码的色度分量中的第一色度分量和解码的第二色度分量来预测亮度分量的预测值,例如,第一色度分量为Cb,第二色度分量为Cr,亮度分量为Y,那么,可以通过Cb和Cr来预测Y,或者,第一色度分量为Cr,第二色度分量为Cb,亮度分量为Y,那么,可以通过Cb和Cr来预测Y。
进一步地,为了预测出亮度分量的预测值,在一种可选的实施例中,根据解码的色度分量中的第一色度分量和解码的色度分量中的第二色度分量进行预测,得到亮度分量的预测值,包括:
针对当前图像块,获取第一色度分量的重建值、第二色度分量的重建值、第一色度分量的相邻参考值、第二色度分量的相邻参考值以及亮度分量的相邻参考值;
根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值;
根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值;
根据第一预测值和第二预测值,确定亮度分量的预测值。
其中,当前图像块为当前待解码图像块,这里,针对当前图像块,需要获取当前图像块的第一色度分量的重建值,当前图像块的第二色度分量的重建值,当前图像块的相邻图像块的第一色度分量即为上述第一色度分量的相邻参考值,当前图像块的相邻图像块的第二色度分量即为上述第二色度分量的相邻参考值,当前图像块的相邻图像块的亮度分量即为亮度分量的相邻参考值;需要说明的是,上述当前图像块的相邻图像块为当前图像块的上一行图像块和左一列图像块。
那么,在获取到上述值之后,可以根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值来预测出第一预测值,根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值来预测出第二预测值,然后,再根据第一预测值和第二预测值得到当前图像块的亮度分量的预测值;这样,分别通过一种色度分量来预测一个预测值,最后将预测出的两个预测值进行融合得到当前图像块的亮度分量的预测值。
为了得到第一预测值,在一种可选的实施例中,根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值,包括:
根据第一色度分量的相邻参考值和亮度分量的相邻参考值,确定预测模型的参数;
根据第一色度分量的重建值,调用预测模型,得到第一预测值。
在得到第一色度分量的重建值和第一色度分量的相邻参考值之后,上述所采用的图像分量的预测模型可以为线型模型,也可以为非线性模型,来预测亮度分量,以得到第一预测值,这里,本申请实施例不作具体限定。
在实际应用中,将第一色度分量的相邻参考值和亮度分量的相邻参考值代入公式(2)得到α和β,然后,将第一色度分量的重建值代入公式(1)得到第一预测值;这样,利用线型模型可以得到第一预测值。
预测模型除了采用上述线型模型之外,还可以采用非线性模型,在上述线型模型的基础上,图像分量的预测模型还提出了非线性模型计算方法。
在实际应用中,先根据第一色度分量的重建值计算阈值,根据阈值对第一色度分量的相邻参考值和亮度分量的相邻参考值进行分类,以两个计算模型为例来说,根据公式(4)得到α1和β1,根据公式(5)得到α2和β2,然后将第一色度分量的重建值代入至上述公式(6)就可以得到第一预测值。
为了得到第二预测值,在一种可选的实施例中,根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值,包括:
根据第二色度分量的相邻参考值和亮度分量的相邻参考值,确定预测模型的参数;
根据第二色度分量的重建值,调用预测模型,得到第二预测值。
同理,与计算第一预测值的方式相同,针对线型模型来说,在实际应用中,将第二色度分量的相邻参考值和亮度分量的相邻参考值代入公式(2)得到α和β,然后,将第二色度分量的重建值代入公式(1)得到第二预测值;这样,利用线型模型可以得到第二预测值。
针对非线性模型来说,在实际应用中,先根据第二色度分量的重建值计算阈值,根据阈值对第二色度分量的相邻参考值和亮度分量的相邻参考值进行分类,以两个计算模型为例来说,根据公式(4)得到α1和β1,根据公式(5)得到α2和β2,然后将第二色度分量的重建值代入至上述公式(6)就可以得到第二预测值。
这样,便可以得到第一预测值和第二预测值。
为了确定出亮度分量的预测值,在一种可选的实施例中,根据第一预测值和第二预测值,确定亮度分量的预测值,包括:
获取第一预测值的权重值和第二预测值的权重值;
根据第一预测值的权重值和第二预测值的权重值,对第一预测值和第二预测值加权求和,得到亮度分量的预测值。
在确定出第一预测值和第二预测值之后,可以对第一预测值和第二预测值进行融合处理,将融合处理后的值确定为亮度分量的预测值。
具体来说,可以先获取第一预测值的权重值和第二预测值的权重值,然后通过加权求和的方式对第一预测值和第二预测值进行加权求和,得到亮度分量的预测值。
其中,获取第一预测值的权重值和第二预测值的权重值的方式有多种,在一种可选的实施例中,获取第一预测值的权重值和第二预测值的权重值,可以包括:
从预设的权重值组中选取出一组权重值,将一组权重值中的其中一个值确定为第一预测值的权重值,将一组权重值中的另一个值确定为第二预测值的权重值。
也就是说,在解码器中预先设置有多组权重值,例如(0.5,0.5),(0.2,0.8),(0.3,0.7)和(0.1,0.9)等等,可以从预设的权重值组中选取出一组,若选取出的是(0.5,0.5),那么第一预测值的权重值为0.5,第二预测值的权重值为0.5,这样,便可以确定出第一预测值的权重值和第二预测值的权重值。
那么,在解码器端,可以从码流中解析出标识选择结果的相应语法元素,以便预测出亮度分量的预测值。
在一种可选的实施例中,获取第一预测值的权重值和第二预测值的权重值,可以包括:
接收第一权重值和第二权重值;
将第一权重值确定为第一预测值的权重值,将第二权重值确定为第二预测值的权重值。
这里,可以预先通过用户输入的方式确定第一预测值的权重值和第二预测值的权重值,这样使得编码器端接收到第一权重值和第二权重值,那么,可以将第一权重值确定为第一预测值的权重值,将第二权重值确定为第二预测值的权重值;同时在码流中对选择的结果以相应的语法元素进行标识,以方便解码器端接收到权重值并预测出亮度分量的预测值。
另外,在实际应用中,当前图像块的色度分量的分辨率小于亮度分量的分辨率,可以选择对第一色度分量的重建值和第二色度分量的重建值进行上采样,也可以选择对第一预测值和第二预测值进行上采样,还可以在得到亮度分量的预测值之后对亮度分量的预测值进行上采样处理,这里,本申请实施例对此不作具体限定。
在对第一色度分量的重建值和第二色度分量的重建值的上采样中,在一种可选的实施例中,在针对当前图像块,获取第一色度分量的重建值,第二色度分量的重建值,第一色度分量的相邻参考值,第二色度分量的相邻参考值以及亮度分量的相邻参考值之后,该方法还包括:
当色度分量的分辨率小于亮度分量的分辨率时,分别对第一色度分量的重建值和第二色度分量的重建值进行上采样处理,得到第一色度分量处理后的重建值和第二色度分量处理后的重建值;
相应地,根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值,包括:
根据第一色度分量处理后的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值;
相应地,根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值,包括:
根据第二色度分量处理后的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值。
具体来说,由于当前图像块的色度分量的分辨率小于亮度分量的分辨率,那么为了提高预测精度,这里,在获取到第一色度分量的重建值和第二色度分量的重建值之后,需要分别对第一色度分量的重建值和第二色度分量的重建值进行上采样,使得第一色度分量的处理后的重建值的分辨率和第二色度分量处理后的重建值的分辨率分别与亮度分量的分辨率相同,以提高预测精度。
那么,在得到第一色度分量的处理后的重建值和第二色度分量处理后的重建值之后,可以根据第一色度分量处理后的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值,根据第二色度分量处理后的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值,以此得到的第一预测值和第二预测值来确定当前图像块的亮度分量的预测值。
这里,在对第一预测值和第二预测值的上采样中,在一种可选的实施例中,在根据第一预测值和第二预测值,确定亮度分量的预测值之前,该方法还包括:
当色度分量的分辨率小于亮度分量的分辨率时,分别对第一预测值和第二预测值进行上采样处理,得到处理后的第一预测值和处理后的第二预测值;
相应地,根据第一预测值和所述第二预测值,确定亮度分量的预测值,包括:
根据处理后的第一预测值和处理后的第二预测值,确定亮度分量的预测值。
这里,在色度分量的分辨率小于亮度分量的分辨率时,为了适应亮度分量的分辨率,在得到第一预测值和第二预测值之后,分别对第一预测值和第二预测值进行上采样,得到处理后的第一预测值和处理后的第二预测值,最后,再根据处理后的第一预测值和处理后的第二预测值,确定亮度分量的预测值。
在对亮度分量的预测值进行上采样中,在一种可选的实施例中,在根据第一预测值和第二预测值,确定亮度分量的预测值之后,该方法还包括:
当色度分量的分辨率小于亮度分量的分辨率时,对亮度分量的预测值上采样处理,得到处理后的亮度分量的预测值。
也就是说,选择在得到亮度分量的预测值之后,再对亮度分量的预测值进行上采样,以使得处理后的亮度分量的预测值的分辨率与亮度分量的分辨率相同。
本申请实施例提供了一种图像分量的预测方法,该方法包括:在图像分量的编码中,对色度分量进行编码,根据编码的色度分量,获取亮度分量的预测值;也就是说,在本申请实施例中,通过先对色度分量进行编解码,在通过编解码出的色度分量来预测亮度分量,这样,通过编解码得到的色度分量来预测亮度分量,即先对较平坦的色度分量进行编解码,再基于编解码得到的色度分量来预测具有丰富纹理的亮度分量,能够提高对亮度分量预测的准确性,使得亮度分量的预测值更加接近真实的亮度分量的像素值。
实施例二
基于同一发明构思下,图5为本申请实施例提出的一种可选的编码器的结构示意图,如图5所示,本申请实施例提出的编码器可以包括编码模块51和第一获取模块52;其中,
编码模块51,配置为在图像分量的编码中,对色度分量进行编码;
第一获取模块52,配置为根据编码的色度分量,获取亮度分量的预测值。
进一步地,第一获取模块52,具体配置为:
根据编码的色度分量中的第一色度分量和编码的色度分量中的第二色度分量进行预测,得到亮度分量的预测值;
其中,当所述第一色度分量为蓝色色度分量时,所述第二色度分量为红色色度分量;当所述第一色度分量为红色色度分量时,所述第二色度分量为蓝色色度分量。
进一步地,第一获取模块52根据编码的色度分量中的第一色度分量和编码的色度分量中的第二色度分量进行预测,得到亮度分量的预测值中,包括:
针对当前图像块,获取第一色度分量的重建值、第二色度分量的重建值、第一色度分量的相邻参考值、第二色度分量的相邻参考值以及亮度分量的相邻参考值;
根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值;
根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值;
根据第一预测值和第二预测值,确定亮度分量的预测值。
进一步地,第一获取模块52根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值中,包括:
根据第一色度分量的相邻参考值和亮度分量的相邻参考值,确定预测模型的参数;
根据第一色度分量的重建值,调用预测模型,得到第一预测值。
进一步地,第一获取模块52根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值中,包括:
根据第二色度分量的相邻参考值和亮度分量的相邻参考值,确定预测模型的参数;
根据第二色度分量的重建值,调用预测模型,得到第二预测值。
进一步地,第一获取模块52根据第一预测值和第二预测值,确定亮度分量的预测值中,包括:
获取第一预测值的权重值和第二预测值的权重值;
根据第一预测值的权重值和第二预测值的权重值,对第一预测值和第二预测值加权 求和,得到亮度分量的预测值。
进一步地,第一获取模块52获取第一预测值的权重值和第二预测值的权重值中,包括:
从预设的权重值组中选取出一组权重值,将一组权重值中的其中一个值确定为第一预测值的权重值,将一组权重值中的另一个值确定为第二预测值的权重值。
进一步地,第一获取模块52,具体配置为:
在针对当前图像块,获取第一色度分量的重建值、第二色度分量的重建值、第一色度分量的相邻参考值、第二色度分量的相邻参考值以及亮度分量的相邻参考值之后,当色度分量的分辨率小于亮度分量的分辨率时,分别对第一色度分量的重建值和第二色度分量的重建值进行上采样处理,得到第一色度分量处理后的重建值和第二色度分量处理后的重建值;
相应地,第一获取模块52根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值中,包括:
根据第一色度分量处理后的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值;
相应地,第一获取模块52根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值中,包括:
根据第二色度分量处理后的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值。
在一种可选的实施例中,第一获取模块52,具体还配置为:
在根据第一预测值和第二预测值,确定亮度分量的预测值之前,当色度分量的分辨率小于亮度分量的分辨率时,分别对第一预测值和第二预测值进行上采样处理,得到处理后的第一预测值和处理后的第二预测值;
相应地,第一获取模块52根据第一预测值和第二预测值,确定亮度分量的预测值中,包括:
根据处理后的第一预测值和处理后的第二预测值,确定亮度分量的预测值。
在一种可选的实施例中,第一获取模块52,具体还配置为:
在根据第一预测值和第二预测值,确定亮度分量的预测值之后,当色度分量的分辨率小于亮度分量的分辨率时,对亮度分量的预测值上采样处理,得到处理后的亮度分量的预测值。
基于同一发明构思下,图6为本申请实施例提出的一种可选的解码器的结构示意图,如图6所示,本申请实施例提出的解码器可以包括解码模块61和第二获取模块62;其中,
解码模块61,配置为在图像分量的解码中,对色度分量进行解码;
第二获取模块62,配置为根据解码的色度分量,获取亮度分量的预测值。
进一步地,第二获取模块62,具体配置为:
根据解码的色度分量中的第一色度分量和解码的色度分量中的第二色度分量进行预测,得到亮度分量的预测值;
其中,当第一色度分量为蓝色色度分量时,第二色度分量为红色色度分量;当第一色度分量为红色色度分量时,第二色度分量为蓝色色度分量。
进一步地,第二获取模块62根据解码的色度分量中的第一色度分量和解码的色度分量中的第二色度分量进行预测,得到亮度分量的预测值中,包括:
针对当前图像块,获取第一色度分量的重建值、第二色度分量的重建值、第一色度分量的相邻参考值、第二色度分量的相邻参考值以及亮度分量的相邻参考值;
根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值;
根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值;
根据第一预测值和所述第二预测值,确定亮度分量的预测值。
进一步地,第二获取模块62根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值中,包括:
根据第一色度分量的相邻参考值和亮度分量的相邻参考值,确定预测模型的参数;
根据第一色度分量的重建值,调用预测模型,得到第一预测值。
进一步地,第二获取模块62根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值中,包括:
根据第二色度分量的相邻参考值和亮度分量的相邻参考值,确定预测模型的参数;
根据第二色度分量的重建值,调用预测模型,得到第二预测值。
进一步地,第二获取模块62根据第一预测值和第二预测值,确定亮度分量的预测值中,包括:
获取第一预测值的权重值和第二预测值的权重值;
根据第一预测值的权重值和第二预测值的权重值,对第一预测值和第二预测值加权求和,得到亮度分量的预测值。
进一步地,第二获取模块62获取第一预测值的权重值和第二预测值的权重值中,包括:
从预设的权重值组中选取出一组权重值,将一组权重值中的其中一个值确定为第一预测值的权重值,将一组权重值中的另一个值确定为第二预测值的权重值。
进一步地,第二获取模块62,具体配置为:
在针对当前图像块,获取第一色度分量的重建值、第二色度分量的重建值、第一色度分量的相邻参考值、第二色度分量的相邻参考值以及亮度分量的相邻参考值之后,当色度分量的分辨率小于亮度分量的分辨率时,分别对第一色度分量的重建值和第二色度分量的重建值进行上采样处理,得到第一色度分量处理后的重建值和第二色度分量处理后的重建值;
相应地,第二获取模块62根据第一色度分量的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值中,包括:
根据第一色度分量处理后的重建值,第一色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第一预测值;
相应地,第二获取模块62根据第二色度分量的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值中,包括:
根据第二色度分量处理后的重建值,第二色度分量的相邻参考值和亮度分量的相邻参考值进行预测,得到亮度分量的第二预测值。
在一种可选的实施例中,第二获取模块62,具体还配置为:
在根据第一预测值和第二预测值,确定亮度分量的预测值之前,当色度分量的分辨率小于亮度分量的分辨率时,分别对第一预测值和第二预测值进行上采样处理,得到处理后的第一预测值和处理后的第二预测值;
相应地,第二获取模块62根据第一预测值和第二预测值,确定亮度分量的预测值中,包括:
根据处理后的第一预测值和处理后的第二预测值,确定亮度分量的预测值。
在一种可选的实施例中,第二获取模块62,具体还配置为:
在根据第一预测值和第二预测值,确定亮度分量的预测值之后,当色度分量的分辨率小于亮度分量的分辨率时,对亮度分量的预测值上采样处理,得到处理后的亮度分量的预测值。
图7为本申请实施例提出的另一种可选的编码器的结构示意图,如图7所示,本申请实施例提出的编码器700还可以包括处理器71以及存储有处理器71可执行指令的存储介质72,存储介质72通过通信总线73依赖处理器71执行操作,当指令被处理器71执行时,执行上述一个或多个实施例所述的图像分量的预测方法。
需要说明的是,实际应用时,终端中的各个组件通过通信总线73耦合在一起。可理解,通信总线73用于实现这些组件之间的连接通信。通信总线73除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图7中将各种总线都标为通信总线73。
图8为本申请实施例提出的另一种可选的解码器的结构示意图,如图8所示,本申请实施例提出的解码器800还可以包括处理器81以及存储有处理器81可执行指令的存储介质82,存储介质82通过通信总线83依赖处理器81执行操作,当指令被处理器81执行时,执行上述一个或多个实施例所述的图像分量的预测方法。
需要说明的是,实际应用时,终端中的各个组件通过通信总线83耦合在一起。可理解,通信总线83用于实现这些组件之间的连接通信。通信总线83除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图8中将各种总线都标为通信总线83。
本申请实施例提供了一种计算机存储介质,存储有可执行指令,当所述可执行指令被一个或多个处理器执行的时候,所述处理器执行上述一个或多个实施例所述的图像分量的预测方法。
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本文描述的***和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
而处理器可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存 储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
可以理解的是,本文描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。
对于软件实现,可通过执行本文所述功能的模块(例如过程、函数等)来实现本文所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机、计算机、服务器、或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,这些均属于本申请的保护之内。
工业实用性
本申请实施例提供了一种图像分量的预测方法、编码器以及计算机存储介质,包括:在图像分量的编码中,对色度分量进行编码,根据编码的色度分量,获取亮度分量的预测值,能够提高对亮度分量预测的准确性,使得亮度分量的预测值更加接近真实的亮度分量的像素值。

Claims (25)

  1. 一种图像分量的预测方法,所述方法包括:
    在图像分量的编码中,对色度分量进行编码;
    根据编码的色度分量,获取亮度分量的预测值。
  2. 根据权利要求1所述的方法,其中,所述根据编码的色度分量,获取亮度分量的预测值,包括:
    根据所述编码的色度分量中的第一色度分量和所述编码的色度分量中的第二色度分量进行预测,得到所述亮度分量的预测值;
    其中,当所述第一色度分量为蓝色色度分量时,所述第二色度分量为红色色度分量;当所述第一色度分量为红色色度分量时,所述第二色度分量为蓝色色度分量。
  3. 根据权利要求2所述的方法,其中,所述根据所述编码的色度分量中的第一色度分量和所述编码的色度分量中的第二色度分量进行预测,得到所述亮度分量的预测值,包括:
    针对当前图像块,获取所述第一色度分量的重建值、所述第二色度分量的重建值、所述第一色度分量的相邻参考值、所述第二色度分量的相邻参考值以及所述亮度分量的相邻参考值;
    根据所述第一色度分量的重建值,所述第一色度分量的相邻参考值和所述亮度分量的相邻参考值进行预测,得到所述亮度分量的第一预测值;
    根据所述第二色度分量的重建值,所述第二色度分量的相邻参考值和所述亮度分量的相邻参考值进行预测,得到所述亮度分量的第二预测值;
    根据所述第一预测值和所述第二预测值,确定所述亮度分量的预测值。
  4. 根据权利要求3所述的方法,其中,所述根据所述第一色度分量的重建值,所述第一色度分量的相邻参考值和所述亮度分量的相邻参考值进行预测,得到所述亮度分量的第一预测值,包括:
    根据所述第一色度分量的相邻参考值和所述亮度分量的相邻参考值,确定预测模型的参数;
    根据所述第一色度分量的重建值,调用所述预测模型,得到所述第一预测值。
  5. 根据权利要求3所述的方法,其中,所述根据所述第二色度分量的重建值,所述第二色度分量的相邻参考值和所述亮度分量的相邻参考值进行预测,得到所述亮度分量的第二预测值,包括:
    根据所述第二色度分量的相邻参考值和所述亮度分量的相邻参考值,确定预测模型的参数;
    根据所述第二色度分量的重建值,调用所述预测模型,得到所述第二预测值。
  6. 根据权利要求3所述的方法,其中,所述根据所述第一预测值和所述第二预测值,确定所述亮度分量的预测值,包括:
    获取所述第一预测值的权重值和所述第二预测值的权重值;
    根据所述第一预测值的权重值和所述第二预测值的权重值,对所述第一预测值和所述第二预测值加权求和,得到所述亮度分量的预测值。
  7. 根据权利要求6所述的方法,其中,所述获取所述第一预测值的权重值和所述第二预测值的权重值,包括:
    从预设的权重值组中选取出一组权重值,将一组权重值中的其中一个值确定为所述第一预测值的权重值,将一组权重值中的另一个值确定为所述第二预测值的权重值。
  8. 根据权利要求3所述的方法,其中,在针对当前图像块,获取所述第一色度分量的重建值、所述第二色度分量的重建值、所述第一色度分量的相邻参考值、所述第二色度分量的相邻参考值以及所述亮度分量的相邻参考值之后,所述方法还包括:
    当所述色度分量的分辨率小于所述亮度分量的分辨率时,分别对所述第一色度分量的重建值和所述第二色度分量的重建值进行上采样处理,得到所述第一色度分量处理后的重建值和所述第二色度分量处理后的重建值;
    相应地,所述根据所述第一色度分量的重建值,所述第一色度分量的相邻参考值和所述亮度分量的相邻参考值进行预测,得到所述亮度分量的第一预测值,包括:
    根据所述第一色度分量处理后的重建值,所述第一色度分量的相邻参考值和所述亮度分量的相邻参考值进行预测,得到所述亮度分量的第一预测值;
    相应地,所述根据所述第二色度分量的重建值,所述第二色度分量的相邻参考值和所述亮度分量的相邻参考值进行预测,得到所述亮度分量的第二预测值,包括:
    根据所述第二色度分量处理后的重建值,所述第二色度分量的相邻参考值和所述亮度分量的相邻参考值进行预测,得到所述亮度分量的第二预测值。
  9. 根据权利要求3所述的方法,其中,在所述根据所述第一预测值和所述第二预测值,确定所述亮度分量的预测值之前,所述方法还包括:
    当所述色度分量的分辨率小于所述亮度分量的分辨率时,分别对所述第一预测值和所述第二预测值进行上采样处理,得到处理后的第一预测值和处理后的第二预测值;
    相应地,所述根据所述第一预测值和所述第二预测值,确定所述亮度分量的预测值,包括:
    根据处理后的第一预测值和处理后的第二预测值,确定所述亮度分量的预测值。
  10. The method according to claim 3, wherein after the determining the predicted value of the luminance component according to the first predicted value and the second predicted value, the method further comprises:
    when a resolution of the chrominance components is lower than a resolution of the luminance component, performing up-sampling on the predicted value of the luminance component to obtain a processed predicted value of the luminance component.
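Claims 8 to 10 describe three alternative points at which an up-sampling step compensates for the lower chrominance resolution (e.g. 4:2:0 sampling): on the reconstructed chrominance values, on the two intermediate predictions, or on the final luminance prediction. The claims leave the up-sampling filter unspecified; the sketch below uses simple 2x nearest-neighbour replication as a stand-in (real codecs typically use interpolation filters instead).

```python
# Hypothetical 2x up-sampling by sample replication, standing in for
# the unspecified up-sampling filter of claims 8-10.

def upsample_2x(plane):
    """Double a plane's width and height by repeating each sample."""
    out = []
    for row in plane:
        widened = [v for v in row for _ in range(2)]  # repeat horizontally
        out.append(widened)
        out.append(list(widened))  # repeat the row vertically
    return out
```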
  11. A method for predicting an image component, the method comprising:
    decoding a chrominance component in decoding of an image; and
    obtaining a predicted value of a luminance component according to the decoded chrominance component.
  12. The method according to claim 11, wherein the obtaining the predicted value of the luminance component according to the decoded chrominance component comprises:
    performing prediction according to a first chrominance component of the decoded chrominance component and a second chrominance component of the decoded chrominance component to obtain the predicted value of the luminance component,
    wherein when the first chrominance component is a blue chrominance component, the second chrominance component is a red chrominance component; and when the first chrominance component is a red chrominance component, the second chrominance component is a blue chrominance component.
  13. The method according to claim 12, wherein the performing prediction according to the first chrominance component of the decoded chrominance component and the second chrominance component of the decoded chrominance component to obtain the predicted value of the luminance component comprises:
    for a current image block, obtaining a reconstructed value of the first chrominance component, a reconstructed value of the second chrominance component, a neighbouring reference value of the first chrominance component, a neighbouring reference value of the second chrominance component, and a neighbouring reference value of the luminance component;
    performing prediction according to the reconstructed value of the first chrominance component, the neighbouring reference value of the first chrominance component, and the neighbouring reference value of the luminance component to obtain a first predicted value of the luminance component;
    performing prediction according to the reconstructed value of the second chrominance component, the neighbouring reference value of the second chrominance component, and the neighbouring reference value of the luminance component to obtain a second predicted value of the luminance component; and
    determining the predicted value of the luminance component according to the first predicted value and the second predicted value.
  14. The method according to claim 13, wherein the performing prediction according to the reconstructed value of the first chrominance component, the neighbouring reference value of the first chrominance component, and the neighbouring reference value of the luminance component to obtain the first predicted value of the luminance component comprises:
    determining parameters of a prediction model according to the neighbouring reference value of the first chrominance component and the neighbouring reference value of the luminance component; and
    inputting the reconstructed value of the first chrominance component into the prediction model to obtain the first predicted value.
  15. The method according to claim 13, wherein the performing prediction according to the reconstructed value of the second chrominance component, the neighbouring reference value of the second chrominance component, and the neighbouring reference value of the luminance component to obtain the second predicted value of the luminance component comprises:
    determining parameters of a prediction model according to the neighbouring reference value of the second chrominance component and the neighbouring reference value of the luminance component; and
    inputting the reconstructed value of the second chrominance component into the prediction model to obtain the second predicted value.
  16. The method according to claim 13, wherein the determining the predicted value of the luminance component according to the first predicted value and the second predicted value comprises:
    obtaining a weight value of the first predicted value and a weight value of the second predicted value; and
    performing a weighted summation on the first predicted value and the second predicted value according to the weight value of the first predicted value and the weight value of the second predicted value to obtain the predicted value of the luminance component.
  17. The method according to claim 16, wherein the obtaining the weight value of the first predicted value and the weight value of the second predicted value comprises:
    selecting a group of weight values from preset groups of weight values, determining one value in the group of weight values as the weight value of the first predicted value, and determining the other value in the group of weight values as the weight value of the second predicted value.
  18. The method according to claim 13, wherein after the obtaining, for the current image block, the reconstructed value of the first chrominance component, the reconstructed value of the second chrominance component, the neighbouring reference value of the first chrominance component, the neighbouring reference value of the second chrominance component, and the neighbouring reference value of the luminance component, the method further comprises:
    when a resolution of the chrominance components is lower than a resolution of the luminance component, performing up-sampling on the reconstructed value of the first chrominance component and the reconstructed value of the second chrominance component respectively to obtain a processed reconstructed value of the first chrominance component and a processed reconstructed value of the second chrominance component;
    correspondingly, the performing prediction according to the reconstructed value of the first chrominance component, the neighbouring reference value of the first chrominance component, and the neighbouring reference value of the luminance component to obtain the first predicted value of the luminance component comprises:
    performing prediction according to the processed reconstructed value of the first chrominance component, the neighbouring reference value of the first chrominance component, and the neighbouring reference value of the luminance component to obtain the first predicted value of the luminance component; and
    correspondingly, the performing prediction according to the reconstructed value of the second chrominance component, the neighbouring reference value of the second chrominance component, and the neighbouring reference value of the luminance component to obtain the second predicted value of the luminance component comprises:
    performing prediction according to the processed reconstructed value of the second chrominance component, the neighbouring reference value of the second chrominance component, and the neighbouring reference value of the luminance component to obtain the second predicted value of the luminance component.
  19. The method according to claim 13, wherein before the determining the predicted value of the luminance component according to the first predicted value and the second predicted value, the method further comprises:
    when a resolution of the chrominance components is lower than a resolution of the luminance component, performing up-sampling on the first predicted value and the second predicted value respectively to obtain a processed first predicted value and a processed second predicted value;
    correspondingly, the determining the predicted value of the luminance component according to the first predicted value and the second predicted value comprises:
    determining the predicted value of the luminance component according to the processed first predicted value and the processed second predicted value.
  20. The method according to claim 13, wherein after the determining the predicted value of the luminance component according to the first predicted value and the second predicted value, the method further comprises:
    when a resolution of the chrominance components is lower than a resolution of the luminance component, performing up-sampling on the predicted value of the luminance component to obtain a processed predicted value of the luminance component.
  21. An encoder, wherein the encoder comprises:
    an encoding module configured to encode a chrominance component in encoding of image components; and
    a first obtaining module configured to obtain a predicted value of a luminance component according to the encoded chrominance component.
  22. A decoder, wherein the decoder comprises:
    a decoding module configured to decode a chrominance component in decoding of image components; and
    a second obtaining module configured to obtain a predicted value of a luminance component according to the decoded chrominance component.
  23. An encoder, wherein the encoder comprises:
    a processor and a storage medium storing instructions executable by the processor, the storage medium depending on the processor to perform operations through a communication bus, wherein when the instructions are executed by the processor, the method for predicting an image component according to any one of claims 1 to 10 is performed.
  24. A decoder, wherein the decoder comprises:
    a processor and a storage medium storing instructions executable by the processor, the storage medium depending on the processor to perform operations through a communication bus, wherein when the instructions are executed by the processor, the method for predicting an image component according to any one of claims 11 to 20 is performed.
  25. A computer-readable storage medium storing executable instructions which, when executed by one or more processors, cause the one or more processors to perform the method for predicting an image component according to any one of claims 1 to 10, or the method for predicting an image component according to any one of claims 11 to 20.
PCT/CN2019/124114 2019-03-25 2019-12-09 Colour component prediction method, encoder, decoder, and computer storage medium WO2020192180A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201980091480.8A CN113412621A (zh) 2019-03-25 2019-12-09 Colour component prediction method, encoder, decoder, and computer storage medium
CN202111093518.0A CN113840144B (zh) 2019-03-25 2019-12-09 Colour component prediction method, encoder, decoder, and computer storage medium
JP2021556939A JP2022528333A (ja) 2019-03-25 2019-12-09 Colour component prediction method, encoder, decoder, and computer storage medium
KR1020217034441A KR20210141683A (ko) 2019-03-25 2019-12-09 Colour component prediction method, encoder, decoder, and computer storage medium
EP19920890.1A EP3930329A4 (en) 2019-03-25 2019-12-09 IMAGE COMPONENT PREDICTION METHOD, ENCODER, DECODER AND COMPUTER STORAGE MEDIA
US17/477,193 US20220007042A1 (en) 2019-03-25 2021-09-16 Colour component prediction method, encoder, decoder, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962823621P 2019-03-25 2019-03-25
US62/823,621 2019-03-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/477,193 Continuation US20220007042A1 (en) 2019-03-25 2021-09-16 Colour component prediction method, encoder, decoder, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020192180A1 true WO2020192180A1 (zh) 2020-10-01

Family

ID=72608524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/124114 WO2020192180A1 (zh) 2019-03-25 2019-12-09 Colour component prediction method, encoder, decoder, and computer storage medium

Country Status (6)

Country Link
US (1) US20220007042A1 (zh)
EP (1) EP3930329A4 (zh)
JP (1) JP2022528333A (zh)
KR (1) KR20210141683A (zh)
CN (2) CN113412621A (zh)
WO (1) WO2020192180A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1859576A * 2005-10-11 2006-11-08 Huawei Technologies Co., Ltd. Up-sampling method and system for spatially layered coded video images
CN101335902B * 2007-06-25 2010-06-02 Huawei Technologies Co., Ltd. Weighted prediction method and device in video encoding and decoding
CN103096055A * 2011-11-04 2013-05-08 Huawei Technologies Co., Ltd. Method and device for intra-frame prediction and decoding of an image signal
US20150365677A1 * 2002-09-06 2015-12-17 Fujitsu Limited Motion picture encoding device and motion picture decoding device

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5619590A (en) * 1994-10-06 1997-04-08 Eastman Kodak Company System for electronic image signal processing to provide a tonescale corrected full resolution luminance and two half resolution chrominance signals
US9693070B2 (en) * 2011-06-24 2017-06-27 Texas Instruments Incorporated Luma-based chroma intra-prediction for video coding
EP2805509A4 (en) * 2012-01-19 2015-07-29 Magnum Semiconductor Inc METHOD AND DEVICES FOR PROVIDING AN ADAPTIVE UPGRADE MODE WITH REDUCED RESOLUTION
CN104380741B (zh) * 2012-01-19 2018-06-05 Huawei Technologies Co., Ltd. Reference pixel reduction for LM intra prediction
WO2015009068A1 (ko) * 2013-07-16 2015-01-22 Samsung Electronics Co., Ltd. Scalable video encoding method and apparatus, and scalable video decoding method and apparatus, using an up-sampling filter accompanied by bit-depth and colour-format conversion
WO2015054307A2 (en) * 2013-10-07 2015-04-16 Vid Scale, Inc. Combined scalability processing for multi-layer video coding
WO2015053112A1 (ja) * 2013-10-11 2015-04-16 Sony Corporation Decoding device and decoding method, and encoding device and encoding method
KR101683175B1 (ko) * 2014-03-06 2016-12-06 Samsung Electronics Co., Ltd. Image decoding method and apparatus therefor, and image encoding method and apparatus therefor
GB201500719D0 (en) * 2015-01-15 2015-03-04 Barco Nv Method for chromo reconstruction
US9854201B2 (en) * 2015-01-16 2017-12-26 Microsoft Technology Licensing, Llc Dynamically updating quality to higher chroma sampling rate
US10321140B2 (en) * 2015-01-22 2019-06-11 Mediatek Singapore Pte. Ltd. Method of video coding for chroma components
US10455249B2 (en) * 2015-03-20 2019-10-22 Qualcomm Incorporated Downsampling process for linear model prediction mode
US20170150156A1 (en) * 2015-11-25 2017-05-25 Qualcomm Incorporated Illumination compensation with non-square predictive blocks in video coding
WO2017139937A1 (en) * 2016-02-18 2017-08-24 Mediatek Singapore Pte. Ltd. Advanced linear model prediction for chroma coding
US10484712B2 (en) * 2016-06-08 2019-11-19 Qualcomm Incorporated Implicit coding of reference line index used in intra prediction
US10326986B2 (en) * 2016-08-15 2019-06-18 Qualcomm Incorporated Intra video coding using a decoupled tree structure
US10419757B2 (en) * 2016-08-31 2019-09-17 Qualcomm Incorporated Cross-component filter
US10652575B2 (en) * 2016-09-15 2020-05-12 Qualcomm Incorporated Linear model chroma intra prediction for video coding
US11025903B2 (en) * 2017-01-13 2021-06-01 Qualcomm Incorporated Coding video data using derived chroma mode
CN109274969B (zh) * 2017-07-17 2020-12-22 Huawei Technologies Co., Ltd. Chroma prediction method and device
CN118075473A (zh) * 2017-11-28 2024-05-24 LX Semicon Co., Ltd. Image encoding/decoding method, image data transmission method, and storage medium
GB2571313B (en) * 2018-02-23 2022-09-21 Canon Kk New sample sets and new down-sampling schemes for linear component sample prediction
CN116708837A (zh) * 2018-07-02 2023-09-05 LG Electronics Inc. Encoding/decoding device and data transmission device
WO2020015648A1 (en) * 2018-07-16 2020-01-23 Huawei Technologies Co., Ltd. Video encoder, video decoder, and corresponding encoding and decoding methods
CN113132722B (zh) * 2018-08-09 2023-02-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Prediction method and device for video image component, and computer storage medium
KR20200028856A (ko) * 2018-09-07 2020-03-17 Ki Baek Kim Image encoding/decoding method and apparatus using intra prediction
CN113691807B (zh) * 2018-11-08 2023-05-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video signal encoding/decoding method and device therefor
KR20210108389A (ko) 2019-01-02 2021-09-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Chroma intra prediction method, device, and computer storage medium
KR20210110796A (ko) 2019-01-10 2021-09-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Picture decoding method, decoder, and computer storage medium
CN113747179B (zh) * 2019-03-07 2023-08-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Loop filtering implementation method and device, and computer storage medium
CN116547969A (zh) * 2020-11-19 2023-08-04 Huawei Technologies Co., Ltd. Method for processing chroma sub-sampling formats in machine-learning-based picture coding


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3930329A4 *

Also Published As

Publication number Publication date
CN113840144A (zh) 2021-12-24
KR20210141683A (ko) 2021-11-23
US20220007042A1 (en) 2022-01-06
CN113412621A (zh) 2021-09-17
EP3930329A1 (en) 2021-12-29
EP3930329A4 (en) 2022-07-20
CN113840144B (zh) 2023-06-02
JP2022528333A (ja) 2022-06-10

Similar Documents

Publication Publication Date Title
CN113068028B (zh) Prediction method and device for video image component, and computer storage medium
WO2021120122A1 (zh) Image component prediction method, encoder, decoder, and storage medium
JP7480170B2 (ja) Image prediction method, encoder, decoder, and storage medium
CN113068025B (zh) Decoding prediction method and device, and computer storage medium
WO2020186763A1 (zh) Image component prediction method, encoder, decoder, and storage medium
CN113766233B (zh) Image prediction method, encoder, decoder, and storage medium
WO2020192180A1 (zh) Colour component prediction method, encoder, decoder, and computer storage medium
WO2020056767A1 (zh) Prediction method and device for video image component, and computer storage medium
CN112970257A (zh) Decoding prediction method and device, and computer storage medium
RU2805048C2 (ru) Image prediction method, encoder, and decoder
JP7305769B2 (ja) Prediction direction determination method, decoder, and computer storage medium
WO2023141781A1 (zh) Encoding and decoding method and apparatus, encoding device, decoding device, and storage medium
WO2021056224A1 (zh) Method for determining predicted value, encoder, decoder, and storage medium
TW202325023A (zh) Prediction method and device based on a cross-component linear model in a video coding system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19920890

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021556939

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019920890

Country of ref document: EP

Effective date: 20210922

ENP Entry into the national phase

Ref document number: 20217034441

Country of ref document: KR

Kind code of ref document: A