WO2020192084A1 - Image prediction method, encoder, decoder, and storage medium - Google Patents
Image prediction method, encoder, decoder, and storage medium
- Publication number
- WO2020192084A1 (PCT/CN2019/110809)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image component
- current block
- component
- image
- processing
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/149—Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
Definitions
- the embodiments of the present application relate to the technical field of video coding and decoding, and in particular, to an image prediction method, an encoder, a decoder, and a storage medium.
- In the latest video coding standard H.266/Versatile Video Coding (VVC), cross-component prediction is allowed; among such techniques, Cross-Component Linear Model prediction (CCLM) is a typical one.
- With cross-component prediction, one component can be used to predict another component (or its residual), such as predicting a chrominance component from the luminance component, predicting the luminance component from a chrominance component, or predicting one chrominance component from the other chrominance component, and so on.
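As a hedged illustration of this kind of linear cross-component mapping, the model can be sketched in Python. The function names and the least-squares fit over neighbouring sample pairs are assumptions for illustration (VVC's CCLM actually derives the parameters from max/min neighbour pairs):

```python
def derive_linear_model(luma_neighbors, chroma_neighbors):
    """Fit pred_C = alpha * rec_Y + beta from neighbouring sample pairs
    (a least-squares sketch, not the standard's exact derivation)."""
    n = len(luma_neighbors)
    mean_y = sum(luma_neighbors) / n
    mean_c = sum(chroma_neighbors) / n
    num = sum((y - mean_y) * (c - mean_c)
              for y, c in zip(luma_neighbors, chroma_neighbors))
    den = sum((y - mean_y) ** 2 for y in luma_neighbors)
    alpha = num / den if den else 0.0
    beta = mean_c - alpha * mean_y
    return alpha, beta

def predict_chroma(rec_luma_block, alpha, beta):
    """Map each reconstructed luma sample to a chroma prediction."""
    return [[alpha * y + beta for y in row] for row in rec_luma_block]
```

With neighbours whose chroma is simply luma plus five, the fit recovers alpha = 1 and beta = 5, and the block prediction follows the same line.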
- the embodiments of the present application provide an image prediction method, an encoder, a decoder, and a storage medium, which balance the statistical characteristics of the components before cross-component prediction, thereby not only improving the prediction efficiency, but also improving the coding and decoding efficiency of video images.
- an embodiment of the present application provides an image prediction method applied to an encoder or a decoder, and the method includes:
- Preprocessing at least one image component of the current block to obtain at least one image component after preprocessing
- an embodiment of the present application provides an image prediction method applied to an encoder or a decoder, and the method includes:
- Determining a reference value of the first image component of the current block in the image, wherein the reference value of the first image component is the first image component value of neighbouring pixels of the current block;
- Filtering the reference value, and calculating model parameters of a prediction model by using the filtered reference value, wherein the prediction model is used to map the value of the first image component of the current block to the value of the second image component of the current block, and the second image component is different from the first image component.
- an encoder which includes a first determining unit, a first processing unit, and a first building unit, wherein:
- the first determining unit is configured to determine at least one image component of the current block in the image
- the first processing unit is configured to preprocess at least one image component of the current block to obtain at least one image component after preprocessing;
- the first construction unit is configured to construct a prediction model according to the at least one image component after the preprocessing; wherein the prediction model is used to perform cross-component prediction processing on at least one image component of the current block.
- an embodiment of the present application provides an encoder.
- the encoder includes a first memory and a first processor, where:
- the first memory is configured to store a computer program that can run on the first processor
- the first processor is configured to execute the method described in the first aspect or the second aspect when running the computer program.
- an embodiment of the present application provides a decoder, which includes a second determining unit, a second processing unit, and a second building unit, wherein:
- the second determining unit is configured to determine at least one image component of the current block in the image
- the second processing unit is configured to preprocess at least one image component of the current block to obtain at least one image component after preprocessing
- the second construction unit is configured to construct a prediction model according to the at least one image component after the preprocessing; wherein the prediction model is used to perform cross-component prediction processing on at least one image component of the current block.
- an embodiment of the present application provides a decoder, the decoder includes a second memory and a second processor, wherein:
- the second memory is used to store a computer program that can run on the second processor
- the second processor is configured to execute the method described in the first aspect or the second aspect when running the computer program.
- an embodiment of the present application provides a computer storage medium that stores an image prediction program; when the image prediction program is executed by a first processor or a second processor, the method described in the first aspect or the second aspect is implemented.
- the embodiments of the present application provide an image prediction method, an encoder, a decoder, and a storage medium. At least one image component of a current block in an image is determined; the at least one image component of the current block is preprocessed to obtain at least one preprocessed image component; and a prediction model is constructed based on the at least one preprocessed image component, the prediction model being used to perform cross-component prediction on at least one image component of the current block. In this way, the at least one image component is preprocessed before prediction, which balances the statistical characteristics of the image components before cross-component prediction and thereby improves the prediction efficiency. In addition, because the predicted value of the image component obtained from the prediction model is closer to the true value, the prediction residual of the image component is smaller, so fewer bits are transmitted during encoding and decoding, and the coding and decoding efficiency of the video image is also improved.
- FIG. 1 is a schematic diagram of the composition structure of a traditional cross-component prediction architecture provided by an embodiment of this application;
- FIG. 2 is a schematic diagram of the composition of a video encoding system provided by an embodiment of the application.
- FIG. 3 is a schematic diagram of the composition of a video decoding system provided by an embodiment of the application.
- FIG. 4 is a schematic flowchart of an image prediction method provided by an embodiment of the application.
- FIG. 5 is a schematic flowchart of another image prediction method provided by an embodiment of the application.
- FIG. 6 is a schematic diagram of the composition structure of an improved cross-component prediction architecture provided by an embodiment of this application.
- FIG. 7 is a schematic diagram of the composition structure of another improved cross-component prediction architecture provided by an embodiment of this application.
- FIG. 8 is a schematic diagram of the composition structure of an encoder provided by an embodiment of the application.
- FIG. 9 is a schematic diagram of a specific hardware structure of an encoder provided by an embodiment of the application.
- FIG. 10 is a schematic diagram of the composition structure of a decoder provided by an embodiment of the application.
- FIG. 11 is a schematic diagram of a specific hardware structure of a decoder provided by an embodiment of the application.
- a first image component, a second image component, and a third image component are generally used to characterize a coding block; the three image components are a luminance component, a blue chrominance component, and a red chrominance component.
- the luminance component is usually represented by the symbol Y, the blue chrominance component by the symbol Cb or U, and the red chrominance component by the symbol Cr or V; in this way, the video image can be represented in the YCbCr format or the YUV format.
- the first image component may be a luminance component
- the second image component may be a blue chrominance component
- the third image component may be a red chrominance component
- H.266/VVC proposes the CCLM cross-component prediction technology.
- the CCLM-based cross-component prediction technology can realize not only prediction from the luminance component to a chrominance component, that is, from the first image component to the second image component, or from the first image component to the third image component, but also prediction from a chrominance component to the luminance component, that is, from the second image component to the first image component, or from the third image component to the first image component, and even prediction between the chrominance components, that is, from the second image component to the third image component, or from the third image component to the second image component, and so on.
- the following will describe the prediction from the first image component to the second image component as an example, but the technical solution of the embodiment of the present application can also be applied to the prediction of other image components.
- FIG. 1 shows a schematic diagram of the composition structure of a traditional cross-component prediction architecture provided by an embodiment of the present application.
- Assume that the first image component is represented by the Y component, the second image component by the U component, and that the video image adopts the YUV 4:2:0 format, so the Y component and the U component have different resolutions; the method of using the Y component to predict the third image component (represented, for example, by the V component) is similar.
- the traditional cross-component prediction architecture 10 may include a Y component coding block 110, a resolution adjustment unit 120, a Y1 component coding block 130, a U component coding block 140, a prediction model 150, and a cross-component prediction unit 160.
- the Y component of the video image is represented by the Y component coding block 110 of size 2N×2N; the larger bold box highlights the Y component coding block 110, and the surrounding gray solid circles indicate its neighbouring reference values Y(n). The U component of the video image is represented by the U component coding block 140 of size N×N; the larger bold box highlights the U component coding block 140, and the surrounding gray solid circles indicate its neighbouring reference values C(n). Since the Y component and the U component have different resolutions, the resolution adjustment unit 120 needs to adjust the resolution of the Y component to obtain the Y1 component coding block 130 of size N×N; for the Y1 component coding block 130, the larger bold box highlights the block and the surrounding gray solid circles indicate its neighbouring reference values Y1(n). The prediction model 150 can then be constructed from the neighbouring reference values Y1(n) of the Y1 component coding block 130 and the neighbouring reference values C(n) of the U component coding block 140. According to the reconstructed pixel values of the Y1 component coding block 130 and the prediction model 150, the cross-component prediction unit 160 can perform component prediction and output the U component prediction value.
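For a 4:2:0 video, the resolution adjustment performed by unit 120 amounts to downsampling the 2N×2N luma block to the N×N chroma grid. A minimal sketch, assuming a simple 2×2 averaging filter (actual codecs typically use longer filter taps):

```python
def downsample_luma_420(luma):
    """Downsample a 2N x 2N luma block to N x N by averaging each
    2x2 neighbourhood (a simplified stand-in for the codec's filter)."""
    n = len(luma) // 2
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            s = (luma[2 * i][2 * j] + luma[2 * i][2 * j + 1] +
                 luma[2 * i + 1][2 * j] + luma[2 * i + 1][2 * j + 1])
            row.append((s + 2) >> 2)  # rounded average of the four samples
        out.append(row)
    return out
```

After this step the Y1 block and the U block have the same resolution, so neighbour pairs for the prediction model line up one-to-one.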
- an embodiment of the present application provides an image prediction method. First, at least one image component of the current block in the image is determined; then the at least one image component of the current block is preprocessed to obtain at least one preprocessed image component; and a prediction model is constructed according to the preprocessed image component.
- the prediction model is used to perform cross-component prediction on at least one image component of the current block. In this way, the at least one image component is preprocessed before prediction, which balances the statistical characteristics of the image components before cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of the video image.
- the video encoding system 20 includes a transform and quantization unit 201, an intra-frame estimation unit 202, an intra-frame prediction unit 203, a motion compensation unit 204, a motion estimation unit 205, an encoding unit 209, a decoded image buffer unit 210, and other units.
- the encoding unit 209 can implement header information encoding and Context-based Adaptive Binary Arithmetic Coding (CABAC).
- a coding block can be obtained by dividing a coding tree unit (Coding Tree Unit, CTU); the residual pixel information obtained after intra-frame or inter-frame prediction is then processed by the transform and quantization unit 201, which transforms the residual information from the pixel domain to the transform domain and quantizes the resulting transform coefficients to further reduce the bit rate;
- the intra-frame estimation unit 202 and the intra-frame prediction unit 203 are used to perform intra prediction on the coding block; specifically, they determine the intra prediction mode to be used to encode the coding block;
- the motion compensation unit 204 and the motion estimation unit 205 are used to perform inter-frame prediction coding of the received coding block with respect to one or more blocks in one or more reference frames, so as to provide temporal prediction information;
- the motion estimation performed by the motion estimation unit 205 is the process of generating motion vectors, which can estimate the motion of the coding block;
- the context content can be based on adjacent coding blocks and can be used to encode information indicating the determined intra-frame prediction mode, so as to output the code stream of the video signal. The decoded image buffer unit 210 is used to store reconstructed video blocks for prediction reference; as the encoding of the video image progresses, new reconstructed video blocks are continuously generated, and these reconstructed video blocks are all stored in the decoded image buffer unit 210.
- the video decoding system 30 includes a decoding unit 301, an inverse transform and inverse quantization unit 302, an intra prediction unit 303, a motion compensation unit 304, a filtering unit 305, and other units.
- the code stream of the video signal is output; the code stream is input into the video decoding system 30, and first passes through the decoding unit 301 to obtain the decoded transform coefficient;
- the inverse transform and inverse quantization unit 302 performs processing to generate a residual block in the pixel domain;
- the intra prediction unit 303 can be used to generate the prediction data of the current video block to be decoded based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture;
- the motion compensation unit 304 determines the prediction information for the video block to be decoded by parsing motion vectors and other associated syntax elements, and uses the prediction information to generate the predictive block of the video block being decoded;
- the residual block from the inverse transform and inverse quantization unit 302 and the corresponding predictive block generated by the intra prediction unit 303 or the motion compensation unit 304 are summed to form a decoded video block;
- the decoded video block is passed through the filtering unit 305 in order to remove blocking artifacts.
- the embodiments of this application are mainly applied to the intra prediction unit 203 shown in FIG. 2 and the intra prediction unit 303 shown in FIG. 3; that is, the embodiments of this application can be applied to a video encoding system, to a video decoding system, or to both, which is not specifically limited in the embodiments of the present application.
- FIG. 4 shows a schematic flowchart of an image prediction method provided by an embodiment of the present application.
- the method may include:
- S401 Determine at least one image component of the current block in the image
- S402 Perform preprocessing on at least one image component of the current block to obtain at least one image component after preprocessing;
- S403 Construct a prediction model according to the at least one image component after the preprocessing; wherein the prediction model is used to perform cross-component prediction processing on at least one image component of the current block.
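The three steps S401 to S403 can be sketched end to end in Python. The function name, the choice of a mean-difference value correction as the preprocessing step, and the least-squares model fit are illustrative assumptions, not the prescribed design of this application:

```python
def predict_block(luma_block, luma_neighbors, chroma_neighbors):
    """S401-S403 sketch: take an image component of the current block,
    preprocess it, then build a cross-component model and predict."""
    # S401: determine the image component of the current block (luma here)
    component = [row[:] for row in luma_block]

    # S402: preprocessing - a simple value correction that shifts luma
    # toward the chroma neighbours' mean (an assumed deviation factor)
    offset = (sum(chroma_neighbors) / len(chroma_neighbors)
              - sum(luma_neighbors) / len(luma_neighbors))
    pre = [[y + offset for y in row] for row in component]
    pre_neighbors = [y + offset for y in luma_neighbors]

    # S403: construct pred_C = alpha * Y' + beta from the neighbours
    n = len(pre_neighbors)
    my = sum(pre_neighbors) / n
    mc = sum(chroma_neighbors) / n
    den = sum((y - my) ** 2 for y in pre_neighbors)
    alpha = (sum((y - my) * (c - mc)
                 for y, c in zip(pre_neighbors, chroma_neighbors)) / den
             if den else 0.0)
    beta = mc - alpha * my
    return [[alpha * y + beta for y in row] for row in pre]
```

With neighbours whose chroma equals luma plus five, the offset absorbs the statistical difference and the fitted model maps a luma sample of 10 to a chroma prediction of 15.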
- each image block currently to be encoded can be called an encoding block.
- each coding block may include a first image component, a second image component, and a third image component; the current block is the coding block whose first image component, second image component, or third image component is currently to be predicted in the video image.
- the image prediction method in the embodiments of the present application can be applied to a video encoding system, to a video decoding system, or even to both at the same time, which is not specifically limited in the embodiments of the present application.
- At least one image component of the current block in the image is first determined; then the at least one image component of the current block is preprocessed to obtain at least one preprocessed image component; a prediction model is then constructed according to the preprocessed image component, and the prediction model is used to perform cross-component prediction on at least one image component of the current block. In this way, the at least one image component is preprocessed before prediction, which balances the statistical characteristics of the image components before cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of the video image.
- the method may further include:
- a reference value of the first image component of the current block and/or a reference value of the second image component of the current block is obtained, wherein the first image component is the component used for prediction when the prediction model is constructed, and the second image component is the component predicted when the prediction model is constructed.
- At least one image component of the current block may be the first image component, the second image component, or even the first image component and the second image component.
- the first image component is the component used for prediction when constructing the prediction model, and can also be called the image component to be referenced
- the second image component is the component that is predicted when the prediction model is constructed, and can also be called the image component to be predicted.
- Assuming that prediction of the chrominance component from the luminance component is achieved through the prediction model, the component used for prediction when the prediction model is constructed is the luminance component and the component predicted is the chrominance component, that is, the first image component is the luminance component and the second image component is the chrominance component; or, assuming that prediction of the luminance component from the chrominance component is achieved through the prediction model, the component used for prediction when the prediction model is constructed is the chrominance component and the component predicted is the luminance component, that is, the first image component is the chrominance component and the second image component is the luminance component.
- the reference value of the first image component of the current block and/or the reference value of the second image component of the current block can be obtained.
- the difference in statistical characteristics among the image components may be considered; that is, before performing cross-component prediction on at least one image component through the prediction model, the at least one image component can be preprocessed according to its statistical characteristics, for example by filtering, grouping, value correction, quantization, or dequantization. Therefore, in some embodiments, for S402, preprocessing at least one image component of the current block to obtain at least one preprocessed image component may include:
- performing first processing on the first image component by using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering, grouping, value correction, quantization, and dequantization;
- obtaining the processed value of the first image component of the current block.
- Specifically, the preset processing mode can be used to perform the first processing on the first image component.
- filtering may be used to perform the first processing on the first image component, grouping may be used, value correction may be used, quantization may be used, or inverse quantization (also called dequantization) may be used, and so on; this is not specifically limited in the embodiments of the present application.
- the processing of the first image component may be performed on the adjacent reference pixel values of the first image component, on the reconstructed pixel values of the first image component, or even on other pixel values of the first image component; this is set according to the actual situation of the prediction model and is not specifically limited in the embodiments of the present application.
- Assuming that the prediction model uses the luminance component to predict the chrominance component, in order to improve the prediction efficiency, that is, to improve the accuracy of the predicted value, the luminance component and/or the chrominance component need to be processed according to the preset processing mode, for example by processing the reconstructed pixel values corresponding to the luminance component according to the preset processing mode.
- If the preset processing mode adopts value correction, then because the luminance component and the chrominance component have different statistical characteristics, a deviation factor can be obtained from the difference in the statistical characteristics of the two image components; the deviation factor is then used to perform value correction on the luminance component (for example, adding the deviation factor to the reconstructed pixel values corresponding to the luminance component) to balance the statistical characteristics of the image components before cross-component prediction. The processed luminance component is thus obtained, and the predicted value of the chrominance component obtained from the prediction model is closer to the true value of the chrominance component.
- If the preset processing mode adopts filtering, then, according to the difference in the statistical characteristics of the two image components, the luminance component can be filtered to balance the statistical characteristics of the image components before cross-component prediction; the processed luminance component is thus obtained, and the predicted value of the chrominance component obtained from the prediction model is closer to the true value of the chrominance component.
- Quantization and inverse quantization are involved in the process of using the prediction model to predict the chrominance component; because the luminance component and the chrominance component have different statistical characteristics, the difference in the statistical characteristics of the two image components may lead to a difference between quantization and dequantization.
- If the preset processing mode adopts quantization processing, the luminance component and/or the chrominance component can be quantized to balance the statistical characteristics of the image components before cross-component prediction, so as to obtain the processed luminance component and/or the processed chrominance component.
- The predicted value of the chrominance component predicted according to the prediction model is then closer to the true value of the chrominance component. If the preset processing mode adopts dequantization processing, the luminance component and/or the chrominance component can be dequantized to balance the statistical characteristics of the image components before cross-component prediction, so as to obtain the processed luminance component and/or the processed chrominance component; at this time, the predicted value of the chrominance component predicted by the prediction model is closer to the true value of the chrominance component. Thus the accuracy of the predicted value is improved and the prediction efficiency is improved; since the obtained predicted value of the chrominance component is closer to the true value, the prediction residual of the chrominance component is smaller, so that the bit rate transmitted in the encoding and decoding process is lower, and the encoding and decoding efficiency of the video image is also improved.
- The preset processing mode can be used to process the first image component based on the reference value of the first image component of the current block, to balance the statistical characteristics of the image components before cross-component prediction and then obtain the processed value of the first image component of the current block;
- the first image component can also be processed with the preset processing mode based on the reference value of the second image component of the current block, to balance the statistical characteristics of the image components before cross-component prediction and then obtain the processed value of the first image component of the current block;
- the first image component can even be processed with the preset processing mode based on both the reference value of the first image component of the current block and the reference value of the second image component of the current block, to balance the statistical characteristics of the image components before cross-component prediction and then obtain the processed value of the first image component of the current block.
- In this way, the predicted value of the second image component predicted by the prediction model is closer to the real value; the prediction model can realize cross-component prediction of the second image component through the first image component.
- The resolutions of the image components are not the same,
- so the resolution of an image component also needs to be adjusted (including up-sampling or down-sampling the image component) to reach the target resolution.
- Performing the first processing on the first image component with the preset processing mode and performing the resolution adjustment can be done as cascaded processing, or the two can be combined into a joint processing.
- the method may further include:
- If the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, the resolution of the first image component is adjusted; the resolution adjustment includes up-sampling adjustment or down-sampling adjustment.
- The reference value of the first image component of the current block is updated; the adjusted resolution of the first image component is the same as the resolution of the second image component.
- Resolution adjustment, that is, resolution mapping, maps the resolution of the first image component to the adjusted resolution of the first image component; here, the resolution adjustment or resolution mapping can be achieved through up-sampling adjustment or down-sampling adjustment.
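As a concrete illustration of resolution mapping, the sketch below halves a plane by 2x2 averaging (as is typical when mapping luma to 4:2:0 chroma resolution) and doubles one by nearest-neighbour replication. The patent text does not fix the filters, so these kernels are assumptions for illustration.

```python
def downsample_2x(plane):
    """Halve both dimensions by averaging each 2x2 block (rounded)."""
    h, w = len(plane), len(plane[0])
    return [[(plane[2*y][2*x] + plane[2*y][2*x+1] +
              plane[2*y+1][2*x] + plane[2*y+1][2*x+1] + 2) >> 2
             for x in range(w // 2)]
            for y in range(h // 2)]

def upsample_2x(plane):
    """Double both dimensions by nearest-neighbour replication."""
    return [[v for v in row for _ in (0, 1)]
            for row in plane for _ in (0, 1)]
```

Down-sampling would be used when the first image component (e.g. luma) has the higher resolution; up-sampling would be used in the opposite direction.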
- The resolution adjustment can be performed before the first image component is processed with the preset processing mode.
- That is, the resolution of the first image component can be adjusted first, and the reference value of the first image component of the current block is then updated based on the adjusted resolution of the first image component.
- the method may further include:
- If the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, the resolution of the first image component is adjusted; the resolution adjustment includes up-sampling adjustment or down-sampling adjustment.
- The processed value of the first image component of the current block is updated; the adjusted resolution of the first image component is the same as the resolution of the second image component.
- The resolution adjustment can also be performed after the first image component is processed with the preset processing mode. That is, after preprocessing at least one image component of the current block, if the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, the resolution of the first image component can be adjusted, and the processed value of the first image component of the current block is updated based on the adjusted resolution of the first image component.
- the preprocessing at least one image component of the current block to obtain at least one image component after preprocessing may include:
- The second processing is performed on the first image component; the second processing includes up-sampling combined with the related processing of the preset processing mode, or down-sampling combined with the related processing of the preset processing mode.
- The processed value of the first image component of the current block is obtained; the resolution of the processed first image component of the current block is the same as the resolution of the second image component of the current block.
- The processed value of the first image component of the current block may be obtained by performing processing and resolution adjustment at the same time. That is, if the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, the second processing can be performed on the first image component based on the reference value of the first image component of the current block and/or the reference value of the second image component of the current block.
- The second processing integrates the two processing methods of first processing and resolution adjustment; it can include up-sampling combined with the related processing of the preset processing mode, or down-sampling combined with the related processing of the preset processing mode. In this way, according to the result of the second processing, the processed value of the first image component of the current block can be obtained, and the resolution of the processed first image component of the current block is the same as the resolution of the second image component of the current block.
- the prediction model uses the luminance component to predict the chrominance component.
- The image component to be predicted is the chrominance component,
- and the image component to be used is the luminance component; the resolutions of the luminance component and the chrominance component are different.
- The resolution of the luminance component therefore needs to be adjusted, for example by down-sampling the luminance component, so that the adjusted
- resolution of the luminance component meets the target resolution. Conversely, if the chrominance component is used to predict the luminance component, after the target resolution of the luminance component is obtained, the resolution of the chrominance component does not meet the target resolution, so the resolution of the chrominance component needs to be adjusted,
- for example by up-sampling the chrominance component, so that the adjusted resolution of the chrominance component meets the target resolution. In addition, if the blue chrominance component is used to predict the red chrominance component, after the target resolution of the red chrominance component is obtained, there is no need to adjust the resolution of the blue chrominance component, because its resolution already meets the target resolution. In this way, the subsequent image component prediction can be performed at the same resolution.
- the model parameters of the prediction model need to be determined according to the at least one image component after preprocessing, so as to construct the prediction model. Therefore, in some embodiments, for S403, the constructing a prediction model based on the at least one image component after the preprocessing may include:
- the prediction model is constructed.
- The prediction model in the embodiment of the present application may be a linear model, such as the CCLM cross-component prediction technology; the prediction model may also be a non-linear model, such as the multi-model CCLM (Multiple Model CCLM, MMLM) cross-component prediction technology, which is composed of multiple linear models.
- the embodiment of the present application will take the prediction model as a linear model as an example for the following description, but the image prediction method of the embodiment of the present application can also be applied to a nonlinear model.
- The model parameters include a first model parameter (represented by α) and a second model parameter (represented by β).
- There are many ways to calculate α and β: a preset factor calculation model constructed by the least squares method, a preset factor calculation model constructed by the maximum and minimum values, or even other ways.
- The preset factor calculation model is not specifically limited in the embodiment of this application.
- The neighboring reference pixel values around the current block (such as the adjacent reference value of the first image component and the adjacent reference value of the second image component, both obtained after preprocessing) can be used to obtain the minimized regression error; specifically, as shown in formula (1):
- L(n) represents the down-sampled adjacent reference values of the first image component corresponding to the left side and the upper side of the current block,
- and C(n) represents the adjacent reference values of the second image component corresponding to the left side and the upper side of the current block.
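The least-squares derivation of α and β can be illustrated with the standard closed-form straight-line fit of C(n) against L(n); this sketch assumes formula (1) (not reproduced in this extraction) is the ordinary least-squares fit the text describes.

```python
def lls_params(L, C):
    """Least-squares alpha, beta minimising sum((C[n] - alpha*L[n] - beta)**2).

    L: down-sampled first-component (luma) neighbouring references.
    C: second-component (chroma) neighbouring references at the same positions.
    """
    N = len(L)
    sum_l, sum_c = sum(L), sum(C)
    sum_lc = sum(l * c for l, c in zip(L, C))
    sum_ll = sum(l * l for l in L)
    alpha = (N * sum_lc - sum_l * sum_c) / (N * sum_ll - sum_l * sum_l)
    beta = (sum_c - alpha * sum_l) / N
    return alpha, beta
```

In the preprocessing scheme of this application, L and C would be the references after the first processing (filtering, value correction, etc.) has been applied.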
- Taking the preset factor calculation model constructed by the maximum and minimum values as an example, it provides a simplified derivation of the model parameters. Specifically, it searches for the largest and the smallest adjacent reference values of the first image component,
- and derives the model parameters from the principle that "two points determine one line", as shown in the preset factor calculation model of formula (2):
- L_max and L_min represent the maximum value and the minimum value found among the down-sampled adjacent reference values of the first image component corresponding to the left side and the upper side of the current block,
- and C_max and C_min represent the adjacent reference values of the second image component at the reference pixels corresponding to the positions of L_max and L_min.
- The first model parameter α and the second model parameter β can also be obtained through the calculation of formula (2).
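Assuming formula (2) (not reproduced in this extraction) is the familiar two-point form, α as the slope between the extreme points and β as the resulting intercept, the simplified derivation can be sketched as:

```python
def maxmin_params(L, C):
    """Two points determine one line: fit through (L_min, C_min) and (L_max, C_max)."""
    i_max = max(range(len(L)), key=lambda i: L[i])
    i_min = min(range(len(L)), key=lambda i: L[i])
    alpha = (C[i_max] - C[i_min]) / (L[i_max] - L[i_min])
    beta = C[i_min] - alpha * L[i_min]
    return alpha, beta
```

Compared with the least-squares derivation, this only inspects two reference samples, which is why the text calls it a simplified version.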
- A prediction model can then be constructed. Specifically, based on α and β, and assuming that the second image component is predicted from the first image component, the constructed prediction model is shown in equation (3),
- i, j represent the position coordinates of the pixel in the current block
- i represents the horizontal direction
- j represents the vertical direction
- Pred_C[i,j] represents the predicted value of the second image component for the pixel with position coordinate [i,j] in the current block,
- and Rec_L[i,j] represents the (down-sampled) reconstructed value of the first image component for the pixel with position coordinate [i,j] in the same current block.
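Assuming equation (3) has the usual CCLM form, Pred_C[i,j] = α · Rec_L[i,j] + β, applying the prediction model is a simple elementwise mapping over the block:

```python
def predict_second_component(rec_first, alpha, beta):
    """Apply Pred_C[i,j] = alpha * Rec_L[i,j] + beta over the whole block.

    rec_first: 2-D list of (down-sampled) reconstructed first-component values.
    """
    return [[alpha * v + beta for v in row] for row in rec_first]
```

The α and β fed in here would come from either the least-squares or the max-min preset factor calculation model described above.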
- the method may further include:
- the luminance component can be used to perform prediction processing on the chrominance component, so that the predicted value of the chrominance component can be obtained.
- The image components can be predicted according to the prediction model. On the one hand, the first image component can be used to predict the second image component, for example, the luminance component can be used to predict the chrominance component to obtain
- the predicted value of the chrominance component; on the other hand, the second image component can also be used to predict the first image component, for example, the chrominance component can be used to predict the luminance component to obtain the predicted value of the luminance component; in addition, the first image component can also be used together with
- the second image component to predict the third image component, for example, the blue chrominance component is used to predict the red chrominance component to obtain the predicted value of the red chrominance component. Before the prediction model is constructed, the embodiment of the present application preprocesses at least one image component of the current block to balance the statistical characteristics of the image components before cross-component prediction, and then uses the processed image components to construct the prediction model, so as to achieve the purpose of improving prediction efficiency.
- This embodiment provides an image prediction method, which determines at least one image component of a current block in an image; preprocesses the at least one image component of the current block to obtain at least one preprocessed image component; and constructs a prediction model according to the at least one preprocessed image component, where the prediction model is used to perform cross-component prediction processing on at least one image component of the current block. In this way, before at least one image component of the current block is predicted, the at least one image
- component is first preprocessed, which can balance the statistical characteristics of the image components before cross-component prediction and improve the prediction efficiency; in addition, because the predicted value of the image component predicted by the prediction model is closer to the true value, the prediction residual of the image component is smaller, so that the bit rate transmitted in the encoding and decoding process is lower, and the encoding and decoding efficiency of the video image is also improved.
- FIG. 5 shows a schematic flowchart of another image prediction method provided by an embodiment of the present application.
- the method may include:
- S501 Determine the reference value of the first image component of the current block in the image; wherein the reference value of the first image component of the current block is the first image component value of the neighboring pixels of the current block;
- S502 Perform filtering processing on the reference value of the first image component of the current block to obtain a filtered reference value
- S503 Calculate model parameters of a prediction model by using the filtered reference value, where the prediction model is used to map the value of the first image component of the current block to the value of the second image component of the current block Value, the second image component is different from the first image component.
- each image block currently to be encoded can be called an encoding block.
- Each coding block may include a first image component, a second image component, and a third image component; the current block is the coding block of the first image component, the second image component, or the third image component currently to be predicted in the video image.
- the image prediction method can be applied to both a video encoding system and a video decoding system, or even a video encoding system and a video decoding system at the same time, which is not specifically limited in the embodiment of the present application.
- The reference value of the first image component of the current block in the image is first determined;
- the reference value of the first image component of the current block is the first image component value of the neighboring pixels of the current block.
- The reference value of the first image component is then filtered to obtain the filtered reference value,
- and the filtered reference value is used to calculate the model parameters of the prediction model, where the prediction model is used to map the value of the first image component of the current block to
- the value of the second image component of the current block, and the second image component is different from the first image component. In this way, before at least one image component of the current block is predicted, the at least one image component is first filtered to balance
- the statistical characteristics of the image components before cross-component prediction; this not only improves the prediction efficiency, but also improves the coding and decoding efficiency of the video image.
- the calculation of the model parameters of the prediction model by using the filtered reference value may include:
- the model parameters of the prediction model are calculated by using the filtered reference value and the reference value of the second image component of the current block.
- The reference value of the second image component of the current block is obtained, and the model parameters of the prediction
- model are then calculated according to the filtered reference value and the reference value of the second image component of the current block, so as to construct the prediction model based on the calculated model parameters.
- The predicted value of the image component predicted by the prediction model is closer to the true value, so that the prediction residual of the image component is smaller; the bit rate transmitted during the encoding and decoding process
- is thus lower, and the coding and decoding efficiency of video images is also improved.
- performing filtering processing on the reference value of the first image component of the current block to obtain the filtered reference value may include:
- The first adjustment processing is performed on the reference value of the first image component of the current block to obtain the filtered reference value of the first image component of the current block, where the first adjustment processing includes one of the following: down-sampling filtering, up-sampling filtering;
- the method may also include:
- the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, dequantization processing, low-pass filtering, and adaptive filtering.
- performing filtering processing on the reference value of the first image component of the current block to obtain the filtered reference value may include:
- The second adjustment processing includes: down-sampling combined with smoothing filtering, or up-sampling combined with smoothing filtering.
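One way to picture the combined second adjustment (down-sampling plus smoothing filtering) in one dimension: smooth with a [1, 2, 1]/4 kernel, then decimate by two. The kernel and the clamped edge handling are assumptions for illustration; the patent does not fix them.

```python
def smooth_then_downsample(row):
    """[1,2,1]/4 smoothing (edges clamped), then factor-2 decimation."""
    n = len(row)
    smoothed = [(row[max(i - 1, 0)] + 2 * row[i] + row[min(i + 1, n - 1)] + 2) >> 2
                for i in range(n)]
    return smoothed[::2]
```

Fusing the two steps like this is the "joint processing" alternative to running a smoothing filter and a down-sampler as separate cascaded stages.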
- The resolutions of the image components are not the same,
- so the resolution of an image component also needs to be adjusted (including up-sampling or down-sampling the image component) to reach the target resolution.
- Specifically, resolution adjustment, that is, resolution mapping, maps the resolution of the first image component to the adjusted resolution of the first image component; here, the resolution adjustment or resolution mapping can be achieved through up-sampling adjustment or down-sampling adjustment.
- The filtering processing and the resolution adjustment of the first image component can be cascaded, for example, the resolution adjustment is performed before the filtering processing of the first image component, or the resolution adjustment is performed after the filtering processing of the first image component; in addition, the filtering processing and the resolution adjustment of the first image component can also be performed jointly (that is, as the first adjustment processing).
- the calculation of the model parameters of the prediction model by using the filtered reference value may include:
- Determining the reference value of the second image component of the current block wherein the reference value of the second image component of the current block is the second image component value of the neighboring pixels of the current block;
- the method may further include:
- the value of the first image component of the current block is mapped to obtain the predicted value of the second image component of the current block.
- The reference value of the second image component of the current block may be the second image component value of the neighboring pixels of the current block; in this way, after the reference value of the second image component is determined, the filtered reference value and the determined reference value of the second image component are used to calculate the model parameters of the prediction model, and the prediction model is constructed according to the calculated model parameters.
- The predicted value of the image component predicted by the prediction model is closer to the true value, so that the prediction
- residual of the image component is smaller; the bit rate transmitted in the encoding and decoding process is thus lower, and the encoding and decoding efficiency of the video image is also improved.
- FIG. 6 shows a schematic diagram of the composition structure of an improved cross-component prediction architecture provided by an embodiment of the present application.
- The improved cross-component prediction architecture 60 may also include a processing unit 610, which is mainly used to perform related processing on at least one image component before the cross-component prediction unit 160.
- The processing unit 610 may be located before the resolution adjustment unit 120 or after the resolution adjustment unit 120; for example, in FIG. 6,
- the processing unit 610 is located after the resolution adjustment unit 120 and performs related processing on the Y component, such as filtering processing, grouping processing, value correction processing, quantization processing, and inverse quantization processing, so that a more accurate prediction model can be constructed and the predicted value of the U component obtained by the prediction is closer to the true value.
- The Y component is used to predict the U component. Since the Y component current block 110 and the U component current block 140 have different resolutions, the resolution adjustment unit 120 is required at this time
- to adjust the resolution of the Y component, so as to obtain the Y1 component current block 130 with the same resolution as the U component current block 140; before this, the Y component can also be processed by the processing unit 610 to obtain the Y1 component current block 130. The prediction model 150 is then constructed using the adjacent reference values Y1(n) of the Y1 component current block 130 and the adjacent reference values C(n) of the U component current block 140; according to the Y1 component
- reconstructed pixel values of the current block 130 and the prediction model 150, the cross-component prediction unit 160 performs image component prediction to obtain the U component predicted value. Since the Y component is processed before the cross-component prediction, the prediction model constructed according to the processed luminance component is more accurate.
- The resolution adjustment unit 120 and the processing unit 610 may process the image components in cascade (for example, the resolution adjustment unit 120 first performs resolution adjustment and then the processing unit 610 performs related processing; or the processing unit 610 first performs related processing and then the resolution adjustment unit 120 performs resolution adjustment), or the image components can be jointly processed (for example, the resolution adjustment unit 120 and the processing unit 610 are combined and then perform the processing).
- FIG. 7 shows a schematic diagram of the composition structure of another improved cross-component prediction architecture provided by an embodiment of the present application. Based on the improved cross-component prediction architecture 60 shown in FIG. 6, the improved cross-component prediction architecture shown in FIG. 7 replaces the resolution adjustment unit 120 and the processing unit 610 with a joint unit 710.
- The joint unit 710 includes the functions of the resolution adjustment unit 120 and the processing unit 610; it can not only implement resolution adjustment of at least one image component, but also implement related processing of at least one image component, such as filtering processing, grouping processing, value correction processing, quantization processing, and inverse quantization processing, so that a more accurate prediction model 150 can be constructed.
- The U component predicted value predicted by the prediction model 150 is closer to the real value, which improves the prediction efficiency and also improves the coding and decoding efficiency of video images.
- When the image prediction method is applied to the encoder side, the model parameters of the prediction model can be calculated according to the reference value of the image component to be predicted in the current block and the reference value of the image component to be referenced in the current block, and the calculated model parameters are then written into the code stream; the code stream is transmitted from the encoder side to the decoder side. Correspondingly, when the image prediction method is applied to the decoder side, the code stream can be parsed
- to obtain the model parameters of the prediction model, so as to construct the prediction model, and the prediction model is used to perform cross-component prediction processing on at least one image component of the current block.
- This embodiment provides an image prediction method: determining a reference value of a first image component of a current block in an image, where the reference value of the first image component of the current block is the first image component value of neighboring pixels of the current block; performing filtering processing on the reference value of the first image component of the current block to obtain a filtered reference value; and using the filtered reference value to calculate model parameters of a prediction model, which is used to map the value of the first image component of the current block
- to the value of the second image component of the current block, where the second image component is different from the first image component. In this way, before at least one image component of the current block is predicted, the at least one image
- component is first preprocessed, which can balance the statistical characteristics of the image components before cross-component prediction and improve the prediction efficiency; in addition, because the predicted value of the image component predicted by the prediction model is closer to the true value, the prediction residual of the image component is smaller, so that the bit rate transmitted in the encoding and decoding process is lower, and the encoding and decoding efficiency of the video image is also improved.
- FIG. 8 shows a schematic diagram of the composition structure of an encoder 80 provided by an embodiment of the present application.
- the encoder 80 may include: a first determining unit 801, a first processing unit 802, and a first constructing unit 803, where
- the first determining unit 801 is configured to determine at least one image component of the current block in the image
- the first processing unit 802 is configured to preprocess at least one image component of the current block to obtain at least one image component after preprocessing;
- the first construction unit 803 is configured to construct a prediction model according to the at least one image component after the preprocessing; wherein the prediction model is used to perform cross-component prediction processing on at least one image component of the current block.
- the encoder 80 may further include a first statistics unit 804 and a first acquisition unit 805, wherein,
- the first statistical unit 804 is configured to perform characteristic statistics on at least one image component of the current block; wherein the at least one image component includes a first image component and/or a second image component;
- The first obtaining unit 805 is configured to obtain the reference value of the first image component of the current block and/or the reference value of the second image component of the current block according to the result of the characteristic statistics; the first image component is the component used for prediction when constructing the prediction model, and the second image component is the component predicted when constructing the prediction model.
- The first processing unit 802 is further configured to perform first processing on the first image component by using a preset processing mode, based on the reference value of the first image component of the current block and/or the reference value of the second image component of the current block; the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
- the first obtaining unit 805 is further configured to obtain the processed value of the first image component of the current block according to the result of the first processing.
- the encoder 80 may further include a first adjustment unit 806 and a first update unit 807, where:
- The first adjustment unit 806 is configured to adjust the resolution of the first image component when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block; the resolution adjustment includes up-sampling adjustment or down-sampling adjustment;
- the first update unit 807 is configured to update the reference value of the first image component of the current block based on the adjusted resolution of the first image component; the adjusted resolution of the first image component is the same as the resolution of the second image component.
- the first adjustment unit 806 is further configured to adjust the resolution of the first image component when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block; wherein the resolution adjustment includes up-sampling adjustment or down-sampling adjustment;
- the first update unit 807 is further configured to update the processed value of the first image component of the current block based on the adjusted resolution of the first image component; wherein the adjusted resolution of the first image component is the same as the resolution of the second image component.
- the first adjustment unit 806 is further configured to, when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, perform second processing on the first image component based on the reference value of the first image component of the current block and/or the reference value of the second image component of the current block; wherein the second processing includes up-sampling together with processing in the preset processing mode, or down-sampling together with processing in the preset processing mode;
- the first obtaining unit 805 is further configured to obtain the processed value of the first image component of the current block according to the result of the second processing; wherein the resolution of the processed first image component of the current block is the same as the resolution of the second image component of the current block.
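For the resolution adjustment described above, a minimal down-sampling sketch is given below, assuming 4:2:0 video in which the first image component (luma) has twice the horizontal resolution of the second image component (chroma). The 2-tap averaging filter and the sample values are illustrative assumptions, not the patent's specified filter.

```python
def downsample_2to1(luma_row):
    """Average each pair of horizontally neighbouring luma samples,
    halving the resolution so it matches the chroma component."""
    assert len(luma_row) % 2 == 0
    return [(luma_row[i] + luma_row[i + 1] + 1) >> 1   # rounded mean
            for i in range(0, len(luma_row), 2)]

luma_refs = [100, 102, 110, 114, 120, 128, 130, 134]  # 8 luma samples
chroma_res_refs = downsample_2to1(luma_refs)          # 4 samples, chroma grid
```

After this step the first and second image components have the same number of reference samples, which is the precondition for pairing them when deriving the model parameters.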
- the first determining unit 801 is further configured to determine the model parameters of the prediction model according to the processed value of the first image component and the reference value of the second image component;
- the first construction unit 803 is configured to construct the prediction model according to the model parameters.
- the encoder 80 may further include a first prediction unit 808, configured to perform cross-component prediction on the second image component of the current block according to the prediction model, to obtain the predicted value of the second image component of the current block.
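The model-parameter derivation and cross-component prediction performed by the units above can be sketched as a linear model in the spirit of CCLM: parameters (alpha, beta) are fitted from paired first/second image component reference values, then applied to map the first image component of the current block to a prediction of the second. The least-squares fitting rule below is an assumption chosen for illustration; the patent does not mandate a specific derivation method.

```python
def derive_model(luma_refs, chroma_refs):
    """Fit chroma ~= alpha * luma + beta by ordinary least squares
    over the paired reference samples."""
    n = len(luma_refs)
    sum_l = sum(luma_refs)
    sum_c = sum(chroma_refs)
    sum_ll = sum(l * l for l in luma_refs)
    sum_lc = sum(l * c for l, c in zip(luma_refs, chroma_refs))
    denom = n * sum_ll - sum_l * sum_l
    alpha = (n * sum_lc - sum_l * sum_c) / denom if denom else 0.0
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta

def predict_chroma(luma_block, alpha, beta):
    """Map reconstructed luma values of the block to chroma predictions."""
    return [alpha * l + beta for l in luma_block]

# Reference pairs that happen to satisfy C = 0.5 * L + 10 exactly.
alpha, beta = derive_model([100, 120, 140, 160], [60, 70, 80, 90])
pred = predict_chroma([110, 130], alpha, beta)
```

Because the predicted chroma values track the reconstructed luma, only the (typically small) residual between prediction and true chroma needs to be coded.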
- a “unit” may be a part of a circuit, a part of a processor, a part of a program, or software, etc.; of course, it may also be a module, or it may be non-modular.
- the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be realized in the form of hardware or software function module.
- if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and other media that can store program code.
- an embodiment of the present application provides a computer storage medium that stores an image prediction program that implements the steps of the method described in the foregoing embodiment when the image prediction program is executed by at least one processor.
- FIG. 9 shows the specific hardware structure of the encoder 80 provided by the embodiment of the present application, which may include: a first communication interface 901, a first memory 902, and a first processor 903; the components are coupled together through a first bus system 904.
- the first bus system 904 is used to implement connection and communication between these components.
- the first bus system 904 also includes a power bus, a control bus, and a status signal bus.
- for clarity of description, the various buses are marked as the first bus system 904 in FIG. 9.
- the first communication interface 901 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
- the first memory 902 is configured to store a computer program that can run on the first processor 903;
- the first processor 903 is configured to execute, when the computer program is running:
- preprocessing at least one image component of the current block to obtain at least one preprocessed image component.
- the first memory 902 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
- the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
- the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
- SRAM: static random access memory
- DRAM: dynamic random access memory
- SDRAM: synchronous dynamic random access memory
- DDR SDRAM: double data rate synchronous dynamic random access memory
- ESDRAM: enhanced synchronous dynamic random access memory
- SLDRAM: synchlink dynamic random access memory
- DR RAM: direct Rambus random access memory
- the first processor 903 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the first processor 903 or instructions in the form of software.
- the aforementioned first processor 903 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
- the software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or registers.
- the storage medium is located in the first memory 902, and the first processor 903 reads the information in the first memory 902, and completes the steps of the foregoing method in combination with its hardware.
- the embodiments described in this application can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
- the processing unit can be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), digital signal processors (Digital Signal Processor, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof.
- the technology described in this application can be implemented through modules (such as procedures, functions, etc.) that perform the functions described in this application.
- the first processor 903 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
- This embodiment provides an encoder, which may include a first determining unit, a first processing unit, and a first constructing unit, wherein the first determining unit is configured to determine at least one image component of the current block in the image;
- the first processing unit is configured to preprocess at least one image component of the current block to obtain at least one preprocessed image component;
- the first construction unit is configured to construct a prediction model based on the at least one preprocessed image component;
- the prediction model is used to perform cross-component prediction processing on at least one image component of the current block; in this way, before at least one image component of the current block is predicted, the at least one image component is first preprocessed, which can balance the statistical characteristics of the image components before cross-component prediction, thereby improving the prediction efficiency and, at the same time, the coding and decoding efficiency of the video image.
- FIG. 10 shows a schematic diagram of the composition structure of a decoder 100 provided by an embodiment of the present application.
- the decoder 100 may include a second determination unit 1001, a second processing unit 1002, and a second construction unit 1003, where:
- the second determining unit 1001 is configured to determine at least one image component of the current block in the image
- the second processing unit 1002 is configured to preprocess at least one image component of the current block to obtain at least one image component after preprocessing;
- the second construction unit 1003 is configured to construct a prediction model according to the at least one image component after the preprocessing; wherein the prediction model is used to perform cross-component prediction processing on at least one image component of the current block.
- the decoder 100 may further include a second statistics unit 1004 and a second acquisition unit 1005, where:
- the second statistical unit 1004 is configured to perform characteristic statistics on at least one image component of the current block; wherein the at least one image component includes a first image component and/or a second image component;
- the second acquiring unit 1005 is configured to acquire the reference value of the first image component of the current block and/or the reference value of the second image component of the current block according to the result of the characteristic statistics; wherein the first image component is the component used for prediction when constructing the prediction model, and the second image component is the component to be predicted when constructing the prediction model.
- the second processing unit 1002 is further configured to perform first processing on the first image component using a preset processing mode, based on the reference value of the first image component of the current block and/or the reference value of the second image component of the current block; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
- the second acquiring unit 1005 is further configured to acquire the processed value of the first image component of the current block according to the result of the first processing.
- the decoder 100 may further include a second adjustment unit 1006 and a second update unit 1007, where:
- the second adjustment unit 1006 is configured to adjust the resolution of the first image component when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block; wherein the resolution adjustment includes up-sampling adjustment or down-sampling adjustment;
- the second update unit 1007 is configured to update the reference value of the first image component of the current block based on the adjusted resolution of the first image component; wherein the adjusted resolution of the first image component is the same as the resolution of the second image component.
- the second adjustment unit 1006 is further configured to adjust the resolution of the first image component when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block; wherein the resolution adjustment includes up-sampling adjustment or down-sampling adjustment;
- the second update unit 1007 is further configured to update the processed value of the first image component of the current block based on the adjusted resolution of the first image component; wherein the adjusted resolution of the first image component is the same as the resolution of the second image component.
- the second adjustment unit 1006 is further configured to, when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, perform second processing on the first image component based on the reference value of the first image component of the current block and/or the reference value of the second image component of the current block; wherein the second processing includes up-sampling together with processing in the preset processing mode, or down-sampling together with processing in the preset processing mode;
- the second obtaining unit 1005 is further configured to obtain the processed value of the first image component of the current block according to the result of the second processing; wherein the resolution of the processed first image component of the current block is the same as the resolution of the second image component of the current block.
- the second construction unit 1003 is configured to parse the bitstream, and to construct the prediction model according to the model parameters obtained from parsing.
- the decoder 100 may further include a second prediction unit 1008, configured to perform cross-component prediction on the second image component of the current block according to the prediction model, to obtain the predicted value of the second image component of the current block.
- a "unit" may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a module, or may be non-modular.
- the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be realized in the form of hardware or software function module.
- if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
- this embodiment provides a computer storage medium that stores an image prediction program; when the image prediction program is executed by a second processor, the method described in any one of the preceding embodiments is implemented.
- FIG. 11 shows the specific hardware structure of the decoder 100 provided by the embodiment of the present application, which may include: a second communication interface 1101, a second memory 1102, and a second processor 1103; the components are coupled together through a second bus system 1104.
- the second bus system 1104 is used to implement connection and communication between these components.
- the second bus system 1104 also includes a power bus, a control bus, and a status signal bus.
- for clarity of description, the various buses are marked as the second bus system 1104 in FIG. 11.
- the second communication interface 1101 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
- the second memory 1102 is configured to store a computer program that can run on the second processor 1103;
- the second processor 1103 is configured to execute, when the computer program is running:
- preprocessing at least one image component of the current block to obtain at least one preprocessed image component.
- the second processor 1103 is further configured to execute the method described in any one of the foregoing embodiments when running the computer program.
- This embodiment provides a decoder, which may include a second determining unit, a second processing unit, and a second constructing unit, wherein the second determining unit is configured to determine at least one image component of the current block in the image;
- the second processing unit is configured to preprocess at least one image component of the current block to obtain at least one image component after preprocessing;
- the second construction unit is configured to construct a prediction model based on the at least one preprocessed image component;
- the prediction model is used to perform cross-component prediction processing on at least one image component of the current block; in this way, before at least one image component of the current block is predicted, the at least one image component is first preprocessed, which can balance the statistical characteristics of the image components before cross-component prediction, thereby improving the prediction efficiency and, at the same time, the coding and decoding efficiency of the video image.
- At least one image component of the current block in the image is first determined; then the at least one image component of the current block is preprocessed to obtain at least one preprocessed image component; and a prediction model is constructed according to the at least one preprocessed image component, the prediction model being used to perform cross-component prediction on at least one image component of the current block. In this way, before at least one image component of the current block is predicted, the at least one image component is first preprocessed, which can balance the statistical characteristics of the image components before cross-component prediction, thereby improving the prediction efficiency. In addition, because the predicted value of the image component obtained with the prediction model is closer to the true value, the prediction residual of the image component is smaller, so that fewer bits are transmitted during encoding and decoding, and the coding and decoding efficiency of the video image is improved.
Abstract
Description
Claims (20)
- An image prediction method, applied to an encoder or a decoder, the method comprising: determining at least one image component of a current block in an image; preprocessing the at least one image component of the current block to obtain at least one preprocessed image component; and constructing a prediction model according to the at least one preprocessed image component; wherein the prediction model is used to perform cross-component prediction on at least one image component of the current block.
- The method according to claim 1, wherein after determining the at least one image component of the current block in the image, the method further comprises: performing characteristic statistics on the at least one image component of the current block, wherein the at least one image component comprises a first image component and/or a second image component; and obtaining, according to a result of the characteristic statistics, a reference value of the first image component of the current block and/or a reference value of the second image component of the current block; wherein the first image component is the component used for prediction when constructing the prediction model, and the second image component is the component to be predicted when constructing the prediction model.
- The method according to claim 2, wherein preprocessing the at least one image component of the current block to obtain the at least one preprocessed image component comprises: performing, based on the reference value of the first image component of the current block and/or the reference value of the second image component of the current block, first processing on the first image component using a preset processing mode, wherein the preset processing mode comprises at least one of: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing; and obtaining a processed value of the first image component of the current block according to a result of the first processing.
- The method according to claim 2, wherein before preprocessing the at least one image component of the current block to obtain the at least one preprocessed image component, the method further comprises: when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, adjusting the resolution of the first image component, wherein the resolution adjustment comprises up-sampling adjustment or down-sampling adjustment; and updating the reference value of the first image component of the current block based on the adjusted resolution of the first image component, wherein the adjusted resolution of the first image component is the same as the resolution of the second image component.
- The method according to claim 3, wherein after preprocessing the at least one image component of the current block to obtain the at least one preprocessed image component, the method further comprises: when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, adjusting the resolution of the first image component, wherein the resolution adjustment comprises up-sampling adjustment or down-sampling adjustment; and updating the processed value of the first image component of the current block based on the adjusted resolution of the first image component, wherein the adjusted resolution of the first image component is the same as the resolution of the second image component.
- The method according to claim 2, wherein preprocessing the at least one image component of the current block to obtain the at least one preprocessed image component comprises: when the resolution of the first image component of the current block is different from the resolution of the second image component of the current block, performing second processing on the first image component based on the reference value of the first image component of the current block and/or the reference value of the second image component of the current block, wherein the second processing comprises up-sampling together with processing in the preset processing mode, or down-sampling together with processing in the preset processing mode; and obtaining the processed value of the first image component of the current block according to a result of the second processing, wherein the resolution of the processed first image component of the current block is the same as the resolution of the second image component of the current block.
- The method according to any one of claims 3, 5 and 6, wherein constructing the prediction model according to the at least one preprocessed image component comprises: determining model parameters of the prediction model according to the processed value of the first image component and the reference value of the second image component; and constructing the prediction model according to the model parameters.
- The method according to claim 7, wherein after constructing the prediction model, the method further comprises: performing cross-component prediction on the second image component of the current block according to the prediction model to obtain a predicted value of the second image component of the current block.
- An image prediction method, applied to an encoder or a decoder, the method comprising: determining a reference value of a first image component of a current block in an image, wherein the reference value of the first image component of the current block is a first image component value of pixels neighbouring the current block; filtering the reference value of the first image component of the current block to obtain a filtered reference value; and calculating model parameters of a prediction model using the filtered reference value, wherein the prediction model is used to map a value of the first image component of the current block to a value of a second image component of the current block, the second image component being different from the first image component.
- The method according to claim 9, wherein calculating the model parameters of the component prediction model using the filtered reference value comprises: performing characteristic statistics on at least one image component of the image or at least one image component of the current block, wherein the at least one image component comprises the first image component and/or the second image component; obtaining, according to a result of the characteristic statistics, a reference value of the second image component of the current block, wherein the reference value of the second image component of the current block is a second image component value of pixels neighbouring the current block; and calculating the model parameters of the prediction model using the filtered reference value and the reference value of the second image component of the current block.
- The method according to claim 9, wherein filtering the reference value of the first image component of the current block to obtain the filtered reference value comprises: when the resolution of the second image component of the image is different from the resolution of the first image component of the image, performing first adjustment processing on the reference value of the first image component of the current block to update the reference value of the first image component of the current block, wherein the first adjustment processing comprises one of: down-sampling filtering and up-sampling filtering; and filtering the reference value of the first image component of the current block to obtain the filtered reference value.
- The method according to claim 9 or 11, wherein the method further comprises: filtering the reference value of the first image component of the current block using a preset processing mode, wherein the preset processing mode comprises at least one of: filtering processing, grouping processing, value correction processing, quantization processing, dequantization processing, low-pass filtering, and adaptive filtering.
- The method according to claim 9, wherein filtering the reference value of the first image component of the current block to obtain the filtered reference value comprises: when the resolution of the second image component of the image is different from the resolution of the first image component of the image, performing second adjustment processing on the reference value of the second image component of the current block to update a first reference value of the first image component of the current block, wherein the second adjustment processing comprises: down-sampling and smoothing filtering, or up-sampling and smoothing filtering.
- The method according to claim 9, wherein calculating the model parameters of the component prediction model using the filtered reference value comprises: determining a reference value of the second image component of the current block, wherein the reference value of the second image component of the current block is a second image component value of pixels neighbouring the current block; and calculating the model parameters of the component prediction model using the filtered reference value and the reference value of the second image component of the current block.
- The method according to claim 9, wherein after calculating the model parameters of the prediction model using the filtered reference value, the method further comprises: mapping, according to the prediction model, the value of the first image component of the current block to obtain a predicted value of the second image component of the current block.
- An encoder, comprising a first determining unit, a first processing unit and a first constructing unit, wherein: the first determining unit is configured to determine at least one image component of a current block in an image; the first processing unit is configured to preprocess the at least one image component of the current block to obtain at least one preprocessed image component; and the first constructing unit is configured to construct a prediction model according to the at least one preprocessed image component, wherein the prediction model is used to perform cross-component prediction on at least one image component of the current block.
- An encoder, comprising a first memory and a first processor, wherein the first memory is configured to store a computer program capable of running on the first processor, and the first processor is configured to execute the method according to any one of claims 1 to 15 when running the computer program.
- A decoder, comprising a second determining unit, a second processing unit and a second constructing unit, wherein: the second determining unit is configured to determine at least one image component of a current block in an image; the second processing unit is configured to preprocess the at least one image component of the current block to obtain at least one preprocessed image component; and the second constructing unit is configured to construct a prediction model according to the at least one preprocessed image component, wherein the prediction model is used to perform cross-component prediction on at least one image component of the current block.
- A decoder, comprising a second memory and a second processor, wherein the second memory is configured to store a computer program capable of running on the second processor, and the second processor is configured to execute the method according to any one of claims 1 to 15 when running the computer program.
- A computer storage medium, wherein the computer storage medium stores an image prediction program which, when executed by a first processor or a second processor, implements the method according to any one of claims 1 to 15.
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MX2021011662A MX2021011662A (es) | 2019-03-25 | 2019-10-12 | Procedimiento de prediccion de imagenes, codificador, descodificador y medio de almacenamiento. |
CN202310642221.8A CN116634153A (zh) | 2019-03-25 | 2019-10-12 | 图像预测方法、编码器、解码器以及存储介质 |
CN202111034246.7A CN113766233B (zh) | 2019-03-25 | 2019-10-12 | 图像预测方法、编码器、解码器以及存储介质 |
EP19920728.3A EP3955571B1 (en) | 2019-03-25 | 2019-10-12 | Image prediction method, encoder, decoder and storage medium |
JP2021557116A JP2022528835A (ja) | 2019-03-25 | 2019-10-12 | 画像予測方法、エンコーダ、デコーダ及び記憶媒体 |
AU2019437150A AU2019437150A1 (en) | 2019-03-25 | 2019-10-12 | Image prediction method, encoder, decoder and storage medium |
EP24176095.8A EP4391537A2 (en) | 2019-03-25 | 2019-10-12 | Image prediction method, encoder, decoder and storage medium |
KR1020217032516A KR20210139327A (ko) | 2019-03-25 | 2019-10-12 | 화상 예측 방법, 인코더, 디코더 및 저장 매체 |
CN201980085344.8A CN113228647A (zh) | 2019-03-25 | 2019-10-12 | 图像预测方法、编码器、解码器以及存储介质 |
US17/482,191 US20220014765A1 (en) | 2019-03-25 | 2021-09-22 | Method for picture prediction, encoder, decoder, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962823602P | 2019-03-25 | 2019-03-25 | |
US62/823,602 | 2019-03-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/482,191 Continuation US20220014765A1 (en) | 2019-03-25 | 2021-09-22 | Method for picture prediction, encoder, decoder, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020192084A1 true WO2020192084A1 (zh) | 2020-10-01 |
Family
ID=72611297
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/110809 WO2020192084A1 (zh) | 2019-03-25 | 2019-10-12 | 图像预测方法、编码器、解码器以及存储介质 |
Country Status (8)
Country | Link |
---|---|
US (1) | US20220014765A1 (zh) |
EP (2) | EP4391537A2 (zh) |
JP (1) | JP2022528835A (zh) |
KR (1) | KR20210139327A (zh) |
CN (3) | CN113228647A (zh) |
AU (1) | AU2019437150A1 (zh) |
MX (1) | MX2021011662A (zh) |
WO (1) | WO2020192084A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20230022061A (ko) * | 2021-08-06 | 2023-02-14 | 삼성전자주식회사 | 디코딩 장치 및 그의 동작 방법 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106576164A (zh) * | 2014-06-27 | 2017-04-19 | 三菱电机株式会社 | 用于预测和过滤图片中的颜色分量的方法和解码器 |
CN107079166A (zh) * | 2014-10-28 | 2017-08-18 | 联发科技(新加坡)私人有限公司 | 用于视频编码的引导交叉分量预测的方法 |
CN107211124A (zh) * | 2015-01-27 | 2017-09-26 | 高通股份有限公司 | 适应性跨分量残差预测 |
CN107409209A (zh) * | 2015-03-20 | 2017-11-28 | 高通股份有限公司 | 用于线性模型预测模式的降取样处理 |
WO2018070914A1 (en) * | 2016-10-12 | 2018-04-19 | Telefonaktiebolaget Lm Ericsson (Publ) | Residual refinement of color components |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140198846A1 (en) * | 2013-01-16 | 2014-07-17 | Qualcomm Incorporated | Device and method for scalable coding of video information |
US10045023B2 (en) * | 2015-10-09 | 2018-08-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Cross component prediction in video coding |
US20170150156A1 (en) * | 2015-11-25 | 2017-05-25 | Qualcomm Incorporated | Illumination compensation with non-square predictive blocks in video coding |
-
2019
- 2019-10-12 CN CN201980085344.8A patent/CN113228647A/zh active Pending
- 2019-10-12 CN CN202310642221.8A patent/CN116634153A/zh active Pending
- 2019-10-12 EP EP24176095.8A patent/EP4391537A2/en active Pending
- 2019-10-12 KR KR1020217032516A patent/KR20210139327A/ko unknown
- 2019-10-12 CN CN202111034246.7A patent/CN113766233B/zh active Active
- 2019-10-12 EP EP19920728.3A patent/EP3955571B1/en active Active
- 2019-10-12 WO PCT/CN2019/110809 patent/WO2020192084A1/zh unknown
- 2019-10-12 JP JP2021557116A patent/JP2022528835A/ja active Pending
- 2019-10-12 MX MX2021011662A patent/MX2021011662A/es unknown
- 2019-10-12 AU AU2019437150A patent/AU2019437150A1/en active Pending
-
2021
- 2021-09-22 US US17/482,191 patent/US20220014765A1/en active Pending
Non-Patent Citations (1)
Title |
---|
ZHANG, KAI ET AL.: "Multi-Model Based Cross-Component Linear Model Chroma Intra-Prediction for Video Coding", 2017 IEEE VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 13 December 2017 (2017-12-13), pages 1 - 4, XP033325781 * |
Also Published As
Publication number | Publication date |
---|---|
AU2019437150A1 (en) | 2021-11-11 |
CN113766233A (zh) | 2021-12-07 |
JP2022528835A (ja) | 2022-06-16 |
CN113766233B (zh) | 2023-05-23 |
EP3955571B1 (en) | 2024-06-05 |
MX2021011662A (es) | 2021-10-22 |
CN116634153A (zh) | 2023-08-22 |
US20220014765A1 (en) | 2022-01-13 |
EP3955571A4 (en) | 2022-06-15 |
EP3955571A1 (en) | 2022-02-16 |
KR20210139327A (ko) | 2021-11-22 |
EP4391537A2 (en) | 2024-06-26 |
CN113228647A (zh) | 2021-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6698077B2 (ja) | モデルベースの映像符号化用の知覚的最適化 | |
CN113068028B (zh) | 视频图像分量的预测方法、装置及计算机存储介质 | |
WO2020192085A1 (zh) | 图像预测方法、编码器、解码器以及存储介质 | |
WO2021120122A1 (zh) | 图像分量预测方法、编码器、解码器以及存储介质 | |
WO2022087901A1 (zh) | 图像预测方法、编码器、解码器以及计算机存储介质 | |
WO2020132908A1 (zh) | 解码预测方法、装置及计算机存储介质 | |
WO2020186763A1 (zh) | 图像分量预测方法、编码器、解码器以及存储介质 | |
WO2020192084A1 (zh) | 图像预测方法、编码器、解码器以及存储介质 | |
WO2022021422A1 (zh) | 视频编码方法、编码器、***以及计算机存储介质 | |
EP3843399A1 (en) | Video image component prediction method and apparatus, and computer storage medium | |
WO2020056767A1 (zh) | 视频图像分量的预测方法、装置及计算机存储介质 | |
WO2020258052A1 (zh) | 图像分量预测方法、装置及计算机存储介质 | |
WO2023141781A1 (zh) | 编解码方法、装置、编码设备、解码设备以及存储介质 | |
RU2805048C2 (ru) | Способ предсказания изображения, кодер и декодер | |
WO2020192180A1 (zh) | 图像分量的预测方法、编码器、解码器及计算机存储介质 | |
WO2023197192A1 (zh) | 编解码方法、装置、编码设备、解码设备以及存储介质 | |
CN112970257A (zh) | 解码预测方法、装置及计算机存储介质 | |
WO2023197194A1 (zh) | 编解码方法、装置、编码设备、解码设备以及存储介质 | |
WO2023197189A1 (zh) | 编解码方法、装置、编码设备、解码设备以及存储介质 | |
WO2024011370A1 (zh) | 视频图像处理方法及装置、编解码器、码流、存储介质 | |
WO2024007165A1 (zh) | 编解码方法、装置、编码设备、解码设备以及存储介质 | |
WO2023197193A1 (zh) | 编解码方法、装置、编码设备、解码设备以及存储介质 | |
WO2021174396A1 (zh) | 图像预测方法、编码器、解码器以及存储介质 | |
WO2021056224A1 (zh) | 预测值的确定方法、编码器、解码器以及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19920728 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021557116 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20217032516 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019920728 Country of ref document: EP Effective date: 20211015 |
|
ENP | Entry into the national phase |
Ref document number: 2019437150 Country of ref document: AU Date of ref document: 20191012 Kind code of ref document: A |