WO2021175108A1 - Inter-frame prediction method, encoder, decoder, and computer-readable storage medium - Google Patents

Inter-frame prediction method, encoder, decoder, and computer-readable storage medium

Info

Publication number
WO2021175108A1
Authority
WO
WIPO (PCT)
Prior art keywords
illumination compensation
compensation information
current
block
information
Prior art date
Application number
PCT/CN2021/075974
Other languages
English (en)
French (fr)
Inventor
唐海
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2021175108A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/174: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
    • H04N19/182: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors

Definitions

  • This application relates to the technical field of video coding and decoding, and in particular to an inter-frame prediction method, an encoder, a decoder, and a computer-readable storage medium.
  • the current general video coding and decoding standards all adopt a block-based hybrid coding framework.
  • Each frame in the video is divided into square Largest Coding Units (LCUs) or largest decoding units of the same size (such as 128×128 or 64×64).
  • Each maximum coding unit or maximum decoding unit can be divided into rectangular coding blocks or decoding blocks according to rules. Since there is a strong correlation between adjacent pixels in a frame of video, the intra-frame prediction method is used in the video coding and decoding technology to eliminate the spatial redundancy between adjacent pixels, thereby improving the coding efficiency.
  • The local illumination compensation technology is used to handle brightness differences between frames, such as those caused by illumination changes, object movement, shadow changes due to the relative movement of objects in front and behind, brightness changes introduced in post-production, etc.
  • In these cases, the current block (encoding block or decoding block) and the reference block may have similar textures but different brightness.
  • When the local illumination compensation technology is used for the current block, some of the reconstructed pixels in the column to the left of and the row above the current block, together with the pixels at the corresponding positions in the column to the left of and the row above the reference block, are used to determine the illumination compensation factors of the current block for illumination compensation.
  • Because the local illumination compensation technology derives the illumination compensation factors from some of the reconstructed pixels in the left column and upper row of the current block and the pixels at the corresponding positions in the left column and upper row of the reference block, it increases storage requirements in hardware implementations, and the limited calculation range affects the final encoding and decoding accuracy.
  • the embodiments of the present application provide an inter-frame prediction method, an encoder, a decoder, and a computer-readable storage medium.
  • the embodiment of the present application provides an inter-frame prediction method, which is applied to a decoder, and includes:
  • The previous illumination compensation information is obtained based on the update performed when the previous decoded block is decoded, and the previous illumination compensation information contains the illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy the illumination compensation rule before the current decoded block is decoded.
  • the embodiment of the present application provides an inter-frame prediction method, which is applied to an encoder, and includes:
  • The previous illumination compensation information is updated when the previous encoding block finishes encoding, and the previous illumination compensation information contains the illumination compensation information of at least one reference block corresponding to encoded historical encoding blocks that satisfy the illumination compensation rule before the current encoding block is encoded.
  • An embodiment of the present application provides a decoder, including:
  • the first obtaining unit is configured to obtain a code stream
  • the first parsing unit is configured to parse the illumination compensation model index from the code stream
  • the first obtaining unit is further configured to obtain the current decoded block and the previous illumination compensation information; the previous illumination compensation information is obtained based on the update when the previous decoded block is decoded, and the previous illumination compensation information includes There is illumination compensation information of at least one reference block corresponding to a decoded historical decoding block that satisfies the illumination compensation rules before the current decoding block is decoded;
  • the first determining unit is configured to determine current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index;
  • the first illumination compensation unit is configured to obtain the current prediction value after performing illumination compensation on the current decoded block based on the current illumination compensation information.
  • An embodiment of the present application provides an encoder, including:
  • the second acquisition unit is configured to acquire the current encoding block and the previous illumination compensation information; the previous illumination compensation information is updated based on the previous encoding block when the encoding is completed, and the previous illumination compensation information contains the current Before the coding block is coded, illumination compensation information of at least one reference block corresponding to the coded historical coding block that satisfies the illumination compensation rule;
  • the second determining unit is configured to determine current illumination compensation information from the previous illumination compensation information
  • the second illumination compensation unit is configured to obtain the current prediction value after performing illumination compensation on the current encoding block based on the current illumination compensation information.
  • An embodiment of the present application also provides a decoder, including: a first processor and a first memory storing instructions executable by the first processor; the first memory depends on the first processor for operation through a first communication bus, and when the instructions are executed by the first processor, the above-mentioned inter-frame prediction method on the decoder side is executed.
  • An embodiment of the present application also provides an encoder, including: a second processor and a second memory storing instructions executable by the second processor; the second memory depends on the second processor for operation through a second communication bus, and when the instructions are executed by the second processor, the above-mentioned inter-frame prediction method on the encoder side is executed.
  • An embodiment of the present application provides a computer-readable storage medium that stores executable instructions. When the executable instructions are executed by the first processor, the above-mentioned inter-frame prediction method on the decoder side is executed; when the executable instructions are executed by the second processor, the above-mentioned inter-frame prediction method on the encoder side is executed.
  • FIG. 1 is a schematic diagram of the composition structure of an exemplary video codec network architecture provided by an embodiment of the application;
  • FIG. 2 is a block diagram of an exemplary encoder provided by an embodiment of the application;
  • FIG. 3 is a block diagram of an exemplary decoder provided by an embodiment of the application;
  • FIG. 4 is a first schematic flowchart of an example of an optional inter-frame prediction corresponding to a decoder provided by an embodiment of the application;
  • FIG. 5 is a second schematic flowchart of an example of an optional inter-frame prediction corresponding to a decoder provided by an embodiment of the application;
  • FIG. 6 is a third schematic flowchart of an example of an optional inter-frame prediction corresponding to a decoder provided by an embodiment of the application.
  • FIG. 7 is a fourth schematic flowchart of an example of an optional inter-frame prediction corresponding to a decoder provided by an embodiment of the application.
  • FIG. 8 is a first schematic flowchart of an example of an optional inter-frame prediction corresponding to an encoder according to an embodiment of the application.
  • FIG. 9 is a second schematic flowchart of an example of optional inter-frame prediction corresponding to an encoder according to an embodiment of the application.
  • FIG. 10 is a third schematic flowchart of an example of optional inter-frame prediction corresponding to an encoder according to an embodiment of the application.
  • FIG. 11 is a first structural diagram of a decoder provided by an embodiment of this application.
  • FIG. 12 is a second schematic structural diagram of a decoder provided by an embodiment of this application.
  • FIG. 13 is a first structural diagram of an encoder provided by an embodiment of the application.
  • FIG. 14 is a second structural diagram of an encoder provided by an embodiment of the application.
  • The main function of predictive coding and decoding is as follows: in video coding and decoding, the existing reconstructed image in space or time is used to construct the predicted value of the current processing block, and only the difference between the original value and the predicted value is transmitted, so as to reduce the amount of transmitted data.
  • a frame that can be coded with inter-frame prediction has one or more reference frames.
  • The current block can be a coding unit or a prediction unit, and a motion vector (MV) can be used to indicate a pixel area of a reference frame with the same size as the current block.
  • two motion vectors can also be used to indicate two reference blocks of two reference frames that may be the same or different.
  • Motion compensation obtains the prediction value of the current coding unit according to the reference block indicated by the motion vector.
  • FIG. 1 is a schematic diagram of the composition structure of the video encoding and decoding network architecture of an embodiment of the application. As shown in FIG. 1, the network architecture includes one or more electronic devices 111 to 11N and a communication network 01, where the electronic devices 111 to 11N can perform video interaction through the communication network 01.
  • the electronic device can be various types of devices with video coding and decoding functions.
  • The electronic device may include a mobile phone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television, a sensor device, a server, etc., which is not limited in the embodiments of the present application.
  • the encoder or decoder for inter-frame prediction in the embodiment of the present application may be the above-mentioned electronic device.
  • the electronic device in the embodiment of the present application has an encoding and decoding function, and generally includes an encoder and a decoder.
  • The composition structure of the encoder 21 includes: a transform and quantization unit 211, an intra-frame estimation unit 212, an intra-frame prediction unit 213, a motion compensation unit 214, a motion estimation unit 215, an inverse transform and inverse quantization unit, a filter control analysis unit 217, a filtering unit 218, an entropy coding unit 219, a decoded image buffer unit 210, an illumination compensation unit 2110, etc.
  • The filtering unit 218 can implement deblocking filtering and Sample Adaptive Offset (SAO) filtering;
  • The entropy coding unit 219 can implement header information coding and Context-based Adaptive Binary Arithmetic Coding (CABAC).
  • The coding tree unit (CTU) can be divided to obtain a block to be coded of the current video frame, and then the block to be coded is intra-frame or inter-frame predicted.
  • The residual information is transformed by the transform and quantization unit 211, including transforming the residual information from the pixel domain to the transform domain and quantizing the resulting transform coefficients to further reduce the bit rate;
  • the intra-frame estimation unit 212 and the intra-frame prediction unit 213 are configured to perform intra-frame prediction on the block to be coded, for example, to determine the intra-frame prediction mode used to code the block to be coded;
  • the motion compensation unit 214, the motion estimation unit 215, and the illumination compensation unit 2110 are configured to perform inter-frame predictive coding of the block to be coded relative to one or more blocks in one or more reference frames to provide prediction information; the motion estimation unit 215 is configured to estimate a motion vector describing the motion of the block to be coded, and the motion compensation unit 214 then performs motion compensation based on that motion vector.
  • The reconstructed residual block passes through the filter control analysis unit 217 and the filtering unit 218 to remove blocking artifacts, and is then added to a predictive block of a frame in the decoded image buffer unit 210 to generate a reconstructed video coding block;
  • The entropy coding unit 219 is configured to encode various coding parameters and quantized transform coefficients; the context content can be based on adjacent coding blocks, and the unit is configured to encode information indicating the determined intra prediction mode and to output the code stream of the video data.
  • The decoded image buffer unit 210 is configured to store reconstructed video coding blocks for prediction reference; as the video coding progresses, new reconstructed video coding blocks are continuously generated, and these reconstructed video coding blocks are all stored in the decoded image buffer unit 210.
  • illumination compensation may also be used as a part of motion compensation in the inter-frame prediction process, that is, the illumination compensation unit 2110 may be included in the motion compensation unit 214, which is not shown in the figure.
  • the decoder 22 corresponding to the encoder 21 has a composition structure as shown in FIG. 3, including: an entropy decoding unit 221, an inverse transform and inverse quantization unit 222, an intra prediction unit 223, a motion compensation unit 224, an illumination compensation unit 227, The filtering unit 225 and the decoded image buffer unit 226, etc., wherein the entropy decoding unit 221 can implement header information decoding and CABAC decoding, and the filtering unit 225 can implement deblocking filtering and SAO filtering.
  • The code stream of the video signal is output; the code stream is input to the video decoder 22 and first passes through the entropy decoding unit 221 to obtain the decoded transform coefficients;
  • the inverse transform and inverse quantization unit 222 performs processing to generate a residual block in the pixel domain;
  • The intra prediction unit 223 may be configured to generate the prediction data of the current decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture;
  • The motion compensation unit 224 determines the prediction information of the current decoding block by parsing the motion vector, and the illumination compensation unit 227 performs illumination compensation on the prediction information to generate the predictive block of the current decoding block being decoded; a decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 222 with the corresponding predictive block generated by the intra prediction unit 223 or the motion compensation unit 224;
  • The filtering unit 225 is used to remove blocking artifacts, thereby improving video quality.
  • illumination compensation may also be used as a part of motion compensation in the inter-frame prediction process, that is, the illumination compensation unit 227 may be included in the motion compensation unit 224, which is not shown in the figure.
  • The inter-frame prediction method provided in the embodiments of the present application relates to the inter-frame prediction process in predictive coding and decoding, and it can be applied to both the encoder 21 and the decoder 22; the embodiments of the present application do not specifically limit this.
  • the inter-frame prediction method provided in the embodiments of the present application is mainly a process of illumination compensation implemented in an illumination compensation unit.
  • the embodiment of the present application provides an inter-frame prediction method, which is applied in a decoder.
  • FIG. 4 is a schematic diagram of the implementation process of an inter-frame prediction method provided by an embodiment of the application. As shown in FIG. 4, the method includes:
  • The previous illumination compensation information is updated when the previous decoded block finishes decoding, and it contains the illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy the illumination compensation rule before the current decoded block is decoded.
  • the decoder may obtain the illumination compensation model index generated by the encoder during encoding by analyzing the code stream.
  • the decoder when the decoder receives the illumination compensation flag, and the illumination compensation flag indicates that illumination compensation can be performed, the decoder can obtain the illumination compensation model index from the code stream.
  • The illumination compensation model index represents the index, in the previous illumination compensation information, of the illumination compensation information that was used during encoding.
  • the decoder after the decoder performs motion compensation, it may perform illumination compensation on the current decoded block to implement inter-frame prediction.
  • the illumination compensation model is a processing method used for illumination compensation.
  • The illumination compensation model may be a linear model of the illumination change, and the embodiment of the present application does not limit the type of the model.
  • the previous illumination compensation information or the current illumination compensation information includes: at least one item of illumination compensation information; and each item of illumination compensation information includes: at least one set of illumination compensation factors; or at least one set of illumination compensation information The illumination compensation factor and the index of the corresponding reference frame; wherein the at least one set of illumination compensation factors includes: the illumination compensation factor corresponding to at least one reference direction, or at least one reference frame list. That is to say, the previous illumination compensation information or the current illumination compensation information can be understood as an information set including at least one item of illumination compensation information.
  • At least one set of illumination compensation factors may include: illumination compensation factors corresponding to one reference direction; or, illumination compensation factors corresponding to two reference directions, where the two reference directions may be One forward direction and one backward direction, or both reference directions are forward directions, which are not limited in the embodiments of the present application.
  • A set of illumination compensation factors may include: a scaling factor and an offset factor.
  • The values of a and b are determined based on the previous illumination compensation information.
  • The pixel values of the current decoded block have the same position distribution as the pixel values of the corresponding reference block, and the current predicted value of the current decoded block is obtained by predicting from the pixel values at the same positions.
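The scaling factor a and offset factor b above define a linear mapping applied to the reference block's pixels. As a rough illustration only (the function name, the use of floating-point arithmetic, and the clipping details below are assumptions; an actual codec uses fixed-point arithmetic with normative rounding), the compensation can be sketched as:

```python
def apply_illumination_compensation(ref_block, a, b, bit_depth=8):
    """Apply the linear illumination compensation model pred = a * ref + b
    to every pixel of the reference block, clipping to the sample range.

    Illustrative sketch only: real codecs use fixed-point arithmetic and
    normative rounding, not Python floats."""
    max_val = (1 << bit_depth) - 1  # e.g. 255 for 8-bit samples
    return [[min(max(round(a * px + b), 0), max_val) for px in row]
            for row in ref_block]

# A reference block that is 20 levels brighter than the current block
# is compensated down by an offset of b = -20.
compensated = apply_illumination_compensation([[120, 130], [140, 150]],
                                              a=1.0, b=-20)
```

Here `compensated` holds the illumination-compensated prediction samples; out-of-range results are clipped to the valid bit-depth range.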
  • the decoder when the decoder decodes the current decoded block, since the previous decoded block has been decoded, the decoder can obtain the current decoded block and the previous illumination compensation information obtained after the previous decoded block is decoded.
  • The decoder can decode the video sequence, and the decoding is performed in units of blocks.
  • When the decoder is decoding the current block, the previous decoded blocks have already been decoded.
  • Initial illumination compensation information is stored, and it is initially empty. After each decoded block is decoded, a series of judgments can be made to determine whether an illumination compensation factor was generated; if so, it is updated into the previous illumination compensation information, and if not, no update is performed.
  • The previous illumination compensation information is updated when the previous decoding block finishes decoding, and it includes the illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy the illumination compensation rule before the current decoding block is decoded.
  • The previous illumination compensation information may be stored in the form of a list; that is, the decoder stores an empty illumination compensation list at the beginning of decoding. During decoding, after each decoded block finishes decoding, a series of judgments can be made to determine whether illumination compensation information (specifically, illumination compensation factors) was generated; if so, the previous illumination compensation list is updated, and if not, the illumination compensation list is left unchanged.
  • Each item in the previous illumination compensation list is a set of illumination compensation information.
  • The illumination compensation information is the illumination compensation factors: a set of illumination compensation factors includes a scaling factor and an offset factor, that is, a set consisting of the scaling factor a and the offset factor b.
  • The illumination compensation information saved in the previous illumination compensation list therefore includes the values of the scaling factor a and the offset factor b mentioned above.
  • the illumination compensation information saved in the list is still Including the index of the reference frame
  • another possible implementation is that the illumination compensation information saved in the list does not include the index of the reference frame.
  • the reference direction of the current decoding block can be one or at least one, for example, two reference directions. Therefore, when the reference direction is one, if the previous illumination compensation list is The illumination compensation information saved in includes the index of the corresponding reference frame.
  • each item in the above list only saves one illumination compensation information, that is, there is only a set of zoom factor a, offset factor b and the corresponding reference frame The index; or in addition to the above information, it also includes a set of default scaling factor a, offset factor b and the index of the corresponding reference frame, or in addition to the above information, the illumination compensation saved in the unused reference direction Information can be set as unavailable.
  • another possible implementation is that, for two reference directions, each item of the above list includes the scaling factor a, the offset factor b, and the index of the reference frame corresponding to each of the two reference blocks.
  • the illumination compensation rules include: the current decoded block is a decoded block coded in a normal inter-frame coding mode, and/or a decoded block coded in a merge mode.
  • the illumination compensation rules in this application may be that the current decoded block is a decoded block that is coded in a normal inter-frame coding mode.
  • the normal inter-frame decoding mode refers to a mode that uses inter-frame prediction and requires transmission of a motion vector difference (MVD) and a residual block; alternatively, the current decoded block may be a decoded block coded in a normal inter-frame coding mode or a merge mode, where the merge mode refers to a mode that uses inter-frame prediction and does not require transmission of the motion vector difference but does require transmission of the residual block.
  • the decoder can determine the current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index.
  • one possible implementation of determining the current illumination compensation information from the previous illumination compensation information is as follows: from the previous illumination compensation information, determine first candidate illumination compensation information whose reference frame is the same as that of the current decoded block, and then, based on the illumination compensation model index, determine the current illumination compensation information from the first candidate illumination compensation information; or, the remaining illumination compensation information in the previous illumination compensation information, other than the first candidate illumination compensation information, is scaled according to the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the temporal distance between the frames where the current decoded block is located, to obtain second candidate illumination compensation information, and then, based on the illumination compensation model index, the current illumination compensation information is determined from the first candidate illumination compensation information and the second candidate illumination compensation information.
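The temporal scaling step above can be illustrated with a small sketch. The exact scaling rule is not fixed by the description, so the linear interpolation below is an assumption, analogous to how motion vectors are scaled by picture-order-count (POC) distances; all function and parameter names are hypothetical.

```python
def scale_ic_factors(a, b, model_ref_poc, model_cur_poc, cur_ref_poc, cur_poc):
    """Hypothetical scaling of illumination compensation factors by temporal
    distance. The stored model was derived between pictures `model_cur_poc`
    and `model_ref_poc`; the current block predicts from `cur_ref_poc` while
    sitting in picture `cur_poc`. The linear interpolation is an assumption:
    the source only states that the factors are scaled by these distances.
    """
    d_model = model_cur_poc - model_ref_poc
    d_cur = cur_poc - cur_ref_poc
    if d_model == 0:
        return a, b  # degenerate distance: keep the stored factors unchanged
    ratio = d_cur / d_model
    # Scale the deviation of `a` from identity and the offset `b` linearly.
    return 1.0 + (a - 1.0) * ratio, b * ratio

# Model derived over a distance of 4 pictures, reused over a distance of 2.
print(scale_ic_factors(1.2, 4.0, model_ref_poc=0, model_cur_poc=4,
                       cur_ref_poc=2, cur_poc=4))
```

Halving the temporal distance halves both the deviation of the scaling factor from 1 and the offset.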
  • the decoder can determine the current illumination compensation information from the previous illumination compensation information (the previous illumination compensation list) if the illumination compensation information saved in the previous illumination compensation list contains information about the reference frame, such as the reference block list index and reference block index.
  • one possible implementation is to select, from the previous illumination compensation list, the illumination compensation factors whose reference frame is the same as the reference frame used by the current block.
  • another possible implementation is that, in addition to the entries whose reference frame is the same as that used by the current block, the entries with different reference frames are scaled according to the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the temporal distance between the frames where the current decoded block is located, after which the optimal illumination compensation factor is selected as the current illumination compensation factor.
  • the decoder may select the illumination compensation information indicated by the illumination compensation model index in the previous illumination compensation information as the current illumination compensation information. Other methods for determining the illumination compensation information are described in the following embodiments.
  • the decoder may perform illumination compensation on the current decoded block according to the current illumination compensation information, thereby obtaining the current prediction value.
  • the decoder performs the inter-frame prediction method provided in the embodiment of the present application for the current decoding block that has undergone motion compensation.
  • the decoder may input the pixel values of the reference block corresponding to the current decoded block into the linear model of illumination compensation, in combination with the determined current illumination compensation factor (current illumination compensation information), to obtain the predicted value of the current decoded block.
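Applying such a linear illumination compensation model to a reference block can be sketched as follows. This is a minimal illustration, assuming 8-bit samples and clipping to the valid sample range; the function and parameter names are hypothetical, not taken from the embodiment.

```python
import numpy as np

def apply_illumination_compensation(ref_block, a, b, bit_depth=8):
    """Sketch of a linear illumination compensation model: pred = a * ref + b.

    `ref_block` is the motion-compensated reference block; `a` is the scaling
    factor and `b` the offset factor from one illumination compensation list
    entry. Clipping to [0, 2^bit_depth - 1] is an assumption based on the
    sample bit depth.
    """
    pred = a * ref_block.astype(np.float64) + b
    return np.clip(np.round(pred), 0, (1 << bit_depth) - 1).astype(np.int64)

ref = np.array([[100, 110], [120, 130]])
print(apply_illumination_compensation(ref, 1.5, -8))
```

Every pixel of the reference block is transformed by the same pair (a, b), which is what makes the model cheap to apply per block.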
  • when the decoder in the embodiment of the present application performs inter-frame prediction, it also implements prediction on a per-pixel basis.
  • the decoder can obtain the illumination compensation model index for illumination compensation from the code stream, so that the illumination compensation information corresponding to that index can be found in the previous illumination compensation information obtained after decoding of the previous decoded block is completed, and illumination compensation can then be performed. Because the previous illumination compensation information is updated as each previous block is decoded, and contains the illumination compensation information of at least one reference block corresponding to historical decoded blocks that satisfy the illumination compensation rules before the current block is decoded, the decoder can choose among illumination compensation factors obtained from historical decoded blocks when determining the current illumination compensation information. The selection range becomes larger, a more accurate value can be selected for illumination compensation, and decoding accuracy is therefore improved. Moreover, the previous illumination compensation information stores only the illumination compensation information obtained from historical decoded blocks that satisfy the illumination compensation rules, so values that do not satisfy the rules are filtered out, reducing storage space.
  • FIG. 5 is a schematic diagram of the implementation process of an inter-frame prediction method provided by an embodiment of the present application. As shown in FIG. 5, after S104, the method further includes:
  • S105: Based on the current predicted value, complete the decoding of the current decoded block to obtain the decoded block.
  • when the decoder obtains the current illumination compensation information, the main purpose is to obtain the current illumination compensation factor.
  • after obtaining the current prediction value, the decoder continues decoding until decoding of the current block is completed and the decoded block is obtained. At this point, the decoder needs to make a judgment as to whether the decoded block complies with the illumination compensation rules; if so, the illumination compensation factor of the decoded block is calculated. Specifically, the current reference block corresponding to the decoded block is used, in combination with the initial illumination compensation model, to calculate the latest illumination compensation information corresponding to the decoded block.
  • the decoder determines whether the decoded block meets the illumination compensation rules; when the decoded block is a block coded in a normal inter-frame coding mode and/or a block coded in a merge mode, the decoded block satisfies the illumination compensation rules. Therefore, in the embodiment of the present application, the current pixel value of the decoded block and the reference pixel value of the current reference block are obtained, and the latest illumination compensation information corresponding to the decoded block is calculated based on the current pixel value and the reference pixel value.
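One common way to compute such factors from paired current and reference pixel values is a least-squares fit of cur ≈ a · ref + b; the embodiment does not mandate a specific derivation, so the sketch below is only illustrative, with hypothetical names.

```python
def derive_ic_factors(cur_pixels, ref_pixels):
    """Derive a scaling factor a and offset factor b by least-squares fit of
    cur = a * ref + b over the selected pixel pairs. Least squares is one
    common derivation; the embodiment does not fix the method.
    """
    n = len(cur_pixels)
    sum_r = sum(ref_pixels)
    sum_c = sum(cur_pixels)
    sum_rr = sum(r * r for r in ref_pixels)
    sum_rc = sum(r * c for r, c in zip(ref_pixels, cur_pixels))
    denom = n * sum_rr - sum_r * sum_r
    if denom == 0:  # flat reference block: fall back to an offset-only model
        return 1.0, (sum_c - sum_r) / n
    a = (n * sum_rc - sum_r * sum_c) / denom
    b = (sum_c - a * sum_r) / n
    return a, b

# Pixel pairs related exactly by cur = 2 * ref + 3 recover a = 2, b = 3.
print(derive_ic_factors([5, 7, 11, 23], [1, 2, 4, 10]))
```

The same routine works whether the pixel pairs come from block edges, a down-sampled grid, or all pixels, as discussed below.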
  • the current pixel value is the pixel value corresponding to the first position of the decoding block
  • the reference pixel value is the pixel value corresponding to the first position of the current reference block
  • the current pixel value is the pixel value obtained after the first down-sampling of the decoding block, and the reference pixel value is the pixel value obtained after the second down-sampling of the current reference block;
  • the current pixel value is a first preset number of pixel values determined in the decoding block, and the reference pixel value is a first preset number of pixel values determined in the current reference block;
  • the first position is at least one row or at least one column of at least one edge of the decoding block or the current reference block; the first down-sampling and the second down-sampling are determined according to the size of the block, or according to the size and shape of the block.
  • the selected pixels may lie on an edge of the current block and the reference block, for example one or several rows/columns of the top, bottom, left, or right edge; one or several of these may also be selected at the same time to obtain one or several illumination compensation factors. That is, the current pixel value is the pixel value corresponding to the first position of the decoded block, and the reference pixel value is the pixel value corresponding to the first position of the current reference block. It is also possible to use all pixels of the current block and the reference block, or a regular down-sampling of all pixels.
  • for example, one pixel is taken every 2/4/8 pixels in the horizontal and vertical directions of the current block; that is, the first down-sampling or the second down-sampling samples every 2/4/8 pixels in the horizontal and vertical directions.
  • the down-sampling rules can also be distinguished according to the size of the block. For example, when the number of pixels in the current block is less than or equal to 64, one pixel is taken for every 2 pixels in the horizontal and vertical directions; when the number of pixels in the current block is greater than 64, one pixel is taken for every 4 pixels in the horizontal and vertical directions, which is not limited in the embodiment of the present application. Alternatively, a first preset number of pixels is taken; for example, regardless of block size, 16 pixels are taken from each of the current block and the reference block. Alternatively, sampling rules are set according to different block sizes and shapes, which is not limited in the embodiment of the present application.
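The size-dependent sampling rule in the example above can be sketched as follows; the thresholds mirror the example in the description and are not normative.

```python
def sample_positions(width, height):
    """Sketch of a size-dependent down-sampling rule: take one pixel every
    2 pixels horizontally/vertically when the block has at most 64 pixels,
    and every 4 pixels otherwise. Other rules keyed on size/shape are
    equally possible per the description.
    """
    step = 2 if width * height <= 64 else 4
    return [(x, y) for y in range(0, height, step) for x in range(0, width, step)]

print(len(sample_positions(8, 8)))    # 64 pixels -> step 2 -> 16 samples
print(len(sample_positions(16, 16)))  # 256 pixels -> step 4 -> 16 samples
```

Note that both block sizes yield the same number of sample pairs, which keeps the factor-derivation cost roughly constant across block sizes.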
  • after the decoder calculates the latest illumination compensation information corresponding to the decoded block, it makes a second judgment to determine whether the latest illumination compensation factor in the latest illumination compensation information is valid. If it is valid, the latest illumination compensation information is used to update the previous illumination compensation information (or the previous illumination compensation list); if it is invalid, no update is performed. That is, if the a and b of the illumination compensation model calculated by the decoder each fall within a reasonable value range, the model is considered valid; otherwise, the model is considered invalid and the set of illumination compensation factors is discarded.
  • the decoder may use a preset threshold to make a secondary judgment.
  • the decoder uses the latest illumination compensation information to update the previous illumination compensation information to obtain the current illumination compensation information.
  • after the secondary judgment, the current illumination compensation information is used when decoding the next block to be decoded.
  • if the latest illumination compensation information does not meet the preset threshold, the previous illumination compensation information is used as the current illumination compensation information, and the current illumination compensation information is used when decoding the next block; this set of illumination compensation information is the valid illumination compensation information.
  • the decoder may use a preset threshold to make a secondary judgment, or may use other judgment methods, which is not limited in the embodiment of the present application.
  • the decoder uses the latest illumination compensation information to update the previous illumination compensation information to obtain the current illumination compensation information.
  • the implementation may be as follows: if the latest illumination compensation information does not exist in the previous illumination compensation information, and the total amount of illumination compensation information in the previous illumination compensation information is less than the preset amount of information, the latest illumination compensation information is appended at the end to obtain the current illumination compensation information; if the latest illumination compensation information does not exist in the previous illumination compensation information, and the total amount is equal to the preset amount of information, the first item of illumination compensation information at the head of the previous illumination compensation information is deleted and the latest illumination compensation information is appended at the end to obtain the current illumination compensation information; if the i-th item of the previous illumination compensation information is the same as the latest illumination compensation information, the i-th item is deleted and the latest illumination compensation information is appended at the end to obtain the current illumination compensation information; where i is an integer.
  • the illumination compensation information can be embodied in the form of an illumination compensation list. After each block to be decoded is decoded, the latest illumination compensation information is obtained, and the previous illumination compensation list is updated.
  • the illumination compensation list can have a maximum length, that is, a maximum number of items: the preset amount of information.
  • the illumination compensation information is stored in each item of the illumination compensation list.
  • insertion into the illumination compensation list can follow a first-in first-out rule; duplicates must be checked and the list updated when a new item is inserted. That is, the latest illumination compensation information must be checked for duplication.
  • the latest illumination compensation information is the item to be inserted.
  • one possible implementation is that when the values of the illumination compensation factors of the item to be inserted and an existing item are all equal, the existing item is considered a duplicate, it is deleted, and the item to be inserted is inserted at the end of the queue; another possible implementation is that when the item to be inserted and an existing item correspond to the same reference frame and the differences between their values of a and b are less than a preset difference threshold, the existing item is considered a duplicate, it is deleted, and the item to be inserted is inserted at the end of the queue.
  • alternatively, a and b may be scaled according to the reference frames of the item to be inserted and the existing item, and then compared.
  • the scaling method is similar to the scaling method of the motion vector. Specifically, it needs to be scaled according to the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the time distance between the frames where the current decoding block is located.
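The first-in first-out update with duplicate checking described above can be sketched as follows. Here `max_len` and `diff_thresh` stand in for the preset amount of information and the preset difference threshold, whose actual values are not fixed by the description; entries are represented as (a, b, ref_idx) tuples.

```python
def update_ic_list(ic_list, new_entry, max_len=6, diff_thresh=0.0):
    """FIFO update of the illumination compensation list with duplicate
    checking: a matching existing item is removed, the oldest item is
    evicted when the list is full, and the new entry is appended at the
    tail. `max_len` and `diff_thresh` are illustrative values only.
    """
    def is_duplicate(old, new):
        same_ref = old[2] == new[2]
        close = (abs(old[0] - new[0]) <= diff_thresh
                 and abs(old[1] - new[1]) <= diff_thresh)
        return same_ref and close

    ic_list = [e for e in ic_list if not is_duplicate(e, new_entry)]
    if len(ic_list) >= max_len:            # evict from the head: first in, first out
        ic_list = ic_list[-(max_len - 1):]
    ic_list.append(new_entry)
    return ic_list

lst = [(1.0, 0.0, 0), (1.1, -2.0, 1)]
print(update_ic_list(lst, (1.1, -2.0, 1)))  # duplicate is moved to the tail
```

Removing the duplicate before appending keeps recently useful models near the tail, where they survive eviction longest.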
  • every time the decoder decodes a block, if the block complies with the illumination compensation rules, it calculates the block's illumination compensation factor; if the illumination compensation factor passes the secondary judgment, it is inserted into the illumination compensation list to update the list.
  • the illumination compensation models that the current block can refer to are not limited to those of adjacent blocks but come from all previously decoded blocks, so decoding accuracy can be improved.
  • FIG. 6 is a schematic diagram of the implementation process of an inter-frame prediction method provided by an embodiment of the present application.
  • another implementation method for obtaining current illumination compensation information is:
  • the method further includes:
  • when the previous illumination compensation information includes one group of illumination compensation information, that group of illumination compensation information is used as the current illumination compensation information; the group of illumination compensation information is valid illumination compensation information.
  • the decoder obtains the current decoded block and the previous illumination compensation information
  • a group of illumination compensation factors is used as the current illumination compensation information
  • a group of illumination compensation information is effective illumination compensation information.
  • the illumination compensation factor in the group of illumination compensation information is an effective illumination compensation factor.
  • when decoding a block, if illumination compensation can be performed, only one piece of illumination compensation information is available (or available for the current block) in the illumination compensation list, and the illumination compensation flag indicates that the current block uses illumination compensation, one possible implementation is to use the only available illumination compensation information in the list by default in this case, instead of determining the current illumination compensation information based on an illumination compensation model index, thereby saving the transmission of the illumination compensation model index and saving transmission resources.
  • FIG. 7 is a schematic diagram of the implementation flow of an inter-frame prediction method provided by an embodiment of the present application. As shown in FIG. 7, after obtaining the code stream in S101 and the current decoded block in S102, and before S104, the method further includes:
  • the illumination compensation model can be implicitly inherited
  • the current decoded block is subjected to illumination compensation using the implicitly inherited illumination compensation model to obtain the current predicted value.
  • when a block is coded and decoded, if the block meets the requirements, illumination compensation can be performed on it.
  • this requirement can be that the current block is a block in any inter-frame decoding mode. If the current block is in inter-frame skip mode or merge mode, one possible implementation is that it implicitly inherits the illumination compensation factor (that is, illumination compensation information) of the block it refers to; another possible implementation is that it cannot implicitly inherit the illumination compensation factor of the block it refers to. If the current block can perform illumination compensation but cannot implicitly inherit the illumination compensation factor of the block it refers to, a flag needs to be transmitted to indicate whether it uses illumination compensation.
  • if the current decoded block is a block in any inter-frame decoding mode, it is determined whether the current decoded block can implicitly inherit the illumination compensation model. If the illumination compensation model is not implicitly inherited and the previous illumination compensation information does not contain valid illumination compensation information (for example, illumination compensation factors), illumination compensation ends. If the illumination compensation model is not implicitly inherited and the previous illumination compensation information contains valid illumination compensation information, the code stream is parsed to obtain the illumination compensation flag. When the illumination compensation flag indicates that illumination compensation is to be performed and the previous illumination compensation information includes at least two sets of valid illumination compensation information, the code stream is parsed to obtain the illumination compensation model index, which instructs the decoder to use the illumination compensation mode information in the corresponding item of the list described above; using illumination compensation can improve prediction quality. In the embodiment of the present application, the illumination compensation model index represents the index information, within the previous illumination compensation information, of the illumination compensation information to be used.
  • if the illumination compensation model fails to be implicitly inherited and the previous illumination compensation information does not contain a valid illumination compensation factor, illumination compensation ends; that is, when a block is coded and decoded, the block may be eligible for illumination compensation, but there is no available illumination compensation model in the illumination compensation list, or none available for the current block.
  • One possible implementation is that in this case, it does not use illumination compensation by default without flag transmission.
  • illumination compensation is not used and no flag is transmitted, thereby saving transmission resources and reducing processing complexity.
  • the illumination compensation flag indicates that illumination compensation is performed, and the previous illumination compensation information includes a group of illumination compensation information
  • the group of illumination compensation information is taken as the current illumination compensation information.
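The signalling decisions above can be summarized in a short sketch. Here `read_flag` and `read_index` stand in for bitstream parsing calls and are hypothetical names; the branch structure follows the cases described in the text.

```python
def parse_ic_decision(can_inherit, ic_list, read_flag, read_index):
    """Sketch of the decoder-side illumination compensation signalling:
    implicit inheritance parses nothing; an empty list skips compensation
    without a flag; a single list entry is used without an index; two or
    more entries require parsing the illumination compensation model index.
    Returns the illumination compensation information to use, or None when
    compensation is skipped.
    """
    if can_inherit:
        return "inherited"            # implicit inheritance: nothing parsed
    if not ic_list:
        return None                   # no valid info: no flag transmitted
    if not read_flag():               # flag says no illumination compensation
        return None
    if len(ic_list) == 1:
        return ic_list[0]             # single entry: index not transmitted
    return ic_list[read_index()]      # >= 2 entries: parse the model index

print(parse_ic_decision(False, [("a0", "b0")], read_flag=lambda: True,
                        read_index=None))
```

The two "saving" cases (empty list, single entry) are exactly the ones where the bitstream carries fewer syntax elements.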
  • the decoder can use the implicitly inherited illumination compensation model to perform illumination compensation on the current decoded block to obtain the current predicted value.
  • the embodiment of the present application provides an inter-frame prediction method, which is applied to an encoder.
  • FIG. 8 is a schematic diagram of the implementation process of an inter-frame prediction method provided by an embodiment of the application. As shown in FIG. 8, the method includes:
  • S202: Determine current illumination compensation information from the previous illumination compensation information.
  • the encoder performs the encoding of the current encoding block after the encoding of the previous encoding block is completed. Therefore, the encoder can obtain the previous illumination compensation information obtained after the previous encoding is completed.
  • the encoder when the encoder is encoding the current encoding block, since the previous encoding block has been encoded, the encoder can obtain the current encoding block and the previous illumination compensation information obtained after the encoding of the previous block is completed.
  • the encoder can encode a video sequence.
  • the encoding is performed in units of blocks.
  • when the encoder is encoding the current encoding block, the previous encoding blocks have already been encoded.
  • the initial illumination compensation information is stored, and the initial illumination compensation information is empty.
  • a series of judgments can be made to determine whether an illumination compensation factor is generated; if so, it is updated into the previous illumination compensation information, and if not, no update is performed.
  • the previous illumination compensation information is updated based on the previous encoding block encoding.
  • the previous illumination compensation information includes the illumination compensation information of at least one reference block corresponding to historical encoding blocks that satisfy the illumination compensation rules and were encoded before the current encoding block. It should be noted that, in the embodiment of the present application, after the encoder performs motion compensation, it can further perform illumination compensation on the current coding block to implement inter-frame prediction.
  • the illumination compensation model is a processing method used for illumination compensation.
  • the illumination compensation model may be a linear model of the illumination change; the embodiment of the present application does not limit the type of the model.
  • the illumination compensation factor in the previous illumination compensation information is the coefficient in the illumination compensation model.
  • the previous illumination compensation information or the current illumination compensation information includes at least one item of illumination compensation information, and each item of illumination compensation information includes at least one set of illumination compensation factors, or at least one set of illumination compensation factors and the index of the corresponding reference frame; the at least one set of illumination compensation factors includes illumination compensation factors corresponding to at least one reference direction, or at least one reference frame list.
  • the at least one set of illumination compensation factors may include illumination compensation factors corresponding to one reference direction, or illumination compensation factors corresponding to two reference directions, where the two reference directions may be one forward and one backward, or both forward, which is not limited in the embodiments of the present application.
  • a group of illumination compensation factors may include: a scaling factor and an offset factor.
  • a and b are determined based on the previous illumination compensation information.
  • the pixel values of the current coding block have the same position distribution as the pixel values of the corresponding reference block, and the current prediction value corresponding to the current coding block is obtained by predicting from pixel values with the same position distribution.
  • the previous illumination compensation information may be stored in the form of a list; that is, the encoder stores an empty illumination compensation list at the start of encoding. During encoding, when each encoding block has been encoded, a series of judgments can determine whether illumination compensation information is generated; if so, the previous illumination compensation list is updated, and if not, the list is left unchanged.
  • each item in the previous illumination compensation list is a set of illumination compensation information
  • the illumination compensation information includes a set of illumination compensation factors
  • the compensation factor includes a scaling factor and an offset factor, that is, a set of scaling factor a and offset factor b.
  • the illumination compensation factor saved in the previous illumination compensation list includes the values of the scaling factor a and the offset factor b mentioned above.
  • one possible implementation is that the illumination compensation information saved in the list also includes the index of the reference frame
  • another possible implementation is that the illumination compensation information saved in the list does not include the index of the reference frame.
  • the reference direction of the current coding block can be one, or more than one, for example two reference directions. When the reference direction is one, the illumination compensation information saved in the previous illumination compensation list may include the index of the corresponding reference frame.
  • each item in the above list saves only one piece of illumination compensation information, that is, a single set of scaling factor a, offset factor b, and the index of the corresponding reference frame; or, in addition to the above information, it also includes a set of default scaling factor a, offset factor b, and the index of the corresponding reference frame; or, in addition to the above information, the illumination compensation information saved for an unused reference direction can be set as unavailable.
  • Another possible implementation manner is for two reference directions, each item of the above list includes the scaling factor a, the offset factor b, and the index of the reference frame corresponding to the two reference blocks.
  • the illumination compensation rule includes: when the current coding block is a coding block in a normal inter-frame coding mode; and/or a coding block in a merge mode. It should be noted that, in the embodiment of the present application, when the encoder is encoding, if the current encoding block being encoded meets the illumination compensation rules, the illumination compensation factor of the current encoding block can be calculated, and the index of the reference frame can be obtained.
  • the illumination compensation rule in this application may be that the current coding block is a coding block in the normal inter coding mode.
  • the normal inter coding mode refers to a mode that uses inter prediction and requires transmission of a motion vector difference (MVD) and a residual block; alternatively, the current coding block may be a coding block in a normal inter-frame coding mode or a merge mode, where the merge mode refers to a mode that uses inter-frame prediction and does not need to transmit the motion vector difference but needs to transmit the residual.
  • the encoder determines the current illumination compensation information from the previous illumination compensation information as follows: from the previous illumination compensation information, it is determined that the current reference frame is the same as the current encoding block The first candidate illumination compensation information;
  • the remaining illumination compensation information in the previous illumination compensation information, other than the first candidate illumination compensation information, is scaled based on the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the temporal distance between the frames where the current coding block is located, to obtain second candidate illumination compensation information; the current illumination compensation factor is then determined from the first candidate illumination compensation information and the second candidate illumination compensation information.
  • the encoder can determine the current illumination compensation factor from the previous illumination compensation information (the previous illumination compensation list) if the illumination compensation information saved in the previous illumination compensation list contains information about the reference frame, such as the reference block list index and the reference block index.
  • from the previous illumination compensation list, the illumination compensation factors whose reference frame is the same as the reference frame used by the current block are filtered out.
  • Another possible implementation also considers illumination compensation factors whose reference frame differs from the one used by the current block: they are scaled according to the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the temporal distance between the frames where the current coding block is located, and the optimal illumination compensation factor is then selected as the current illumination compensation factor.
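The source does not give the scaling formula for reusing a stored illumination model across different temporal distances; it only says the scaling is analogous to motion-vector scaling by the ratio of POC distances. The sketch below is one plausible interpretation under that assumption (the function name, the identity-model interpolation, and all parameters are assumptions, not from the source):

```python
def scale_illumination_factor(a, b, td_stored, td_current):
    """Scale a stored illumination model (a, b) by the ratio of temporal
    distances, analogous to motion-vector scaling. td_stored is the POC
    distance the stored model spans; td_current is the distance for the
    current block. Assumed rule: interpolate linearly between the
    identity model (a=1, b=0) and the stored model."""
    if td_stored == 0:
        return a, b  # degenerate case: keep the stored model unchanged
    ratio = td_current / td_stored
    return 1.0 + (a - 1.0) * ratio, b * ratio
```

With half the temporal distance, the model is pulled halfway toward identity: `scale_illumination_factor(1.2, 4.0, 2, 1)` gives roughly `(1.1, 2.0)`.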
  • the encoder may perform illumination compensation on the current encoding block according to the current illumination compensation information, thereby obtaining the current prediction value.
  • the encoder performs the inter-frame prediction method provided in the embodiment of the present application on the current coding block after motion compensation.
  • the encoder may input the pixel values of the reference block corresponding to the current coding block into the illumination compensation linear model, combined with the determined current illumination compensation information, to obtain the predicted value of the current coding block. It should be noted that, when the encoder in the embodiment of the present application performs inter-frame prediction, prediction is likewise carried out on a per-pixel basis.
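The per-pixel linear model described above (scaling factor a, offset factor b applied to each reference pixel) can be sketched as follows; the function name, the 2-D list representation, and the clipping to the sample bit depth are illustrative assumptions, not details from the source:

```python
def apply_illumination_compensation(ref_block, a, b, bit_depth=8):
    """Apply the linear model pred = a * ref + b to every pixel of the
    reference block, clipping the result to the valid sample range.
    ref_block is a 2-D list of integer pixel values."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(int(a * p + b), 0), max_val) for p in row]
            for row in ref_block]
```

For example, with a = 1.0 and b = 10 the reference row `[100, 200]` becomes `[110, 210]`, and values exceeding the bit depth are clipped.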
  • the encoder obtains the previous illumination compensation information obtained after the encoding of the previous coding block is completed, finds the illumination compensation information corresponding to the illumination compensation model index, and performs illumination compensation. Since the previous illumination compensation information is updated when the encoding of the previous coding block is completed, and it contains the illumination compensation information of at least one reference block corresponding to historical coding blocks that satisfy the illumination compensation rule and were encoded before the current coding block, the encoder can select from the illumination compensation information obtained from historical coding blocks when determining the current illumination compensation information to be used. The selection range is larger, so a more accurate value can be selected for illumination compensation, and the encoding accuracy is improved. Furthermore, only the illumination compensation information obtained from historical coding blocks that meet the illumination compensation rule is stored in the previous illumination compensation information; values that do not meet the rule are filtered out, which reduces the required storage space.
  • FIG. 9 is a schematic diagram of the implementation flow of an inter-frame prediction method provided by an embodiment of the present application. As shown in FIG. 9, after S203, the method further includes:
  • since the index of the reference frame of the current coding block can be obtained when the current coding block is determined, when the encoder obtains the current illumination compensation information, it is mainly for calculating the current illumination compensation factor.
  • after obtaining the current prediction value, the encoder continues encoding until the encoding of the current coding block is completed, and the coded block is obtained. At this point, the encoder needs to make a judgment: whether the coded block complies with the illumination compensation rule. If it complies, the illumination compensation factor of the coded block is calculated; specifically, the current reference block corresponding to the coded block is used, combined with the initial illumination compensation model, to calculate the latest illumination compensation information corresponding to the coded block.
  • the encoder judges whether the coded block meets the illumination compensation rule.
  • the coded block meets the illumination compensation rule when it is a coded block in the normal inter-frame coding mode and/or a coded block coded in the merge mode.
  • when the coded block satisfies the illumination compensation rule, in this embodiment of the application the encoder obtains the current pixel values of the coded block and the reference pixel values of the current reference block; then, based on the current pixel values and the reference pixel values, the latest illumination compensation information corresponding to the coded block is calculated.
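The source does not state how (a, b) are derived from the paired current/reference samples; a common choice for such a linear model, assumed here purely for illustration, is an ordinary least-squares fit (all names and the fallback for a constant reference are assumptions):

```python
def fit_illumination_model(cur_pixels, ref_pixels):
    """Fit scaling factor a and offset factor b minimizing
    sum((cur - (a*ref + b))^2) over the paired samples."""
    n = len(cur_pixels)
    sum_c = sum(cur_pixels)
    sum_r = sum(ref_pixels)
    sum_rc = sum(r * c for r, c in zip(ref_pixels, cur_pixels))
    sum_rr = sum(r * r for r in ref_pixels)
    denom = n * sum_rr - sum_r * sum_r
    if denom == 0:  # constant reference samples: fall back to pure offset
        return 1.0, (sum_c - sum_r) / n
    a = (n * sum_rc - sum_r * sum_c) / denom
    b = (sum_c - a * sum_r) / n
    return a, b
```

If the current samples are exactly `2 * ref + 3`, the fit recovers a = 2 and b = 3.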
  • the current pixel value is the pixel value corresponding to the first position of the coding block
  • the reference pixel value is the pixel value corresponding to the first position of the current reference block
  • the current pixel value is the pixel value obtained after the first down-sampling of the encoding block, and the reference pixel value is the pixel value obtained after the second down-sampling of the current reference block;
  • the current pixel value is a first preset number of pixel values determined in the encoding block, and the reference pixel value is a first preset number of pixel values determined in the current reference block;
  • the first position is at least one row or at least one column of at least one edge of the coding block or the current reference block; the first down-sampling and the second down-sampling are determined according to the size of the block, or according to the size and shape of the block.
  • the selected pixels can be taken from an edge of the current block and the reference block, such as one or several rows/columns of the top edge, bottom edge, left edge, or right edge; one or several of these edges can also be selected at the same time to obtain one or several illumination compensation factors. That is, the current pixel value is the pixel value corresponding to the first position of the coding block, and the reference pixel value is the pixel value corresponding to the first position of the current reference block. It is also possible to use all pixels of the current block and the reference block, or a regular down-sampling of all pixels.
  • for example, one pixel is taken every 2/4/8 pixels in the horizontal and vertical directions of the current block; that is, the first down-sampling or the second down-sampling samples every 2/4/8 pixels in the horizontal and vertical directions.
  • the down-sampling rules can also be distinguished according to the size of the block. For example, when the number of pixels in the current block is less than or equal to 64, one pixel is taken for every 2 pixels in the horizontal and vertical directions; when the number of pixels in the current block is greater than 64, one pixel is taken for every 4 pixels in the horizontal and vertical directions, which is not limited in the embodiment of the present application. Alternatively, a first preset number of pixels is taken; for example, regardless of the block size, 16 pixels are taken from each of the current block and the reference block. Alternatively, sampling rules are set according to different block sizes and shapes, which is not limited in the embodiment of the present application.
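The size-dependent down-sampling rule described above can be sketched as follows, using the example thresholds from the text (every 2 pixels for blocks of at most 64 pixels, every 4 otherwise); the function name and 2-D list representation are assumptions:

```python
def select_samples(block, step_small=2, step_large=4, size_threshold=64):
    """Pick one pixel every `step` pixels in both directions, where the
    step depends on the block size: <= size_threshold pixels -> step_small,
    otherwise step_large. block is a 2-D list of pixel values."""
    h, w = len(block), len(block[0])
    step = step_small if h * w <= size_threshold else step_large
    return [block[y][x] for y in range(0, h, step) for x in range(0, w, step)]
```

An 8x8 block (64 pixels) yields 16 samples with step 2; a 16x8 block (128 pixels) yields 8 samples with step 4.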
  • after the encoder calculates the latest illumination compensation information corresponding to the coded block, it needs to make a second judgment to determine whether the latest illumination compensation information is valid. If it is valid, the latest illumination compensation information is used to update the previous illumination compensation information; if it is invalid, the list is not updated.
  • if the latest illumination compensation information meets the preset threshold, the model is considered effective; otherwise, the model is considered invalid and the set of illumination compensation factors is discarded.
  • the encoder may use a preset threshold to make a second judgment.
  • the encoder uses the latest illumination compensation information to update the previous illumination compensation information to obtain the current illumination compensation information.
  • the current illumination compensation information is used when encoding the next coding block.
  • if the latest illumination compensation information does not meet the preset threshold, the previous illumination compensation information is used as the current illumination compensation information, and the current illumination compensation information is used when encoding the next coding block; each group of illumination compensation information it contains is valid illumination compensation information.
  • the encoder may use a preset threshold to make a secondary judgment, or other judgment methods may also be used, which is not limited in the embodiment of the present application.
  • the encoder uses the latest illumination compensation information to update the previous illumination compensation information to obtain the current illumination compensation information.
  • the implementation method may be as follows: if the latest illumination compensation information does not exist in the previous illumination compensation information and the total amount of the previous illumination compensation information is less than the preset amount of information, the latest illumination compensation information is appended at the end of the information to obtain the current illumination compensation information; if the latest illumination compensation information does not exist in the previous illumination compensation information and the total amount of the previous illumination compensation information is equal to the preset amount of information, the first illumination compensation factor at the head of the previous illumination compensation information is deleted, and the latest illumination compensation information is then appended at the end of the information to obtain the current illumination compensation information; if the i-th illumination compensation factor in the previous illumination compensation information is the same as the latest illumination compensation information, the i-th illumination compensation factor is deleted, and the latest illumination compensation information is then appended at the end of the information to obtain the current illumination compensation information, where i is an integer greater than or equal to 1.
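The three update cases just described (append while below capacity, first-in-first-out eviction when full, and moving a duplicate to the tail) can be sketched as a single routine; the function name, entry representation, and the value of the preset amount of information (`max_len=6`) are illustrative assumptions:

```python
def update_ic_list(ic_list, new_entry, max_len=6):
    """Update the illumination compensation list in place:
    - if an identical entry exists, delete it (the i-th entry),
    - otherwise, if the list is full, evict the oldest entry at the head,
    then append the new entry at the tail."""
    if new_entry in ic_list:
        ic_list.remove(new_entry)   # duplicate: drop the old copy
    elif len(ic_list) >= max_len:
        ic_list.pop(0)              # full: first-in first-out eviction
    ic_list.append(new_entry)
    return ic_list
```

A duplicate is thus refreshed to the most-recent position, while a full list loses its oldest model first.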
  • the illumination compensation information can be embodied in the form of an illumination compensation list. After the encoding of each block to be coded is completed, the latest illumination compensation information is obtained, and the previous illumination compensation list is updated.
  • the illumination compensation list can have a maximum length, that is, a maximum number of items: the preset amount of information.
  • the illumination compensation information is stored in the illumination compensation list.
  • insertion into the illumination compensation list can use the first-in-first-out rule; duplicates need to be checked and the list updated when a new item is inserted. That is, the latest illumination compensation information needs to be checked for duplicates.
  • the latest illumination compensation information is the item to be inserted.
  • a possible implementation is that when the respective values of the illumination compensation factors of the item to be inserted and an existing item are equal, they are considered duplicates: the existing item is deleted, and the item to be inserted is appended at the end of the queue. Another possible implementation is that when the reference frames corresponding to the illumination compensation models of the item to be inserted and an existing item are the same, and the differences between their values of a and b are less than a preset difference threshold, they are considered duplicates: the existing item is deleted, and the item to be inserted is appended at the end of the queue.
  • a and b are scaled according to the reference frames of the item to be inserted and the existing item, and are then compared.
  • the scaling method is similar to that of motion vectors; specifically, it is based on the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the temporal distance between the frames where the current coding block is located.
  • every time the encoder encodes a block, if the block complies with the illumination compensation rule, the encoder calculates its illumination compensation factor; if the illumination compensation factor meets the second-judgment requirement, it is inserted into the illumination compensation list to update the list.
  • in this way, the illumination compensation models that the current block can refer to are not limited to those of adjacent blocks, but include all previously coded blocks, so coding accuracy can be improved.
  • FIG. 10 is a schematic diagram of the implementation flow of an inter-frame prediction method provided by an embodiment of the present application. As shown in FIG. 10, after acquiring the current coding block in S201 and before S203, the Methods also include:
  • when the previous illumination compensation information includes at least two sets of effective illumination compensation information, the current illumination compensation information is determined from the previous illumination compensation information, an illumination compensation model index is generated, and the index is written into the code stream.
  • if the illumination compensation model can be implicitly inherited, the implicitly inherited illumination compensation model is used to perform illumination compensation on the current coding block to obtain the current predicted value.
  • when a block is coded or decoded, if the block meets the requirement, illumination compensation can be performed on it.
  • This requirement can be that the current block is a block in any inter coding mode. If the current block is in skip mode or merge mode, one possible implementation is that it implicitly inherits the illumination compensation factor (that is, the illumination compensation information) of the block it refers to; another possible implementation is that it cannot implicitly inherit the illumination compensation factor of the block it refers to. If the current block can perform illumination compensation but cannot implicitly inherit the illumination compensation factor of the block it refers to, a flag needs to be transmitted to indicate whether it uses illumination compensation.
  • when the current coding block is a block in any inter coding mode, it is determined whether the current coding block can implicitly inherit the illumination compensation model. If the illumination compensation model is not implicitly inherited, and the previous illumination compensation information does not contain effective illumination compensation information (for example, an illumination compensation factor), the illumination compensation is ended; if the illumination compensation model is not implicitly inherited, and the previous illumination compensation information contains illumination compensation information, an illumination compensation flag is generated and written into the code stream for use by the decoder.
  • when the illumination compensation flag indicates that illumination compensation is performed, and the previous illumination compensation information includes at least two sets of effective illumination compensation information, the current illumination compensation information is determined from the previous illumination compensation information, an illumination compensation model index is generated, and the index is written into the code stream for use by the decoder. If it is determined that illumination compensation can be used, the encoder needs to generate and transmit an illumination compensation model index to the decoder, instructing the decoder which item of illumination compensation model information in the list described above to use.
  • the illumination compensation model index represents the index information, within the previous illumination compensation information, of the illumination compensation information used during encoding.
  • if the illumination compensation model fails to be inherited implicitly, and the previous illumination compensation information does not contain valid illumination compensation information, the illumination compensation is ended. That is, when a block is coded or decoded, it may be able to perform illumination compensation, but there is no available illumination compensation model in the illumination compensation list, or no illumination compensation model is available for the current block.
  • One possible implementation is that, in this case, illumination compensation is not used by default and no flag is transmitted.
  • illumination compensation is not used and no flag is transmitted, thereby reducing transmission resources and processing complexity.
  • when the illumination compensation flag indicates that illumination compensation is performed, and the previous illumination compensation information includes one group of illumination compensation information, the group of illumination compensation information is taken as the current illumination compensation information.
  • the encoder can use the implicitly inherited illumination compensation model to perform illumination compensation on the current coding block to obtain the current predicted value.
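The signalling decisions described above (implicit inheritance, ending illumination compensation when the list is empty, transmitting a flag, and transmitting a model index only when the list holds two or more candidates) can be summarized in one sketch. The function name, the dictionary of syntax elements, and the placeholder index value are assumptions, not syntax from the source:

```python
def signal_illumination_compensation(can_inherit, ic_list, use_ic):
    """Return the syntax elements the encoder would write for the current
    block. ic_list is the previous illumination compensation information;
    use_ic is the encoder's mode decision for this block."""
    if can_inherit:
        return {}                      # model inherited implicitly, nothing signalled
    if not ic_list:
        return {}                      # no valid model: IC off by default, no flag
    syntax = {"ic_flag": bool(use_ic)}
    if use_ic and len(ic_list) >= 2:
        syntax["ic_model_idx"] = 0     # index chosen by the encoder (placeholder)
    return syntax
```

Note that a single-entry list needs only the flag: the decoder can take that sole group of illumination compensation information as the current illumination compensation information without an index.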
  • an embodiment of the present application provides a decoder 1, including:
  • the first obtaining unit 10 is configured to obtain a code stream
  • the first parsing unit 11 is configured to parse the illumination compensation model index from the code stream
  • the first acquiring unit 10 is further configured to acquire the current decoded block and the previous illumination compensation information; the previous illumination compensation information is updated when the decoding of the previous decoded block is completed, and the previous illumination compensation information contains illumination compensation information of at least one reference block corresponding to historical decoded blocks that satisfy the illumination compensation rule and were decoded before the current decoded block is decoded;
  • the first determining unit 12 is configured to determine current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index;
  • the first illumination compensation unit 13 is configured to obtain a current prediction value after performing illumination compensation on the current decoded block based on the current illumination compensation information.
  • the decoder 1 further includes: a decoding unit 14, a first calculation unit 15, and a first update unit 16;
  • the decoding unit 14 is configured to, after illumination compensation is performed on the current decoded block based on the current illumination compensation information to obtain the current predicted value, complete the decoding of the current decoded block based on the current predicted value to obtain the decoded block;
  • the first calculation unit 15 is configured to use the current reference block corresponding to the decoded block to calculate the latest lighting compensation information corresponding to the decoded block when the decoded block satisfies the illumination compensation rule;
  • the first update unit 16 is configured to use the latest illumination compensation information to update the previous illumination compensation information to obtain the current illumination compensation information when the latest illumination compensation information meets a preset threshold.
  • the current illumination compensation information is configured to be used when the next decoded block is decoded.
  • the first acquiring unit 10 is further configured to, after acquiring the current decoded block and the previous illumination compensation information, and before performing illumination compensation on the current decoded block based on the current illumination compensation information to obtain the current predicted value, take the group of illumination compensation information as the current illumination compensation information if the previous illumination compensation information includes one group of illumination compensation information; the group of illumination compensation information is effective illumination compensation information.
  • the previous illumination compensation information or the current illumination compensation information includes: at least one piece of illumination compensation information
  • Each item of illumination compensation information includes: at least one set of illumination compensation factors; or at least one set of illumination compensation factors and the index of the corresponding reference frame.
  • the first update unit 16 is further configured to: if the latest illumination compensation information does not exist in the previous illumination compensation information, and the total information amount of the previous illumination compensation information is less than the preset amount of information, append the latest illumination compensation information at the end of the information to obtain the current illumination compensation information; if the latest illumination compensation information does not exist in the previous illumination compensation information, and the total information amount of the previous illumination compensation information is equal to the preset amount of information, delete the first item of illumination compensation information arranged at the header of the previous illumination compensation information, and append the latest illumination compensation information at the end of the information to obtain the current illumination compensation information; if the i-th item of illumination compensation information in the previous illumination compensation information is the same as the latest illumination compensation information, delete the i-th item of illumination compensation information, and then append the latest illumination compensation information at the end of the information to obtain the current illumination compensation information; where i is an integer greater than or equal to 1 and less than or equal to the preset amount of information.
  • the illumination compensation rule includes: when the current decoded block is a decoded block obtained by encoding in a normal inter-frame coding mode; and/or a decoded block obtained by encoding in a merge mode.
  • the first calculation unit 15 is further configured to obtain the current pixel value of the decoded block and the reference pixel value of the current reference block; based on the current pixel value and the reference pixel value, the latest illumination compensation information corresponding to the decoded block is calculated.
  • the current pixel value is a pixel value corresponding to the first position of the decoding block
  • the reference pixel value is a pixel value corresponding to the first position of the current reference block
  • the current pixel value is a pixel value obtained after the first down-sampling of the decoding block
  • the reference pixel value is a pixel value obtained after the second down-sampling of the current reference block
  • the current pixel value is a first preset number of pixel values determined in the decoding block, and the reference pixel value is a first preset number of pixel values determined in the current reference block;
  • the first position is at least one row or at least one column of at least one edge of the decoding block or the current reference block; the first down-sampling and the second down-sampling are determined according to the size of the block, or according to The size and shape of the block are determined.
  • the first acquiring unit 10 is further configured to, after the latest illumination compensation information corresponding to the decoded block is calculated using the current reference block corresponding to the decoded block, use the previous illumination compensation information as the current illumination compensation information when the latest illumination compensation information does not meet the preset threshold; the current illumination compensation information is configured to be used when the next decoded block is decoded.
  • the first determining unit 12 is further configured to determine, from the previous illumination compensation information, first candidate illumination compensation information whose reference frame is the same as the current reference frame of the current decoded block, and to determine the current illumination compensation information from the first candidate illumination compensation information based on the illumination compensation model index; or, to scale the remaining illumination compensation information in the previous illumination compensation information, other than the first candidate illumination compensation information, according to the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the temporal distance between the frames where the current decoded block is located, to obtain second candidate illumination compensation information, and to determine the current illumination compensation information from the first candidate illumination compensation information and the second candidate illumination compensation information based on the illumination compensation model index.
  • the first obtaining unit 10 is further configured to: after obtaining the code stream and the current decoded block, and before performing illumination compensation on the current decoded block based on the current illumination compensation information to obtain the current predicted value, determine, when the current decoded block is a block in any inter coding mode, whether the current decoded block can implicitly inherit the illumination compensation model; if it fails to implicitly inherit the illumination compensation model, and the previous illumination compensation information does not contain effective illumination compensation information, end the illumination compensation; if it fails to implicitly inherit the illumination compensation model, and the previous illumination compensation information contains effective illumination compensation information, parse the code stream to obtain the illumination compensation flag; when the illumination compensation flag indicates that illumination compensation is performed, and the previous illumination compensation information includes at least two valid sets of illumination compensation information, parse the code stream to obtain the illumination compensation model index; when the illumination compensation flag indicates that illumination compensation is performed, and the previous illumination compensation information includes one valid set of illumination compensation information, use that set of illumination compensation information as the current illumination compensation information.
  • the decoder can obtain the illumination compensation model index for illumination compensation from the code stream, so that the illumination compensation information corresponding to the illumination compensation model index can be found in the previous illumination compensation information obtained after the decoding of the previous decoded block is completed, and illumination compensation can then be performed. Since the previous illumination compensation information is updated when the previous decoded block is decoded, and it contains the illumination compensation information of at least one reference block corresponding to historical decoded blocks that satisfy the illumination compensation rule and were decoded before the current decoded block, the decoder can select from the illumination compensation factors obtained from historical decoded blocks to determine the current illumination compensation information to be used. The selection range becomes larger, so a more accurate value can be selected for illumination compensation, and the decoding accuracy is improved. Moreover, the previous illumination compensation information stores only the illumination compensation information obtained from historical decoded blocks that meet the illumination compensation rule; values that do not meet the rule are filtered out, which reduces the required storage space.
  • an embodiment of the present application also provides a decoder, including:
  • a first processor 17 and a first memory 18 storing instructions executable by the first processor 17. The first memory 18 performs operations through the first communication bus 19 in dependence on the first processor 17. When the instructions are executed by the first processor 17, the above-mentioned inter-frame prediction method on the decoder side is executed.
  • the first processor may be implemented by software, hardware, firmware, or a combination thereof, and may use circuits, a single or multiple application-specific integrated circuits (ASIC), a single or multiple general-purpose integrated circuits, a single or multiple microprocessors, a single or multiple programmable logic devices, a combination of the foregoing circuits or devices, or other suitable circuits or devices, so that the first processor can perform the corresponding steps of the inter-frame prediction method on the decoder side in the foregoing embodiments.
  • an embodiment of the present application provides a computer-readable storage medium storing executable instructions; when the executable instructions are executed by one or more first processors, the first processor(s) perform the decoder-side inter prediction method described above.
  • each unit may exist alone physically, or two or more units may be integrated into one unit;
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of a software function module;
  • if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium;
  • on this understanding, the technical solution of this embodiment, in essence, or the part contributing to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product;
  • the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment.
  • the aforementioned storage media include: ferromagnetic random access memory (FRAM), read-only memory (ROM, Read-Only Memory), programmable read-only memory (PROM, Programmable Read-Only Memory), erasable programmable read-only memory (EPROM, Erasable Programmable Read-Only Memory), electrically erasable programmable read-only memory (EEPROM, Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic surface memory, optical discs, compact disc read-only memory (CD-ROM, Compact Disc Read-Only Memory), and various other media that can store program code; the embodiments of the present application impose no limitation on this.
  • an encoder 2 including:
  • the second acquiring unit 20 is configured to acquire a current coding block and previous illumination compensation information; the previous illumination compensation information is obtained by updating when coding of the previous coding block is completed, and contains illumination compensation information of at least one reference block corresponding to coded historical coding blocks that satisfy the illumination compensation rule before the current coding block is coded;
  • the second determining unit 21 is configured to determine current illumination compensation information from the previous illumination compensation information
  • the second illumination compensation unit 22 is configured to obtain the current prediction value after performing illumination compensation on the current encoding block based on the current illumination compensation information.
  • the encoder 2 further includes: an encoding unit 23, a second calculation unit 24, and a second update unit 25;
  • the encoding unit 23 is configured to, after the current prediction value is obtained by performing illumination compensation on the current coding block based on the current illumination compensation information, complete coding of the current coding block based on the current prediction value to obtain a coded block;
  • the second calculation unit 24 is configured to calculate the latest illumination compensation information corresponding to the coded block by using the current reference block corresponding to the coded block, when the coded block satisfies the illumination compensation rule;
  • the second update unit 25 is configured to, when the latest illumination compensation information satisfies a preset threshold, update the previous illumination compensation information with the latest illumination compensation information to obtain the current-round illumination compensation information, which is configured to be used when the next coding block is coded.
  • the second determining unit 21 is further configured to determine, from the previous illumination compensation information, first candidate illumination compensation information that has the same current reference frame as the current coding block, and to determine the current illumination compensation information from the first candidate illumination compensation information; or, to scale the remaining illumination compensation information in the previous illumination compensation information other than the first candidate illumination compensation information according to the temporal distances among the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the frame where the current coding block is located, to obtain second candidate illumination compensation information, and then to determine the current illumination compensation information from the first candidate illumination compensation information and the second candidate illumination compensation information.
  • the encoder 2 further includes: a judging unit 26 and an ending unit 27;
  • the judging unit 26 is configured to, after the current coding block is obtained and before the current prediction value is obtained by performing illumination compensation on the current coding block based on the current illumination compensation information, judge whether the current coding block can implicitly inherit an illumination compensation model when the current coding block is a block in any inter coding mode;
  • the end unit 27 is configured to end the illumination compensation if the illumination compensation model cannot be implicitly inherited and the previous illumination compensation information contains no valid illumination compensation information;
  • the second acquiring unit 20 is further configured to generate an illumination compensation flag and write it into the code stream if the illumination compensation model cannot be implicitly inherited and the previous illumination compensation information contains valid illumination compensation information;
  • the second determining unit 21 is further configured to, when the illumination compensation flag indicates that illumination compensation is to be performed and the previous illumination compensation information includes at least two valid sets of illumination compensation information, determine the current illumination compensation information from the previous illumination compensation information, generate an illumination compensation model index, and write it into the code stream;
  • the second acquiring unit 20 is further configured to use the set of illumination compensation information as the current illumination compensation information when the illumination compensation flag indicates that illumination compensation is to be performed and the previous illumination compensation information includes one valid set of illumination compensation information.
  • the second acquiring unit 20 is further configured to judge whether the current coding block can implicitly inherit an illumination compensation model, and, if the illumination compensation model can be implicitly inherited, to obtain the current prediction value after performing illumination compensation on the current coding block using the implicitly inherited illumination compensation model.
  • the previous illumination compensation information or the current illumination compensation information includes: at least one piece of illumination compensation information
  • Each item of illumination compensation information includes: at least one set of illumination compensation factors; or at least one set of illumination compensation factors and the index of the corresponding reference frame.
  • the second update unit 25 is further configured to: if the latest illumination compensation information does not exist in the previous illumination compensation information and the total amount of information of the previous illumination compensation information is less than a preset amount of information, append the latest illumination compensation information at the tail of the information to obtain the current-round illumination compensation information; if the latest illumination compensation information does not exist in the previous illumination compensation information and the total amount of information of the previous illumination compensation information is equal to the preset amount of information, delete the first item of illumination compensation information arranged at the head of the previous illumination compensation information and then append the latest illumination compensation information at the tail of the information to obtain the current-round illumination compensation information; if there is an i-th item of illumination compensation information in the previous illumination compensation information that is the same as the latest illumination compensation information, delete the i-th item of illumination compensation information and then append the latest illumination compensation information at the tail of the information to obtain the current-round illumination compensation information; where i is an integer greater than or equal to 1 and less than or equal to the preset amount of information.
  • the illumination compensation rule includes: the current coding block is a coding block in a normal inter coding mode; and/or a coding block in merge mode.
  • the second calculation unit 24 is further configured to obtain the current pixel values of the coded block and the reference pixel values of the current reference block, and to calculate the latest illumination compensation information corresponding to the coded block based on the current pixel values and the reference pixel values.
  • the current pixel value is a pixel value corresponding to the first position of the coding block
  • the reference pixel value is a pixel value corresponding to the first position of the current reference block
  • the current pixel value is a pixel value obtained after the first down-sampling of the coding block
  • the reference pixel value is a pixel value obtained after the second down-sampling of the current reference block
  • the current pixel value is a first preset number of pixel values determined in the encoding block, and the reference pixel value is a first preset number of pixel values determined in the current reference block;
  • the first position is at least one row or at least one column of at least one edge of the coding block or the current reference block; the first down-sampling and the second down-sampling are determined according to the size of the block, or according to the size and shape of the block.
  • the second acquiring unit 20 is further configured to, after the latest illumination compensation information corresponding to the coding block is calculated by using the current reference block corresponding to the coding block, use the previous illumination compensation information as the current-round illumination compensation information when the latest illumination compensation information does not satisfy the preset threshold; the current-round illumination compensation information is configured to be used when the next coding block is coded.
  • the encoder obtains the previous illumination compensation information obtained after coding of the previous coding block was completed, finds the illumination compensation information corresponding to the illumination compensation model index, and thereby performs illumination compensation. Since the previous illumination compensation information is obtained by updating when coding of the previous coding block is completed, and contains illumination compensation information of at least one reference block corresponding to coded historical coding blocks that satisfy the illumination compensation rule before the current coding block is coded, the encoder can choose among the illumination compensation information obtained from historical coding blocks to determine the current illumination compensation information to be finally used. The selection range becomes larger, and a more accurate value can be selected for illumination compensation; therefore, the coding accuracy is improved. Furthermore, the previous illumination compensation information stores only the illumination compensation information obtained from historical coding blocks that satisfy the illumination compensation rule, so values that do not comply with the illumination compensation rule are filtered out, reducing the storage space.
  • an encoder, including:
  • a second processor 28 and a second memory 29 storing instructions executable by the second processor 28;
  • the second memory 29 depends on the second processor 28, through a second communication bus 210, to perform operations; when the instructions are executed by the second processor 28, the above-mentioned encoder-side inter prediction method is performed;
  • the second processor may be implemented by software, hardware, firmware, or a combination thereof, and may use circuits, one or more application-specific integrated circuits, one or more general-purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, or a combination of the foregoing circuits or devices, or other suitable circuits or devices, so that the second processor can perform the corresponding steps of the encoder-side inter prediction method in the foregoing embodiments.
  • an embodiment of the present application provides a computer-readable storage medium storing executable instructions; when the executable instructions are executed by one or more second processors, the second processor(s) perform the encoder-side inter prediction method described above.
  • this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram;
  • these computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • an embodiment of the application discloses an inter prediction method, including: obtaining a code stream, and parsing an illumination compensation model index from the code stream; obtaining a current decoding block and previous illumination compensation information, where the previous illumination compensation information is obtained by updating when decoding of the previous decoding block is completed, and contains illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy the illumination compensation rule before the current decoding block is decoded; determining current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index; and obtaining a current prediction value after performing illumination compensation on the current decoding block based on the current illumination compensation information;
  • the embodiments of the present application also provide an encoder, a decoder, and a computer-readable storage medium.
  • the decoder can obtain, from the code stream, the illumination compensation model index for illumination compensation, and can thus find, in the previous illumination compensation information obtained after decoding of the previous decoding block was completed, the illumination compensation information corresponding to the illumination compensation model index;
  • the previous illumination compensation information is obtained by updating when the previous decoding block is decoded, and contains illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy the illumination compensation rule before the current decoding block is decoded;
  • the decoder can therefore choose among the illumination compensation information obtained from historical decoding blocks to determine the current illumination compensation information to be finally used;
  • the selection range becomes larger and a more accurate value can be selected for illumination compensation, so the decoding accuracy is improved;
  • the previous illumination compensation information stores only illumination compensation information obtained from historical decoding blocks that satisfy the illumination compensation rule, so values that do not comply with the illumination compensation rule are filtered out, reducing the storage space.


Abstract

An embodiment of the present application discloses an inter prediction method, including: obtaining a code stream, and parsing an illumination compensation model index from the code stream; obtaining a current decoding block and previous illumination compensation information, where the previous illumination compensation information is obtained by updating when decoding of the previous decoding block is completed, and contains illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy an illumination compensation rule before the current decoding block is decoded; determining current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index; and obtaining a current prediction value after performing illumination compensation on the current decoding block based on the current illumination compensation information. The embodiments of the present application also provide an encoder, a decoder, and a computer-readable storage medium.

Description

Inter Prediction Method, Encoder, Decoder, and Computer-Readable Storage Medium
Cross-Reference to Related Applications
This application is filed on the basis of the Chinese patent application No. 202010144579.4, filed on March 4, 2020, and claims priority to that Chinese patent application, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of video encoding and decoding, and in particular to an inter prediction method, an encoder, a decoder, and a computer-readable storage medium.
Background
Current mainstream video coding and decoding standards all adopt a block-based hybrid coding framework. Each frame of a video is partitioned into square largest coding units (LCU, largest coding unit) or largest decoding units of the same size (such as 128x128 or 64x64). Each largest coding unit or largest decoding unit may be divided into rectangular coding blocks or decoding blocks according to rules. Because strong correlation exists between neighboring pixels within one frame of a video, intra prediction is used in video coding and decoding technology to remove the spatial redundancy between neighboring pixels, thereby improving coding efficiency.
In inter prediction, local illumination compensation is used to eliminate brightness differences caused by lighting changes, changes of illumination angle caused by object motion, shadow changes caused by relative motion between foreground and background objects, brightness changes introduced in post-production, and the like; the current block (a coding block or a decoding block) and its reference block may be similar in texture but different in brightness. In this case, local illumination compensation is applied to the current block: certain pixels among the reconstructed pixels in the column to the left of and the row above the current block, together with the pixels at corresponding positions in the column to the left of and the row above the reference block, are used to determine the illumination compensation factors of the current block and to perform illumination compensation.
However, when computing the illumination compensation factors, local illumination compensation needs certain pixels among the reconstructed pixels in the column to the left of and the row above the current block, as well as the pixels at corresponding positions in the column to the left of and the row above the reference block. This increases storage in hardware implementations, and the limited computation range affects the final encoding and decoding accuracy.
Summary
Embodiments of the present application provide an inter prediction method, an encoder, a decoder, and a computer-readable storage medium.
The technical solutions of the present application are implemented as follows:
An embodiment of the present application provides an inter prediction method, applied to a decoder, including:
obtaining a code stream, and parsing an illumination compensation model index from the code stream;
obtaining a current decoding block and previous illumination compensation information, where the previous illumination compensation information is obtained by updating when decoding of the previous decoding block is completed, and contains illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy an illumination compensation rule before the current decoding block is decoded;
determining current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index; and
obtaining a current prediction value after performing illumination compensation on the current decoding block based on the current illumination compensation information.
An embodiment of the present application provides an inter prediction method, applied to an encoder, including:
obtaining a current coding block and previous illumination compensation information, where the previous illumination compensation information is obtained by updating when coding of the previous coding block is completed, and contains illumination compensation information of at least one reference block corresponding to coded historical coding blocks that satisfy an illumination compensation rule before the current coding block is coded;
determining current illumination compensation information from the previous illumination compensation information; and
obtaining a current prediction value after performing illumination compensation on the current coding block based on the current illumination compensation information.
An embodiment of the present application provides a decoder, including:
a first acquiring unit, configured to acquire a code stream;
a first parsing unit, configured to parse an illumination compensation model index from the code stream;
the first acquiring unit being further configured to acquire a current decoding block and previous illumination compensation information, where the previous illumination compensation information is obtained by updating when decoding of the previous decoding block is completed, and contains illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy the illumination compensation rule before the current decoding block is decoded;
a first determining unit, configured to determine current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index; and
a first illumination compensation unit, configured to obtain a current prediction value after performing illumination compensation on the current decoding block based on the current illumination compensation information.
An embodiment of the present application provides an encoder, including:
a second acquiring unit, configured to acquire a current coding block and previous illumination compensation information, where the previous illumination compensation information is obtained by updating when coding of the previous coding block is completed, and contains illumination compensation information of at least one reference block corresponding to coded historical coding blocks that satisfy the illumination compensation rule before the current coding block is coded;
a second determining unit, configured to determine current illumination compensation information from the previous illumination compensation information; and
a second illumination compensation unit, configured to obtain a current prediction value after performing illumination compensation on the current coding block based on the current illumination compensation information.
An embodiment of the present application further provides a decoder, including: a first processor and a first memory storing instructions executable by the first processor, where the first memory depends on the first processor, through a first communication bus, to perform operations; when the instructions are executed by the first processor, the above-mentioned decoder-side inter prediction method is performed.
An embodiment of the present application further provides an encoder, including: a second processor and a second memory storing instructions executable by the second processor, where the second memory depends on the second processor, through a second communication bus, to perform operations; when the instructions are executed by the second processor, the above-mentioned encoder-side inter prediction method is performed.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions; when the executable instructions are executed by one or more first processors, the first processor(s) perform the above-mentioned decoder-side inter prediction method; or, when the executable instructions are executed by one or more second processors, the second processor(s) perform the above-mentioned encoder-side inter prediction method.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the composition of an exemplary video encoding/decoding network architecture provided by an embodiment of the present application;
Figure 2 is a block diagram of an exemplary encoder provided by an embodiment of the present application;
Figure 3 is a block diagram of an exemplary decoder provided by an embodiment of the present application;
Figure 4 is a first schematic flowchart of an optional example of inter prediction corresponding to a decoder provided by an embodiment of the present application;
Figure 5 is a second schematic flowchart of an optional example of inter prediction corresponding to a decoder provided by an embodiment of the present application;
Figure 6 is a third schematic flowchart of an optional example of inter prediction corresponding to a decoder provided by an embodiment of the present application;
Figure 7 is a fourth schematic flowchart of an optional example of inter prediction corresponding to a decoder provided by an embodiment of the present application;
Figure 8 is a first schematic flowchart of an optional example of inter prediction corresponding to an encoder provided by an embodiment of the present application;
Figure 9 is a second schematic flowchart of an optional example of inter prediction corresponding to an encoder provided by an embodiment of the present application;
Figure 10 is a third schematic flowchart of an optional example of inter prediction corresponding to an encoder provided by an embodiment of the present application;
Figure 11 is a first schematic structural diagram of a decoder provided by an embodiment of the present application;
Figure 12 is a second schematic structural diagram of a decoder provided by an embodiment of the present application;
Figure 13 is a first schematic structural diagram of an encoder provided by an embodiment of the present application;
Figure 14 is a second schematic structural diagram of an encoder provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application.
Concepts such as inter prediction and video encoding/decoding are first introduced below.
The main function of predictive encoding/decoding is to construct, in video encoding/decoding, a prediction value of the block currently being processed using reconstructed pictures already available in space or time, and to transmit only the difference between the original value and the prediction value, so as to reduce the amount of transmitted data.
A frame that can be coded with inter prediction has one or more reference frames. For a current block of a current frame that can be coded with inter prediction, where the current block may be a coding unit or a prediction unit, one motion vector (MV, motion vector) may be used to indicate, in a certain reference frame, a pixel region of the same size as the current block, referred to here as a reference block; alternatively, two motion vectors may be used to indicate two reference blocks in two reference frames that may be the same or different. Motion compensation (MC, motion compensation) derives the prediction value of the current coding unit from the reference block(s) indicated by the motion vector(s).
On the basis of the above concepts, an embodiment of the present application provides a network architecture of a video encoding/decoding system including an inter prediction method. Figure 1 is a schematic diagram of the composition of the video encoding/decoding network architecture according to an embodiment of the present application. As shown in Figure 1, the network architecture includes one or more electronic devices 111 to 11N and a communication network 01, where the electronic devices 111 to 11N can perform video interaction through the communication network 01. In implementation, the electronic device may be any of various types of devices with video encoding/decoding functions; for example, the electronic device may include a mobile phone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital telephone, a video telephone, a television set, a sensing device, a server, and so on, which is not limited in the embodiments of the present application. The encoder or decoder performing inter prediction in the embodiments of the present application may be the above-mentioned electronic device.
The electronic device in the embodiments of the present application has encoding and decoding functions, and generally includes an encoder and a decoder.
Exemplarily, referring to Figure 2, the composition of the encoder 21 includes: a transform and quantization unit 211, an intra estimation unit 212, an intra prediction unit 213, a motion compensation unit 214, a motion estimation unit 215, an inverse transform and inverse quantization unit 216, a filter control analysis unit 217, a filtering unit 218, an entropy coding unit 219, a decoded picture buffer unit 210, an illumination compensation unit 2110, and the like. The filtering unit 218 can implement deblocking filtering and sample adaptive offset (Sample Adaptive Offset, SAO) filtering, and the entropy coding unit 219 can implement header information coding and context-based adaptive binary arithmetic coding (Context-based Adaptive Binary Arithmetic Coding, CABAC). For input source video data, a block to be coded of the current video frame can be obtained through partitioning into coding tree units (Coding Tree Unit, CTU); then, after intra prediction or inter prediction is performed on the block to be coded, the obtained residual information is transformed by the transform and quantization unit 211, which includes transforming the residual information from the pixel domain to the transform domain and quantizing the resulting transform coefficients to further reduce the bit rate. The intra estimation unit 212 and the intra prediction unit 213 are configured to perform intra prediction on the block to be coded, for example, to determine the intra prediction mode used to code it. The motion compensation unit 214, the motion estimation unit 215, and the illumination compensation unit 2110 are configured to perform inter prediction coding of the block to be coded relative to one or more blocks in one or more reference frames so as to provide prediction information: the motion estimation unit 215 is configured to estimate a motion vector, which can estimate the motion of the block to be coded; the motion compensation unit 214 then performs motion compensation based on the motion vector; and the illumination compensation unit 2110 performs illumination compensation after the motion compensation. After the intra prediction mode is determined, the intra prediction unit 213 is further configured to provide the selected intra prediction data to the entropy coding unit 219, and the motion estimation unit 215 also sends the calculated motion vector data to the entropy coding unit 219. In addition, the inverse transform and inverse quantization unit 216 is configured for reconstruction of the block to be coded: a residual block is reconstructed in the pixel domain, blocking artifacts of the reconstructed residual block are removed through the filter control analysis unit 217 and the filtering unit 218, and the reconstructed residual block is then added to a predictive block in a frame of the decoded picture buffer unit 210 to generate a reconstructed video coding block. The entropy coding unit 219 is configured to code various coding parameters and the quantized transform coefficients; in a CABAC-based coding algorithm, the context may be based on neighboring coding blocks, and the unit may be configured to code information indicating the determined intra prediction mode and to output the code stream of the video data. The decoded picture buffer unit 210 is configured to store reconstructed video coding blocks for prediction reference. As video coding proceeds, new reconstructed video coding blocks are continuously generated, and all of these reconstructed video coding blocks are stored in the decoded picture buffer unit 210.
It should be noted that, in the embodiments of the present application, illumination compensation may also be performed as a part of motion compensation in the inter prediction process; that is, the illumination compensation unit 2110 may be included in the motion compensation unit 214, which is not shown in the figure.
The decoder 22 corresponding to the encoder 21 has the composition shown in Figure 3, including: an entropy decoding unit 221, an inverse transform and inverse quantization unit 222, an intra prediction unit 223, a motion compensation unit 224, an illumination compensation unit 227, a filtering unit 225, a decoded picture buffer unit 226, and the like, where the entropy decoding unit 221 can implement header information decoding and CABAC decoding, and the filtering unit 225 can implement deblocking filtering and SAO filtering. After the input video signal undergoes the coding process of Figure 2, the code stream of the video signal is output. The code stream is input into the video decoder 22 and first passes through the entropy decoding unit 221 to obtain the decoded transform coefficients; the transform coefficients are processed by the inverse transform and inverse quantization unit 222 to generate a residual block in the pixel domain. The intra prediction unit 223 may be configured to generate prediction data of the current decoding block based on the determined intra prediction mode and data of previously decoded blocks from the current frame or picture. The motion compensation unit 224 determines the prediction information of the current decoding block by parsing motion vectors; after the illumination compensation unit 227 performs illumination compensation on the prediction information, a predictive block of the current decoding block being decoded is generated. A decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 222 and the corresponding predictive block generated by the intra prediction unit 223 or the motion compensation unit 224; the decoded video block passes through the filtering unit 225 to remove blocking artifacts, thereby improving the video quality. The decoded video block is then stored in the decoded picture buffer unit 226, which is configured to store reference pictures for subsequent intra prediction or motion compensation and is also configured for output and display of the video signal.
It should be noted that, in the embodiments of the present application, illumination compensation may also be performed as a part of motion compensation in the inter prediction process; that is, the illumination compensation unit 227 may be included in the motion compensation unit 224, which is not shown in the figure.
On this basis, the technical solutions of the present application are further elaborated below with reference to the accompanying drawings and embodiments. The inter prediction method provided in the embodiments of the present application is a kind of prediction within the inter prediction process of predictive encoding/decoding, and can be applied either in the encoder 21 or in the decoder 22, which is not specifically limited in the embodiments of the present application.
The inter prediction method provided in the embodiments of the present application is mainly the illumination compensation process implemented in the illumination compensation unit.
An embodiment of the present application provides an inter prediction method, which is applied in a decoder.
Figure 4 is a schematic flowchart of an implementation of an inter prediction method provided by an embodiment of the present application. As shown in Figure 4, the method includes:
S101: Obtain a code stream, and parse an illumination compensation model index from the code stream.
S102: Obtain a current decoding block and previous illumination compensation information; the previous illumination compensation information is obtained by updating when decoding of the previous decoding block is completed, and contains illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy an illumination compensation rule before the current decoding block is decoded.
S103: Determine current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index.
S104: Obtain a current prediction value after performing illumination compensation on the current decoding block based on the current illumination compensation information.
In S101, in this embodiment of the present application, after obtaining the code stream transmitted by the encoder, the decoder can parse the code stream to obtain the illumination compensation model index generated by the encoder during encoding.
It should be noted that, in this embodiment, when the decoder has received an illumination compensation flag and the flag indicates that illumination compensation can be performed, the decoder can obtain the illumination compensation model index from the code stream.
In this embodiment, the illumination compensation model index represents the index information corresponding to the illumination compensation information, within the previous illumination compensation information, that was used during encoding.
It should be noted that, in this embodiment, after performing motion compensation, the decoder can perform illumination compensation on the current decoding block to implement inter prediction.
Here, the illumination compensation model is the processing means used for performing illumination compensation. In this embodiment, the illumination compensation model may be a linear model responsive to illumination changes; the embodiments of the present application do not limit the type of the model.
In some embodiments of the present application, the previous illumination compensation information or the current-round illumination compensation information includes at least one item of illumination compensation information, and each item of illumination compensation information includes: at least one set of illumination compensation factors; or at least one set of illumination compensation factors and an index of the corresponding reference frame, where the at least one set of illumination compensation factors includes illumination compensation factors corresponding to at least one reference direction, or at least one reference frame list. In other words, the previous illumination compensation information or the current-round illumination compensation information can be understood as an information set including at least one item of illumination compensation information.
It should be noted that, in this embodiment, the at least one set of illumination compensation factors may include: illumination compensation factors corresponding to one reference direction; or illumination compensation factors corresponding to two reference directions, where the two reference directions may be one forward and one backward, or both forward, which is not limited in the embodiments of the present application.
In this embodiment, a set of illumination compensation factors may include a scale factor and an offset factor.
Exemplarily, the linear model uses a scale factor a and an offset factor b. Let the prediction value of the current decoding block at position (x, y) be Y(x, y), and the pixel value of the reference block at position (x, y) be X(x, y); then Y(x, y) = a * X(x, y) + b, where a and b are determined based on the previous illumination compensation information.
It should be noted that, in this embodiment, the position distribution of the pixel values of the current decoding block is consistent with that of the pixel values of the corresponding reference block; by predicting pixel values with a consistent position distribution, the current prediction value corresponding to the current decoding block is obtained.
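As a minimal sketch of how the linear model above is applied, element-wise over the motion-compensated reference block (the array layout, data type, and function name here are illustrative, not from the source):

```python
import numpy as np

def apply_illumination_compensation(reference_block: np.ndarray,
                                    a: float, b: float) -> np.ndarray:
    """Apply the linear illumination compensation model Y = a * X + b
    to every pixel of the (motion-compensated) reference block."""
    # Each predicted pixel Y(x, y) is derived from the co-located
    # reference pixel X(x, y), so the position distributions match.
    return a * reference_block + b

ref = np.array([[100, 102], [104, 106]], dtype=np.float64)
pred = apply_illumination_compensation(ref, a=1.5, b=-20.0)
```

In a real codec the result would additionally be clipped to the valid sample range; that step is omitted here for brevity.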
In S102, when decoding the current decoding block, since decoding of the previous decoding block has already been completed, the decoder can obtain the current decoding block and the previous illumination compensation information obtained after the previous block was decoded.
It should be noted that, in this embodiment, the decoder can decode a video sequence; when decoding a video sequence, decoding is performed block by block, and when the decoder is decoding the current decoding block, the previous decoding block has already been decoded. At the start of decoding, the decoder stores initial illumination compensation information, which is empty; each time a decoding block is decoded, a series of judgments can be made to decide whether illumination compensation factors are generated. If so, they are updated into the previous illumination compensation information; if not, no update is performed.
In this embodiment, the previous illumination compensation information is obtained by updating when decoding of the previous decoding block is completed, and contains illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy the illumination compensation rule before the current decoding block is decoded.
It should be noted that, in this embodiment, the previous illumination compensation information may store illumination compensation information in the form of a list; that is, at the start of decoding, the decoder stores an empty illumination compensation list. During decoding, each time a decoding block is decoded, a series of judgments can be made to decide whether illumination compensation information (specifically, illumination compensation factors) is generated; if so, the list is updated; if not, the illumination compensation list is not updated.
Exemplarily, each entry of the previous illumination compensation list is a set of illumination compensation information. If the illumination compensation information is illumination compensation factors, then in this embodiment a set of illumination compensation factors includes a scale factor and an offset factor, i.e., a pair of scale factor a and offset factor b.
In this embodiment, the illumination compensation information stored in the previous illumination compensation list includes the values of the scale factor a and the offset factor b mentioned above. In one possible implementation, the illumination compensation information stored in the list also includes the index of the reference frame; in another possible implementation, it does not. It should be noted that, in this embodiment, the current decoding block may have one reference direction or at least one, for example two, reference directions. Therefore, when there is one reference direction, if the illumination compensation information stored in the previous illumination compensation list includes the index of the corresponding reference frame, one possible implementation is that each entry of the list stores only one piece of illumination compensation information, i.e., only one set of scale factor a, offset factor b, and the index of the corresponding reference frame; or, in addition to the above information, the entry also includes a set of default-valued scale factor a, offset factor b, and the index of the corresponding reference frame; or, in addition to the above information, the illumination compensation information stored for the unused reference direction may be set as unavailable. Another possible implementation, for two reference directions, is that each entry of the list includes the scale factor a, the offset factor b, and the reference frame index corresponding to each of the two reference blocks.
In some embodiments of the present application, the illumination compensation rule includes: the current decoding block is a decoding block obtained by coding in a normal inter coding mode; and/or a decoding block obtained by coding in merge mode.
It should be noted that, in this embodiment, during decoding, if the current decoding block being decoded complies with the illumination compensation rule, the illumination compensation factors of the current decoding block can be computed and the index of the reference frame obtained. The illumination compensation rule in this application may be that the current decoding block is a decoding block obtained by coding in a normal inter coding mode, where the normal inter decoding mode refers to a block that uses inter prediction and needs to transmit a motion vector difference (MVD, motion vector difference) and a residual; or that the current decoding block is a decoding block obtained by coding in a normal inter coding mode or in merge mode, where merge mode refers to a block that uses inter prediction and does not need to transmit a motion vector difference but needs to transmit a residual.
It should be noted that the present application does not limit the order of execution of S101 and S102.
In S103, after obtaining the illumination compensation model index and the previous illumination compensation information, the decoder can determine the current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index.
In some embodiments of the present application, determining the current illumination compensation information from the previous illumination compensation information based on the illumination compensation model index is implemented as follows: determining, from the previous illumination compensation information, first candidate illumination compensation information that has the same current reference frame as the current decoding block, and determining the current illumination compensation information from the first candidate illumination compensation information based on the illumination compensation model index; or, scaling the remaining illumination compensation information in the previous illumination compensation information other than the first candidate illumination compensation information according to the temporal distances among the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the frame where the current decoding block is located, to obtain second candidate illumination compensation information, and then determining the current illumination compensation information from the first candidate illumination compensation information and the second candidate illumination compensation information based on the illumination compensation model index.
In this embodiment, the decoder can determine the current illumination compensation information from the previous illumination compensation information (the previous illumination compensation list). If the illumination compensation information stored in the previous illumination compensation list contains reference frame information, such as a reference picture list index and a reference index, one possible implementation is to filter out, from the previous illumination compensation list, the illumination compensation factors whose reference frame is the same as the reference frame used by the current block; another possible implementation is, for reference frames different from the one used by the current block, to use the illumination compensation factors after scaling them according to the temporal distances among the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the frame where the current decoding block is located, so as to select the illumination compensation factors with the best result as the current illumination compensation factors. It should be noted that the decoder may select, from the previous illumination compensation information, the illumination compensation information indicated by the illumination compensation model index as the current illumination compensation information. Other ways of determining the illumination compensation information will be described in later embodiments.
In S104, the decoder can perform illumination compensation on the current decoding block according to the current illumination compensation information to obtain the current prediction value.
It should be noted that, in this embodiment, the decoder applies the inter prediction method provided by this embodiment of the present application to the current decoding block on which motion compensation has been performed.
In some embodiments of the present application, the decoder can input the pixel values of the reference block corresponding to the current decoding block into the illumination compensation linear model, combined with the determined current illumination compensation factors (the current illumination compensation information), to obtain the prediction value of the current decoding block.
It should be noted that the decoder in this embodiment of the present application also performs inter prediction on a per-pixel basis.
It can be understood that the decoder can obtain from the code stream the illumination compensation model index for illumination compensation, and can thus find, in the previous illumination compensation information obtained after decoding of the previous decoding block was completed, the illumination compensation information corresponding to the illumination compensation model index, and then perform illumination compensation. Since the previous illumination compensation information is obtained by updating when decoding of the previous decoding block is completed, and contains illumination compensation information of at least one reference block corresponding to decoded historical decoding blocks that satisfy the illumination compensation rule before the current decoding block is decoded, the decoder can choose among the illumination compensation factors obtained from historical decoding blocks to determine the current illumination compensation information to be finally used. The selection range becomes larger, and a more accurate value can be selected for illumination compensation; therefore, the decoding accuracy is improved. Furthermore, the previous illumination compensation information stores only the illumination compensation information obtained from historical decoding blocks that satisfy the illumination compensation rule, so values that do not comply with the illumination compensation rule are filtered out, reducing the storage space.
In some embodiments of the present application, Figure 5 is a schematic flowchart of an implementation of an inter prediction method provided by an embodiment of the present application. As shown in Figure 5, after S104, the method further includes:
S105: Complete decoding of the current decoding block based on the current prediction value to obtain a decoded block.
S106: When the decoded block satisfies the illumination compensation rule, compute the latest illumination compensation information corresponding to the decoded block by using the current reference block corresponding to the decoded block.
S107: When the latest illumination compensation information satisfies a preset threshold, update the previous illumination compensation information with the latest illumination compensation information to obtain the current-round illumination compensation information; the current-round illumination compensation information is used when the next decoding block is decoded.
S108: When the latest illumination compensation information does not satisfy the preset threshold, use the previous illumination compensation information as the current-round illumination compensation information; the current-round illumination compensation information is used when the next decoding block is decoded.
It should be noted that, since the index of the reference frame of the current decoding block can be obtained once the current decoding block is determined, the decoder obtains the current illumination compensation information mainly in order to compute the current illumination compensation factors.
In this embodiment, after obtaining the current prediction value, the decoder continues the decoding process until decoding of the current decoding block is completed and a decoded block is obtained. After completing decoding of the current decoding block, the decoder needs to make a judgment as to whether the decoded block complies with the illumination compensation rule; if it does, the illumination compensation factors of the decoded block are computed. In detail, the current reference block corresponding to the decoded block is used, in combination with the initial illumination compensation model, to compute the latest illumination compensation information corresponding to the decoded block.
In this embodiment, the decoder judges whether the decoded block satisfies the illumination compensation rule: when the decoded block is a decoding block obtained by coding in a normal inter coding mode and/or a decoding block obtained by coding in merge mode, the decoded block satisfies the illumination compensation rule. In that case, the decoder obtains the current pixel values of the decoded block and the reference pixel values of the current reference block, and then computes the latest illumination compensation information corresponding to the decoded block based on the current pixel values and the reference pixel values.
In some embodiments of the present application, the current pixel values are the pixel values corresponding to a first position of the decoded block, and the reference pixel values are the pixel values corresponding to the first position of the current reference block; or,
the current pixel values are pixel values obtained after first down-sampling of the decoded block, and the reference pixel values are pixel values obtained after second down-sampling of the current reference block; or,
the current pixel values are a first preset number of pixel values determined in the decoded block, and the reference pixel values are a first preset number of pixel values determined in the current reference block;
where the first position is at least one row or at least one column of at least one edge of the decoded block or the current reference block, and the first down-sampling and the second down-sampling are determined according to the size of the block, or according to the size and shape of the block.
It should be noted that, when computing the latest illumination compensation factors in the latest illumination compensation information, the selected pixels may be one row/column or several rows/columns of a certain edge of the current block and the reference block, such as the top edge, the bottom edge, the left edge, or the right edge; one or several of these may also be selected at the same time to derive one or several illumination compensation factors. That is, the current pixel values are the pixel values corresponding to the first position of the decoded block, and the reference pixel values are the pixel values corresponding to the first position of the current reference block. Alternatively, all pixels of the current block and the reference block may be used, or some rule of down-sampling of all pixels, such as taking one pixel out of every 2/4/8 pixels of the current block in the horizontal and vertical directions; that is, the first or second down-sampling samples every 2/4/8 pixels in the horizontal and vertical directions of the current block. The down-sampling rule may also be distinguished according to the block size; for example, when the number of pixels of the current block is less than or equal to 64, one pixel is taken out of every 2 pixels in the horizontal and vertical directions, and when the number of pixels of the current block is greater than 64, one pixel is taken out of every 4 pixels in the horizontal and vertical directions, which is not limited in the embodiments of the present application. Alternatively, a first preset number of pixels may be specified; for example, regardless of the block size, 16 pixels are taken from each of the current block and the reference block. Alternatively, sampling rules may be set according to different sizes and shapes of the block, which is not limited in the embodiments of the present application.
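The source leaves open how the factors a and b are actually derived from the sampled pixel pairs; a least-squares fit over the down-sampled current and reference pixels is one common choice. A sketch under that assumption (the size-dependent stride follows the 2-pixel/4-pixel rule described above; all function names are illustrative):

```python
import numpy as np

def downsample(block: np.ndarray) -> np.ndarray:
    # Size-dependent rule from the text: stride 2 when the block has
    # at most 64 pixels, stride 4 otherwise, in both directions.
    stride = 2 if block.size <= 64 else 4
    return block[::stride, ::stride]

def fit_illumination_factors(cur: np.ndarray, ref: np.ndarray):
    """Fit Y = a * X + b over (reference, reconstructed) pixel pairs.
    A least-squares fit is one possible derivation; the source does
    not fix the exact computation."""
    x = downsample(ref).ravel().astype(np.float64)
    y = downsample(cur).ravel().astype(np.float64)
    a, b = np.polyfit(x, y, 1)  # degree-1 fit: slope a, intercept b
    return a, b
```

The same routine would be run on the encoder side and the decoder side so that both maintain identical factor lists without extra signalling.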
After computing the latest illumination compensation information corresponding to the decoded block, the decoder needs to make a second judgment as to whether the latest illumination compensation factors in the latest illumination compensation information are valid. Only if they are valid is the previous illumination compensation information, or the previous illumination compensation list, updated with the latest illumination compensation information; if they are invalid, no update is performed. In other words, if the computed a and b of the illumination compensation model each fall within a reasonable value range, the model is considered valid; otherwise, the model is considered invalid and this set of illumination compensation factors is discarded.
In some embodiments of the present application, the decoder may use a preset threshold for the second judgment. When the latest illumination compensation information satisfies the preset threshold, the decoder updates the previous illumination compensation information with the latest illumination compensation information to obtain the current-round illumination compensation information, which is used when the next decoding block is decoded. When the latest illumination compensation information does not satisfy the preset threshold, the previous illumination compensation information is used as the current-round illumination compensation information, which is used when the next decoding block is decoded; this set of illumination compensation information is valid illumination compensation information.
In this embodiment, the decoder may use a preset threshold for the second judgment, or may use other judgment methods, which is not limited in the embodiments of the present application.
In this embodiment, updating the previous illumination compensation information with the latest illumination compensation information to obtain the current-round illumination compensation information may be implemented as follows: if the latest illumination compensation information does not exist in the previous illumination compensation information and the total amount of information of the previous illumination compensation information is less than a preset amount of information, the latest illumination compensation information is appended at the tail of the information to obtain the current-round illumination compensation information; if the latest illumination compensation information does not exist in the previous illumination compensation information and the total amount of information of the previous illumination compensation information is equal to the preset amount of information, the first item of illumination compensation information arranged at the head of the previous illumination compensation information is deleted, and the latest illumination compensation information is then appended at the tail of the information to obtain the current-round illumination compensation information; if there is an i-th item of illumination compensation information in the previous illumination compensation information that is the same as the latest illumination compensation information, the i-th item of illumination compensation information is deleted, and the latest illumination compensation information is then appended at the tail of the information to obtain the current-round illumination compensation information, where i is an integer greater than or equal to 1 and less than or equal to the preset amount of information. It should be noted that the illumination compensation information may take the form of an illumination compensation list; after decoding of each block to be decoded is completed and the latest illumination compensation information is obtained, the previous illumination compensation list is updated. The illumination compensation list may have a maximum length, i.e., a number of entries equal to the preset amount of information. Each entry of the illumination compensation list stores illumination compensation information. Insertion into the illumination compensation list may use a first-in-first-out rule. When a new entry is inserted, duplicate checking is needed and the list is updated; that is, the latest illumination compensation information, which is the entry to be inserted, must be checked for duplicates. One possible implementation is that, when each value of the illumination compensation factors of the entry to be inserted equals that of an existing entry, the two are considered duplicates; the existing entry is deleted and the entry to be inserted is inserted at the tail of the queue. Another possible implementation is that, when the reference frames corresponding to the illumination compensation models of the entry to be inserted and an existing entry are the same and the values of a and b differ by less than a preset difference threshold, the two are considered duplicates; the existing entry is deleted and the entry to be inserted is inserted at the tail of the queue.
In some embodiments of the present application, if scaling is allowed, a and b are scaled according to the reference frames of the entry to be inserted and the existing entry before comparison; the scaling method is similar to that of motion vector scaling. Specifically, the scaling needs to be performed according to the temporal distances among the reference frame used by the illumination compensation model, the reference frame where the current reference block is located, and the frame where the current decoding block is located.
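The text only states that the scaling is "similar to the motion vector scaling method", i.e., proportional to the temporal distances (picture-order-count differences); the concrete formula below is therefore an assumption for illustration, not the patent's definition:

```python
def scale_factor_by_temporal_distance(a: float,
                                      stored_ref_poc: int,
                                      stored_cur_poc: int,
                                      cur_ref_poc: int,
                                      cur_poc: int) -> float:
    """Rescale a stored scale factor a by the ratio of temporal
    distances (POC differences), in the spirit of MV scaling.
    This concrete formula is an illustrative assumption only."""
    stored_dist = stored_cur_poc - stored_ref_poc  # distance of the stored model
    cur_dist = cur_poc - cur_ref_poc               # distance for the current block
    return a * cur_dist / stored_dist
```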
It can be understood that, each time the decoder finishes decoding a block, if the block complies with the illumination compensation rule, its illumination compensation factors are computed; if these illumination compensation factors meet the requirements of the second judgment, they are inserted into the illumination compensation list and the illumination compensation list is updated. When a block is to be decoded, if it needs illumination compensation, suitable illumination compensation factors are selected from the illumination compensation list. In this way, the illumination compensation models that the current block can refer to are no longer limited to neighboring blocks but extend to all previously decoded blocks, which can improve the decoding accuracy.
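The list maintenance just described (bounded length, first-in-first-out eviction, duplicate removal before inserting at the tail) can be sketched as follows; the entry layout, names, and maximum length are illustrative:

```python
from collections import namedtuple

# Illustrative entry layout: one set of factors plus a reference index.
Entry = namedtuple("Entry", ["a", "b", "ref_idx"])

def update_ic_list(ic_list: list, newest: Entry, max_len: int = 4) -> list:
    """Insert the newest illumination compensation entry at the tail,
    removing any exact duplicate first and evicting the head (oldest
    entry) when the list is full, i.e., a FIFO with de-duplication."""
    ic_list = [e for e in ic_list if e != newest]  # duplicate check
    if len(ic_list) == max_len:
        ic_list.pop(0)  # list full: drop the oldest (head) entry
    ic_list.append(newest)
    return ic_list
```

The second duplicate criterion mentioned above (same reference frame and a/b within a difference threshold) would replace the equality test in the list comprehension.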
In some embodiments of the present application, Figure 6 is a schematic flowchart of an implementation of an inter prediction method provided by an embodiment of the present application. As shown in Figure 6, another implementation of obtaining the current illumination compensation information, without using the illumination compensation model index, is as follows: after S102 and before S104, the method further includes:
S109: If the previous illumination compensation information includes one set of illumination compensation information, use the set of illumination compensation information as the current illumination compensation information; the set of illumination compensation information is valid illumination compensation information.
In this embodiment, after the decoder obtains the current decoding block and the previous illumination compensation information, if the previous illumination compensation information includes one set of illumination compensation information, the set of illumination compensation factors is used as the current illumination compensation information; however, this set of illumination compensation information must be valid illumination compensation information. In detail, the illumination compensation factors in this set of illumination compensation information should be valid illumination compensation factors. It should be noted that, when decoding a block, if illumination compensation can be performed, the illumination compensation list contains only one available piece of illumination compensation information or only one piece available to the current block, and the illumination compensation flag determines that the current block uses illumination compensation, one possible implementation is to use by default, in this case, the only available illumination compensation information in the list without determining the current illumination compensation information based on an illumination compensation model index, which can save transmission of the illumination compensation model index and thus save transmission resources.
In some embodiments of the present application, Figure 7 is a schematic flowchart of an implementation of an inter prediction method provided by an embodiment of the present application. As shown in Figure 7, after obtaining the code stream in S101, after obtaining the current decoding block in S102, and before S104, the method further includes:
S110: When the current decoding block is a block in any inter coding mode, judge whether the current decoding block can implicitly inherit an illumination compensation model.
S111: If the illumination compensation model cannot be implicitly inherited and the previous illumination compensation information contains no valid illumination compensation information, end the illumination compensation.
S112: If the illumination compensation model cannot be implicitly inherited and the previous illumination compensation information contains valid illumination compensation information, parse the code stream to obtain an illumination compensation flag.
S113: When the illumination compensation flag indicates that illumination compensation is to be performed and the previous illumination compensation information includes at least two valid sets of illumination compensation information, parse the code stream to obtain the illumination compensation model index.
S114: When the illumination compensation flag indicates that illumination compensation is to be performed and the previous illumination compensation information includes one valid set of illumination compensation information, use the set of illumination compensation information as the current illumination compensation information.
After S110, the method further includes:
S115: If the illumination compensation model can be implicitly inherited, obtain the current prediction value after performing illumination compensation on the current decoding block using the implicitly inherited illumination compensation model.
在本申请实施例中,当编解码一个块时,如果这个块符合要求,那么它可以进行光照补偿。这个要求可以是当前块是一个任何帧间解码模式的块,如果当前块是帧间跳过(skip)模式或合并(merge)模式;一种可能的实现是它隐式地继承它所参考块的光照补偿因子(即光照补偿信息);另一种可能的实现是它不能隐式地继承它所参考的块的光照补偿因子。如果当前块可以进行光照补偿但它不能隐式地继承它所参考的块的光照补偿因子,需要传输一个标志(flag)指示它是否使用光照补偿。
需要说明的是,当当前解码块为任何帧间解码模式的块,则判断当前解码块是否能隐式地继承光照补偿模型。若未能隐式地继承光照补偿模型,且前一次光照补偿信息不包含有有效的光照补偿信息(例如:光照补偿因子)时,则结束光照补偿;若未能隐式地继承光照补偿模型,且前一次光照补偿信息包含有有效的光照补偿信息时,则解析码流,得到光照补偿标志。当光照补偿标志指示进行光照补偿,且前一次光照补偿信息中包括有效的至少两组光照补偿信息时,解析码流,得到光照补偿模型索引,指示解码器使用如上所述的列表中的第几个项中的光照补偿模型信息,使用光照补偿可以提高预测质量。其中,在本申请实施例中,光照补偿模型索引表征具体在前一次光照补偿信息中解码时使用的光照补偿信息对应的索引信息。
若未能隐式地继承光照补偿模型,且前一次光照补偿信息不包含有有效的光照补偿因子时,则结束光照补偿,也就是说,当编解码一个块时,如果它可以进行光照补偿,但光照补偿列表中没有可用的光照补偿模型或没有当前块可用的光照补偿模型,一种可能的实现方式是在这种情况下默认它不使用光照补偿而不用进行flag传输。
可以理解的是,默认不使用光照补偿而不用进行flag传输从而减少传输资源和降低处理复杂度。
当光照补偿标志指示进行光照补偿,且前一次光照补偿信息中包括一组光照补偿信息时,将一组光照补偿信息作为当前光照补偿信息。也就是说,当编解码一个块时,如果它可以进行光照补偿,光照补偿模型列表中只有一个可用的光照补偿模型或当前块可用的光照补偿模型,且光照补偿模型的标志确定当前块使用光照补偿,一种可能的实现方式是在这种情况下默认它使用列表中唯一可用的光照补偿信息而不用传输光照补偿模型索引。可以理解的是,默认使用列表中唯一可用的光照补偿信息而不用传输光照补偿模型 索引可以减少传输资源。
在本申请实施例中,若能隐式地继承光照补偿模型,则解码器可以采用隐式地继承的光照补偿模型对当前解码块进行光照补偿后,得到当前预测值。
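上述S110至S115的判断流程可以用如下示意性代码概括(Python;其中 block.can_inherit 等属性名、bitstream.read_flag/read_index 等码流解析接口均为本文为说明而假设的,并非标准语法元素):

```python
def decide_ic_for_block(block, ic_list, bitstream):
    """解码一个帧间块时确定光照补偿信息的流程(示意性草图)。

    block.can_inherit: 当前块是否能隐式地继承所参考块的光照补偿模型;
    ic_list: 前一次光照补偿信息中有效的各项;
    bitstream.read_flag / read_index: 假设的码流解析接口。
    返回当前使用的光照补偿信息,返回 None 表示结束光照补偿。
    """
    if block.can_inherit:                  # S115:隐式继承,无需解析标志和索引
        return block.inherited_ic
    if not ic_list:                        # S111:无有效信息,结束光照补偿
        return None
    use_ic = bitstream.read_flag()         # S112:解析码流,得到光照补偿标志
    if not use_ic:
        return None
    if len(ic_list) >= 2:                  # S113:至少两组,解析光照补偿模型索引
        return ic_list[bitstream.read_index()]
    return ic_list[0]                      # S114:唯一一组,默认使用,节省索引传输
```

编码器侧S208至S213的流程与之对称,区别仅在于标志与索引由编码器生成并写入码流,而非从码流中解析。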
本申请实施例提供一种帧间预测方法,该方法应用于编码器中。
图8为本申请实施例提供的一种帧间预测方法的实现流程示意图,如图8所示,该方法包括:
S201、获取当前编码块和前一次光照补偿信息;前一次光照补偿信息是基于前一个编码块编码完成时更新得到的,前一次光照补偿信息中包含有在当前编码块进行编码之前,满足光照补偿规则且已编码的历史编码块对应的至少一个参考块的光照补偿信息。
S202、从前一次光照补偿信息中,确定当前光照补偿信息。
S203、基于当前光照补偿信息,对当前编码块进行光照补偿后,得到当前预测值。
在S201中,在本申请实施例中,编码器在进行完前一个编码块的编码后,接着进行当前编码块的编码,于是,编码器可以获取到前一次编码完成后更新得到的前一次光照补偿信息了。
也就是说,编码器在进行当前编码块的编码时,由于上一块编码块已经编码完成了,因此,编码器是可以获取当前编码块和上一块编码完成后得到的前一次光照补偿信息的。
需要说明的是,在本申请实施例中,编码器是可以对视频序列进行编码的,当编码器在针对一个视频序列进行编码时,是以块为单位进行编码的,当编码器在编码当前编码块时,上一个或前一个编码块已经编码完成了。且在编码器初始编码时,是存储有初始光照补偿信息的,初始光照补偿信息是空的,在每个编码块编码完成时可以经过一系列的判断,来决定是不是有光照补偿因子生成,若有,则更新至上一次或前一次光照补偿信息中,若没有,则不更新。
在本申请实施例中,前一次光照补偿信息是基于前一个编码块编码完成时更新得到的。其中,前一次光照补偿信息中包含有在当前编码块进行编码之前,满足光照补偿规则且已编码的历史编码块对应的至少一个参考块的光照补偿信息。需要说明的是,在本申请实施例中,编码器在进行运动补偿之后,可以对当前编码块再次进行光照补偿,来实现帧间预测。
其中,光照补偿模型为用于进行光照补偿所采用的处理手段。在本申请实施例中,光照补偿模型可以为一个对光照变化进行补偿的线性模型,本申请实施例不限制模型的类型。前一次光照补偿信息中的光照补偿因子为光照补偿模型中的系数。
在本申请的一些实施例中,前一次光照补偿信息或当前次光照补偿信息中包括:至少一项光照补偿信息;而每一项光照补偿信息包括:至少一组光照补偿因子;或者至少一组光照补偿因子和对应参考帧的索引;其中,至少一组光照补偿因子包括:至少一个参考方向对应的光照补偿因子,或者,至少一个参考帧列表。需要说明的是,在本申请实施例中,至少一组光照补偿因子可以包括:一个参考方向对应的光照补偿因子;或者,两个参考方向对应的光照补偿因子,其中,两个参考方向可能是一个前向一个后向,或者两个参考方向都是前向,本申请实施例不作限制。
在本申请实施例中,一组光照补偿因子可以包括:一个缩放因子和一个偏移因子。示例性的,线性模型使用一个缩放因子a和一个偏移因子b,设当前编码块在(x,y)位置的预测值为Y(x,y),参考块在(x,y)位置的像素值为X(x,y),则Y(x,y)=a*X(x,y)+b。其中,a和b是基于前一次光照补偿信息确定的。
需要说明的是,在本申请实施例中,当前编码块的像素值与对应参考块的像素值的位置分布一致,通过预测位置分布一致的像素值,从而得到当前编码块对应的当前预测值。
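上述线性模型的逐像素计算可以示意如下(Python;其中对结果按位深钳位取整为本文为说明而假设的常见做法,原文未作规定):

```python
import numpy as np

def illumination_compensate(ref_block, a, b, bit_depth=8):
    """按线性模型 Y(x,y) = a * X(x,y) + b 对参考块进行光照补偿。

    ref_block: 参考块像素值 X,与当前块位置分布一致的二维数组;
    a, b: 缩放因子与偏移因子,来自确定出的当前光照补偿信息;
    结果四舍五入并钳位到位深允许的取值范围(钳位为示意性假设)。
    """
    pred = a * ref_block.astype(np.float64) + b
    max_val = (1 << bit_depth) - 1
    return np.clip(np.rint(pred), 0, max_val).astype(np.int64)
```

由于当前块与参考块像素位置分布一致,补偿后得到的数组即为当前块对应的当前预测值。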
需要说明的是,在本申请实施例中,前一次光照补偿信息可以是以列表的形式来存储光照补偿信息的;也就是说,编码器在编码初始时,是存储有一个空的光照补偿列表的,在编码的过程中,每个编码块编码完成时可以经过一系列的判断,来决定是不是有光照补偿信息生成,若有,则更新至上一次或前一次光照补偿列表中,若没有,则不更新光照补偿列表。
示例性的,在前一次光照补偿列表中,前一次光照补偿列表中的每一项为一组光照补偿信息,光照补偿信息中包括一组光照补偿因子时,在本申请实施例中一组光照补偿因子包括缩放因子和偏移因子,即一组缩放因子a和偏移因子b。
在本申请实施例中,前一次光照补偿列表中保存的光照补偿因子包括上面所提到的缩放因子a和偏移因子b的值,一种可能的实现是该列表中保存的光照补偿信息还包括参考帧的索引,另一种可能的实现是该列表中保存的光照补偿信息不包括参考帧的索引。需要说明的是,在本申请实施例中,当前编码块的参考方向可以为一个,也可以为至少一个,例如,两个参考方向,因此,当参考方向为一个时,如果前一次光照补偿列表中保存的光照补偿信息包括对应的参考帧的索引,一种可能的实现方式是上述列表的每一项只保存一个光照补偿信息,即只有一组缩放因子a、偏移因子b和对应参考帧的索引;或者为除上述信息外,还包括一组默认值的缩放因子a、偏移因子b和对应参考帧的索引,或为除上述信息外,未用到的参考方向上保存的光照补偿信息可设为不可用。另一种可能的实现方式是针对两个参考方向时,上述列表的每一项分别包括两个参考块对应的缩放因子a、偏移因子b和参考帧的索引。
在本申请的一些实施例中,光照补偿规则包括:当当前编码块为普通帧间编码模式的编码块;和/或者为合并模式的编码块。需要说明的是,在本申请实施例中,编码器在进行编码时,正在编码的当前编码块如果符合光照补偿规则,就可以计算出当前编码块的光照补偿因子,并获取参考帧的索引,而本申请中的光照补偿规则可以为当前编码块为普通帧间编码模式的编码块,普通帧间编码模式指使用帧间预测并需要传输运动矢量差值(MVD,motion vector difference)和残差(residual)的块;也可以是当前编码块为普通帧间编码模式或合并模式的编码块,其中,合并模式指使用帧间预测,不需要传输运动矢量差值但需要传输残差的块。
在S202中,在本申请的一些实施例中,编码器从前一次光照补偿信息中,确定当前光照补偿信息的实现方式为:从前一次光照补偿信息中,确定出与当前编码块具有相同当前参考帧的第一候选光照补偿信息;
从第一候选光照补偿信息中,确定出当前光照补偿信息;或者,
将前一次光照补偿信息中除第一候选光照补偿信息之外的剩余光照补偿信息,按照光照补偿模型所采用的参考帧,当前参考块所在参考帧,以及当前编码块所在帧之间的时间距离进行缩放,得到第二候选光照补偿信息,再从第一候选光照补偿信息和第二候选光照补偿信息中,确定出当前光照补偿信息。
在本申请实施例中,编码器可以从前一次光照补偿信息(前一次光照补偿列表)中确定当前光照补偿因子,如果前一次光照补偿列表中保存的光照补偿信息中包含参考帧的信息,如参考帧列表索引及参考帧索引,一种可能的实现方式是从前一次光照补偿列表中筛选出参考帧与当前块所用参考帧相同的光照补偿因子,另一种可能的实现方式是对参考帧与当前块所用参考帧不相同的光照补偿信息,依据光照补偿模型所采用的参考帧、当前参考块所在参考帧,以及当前编码块所在帧之间的时间距离进行缩放后使用,从而选择出结果最优的光照补偿因子作为当前光照补偿因子。
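按时间距离缩放光照补偿因子的做法类似于运动矢量的缩放,可以示意如下(Python;原文未规定具体公式,此处按距离比例线性缩放增益偏差 (a-1) 与偏移 b,仅为本文假设的一种做法):

```python
def scale_ic_factors(a, b, dist_model, dist_current):
    """按时间距离对光照补偿因子 (a, b) 进行缩放(示意性草图)。

    dist_model: 光照补偿模型所采用的参考帧与其所属帧之间的时间距离;
    dist_current: 当前参考块所在参考帧与当前编码块所在帧之间的时间距离。
    假设光照变化随时间距离近似线性累积,故按距离比例缩放 (a-1) 与 b;
    该假设与具体公式均非原文规定。
    """
    if dist_model == 0:
        return a, b
    ratio = dist_current / dist_model
    a_scaled = 1.0 + (a - 1.0) * ratio
    b_scaled = b * ratio
    return a_scaled, b_scaled
```

缩放后的候选与参考帧相同的候选一起参与比较,选出结果最优的一组作为当前光照补偿因子。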
在S203中,在本申请实施例中,编码器可以根据当前光照补偿信息,对当前编码块进行光照补偿后,从而得到当前预测值。
需要说明的是,在本申请实施例中,编码器是针对进行了运动补偿的当前编码块进行的本申请实施例提供的帧间预测方法的。
在本申请的一些实施例中,编码器可以根据光照补偿线性模型,结合确定的当前光照补偿信息,将与当前编码块对应的参考块的像素值输入,以得到当前编码块的预测值。需要说明的是,本申请实施例中的编码器在进行帧间预测时,也是基于像素为单元来实现预测的。
可以理解的是,编码器获取在上一个编码块编码完成后得到的前一次光照补偿信息,找到光照补偿模型索引对应的光照补偿信息,从而进行光照补偿了,由于前一次光照补偿信息是基于前一个编码块编码完成时更新得到的,所述前一次光照补偿信息中包含有在当前编码块进行解码之前,满足光照补偿规则且已编码的历史编码块对应的至少一个参考块的光照补偿信息,因此,编码器可以针对历史编码块得到的光照补偿信息进行选择,确定出最终使用的当前光照补偿信息,选择范围变多,可以选择出更精确的数值进行光照补偿,因此,编码的精度会提升,再者,前一次光照补偿信息中存储的只有满足光照补偿规则的历史编码块得到的光照补偿信息,因此,筛掉了不符合光照补偿规则的数值,减少了存储空间。
在本申请的一些实施例中,图9为本申请实施例提供的一种帧间预测方法的实现流程示意图,如图9所示,在S203之后,该方法还包括:
S204、基于当前预测值,完成当前编码块的编码,得到编码块。
S205、当编码块满足光照补偿规则时,采用与编码块对应的当前参考块,计算出编码块对应的最新光照补偿信息。
S206、当最新光照补偿信息满足预设阈值时,采用最新光照补偿信息,更新前一次光照补偿信息,得到当前次光照补偿信息,当前次光照补偿信息用于在下一个编码块编码时使用。
S207、当最新光照补偿信息不满足预设阈值时,将前一次光照补偿信息,作为当前次光照补偿信息,当前次光照补偿信息用于在下一个编码块编码时使用。
需要说明的是,由于在当前编码块确定的情况下,可以获取到当前编码块的参考帧的索引,因此,编码器在获取当前光照补偿信息时,主要是为了计算当前光照补偿因子的。
在本申请实施例中,编码器在获取当前预测值后,继续进行编码处理,直至完成当前编码块的编码,得到编码块,这时,编码器完成当前编码块的编码之后,需要进行一次判断,判断编码块是不是符合光照补偿规则,若符合,那么就计算该编码块的光照补偿因子,详细的,采用与编码块对应的当前参考块,结合初始光照补偿模型,计算出编码块对应的最新光照补偿信息。
在本申请实施例中,编码器判断编码块是否满足光照补偿规则,当编码块为普通帧间编码模式的编码块,和/或为合并模式的编码块时,则编码块满足光照补偿规则。在这种情况下,在本申请实施例中,编码器就获取编码块的当前像素值,以及与当前参考块对应的参考像素值;再基于当前像素值和参考像素值,计算出编码块对应的最新光照补偿信息。
在本申请的一些实施例中,当前像素值为编码块的第一位置对应的像素值,参考像素值为当前参考块的第一位置对应的像素值;或者,
当前像素值为编码块进行第一下采样后得到的像素值,参考像素值为当前参考块进行第二下采样后得到的像素值;
当前像素值为编码块中确定的第一预设数量的像素值,参考像素值为当前参考块中确定的第一预设数量的像素值;
其中,第一位置为编码块或者当前参考块的至少一个边缘的至少一行或至少一列;第一下采样和第二下采样是根据块的大小确定的,或者根据块的大小和形状确定的。
需要说明的是,在计算最新光照补偿信息中的最新光照补偿因子时,所选择的像素可以是当前块和参考块的某一个边缘,如上边缘、下边缘、左边缘、右边缘的一行/列或几行/列,也可以同时选择其中的一个或几个而得出一个或几个光照补偿因子,即当前像素值为编码块的第一位置对应的像素值,参考像素值为当前参考块的第一位置对应的像素值。也可以使用当前块和参考块的所有像素或对所有像素的某一种规则的下采样,如当前块水平和竖直方向每2/4/8个像素取一个像素,即第一下采样或第二下采样为当前块水平和竖直方向每2/4/8个进行采样。还可以根据块的大小来区分下采样的规则,如当当前块的像素数小于等于64个时,水平和竖直方向每2个像素取一个像素,当当前块的像素数大于64个时,水平和竖直方向每4个像素取一个像素,本申请实施例不作限制。或者规定取第一预设数量的像素,如不管块的大小,都对当前块和参考块各取16个像素。或者,根据块的不同大小和形状设置采样规则等,本申请实施例不作限制。
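上述按块大小确定下采样规则并计算光照补偿因子的过程可以示意如下(Python;采样步长与64像素阈值取自上文示例,用最小二乘拟合线性模型求 a、b 为本文为说明而假设的一种常见做法,原文不限定具体求解方法):

```python
import numpy as np

def sample_pixels(block):
    """根据块的大小确定下采样步长,对块内所有像素进行规则下采样。

    当块的像素数小于等于 64 时,水平和竖直方向每 2 个像素取一个;
    否则每 4 个像素取一个(规则来自上文示例,仅为一种可能的实现)。
    """
    step = 2 if block.size <= 64 else 4
    return block[::step, ::step].ravel()

def derive_ic_factors(cur_block, ref_block):
    """基于当前像素值与参考像素值计算最新光照补偿因子 (a, b)。

    此处用最小二乘拟合线性模型 Y = a*X + b,为示意性假设的求解方法。
    """
    x = sample_pixels(ref_block).astype(np.float64)
    y = sample_pixels(cur_block).astype(np.float64)
    a, b = np.polyfit(x, y, 1)  # 一次多项式拟合,返回斜率 a 与截距 b
    return a, b
```

也可以改为只取块边缘的若干行/列,或固定取第一预设数量(如16个)的像素,采样方式的选择不影响后续的二次判断与列表更新流程。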
编码器在计算出编码块对应的最新光照补偿信息之后,需要进行二次判断,判断该最新光照补偿信息是否有效,若有效,才使用最新光照补偿信息更新前一次光照补偿信息(即前一次光照补偿列表),若无效,则不更新。
也就是说,编码器如果计算出来的光照补偿模型的a、b分别符合合理的取值范围,认为这个模型是有效的,否则,认为这个模型是无效的,放弃这组光照补偿因子。
在本申请的一些实施例中,编码器可以采用预设阈值来进行二次判断,当最新光照补偿信息满足预设阈值时,编码器采用最新光照补偿信息,更新前一次光照补偿信息,得到当前次光照补偿信息,当前次光照补偿信息用于在下一个编码块编码时使用;而当最新光照补偿信息不满足预设阈值时,将前一次光照补偿信息,作为当前次光照补偿信息,当前次光照补偿信息用于在下一个编码块编码时使用。
在本申请实施例中,编码器可以采用预设阈值来进行二次判断,也可以采用其他判断方式,本申请实施例不作限制。
在本申请实施例中,编码器采用最新光照补偿信息,更新前一次光照补偿信息,得到当前次光照补偿信息的实现方式可以为:若前一次光照补偿信息中不存在最新光照补偿信息,且前一次光照补偿信息的总信息量小于预设信息量,则将最新光照补偿信息排列在信息尾部,得到当前次光照补偿信息;若前一次光照补偿信息中不存在最新光照补偿信息,且前一次光照补偿信息的总信息量等于预设信息量,则删除前一次光照补偿信息中排列在信息首部的第一项光照补偿信息,再将最新光照补偿信息排列在信息尾部,得到当前次光照补偿信息;若前一次光照补偿信息中存在与最新光照补偿信息相同的第i项光照补偿信息,则将第i项光照补偿信息删除,再将最新光照补偿信息排列在信息尾部,得到当前次光照补偿信息;其中,i为大于等于1,且小于等于预设信息量的整数。
需要说明的是,光照补偿信息可以采用光照补偿列表的形式体现,每个待编码块编码完成后,得到最新光照补偿信息,会对前一次光照补偿列表进行更新。光照补偿列表可以有最大的长度,即项数:预设信息量。光照补偿列表的每一项中保存光照补偿信息。光照补偿列表的插入可以使用先进先出的规则。插入新的项时需要进行查重并更新列表。即需要进行最新光照补偿信息的查重,最新光照补偿信息为要插入项,一种可能的实现方式是当要插入项和已有项对应的光照补偿因子的各个值相等时,认为是重复项,将已有项删除,将要插入的项插入队尾;另一种可能的实现方式是当要插入的项和已有项对应的光照补偿模型对应的参考帧相同且a、b的值相差小于预设差异阈值时,认为是重复项,将已有项删除,将要插入项插入队尾。
在本申请的一些实施例中,如果允许缩放,将a、b根据要插入项和已有项的参考帧缩放后进行比较,缩放方法类似于运动矢量的缩放方法。具体的需要依据光照补偿模型所采用的参考帧,当前参考块所在参考帧,以及当前编码块所在帧之间的时间距离进行缩放。
可以理解的是,每当编码器编码完一个块之后,如果这个块符合光照补偿规则,就计算出它的光照补偿因子,如果这个光照补偿因子符合二次判断要求,就把它插入到光照补偿列表中,对光照补偿列表进行更新。当要编码一个块时,如果它需要用到光照补偿,就从光照补偿列表中选择合适的光照补偿因子。这样当前块能参考到的光照补偿模型就不局限于相邻的块,而是之前编码过的所有的块,从而可以提高编码的精度。
在本申请的一些实施例中,图10为本申请实施例提供的一种帧间预测方法的实现流程示意图,如图10所示,在S201的获取当前编码块之后,且在S203之前,该方法还包括:
S208、当当前编码块为任何帧间编码模式的块,则判断当前编码块是否能隐式地继承光照补偿模型。
S209、若未能隐式地继承光照补偿模型,且前一次光照补偿信息不包含有有效的光照补偿信息时,则结束光照补偿。
S210、若未能隐式地继承光照补偿模型,且前一次光照补偿信息包含有有效的光照补偿信息时,则生成光照补偿标志,写入码流。
S211、当光照补偿标志指示进行光照补偿,且前一次光照补偿信息中包括有效的至少两组光照补偿信息时,从前一次光照补偿信息中,确定当前光照补偿信息,并生成光照补偿模型索引,写入码流。
S212、当光照补偿标志指示进行光照补偿,且前一次光照补偿信息中包括有效的一组光照补偿信息时,将一组光照补偿信息作为当前光照补偿信息。
在S208之后,还包括:
S213、若能隐式地继承光照补偿模型,则采用隐式地继承的光照补偿模型对当前编码块进行光照补偿后,得到当前预测值。
在本申请实施例中,当编解码一个块时,如果这个块符合要求,那么它可以进行光照补偿。这个要求可以是当前块是一个任何帧间编码模式的块。如果当前块是帧间跳过(skip)模式或合并(merge)模式,一种可能的实现是它隐式地继承它所参考块的光照补偿因子(即光照补偿信息);另一种可能的实现是它不能隐式地继承它所参考的块的光照补偿因子。如果当前块可以进行光照补偿但它不能隐式地继承它所参考的块的光照补偿因子,需要传输一个标志(flag)指示它是否使用光照补偿。
需要说明的是,当当前编码块为任何帧间编码模式的块,则判断当前编码块是否能隐式地继承光照补偿模型。若未能隐式地继承光照补偿模型,且前一次光照补偿信息不包含有有效的光照补偿信息(例如:光照补偿因子)时,则结束光照补偿;若未能隐式地继承光照补偿模型,且前一次光照补偿信息包含有有效的光照补偿信息时,则生成光照补偿标志,并将光照补偿标志写入码流,供解码器使用。当光照补偿标志指示进行光照补偿,且前一次光照补偿信息中包括有效的至少两组光照补偿信息时,从前一次光照补偿信息中,确定当前光照补偿信息,并生成光照补偿模型索引,写入码流,供解码器使用。如果确定可以使用光照补偿,需要生成光照补偿模型索引,并传输该光照补偿模型索引至解码端,以便指示解码器使用如上所述的列表中的第几个项中的光照补偿模型信息,以使用光照补偿提高预测质量。其中,在本申请实施例中,光照补偿模型索引表征具体在前一次光照补偿信息中编码时使用的光照补偿信息对应的索引信息。
若未能隐式地继承光照补偿模型,且前一次光照补偿信息不包含有有效的光照补偿信息时,则结束光照补偿,也就是说,当编解码一个块时,如果它可以进行光照补偿,但光照补偿列表中没有可用的光照补偿模型或没有当前块可用的光照补偿模型,一种可能的实现方式是在这种情况下默认它不使用光照补偿而不用进行flag传输。
可以理解的是,默认不使用光照补偿而不用进行flag传输从而减少传输资源和降低处理复杂度。
当光照补偿标志指示进行光照补偿,且前一次光照补偿信息中包括一组光照补偿信息时,将一组光照补偿信息作为当前光照补偿信息。也就是说,当编解码一个块时,如果它可以进行光照补偿,光照补偿模型列表中只有一个可用的光照补偿模型或当前块可用的光照补偿模型,且光照补偿模型的标志确定当前块使用光照补偿,一种可能的实现方式是在这种情况下默认它使用列表中唯一可用的光照补偿信息而不用传输光照补偿模型索引。
可以理解的是,默认使用列表中唯一可用的光照补偿信息而不用传输光照补偿模型索引可以减少传输资源。
在本申请实施例中,若能隐式地继承光照补偿模型,则编码器可以采用隐式地继承的光照补偿模型对当前编码块进行光照补偿后,得到当前预测值。
基于前述实施例的实现基础,如图11所示,本申请实施例提供了一种解码器1,包括:
第一获取单元10,配置为获取码流;
第一解析单元11,配置为从所述码流中解析出光照补偿模型索引;
所述第一获取单元10,还配置为获取当前解码块和前一次光照补偿信息;所述前一次光照补偿信息是基于前一个解码块解码完成时更新得到的,所述前一次光照补偿信息中包含有在当前解码块进行解码之前,满足光照补偿规则且已解码的历史解码块对应的至少一个参考块的光照补偿信息;
第一确定单元12,配置为基于所述光照补偿模型索引,从所述前一次光照补偿信息中,确定当前光照补偿信息;
第一光照补偿单元13,配置为基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值。
在本申请的一些实施例中,所述解码器1还包括:解码单元14、第一计算单元15和第一更新单元16;
所述解码单元14,配置为所述基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值之后,基于所述当前预测值,完成所述当前解码块的解码,得到解码块;
所述第一计算单元15,配置为当所述解码块满足所述光照补偿规则时,采用与所述解码块对应的当前参考块,计算出所述解码块对应的最新光照补偿信息;
所述第一更新单元16,配置为当所述最新光照补偿信息满足预设阈值时,采用所述最新光照补偿信息,更新所述前一次光照补偿信息,得到当前次光照补偿信息,所述当前次光照补偿信息配置为在下一个解码块解码时使用。
在本申请的一些实施例中,所述第一获取单元10,还配置为获取当前解码块和前一次光照补偿信息之后,所述基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值之前,若所述前一次光照补偿信息中包括一组光照补偿信息,则将所述一组光照补偿信息作为所述当前光照补偿信息;所述一组光照补偿信息为有效的光照补偿信息。
在本申请的一些实施例中,所述前一次光照补偿信息或所述当前次光照补偿信息中包括:至少一项光照补偿信息;
每一项光照补偿信息包括:至少一组光照补偿因子;或者至少一组光照补偿因子和对应参考帧的索引。
在本申请的一些实施例中,所述第一更新单元16,还配置为若所述前一次光照补偿信息中不存在所述最新光照补偿信息,且所述前一次光照补偿信息的总信息量小于预设信息量,则将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;若所述前一次光照补偿信息中不存在所述最新光照补偿信息,且所述前一次光照补偿信息的总信息量等于预设信息量,则删除所述前一次光照补偿信息中排列在信息首部的第一项光照补偿信息,再将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;若所述前一次光照补偿信息中存在与所述最新光照补偿信息相同的第i项光照补偿信息,则将所述第i项光照补偿信息删除,再将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;其中,i为大于等于1,且小于等于所述预设信息量的整数。
在本申请的一些实施例中,所述光照补偿规则包括:当所述当前解码块为通过普通帧间编码模式编码得到的解码块;和/或者合并模式编码得到的解码块。
在本申请的一些实施例中,所述第一计算单元15,还配置为获取所述解码块的当前像素值,以及与所述当前参考块的参考像素值;基于所述当前像素值和所述参考像素值,计算出所述解码块对应的最新光照补偿信息。
在本申请的一些实施例中,所述当前像素值为所述解码块的第一位置对应的像素值,所述参考像素值为所述当前参考块的所述第一位置对应的像素值;或者,
所述当前像素值为所述解码块进行第一下采样后得到的像素值,所述参考像素值为所述当前参考块进行第二下采样后得到的像素值;
所述当前像素值为所述解码块中确定的第一预设数量的像素值,所述参考像素值为所述当前参考块中确定的第一预设数量的像素值;
其中,所述第一位置为所述解码块或者所述当前参考块的至少一个边缘的至少一行或至少一列;所述第一下采样和第二下采样是根据块的大小确定的,或者根据块的大小和形状确定的。
在本申请的一些实施例中,所述第一获取单元10,还配置为所述采用与所述解码块对应的当前参考块,计算出所述解码块对应的最新光照补偿信息之后,当所述最新光照补偿信息不满足预设阈值时,将所述前一次光照补偿信息,作为所述当前次光照补偿信息,所述当前次光照补偿信息配置为在下一个解码块解码时使用。
在本申请的一些实施例中,所述第一确定单元12,还配置为从所述前一次光照补偿信息中,确定出与所述当前解码块具有相同当前参考帧的第一候选光照补偿信息;基于所述光照补偿模型索引,从所述第一候选光照补偿信息中,确定出所述当前光照补偿信息;或者,将所述前一次光照补偿信息中除所述第一候选光照补偿信息之外的剩余光照补偿信息,按照光照补偿模型所采用的参考帧,所述当前参考块所在参考帧,以及所述当前解码块所在帧之间的时间距离进行缩放,得到第二候选光照补偿信息,再基于所述光照补偿模型索引,从所述第一候选光照补偿信息和所述第二候选光照补偿信息中,确定出所述当前光照补偿信息。
在本申请的一些实施例中,第一获取单元10,还配置为所述获取码流和获取当前解码块之后,所述基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值之前,当所述当前解码块为任何帧间编码模式的块,则判断所述当前解码块是否能隐式地继承光照补偿模型;若未能隐式地继承光照补偿模型,且所述前一次光照补偿信息不包含有有效的光照补偿信息时,则结束光照补偿;若未能隐式地继承光照补偿模型,且所述前一次光照补偿信息包含有有效的光照补偿信息时,则解析码流,得到光照补偿标志;当所述光照补偿标志指示进行光照补偿,且所述前一次光照补偿信息中包括有效的至少两组光照补偿信息时,解析码流,得到所述光照补偿模型索引;当所述光照补偿标志指示进行光照补偿,且所述前一次光照补偿信息中包括有效的一组光照补偿信息时,将所述一组光照补偿信息作为所述当前光照补偿信息。
可以理解的是,解码器可以从码流中获取到进行光照补偿的光照补偿模型索引,这样就可以在上一个解码块解码完成后得到的前一次光照补偿信息,找到光照补偿模型索引对应的光照补偿信息,从而进行光照补偿了,由于前一次光照补偿信息是基于前一个解码块解码完成时更新得到的,所述前一次光照补偿信息中包含有在当前解码块进行解码之前,满足光照补偿规则且已解码的历史解码块对应的至少一个参考块的光照补偿信息,因此,解码器可以针对历史解码块得到的光照补偿因子进行选择,确定出最终使用的当前光照补偿信息,选择范围变多,可以选择出更精确的数值进行光照补偿,因此,解码的精度会提升,再者,前一次光照补偿信息中存储的只有满足光照补偿规则的历史解码块得到的光照补偿信息,因此,筛掉了不符合光照补偿规则的数值,减少了存储空间。
在实际应用中,如图12所示,本申请实施例还提供了一种解码器,包括:
第一处理器17以及存储有所述第一处理器17可执行指令的第一存储器18,所述第一存储器18通过第一通信总线19依赖所述第一处理器17执行操作,当所述指令被所述第一处理器17执行时,执行上述的解码器侧的帧间预测方法。
其中,第一处理器可以通过软件、硬件、固件或者其组合实现,可以使用电路、单个或多个专用集成电路(application specific integrated circuits,ASIC)、单个或多个通用集成电路、单个或多个微处理器、单个或多个可编程逻辑器件、或者前述电路或器件的组合、或者其他适合的电路或器件,从而使得该第一处理器可以执行前述实施例中的解码器侧的帧间预测方法的相应步骤。
本申请实施例提供了一种计算机可读存储介质,存储有可执行指令,当所述可执行指令被一个或多个第一处理器执行的时候,所述第一处理器执行解码器侧的帧间预测方法。
在本申请实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:磁性随机存取存储器(FRAM,ferromagnetic random access memory)、只读存储器(ROM,Read Only Memory)、可编程只读存储器(PROM,Programmable Read-Only Memory)、可擦除可编程只读存储器(EPROM,Erasable Programmable Read-Only Memory)、电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read-Only Memory)、快闪存储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(CD-ROM,Compact Disc Read-Only Memory)等各种可以存储程序代码的介质,本申请实施例不作限制。
如图13所示,本申请实施例提供了一种编码器2,包括:
第二获取单元20,配置为获取当前编码块和前一次光照补偿信息;所述前一次光照补偿信息是基于前一个编码块编码完成时更新得到的,所述前一次光照补偿信息中包含有在当前编码块进行编码之前,满足光照补偿规则且已编码的历史编码块对应的至少一个参考块的光照补偿信息;
第二确定单元21,配置为从所述前一次光照补偿信息中,确定当前光照补偿信息;
第二光照补偿单元22,配置为基于所述当前光照补偿信息,对所述当前编码块进行光照补偿后,得到当前预测值。
在本申请的一些实施例中,所述编码器2还包括:编码单元23、第二计算单元24和第二更新单元25;
所述编码单元23,配置为所述基于所述当前光照补偿信息,对所述当前编码块进行光照补偿后,得到当前预测值之后,基于所述当前预测值,完成所述当前编码块的编码,得到编码块;
所述第二计算单元24,配置为当所述编码块满足所述光照补偿规则时,采用与所述编码块对应的当前参考块,计算出所述编码块对应的最新光照补偿信息;
所述第二更新单元25,配置为当所述最新光照补偿信息满足预设阈值时,采用所述最新光照补偿信息,更新所述前一次光照补偿信息,得到当前次光照补偿信息,所述当前次光照补偿信息配置为在下一个编码块编码时使用。
在本申请的一些实施例中,所述第二确定单元21,还配置为从所述前一次光照补偿信息中,确定出与所述当前编码块具有相同当前参考帧的第一候选光照补偿信息;从所述第一候选光照补偿信息中,确定出所述当前光照补偿信息;或者,将所述前一次光照补偿信息中除所述第一候选光照补偿信息之外的剩余光照补偿信息,按照光照补偿模型所采用的参考帧,所述当前参考块所在参考帧,以及所述当前编码块所在帧之间的时间距离进行缩放,得到第二候选光照补偿信息,再从所述第一候选光照补偿信息和所述第二候选光照补偿信息中,确定出所述当前光照补偿信息。
在本申请的一些实施例中,所述编码器2还包括:判断单元26和结束单元27;
所述判断单元26,配置为所述获取当前编码块之后,所述基于所述当前光照补偿信息,对所述当前编码块进行光照补偿后,得到当前预测值之前,当所述当前编码块为任何帧间编码模式的块,则判断所述当前编码块是否能隐式地继承光照补偿模型;
所述结束单元27,配置为若未能隐式地继承光照补偿模型,且所述前一次光照补偿信息不包含有有效的光照补偿信息时,则结束光照补偿;
所述第二获取单元20,还配置为若未能隐式地继承光照补偿模型,且所述前一次光照补偿信息包含有有效的光照补偿信息时,则生成光照补偿标志,写入码流;
所述第二确定单元21,还配置为当所述光照补偿标志指示进行光照补偿,且所述前一次光照补偿信息中包括有效的至少两组光照补偿信息时,从所述前一次光照补偿信息中,确定所述当前光照补偿信息,并生成光照补偿模型索引,写入码流;
所述第二获取单元20,还配置为当所述光照补偿标志指示进行光照补偿,且所述前一次光照补偿信息中包括有效的一组光照补偿信息时,将所述一组光照补偿信息作为所述当前光照补偿信息。
在本申请的一些实施例中,所述第二获取单元20,还配置为所述判断所述当前编码块是否能隐式地继承光照补偿模型之后,若能隐式地继承光照补偿模型,则采用隐式地继承的光照补偿模型对所述当前编码块进行光照补偿后,得到所述当前预测值。
在本申请的一些实施例中,所述前一次光照补偿信息或所述当前次光照补偿信息中包括:至少一项光照补偿信息;
每一项光照补偿信息包括:至少一组光照补偿因子;或者至少一组光照补偿因子和对应参考帧的索引。
在本申请的一些实施例中,所述第二更新单元25,还配置为若所述前一次光照补偿信息中不存在所述最新光照补偿信息,且所述前一次光照补偿信息的总信息量小于预设信息量,则将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;若所述前一次光照补偿信息中不存在所述最新光照补偿信息,且所述前一次光照补偿信息的总信息量等于预设信息量,则删除所述前一次光照补偿信息中排列在信息首部的第一项光照补偿信息,再将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;若所述前一次光照补偿信息中存在与所述最新光照补偿信息相同的第i项光照补偿信息,则将所述第i项光照补偿信息删除,再将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;其中,i为大于等于1,且小于等于所述预设信息量的整数。
在本申请的一些实施例中,所述光照补偿规则包括:当所述当前编码块为普通帧间编码模式的编码块;和/或者为合并模式的编码块。
在本申请的一些实施例中,所述第二计算单元24,还配置为获取所述编码块的当前像素值,以及与所述当前参考块的参考像素值;基于所述当前像素值和所述参考像素值,计算出所述编码块对应的最新光照补偿信息。
在本申请的一些实施例中,所述当前像素值为所述编码块的第一位置对应的像素值,所述参考像素值为所述当前参考块的所述第一位置对应的像素值;或者,
所述当前像素值为所述编码块进行第一下采样后得到的像素值,所述参考像素值为所述当前参考块进行第二下采样后得到的像素值;
所述当前像素值为所述编码块中确定的第一预设数量的像素值,所述参考像素值为所述当前参考块中确定的第一预设数量的像素值;
其中,所述第一位置为所述编码块或者所述当前参考块的至少一个边缘的至少一行或至少一列;所述第一下采样和第二下采样是根据块的大小确定的,或者根据块的大小和形状确定的。
在本申请的一些实施例中,所述第二获取单元20,还配置为所述采用与所述编码块对应的当前参考块,计算出所述编码块对应的最新光照补偿信息之后,当所述最新光照补偿信息不满足预设阈值时,将所述前一次光照补偿信息,作为所述当前次光照补偿信息,所述当前次光照补偿信息配置为在下一个编码块编码时使用。
可以理解的是,编码器获取在上一个编码块编码完成后得到的前一次光照补偿信息,找到光照补偿模型索引对应的光照补偿信息,从而进行光照补偿了,由于前一次光照补偿信息是基于前一个编码块编码完成时更新得到的,所述前一次光照补偿信息中包含有在当前编码块进行解码之前,满足光照补偿规则且已编码的历史编码块对应的至少一个参考块的光照补偿信息,因此,编码器可以针对历史编码块得到的光照补偿信息进行选择,确定出最终使用的当前光照补偿信息,选择范围变多,可以选择出更精确的数值进行光照补偿,因此,编码的精度会提升,再者,前一次光照补偿信息中存储的只有满足光照补偿规则的历史编码块得到的光照补偿信息,因此,筛掉了不符合光照补偿规则的数值,减少了存储空间。
在实际应用中,如图14所示,本申请实施例还提供了一种编码器,包括:
第二处理器28以及存储有所述第二处理器28可执行指令的第二存储器29,所述第二存储器29通过第二通信总线210依赖所述第二处理器28执行操作,当所述指令被所述第二处理器28执行时,执行上述的编码器侧的帧间预测方法。
其中,第二处理器可以通过软件、硬件、固件或者其组合实现,可以使用电路、单个或多个专用集成电路、单个或多个通用集成电路、单个或多个微处理器、单个或多个可编程逻辑器件、或者前述电路或器件的组合、或者其他适合的电路或器件,从而使得该第二处理器可以执行前述实施例中的编码器侧的帧间预测方法的相应步骤。
本申请实施例提供了一种计算机可读存储介质,存储有可执行指令,当所述可执行指令被一个或多个第二处理器执行的时候,所述第二处理器执行编码器侧的帧间预测方法。
本领域内的技术人员应明白,本申请的实施例可提供为方法、***、或计算机程序产品。因此,本申请可采用硬件实施例、软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(***)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述,仅为本申请的较佳实施例而已,并非用于限定本申请的保护范围。
工业实用性
本申请实施例公开了一种帧间预测方法,方法包括:获取码流,从码流中解析出光照补偿模型索引;获取当前解码块和前一次光照补偿信息;前一次光照补偿信息是基于前一个解码块解码完成时更新得到的,前一次光照补偿信息中包含有在当前解码块进行解码之前,满足光照补偿规则且已解码的历史解码块对应的至少一个参考块的光照补偿信息;基于光照补偿模型索引,从前一次光照补偿信息中,确定当前光照补偿信息;基于当前光照补偿信息,对当前解码块进行光照补偿后,得到当前预测值。本申请实施例还同时提供了一种编码器、解码器和计算机可读存储介质。在本申请实施例中,解码器可以从码流中获取到进行光照补偿的光照补偿模型索引,这样就可以在上一个解码块解码完成后得到的前一次光照补偿信息,找到光照补偿模型索引对应的光照补偿信息,从而进行光照补偿了,由于前一次光照补偿信息是基于前一个解码块解码完成时更新得到的,所述前一次光照补偿信息中包含有在当前解码块进行解码之前,满足光照补偿规则且已解码的历史解码块对应的至少一个参考块的光照补偿信息,因此,解码器可以针对历史解码块得到的光照补偿信息进行选择,确定出最终使用的当前光照补偿信息,选择范围变多,可以选择出更精确的数值进行光照补偿,因此,解码的精度会提升,再者,前一次光照补偿信息中存储的只有满足光照补偿规则的历史解码块得到的光照补偿信息,因此,筛掉了不符合光照补偿规则的数值,减少了存储空间。

Claims (26)

  1. 一种帧间预测方法,应用于解码器,包括:
    获取码流,从所述码流中解析出光照补偿模型索引;
    获取当前解码块和前一次光照补偿信息;所述前一次光照补偿信息是基于前一个解码块解码完成时更新得到的,所述前一次光照补偿信息中包含有在当前解码块进行解码之前,满足光照补偿规则且已解码的历史解码块对应的至少一个参考块的光照补偿信息;
    基于所述光照补偿模型索引,从所述前一次光照补偿信息中,确定当前光照补偿信息;
    基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值。
  2. 根据权利要求1所述的方法,其中,所述基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值之后,所述方法还包括:
    基于所述当前预测值,完成所述当前解码块的解码,得到解码块;
    当所述解码块满足所述光照补偿规则时,采用与所述解码块对应的当前参考块,计算出所述解码块对应的最新光照补偿信息;
    当所述最新光照补偿信息满足预设阈值时,采用所述最新光照补偿信息,更新所述前一次光照补偿信息,得到当前次光照补偿信息,所述当前次光照补偿信息用于在下一个解码块解码时使用。
  3. 根据权利要求1或2所述的方法,其中,所述获取当前解码块和前一次光照补偿信息之后,所述基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值之前,所述方法还包括:
    若所述前一次光照补偿信息中包括一组光照补偿信息,则将所述一组光照补偿信息作为所述当前光照补偿信息;所述一组光照补偿信息为有效的光照补偿信息。
  4. 根据权利要求1至3任一项所述的方法,其中,
    所述前一次光照补偿信息或所述当前次光照补偿信息中包括:至少一项光照补偿信息;
    每一项光照补偿信息包括:至少一组光照补偿因子;或者至少一组光照补偿因子和对应参考帧的索引。
  5. 根据权利要求2所述的方法,其中,所述采用所述最新光照补偿信息,更新所述前一次光照补偿信息,得到当前次光照补偿信息,包括:
    若所述前一次光照补偿信息中不存在所述最新光照补偿信息,且所述前一次光照补偿信息的总信息量小于预设信息量,则将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;
    若所述前一次光照补偿信息中不存在所述最新光照补偿信息,且所述前一次光照补偿信息的总信息量等于预设信息量,则删除所述前一次光照补偿信息中排列在信息首部的第一项光照补偿信息,再将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;
    若所述前一次光照补偿信息中存在与所述最新光照补偿信息相同的第i项光照补偿信息,则将所述第i项光照补偿信息删除,再将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;其中,i为大于等于1,且小于等于所述预设信息量的整数。
  6. 根据权利要求1至5任一项所述的方法,其中,
    所述光照补偿规则包括:当所述当前解码块为通过普通帧间编码模式编码得到的解码块;和/或者合并模式编码得到的解码块。
  7. 根据权利要求2或5所述的方法,其中,所述采用与所述解码块对应的当前参考块,计算出所述解码块对应的最新光照补偿信息,包括:
    获取所述解码块的当前像素值,以及与所述当前参考块的参考像素值;
    基于所述当前像素值和所述参考像素值,计算出所述解码块对应的最新光照补偿信息。
  8. 根据权利要求7所述的方法,其中,
    所述当前像素值为所述解码块的第一位置对应的像素值,所述参考像素值为所述当前参考块的所述第一位置对应的像素值;或者,
    所述当前像素值为所述解码块进行第一下采样后得到的像素值,所述参考像素值为所述当前参考块进行第二下采样后得到的像素值;
    所述当前像素值为所述解码块中确定的第一预设数量的像素值,所述参考像素值为所述当前参考块中确定的第一预设数量的像素值;
    其中,所述第一位置为所述解码块或者所述当前参考块的至少一个边缘的至少一行或至少一列;所述第一下采样和第二下采样是根据块的大小确定的,或者根据块的大小和形状确定的。
  9. 根据权利要求2、5、7和8中任一项所述的方法,其中,所述采用与所述解码块对应的当前参考块,计算出所述解码块对应的最新光照补偿信息之后,所述方法还包括:
    当所述最新光照补偿信息不满足预设阈值时,将所述前一次光照补偿信息,作为所述当前次光照补偿信息,所述当前次光照补偿信息用于在下一个解码块解码时使用。
  10. 根据权利要求1所述的方法,其中,所述基于所述光照补偿模型索引,从所述前一次光照补偿信息中,确定当前光照补偿信息,包括:
    从所述前一次光照补偿信息中,确定出与所述当前解码块具有相同当前参考帧的第一候选光照补偿信息;
    基于所述光照补偿模型索引,从所述第一候选光照补偿信息中,确定出所述当前光照补偿信息;或者,
    将所述前一次光照补偿信息中除所述第一候选光照补偿信息之外的剩余光照补偿信息,按照光照补偿模型所采用的参考帧,所述当前参考块所在参考帧,以及所述当前解码块所在帧之间的时间距离进行缩放,得到第二候选光照补偿信息,再基于所述光照补偿模型索引,从所述第一候选光照补偿信息和所述第二候选光照补偿信息中,确定出所述当前光照补偿信息。
  11. 根据权利要求1至10任一项所述的方法,其中,所述获取码流和获取当前解码块之后,所述基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值之前,所述方法还包括:
    当所述当前解码块为任何帧间编码模式的块,则判断所述当前解码块是否能隐式地继承光照补偿模型;
    若未能隐式地继承光照补偿模型,且所述前一次光照补偿信息不包含有有效的光照补偿信息时,则结束光照补偿;
    若未能隐式地继承光照补偿模型,且所述前一次光照补偿信息包含有有效的光照补偿信息时,则解析码流,得到光照补偿标志;
    当所述光照补偿标志指示进行光照补偿,且所述前一次光照补偿信息中包括有效的至少两组光照补偿信息时,解析码流,得到所述光照补偿模型索引;
    当所述光照补偿标志指示进行光照补偿,且所述前一次光照补偿信息中包括有效的一组光照补偿信息时,将所述一组光照补偿信息作为所述当前光照补偿信息。
  12. 一种帧间预测方法,应用于编码器,包括:
    获取当前编码块和前一次光照补偿信息;所述前一次光照补偿信息是基于前一个编码块编码完成时更新得到的,所述前一次光照补偿信息中包含有在当前编码块进行编码之前,满足光照补偿规则且已编码的历史编码块对应的至少一个参考块的光照补偿信息;
    从所述前一次光照补偿信息中,确定当前光照补偿信息;
    基于所述当前光照补偿信息,对所述当前编码块进行光照补偿后,得到当前预测值。
  13. 根据权利要求12所述的方法,其中,所述基于所述当前光照补偿信息,对所述当前编码块进行光照补偿后,得到当前预测值之后,所述方法还包括:
    基于所述当前预测值,完成所述当前编码块的编码,得到编码块;
    当所述编码块满足所述光照补偿规则时,采用与所述编码块对应的当前参考块,计算出所述编码块对应的最新光照补偿信息;
    当所述最新光照补偿信息满足预设阈值时,采用所述最新光照补偿信息,更新所述前一次光照补偿信息,得到当前次光照补偿信息,所述当前次光照补偿信息用于在下一个编码块编码时使用。
  14. 根据权利要求12或13所述的方法,其中,所述从所述前一次光照补偿信息中,确定当前光照补偿信息,包括:
    从所述前一次光照补偿信息中,确定出与所述当前编码块具有相同当前参考帧的第一候选光照补偿信息;
    从所述第一候选光照补偿信息中,确定出所述当前光照补偿信息;或者,
    将所述前一次光照补偿信息中除所述第一候选光照补偿信息之外的剩余光照补偿信息,按照光照补偿模型所采用的参考帧,所述当前参考块所在参考帧,以及所述当前编码块所在帧之间的时间距离进行缩放,得到第二候选光照补偿信息,再从所述第一候选光照补偿信息和所述第二候选光照补偿信息中,确定出所述当前光照补偿信息。
  15. 根据权利要求12至14任一项所述的方法,其中,所述获取当前编码块之后,所述基于所述当前光照补偿信息,对所述当前编码块进行光照补偿后,得到当前预测值之前,所述方法还包括:
    当所述当前编码块为任何帧间编码模式的块,则判断所述当前编码块是否能隐式地继承光照补偿模型;
    若未能隐式地继承光照补偿模型,且所述前一次光照补偿信息不包含有有效的光照补偿信息时,则结束光照补偿;
    若未能隐式地继承光照补偿模型,且所述前一次光照补偿信息包含有有效的光照补偿信息时,则生成光照补偿标志,写入码流;
    当所述光照补偿标志指示进行光照补偿,且所述前一次光照补偿信息中包括有效的至少两组光照补偿信息时,从所述前一次光照补偿信息中,确定所述当前光照补偿信息,并生成光照补偿模型索引,写入码流;
    当所述光照补偿标志指示进行光照补偿,且所述前一次光照补偿信息中包括有效的一组光照补偿信息时,将所述一组光照补偿信息作为所述当前光照补偿信息。
  16. 根据权利要求12至15任一项所述的方法,其中,
    所述前一次光照补偿信息或所述当前次光照补偿信息中包括:至少一项光照补偿信息;
    每一项光照补偿信息包括:至少一组光照补偿因子;或者至少一组光照补偿因子和对应参考帧的索引。
  17. 根据权利要求13所述的方法,其中,所述采用所述最新光照补偿信息,更新所述前一次光照补偿信息,得到当前次光照补偿信息,包括:
    若所述前一次光照补偿信息中不存在所述最新光照补偿信息,且所述前一次光照补偿信息的总信息量小于预设信息量,则将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;
    若所述前一次光照补偿信息中不存在所述最新光照补偿信息,且所述前一次光照补偿信息的总信息量等于预设信息量,则删除所述前一次光照补偿信息中排列在信息首部的第一项光照补偿信息,再将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;
    若所述前一次光照补偿信息中存在与所述最新光照补偿信息相同的第i项光照补偿信息,则将所述第i项光照补偿信息删除,再将所述最新光照补偿信息排列在信息尾部,得到所述当前次光照补偿信息;其中,i为大于等于1,且小于等于所述预设信息量的整数。
  18. 根据权利要求13至17任一项所述的方法,其中,
    所述光照补偿规则包括:当所述当前编码块为普通帧间编码模式的编码块;和/或者为合并模式的编码块。
  19. 根据权利要求13或17所述的方法,其中,所述采用与所述编码块对应的当前参考块,计算出所述编码块对应的最新光照补偿信息,包括:
    获取所述编码块的当前像素值,以及与所述当前参考块的参考像素值;
    基于所述当前像素值和所述参考像素值,计算出所述编码块对应的最新光照补偿信息。
  20. 根据权利要求19所述的方法,其中,
    所述当前像素值为所述编码块的第一位置对应的像素值,所述参考像素值为所述当前参考块的所述第一位置对应的像素值;或者,
    所述当前像素值为所述编码块进行第一下采样后得到的像素值,所述参考像素值为所述当前参考块进行第二下采样后得到的像素值;
    所述当前像素值为所述编码块中确定的第一预设数量的像素值,所述参考像素值为所述当前参考块中确定的第一预设数量的像素值;
    其中,所述第一位置为所述编码块或者所述当前参考块的至少一个边缘的至少一行或至少一列;所述第一下采样和第二下采样是根据块的大小确定的,或者根据块的大小和形状确定的。
  21. 根据权利要求13、17或19中一项所述的方法,其中,所述采用与所述编码块对应的当前参考块,计算出所述编码块对应的最新光照补偿信息之后,所述方法还包括:
    当所述最新光照补偿信息不满足预设阈值时,将所述前一次光照补偿信息,作为所述当前次光照补偿信息,所述当前次光照补偿信息用于在下一个编码块编码时使用。
  22. 一种解码器,包括:
    第一获取单元,配置为获取码流,
    第一解析单元,配置为从所述码流中解析出光照补偿模型索引;
    所述第一获取单元,还配置为获取当前解码块和前一次光照补偿信息;所述前一次光照补偿信息是基于前一个解码块解码完成时更新得到的,所述前一次光照补偿信息中包含有在当前解码块进行解码之前,满足光照补偿规则且已解码的历史解码块对应的至少一个参考块的光照补偿信息;
    第一确定单元,配置为基于所述光照补偿模型索引,从所述前一次光照补偿信息中,确定当前光照补偿信息;
    第一光照补偿单元,配置为基于所述当前光照补偿信息,对所述当前解码块进行光照补偿后,得到当前预测值。
  23. 一种编码器,包括:
    第二获取单元,配置为获取当前编码块和前一次光照补偿信息;所述前一次光照补偿信息是基于前一个编码块编码完成时更新得到的,所述前一次光照补偿信息中包含有在当前编码块进行编码之前,满足光照补偿规则且已编码的历史编码块对应的至少一个参考块的光照补偿信息;
    第二确定单元,配置为从所述前一次光照补偿信息中,确定当前光照补偿信息;
    第二光照补偿单元,配置为基于所述当前光照补偿信息,对所述当前编码块进行光照补偿后,得到当前预测值。
  24. 一种解码器,包括:第一处理器以及存储有所述第一处理器可执行指令的第一存储器,所述第一存储器通过第一通信总线依赖所述第一处理器执行操作,当所述指令被所述第一处理器执行时,执行上述的权利要求1至11任一项所述的帧间预测方法。
  25. 一种编码器,包括:第二处理器以及存储有所述第二处理器可执行指令的第二存储器,所述第二存储器通过第二通信总线依赖所述第二处理器执行操作,当所述指令被所述第二处理器执行时,执行上述的权利要求12至21任一项所述的帧间预测方法。
  26. 一种计算机可读存储介质,存储有可执行指令,当所述可执行指令被一个或多个第一处理器执行的时候,所述第一处理器执行所述的权利要求1至11任一项所述的帧间预测方法;或者,当所述可执行指令被一个或多个第二处理器执行的时候,所述第二处理器执行所述的权利要求12至21任一项所述的帧间预测方法。
PCT/CN2021/075974 2020-03-04 2021-02-08 帧间预测方法、编码器、解码器、计算机可读存储介质 WO2021175108A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010144579.4 2020-03-04
CN202010144579.4A CN113365077B (zh) 2020-03-04 2020-03-04 帧间预测方法、编码器、解码器、计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021175108A1 true WO2021175108A1 (zh) 2021-09-10

Family

ID=77523450

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075974 WO2021175108A1 (zh) 2020-03-04 2021-02-08 帧间预测方法、编码器、解码器、计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN113365077B (zh)
WO (1) WO2021175108A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024077611A1 (zh) * 2022-10-14 2024-04-18 Oppo广东移动通信有限公司 解码方法、编码方法、解码器以及编码器

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107810635A (zh) * 2015-06-16 2018-03-16 Lg 电子株式会社 图像编译***中基于照度补偿预测块的方法和设备
CN109792529A (zh) * 2016-09-22 2019-05-21 Lg 电子株式会社 图像编译***中的基于照度补偿的间预测方法和设备
US10321142B2 (en) * 2013-07-15 2019-06-11 Samsung Electronics Co., Ltd. Method and apparatus for video encoding for adaptive illumination compensation, method and apparatus for video decoding for adaptive illumination compensation
CN110662052A (zh) * 2018-06-29 2020-01-07 北京字节跳动网络技术有限公司 更新查找表(lut)的条件
CN110662059A (zh) * 2018-06-29 2020-01-07 北京字节跳动网络技术有限公司 使用一个或多个查找表来按顺序存储先前编码的运动信息并使用它们来编码后面的块的概念
CN110662053A (zh) * 2018-06-29 2020-01-07 北京字节跳动网络技术有限公司 查找表尺寸

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111194553A (zh) * 2017-10-05 2020-05-22 交互数字Vc控股公司 用于视频编码和解码中的自适应照明补偿的方法和装置
KR20200010113A (ko) * 2018-07-18 2020-01-30 한국전자통신연구원 지역 조명 보상을 통한 효과적인 비디오 부호화/복호화 방법 및 장치
CN110784722B (zh) * 2019-11-06 2022-08-16 Oppo广东移动通信有限公司 编解码方法、编解码装置、编解码***及存储介质

Also Published As

Publication number Publication date
CN113365077A (zh) 2021-09-07
CN113365077B (zh) 2023-02-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21763846

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21763846

Country of ref document: EP

Kind code of ref document: A1