WO2023220946A1 - Video encoding and decoding method, apparatus, device, system and storage medium - Google Patents

Video encoding and decoding method, apparatus, device, system and storage medium - Download PDF

Info

Publication number
WO2023220946A1
Authority
WO
WIPO (PCT)
Prior art keywords
quantization
quantization coefficient
coefficient
parity
target
Prior art date
Application number
PCT/CN2022/093411
Other languages
English (en)
French (fr)
Inventor
徐陆航
黄航
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to PCT/CN2022/093411
Publication of WO2023220946A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124: Quantisation
    • H04N19/126: Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock

Definitions

  • the present application relates to the technical field of video coding and decoding, and in particular to a video coding and decoding method, device, equipment, system and storage medium.
  • Digital video technology can be incorporated into a variety of video devices, such as digital televisions, smartphones, computers, e-readers, or video players.
  • Video data contains a large amount of data.
  • Therefore, video devices implement video compression technology so that video data can be transmitted or stored more efficiently.
  • the transform coefficients are quantized.
  • the purpose of quantization is to scale the transform coefficients, thereby reducing the number of bits consumed when encoding the coefficients.
  • the current quantization method has a high encoding cost.
  • Embodiments of the present application provide a video encoding and decoding method, device, equipment, system and storage medium to reduce encoding costs.
  • this application provides a video decoding method, including:
  • the parity of the quantization coefficient whose parity is hidden among the first quantization coefficients is determined, where the first quantization coefficient is a quantization coefficient obtained by hiding the parity of all or part of the second quantization coefficients in the current region, and the second quantization coefficient is a quantization coefficient whose parity is not hidden in the current region;
  • the second quantization coefficient is determined based on the parity of the quantization coefficient whose parity is hidden and the decoded first quantization coefficient.
  • embodiments of the present application provide a video encoding method, including:
  • the current region is a region, among the N regions, that includes at least one non-zero quantization coefficient;
  • the parity of the quantization coefficient whose parity is hidden among the first quantization coefficients is indicated by P quantization coefficients in the current region, where P is a positive integer.
  • the present application provides a video encoder for performing the method in the above second aspect or its respective implementations.
  • the encoder includes a functional unit for performing the method in the above-mentioned second aspect or its respective implementations.
  • the present application provides a video decoder for performing the method in the above first aspect or its various implementations.
  • the decoder includes a functional unit for performing the method in the above-mentioned first aspect or its respective implementations.
  • a video encoder is provided, including a processor and a memory.
  • the memory is used to store a computer program, and the processor is used to call and run the computer program stored in the memory to execute the method in the above second aspect or its respective implementations.
  • a sixth aspect provides a video decoder, including a processor and a memory.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program stored in the memory to execute the method in the above first aspect or its respective implementations.
  • a seventh aspect provides a video encoding and decoding system, including a video encoder and a video decoder.
  • the video encoder is used to perform the method in the above-mentioned second aspect or its various implementations
  • the video decoder is used to perform the method in the above-mentioned first aspect or its various implementations.
  • An eighth aspect provides a chip for implementing any one of the above-mentioned first to second aspects or the method in each implementation manner thereof.
  • the chip includes: a processor, configured to call and run a computer program from a memory, so that the device installed with the chip executes the method in any one of the above-mentioned first to second aspects or their implementations.
  • a ninth aspect provides a computer-readable storage medium for storing a computer program that causes a computer to execute any one of the above-mentioned first to second aspects or the method in each implementation thereof.
  • a computer program product including computer program instructions, which enable a computer to execute any one of the above-mentioned first to second aspects or the methods in each implementation thereof.
  • An eleventh aspect provides a computer program that, when run on a computer, causes the computer to execute any one of the above-mentioned first to second aspects or the method in each implementation thereof.
  • A twelfth aspect provides a code stream, including a code stream generated by the method of the above-mentioned second aspect.
  • In the embodiments of the present application, the decoder decodes the code stream to obtain P quantization coefficients in the current region, where the current region is a region in the current block that includes at least one non-zero quantization coefficient and P is a positive integer; based on the P quantization coefficients, the decoder determines the parity of the quantization coefficient whose parity is hidden among the first quantization coefficients, where the first quantization coefficient is a quantization coefficient obtained by hiding the parity of all or part of the second quantization coefficients, and the second quantization coefficient is a quantization coefficient whose parity is not hidden in the current region; the decoder then determines the second quantization coefficient based on the parity of the quantization coefficient whose parity is hidden and the decoded first quantization coefficient.
  • In other words, the P quantization coefficients in the current region are used to hide the parity of part or all of the second quantization coefficients in the current region to obtain the first quantization coefficients, and the first quantization coefficients are encoded, which reduces the number of bits required for encoding and lowers the cost of video compression.
  • the embodiment of the present application re-determines the target context model for decoding the first quantization coefficient, which can achieve accurate decoding of the first quantization coefficient whose parity is hidden.
  • Figure 1 is a schematic block diagram of a video encoding and decoding system related to an embodiment of the present application
  • Figure 2 is a schematic block diagram of a video encoder provided by an embodiment of the present application.
  • Figure 3 is a schematic block diagram of a decoding framework provided by an embodiment of the present application.
  • FIGS. 4A to 4D are schematic diagrams of the scanning sequence involved in this application.
  • FIGS 5A to 5C are schematic diagrams of decoded coefficients of the first quantized coefficients involved in this application.
  • FIGS 5D to 5F are schematic diagrams of another decoded coefficient of the first quantized coefficient involved in this application.
  • Figure 6 is a schematic flow chart of a video decoding method provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of area division involved in the embodiment of the present application.
  • Figure 8 is a schematic diagram of the scanning sequence of the transformation block involved in the embodiment of the present application.
  • Figures 9A to 9C are schematic diagrams of decoded coefficients of the first quantized coefficients involved in this application.
  • FIGS. 10A to 10C are schematic diagrams of another decoded coefficient of the first quantized coefficient involved in this application.
  • Figure 11 is a schematic flow chart of a video decoding method provided by an embodiment of the present application.
  • Figure 12 is a schematic flow chart of a video encoding method provided by an embodiment of the present application.
  • Figure 13 is a schematic flow chart of a video encoding method provided by an embodiment of the present application.
  • Figure 14 is a schematic block diagram of a video decoding device provided by an embodiment of the present application.
  • Figure 15 is a schematic block diagram of a video encoding device provided by an embodiment of the present application.
  • Figure 16 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
  • Figure 17 is a schematic block diagram of a video encoding system provided by an embodiment of the present application.
  • This application can be applied to the fields of image encoding and decoding, video encoding and decoding, hardware video encoding and decoding, dedicated circuit video encoding and decoding, real-time video encoding and decoding, etc.
  • the solution of this application can be operated in conjunction with other proprietary or industry standards, including ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its scalable video coding (SVC) and multi-view video coding (MVC) extensions.
  • FIG. 1 For ease of understanding, the video encoding and decoding system involved in the embodiment of the present application is first introduced with reference to FIG. 1 .
  • Figure 1 is a schematic block diagram of a video encoding and decoding system related to an embodiment of the present application. It should be noted that Figure 1 is only an example, and the video encoding and decoding system in the embodiment of the present application includes but is not limited to what is shown in Figure 1 .
  • the video encoding and decoding system 100 includes an encoding device 110 and a decoding device 120 .
  • the encoding device is used to encode the video data (which can be understood as compression) to generate a code stream, and transmit the code stream to the decoding device.
  • the decoding device decodes the code stream generated by the encoding device to obtain decoded video data.
  • the encoding device 110 in the embodiment of the present application can be understood as a device with a video encoding function
  • the decoding device 120 can be understood as a device with a video decoding function. That is to say, the encoding device 110 and the decoding device 120 in the embodiment of the present application cover a wide range of devices, including, for example, smartphones, desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, televisions, cameras, display devices, digital media players, video game consoles, vehicle-mounted computers, and the like.
  • the encoding device 110 may transmit the encoded video data (eg, code stream) to the decoding device 120 via the channel 130 .
  • Channel 130 may include one or more media and/or devices capable of transmitting encoded video data from encoding device 110 to decoding device 120 .
  • channel 130 includes one or more communication media that enables encoding device 110 to transmit encoded video data directly to decoding device 120 in real time.
  • encoding device 110 may modulate the encoded video data according to the communication standard and transmit the modulated video data to decoding device 120.
  • the communication media includes wireless communication media, such as radio frequency spectrum.
  • the communication media may also include wired communication media, such as one or more physical transmission lines.
  • channel 130 includes a storage medium that can store video data encoded by encoding device 110 .
  • Storage media include a variety of local access data storage media, such as optical disks, DVDs, flash memories, etc.
  • the decoding device 120 may obtain the encoded video data from the storage medium.
  • channel 130 may include a storage server that may store video data encoded by encoding device 110 .
  • the decoding device 120 may download the stored encoded video data from the storage server.
  • the storage server may store the encoded video data and may transmit the encoded video data to the decoding device 120, such as a web server (eg, for a website), a File Transfer Protocol (FTP) server, etc.
  • the encoding device 110 includes a video encoder 112 and an output interface 113.
  • the output interface 113 may include a modulator/demodulator (modem) and/or a transmitter.
  • the encoding device 110 may include a video source 111 in addition to the video encoder 112 and the output interface 113.
  • Video source 111 may include at least one of: a video capture device (e.g., a video camera), a video archive, a video input interface for receiving video data from a video content provider, or a computer graphics system for generating video data.
  • the video encoder 112 encodes the video data from the video source 111 to generate a code stream.
  • Video data may include one or more images (pictures) or sequence of pictures (sequence of pictures).
  • the code stream contains the encoding information of an image or image sequence in the form of a bit stream.
  • Encoded information may include encoded image data and associated data.
  • the associated data may include sequence parameter set (SPS), picture parameter set (PPS) and other syntax structures.
  • An SPS can contain parameters that apply to one or more sequences.
  • a PPS can contain parameters that apply to one or more images.
  • a syntax structure refers to a collection of zero or more syntax elements arranged in a specified order in a code stream.
  • the video encoder 112 transmits the encoded video data directly to the decoding device 120 via the output interface 113 .
  • the encoded video data can also be stored on a storage medium or storage server for subsequent reading by the decoding device 120 .
  • decoding device 120 includes input interface 121 and video decoder 122.
  • the decoding device 120 may also include a display device 123.
  • the input interface 121 includes a receiver and/or a modem. Input interface 121 may receive encoded video data over channel 130.
  • the video decoder 122 is used to decode the encoded video data to obtain decoded video data, and transmit the decoded video data to the display device 123 .
  • the display device 123 displays the decoded video data.
  • Display device 123 may be integrated with decoding device 120 or external to decoding device 120 .
  • Display device 123 may include a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or other types of display devices.
  • Figure 1 is only an example, and the technical solution of the embodiment of the present application is not limited to Figure 1.
  • the technology of the present application can also be applied to unilateral video encoding or unilateral video decoding.
  • FIG. 2 is a schematic block diagram of a video encoder provided by an embodiment of the present application. It should be understood that the video encoder 200 can be used to perform lossy compression of images, or can also be used to perform lossless compression of images.
  • the lossless compression can be visually lossless compression or mathematically lossless compression.
  • the video encoder 200 can be applied to image data in a luminance-chrominance (YCbCr, YUV) format.
  • YUV ratio can be 4:2:0, 4:2:2 or 4:4:4, Y represents brightness (Luma), Cb(U) represents blue chroma, Cr(V) represents red chroma, U and V represent Chroma, which is used to describe color and saturation.
  • 4:2:0 means that every 4 pixels have 4 luminance components and 2 chrominance components (YYYYCbCr)
  • 4:2:2 means that every 4 pixels have 4 luminance components and 4 chrominance components (YYYYCbCrCbCr)
  • 4:4:4 means full pixel display (YYYYCbCrCbCrCbCrCbCr).
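The luma/chroma sample counts implied by these formats can be illustrated with a small sketch (an illustrative example only; the function name and frame size are arbitrary choices, not taken from this application):

```python
def plane_sizes(width, height, chroma_format):
    """Return (luma_samples, chroma_samples_per_plane) for one frame."""
    luma = width * height
    if chroma_format == "4:2:0":      # chroma subsampled 2x horizontally and 2x vertically
        chroma = (width // 2) * (height // 2)
    elif chroma_format == "4:2:2":    # chroma subsampled 2x horizontally only
        chroma = (width // 2) * height
    else:                             # "4:4:4": no subsampling
        chroma = width * height
    return luma, chroma

# A 1920x1080 frame in 4:2:0 format has 2073600 luma samples and
# 518400 samples in each of the Cb and Cr planes.
print(plane_sizes(1920, 1080, "4:2:0"))
```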
  • the video encoder 200 reads video data, and for each frame of image in the video data, divides one frame of image into several coding tree units (coding tree units, CTU).
  • A CTU may also be called a "tree block", a "largest coding unit" (LCU) or a "coding tree block" (CTB).
  • Each CTU can be associated with an equal-sized block of pixels within the image.
  • Each pixel can correspond to one luminance (luminance or luma) sample and two chrominance (chrominance or chroma) samples. Therefore, each CTU can be associated with one block of luma samples and two blocks of chroma samples.
  • a CTU size is, for example, 128×128, 64×64, 32×32, etc.
  • a CTU can be further divided into several coding units (Coding Units, CUs) for encoding.
  • CUs can be rectangular blocks or square blocks.
  • A CU can be further divided into prediction units (PU) and transform units (TU), so that coding, prediction and transformation are separated, making processing more flexible.
  • the CTU is divided into CUs in a quad-tree manner, and the CU is divided into TUs and PUs in a quad-tree manner.
  • Video encoders and video decoders can support various PU sizes. Assuming that the size of a specific CU is 2N×2N, the video encoder and video decoder can support a PU size of 2N×2N or N×N for intra prediction, and support 2N×2N, 2N×N, N×2N, N×N or similarly sized symmetric PUs for inter prediction. The video encoder and video decoder can also support 2N×nU, 2N×nD, nL×2N and nR×2N asymmetric PUs for inter prediction.
  • the video encoder 200 may include: a prediction unit 210, a residual unit 220, a transform/quantization unit 230, an inverse transform/quantization unit 240, a reconstruction unit 250, a loop filtering unit 260, a decoded image cache 270, and an entropy encoding unit 280. It should be noted that the video encoder 200 may include more, fewer, or different functional components.
  • the current block may be called the current coding unit (CU) or the current prediction unit (PU), etc.
  • the prediction block may also be called a predicted image block or an image prediction block
  • the reconstructed image block may also be called a reconstruction block or an image reconstructed image block.
  • prediction unit 210 includes inter prediction unit 211 and intra prediction unit 212. Since there is a strong correlation between adjacent pixels in a video frame, the intra-frame prediction method is used in video encoding and decoding technology to eliminate the spatial redundancy between adjacent pixels. Since there is a strong similarity between adjacent frames in the video, the interframe prediction method is used in video coding and decoding technology to eliminate the temporal redundancy between adjacent frames, thereby improving coding efficiency.
  • the inter-frame prediction unit 211 can be used for inter-frame prediction.
  • Inter-frame prediction can refer to image information of different frames.
  • Inter-frame prediction uses motion information to find reference blocks from reference frames and generate prediction blocks based on the reference blocks to eliminate temporal redundancy;
  • the frames used in inter-frame prediction may be P frames and/or B frames.
  • P frames refer to forward prediction frames
  • B frames refer to bidirectional prediction frames.
  • the motion information includes the reference frame list where the reference frame is located, the reference frame index, and the motion vector.
  • The motion vector can have whole-pixel or sub-pixel precision. If the motion vector has sub-pixel precision, interpolation filtering needs to be applied in the reference frame to generate the required sub-pixel blocks.
  • The block of whole pixels or sub-pixels found in the reference frame according to the motion vector is called a reference block.
  • Some technologies use the reference block directly as the prediction block, while other technologies further process the reference block to generate the prediction block. Further processing the reference block to generate a prediction block can also be understood as taking the reference block as an initial prediction block and then processing it to generate a new prediction block.
  • the intra prediction unit 212 only refers to the information of the same frame image and predicts the pixel information in the current coded image block to eliminate spatial redundancy.
  • the frames used in intra prediction may be I frames.
  • the white 4×4 block is the current block
  • the gray pixels in the column to the left of and the row above the current block are the reference pixels of the current block.
  • Intra-frame prediction uses these reference pixels to predict the current block.
  • These reference pixels may all be available, that is, all of them may have been encoded and decoded. There may also be some parts that are unavailable. For example, if the current block is the leftmost part of the entire frame, then the reference pixel on the left side of the current block is unavailable.
  • the lower left part of the current block has not been encoded or decoded, so the reference pixels in the lower left are also unavailable.
  • For unavailable reference pixels, the available reference pixels, certain default values, or certain methods can be used for filling, or no filling is performed.
  • the intra prediction method also includes a multiple reference line intra prediction method (multiple reference line, MRL).
  • MRL can use more reference pixels to improve coding efficiency.
  • mode 0 is to copy the pixels above the current block in the vertical direction to the current block as the prediction value
  • mode 1 is to copy the reference pixel on the left to the current block in the horizontal direction as the prediction value
  • mode 2 is to use the average value of the eight reference points A to D and I to L as the predicted value of all points in the current block.
  • Mode 3 to mode 8 copy the reference pixel to the corresponding position of the current block at a certain angle. Because some positions of the current block cannot exactly correspond to the reference pixels, it may be necessary to use the weighted average of the reference pixels, or the sub-pixels of the interpolated reference pixels.
  • the intra-frame prediction modes used by HEVC include planar mode (Planar), DC and 33 angle modes, for a total of 35 prediction modes.
  • the intra-frame modes used by VVC include Planar, DC and 65 angle modes, for a total of 67 prediction modes.
  • the intra-frame modes used by AVS3 include DC, Plane, Bilinear and 63 angle modes, for a total of 66 prediction modes.
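As a rough illustration of the vertical, horizontal and averaging behaviour described for modes 0, 1 and 2 above, the following sketch predicts a 4x4 block from its top and left reference pixels (a simplified sketch only; reference-pixel padding and the angular modes 3 to 8 are omitted, and the function is not the actual predictor of any codec):

```python
import numpy as np

def intra_predict_4x4(mode, top, left):
    """Simplified 4x4 intra prediction from 4 top and 4 left reference samples."""
    pred = np.zeros((4, 4), dtype=np.int32)
    if mode == 0:                               # vertical: copy the pixel above each column
        pred[:] = np.asarray(top)
    elif mode == 1:                             # horizontal: copy the pixel left of each row
        pred[:] = np.asarray(left)[:, None]
    else:                                       # mode 2: average of the 8 reference points
        pred[:] = (sum(top) + sum(left) + 4) // 8
    return pred

print(intra_predict_4x4(0, top=[10, 20, 30, 40], left=[12, 14, 16, 18]))
```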
  • Residual unit 220 may generate a residual block of the CU based on the pixel block of the CU and the prediction block of the PU of the CU. For example, residual unit 220 may generate a residual block of a CU such that each sample in the residual block has a value equal to the difference between the corresponding sample in the pixel block of the CU and the corresponding sample in the prediction block of the PU of the CU.
  • Transform/quantization unit 230 may quantize the transform coefficients. Transform/quantization unit 230 may quantize transform coefficients associated with the TU of the CU based on quantization parameter (QP) values associated with the CU. Video encoder 200 may adjust the degree of quantization applied to transform coefficients associated with the CU by adjusting the QP value associated with the CU.
  • Inverse transform/quantization unit 240 may apply inverse quantization and inverse transform to the quantized transform coefficients, respectively, to reconstruct the residual block from the quantized transform coefficients.
  • Reconstruction unit 250 may add samples of the reconstructed residual block to corresponding samples of one or more prediction blocks generated by prediction unit 210 to produce a reconstructed image block associated with the TU. By reconstructing blocks of samples for each TU of a CU in this manner, video encoder 200 can reconstruct blocks of pixels of the CU.
  • Loop filtering unit 260 may perform deblocking filtering operations to reduce blocking artifacts for blocks of pixels associated with the CU.
  • the loop filtering unit 260 includes a deblocking filtering unit, a sample adaptive offset (SAO) unit, and an adaptive loop filter (ALF) unit.
  • Decoded image cache 270 may store reconstructed pixel blocks.
  • Inter prediction unit 211 may perform inter prediction on PUs of other images using reference images containing reconstructed pixel blocks.
  • intra prediction unit 212 may use the reconstructed pixel blocks in decoded image cache 270 to perform intra prediction on other PUs in the same image as the CU.
  • Entropy encoding unit 280 may receive the quantized transform coefficients from transform/quantization unit 230 . Entropy encoding unit 280 may perform one or more entropy encoding operations on the quantized transform coefficients to generate entropy encoded data.
  • the basic process of video encoding involved in this application is as follows: at the encoding end, the current image is divided into blocks, and for the current block, the prediction unit 210 uses intra prediction or inter prediction to generate a prediction block of the current block.
  • the residual unit 220 may calculate a residual block based on the prediction block and the original block of the current block, that is, the difference between the prediction block and the original block of the current block.
  • the residual block may also be called residual information.
  • the residual block undergoes transformation and quantization by the transform/quantization unit 230 to remove information to which human eyes are insensitive, thereby eliminating visual redundancy.
  • the residual block before transformation and quantization by the transform/quantization unit 230 may be called a time domain residual block, and the time domain residual block after transformation and quantization by the transform/quantization unit 230 may be called a frequency residual block or frequency domain residual block.
  • the entropy encoding unit 280 receives the quantized transform coefficients output by the transform and quantization unit 230, and may perform entropy encoding on the quantized transform coefficients to output a code stream. For example, the entropy encoding unit 280 may eliminate character redundancy according to the target context model and probability information of the binary code stream.
  • the video encoder performs inverse quantization and inverse transformation on the quantized transform coefficients output by the transform and quantization unit 230 to obtain the residual block of the current block, and then adds the residual block of the current block to the prediction block of the current block to obtain the reconstructed block of the current block.
  • reconstruction blocks corresponding to other image blocks in the current image can be obtained, and these reconstruction blocks are spliced to obtain a reconstructed image of the current image. Since errors are introduced during the encoding process, in order to reduce the error, the reconstructed image is filtered.
  • For example, ALF is used to filter the reconstructed image to reduce the difference between the pixel values in the reconstructed image and the original pixel values in the current image.
  • the filtered reconstructed image is stored in the decoded image cache 270 and can be used as a reference frame for inter-frame prediction for subsequent frames.
  • the block division information determined by the encoding end as well as mode information or parameter information such as prediction, transformation, quantization, entropy coding, loop filtering, etc., are carried in the code stream when necessary.
  • the decoding end parses the code stream and analyzes the existing information to determine the same block division information and the same prediction, transformation, quantization, entropy coding, loop filtering and other mode information or parameter information as the encoding end, thereby ensuring that the decoded image obtained by the encoding end is the same as the decoded image obtained by the decoding end.
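The per-block encoder flow described above (prediction, residual, transform/quantization, entropy coding and the reconstruction path) can be summarized with a toy end-to-end sketch (assumptions for illustration: the predictor is a flat DC value, the transform step is omitted, and qstep is an arbitrary step size; this is not the actual implementation of the units in Figure 2):

```python
import numpy as np

def encode_block_toy(original_block, qstep):
    """Toy sketch of the encode + reconstruct path for one block."""
    prediction = np.full_like(original_block, int(original_block.mean()))  # stand-in for prediction unit 210
    residual = original_block - prediction                                 # residual unit 220
    q_coeffs = np.round(residual / qstep).astype(int)                      # transform/quantization unit 230 (transform omitted)
    # The quantized coefficients would be entropy coded into the code stream here (unit 280).
    recon_residual = q_coeffs * qstep                                      # inverse transform/quantization unit 240
    reconstructed = prediction + recon_residual                            # reconstruction unit 250
    return q_coeffs, reconstructed                                         # later loop filtered and cached (units 260/270)

block = np.arange(16, dtype=np.int32).reshape(4, 4)
q, rec = encode_block_toy(block, qstep=4)
```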
  • Figure 3 is a schematic block diagram of a video decoder provided by an embodiment of the present application.
  • the video decoder 300 includes an entropy decoding unit 310 , a prediction unit 320 , an inverse quantization/transformation unit 330 , a reconstruction unit 340 , a loop filtering unit 350 and a decoded image cache 360 . It should be noted that the video decoder 300 may include more, less, or different functional components.
  • Video decoder 300 can receive the code stream.
  • Entropy decoding unit 310 may parse the codestream to extract syntax elements from the codestream. As part of parsing the code stream, the entropy decoding unit 310 may parse entropy-encoded syntax elements in the code stream.
  • the prediction unit 320, the inverse quantization/transformation unit 330, the reconstruction unit 340 and the loop filtering unit 350 may decode the video data according to the syntax elements extracted from the code stream, that is, generate decoded video data.
  • prediction unit 320 includes inter prediction unit 321 and intra prediction unit 322.
  • Intra prediction unit 322 may perform intra prediction to generate a prediction block for the PU. Intra prediction unit 322 may use an intra prediction mode to generate a prediction block for the PU based on pixel blocks of spatially neighboring PUs. Intra prediction unit 322 may also determine the intra prediction mode of the PU based on one or more syntax elements parsed from the code stream.
  • Inter prediction unit 321 may construct a first reference picture list (List 0) and a second reference picture list (List 1) based on syntax elements parsed from the code stream. Additionally, if the PU is encoded using inter prediction, entropy decoding unit 310 may parse the motion information of the PU. Inter prediction unit 321 may determine one or more reference blocks for the PU based on the motion information of the PU, and generate a prediction block for the PU based on the one or more reference blocks.
  • Inverse quantization/transform unit 330 may inversely quantize (ie, dequantize) transform coefficients associated with a TU. Inverse quantization/transform unit 330 may use the QP value associated with the CU of the TU to determine the degree of quantization.
  • inverse quantization/transform unit 330 may apply one or more inverse transforms to the inverse quantized transform coefficients to produce a residual block associated with the TU.
  • Reconstruction unit 340 uses the residual blocks associated with the TU of the CU and the prediction blocks of the PU of the CU to reconstruct the pixel blocks of the CU. For example, reconstruction unit 340 may add samples of the residual block to corresponding samples of the prediction block to reconstruct the pixel block of the CU to obtain a reconstructed image block.
  • Loop filtering unit 350 may perform deblocking filtering operations to reduce blocking artifacts for blocks of pixels associated with the CU.
  • the loop filtering unit 350 includes a deblocking filtering unit, a sample adaptive offset (SAO) unit, and an adaptive loop filter (ALF) unit.
  • Video decoder 300 may store the reconstructed image of the CU in decoded image cache 360 .
  • the video decoder 300 may use the reconstructed image in the decoded image cache 360 as a reference image for subsequent prediction, or transmit the reconstructed image to a display device for presentation.
  • the entropy decoding unit 310 can parse the code stream to obtain the prediction information, quantization coefficient matrix, etc. of the current block.
  • the prediction unit 320 uses intra prediction or inter prediction for the current block based on the prediction information to generate the prediction block of the current block.
  • the inverse quantization/transform unit 330 performs inverse quantization and inverse transformation on the quantization coefficient matrix obtained from the code stream to obtain the residual block.
  • the reconstruction unit 340 adds the prediction block and the residual block to obtain a reconstruction block.
  • the reconstructed blocks constitute a reconstructed image
  • the loop filtering unit 350 performs loop filtering on the reconstructed image based on the image or based on the blocks to obtain a decoded image.
  • the decoded image can also be called a reconstructed image.
  • the reconstructed image can be displayed by a display device, and on the other hand, it can be stored in the decoded image cache 360 and used as a reference frame for inter-frame prediction for subsequent frames.
  • Quantization and inverse quantization are closely related to the coefficient coding part.
  • the purpose of quantization is to scale the transform coefficients, thereby reducing the number of bits consumed when encoding coefficients.
  • the quantized object is the residual, that is, the residual is directly scaled and then encoded.
  • The quantization process can be expressed as q_i = round(t_i / qstep), where t_i is the transform coefficient, qstep is the quantization step size (which is related to the quantization parameter set in the configuration file), q_i is the quantization coefficient, and round() is a rounding operation that is not limited to rounding up or rounding down; the quantization process is controlled by the encoder.
  • The inverse quantization process can be expressed as t'_i = q_i × qstep, where t'_i is the reconstructed transform coefficient. Due to the accuracy loss caused by the rounding process, t'_i and t_i differ.
  • Quantization will reduce the accuracy of the transform coefficients, and the loss of accuracy is irreversible.
  • Encoders usually measure the cost of quantization through a rate-distortion cost function.
  • B() is the encoder's estimate of the number of bits consumed to encode the quantization coefficient q_i.
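Based on the definitions above, scalar quantization, inverse quantization and a rate-distortion cost of the common form J = D + λ·B(q_i) can be sketched as follows (a simplified illustration; the rounding rule, λ and the bit estimator B() are encoder choices, and the ones used here are hypothetical):

```python
def quantize(t_i, qstep):
    return int(round(t_i / qstep))        # q_i = round(t_i / qstep)

def dequantize(q_i, qstep):
    return q_i * qstep                    # t'_i = q_i * qstep (differs from t_i by the rounding loss)

def rd_cost(t_i, q_i, qstep, lam, estimate_bits):
    """J = D + lambda * B(q_i): squared reconstruction error plus weighted bit estimate."""
    distortion = (t_i - dequantize(q_i, qstep)) ** 2
    return distortion + lam * estimate_bits(q_i)

# Crude, purely illustrative bit estimate: larger coefficient levels cost more bits.
bits = lambda q: 1 + 2 * abs(q).bit_length()
q = quantize(37, 8)                        # -> 5, reconstructed as 40
print(q, dequantize(q, 8), rd_cost(37, q, 8, lam=10.0, estimate_bits=bits))
```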
  • multi-symbol arithmetic coding may be used to encode or decode the quantized coefficients, and each quantized coefficient may be indicated by one or more multi-symbol identifiers. Specifically, depending on the magnitude of the quantization coefficient, it can be segmented and represented by the following multi-symbol identifiers.
  • Identifier 1 Represents the part from 0 to 3, with a total of 4 symbols (0,1,2,3). When the symbol of Identifier 1 is 3, Identifier 2 needs to be further encoded/decoded.
  • Identifier 2 represents the part from 3 to 6, with a total of 4 symbols (0, 1, 2, 3). When the symbol of identifier 2 is 3, further encoding/decoding of identifier 3 is required.
  • Identifier 3 represents the part from 6 to 9, with a total of 4 symbols (0, 1, 2, 3). When the symbol of identifier 3 is 3, further encoding/decoding of identifier 4 is required.
  • Identifier 4 represents the part from 9 to 12, with a total of 4 symbols (0, 1, 2, 3). When the symbol of identifier 4 is 3, identifier 5 needs to be further encoded/decoded.
  • Identifier 5 represents the part from 12 to 15, with a total of 4 symbols (0, 1, 2, 3). When the symbol of identifier 5 is 3, it is necessary to further encode/decode the part greater than or equal to 15.
  • Identifiers 1 to 5 are encoded/decoded in scanning order from the last non-zero coefficient to the upper left corner of the transform block.
  • The coefficient part indicated by identifier 1 is called the base part (Base Range, BR)
  • the coefficient part indicated by identifiers 2 to 5 is called the lower part (Lower Range, LR)
  • the part greater than or equal to 15 is called the higher part (Higher Range, HR).
  • The decoded quantization coefficient index qIdx is the sum of identifiers 1 to 5 plus the part exceeding 15, for example, as shown in formula (4): qIdx = BR + ΣLR + HR.
  • Since the LR part consists of the four identifiers 2 to 5, ΣLR is used here to denote the sum of identifiers 2 to 5.
  • S() is the multi-symbol context model coding
  • L(1) is the bypass coding
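The segmentation of a coefficient magnitude into identifiers 1 to 5 and the higher-range remainder, and the reconstruction of qIdx from them, can be modelled as follows (an illustrative model of the BR/LR/HR split described above, not the normative entropy coding process; sign coding is ignored):

```python
def split_level(q_abs):
    """Split |coefficient| into identifiers 1..5 (each a symbol 0..3) plus the HR remainder."""
    identifiers = []
    remaining = q_abs
    for _ in range(5):                 # identifier 1 (BR) and identifiers 2..5 (LR)
        symbol = min(remaining, 3)
        identifiers.append(symbol)
        remaining -= symbol
        if symbol < 3:                 # the next identifier is only coded when the symbol is 3
            break
    return identifiers, remaining      # `remaining` is the part greater than or equal to 15 (HR)

def rebuild_qidx(identifiers, hr):
    return sum(identifiers) + hr       # qIdx = sum of identifiers 1..5 plus the part exceeding 15

ids, hr = split_level(17)              # -> [3, 3, 3, 3, 3] and 2
assert rebuild_qidx(ids, hr) == 17
```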
  • different coefficient encoding orders are selected according to different selected transform modes.
  • the transformation includes the following 16 types. The corresponding relationship between these 16 transformations and the scanning order is shown in Table 2:
  • Transform Type represents the transformation type
  • Vertical Mode represents the vertical transformation type
  • Horizontal Mode represents the horizontal transformation type
  • Transform Type includes 1D Transform Type and 2D Transform Type.
  • The scanning method of the 1D Transform Type is divided into row scanning and column scanning according to the horizontal and vertical directions.
  • The 2D Transform Type is divided into Zig-Zag scan and Diagonal scan.
  • That is, Zig-Zag scan, Diagonal scan, Column scan and Row scan (line scan) are used when encoding and decoding coefficients.
  • the numbers in the figure indicate the index of the scanning sequence.
  • The actual coefficient decoding order is defined as the reverse of the scanning order, that is, decoding starts from the first non-zero coefficient encountered from the lower right corner of the transform block and proceeds sequentially in decoding order.
  • For example, the Zig-Zag scanning order in Figure 4A is 0, 1, ..., 15, and the corresponding decoding order is 15, 14, ..., 1, 0. If the first non-zero coefficient position in decoding order is index 12, the actual decoding order of the coefficients is 12, 11, ..., 1, 0.
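For a small transform block, a Zig-Zag scan order and the corresponding reverse decoding order can be generated as follows (an illustrative sketch; the actual scan tables are defined per block size and transform type and may differ from this construction):

```python
def zigzag_order(n):
    """Zig-Zag scan order for an n x n block as a list of (row, col) positions."""
    return sorted(
        ((r, c) for r in range(n) for c in range(n)),
        key=lambda rc: (rc[0] + rc[1],                                   # anti-diagonal index
                        rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]),   # alternate direction per diagonal
    )

scan = zigzag_order(4)                       # scan index 0 is the DC position (0, 0)
last_nonzero_scan_index = 12                 # e.g., the first non-zero coefficient in decoding order is at index 12
decode_order = list(reversed(scan[:last_nonzero_scan_index + 1]))
# Coefficients are then decoded at scan indices 12, 11, ..., 1, 0.
```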
  • When determining the context model corresponding to the current coefficient, it is also necessary to consider whether the current coefficient is the last non-zero coefficient, the distance between the position of the current coefficient and the upper left corner of the transform block, the size of the transform block, and other conditions.
  • When encoding/decoding identifier 1 (i.e., BR), the context model needs to be selected according to the quantization parameter QP (4 QP segments), the transform block size (5 types), the luma or chroma component (2 types), the position distance (42 types), and the sum of the parts of the surrounding coded coefficients whose absolute values are less than or equal to 3, giving a total of 4×5×2×42 = 1680 context models; one model is selected for encoding/decoding and the model probability value is updated.
  • When encoding/decoding identifiers 2 to 5 (i.e., LR), the context model needs to be selected according to the quantization parameter QP (4 QP segments), the transform block size (5 types), the luma or chroma component (2 types), the position distance (21 types), and the sum of the parts of the surrounding coded coefficients whose absolute values are less than or equal to 15, giving a total of 4×5×2×21 = 840 context models; one model is selected for encoding/decoding and the model probability value is updated.
  • the calculation method for determining the context model index value is as shown in Table 3:
  • Figures 5A to 5C show the 5 decoded quantization coefficients around the first quantization coefficient under several scanning modes, and Figures 5D to 5F show the 3 decoded quantization coefficients around the first quantization coefficient under several scanning modes.
  • DC represents the upper left corner position in the transformation block.
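The model counts quoted above follow directly from multiplying the number of classes in each dimension; a sketch of how such a flat context index could be formed is shown below (the packing order and the meaning of pos_ctx are assumptions for illustration and are not the calculation in Table 3):

```python
def br_context_index(qp_segment, tx_size_class, is_chroma, pos_ctx):
    """Flatten (QP segment, transform-size class, plane, position/neighbourhood context)
    into one BR context index in the range 0..1679. pos_ctx (0..41) is assumed to already
    combine the position distance with the clipped sum of surrounding decoded coefficients."""
    assert 0 <= qp_segment < 4 and 0 <= tx_size_class < 5 and 0 <= pos_ctx < 42
    return ((qp_segment * 5 + tx_size_class) * 2 + int(is_chroma)) * 42 + pos_ctx

# 4 QP segments x 5 block sizes x 2 planes x 42 position contexts = 1680 BR models;
# the LR identifiers use 21 position contexts instead, giving 840 models.
assert 4 * 5 * 2 * 42 == 1680 and 4 * 5 * 2 * 21 == 840
```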
  • the current encoding method of quantized coefficients is usually to fully encode the sign and absolute value of the quantized coefficient, which occupies a lot of bits and is expensive to encode.
  • In view of this, in the embodiments of the present application, the parity of at least one quantization coefficient in the current region is hidden according to parity information related to the quantization coefficients in the current region, so as to reduce the encoding cost.
  • FIG. 6 is a schematic flowchart of a video decoding method provided by an embodiment of the present application.
  • the embodiment of the present application is applied to the video decoder shown in FIGS. 1 and 3 .
  • the method in the embodiment of this application includes:
  • the current area is an area in the current block that includes at least one non-zero quantization coefficient.
  • the current block is divided into one or more regions, for example, into N regions, where N is a positive integer, and the current region is a region that includes at least one non-zero quantization coefficient among the N regions of the current block.
  • In the embodiments of the present application, the encoding end can hide the parity of one or more quantization coefficients in a region based on parity information related to the quantization coefficients in the same region, for example, based on the parity of the sum of the absolute values of the quantization coefficients in the region, so as to reduce the encoding cost.
  • For example, if the second quantization coefficient whose parity is to be hidden is a1, parity hiding is performed on all or part of the second quantization coefficient to obtain the first quantization coefficient a2, where a2 is smaller than a1; thus encoding a2 consumes fewer bits than encoding a1, reducing the encoding cost.
  • the encoding end hides part or all of the parity in the second quantized coefficient to obtain the first quantized coefficient.
  • the first quantized coefficient includes the quantized coefficient with the parity hidden.
  • The parity of the quantization coefficient whose parity is hidden among the first quantization coefficients is indicated by P quantization coefficients in the current region, and the decoding end reconstructs the quantization coefficient whose parity is hidden based on this indicated parity to obtain the reconstructed second quantization coefficient. Based on this, when decoding the first quantization coefficient, the decoder first decodes the code stream to obtain the P quantization coefficients in the current region.
  • the P quantized coefficients are all quantized coefficients in the current region except the first quantized coefficient.
  • the above P quantization coefficients are partial quantization coefficients in the current region.
  • the above-mentioned P quantization coefficients do not include the first quantization coefficient in the current region.
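The parity hiding and recovery described above can be sketched as follows (a simplified model that assumes the hidden parity is carried by the parity of the sum of the absolute values of the other quantization coefficients in the region, which is one of the examples mentioned in this application; sign handling and the encoder's rate-distortion adjustment are omitted):

```python
def hide_parity(coeffs, hidden_pos):
    """Encoder side: replace the selected coefficient by its halved magnitude (the first
    quantization coefficient). The encoder must arrange the parity of the sum of the
    absolute values of the remaining P coefficients to match the discarded parity bit."""
    out = list(coeffs)
    out[hidden_pos] = abs(coeffs[hidden_pos]) // 2
    return out

def recover_parity(decoded, hidden_pos):
    """Decoder side: rebuild the second quantization coefficient from the decoded first
    quantization coefficient and the parity indicated by the P decoded coefficients."""
    parity = sum(abs(c) for i, c in enumerate(decoded) if i != hidden_pos) & 1
    return 2 * decoded[hidden_pos] + parity

# Region coefficients [4, 1, 0, 2, 7]: hiding the parity of the last one transmits 3 instead
# of 7, and the decoder restores 2*3 + 1 = 7 because the sum of the other magnitudes is odd.
decoded_region = hide_parity([4, 1, 0, 2, 7], hidden_pos=4)   # -> [4, 1, 0, 2, 3]
assert recover_parity(decoded_region, hidden_pos=4) == 7
```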
  • the above-mentioned quantized coefficients are obtained by quantizing the transform coefficients.
  • the encoding end predicts the current block and obtains the predicted value of the current block.
  • the original value of the current block is subtracted from the predicted value to obtain the residual value of the current block.
  • the encoding end transforms the residual value to obtain the transform coefficient of the current block, then quantizes the transform coefficient to obtain the quantized coefficient, and then encodes the quantized coefficient to obtain the code stream.
  • the decoder receives the code stream, decodes the code stream, obtains the quantized coefficients, inversely quantizes the quantized coefficients to obtain the transform coefficients of the current block, and then inversely transforms the transform coefficients to obtain the residual value of the current block.
  • the above-mentioned current block may also be called a current transform block.
  • the above-mentioned quantization coefficient is obtained by quantizing the residual value.
  • the encoding end predicts the current block to obtain the predicted value of the current block, and subtracts the original value of the current block from the predicted value to obtain the residual value of the current block.
  • the encoding end quantizes the residual value to obtain the quantized coefficients, and then encodes the quantized coefficients to obtain the code stream.
  • the decoder receives the code stream, decodes the code stream, obtains the quantized coefficients, and performs inverse quantization on the quantized coefficients to obtain the residual value of the current block.
  • the quantization coefficient can be understood as a numerical value composed of an absolute value and a sign decoded from the code stream.
  • The absolute value includes the values of the identifier bits corresponding to the quantization coefficient; if the value of the quantization coefficient exceeds 15, it also includes the part of the absolute value exceeding 15.
  • the current block can be divided into N areas, and the sizes of these N areas can be the same or different.
  • the decoding end and the encoding end use the same region division method to divide the current block into N regions.
  • both the decoding end and the encoding end use the default region division method to divide the current block into N regions.
  • the encoding end may indicate the region division mode of the current block to the decoding end. For example, a flag A is encoded in the code stream, and the flag A is used to indicate the region division mode of the current block. In this way, the decoding end obtains the flag A by decoding the code stream, and determines the area division method of the current block based on the flag A.
  • the flag A may be a sequence-level flag, used to indicate that all decoding blocks in the sequence may use this region division method to divide the decoding blocks into N regions.
  • the flag A may be a frame-level flag, used to indicate that all decoded blocks in the image frame may use this region division method to divide the decoded blocks into N regions.
  • the flag A may be a slice-level flag, used to indicate that all decoded blocks in the image slice may use this region division method to divide the decoded blocks into N regions.
  • the flag A can be a block-level flag, used to indicate that the current block can be divided into N areas using this area division method.
  • the process of determining the P quantization coefficients in the current area by the decoder includes but is not limited to the following examples:
  • Example 1: The decoder decodes region by region to obtain the quantization coefficients whose parity is not hidden in each region. For example, after decoding the quantization coefficients whose parity is not hidden in one region, the decoder determines the P quantization coefficients of that region, and then decodes the quantization coefficients whose parity is not hidden in the next region and determines the P quantization coefficients of the next region. That is to say, in this embodiment, the decoder can determine the P quantization coefficients of the current region before all the quantization coefficients of the current block whose parity is not hidden have been decoded. For example, every K pixels in the scanning direction are divided into one region.
  • Example 2: After determining all quantization coefficients whose parity is not hidden in the current block, the decoder determines the P quantization coefficients in the current region. Specifically, this includes the following steps:
  • the decoder first decodes the code stream to obtain decoded information of the current block.
  • the decoded information includes quantized coefficients of different areas in the current block.
  • the current block is divided into N regions according to the region division method. Assuming that the current area is the k-th area among the N areas, the decoded information corresponding to the k-th area among the decoded information of the current block is determined as the decoded information of the current area.
  • the decoded information of the current area includes P quantization coefficients in the current area.
  • Method 1: Divide the current block into N regions according to the scanning order.
  • At least two areas among the N areas include the same number of quantization coefficients.
  • At least two of the N regions include different numbers of quantization coefficients.
  • every M non-zero quantized coefficients in the current block are divided into a region, resulting in N regions.
  • Each of these N regions includes M non-zero quantization coefficients.
  • At least one of the N regions includes one or more hidden coefficients.
  • If the number of non-zero quantization coefficients included in the last region is not M, the last region is kept as a separate region, or the last region and the previous region are merged into one region.
  • every K pixels in the current block are divided into a region, resulting in N regions.
  • For example, for an 8x8 transform block using the reverse Zig-Zag scan order, when each region is of equal size, that is, each region contains 16 coefficients, the current block is divided into 4 regions as shown in Figure 7.
  • the divided N regions are of the same size, and each region includes K pixels. There may be regions in these N regions where the quantization coefficients are all 0, or there may be regions that do not include hidden coefficients. That is, at least one region among the N regions includes one or more hidden coefficients.
  • If the number of quantization coefficients included in the last region is not K, the last region is kept as a separate region, or the last region and the previous region are merged into one region.
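A sketch of dividing the scan-ordered coefficient positions of a block into regions of K positions, with the shorter final region either kept separate or merged, is shown below (K, the merge choice and the use of plain position indices are illustrative assumptions):

```python
def split_into_regions(scan_positions, k, merge_short_tail=True):
    """Divide scan-ordered coefficient positions into regions of k positions each;
    a final region shorter than k is either kept as its own region or merged
    with the previous region, matching the two options described above."""
    regions = [scan_positions[i:i + k] for i in range(0, len(scan_positions), k)]
    if merge_short_tail and len(regions) > 1 and len(regions[-1]) < k:
        regions[-2].extend(regions.pop())
    return regions

# An 8x8 transform block has 64 coefficient positions; with k = 16 this gives the
# 4 equally sized regions of the Figure 7 example.
regions = split_into_regions(list(range(64)), k=16)
assert len(regions) == 4 and all(len(r) == 16 for r in regions)
```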
  • Method 2: Divide the current block into N regions according to spatial location.
  • the above-mentioned N areas are sub-blocks of the current block.
  • the current block is evenly divided into N sub-blocks.
  • the size of each sub-block is 4×4.
  • each sub-block includes at least one non-zero quantization coefficient.
  • the method of dividing the current block into N areas may also include other methods, and the embodiment of the present application does not limit this.
  • the current region in the embodiment of the present application includes at least one first quantization coefficient.
  • the first quantization coefficient is also called a hidden coefficient.
  • the first quantization coefficient may be any non-zero quantization coefficient in the current area defaulted by the codec end.
  • the first quantization coefficient may be a non-zero quantization coefficient with the largest absolute value in the current region.
  • the first quantization coefficient is a non-zero quantization coefficient located at the Kth position in the scanning order in the current area, and the K is less than or equal to the number of non-zero quantization coefficients in the current area.
  • the current area contains 16 coefficients, and the 16th non-zero quantization coefficient and/or the 15th non-zero quantization coefficient in the scanning order can be used as the first quantization coefficient of the current area.
  • a flag may be used to indicate whether the current block is allowed to use the technology of hiding the parity of quantization coefficients provided by the embodiments of this application.
  • the technology of hiding the parity of the quantization coefficient provided by the embodiment of the present application is also called the parity hiding technology.
  • the at least one flag set may be a flag of different levels, used to indicate whether the corresponding level allows the parity of the quantization coefficient to be hidden.
  • the at least one flag includes at least one of a sequence-level flag, an image-level flag, a slice-level flag, a unit-level flag, and a block-level flag.
  • the above-mentioned at least one flag includes a sequence-level flag, which is used to indicate whether the current sequence allows the parity of the quantization coefficient to be hidden. For example, if the value of the sequence-level flag is 1, it indicates that the current sequence allows the parity of the quantized coefficient to be hidden; if the value of the sequence-level flag is 0, it indicates that the current sequence does not allow the parity of the quantized coefficient to be hidden.
  • the sequence-level flag may be located in the sequence header.
  • the above-mentioned at least one flag includes a picture-level flag, which is used to indicate whether the current picture allows the parity of the quantization coefficient to be hidden. For example, if the value of the image-level flag is 1, it indicates that the current image allows the parity of the quantized coefficient to be hidden; if the value of the image-level flag is 0, it indicates that the current image does not allow the parity of the quantized coefficient to be hidden.
  • the image-level flag may be located in the picture header.
  • the above-mentioned at least one flag includes a slice-level flag, which is used to indicate whether the current slice allows the parity of the quantization coefficient to be hidden. For example, if the value of the slice-level flag is 1, it indicates that the current slice allows the parity of the quantized coefficient to be hidden; if the value of the slice-level flag is 0, it indicates that the current slice does not allow the parity of the quantized coefficient to be hidden.
  • the slice-level flag may be located in the slice header.
  • the above-mentioned at least one flag includes a unit-level flag, which is used to indicate whether the current CTU allows the parity of the quantization coefficient to be hidden. For example, if the value of the unit-level flag is 1, it indicates that the current CTU allows the parity of the quantized coefficient to be hidden; if the value of the unit-level flag is 0, it indicates that the current CTU does not allow the parity of the quantized coefficient to be hidden.
  • the above-mentioned at least one flag includes a block-level flag, which is used to indicate whether the current block allows the parity of the quantization coefficient to be hidden. For example, if the value of the block-level flag is 1, it indicates that the current block allows the parity of the quantized coefficient to be hidden; if the value of the block-level flag is 0, it indicates that the current block does not allow the parity of the quantized coefficient to be hidden.
  • the decoder first decodes the code stream to obtain at least one of the above flags, and determines whether the current block allows the parity of the quantized coefficient to be hidden based on the at least one flag. If it is determined based on at least one of the above flags that the parity of the quantized coefficient is not allowed to be hidden in the current block, the method in the embodiment of the present application is skipped, and the decoded quantized coefficient is directly inversely quantized to obtain the transform coefficient. If it is determined based on at least one of the above flags that the current block is allowed to use the parity concealment technology provided by the embodiment of the present application, the method of the embodiment of the present application is executed.
  • the decoder first decodes the code stream to obtain the sequence-level flag. If the sequence-level flag indicates that the current sequence is not allowed to use the quantization coefficient parity concealment technology provided by the embodiments of the present application, the method of the embodiments of the present application is skipped and the current block is inversely quantized in the traditional way. If the sequence-level flag indicates that the current sequence is allowed to use the quantization coefficient parity concealment technology, the decoder continues to decode the code stream to obtain the picture-level flag.
  • if the picture-level flag indicates that the current picture is not allowed to use the quantization coefficient parity concealment technology provided by the embodiments of the present application, the method of the embodiments of the present application is skipped and the current block is inversely quantized in the traditional way. If the picture-level flag indicates that the current picture is allowed to use the quantization coefficient parity concealment technology, the decoder continues to decode the code stream to obtain the slice-level flag.
  • if the slice-level flag indicates that the current slice is not allowed to use the quantization coefficient parity concealment technology provided by the embodiments of the present application, the method of the embodiments of the present application is skipped and the current block is inversely quantized in the traditional way. If the slice-level flag indicates that the current slice is allowed to use the quantization coefficient parity concealment technology, the decoder continues to decode the code stream to obtain the unit-level flag.
  • if the unit-level flag indicates that the current CTU is not allowed to use the quantization coefficient parity concealment technology provided by the embodiments of the present application, the method of the embodiments of the present application is skipped and the current block is inversely quantized in the traditional way. If the unit-level flag indicates that the current CTU is allowed to use the quantization coefficient parity concealment technology, the decoder continues to decode the code stream to obtain the block-level flag.
  • if the block-level flag indicates that the current block is not allowed to use the quantization coefficient parity concealment technology provided by the embodiments of the present application, the method of the embodiments of the present application is skipped and the current block is inversely quantized in the traditional way. If the block-level flag indicates that the current block is allowed to use the quantization coefficient parity concealment technology, the method of the embodiments of the present application is executed. An illustrative sketch of this level-by-level check is given below.
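  • The following is a non-normative sketch of the level-by-level flag check described above. The flag names (sps_ph_enabled, pps_ph_enabled, slice_ph_enabled, ctu_ph_enabled, cu_ph_enabled) are assumptions made purely for illustration and are not syntax elements of any particular standard; in practice each flag would be parsed from its corresponding header before this check.

```cpp
// Hypothetical flags collected from the sequence, picture, slice, CTU and
// block headers; names are illustrative only.
struct ParityHidingFlags {
    bool sps_ph_enabled;    // sequence-level flag
    bool pps_ph_enabled;    // picture-level flag
    bool slice_ph_enabled;  // slice-level flag
    bool ctu_ph_enabled;    // unit-level (CTU) flag
    bool cu_ph_enabled;     // block-level flag
};

// Returns true only when every level from sequence down to block allows the
// parity of the quantization coefficient to be hidden; otherwise the decoder
// falls back to conventional inverse quantization of the current block.
bool parityHidingAllowed(const ParityHidingFlags& f) {
    if (!f.sps_ph_enabled)   return false;  // sequence disallows: stop here
    if (!f.pps_ph_enabled)   return false;  // picture disallows
    if (!f.slice_ph_enabled) return false;  // slice disallows
    if (!f.ctu_ph_enabled)   return false;  // CTU disallows
    return f.cu_ph_enabled;                 // block-level flag decides last
}
```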
  • the quantization coefficient parity concealment technology provided by the embodiments of the present application is mutually exclusive with a target transform method, where the target transform method includes a secondary transform, multiple transforms, or a first transform type, and the first transform type indicates that the transform of the current block is skipped in at least one direction.
  • if the decoder determines that the current block is transformed using the target transform method, it skips the technical solution provided by the embodiment of the present application, for example, skips the following step S402.
  • the first quantization coefficient is a quantization coefficient obtained after parity concealment of all or part of the second quantization coefficient
  • the second quantization coefficient is a quantization coefficient in the current area whose parity has not been concealed; for example, the part of the absolute value of the second quantization coefficient that is greater than n is parity hidden.
  • for example, the first quantization coefficient is the sum of the part of the second quantization coefficient whose parity is not hidden (i.e., 10) and the quantization coefficient whose parity is hidden (i.e., 17), that is, the first quantization coefficient is 27.
  • in other words, the part of the first quantization coefficient whose parity is not hidden is 10, and the quantization coefficient whose parity is hidden is 17.
  • P quantization coefficients in the current region are used to indicate the parity of the quantization coefficients whose parity is hidden.
  • after the decoder obtains the P quantization coefficients in the current area according to the above steps, it can determine, based on these P quantization coefficients, the parity of the quantization coefficient whose parity is hidden, and then use this parity to accurately reconstruct the parity-hidden part.
  • the embodiment of the present application does not limit the method of determining the parity of the quantized coefficient whose parity is hidden among the first quantized coefficients in the above S402 based on the P quantized coefficients in the current region.
  • a binary characteristic (0 or 1) of the P quantized coefficients in the current region is used to determine the parity of the quantized coefficient whose parity is hidden among the first quantized coefficients.
  • the above S402 includes the following S402-A:
  • the above S402-A includes: determining the parity of the quantization coefficient whose parity is hidden according to the parity of the sum of the first absolute values of the P quantization coefficients.
  • the first absolute value of the quantized coefficient is part or all of the absolute value of the quantized coefficient.
  • the first absolute value is a part of the absolute value of the quantized coefficient that is less than 15.
  • the absolute value of the quantization coefficient is indicated by one or more identifiers. If the absolute value of the quantization coefficient is greater than 15, the absolute value of the quantization coefficient also includes the part exceeding 15.
  • the entire absolute value of the quantized coefficient refers to the entire decoded absolute value of the quantized coefficient, including the value of each identification bit; if the absolute value of the quantized coefficient is greater than 15, it also includes the part exceeding 15.
  • part of the absolute values of the quantized coefficients refers to the value of all or part of the flag bits.
  • for example, the decoding end determines the parity of the quantization coefficient whose parity is hidden based on the parity of the sum of the identifier 1 values of the P quantization coefficients.
  • for example, if the sum of the first absolute values of the P quantization coefficients is an odd number, the quantization coefficient whose parity is hidden is an odd number; if the sum is an even number, the quantization coefficient whose parity is hidden is an even number.
  • correspondingly, when the parity of the sum of the first absolute values of the P quantization coefficients does not match the parity of the quantization coefficient whose parity is hidden, the encoding end modifies at least one of the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an odd number and the sum of the first absolute values of the P quantization coefficients is an even number, 1 is added to or subtracted from the smallest quantization coefficient among the P quantization coefficients, so as to modify the sum of the first absolute values of the P quantization coefficients to an odd number.
  • similarly, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an even number and the sum of the first absolute values of the P quantization coefficients is an odd number, at least one of the P quantization coefficients is adjusted so that the sum of the first absolute values becomes an even number. A sketch of this sum-based rule is given below.
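  • As a purely illustrative, non-normative sketch of the sum-based rule above: the hidden parity is read from the parity of the sum of the first absolute values of the P coefficients, with the first absolute value taken here as the part of the absolute value up to 15, following the example in this description. The function names are assumptions for illustration.

```cpp
#include <cstdlib>
#include <vector>

// First absolute value of a quantization coefficient: here the part of its
// absolute value that does not exceed 15 (identifiers 1 to 5).
static int firstAbs(int coeff) {
    int a = std::abs(coeff);
    return a < 15 ? a : 15;
}

// Decoder side: the parity of the hidden coefficient is inferred from the
// parity of the sum of the first absolute values of the P coefficients.
// Returns 1 if the hidden coefficient is odd, 0 if it is even.
int hiddenParityFromSum(const std::vector<int>& pCoeffs) {
    int sum = 0;
    for (int c : pCoeffs) sum += firstAbs(c);
    return sum & 1;
}
```

  • At the encoding end, when the parity of this sum does not match the parity to be conveyed, one of the P coefficients (for example the smallest one) would be increased or decreased by 1, preferably using the adjustment with the smallest rate-distortion cost.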
  • the above S402-A includes: determining the target quantization coefficients among the P quantization coefficients, and determining the parity of the quantization coefficient whose parity is hidden based on the parity of the number of target quantization coefficients among the P quantization coefficients.
  • the above-mentioned target quantization coefficient is any one of the following among the P quantization coefficients: the non-zero quantization coefficients, the non-zero quantization coefficients with even values, the quantization coefficients with even values, and the quantization coefficients with odd values.
  • for example, if the target quantization coefficient is a non-zero quantization coefficient among the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden can be determined based on the parity of the number of non-zero quantization coefficients among the P quantization coefficients.
  • for example, if the number of non-zero quantization coefficients among the P quantization coefficients is an odd number, the quantization coefficient whose parity is hidden is an odd number; if that number is an even number, the quantization coefficient whose parity is hidden is an even number.
  • correspondingly, when the parity of the number of non-zero quantization coefficients among the P quantization coefficients does not match the parity of the quantization coefficient whose parity is hidden, the encoding end modifies at least one of the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an odd number and the number of non-zero quantization coefficients among the P quantization coefficients is an even number, a quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 1, or the value of the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, so that the number of non-zero quantization coefficients among the P quantization coefficients is an odd number.
  • similarly, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an even number and the number of non-zero quantization coefficients among the P quantization coefficients is an odd number, a quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 1, or the value of the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, so that the number of non-zero quantization coefficients among the P quantization coefficients is an even number.
  • for another example, if the target quantization coefficient is a non-zero quantization coefficient with an even value among the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden can be determined based on the parity of the number of non-zero quantization coefficients with even values among the P quantization coefficients.
  • for example, if the number of non-zero quantization coefficients with even values among the P quantization coefficients is an odd number, the quantization coefficient whose parity is hidden is an odd number; if that number is an even number, the quantization coefficient whose parity is hidden is an even number.
  • correspondingly, when the parity of the number of non-zero quantization coefficients with even values among the P quantization coefficients does not match the parity of the quantization coefficient whose parity is hidden, the encoding end modifies at least one coefficient among the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an odd number and the number of non-zero quantization coefficients with even values among the P quantization coefficients is an even number, a quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 2, or 1 can be added to or subtracted from the value of the smallest non-zero quantization coefficient among the P quantization coefficients, so that the number of non-zero quantization coefficients with even values among the P quantization coefficients is an odd number.
  • for another example, if the target quantization coefficient is a quantization coefficient with an even value among the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden can be determined based on the parity of the number of quantization coefficients with even values among the P quantization coefficients, where the quantization coefficients with even values include quantization coefficients whose value is 0.
  • for example, if the number of quantization coefficients with even values among the P quantization coefficients is an odd number, the quantization coefficient whose parity is hidden is an odd number; if that number is an even number, the quantization coefficient whose parity is hidden is an even number.
  • correspondingly, when the parity of the number of quantization coefficients with even values among the P quantization coefficients does not match the parity of the quantization coefficient whose parity is hidden, the encoding end modifies at least one coefficient among the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an odd number and the number of quantization coefficients with even values among the P quantization coefficients is an even number, a quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 1, or the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, so that the number of quantization coefficients with even values among the P quantization coefficients is an odd number.
  • similarly, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an even number and the number of quantization coefficients with even values among the P quantization coefficients is an odd number, a quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 1, or the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, so that the number of quantization coefficients with even values among the P quantization coefficients is an even number.
  • for another example, if the target quantization coefficient is a quantization coefficient with an odd value among the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden can be determined based on the parity of the number of quantization coefficients with odd values among the P quantization coefficients.
  • for example, if the number of quantization coefficients with odd values among the P quantization coefficients is an odd number, the quantization coefficient whose parity is hidden is an odd number; if that number is an even number, the quantization coefficient whose parity is hidden is an even number.
  • correspondingly, when the parity of the number of quantization coefficients with odd values among the P quantization coefficients does not match the parity of the quantization coefficient whose parity is hidden, the encoding end modifies at least one coefficient among the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an odd number and the number of quantization coefficients with odd values among the P quantization coefficients is an even number, a quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 1, or the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, or 1 can be added to or subtracted from it, so that the number of quantization coefficients with odd values among the P quantization coefficients is an odd number.
  • similarly, if the quantization coefficient whose parity is hidden among the second quantization coefficients is an even number and the number of quantization coefficients with odd values among the P quantization coefficients is an odd number, a quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 1, or the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, or 1 can be added to or subtracted from it, so that the number of quantization coefficients with odd values among the P quantization coefficients is an even number.
  • the encoding end uses an adjustment method with the smallest rate distortion cost to adjust at least one coefficient among the P quantization coefficients.
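  • The count-based variants above can be sketched with a single helper parameterized by the target-coefficient predicate. This is an illustration only; the mapping from the count parity to the hidden parity (an odd count meaning an odd hidden coefficient) is one possible convention consistent with the examples above, and the names are assumptions.

```cpp
#include <functional>
#include <vector>

// The hidden parity equals the parity of the number of "target" coefficients
// among the P coefficients; the predicate selects the variant in use.
int hiddenParityFromCount(const std::vector<int>& pCoeffs,
                          const std::function<bool(int)>& isTarget) {
    int count = 0;
    for (int c : pCoeffs) {
        if (isTarget(c)) ++count;
    }
    return count & 1;  // 1 -> hidden coefficient is odd, 0 -> even
}

// Example predicates for the four variants described above.
bool isNonZero(int c)     { return c != 0; }
bool isNonZeroEven(int c) { return c != 0 && (c % 2) == 0; }
bool isEven(int c)        { return (c % 2) == 0; }   // includes zero
bool isOdd(int c)         { return (c % 2) != 0; }
```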
  • the embodiment of the present application determines the parity of the quantized coefficient whose parity is hidden based on the parity corresponding to other quantized coefficients in the current region.
  • the above-mentioned S403 and the above-mentioned S402 have no strict sequence in the implementation process. That is to say, the above-mentioned S403 can be executed before the above-mentioned S402, can also be executed after the above-mentioned S402, and can also be executed synchronously with the above-mentioned S402.
  • the embodiments of the present application do not limit this.
  • the first quantized coefficient is a quantized coefficient obtained by parity concealing all or part of the second quantized coefficient in the current region, and the first quantized coefficient includes the part where the parity is concealed.
  • the coefficients are divided into a region for every sixteen coefficients in the decoding order starting from the first one in the lower right corner.
  • an 8 ⁇ 8 transform block uses Zig-Zag scanning, and the decoding order is from 63 to 0.
  • the transformation block can be divided into four areas, namely scan index 48-63, 32-47, 16-31, 0-15. For example, the parity of the coefficients at the four positions of indexes 48, 32, 16, and 0 is hidden.
  • the actual decoding starts from the first non-zero coefficient in decoding order, assuming that index 21 is the first non-zero coefficient in decoding order in the transform block.
  • Coefficient decoding will skip the position with an index greater than 21 and default it to zero, and decode the context model identification bits in the order of 21, 20, ..., 0.
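  • The region division and the parity-hidden positions in this 8×8 Zig-Zag example can be summarized as in the following sketch; the helper names are assumptions made only for illustration.

```cpp
// For an 8x8 transform block decoded in reverse Zig-Zag order (scan index 63
// down to 0), every sixteen consecutive scan indexes form one region, giving
// the four regions 48-63, 32-47, 16-31 and 0-15. In the example above, the
// parity of the coefficient decoded last in each region (scan indexes 48, 32,
// 16 and 0) is hidden.
int regionOfScanIndex(int scanIdx) {          // 0..63 -> region 0..3
    return scanIdx / 16;
}

bool isParityHiddenPosition(int scanIdx) {    // true at 48, 32, 16 and 0
    return (scanIdx % 16) == 0;
}
```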
  • the first quantization coefficient is the coefficient at position 21 in FIG. 8
  • the first quantization coefficient is decoded using the context model corresponding to the first quantization coefficient.
  • before decoding the first quantization coefficient, the decoder first needs to determine the context model corresponding to the first quantization coefficient.
  • Method 1: the decoder uses different context models to decode the quantization coefficients whose parity is hidden and the quantization coefficients whose parity is not hidden. That is to say, the context model corresponding to the first quantization coefficient is different from the context models corresponding to other quantization coefficients whose parity is not hidden.
  • the first quantization coefficient in the embodiment of the present application is represented by one or more identifiers.
  • identifier 1 represents the part from 0 to 3
  • identifier 2 represents the part from 3 to 6
  • identifier 3 represents the part from 6 to 9
  • identifier 4 represents the part from 9 to 12
  • identifier 5 represents the part from 12 to 15.
  • a context model may be determined for one or more of the identifiers of the first quantization coefficient, and the one or more identifiers may be decoded using that context model. That is to say, the embodiment of the present application can determine a target context model for the target quantization coefficient among the first quantization coefficients, and use the target context model to decode the target quantization coefficient among the first quantization coefficients.
  • the target quantization coefficient of the first quantization coefficient can be the part represented by identifier 1, or the part represented by any one of identifiers 2 to 5.
  • for example, the decoder can determine two context models: one context model is used to decode identifier 1, and the other context model is used to decode identifiers 2 to 5.
  • Method 2: in order to reduce the decoding complexity of the first quantization coefficient, in the embodiment of the present application, the context model corresponding to the first quantization coefficient is the same as at least one of the context models corresponding to other quantization coefficients whose parity is not hidden. That is to say, in the embodiment of the present application, existing context models are reused to decode part or all of the quantization coefficient whose parity is hidden, thereby reducing the decoding complexity of the quantization coefficient whose parity is hidden.
  • the quantization coefficient can be divided into at least one part.
  • for example, the part from 0 to 3 represented by identifier 1 of the quantization coefficient is called BR
  • the part from 4 to 15 represented by identifiers 2 to 5 of the quantization coefficient is called LR.
  • different parts of the quantization coefficient correspond to different context models. An illustrative split of an absolute value into these parts is sketched below.
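  • As an illustration of the BR/LR split described above (assuming the boundaries implied by identifiers 1 to 5, i.e. BR covers up to 3, LR covers the remainder up to 15, and anything above 15 is coded separately), an absolute level could be decomposed as follows:

```cpp
#include <algorithm>

// Decompose the absolute value of a quantization coefficient into the BR part
// (identifier 1), the LR part (identifiers 2 to 5) and the remainder above 15.
struct CoeffParts {
    int br;         // 0..3
    int lr;         // 0..12
    int remainder;  // part of the absolute value exceeding 15
};

CoeffParts splitAbsLevel(int absLevel) {
    CoeffParts p;
    p.br = std::min(absLevel, 3);
    p.lr = std::min(absLevel - p.br, 12);
    p.remainder = absLevel - p.br - p.lr;
    return p;
}
```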
  • the above S403 includes the following steps:
  • S403-A: determine the target quantization coefficient in the first quantization coefficient, where the target quantization coefficient is part or all of the first quantization coefficient; S403-B: determine the target context model corresponding to the target quantization coefficient of the first quantization coefficient from the multiple context models corresponding to the target quantization coefficient;
  • S403-C: use the target context model corresponding to the target quantization coefficient to decode the target quantization coefficient of the first quantization coefficient to obtain the first quantization coefficient.
  • context models corresponding to other quantized coefficients whose parity is not hidden are reused to decode the first quantized coefficient whose parity is hidden. Therefore, when decoding the target quantized coefficient in the first quantized coefficient, the context model corresponding to the target quantized coefficient of other quantized coefficients whose parity is not hidden is used to decode the target quantized coefficient in the first quantized coefficient.
  • for example, R context models are created for the BR part and Q context models are created for the LR part, where R and Q are both positive integers.
  • if the target quantization coefficient is BR, assuming that BR corresponds to R context models, the decoder selects a context model from the R context models as the target context model corresponding to BR, and uses the target context model corresponding to BR to decode the BR in the first quantization coefficient.
  • similarly, if the target quantization coefficient is LR, the decoder selects a context model from the Q context models as the target context model corresponding to LR, and uses the target context model corresponding to LR to decode the LR in the first quantization coefficient.
  • the implementation methods of determining the target context model corresponding to the target quantization coefficient of the first quantization coefficient from the multiple context models corresponding to the target quantization coefficient include but are not limited to the following:
  • Method 1: determine any one of the multiple context models corresponding to the target quantization coefficient as the target context model corresponding to the target quantization coefficient of the first quantization coefficient.
  • Method 2: the above S403-B includes the following steps S403-B1 and S403-B2:
  • S403-B1: determine the index of the target context model corresponding to the target quantization coefficient; S403-B2: determine the target context model corresponding to the target quantization coefficient from the multiple context models corresponding to the target quantization coefficient according to the index.
  • each of the multiple context models corresponding to the target quantization coefficient has an index, so that the decoder can determine the index of the target context model corresponding to the target quantization coefficient and then, based on the index, select the target context model corresponding to the target quantization coefficient from the multiple context models corresponding to the target quantization coefficient.
  • the embodiments of the present application do not limit the specific implementation manner of determining the index of the context model corresponding to the target quantization coefficient.
  • the first quantization coefficient includes BR, and if the first quantization coefficient is greater than 3, the first quantization coefficient also includes LR, where the way of determining the index of the target context model corresponding to BR is different from the way of determining the index of the target context model corresponding to LR. The process of determining the index of the target context model corresponding to the BR and the process of determining the index of the target context model corresponding to the LR are introduced below respectively.
  • the above S403-B1 includes the following steps:
  • S403-B11 Determine the index of the target context model corresponding to the BR of the first quantized coefficient based on the sum of BRs of the decoded quantized coefficients around the first quantized coefficient.
  • the BR of the first quantization coefficient is related to the BR of the decoded quantization coefficients around the first quantization coefficient. Therefore, in this embodiment of the present application, the index of the target context model corresponding to the BR in the first quantization coefficient is determined according to the sum of the BRs of the decoded quantization coefficients around the first quantization coefficient.
  • the sum of BRs of the decoded quantized coefficients around the first quantized coefficient is determined as the index of the target context model corresponding to the BR in the first quantized coefficient.
  • the sum of BRs of the decoded quantized coefficients around the first quantized coefficient is operated to obtain the index of the target context model corresponding to the BR in the first quantized coefficient.
  • the above S403-B11 includes the following steps:
  • for example, the method for determining the index of the target context model corresponding to the BR in the first quantization coefficient based on the sum of the BRs of the decoded quantization coefficients around the first quantization coefficient may be as follows: the sum of the BRs of the decoded quantization coefficients around the first quantization coefficient is added to the first preset value to obtain a first sum value; the first sum value is divided by the first value to obtain a first ratio; and the index of the target context model corresponding to the BR in the first quantization coefficient is determined according to the first ratio.
  • the decoded quantized coefficients surrounding the first quantized coefficient can be understood as J decoded quantized coefficients located around the first coefficient in the scanning order.
  • the five decoded quantization coefficients around the first quantization coefficient are shown in Figure 9A to Figure 9C
  • the first quantization coefficient is the black part in the figure
  • the five decoded quantization coefficients around the first quantization coefficient are the gray part. That is to say, when determining the index of the target context model corresponding to the BR, reference is made to the five decoded quantization coefficients surrounding the first quantization coefficient as shown in Figures 9A to 9C.
  • the embodiments of the present application do not limit the specific values of the above-mentioned first preset value and first numerical value.
  • the first preset value is 1.
  • the first preset value may be 2.
  • the first value is 1.
  • the first value is 2.
  • the embodiment of the present application does not limit the specific method of determining the index of the target context model corresponding to the BR in the first quantization coefficient based on the first ratio in S403-B113 above.
  • Example 1 Determine the first ratio as the index of the target context model corresponding to the BR in the first quantization coefficient.
  • Example 2 Process the first ratio to obtain the index of the target context model corresponding to the BR in the first quantization coefficient.
  • the embodiments of this application do not limit the specific manner in which the first ratio is processed.
  • the above S403-B113 includes:
  • for example, the first ratio is compared with the first preset threshold, the minimum value of the first ratio and the first preset threshold is determined as the second value, and then the index of the target context model corresponding to the BR in the first quantization coefficient is determined based on the second value.
  • the embodiment of the present application does not limit the specific method of determining the index of the target context model corresponding to the BR based on the second value in the above S403-B113-2.
  • the second value determined above is determined as the index of the target context model corresponding to the BR in the first quantization coefficient.
  • for example, the offset index offset BR of the BR is determined, and the sum of the second value and the offset index of the BR is determined as the index of the target context model corresponding to the BR in the first quantization coefficient.
  • the index of the target context model corresponding to the BR in the first quantization coefficient can be determined according to the following formula (5):
  • Index BR = offset BR + min(c1, ((Σ BR) + a1) >> b1)   (5)
  • Index BR is the index of the target context model corresponding to BR in the first quantization coefficient
  • offset BR is the offset index of BR
  • Σ BR is the sum of the BRs of the decoded quantization coefficients around the first quantization coefficient
  • a1 is the first preset value
  • b1 is half of the first value
  • c1 is the first preset threshold
  • the embodiments of the present application do not limit the specific values of the above-mentioned first preset value, first numerical value and first preset threshold.
  • Index BR = offset BR + min(4, ((Σ BR) + 1) >> 1)   (6)
  • Index BR = offset BR + min(4, ((Σ BR) + 2) >> 2)   (7)
  • in some embodiments, the determination process of the context model index is adjusted to select a target context model suitable for the first quantization coefficient whose parity is hidden; specifically, the sum of the BRs of the quantization coefficients surrounding the first quantization coefficient is divided by 4 to obtain a first ratio, and the index of the target context model corresponding to the BR in the first quantization coefficient is determined according to the first ratio.
  • in this case, the first preset value a1 is adjusted from 1 in formula (6) to 2 in formula (7) to achieve rounding.
  • the embodiment of the present application does not limit the specific method of determining the offset index offset BR of the BR.
  • the offset index of the BR is determined according to at least one of the position of the first quantization coefficient in the current block, the size of the current block, the scanning order of the current block, and the color component of the current block.
  • the offset index of the BR is the first threshold.
  • the offset index of the BR is the second threshold.
  • the first threshold is 0.
  • the first threshold is 10.
  • the second threshold is 5.
  • the second threshold is 15.
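  • A minimal sketch of the BR context index derivation of formulas (6) and (7) is given below. The neighbour sum and the offset are assumed to have been computed already (for example from the five decoded neighbouring coefficients of Figures 9A to 9C and from the position, block size, scan order or colour component, respectively), and the function names are illustrative only.

```cpp
#include <algorithm>

// Formula (6): a1 = 1, right shift by 1 (divide by 2), c1 = 4.
int contextIndexBr(int neighborBrSum, int offsetBr) {
    return offsetBr + std::min(4, (neighborBrSum + 1) >> 1);
}

// Formula (7): divide by 4 instead, with a1 = 2 for rounding.
int contextIndexBrDiv4(int neighborBrSum, int offsetBr) {
    return offsetBr + std::min(4, (neighborBrSum + 2) >> 2);
}
```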
  • the specific process of determining the index of the target context model corresponding to the BR in the first quantization coefficient is introduced above.
  • this embodiment also needs to determine the index of the target context model corresponding to the LR in the first quantized coefficient.
  • the above S403-B1 includes the following steps: S403-B21: determine the index of the target context model corresponding to the LR of the first quantization coefficient based on the sum of the BRs and LRs of the decoded quantization coefficients around the first quantization coefficient.
  • the LR of the first quantization coefficient is related to the BR and LR of the decoded quantization coefficients around the first quantization coefficient. Therefore, in the embodiment of the present application, the index of the target context model corresponding to the LR in the first quantization coefficient is determined according to the sum of the BRs and LRs of the decoded quantization coefficients around the first quantization coefficient.
  • the sum of BR and LR of the decoded quantized coefficients around the first quantized coefficient is determined as the index of the target context model corresponding to the LR in the first quantized coefficient.
  • the sum of BR and LR of the decoded quantized coefficients around the first quantized coefficient is operated to obtain the index of the target context model corresponding to the LR in the first quantized coefficient.
  • the above S403-B21 includes the following steps:
  • for example, the method for determining the index of the target context model corresponding to the LR in the first quantization coefficient may be as follows: the sum of the BRs and LRs of the decoded quantization coefficients around the first quantization coefficient is added to the second preset value to obtain a second sum value; the second sum value is divided by the third value to obtain a second ratio; and the index of the target context model corresponding to the LR in the first quantization coefficient is determined according to the second ratio.
  • the decoded quantized coefficients surrounding the first quantized coefficient can be understood as J decoded quantized coefficients located around the first coefficient in the scanning order.
  • the three decoded quantization coefficients around the first quantization coefficient are as shown in Figure 10A to Figure 10C
  • the first quantization coefficient is the black part in the figure
  • the three decoded quantization coefficients around the first quantization coefficient are the gray part. That is to say, when determining the index of the target context model corresponding to the LR, reference is made to the three decoded quantization coefficients surrounding the first quantization coefficient as shown in Figures 10A to 10C.
  • the second preset value is 1.
  • the second preset value may be 2.
  • the third value is 1.
  • the third value is 2.
  • the embodiment of the present application does not limit the specific manner of determining the index of the target context model corresponding to the LR in the first quantization coefficient based on the second ratio in S403-B213 above.
  • Example 1 Determine the second ratio as the index of the target context model corresponding to the LR in the first quantization coefficient.
  • Example 2 Process the second ratio to obtain the index of the target context model corresponding to the LR in the first quantization coefficient.
  • the embodiments of this application do not limit the specific manner in which the second ratio is processed.
  • in some embodiments, the above S403-B213 includes:
  • for example, the second ratio is compared with the second preset threshold, the minimum value of the second ratio and the second preset threshold is determined as a fourth value, and then the index of the target context model corresponding to the LR in the first quantization coefficient is determined based on the fourth value.
  • the embodiment of the present application does not limit the specific method of determining the index of the target context model corresponding to the LR based on the fourth value in the above S403-B213-2.
  • the fourth value determined above is determined as the index of the target context model corresponding to the LR in the first quantization coefficient.
  • for example, determine the offset index offset LR of the LR, and determine the sum of the fourth value and the offset index of the LR as the index of the target context model corresponding to the LR in the first quantization coefficient.
  • the index of the target context model corresponding to the LR in the first quantization coefficient can be determined according to the following formula (8):
  • Index LR = offset LR + min(c2, ((Σ(BR + LR)) + a2) >> b2)   (8)
  • Index LR is the index of the target context model corresponding to LR in the first quantization coefficient
  • offset LR is the offset index of LR
  • Σ(BR + LR) is the sum of the parts less than 16 of the quantization coefficients surrounding the first quantization coefficient, that is, the sum of the BRs and LRs of the quantization coefficients surrounding the first quantization coefficient
  • a2 is the second preset value
  • b2 is half of the third value
  • c2 is the second preset threshold.
  • the embodiments of the present application do not limit the specific values of the above-mentioned second preset value, third value and second preset threshold.
  • Index LR = offset LR + min(6, ((Σ(BR + LR)) + 1) >> 1)   (9)
  • Index LR = offset LR + min(6, ((Σ(BR + LR)) + 2) >> 2)   (10)
  • in some embodiments, the determination process of the context model index is adjusted to select a target context model suitable for the first quantization coefficient whose parity is hidden; specifically, the sum of the parts less than 16 of the quantization coefficients surrounding the first quantization coefficient is divided by 4 to obtain a second ratio, and the index of the target context model corresponding to the LR in the first quantization coefficient is determined according to the second ratio.
  • the embodiment of the present application does not limit the specific method of determining the offset index offset LR of the LR.
  • the offset index of the LR is determined according to at least one of the position of the first quantization coefficient in the current block, the size of the current block, the scanning order of the current block, and the color component of the current block.
  • the offset index of the LR is the third threshold.
  • the offset index of the LR is the fourth threshold.
  • the embodiments of the present application do not limit the specific values of the third threshold and the fourth threshold.
  • the third threshold is 0.
  • the third threshold is 14.
  • the fourth threshold is 7.
  • the fourth threshold is 21.
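  • Similarly, the LR context index of formulas (9) and (10) can be sketched as follows, where neighborBrLrSum stands for the sum of the parts less than 16 (BR plus LR) of the decoded neighbouring coefficients (for example the three gray positions of Figures 10A to 10C); the names are again illustrative only.

```cpp
#include <algorithm>

// Formula (9): a2 = 1, right shift by 1 (divide by 2), c2 = 6.
int contextIndexLr(int neighborBrLrSum, int offsetLr) {
    return offsetLr + std::min(6, (neighborBrLrSum + 1) >> 1);
}

// Formula (10): divide by 4 instead, with a2 = 2 for rounding.
int contextIndexLrDiv4(int neighborBrLrSum, int offsetLr) {
    return offsetLr + std::min(6, (neighborBrLrSum + 2) >> 2);
}
```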
  • in one example, the context model index values of the first quantization coefficient are calculated as shown in Table 4, where Σ BR is the sum over the 5 decoded quantization coefficients around the first quantization coefficient in the several scanning modes of Figures 9A to 9C, and Σ(BR + LR) is the sum over the 3 decoded quantization coefficients around the first quantization coefficient in the several scanning modes of Figures 10A to 10C.
  • after the index of the target context model corresponding to the target quantization coefficient in the first quantization coefficient is determined, the target context model corresponding to the target quantization coefficient of the first quantization coefficient is determined from the multiple context models corresponding to the target quantization coefficient according to the index.
  • for example, according to the index of the target context model corresponding to the BR in the first quantization coefficient, the target context model corresponding to the BR in the first quantization coefficient is obtained from the multiple context models corresponding to the BR parts of other quantization coefficients whose parity is not hidden; and according to the index of the target context model corresponding to the LR in the first quantization coefficient, the target context model corresponding to the LR in the first quantization coefficient is obtained from the multiple context models corresponding to the LR parts of other quantization coefficients whose parity is not hidden.
  • the above multiple context models include context models with different indexes under multiple QP segments (for example, 4 QP segments) and 2 components (such as the luminance component and the chrominance component).
  • in some embodiments, determining the target context model corresponding to the target quantization coefficient from the multiple context models corresponding to the target quantization coefficient according to the index includes: selecting at least one context model from the multiple context models corresponding to the target quantization coefficient according to at least one of the quantization parameter corresponding to the first quantization coefficient, the transform type, the transform block size and the color component; and then determining the context model corresponding to the index among the at least one context model as the target context model corresponding to the target quantization coefficient.
  • for example, assume that the index corresponding to the BR (i.e., identifier 1) in the first quantization coefficient is index 1, the QP segment corresponding to the first quantization coefficient is QP segment 1, and the first quantization coefficient is a quantization coefficient under the luminance component. Assume that there are S context models corresponding to identifier 1. First, T context models under QP segment 1 and the luminance component are selected from these S context models. Then, the context model corresponding to index 1 among these T context models is determined as the target context model corresponding to the BR in the first quantization coefficient, and the target context model is used to decode the BR in the first quantization coefficient.
  • for another example, assume that the index corresponding to the LR (i.e., identifiers 2 to 5) in the first quantization coefficient is index 2, the QP segment corresponding to the first quantization coefficient is QP segment 1, and the first quantization coefficient is a quantization coefficient under the luminance component. From the context models corresponding to identifiers 2 to 5, the context models under QP segment 1 and the luminance component are first selected; then the context model corresponding to index 2 among them is determined as the target context model corresponding to identifiers 2 to 5 in the first quantization coefficient, and the target context model is used to decode the LR in the first quantization coefficient.
  • in some embodiments, before determining the target context model from the multiple context models, the decoder needs to initialize the multiple context models, and then determines the target context model from the multiple initialized context models.
  • multiple context models corresponding to BR are initialized, and then a target context model corresponding to BR is selected from the multiple initialized context models.
  • multiple context models corresponding to LR are initialized, and then the target context model corresponding to LR is selected from the multiple initialized context models.
  • in order to enable the probability values in the context models corresponding to the parity-hidden coefficients to converge faster, each model can have a corresponding set of initial values based on the probability of occurrence of each symbol.
  • the probability of each context model is calculated based on the number of symbols in the model, the probability of each symbol appearing, and the cumulative distribution function (Cumulative Distribution Function).
  • for example, the probability that a symbol in the context model is less than a certain value is used; this probability value is represented by a 16-bit integer through normalization.
  • the methods for initializing multiple context models include but are not limited to the following:
  • Method 1: use equal probability values to initialize the multiple context models. For example, in a 4-symbol context model, the probability of occurrence of each of the symbols 0, 1, 2 and 3 is 0.25; then the probability of a symbol less than 1 is 0.25, the probability of a symbol less than 2 is 0.5, the probability of a symbol less than 3 is 0.75, and the probability of a symbol less than 4 is 1. After scaling the probabilities to 16 bits, the probability of a symbol less than 1 is 8192, the probability of a symbol less than 2 is 16384, the probability of a symbol less than 3 is 24576, and the probability of a symbol less than 4 is 32768. These four integer values constitute the initial probability values of the context model.
  • Method 2 Initialize multiple context models using convergence probability values, where the convergence probability value is the convergence probability value corresponding to the context model when the context model is used to encode the test video. For example, first, an initial probability value such as equal probability is set for these context models. Then, a series of test videos are encoded using quantization coefficients corresponding to certain target bit rates, and the final convergence probability values of these context models are used as the initial values of these context models. Repeat the above steps to obtain a new convergence value, and select the appropriate convergence value as the final initial value.
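  • The equal-probability initialization of Method 1 can be sketched as follows; the cumulative values are scaled so that 32768 represents probability 1, reproducing the 8192/16384/24576/32768 example above. This is an illustration only, not the initialization table of any particular codec.

```cpp
#include <cstdint>
#include <vector>

// Equal-probability initialization of a context model with nSymbols symbols.
// Entry k-1 holds the scaled cumulative probability P(symbol < k).
std::vector<uint16_t> initEqualProbabilityCdf(int nSymbols) {
    std::vector<uint16_t> cdf(nSymbols);
    for (int k = 1; k <= nSymbols; ++k) {
        cdf[k - 1] = static_cast<uint16_t>((32768 * k) / nSymbols);
    }
    return cdf;  // e.g. {8192, 16384, 24576, 32768} for nSymbols = 4
}
```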
  • the target context model is used to decode the target quantization coefficient to obtain the first quantization coefficient.
  • for example, the target context model corresponding to the BR in the first quantization coefficient is obtained, and the target context model corresponding to the BR is used to decode the BR in the first quantization coefficient to obtain the decoded BR; the first quantization coefficient is then determined based on the decoded BR.
  • the decoded BR is determined as the first quantization coefficient if the first quantization coefficient only includes the BR part and does not include the LR part.
  • if the first quantization coefficient also includes the LR part, the decoder uses the target context model corresponding to the LR to decode the LR in the first quantization coefficient to obtain the decoded LR, and determines the first quantization coefficient based on the decoded BR and the decoded LR.
  • the sum of the decoded BR and the decoded LR is determined as the absolute value of the first quantized coefficient.
  • if the first quantization coefficient also includes a part greater than 15, the part greater than 15 in the first quantization coefficient is decoded, and then the sum of the decoded BR, the decoded LR and the decoded value of the part greater than 15 is determined as the absolute value of the first quantization coefficient.
  • the decoding process of the first quantization coefficient is:
  • Decode identifier 1. Here a is used to represent the value of identifier 1, and the value of a is 0 to 3. If the decoded identifier 1 is 3, decode identifier 2; otherwise, identifier 2 defaults to 0.
  • Here b is used to represent the value of identifier 2, and the value of b is 0 to 3. If the decoded identifier 2 is 3, decode identifier 3; otherwise, identifier 3 defaults to 0.
  • Here c is used to represent the value of identifier 3, and the value of c is 0 to 3. If the decoded identifier 3 is 3, decode identifier 4; otherwise, identifier 4 defaults to 0.
  • Here d is used to represent the value of identifier 4, and the value of d is 0 to 3. If the decoded identifier 4 is 3, decode identifier 5; otherwise, identifier 5 defaults to 0.
  • Here e is used to represent the value of identifier 5, and the value of e is 0 to 3.
  • the absolute value of the first quantization coefficient can be recovered as:
  • Abslevel = a + b + c + d + e + remainder
  • Abslevel is the absolute value of the first quantization coefficient
  • remainder is the part of the absolute value of the first quantization coefficient that exceeds 15.
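  • Putting the identifier-by-identifier decoding together, a sketch of the recovery of Abslevel is given below. The two callbacks stand in for the entropy-decoder calls (the context-coded symbol for each identifier, and the decoding of the remainder exceeding 15); they are assumed helpers, not an actual codec API, and the assumption that the remainder is decoded only when all five identifiers reach 3 is made for this sketch.

```cpp
#include <functional>

// Decode the absolute value of one quantization coefficient from its
// identifiers 1 to 5 and, when needed, the remainder above 15.
int decodeAbsLevel(const std::function<int(int /*identifierIdx*/)>& decodeIdentifier,
                   const std::function<int()>& decodeRemainder) {
    int values[5] = {0, 0, 0, 0, 0};       // a, b, c, d, e, each in 0..3
    for (int i = 0; i < 5; ++i) {
        values[i] = decodeIdentifier(i + 1);
        if (values[i] < 3) break;          // remaining identifiers default to 0
    }
    int sum = values[0] + values[1] + values[2] + values[3] + values[4];
    int remainder = (sum == 15) ? decodeRemainder() : 0;   // part above 15
    return sum + remainder;                // Abslevel = a + b + c + d + e + remainder
}
```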
  • the first quantization coefficient is determined according to the above S403, the parity of the quantization coefficient whose parity is hidden among the first quantization coefficients is determined according to the above S402, and then the following steps of S404 are performed.
  • in order to reduce the coding cost, the encoding end performs parity hiding on all or part of the second quantization coefficient in the current area to obtain the first quantization coefficient. Since the parity of all or part of the second quantization coefficient is hidden, it is indicated by the P quantization coefficients in the current region.
  • when decoding the current area, the decoder first decodes the code stream to obtain the P quantization coefficients in the current area and, based on these P quantization coefficients, determines the parity of the quantization coefficient whose parity is hidden in the first quantization coefficient; then it determines the target context model corresponding to the first quantization coefficient, and uses the target context model to decode the first quantization coefficient based on context coding to obtain the first quantization coefficient. Finally, the decoder can reconstruct the second quantization coefficient with parity based on the decoded first quantization coefficient and the parity of the quantization coefficient whose parity is hidden in the first quantization coefficient, that is, realize the reconstruction of the parity-hidden coefficient.
  • the decoder uses different ways to reconstruct the second quantized coefficient according to the different parities of the quantized coefficients whose parity is hidden.
  • Method 1: if the quantization coefficient whose parity is hidden is an odd number, the first operation method is used to operate on the quantization coefficient whose parity is hidden to obtain the first operation result, and the second quantization coefficient is obtained according to the first operation result and the first quantization coefficient.
  • the first operation mode used by the decoding end corresponds to the third operation mode used by the encoding end to determine the first quantization coefficient.
  • the embodiments of the present application do not limit the specific forms of the above-mentioned first operation method and the third operation method, where the first operation method can be understood as the inverse operation of the third operation method.
  • for example, the third operation method adopted by the encoding end is to add one to the value of the part of the second quantization coefficient whose parity is to be hidden and divide it by two.
  • correspondingly, the first operation method adopted by the decoder is to multiply the quantization coefficient whose parity is hidden in the first quantization coefficient by two and subtract one.
  • Method 2: if the quantization coefficient whose parity is hidden is an even number, the second operation method is used to operate on the quantization coefficient whose parity is hidden to obtain the first operation result, and the second quantization coefficient is obtained according to the first operation result and the first quantization coefficient.
  • the second operation mode used by the decoding end corresponds to the fourth operation mode used by the encoding end to determine the first quantization coefficient.
  • the fourth operation method adopted by the encoding end is to divide the value of the parity to be hidden part in the second quantization coefficient by two.
  • the second operation mode adopted by the decoder is to multiply the quantized coefficient whose parity is hidden in the first quantized coefficient by two.
  • the decoder can determine the second quantization coefficient in the following manner, that is, the above S404 includes the following steps:
  • S404-A1: use a preset operation method to operate on the quantization coefficient whose parity is hidden in the first quantization coefficient to obtain a first operation result; S404-A2: obtain a second operation result according to the parity of the quantization coefficient whose parity is hidden and the first operation result; S404-A3: obtain the second quantization coefficient according to the second operation result and the first quantization coefficient.
  • for example, the encoding end performs parity concealment on the part of the second quantization coefficient exceeding 10 to obtain the first quantization coefficient 25, where the part 10 of the first quantization coefficient is not parity concealed and the remaining part 15 is the quantization coefficient whose parity is concealed.
  • the decoding end first uses the preset operation method to operate on the part 15 whose parity is hidden to obtain the first operation result; then the first operation result is processed according to the parity of the part whose parity is hidden, for example by adding a value to or subtracting a value from the first operation result, specifically corresponding to the parity concealment method of the encoding end, to obtain the second operation result. Finally, the second quantization coefficient is obtained from the second operation result and the first quantization coefficient.
  • the above-mentioned preset operation method includes multiplying the quantized coefficient with hidden parity by two, that is, multiplying the value of the quantized coefficient with hidden parity in the first quantized coefficient by 2 to obtain the first operation result.
  • a second operation result is obtained based on the parity of the quantization coefficient whose parity is hidden and the above-mentioned first operation result. For example, the sum of the first operation result and the parity value is determined as the value of the second operation result, where, if the quantization coefficient whose parity is hidden is an odd number, the parity value is 1; if the quantization coefficient whose parity is hidden is an even number, the parity value is 0.
  • the second quantization coefficient is obtained based on the second operation result and the first quantization coefficient.
  • in one example, the encoding end performs parity concealment on all of the second quantization coefficient to obtain the first quantization coefficient, that is to say, the above parity-hidden quantization coefficient is the entire first quantization coefficient; in this case, the above-mentioned second operation result is determined as the second quantization coefficient.
  • in another example, the encoding end performs parity concealment on part of the second quantization coefficient to obtain the first quantization coefficient, that is to say, the above-mentioned first quantization coefficient also includes a part whose parity is not concealed; in this case, the sum of the part whose parity is not hidden and the second operation result is determined as the second quantization coefficient.
  • the decoder can determine the second quantization coefficient according to the following formula (12):
  • C = n + (qIdx - n) × 2 + parity   (12)
  • C is the second quantization coefficient
  • qIdx is the first quantization coefficient, and n is the part of the first quantization coefficient whose parity is not hidden
  • (qIdx - n) is the quantization coefficient whose parity is hidden in the first quantization coefficient
  • (qIdx - n) × 2 is the first operation result
  • (qIdx - n) × 2 + parity is the second operation result.
  • the encoding end performs parity concealment on the part of the second quantization coefficient greater than n, and does not perform parity concealment on the part less than n.
  • after the decoder decodes the first quantization coefficient qIdx, if the first quantization coefficient qIdx is less than n, it means that the first quantization coefficient does not include a quantization coefficient whose parity is hidden, and the first quantization coefficient qIdx is determined as the second quantization coefficient C.
  • if the first quantization coefficient qIdx is greater than n, it means that the first quantization coefficient includes a quantization coefficient whose parity is hidden; the parity-hidden quantization coefficient (qIdx - n) in the first quantization coefficient is processed to obtain the second operation result (qIdx - n) × 2 + parity, and then the sum of the second operation result and the part n of the first quantization coefficient whose parity is not hidden is determined as the second quantization coefficient. A sketch of this reconstruction is given below.
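  • The following is a sketch of this reconstruction together with a matching encoder-side hiding step. It follows the convention of formula (12) (hide the parity of the part of the level that exceeds n); the alternative odd/even operation methods described in Method 1 and Method 2 above are separate variants and are not shown. All names are illustrative assumptions.

```cpp
// Decoder side: formula (12). qIdx is the decoded first quantization
// coefficient, n is the part whose parity is not hidden, and parity (0 or 1)
// is the parity inferred from the P coefficients of the current region.
int reconstructSecondCoefficient(int qIdx, int n, int parity) {
    if (qIdx < n) {
        return qIdx;                        // no parity-hidden part present
    }
    int hidden = qIdx - n;                  // parity-hidden quantization coefficient
    int secondOp = hidden * 2 + parity;     // second operation result
    return n + secondOp;                    // add back the un-hidden part n
}

// Matching encoder-side sketch: hide the parity of the part of the absolute
// level that exceeds n, and remember the parity so that it can be conveyed
// through the P coefficients of the region.
struct HiddenLevel { int qIdx; int parity; };

HiddenLevel hideParity(int absLevel, int n) {
    if (absLevel < n) return { absLevel, 0 };
    return { n + ((absLevel - n) >> 1), (absLevel - n) & 1 };
}
```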
  • in the same manner, the decoding end can determine the second quantization coefficients corresponding to the first quantization coefficients of the other areas in the current block; the second quantization coefficients of each area in the current block and the other quantization coefficients whose parity is not hidden form the quantization coefficients of the current block. Next, the reconstruction value of the current block is determined based on the quantization coefficients of the current block.
  • for example, if the encoding end skips the transform step, the residual value of the current block is directly quantized.
  • correspondingly, the decoding end performs inverse quantization on the quantization coefficients of the current block to obtain the residual value of the current block.
  • the intra prediction and/or inter prediction method is used to determine the prediction value of the current block, and the prediction value of the current block and the residual value are added to obtain the reconstruction value of the current block.
  • for another example, if the encoding end does not skip the transform step, that is, the encoding end transforms the residual value of the current block to obtain the transform coefficients and quantizes the transform coefficients,
  • the decoding end correspondingly performs inverse quantization on the quantization coefficients of the current block to obtain the transform coefficients of the current block, and performs inverse transformation on the transform coefficients to obtain the residual value of the current block.
  • the intra prediction and/or inter prediction method is used to determine the prediction value of the current block, and the prediction value of the current block and the residual value are added to obtain the reconstruction value of the current block.
  • in the video decoding method provided by the embodiments of the present application, the decoder decodes the code stream to obtain P quantization coefficients in the current area, where the current area is an area in the current block that includes at least one non-zero quantization coefficient and P is a positive integer; determines, according to the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden in the first quantization coefficient, where the first quantization coefficient is the quantization coefficient obtained by hiding the parity of all or part of the second quantization coefficient in the current area; determines the target context model corresponding to the first quantization coefficient and uses the target context model to decode the first quantization coefficient based on context coding to obtain the decoded first quantization coefficient; and determines, according to the parity of the quantization coefficient whose parity is hidden in the first quantization coefficient and the first quantization coefficient, the second quantization coefficient having parity.
  • in this way, the first quantization coefficient is obtained and encoded, which can reduce the number of bits required for encoding and thus reduce the cost of video compression.
  • the embodiments of the present application re-determine the target context model for decoding the first quantization coefficient, which can achieve accurate decoding of the first quantization coefficient whose parity is hidden.
  • Figure 11 is a schematic flowchart of a video decoding method provided by an embodiment of the present application. As shown in Figure 11, the method in the embodiment of this application includes:
  • the above-mentioned at least one flag is used to indicate whether the parity of the quantized coefficient is allowed to be hidden.
  • the at least one flag includes at least one of a sequence-level flag, an image-level flag, a slice-level flag, a unit-level flag, and a block-level flag.
  • the decoder first decodes the code stream to obtain at least one of the above flags, and based on the at least one flag, determines whether the current block allows the parity of the quantization coefficient to be hidden.
  • If it is determined that the current block does not allow the parity of the quantization coefficient to be hidden, the following steps S602 and S609 are performed, that is, the code stream is decoded to obtain the decoded information of the current block, and the decoded information includes the quantization coefficients of the current block.
  • At this time, since the quantization coefficients in the current block have not been parity hidden, the subsequent inverse quantization process is performed directly.
  • the decoded information of the current block includes the quantization coefficient in the current block.
  • For example, the decoding end first decodes identifiers 1 to 5, then decodes the part of each quantization coefficient whose absolute value exceeds 15, in the scanning order from the last non-zero coefficient to the upper left corner of the transform block, and finally obtains each quantization coefficient in the current block.
  • the embodiment of the present application does not limit the method of dividing the area of the current block.
  • For example, the current block is divided into N areas according to the decoding scanning order, or the current block is divided into N areas according to the spatial positions of the pixels in the current block.
  • the sizes of the above-mentioned N areas may be the same or different, and the embodiments of the present application do not limit this.
  • the current area is an area to be decoded among the N areas of the current block.
  • the decoded information of the current block includes the quantized coefficients in the current block, so that the P quantized coefficients included in the current area can be obtained from the decoded information of the current block.
  • the above-mentioned P quantization coefficients are all quantization coefficients of the current region.
  • the above-mentioned P quantization coefficients are partial quantization coefficients of the current region.
  • The set condition includes at least one of the following: the preset condition and the parity hiding technology enabling condition.
  • In the embodiment of the present application, use of the parity concealment technology proposed in this application is made conditional. Specifically, if the current area meets the set conditions, it means that the current area can obtain significant beneficial effects by using the parity hiding technology provided by the embodiment of the present application, and at this time the following steps S606 to S609 are performed. If the current area does not meet the set conditions, it means that the parity hiding technology provided by the embodiment of the present application cannot achieve significant beneficial effects in the current area, and at this time the following step S610 is performed.
  • different parity concealment technology enabling conditions may be preset based on at least one of quantization parameters, transform types, transform block sizes, scan types, and color components.
  • For example, the parity concealment technology is enabled if the quantization parameter is greater than or equal to a certain preset value; if the quantization parameter is less than the preset value, the parity concealment technology is not enabled.
  • For example, if the transformation type is a preset transformation type, the parity hiding technology is enabled; if the transformation type is not the preset transformation type, the parity hiding technology is not enabled.
  • For example, if the transform type of the current block is the first transform type, the parity hiding technology is not enabled, where the first transform type is used to indicate that at least one direction of the current block skips the transform; for details, refer to Table 2 above.
  • For example, if the transform block size is greater than or equal to a preset size, the parity concealment technology is enabled; if the transform block size is smaller than the preset size, the parity concealment technology is not enabled.
  • For example, if the color component is the first component, the parity concealment technology is not enabled, where the first component is the chrominance component. That is to say, under the luminance component the parity concealment technology is enabled, and under the chrominance component the parity concealment technology is not enabled.
  • For example, if the transform block uses a preset scan type, the parity concealment technology is enabled; if the transform block does not use the preset scan type, the parity concealment technology is not enabled.
  • This application does not limit the preset scan type.
  • the preset scan type is ZigZag scan or diagonal scan.
  • the preset conditions are introduced below.
  • the above preset conditions include at least one of the following conditions:
  • Condition 1: The number of non-zero quantization coefficients in the current area is greater than the first preset value.
  • Condition 2: In the current area, the distance between the first non-zero quantization coefficient and the last non-zero quantization coefficient in the decoding scanning order is greater than the second preset value.
  • Condition 3: In the current area, the distance between the first non-zero quantization coefficient and the last quantization coefficient in the decoding scanning order is greater than the third preset value.
  • Condition 4: In the current area, the sum of the absolute values of the non-zero quantization coefficients is greater than the fourth preset value.
  • Condition 5: The color component of the current block is the second component, where the second component is optionally the luminance component.
  • Condition 6: The transformation type of the current block is not the first transformation type, where the first transformation type is used to indicate that the transform is skipped in at least one direction of the current block.
  • the above six conditions can also be combined with each other to form new constraints.
  • the embodiments of the present application do not limit the specific values of the above-mentioned first to fourth preset values, as long as the first to fourth preset values are all positive integers.
  • At least one of the first preset value, the second preset value, the third preset value and the fourth preset value is a fixed value.
  • At least one of the first preset value, the second preset value, the third preset value and the fourth preset value is a non-fixed value, that is, a numerical value determined by the encoding end based on the current encoding information.
  • In this case, the encoding end writes the non-fixed value into the code stream.
  • the decoder may first determine that the first quantization coefficient allows the parity of the quantization coefficient to be hidden based on at least one of the quantization parameter, transform type, transform block size, and color component corresponding to the first quantization coefficient. Then determine whether the current area meets the above preset conditions.
  • When determining that the current area satisfies the above preset conditions, the decoder may determine whether the first quantization coefficient satisfies the parity concealment technology enabling condition.
  • the above preset conditions and the above parity hiding technology enabling conditions can be used alone or in combination with each other.
  • S606. According to the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden is determined.
  • For example, the parity of the quantization coefficient whose parity is hidden is determined based on the parity of the sum of the first absolute values of the P quantization coefficients.
  • For another example, the parity of the quantization coefficient whose parity is hidden is determined based on the parity of the number of target quantization coefficients among the P quantization coefficients.
  • the above-mentioned target quantization coefficient is any one of the non-zero quantization coefficients among the P quantization coefficients, the non-zero quantization coefficients with even values, the quantization coefficients with even values, and the quantization coefficients with odd values.
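  • The two derivations above can be sketched as follows; treating the "first absolute value" as the part of each absolute value not exceeding 15 and taking the non-zero coefficients as the target coefficients are assumptions made only for illustration.

```python
def parity_from_abs_sum(coeffs, cap=15):
    """Parity of the sum of the first absolute values of the P quantization
    coefficients (here, the part of each absolute value not exceeding cap)."""
    return (sum(min(abs(c), cap) for c in coeffs)) & 1

def parity_from_target_count(coeffs):
    """Parity of the number of target quantization coefficients, here taking
    the non-zero coefficients among the P coefficients as the targets."""
    return sum(1 for c in coeffs if c != 0) & 1
```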
  • S607. Determine the target context model corresponding to the first quantization coefficient, and use the target context model to decode the first quantization coefficient based on context coding to obtain the decoded first quantization coefficient.
  • Optionally, the target context model corresponding to the first quantization coefficient is the same as at least one of the context models corresponding to other quantization coefficients whose parity is not hidden.
  • In this way, the quantization coefficients whose parity is hidden in each area of the current block are determined, and combined with the decoded other quantization coefficients whose parity is not hidden in the current block, the quantization coefficients of the current block are obtained.
  • S610 Determine the quantization coefficient of the current block according to the decoded information of the current block.
  • the quantization coefficient decoded in the code stream is used as the final quantization coefficient of the current block.
  • After the quantization coefficients of the current block are determined, the reconstruction value of the current block may be determined in the following two ways.
  • Method 1: If the encoding end skips the transformation step, the residual value of the current block is directly quantized. Correspondingly, the decoding end performs inverse quantization on the quantization coefficients of the current block to obtain the residual value of the current block. In addition, the intra prediction and/or inter prediction method is used to determine the prediction value of the current block, and the prediction value of the current block and the residual value are added to obtain the reconstruction value of the current block.
  • Method 2: If the encoding end does not skip the transformation step, that is, the encoding end transforms the residual value of the current block to obtain the transform coefficients and quantizes the transform coefficients.
  • the decoding end performs inverse quantization on the quantization coefficient of the current block to obtain the transform coefficient of the current block, and performs inverse transformation on the transform coefficient to obtain the residual value of the current block.
  • In addition, the intra prediction and/or inter prediction method is used to determine the prediction value of the current block, and the prediction value of the current block and the residual value are added to obtain the reconstruction value of the current block.
  • The embodiment of the present application sets conditions for the parity hiding technology. When the current area meets the conditions, it is determined that the parity of at least one quantization coefficient in the current area is hidden, and the first quantization coefficient in the current area is then decoded using the parity hiding technology proposed in this application, which improves decoding accuracy.
  • Table 5 above adds a flag bit to the sequence header to control whether to enable parity hiding technology for the current sequence.
  • enable_parityhiding equal to 1 indicates that the parity hiding technology is enabled.
  • enable_parityhiding equal to 0 indicates that the parity hiding technology is disabled. If this syntax element does not appear, its default value is 0.
  • Table 6 above shows the process of coefficient decoding.
  • In Table 6, the value of the variable enable_ph is determined based on the value of enable_parityhiding in the sequence header, as well as at least one of the currently used quantization parameter QP, the current transform block size, the color component of the current transform block, the scan type of the current transform block, and the transform type of the current transform block; the decoding end then determines whether to use the parity concealment technology based on the value of the variable enable_ph.
  • SBBSIZE represents the number of locations included in each region
  • PHTHRESH is the threshold for whether to enable the parity hiding technology in the current region. For example, SBBSIZE and PHTHRESH are 16 and 3 respectively.
  • For example, the conditions for enabling the parity hiding technology include: enable_parityhiding is equal to 1, the number of non-zero quantization coefficients included in the current area is greater than the threshold, the current component is the luminance component, and the transform type of the current block is a 2D type and is not IDTX among the 2D types.
  • IDTX indicates that the transformation operation is skipped in both the horizontal and vertical directions.
  • In this case, the value acquisition conditions for the isHidePar variable in the syntax table shown in Table 6 above become as shown in Table 7:
  • tx_type != IDTX indicates that the current transform type is not the identity transform, where the identity transform can be understood as a skip transform.
  • For another example, the conditions for enabling the parity hiding technology include: enable_parityhiding is equal to 1, the number of non-zero quantization coefficients included in the current area is greater than the threshold, and the transform type of the current block is a 2D type and is not IDTX among the 2D types.
  • In this case, the value acquisition conditions for the isHidePar variable in the syntax table shown in Table 6 above become as shown in Table 8:
  • the embodiments of this application do not limit the specific acquisition conditions for determining the value of the isHidePar variable, which are determined based on actual needs.
  • the above-mentioned value acquisition condition of the isHidePar variable can be understood as the preset condition described in the above-mentioned S605.
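  • As a hedged sketch of the enabling check described above (the variable names are illustrative and do not reproduce the exact syntax of Tables 6 to 8):

```python
def is_hide_par(enable_parityhiding: int, num_nonzero: int, ph_thresh: int,
                is_luma: bool, tx_is_2d: bool, tx_is_idtx: bool) -> bool:
    """Illustrative enabling condition: the sequence-level flag is on, the
    current region contains more non-zero coefficients than the threshold,
    and the transform type is a 2D type other than IDTX; the luma-only
    restriction corresponds to the Table 7 variant and is dropped in the
    Table 8 variant."""
    return (enable_parityhiding == 1
            and num_nonzero > ph_thresh
            and is_luma
            and tx_is_2d
            and not tx_is_idtx)
```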
  • Table 6 above takes as an example the process of determining the 16th quantization coefficient in the current area (i.e., the first quantization coefficient) based on the first 15 quantization coefficients (i.e., the P quantization coefficients) in the current area.
  • the decoding process shown in Table 6 above mainly includes the steps shown in Table 9:
  • Loop 1: Decode identifiers 1 to 5 of the first 15 coefficients of each region; determine whether the 16th coefficient hides parity based on the number of non-zero coefficients among the first 15 coefficients; decode the 16th coefficient.
  • Loop 2: Decode the signs of the non-zero coefficients and the parts of the coefficients greater than 15 (for a parity-hidden coefficient, decode the part greater than 30).
  • In the embodiment of the present application, when decoding the current area, the decoding end first determines whether the current area meets the set conditions, so as to determine whether decoding the current area using the parity concealment technology provided by the present application can bring significant beneficial technical effects. If so, the technical solution of this application is used to decode the current area, which improves decoding reliability.
  • The video decoding method involved in the embodiments of the present application is described above. On this basis, the video encoding method involved in the present application is described below with respect to the encoding end.
  • Figure 12 is a schematic flowchart of a video encoding method provided by an embodiment of the present application.
  • the execution subject of the embodiment of the present application may be the encoder shown in the above-mentioned Figure 1 and Figure 2.
  • the method in the embodiment of this application includes:
  • the current block is divided into one or more areas, for example, into N areas, where N is a positive integer.
  • the parity of one or more quantized coefficients in the area can be hidden based on the parity related to the quantized coefficients in the same area, for example, based on the parity of the sum of the absolute values of the quantized coefficients in the area.
  • For example, if the value of the second quantization coefficient is a1, parity hiding is performed on all or part of the second quantization coefficient to obtain the first quantization coefficient a2. Since a2 is smaller than a1, encoding a2 uses fewer bits than encoding a1, which reduces the encoding cost.
  • This embodiment of the present application does not limit the area division method of the current block.
  • At least two areas among the N areas include the same number of quantization coefficients.
  • At least two of the N regions include different numbers of quantization coefficients.
  • the specific ways of dividing the current block into N areas in S701 include but are not limited to the following:
  • Method 1: Divide the current block into N areas according to the scanning order.
  • every M non-zero quantized coefficients in the current block are divided into a region, resulting in N regions.
  • Each of these N regions includes M non-zero quantization coefficients.
  • At least one of the N regions includes one or more second quantization coefficients whose parity can be concealed.
  • If the number of non-zero quantization coefficients included in the last region is not M, the last region is kept as a separate region, or the last region and the previous region are merged into one region.
  • every K pixels in the current block are divided into a region, resulting in N regions.
  • For example, for an 8x8 transform block using the reverse ZigZag scan order, when each region is of equal size, that is, each region contains 16 coefficients, the current block is divided into 4 regions as shown in Figure 7.
  • If the number of quantization coefficients included in the last region is not K, the last region is kept as a separate region, or the last region and the previous region are merged into one region.
  • Method 2: Divide the current block into N regions according to the spatial location.
  • the N regions are sub-blocks of the current block.
  • the current block is evenly divided into N sub-blocks.
  • the size of each sub-block is 4 ⁇ 4.
  • each sub-block includes at least one non-zero quantization coefficient.
  • the method of dividing the current block into N areas may also include other methods, and the embodiment of the present application does not limit this.
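  • As a minimal sketch of the region division described above (Method 1, scanning order), assuming a fixed region size of 16 positions taken from the Figure 7 example:

```python
def split_into_regions(coeffs_in_scan_order, region_size=16, merge_short_tail=False):
    """Divide the coefficients of the current block, taken in scanning order,
    into regions of region_size positions; a short last region is either kept
    as a separate region or merged with the previous one."""
    regions = [list(coeffs_in_scan_order[i:i + region_size])
               for i in range(0, len(coeffs_in_scan_order), region_size)]
    if merge_short_tail and len(regions) > 1 and len(regions[-1]) < region_size:
        regions[-2].extend(regions.pop())
    return regions

# An 8x8 transform block scanned in reverse ZigZag order gives 64 coefficients,
# i.e. 4 regions of 16 coefficients each, as in the Figure 7 example.
```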
  • a flag may be used to indicate whether the current block is allowed to use the technology of hiding the parity of quantization coefficients provided by the embodiments of this application.
  • the technology of hiding the parity of the coefficients provided by the embodiments of the present application is also called the parity hiding technology.
  • The at least one flag set may be flags of different levels, used to indicate whether the corresponding level allows the parity of the quantization coefficient to be hidden.
  • the at least one flag includes at least one of a sequence-level flag, an image-level flag, a slice-level flag, a unit-level flag, and a block-level flag.
  • the above-mentioned at least one flag includes a sequence-level flag, which is used to indicate whether the current sequence allows the parity of the quantization coefficient to be hidden.
  • the sequence-level flag may be located in the sequence header.
  • the above-mentioned at least one flag includes a picture-level flag, which is used to indicate whether the current picture allows the parity of the quantization coefficient to be hidden.
  • the image-level flag may be located in the picture header.
  • the above-mentioned at least one flag includes a slice-level flag, which is used to indicate whether the current slice (slice) allows the parity of the quantization coefficient to be hidden.
  • the slice-level flag may be located in the slice header.
  • the above-mentioned at least one flag includes a unit-level flag, which is used to indicate whether the current CTU allows the parity of the quantization coefficient to be hidden.
  • the above-mentioned at least one flag includes a block-level flag, which is used to indicate whether the current block allows the parity of the quantization coefficient to be hidden.
  • the encoding end first obtains the above-mentioned at least one flag, and determines whether the current block allows the parity of the quantized coefficient to be hidden based on the at least one flag. If it is determined based on at least one of the above flags that the current block does not allow the parity of the quantized coefficient to be hidden, the method in the embodiment of the present application is skipped. If it is determined based on at least one of the above flags that the current block is allowed to use the parity concealment technology provided by the embodiment of the present application, the method of the embodiment of the present application is executed.
  • the quantization coefficient parity hiding technology provided by the embodiments of the present application is mutually exclusive with the target transformation method, where the target transformation method includes secondary transformation or multiple transformations. At this time, when the encoding end determines that the current block is transformed using the target transformation method, it skips the technical solution provided by the embodiment of the present application.
  • the current area is an area including at least one non-zero quantization coefficient among the N areas.
  • In this embodiment of the present application, parity hiding is performed on all or part of the second quantization coefficient to obtain the first quantization coefficient, and the first quantization coefficient is encoded into the code stream instead of the second quantization coefficient.
  • Since the first quantization coefficient is usually smaller than the second quantization coefficient, encoding the first quantization coefficient requires fewer bits than encoding the second quantization coefficient, thereby reducing the encoding cost.
  • The embodiments of the present application do not limit the method of performing parity hiding on all or part of the second quantization coefficient to obtain the first quantization coefficient.
  • different methods can be selected to determine the first quantization coefficient according to the parity of the second quantization coefficient, as shown in case 1 and case 2.
  • the third operation method is used to operate part or all of the second quantization coefficient to obtain the first quantization coefficient.
  • The third operation mode includes: the first quantization coefficient is equal to (the value of part or all of the second quantization coefficient plus one) divided by two.
  • the fourth operation method is used to operate part or all of the second quantization coefficient to obtain the first quantization coefficient.
  • the fourth operation mode includes that the first quantization coefficient is equal to part or all of the value of the second quantization coefficient divided by two.
  • the encoding end uses a preset operation method to perform operations on part or all of the second quantized coefficients to obtain the first quantized coefficients.
  • the preset operation method is to divide part or all of the value of the second quantization coefficient by a positive integer greater than 1.
  • For example, the preset operation method includes dividing the value of the part whose parity is to be hidden in the second quantization coefficient by two and rounding, where rounding can be understood as taking the quotient; for example, if the value of the part whose parity is to be hidden in the second quantization coefficient is 7, then 7 divided by 2 and rounded is 3.
  • the encoding end can determine the first quantization coefficient according to the following formula (13):
  • C is the second quantization coefficient
  • qIdx is the first quantization coefficient
  • (C-n) is the quantization coefficient in which the parity is hidden in the second quantization coefficient.
  • the encoding end performs parity concealment on the part of the second quantization coefficient greater than n, and does not perform parity concealment on the part less than n.
  • That is, the part (C-n) of the second quantization coefficient whose parity is to be hidden is divided by 2 to obtain the parity-hidden part (C-n)//2, and the sum of the parity-hidden part (C-n)//2 and the part n of the second quantization coefficient whose parity is not hidden is determined as the first quantization coefficient.
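  • A minimal sketch of this hiding operation, consistent with the decoder-side reconstruction described earlier, is given below; the value of n is an assumption for illustration.

```python
def hide_parity(c: int, n: int) -> int:
    """Map the second quantization coefficient C to the first quantization
    coefficient qIdx: the part above n has its parity hidden by an integer
    division by two, while the part not greater than n is kept unchanged,
    i.e. qIdx = n + (C - n) // 2 for C > n."""
    if c <= n:
        return c
    return n + (c - n) // 2

# The dropped parity, (C - n) & 1 for C > n, is conveyed to the decoder through
# the parity associated with the P quantization coefficients of the region.
```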
  • the first quantized coefficient is a quantized coefficient obtained by parity concealing all or part of the second quantized coefficient in the current region, and the first quantized coefficient includes the part where the parity is concealed.
  • Before encoding the first quantization coefficient, the encoding end first needs to determine the context model corresponding to the first quantization coefficient.
  • Method 1: The encoding end uses different context models to encode the quantization coefficients whose parity is hidden and the quantization coefficients whose parity is not hidden. That is to say, the context model corresponding to the first quantization coefficient is different from the context models corresponding to other quantization coefficients whose parity is not hidden.
  • the first quantization coefficient in the embodiment of the present application is represented by one or more identifiers.
  • identifier 1 represents the part from 0 to 3
  • identifier 2 represents the part from 3 to 6
  • identifier 3 represents the part from 6 to 9
  • identifier 4 represents the part from 9 to 12
  • identifier 5 represents the part from 12 to 15.
  • a context model may be determined for one or more identifiers among the identifiers of the first quantization coefficient, and encoding may be performed using the one or more identifiers. That is to say, the embodiment of the present application can determine a target context model for the target quantization coefficient among the first quantization coefficients, and use the target context model to encode the target quantization coefficient among the first quantization coefficients.
  • the target quantization coefficient of the first quantization coefficient can be identifier 1, or a quantization coefficient represented by any one of identifiers 2 to 5.
  • For example, the encoding end can determine two context models, one of which is used for encoding identifier 1, and the other of which is used for encoding identifiers 2 to 5.
  • Method 2: In order to reduce the coding complexity of the first quantization coefficient, in the embodiment of the present application, the target context model corresponding to the first quantization coefficient is the same as at least one of the context models corresponding to the other quantization coefficients whose parity is not hidden. That is to say, in the embodiment of the present application, the existing context models corresponding to other quantization coefficients whose parity is not hidden are reused to encode part or all of the quantization coefficients whose parity is hidden, thereby reducing the encoding complexity of the parity-hidden quantization coefficients.
  • For example, the quantization coefficient can be divided into at least one part.
  • For example, the part from 0 to 3 represented by identifier 1 of the quantization coefficient is called BR, and the part from 4 to 15 represented by identifiers 2 to 5 of the quantization coefficient is called LR.
  • Different parts of the quantization coefficient correspond to different context models.
  • the above S703 includes the following steps:
  • The target quantization coefficient is a partial quantization coefficient of the first quantization coefficient.
  • S703-C: Use the target context model corresponding to the target quantization coefficient to encode the target quantization coefficient of the first quantization coefficient to obtain the code stream.
  • In this embodiment of the present application, context models corresponding to other quantization coefficients whose parity is not hidden are reused to encode the first quantization coefficient whose parity is hidden. Therefore, when encoding the target quantization coefficient in the first quantization coefficient, the context model corresponding to the target quantization coefficient of other quantization coefficients whose parity is not hidden is used to encode the target quantization coefficient in the first quantization coefficient.
  • For example, R context models are created for the BR part and Q context models are created for the LR part, where R and Q are both positive integers.
  • For example, if the target quantization coefficient is BR, assuming that BR corresponds to R context models, the encoding end selects one context model from the R context models as the target context model corresponding to BR, and uses the target context model corresponding to BR to encode the BR in the first quantization coefficient.
  • Similarly, the encoding end selects one context model from the Q context models as the target context model corresponding to LR, and uses the target context model corresponding to LR to encode the LR in the first quantization coefficient.
  • the implementation methods of determining the target context model corresponding to the target quantization coefficient of the first quantization coefficient from the multiple context models corresponding to the target quantization coefficient include but are not limited to the following:
  • Method 1 determine any one of the multiple context models corresponding to the target quantization coefficient as the target context model corresponding to the target quantization coefficient.
  • Method 2 the above S703-B includes the following steps of S703-B1 and S703-B2:
  • Each context model among the multiple context models corresponding to the target quantization coefficient has an index, so that the encoding end can determine the index of the target context model corresponding to the target quantization coefficient, and then, based on the index, select the target context model corresponding to the target quantization coefficient from the multiple context models corresponding to the target quantization coefficient.
  • the embodiments of the present application do not limit the specific implementation manner of determining the index of the context model corresponding to the target quantization coefficient.
  • For example, the first quantization coefficient includes BR, and if the first quantization coefficient is greater than 3, the first quantization coefficient also includes LR, where the index of the target context model corresponding to BR and the index of the target context model corresponding to LR are determined in different ways. The process of determining the index of the target context model corresponding to BR and the process of determining the index of the target context model corresponding to LR are introduced below.
  • the above S703-B1 includes the following steps:
  • S703-B11 Determine the index of the target context model corresponding to the BR based on the sum of BRs of the encoded quantization coefficients around the first quantization coefficient.
  • Since the BR of the first quantization coefficient is related to the BRs of the encoded quantization coefficients around the first quantization coefficient, in this embodiment of the present application, the index of the target context model corresponding to the BR in the first quantization coefficient is determined according to the sum of the BRs of the encoded quantization coefficients around the first quantization coefficient.
  • the sum of BRs of the encoded quantized coefficients around the first quantized coefficient is determined as the index of the target context model corresponding to the BR in the first quantized coefficient.
  • the sum of BRs of the encoded quantized coefficients around the first quantized coefficient is operated to obtain the index of the target context model corresponding to the BR in the first quantized coefficient.
  • the above S703-B11 includes the following steps:
  • That is, the method of determining the index of the target context model corresponding to the BR in the first quantization coefficient based on the sum of the BRs of the encoded quantization coefficients around the first quantization coefficient may be: the sum of the BRs of the encoded quantization coefficients around the first quantization coefficient is added to the first preset value to obtain a first sum value; the first sum value is divided by the first numerical value to obtain a first ratio; and the index of the target context model corresponding to the BR in the first quantization coefficient is determined according to the first ratio.
  • the coded quantization coefficients surrounding the first quantization coefficient can be understood as J coded quantization coefficients located around the first coefficient in the scanning order.
  • For example, the five encoded quantization coefficients around the first quantization coefficient are as shown in Figures 9A to 9C, where the first quantization coefficient is the black part in the figures and the five encoded quantization coefficients around it are the gray parts. That is to say, when determining the index of the target context model corresponding to the BR, reference is made to the five encoded quantization coefficients surrounding the first quantization coefficient as shown in Figures 9A to 9C.
  • the embodiments of the present application do not limit the specific values of the above-mentioned first preset value and first numerical value.
  • the first preset value is 1.
  • the first preset value may be 2.
  • the first value is 1.
  • the first value is 2.
  • the embodiment of the present application does not limit the specific manner of determining the index of the target context model corresponding to the BR in the first quantization coefficient based on the first ratio in S703-B113 above.
  • Example 1 Determine the first ratio as the index of the target context model corresponding to the BR in the first quantization coefficient.
  • Example 2 Process the first ratio to obtain the index of the target context model corresponding to the BR in the first quantization coefficient.
  • the embodiments of this application do not limit the specific manner in which the first ratio is processed.
  • That is, the above S703-B113 includes: the first ratio is compared with the first preset threshold, the minimum value of the first ratio and the first preset threshold is determined as the second value, and then the index of the target context model corresponding to the BR in the first quantization coefficient is determined based on the second value.
  • the embodiment of the present application does not limit the specific method of determining the index of the target context model corresponding to the BR based on the second value in the above S703-B113-2.
  • the second value determined above is determined as the index of the target context model corresponding to the BR in the first quantization coefficient.
  • Alternatively, the offset index offsetBR of the BR is determined, and the sum of the second value and the offset index of the BR is determined as the index of the target context model corresponding to the BR in the first quantization coefficient.
  • the index of the target context model corresponding to the BR in the first quantization coefficient can be determined according to the above formula (5).
  • the embodiments of the present application do not limit the specific values of the above-mentioned first preset value, first numerical value and first preset threshold.
  • For example, the above formula (5) can be specifically expressed as the above formula (6).
  • For another example, the above formula (5) can be specifically expressed as the above formula (7).
  • In the embodiment of the present application, the determination process of the context model index is adjusted to select a target context model suitable for the first quantization coefficient whose parity is hidden. Specifically, the sum of the BRs of the quantization coefficients surrounding the first quantization coefficient is divided by 4 to obtain the first ratio, and the index of the target context model corresponding to the BR in the first quantization coefficient is determined according to the first ratio.
  • For example, the first preset value a1 is adjusted from 1 in formula (6) to 2 in formula (7) to achieve rounding.
  • the embodiment of the present application does not limit the specific method of determining the offset index offset BR of the BR.
  • the offset index of the BR is determined according to at least one of the position of the first quantization coefficient in the current block, the size of the current block, the scanning order of the current block, and the color component of the current block.
  • the offset index of the BR is the first threshold.
  • the offset index of the BR is the second threshold.
  • the first threshold is 0.
  • the first threshold is 10.
  • the second threshold is 5.
  • the second threshold is 15.
  • the specific process of determining the index of the target context model corresponding to the BR in the first quantization coefficient is introduced above.
  • this embodiment also needs to determine the index of the target context model corresponding to the LR in the first quantized coefficient.
  • the above S703-B1 includes the following steps:
  • Since the LR of the first quantization coefficient is related to the BRs and LRs of the encoded quantization coefficients around the first quantization coefficient, in this embodiment of the present application, the index of the target context model corresponding to the LR in the first quantization coefficient is determined according to the sum of the BRs and LRs of the encoded quantization coefficients around the first quantization coefficient.
  • the sum of BR and LR of the encoded quantized coefficients around the first quantized coefficient is determined as the index of the target context model corresponding to the LR in the first quantized coefficient.
  • the sum of BR and LR of the encoded quantized coefficients around the first quantized coefficient is operated to obtain the index of the target context model corresponding to the LR in the first quantized coefficient.
  • the above S703-B21 includes the following steps:
  • That is, the method for determining the index of the target context model corresponding to the LR in the first quantization coefficient may be: the sum of the BRs and LRs of the encoded quantization coefficients around the first quantization coefficient is added to the second preset value to obtain a second sum value; the second sum value is divided by the third value to obtain a second ratio; and the index of the target context model corresponding to the LR in the first quantization coefficient is determined according to the second ratio.
  • the coded quantization coefficients surrounding the first quantization coefficient can be understood as J coded quantization coefficients located around the first coefficient in the scanning order.
  • For example, the three encoded quantization coefficients around the first quantization coefficient are as shown in Figures 10A to 10C, where the first quantization coefficient is the black part in the figures and the three encoded quantization coefficients around it are the gray parts. That is to say, when determining the index of the target context model corresponding to the LR, reference is made to the three encoded quantization coefficients surrounding the first quantization coefficient as shown in Figures 10A to 10C.
  • the second preset value is 1.
  • the second preset value may be 2.
  • the third value is 1.
  • the third value is 2.
  • the embodiment of the present application does not limit the specific manner of determining the index of the target context model corresponding to the LR in the first quantization coefficient based on the second ratio in S703-B213 above.
  • Example 1 Determine the second ratio as the index of the target context model corresponding to the LR in the first quantization coefficient.
  • Example 2 Process the second ratio to obtain the index of the target context model corresponding to the LR in the first quantization coefficient.
  • the embodiments of this application do not limit the specific manner in which the second ratio is processed.
  • That is, the above S703-B213 includes: the second ratio is compared with the second preset threshold, the minimum value of the second ratio and the second preset threshold is determined as the fourth value, and then the index of the target context model corresponding to the LR in the first quantization coefficient is determined based on the fourth value.
  • the embodiment of the present application does not limit the specific method of determining the index of the target context model corresponding to the LR based on the fourth value in the above S703-B213-2.
  • the fourth value determined above is determined as the index of the target context model corresponding to the LR in the first quantization coefficient.
  • Alternatively, the offset index offsetLR of the LR is determined, and the sum of the fourth value and the offset index of the LR is determined as the index of the target context model corresponding to the LR in the first quantization coefficient.
  • the index of the target context model corresponding to the LR in the first quantization coefficient can be determined according to the above formula (8).
  • the embodiments of the present application do not limit the specific values of the above-mentioned second preset value, third value and second preset threshold.
  • For example, the above formula (8) can be specifically expressed as the above formula (9).
  • For another example, the above formula (8) can be specifically expressed as the above formula (10).
  • In the embodiment of the present application, the determination process of the context model index is adjusted to select a target context model suitable for the first quantization coefficient whose parity is hidden. Specifically, the sum of the parts less than 16 of the quantization coefficients surrounding the first quantization coefficient is divided by 4 to obtain the second ratio, and the index of the target context model corresponding to the LR in the first quantization coefficient is determined according to the second ratio.
  • the embodiment of the present application does not limit the specific method of determining the offset index offset LR of LR.
  • the offset index of the LR is determined according to at least one of the position of the first quantization coefficient in the current block, the size of the current block, the scanning order of the current block, and the color component of the current block.
  • the offset index of the LR is the third threshold.
  • the offset index of the LR is the fourth threshold.
  • the embodiments of the present application do not limit the specific values of the third threshold and the fourth threshold.
  • the third threshold is 0.
  • the third threshold is 14.
  • the fourth threshold is 7.
  • the fourth threshold is 21.
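  • Since formulas (5) to (10) are not reproduced here, the following sketch only illustrates the general shape of the index derivation described above; every constant in the example calls is a placeholder, not a value taken from the reference formulas.

```python
def context_index(neighbor_sum: int, preset: int, divisor: int,
                  clip_threshold: int, offset: int) -> int:
    """General shape of the index derivation: add a preset value to the sum
    taken over the surrounding coded coefficients, divide, clip to a
    threshold, and add a part-specific offset (offsetBR or offsetLR)."""
    ratio = (neighbor_sum + preset) // divisor
    return min(ratio, clip_threshold) + offset

# BR: neighbor_sum is the sum of the BRs of the surrounding coded coefficients
#     (e.g. the five neighbours of Figures 9A to 9C).
# LR: neighbor_sum is the sum of the BRs and LRs of the surrounding coded
#     coefficients (e.g. the three neighbours of Figures 10A to 10C).
idx_br = context_index(neighbor_sum=6, preset=1, divisor=4, clip_threshold=5, offset=0)
idx_lr = context_index(neighbor_sum=11, preset=1, divisor=4, clip_threshold=7, offset=14)
```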
  • the context model index value of the first quantization coefficient is calculated as shown in Table 4 below.
  • After the index of the target context model corresponding to the target quantization coefficient in the first quantization coefficient is determined, the target context model corresponding to the target quantization coefficient is determined, according to the index, from the multiple context models corresponding to the target quantization coefficient.
  • For example, according to the index of the target context model corresponding to the BR in the first quantization coefficient, the target context model corresponding to the BR in the first quantization coefficient is obtained from the multiple context models corresponding to the BR parts of other quantization coefficients whose parity is not hidden.
  • Similarly, according to the index of the target context model corresponding to the LR in the first quantization coefficient, the target context model corresponding to the LR in the first quantization coefficient is obtained from the multiple context models corresponding to the LR parts of other quantization coefficients whose parity is not hidden.
  • In some embodiments, the above-mentioned multiple context models include context models with different indexes under multiple QP segments (for example, 4 QP segments) and 2 components (such as the luminance component and the chrominance component).
  • In this case, determining the target context model corresponding to the target quantization coefficient from the multiple context models corresponding to the target quantization coefficient according to the index includes: selecting at least one context model from the multiple context models corresponding to the target quantization coefficient according to at least one of the quantization parameter, the transform type, the transform block size and the color component corresponding to the first quantization coefficient; and then determining the context model corresponding to the index among the at least one context model as the target context model corresponding to the target quantization coefficient.
  • For example, assume that the index corresponding to the BR (i.e., identifier 1) in the first quantization coefficient is index 1, the QP segment corresponding to the first quantization coefficient is QP segment 1, and the first quantization coefficient is a quantization coefficient under the luminance component.
  • Assume that there are S context models corresponding to identifier 1. First, from these S context models, the T context models under QP segment 1 and the luminance component are selected. Then, the context model corresponding to index 1 among these T context models is determined as the target context model corresponding to the BR in the first quantization coefficient, and the target context model is used to encode the BR in the first quantization coefficient.
  • For another example, assume that the index corresponding to the LR (i.e., identifiers 2 to 5) in the first quantization coefficient is index 2, the QP segment corresponding to the first quantization coefficient is QP segment 1, and the first quantization coefficient is a quantization coefficient under the luminance component.
  • In this case, from the context models corresponding to identifiers 2 to 5, the context models under QP segment 1 and the luminance component are selected, the context model corresponding to index 2 among them is determined as the target context model corresponding to identifiers 2 to 5 in the first quantization coefficient, and the target context model is then used to encode the LR in the first quantization coefficient.
  • In some embodiments, before determining the target context model from the multiple context models, the encoding end needs to initialize the multiple context models, and then determine the target context model from the multiple initialized context models.
  • multiple context models corresponding to BR are initialized, and then a target context model corresponding to BR is selected from the multiple initialized context models.
  • multiple context models corresponding to LR are initialized, and then a target context model corresponding to LR is selected from the multiple initialized context models.
  • In order to enable the probability values in the context models corresponding to the parity-hidden coefficients to converge faster, each model can have a corresponding set of initial values based on the probability of occurrence of each symbol.
  • the probability of each context model is calculated according to the number of symbols in the model, the probability of each symbol appearing, and the cumulative distribution function.
  • That is, the cumulative probability that a symbol appearing in the context model is less than a certain value is used.
  • The probability values are normalized and represented by 16-bit integers.
  • the methods for initializing multiple context models include but are not limited to the following:
  • Method 1: Use equal probability values to initialize the multiple context models. For example, in a 4-symbol context model, the probability of occurrence of each of symbols 0, 1, 2, and 3 is 0.25; then the probability of a symbol less than 1 is 0.25, the probability of a symbol less than 2 is 0.5, the probability of a symbol less than 3 is 0.75, and the probability of a symbol less than 4 is 1. After scaling the probabilities to 16 bits, the probability of a symbol less than 1 is 8192, the probability of a symbol less than 2 is 16384, the probability of a symbol less than 3 is 24576, and the probability of a symbol less than 4 is 32768. These four integer values constitute the initial probability values of the context model.
  • Method 2: Initialize the multiple context models using convergence probability values, where the convergence probability value is the probability value to which a context model converges when it is used to encode test videos. For example, first, an initial probability value such as an equal probability is set for these context models. Then, a series of test videos are encoded using quantization parameters corresponding to certain target bit rates, and the final convergence probability values of these context models are used as the initial values of these context models. The above steps may be repeated to obtain new convergence values, and an appropriate convergence value is selected as the final initial value.
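  • The equal-probability initialization in Method 1 can be sketched as follows; the function name is an assumption, but the resulting values match the 8192/16384/24576/32768 example above.

```python
def init_equal_probability_cdf(num_symbols: int):
    """Build the cumulative probability table of a context model with
    num_symbols equally probable symbols, scaled to 16-bit integers."""
    total = 32768  # probabilities are normalized and represented on 16 bits
    return [((i + 1) * total) // num_symbols for i in range(num_symbols)]

print(init_equal_probability_cdf(4))  # [8192, 16384, 24576, 32768]
```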
  • the target context model is used to encode the target quantization coefficient of the first quantization coefficient to obtain a code stream.
  • For example, the target context model corresponding to the BR in the first quantization coefficient is obtained, and the target context model corresponding to the BR in the first quantization coefficient is used to encode the BR in the first quantization coefficient to obtain the encoded BR; the code stream is then obtained based on the encoded BR.
  • the encoded BR is output as a code stream.
  • If the first quantization coefficient also includes LR, the encoding end uses the target context model corresponding to the LR to encode the LR in the first quantization coefficient to obtain the encoded LR, and obtains the code stream based on the encoded BR and the encoded LR.
  • the bit stream formed by the encoded BR and the encoded LR is output as a code stream.
  • If the first quantization coefficient also includes a part greater than 15, the part greater than 15 in the first quantization coefficient is encoded, and the bit stream formed by the encoded BR, the encoded LR, and the encoded value of the part greater than 15 is output as the code stream.
  • the parity of the quantized coefficient whose parity is hidden among the first quantized coefficients is indicated by the P quantized coefficients in the current region.
  • a binary characteristic (0 or 1) of the P quantized coefficients in the current region is used to indicate the parity of the quantized coefficients whose parity is hidden.
  • the parity of the quantization coefficient whose parity is hidden is indicated by the parity corresponding to the P quantization coefficients in the current region.
  • the above-mentioned P quantization coefficients are all quantization coefficients in the current area.
  • the above P quantization coefficients are partial quantization coefficients in the current region.
  • For example, the parity corresponding to the P quantization coefficients may be the parity of any one of the following: the smallest quantization coefficient among the P quantization coefficients, the largest quantization coefficient among the P quantization coefficients, the sum of the absolute values of the P quantization coefficients, and the number of target quantization coefficients among the P quantization coefficients.
  • the embodiment of the present application uses the parity corresponding to the P quantization coefficients in the current area to indicate the parity of the quantization coefficient whose parity is hidden.
  • For example, if the parity corresponding to the P quantization coefficients is odd, it indicates that the quantization coefficient whose parity is hidden is an odd number;
  • if the parity corresponding to the P quantization coefficients is even, it indicates that the quantization coefficient whose parity is hidden is an even number.
  • the method in the embodiment of the present application also includes the following step 1:
  • Step 1: Adjust the P quantization coefficients so that the parity corresponding to the P quantization coefficients is consistent with the parity of the quantization coefficient whose parity is hidden.
  • Step 1 above includes: adjusting the values of the P quantization coefficients so that the parity of the sum of the first absolute values of the P quantization coefficients is consistent with the parity of the quantization coefficient whose parity is hidden.
  • the first absolute value of the quantized coefficient is part or all of the absolute value of the quantized coefficient.
  • For example, the first absolute value of a quantization coefficient is the part of its absolute value that does not exceed 15.
  • the above situation 1 includes the following two examples:
  • Example 1: If the quantization coefficient whose parity is hidden is an odd number and the sum of the first absolute values of the P quantization coefficients is an even number, then the values of the P quantization coefficients are adjusted so that the sum of the first absolute values of the adjusted P quantization coefficients is an odd number.
  • For example, if the quantization coefficient whose parity is hidden is an odd number and the sum of the first absolute values of the P quantization coefficients is an even number, then 1 is added to or subtracted from the smallest quantization coefficient among the P quantization coefficients, so that the sum of the first absolute values of the P quantization coefficients is modified to be an odd number.
  • Example 2: If the quantization coefficient whose parity is hidden is an even number and the sum of the first absolute values of the P quantization coefficients is an odd number, then the values of the P quantization coefficients are adjusted so that the sum of the first absolute values of the adjusted P quantization coefficients is an even number.
  • For example, if the quantization coefficient whose parity is hidden is an even number and the sum of the first absolute values of the P quantization coefficients is an odd number, then 1 is added to or subtracted from the smallest quantization coefficient among the P quantization coefficients, so that the sum of the first absolute values of the P quantization coefficients is modified to be an even number.
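  • The adjustment in Examples 1 and 2 above can be sketched as follows; the sketch changes the smallest non-zero coefficient by 1 when the parities disagree and ignores the rate-distortion selection that an encoder would actually perform, and the cap of 15 on the first absolute value is an assumption.

```python
def adjust_for_hidden_parity(coeffs, hidden_parity, cap=15):
    """Adjust the P quantization coefficients so that the parity of the sum of
    their first absolute values (the part of each absolute value not exceeding
    cap) matches the parity of the hidden coefficient."""
    coeffs = list(coeffs)
    if (sum(min(abs(c), cap) for c in coeffs) & 1) == hidden_parity:
        return coeffs
    # Flip the parity by changing one coefficient by 1; a real encoder would
    # pick the modification with the smallest rate-distortion cost.
    nonzero = [i for i, c in enumerate(coeffs) if c != 0]
    if nonzero:
        i = min(nonzero, key=lambda k: abs(coeffs[k]))
        coeffs[i] -= 1 if coeffs[i] > 0 else -1
    else:
        coeffs[0] = 1
    return coeffs
```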
  • In other embodiments, step 1 above includes: adjusting the values of the P quantization coefficients so that the parity of the number of target quantization coefficients among the P quantization coefficients is consistent with the parity of the quantization coefficient whose parity is hidden.
  • Case 2 above includes at least the following two examples:
  • Example 1: If the quantization coefficient whose parity is hidden is an odd number and the number of target quantization coefficients among the P quantization coefficients is an even number, then the values of the P quantization coefficients are adjusted so that the number of target quantization coefficients among the adjusted P quantization coefficients is an odd number.
  • Example 2: If the quantization coefficient whose parity is hidden is an even number and the number of target quantization coefficients among the P quantization coefficients is an odd number, then the values of the P quantization coefficients are adjusted so that the number of target quantization coefficients among the adjusted P quantization coefficients is an even number.
  • the above-mentioned target quantization coefficient is any one of the non-zero quantization coefficients among the P quantization coefficients, the non-zero quantization coefficients with even values, the quantization coefficients with even values, and the quantization coefficients with odd values.
  • For example, when the target quantization coefficient is a non-zero quantization coefficient, the encoding end modifies at least one coefficient among the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden is an odd number and the number of non-zero quantization coefficients among the P quantization coefficients is an even number, then one quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 1, or the value of the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, so that the number of non-zero quantization coefficients among the P quantization coefficients is an odd number.
  • For another example, if the quantization coefficient whose parity is hidden is an even number and the number of non-zero quantization coefficients among the P quantization coefficients is an odd number, then one quantization coefficient whose value is zero among the P quantization coefficients can be adjusted to 1, or the value of the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, so that the number of non-zero quantization coefficients among the P quantization coefficients is an even number.
  • the encoding end modifies at least one coefficient among the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden is an odd number, and the number of non-zero quantization coefficients among the P quantization coefficients is an even number, then the quantization coefficient in which one of the P quantization coefficients is zero can be adjusted to 2 , or add 1 or subtract 1 to the value of the smallest non-zero quantization coefficient among the P quantization coefficients, so that the number of non-zero quantization coefficients with even values among the P quantization coefficients is an odd number.
  • the quantization coefficient whose parity is hidden is an even number
  • the number of non-zero quantization coefficients with an even value among the P quantization coefficients is an odd number
  • the quantization coefficient in which one of the P quantization coefficients is zero can be adjusted to 2.
  • the encoding end uses an adjustment method with the smallest rate distortion cost to adjust at least one coefficient among the P quantization coefficients.
  • In another example, the encoding end modifies at least one coefficient among the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden is an odd number and the number of quantization coefficients with even values among the P quantization coefficients is an even number, a zero-valued quantization coefficient among the P quantization coefficients can be adjusted to 1, or the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, so that the number of quantization coefficients with even values among the P quantization coefficients is an odd number.
  • Similarly, if the quantization coefficient whose parity is hidden is an even number and the number of quantization coefficients with even values among the P quantization coefficients is an odd number, a zero-valued quantization coefficient among the P quantization coefficients can be adjusted to 1, or the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0, so that the number of quantization coefficients with even values among the P quantization coefficients is an even number.
  • In another example, the encoding end modifies at least one coefficient among the P quantization coefficients. For example, if the quantization coefficient whose parity is hidden is an odd number and the number of quantization coefficients with odd values among the P quantization coefficients is an even number, a zero-valued quantization coefficient among the P quantization coefficients can be adjusted to 1, or the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0 or have 1 added to or subtracted from it, so that the number of quantization coefficients with odd values among the P quantization coefficients is an odd number.
  • Similarly, if the quantization coefficient whose parity is hidden is an even number and the number of quantization coefficients with odd values among the P quantization coefficients is an odd number, a zero-valued quantization coefficient among the P quantization coefficients can be adjusted to 1, or the smallest non-zero quantization coefficient among the P quantization coefficients can be adjusted to 0 or have 1 added to or subtracted from it, so that the number of quantization coefficients with odd values among the P quantization coefficients is an even number.
  • In one example, the parity hiding technology proposed in this application is implemented on the AVM reference software; the region size is set to 16, and the threshold on the number of non-zero coefficients for enabling parity hiding is 4. If the sum of the absolute values of the non-zero coefficients at positions less than or equal to 15 in the current region is an odd number, the last coefficient of the region in scanning order (i.e., the second quantization coefficient) is an odd number; if the sum is an even number, the last coefficient of the region in scanning order (i.e., the second quantization coefficient) is an even number.
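As a rough illustration of the decoder-side rule in this experiment, the following sketch (assumed for illustration, not taken from the AVM source code) derives the parity of the last coefficient of a 16-coefficient region from the sum of the absolute values of the preceding coefficients in scan order.

```python
def hidden_parity_for_region(region_coeffs_in_scan_order):
    """region_coeffs_in_scan_order: up to 16 decoded coefficients of the current region.
    Returns 1 if the hidden (last) coefficient is inferred to be odd, otherwise 0."""
    preceding = region_coeffs_in_scan_order[:-1]      # coefficients at positions 0..14
    return sum(abs(c) for c in preceding) % 2         # odd sum -> odd last coefficient
```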
  • A constraint of the parity concealment technology of this application is that, if at least one direction of the transform block uses the identity transform, the parity concealment technology is not enabled; if neither direction of the transform block uses the identity transform, the parity hiding technology is turned on, and it is turned on for the luminance component only.
  • Under this condition, sequences with resolutions from 270p to 4K were tested under the all intra, random access and low delay configurations, and the resulting codec performance changes are shown in Table 10, Table 11 and Table 12, respectively.
  • the PSNR in Table 10 to Table 12 above is the peak signal-to-noise ratio, which represents the ratio of the maximum possible power of the signal to the destructive noise power that affects its representation accuracy. Since many signals have very wide dynamic ranges, peak signal-to-noise ratio is often expressed in logarithmic decibel units.
  • PSNR is used to evaluate the quality of an image after compression compared with the original image. The higher the PSNR, the smaller the distortion after compression.
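For reference, PSNR is computed as 10 * log10(MAX^2 / MSE), where MAX is the maximum possible pixel value and MSE is the mean squared error between the original and reconstructed images. A minimal sketch follows; NumPy is assumed here only for convenience and is not part of the described technology.

```python
import numpy as np

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB; a higher PSNR means less compression distortion."""
    diff = np.asarray(original, dtype=np.float64) - np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")                      # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```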
  • In another test, if at least one direction of the transform block uses the identity transform, the parity concealment technology is not enabled; if neither direction of the transform block uses the identity transform, the parity concealment technology is enabled, and the context models corresponding to the quantized coefficients whose parity is not hidden are reused to encode the quantized coefficients whose parity is hidden. Under this condition, sequences with resolutions from 270p to 4K were tested under the common test conditions (CTC) in the all intra and random access configurations, and the codec performance changes obtained are shown in Table 13 and Table 14.
  • The encoding end divides the current block into N regions, determines the second quantization coefficient in the current region, and performs parity hiding on part or all of the second quantization coefficient to obtain the first quantization coefficient, where the current region is a region that includes at least one non-zero quantization coefficient among the N regions; then it determines the target context model corresponding to the first quantization coefficient and uses the target context model to encode the first quantization coefficient to obtain the code stream.
  • The parity of the quantization coefficient whose parity is hidden in the first quantization coefficient is indicated by the P quantization coefficients in the current region.
  • The encoding end performs parity hiding on part or all of the second quantization coefficient to obtain the first quantization coefficient; since the absolute value of the first quantization coefficient is smaller than that of the second quantization coefficient, encoding the first quantization coefficient instead of the second quantization coefficient reduces the number of coding bits and thus the coding cost.
  • In addition, a target context model is separately determined for the first quantization coefficient and used to encode it, which improves the coding accuracy of the first quantization coefficient.
  • Figure 13 is a schematic flowchart of a video encoding method provided by an embodiment of the present application. As shown in Figure 13, the method in the embodiment of this application includes:
  • the above-mentioned at least one flag is used to indicate whether the parity of the quantized coefficient is allowed to be hidden.
  • At least one flag includes at least one of a sequence-level flag, an image-level flag, a slice-level flag, a unit-level flag, and a block-level flag.
  • If it is determined according to the at least one flag that the parity of the quantization coefficients of the current block is allowed to be hidden, the following step S802 is performed.
  • Otherwise, step S806 is performed.
  • the embodiment of the present application does not limit the method of dividing the area of the current block.
  • the current block is divided into N areas according to the scanning order, or the current block is divided into N areas according to the spatial position of the pixels in the current block.
  • the sizes of the above-mentioned N areas may be the same or different, and this is not limited in the embodiment of the present application.
  • The condition includes at least one of the following: the preset conditions and the parity hiding technology enabling conditions.
  • In this way, the use of the parity concealment technology proposed in this application is restricted. Specifically, if the current region meets the condition, the current region can obtain a significant beneficial effect by using the parity hiding technology provided by the embodiment of the present application, and in this case the following step S804 is performed.
  • different parity concealment technology enabling conditions may be preset based on at least one of quantization parameters, transform types, transform block sizes, scan types, and color components.
  • For example, the parity concealment technology is enabled if the quantization parameter is greater than or equal to a certain preset value; if the quantization parameter is less than the preset value, the parity concealment technology is not enabled.
  • For another example, if the transform type is a preset transform type, the parity hiding technology is enabled; if the transform type is not a preset transform type, the parity hiding technology is not enabled.
  • Optionally, if the transform type of the current block is the first transform type, the parity hiding technology is not enabled, where the first transform type is used to indicate that at least one direction of the current block skips the transform; for details, refer to Table 2 above.
  • For another example, if the transform block size is greater than or equal to a preset size, the parity concealment technology is enabled; if the transform block size is smaller than the preset size, the parity concealment technology is not enabled.
  • For another example, if the color component of the current block is the first component, the parity concealment technology is not enabled; optionally, the first component is the chrominance component. That is to say, the parity concealment technology is turned on for the luminance component and is not enabled for the chrominance component.
  • For another example, if the transform block uses a preset scan type, the parity concealment technology is enabled; if the transform block does not use the preset scan type, the parity concealment technology is not enabled.
  • This application does not limit the preset scan type.
  • the preset scan type is ZigZag scan or diagonal scan.
  • the preset conditions are introduced below.
  • the above preset conditions include at least one of the following conditions:
  • Condition 1 The number of non-zero quantization coefficients in the current area is greater than the first preset value
  • Condition 2 In the current area, the distance between the first non-zero quantization coefficient and the last non-zero quantization coefficient in the decoding scanning order is greater than the second preset value
  • Condition 3 In the current area, the distance between the first non-zero quantization coefficient and the last quantization coefficient in the decoding scanning order is greater than the third preset value
  • Condition 4 In the current area, the sum of the absolute values of non-zero quantization coefficients is greater than the fourth preset value
  • Condition 5 The color component of the current block is the second component; optionally, the second component is the luminance component
  • Condition 6 The transform type of the current block is not the first transform type, where the first transform type is used to indicate that at least one direction of the current block skips the transform
  • the above six conditions can also be combined with each other to form new constraints.
  • the embodiments of the present application do not limit the specific values of the above-mentioned first to fourth preset values, as long as the first to fourth preset values are all positive integers.
  • At least one of the first preset value, the second preset value, the third preset value and the fourth preset value is a fixed value.
  • At least one of the first preset value, the second preset value, the third preset value and the fourth preset value is a non-fixed value, that is, a value determined by the encoding end based on the current encoding information.
  • In this case, the encoding end writes the non-fixed value into the code stream, and the decoding end parses the non-fixed value from the code stream.
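A possible shape of this region-level check is sketched below. The threshold values t1 to t4 are placeholders (the text only requires them to be positive integers, either fixed or signalled in the code stream), and treating the conditions as "at least one of" is also an assumption consistent with the wording above.

```python
def region_may_use_parity_hiding(coeffs_in_scan_order, t1=4, t2=2, t3=2, t4=8):
    """coeffs_in_scan_order: quantization coefficients of the current region in scan order.
    Returns True if at least one of the preset conditions 1 to 4 holds."""
    nonzero_positions = [i for i, c in enumerate(coeffs_in_scan_order) if c != 0]
    if not nonzero_positions:
        return False
    first_nz, last_nz = nonzero_positions[0], nonzero_positions[-1]
    last_pos = len(coeffs_in_scan_order) - 1
    cond1 = len(nonzero_positions) > t1                         # condition 1: count of non-zeros
    cond2 = (last_nz - first_nz) > t2                           # condition 2: first to last non-zero
    cond3 = (last_pos - first_nz) > t3                          # condition 3: first non-zero to last position
    cond4 = sum(abs(c) for c in coeffs_in_scan_order) > t4      # condition 4: sum of absolute values
    return cond1 or cond2 or cond3 or cond4
```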
  • In some embodiments, the encoding end may first determine, based on at least one of the quantization parameter, transform type, transform block size, scan type and color component corresponding to the first quantization coefficient, whether the parity of the first quantization coefficient is allowed to be hidden, and then, when hiding is allowed, judge whether the current region meets the above preset conditions.
  • Alternatively, when determining that the current region satisfies the above preset conditions, the encoding end may then determine whether the first quantization coefficient satisfies the parity concealment technology enabling conditions.
  • the above preset conditions and the above parity hiding technology enabling conditions can be used alone or in combination with each other.
  • S804. Determine the second quantization coefficient in the current area, and perform parity hiding on all or part of the second quantization coefficient to obtain the first quantization coefficient.
  • In some embodiments, the target context model corresponding to the first quantization coefficient is the same as at least one of the context models corresponding to other quantization coefficients whose parity is not hidden.
  • In some embodiments, when encoding the current region, the encoding end first determines whether the current region meets the condition, that is, whether encoding the current region using the parity concealment technology provided by this application can bring a significant beneficial technical effect. After determining that using this parity hiding technology has a significant beneficial effect, the technical solution of this application is used to encode the current region, which improves coding reliability.
  • FIG. 6 to FIG. 13 are only examples of the present application and should not be understood as limitations of the present application.
  • It should be understood that, in the various embodiments of the present application, the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
  • the term "and/or" is only an association relationship describing associated objects, indicating that three relationships can exist. Specifically, A and/or B can represent three situations: A exists alone, A and B exist simultaneously, and B exists alone.
  • the character "/" in this article generally indicates that the related objects are an "or" relationship.
  • Figure 14 is a schematic block diagram of a video decoding device provided by an embodiment of the present application.
  • the video decoding device 10 includes:
  • the decoding unit 11 is used to decode the code stream to obtain P quantization coefficients in the current area, where the current area is an area including at least one non-zero quantization coefficient in the current block, and the P is a positive integer;
  • the parity determination unit 12 is configured to determine, based on the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden in the first quantization coefficient, where the first quantization coefficient is a quantization coefficient obtained after parity concealment is performed on all or part of the second quantization coefficient, and the second quantization coefficient is a quantization coefficient whose parity is not concealed in the current region;
  • the processing unit 13 is configured to determine the target context model corresponding to the first quantization coefficient, and use the target context model to decode the first quantization coefficient based on context coding to obtain the decoded first quantization coefficient;
  • the determining unit 14 is configured to determine the second quantization coefficient according to the parity of the quantization coefficient whose parity is hidden and the decoded first quantization coefficient.
  • In some embodiments, the target context model corresponding to the first quantization coefficient is the same as at least one of the context models corresponding to other quantization coefficients whose parity is not hidden.
  • the processing unit 13 is specifically configured to obtain multiple context models corresponding to the target quantization coefficients of other quantization coefficients whose parity is not hidden; from the multiple context models corresponding to the target quantization coefficients, Determine a target context model corresponding to the target quantization coefficient of the first quantization coefficient; use the target context model corresponding to the target quantization coefficient to decode the target quantization coefficient of the first quantization coefficient to obtain the first quantization coefficient.
  • In some embodiments, the processing unit 13 is specifically configured to determine the index of the target context model corresponding to the target quantization coefficient, and to determine, from the multiple context models corresponding to the target quantization coefficient, the target context model corresponding to the target quantization coefficient of the first quantization coefficient according to the index.
  • In some embodiments, the processing unit 13 is specifically configured to: if the target quantization coefficient is the basic part represented by flag 1 in the first quantization coefficient, determine the index of the target context model corresponding to the basic part according to the sum of the basic parts of the decoded quantization coefficients around the first quantization coefficient.
  • In some embodiments, the processing unit 13 is specifically configured to add the sum of the basic parts of the decoded quantization coefficients around the first quantization coefficient to a first preset value to obtain a first sum value; divide the first sum value by the first value to obtain a first ratio; and determine the index of the target context model corresponding to the basic part according to the first ratio.
  • the first value is 4.
  • the first preset value is 2.
  • the processing unit 13 is specifically configured to determine the minimum value of the first ratio and the first preset threshold as a second value; and determine the corresponding value of the basic part according to the second value. Index of the target context model.
  • the first preset threshold is 4.
  • the processing unit 13 is specifically configured to determine the offset index of the basic part; determine the sum of the second value and the offset index of the basic part as the target corresponding to the basic part. Index to the context model.
  • the processing unit 13 is specifically configured to determine the position of the first quantization coefficient in the current block, the size of the current block, the scanning order of the current block, and the color of the current block. At least one of the components determines the offset index of the base portion.
  • the offset index of the basic part is the first threshold.
  • the first threshold is 0.
  • the offset index of the basic part is the second threshold.
  • the second threshold is 5.
  • In some embodiments, the processing unit 13 is specifically configured to: if the target quantization coefficient is the lower part represented by identifiers 2 to 5 in the first quantization coefficient, determine the index of the target context model corresponding to the lower part according to the sum of the basic parts and the lower parts of the decoded quantization coefficients around the first quantization coefficient.
  • In some embodiments, the processing unit 13 is specifically configured to add the sum of the basic parts and the lower parts of the decoded quantization coefficients around the first quantization coefficient to a second preset value to obtain a second sum value; divide the second sum value by the third value to obtain a second ratio; and determine the index of the target context model corresponding to the lower part according to the second ratio.
  • the third value is 4.
  • the second preset value is 2.
  • the processing unit 13 is specifically configured to determine the minimum value of the second ratio and the second preset threshold as a fourth value; based on the fourth value, determine the lower part corresponding to Index of the target context model.
  • the second preset threshold is 4.
  • the processing unit 13 is specifically configured to determine the offset index of the lower part; determine the sum of the fourth value and the offset index of the lower part as the lower part The index of the corresponding target context model.
  • the processing unit 13 is specifically configured to determine the position of the first quantization coefficient in the current block, the size of the current block, the scanning order of the current block, and the color of the current block. At least one of the components determines the bias index of the lower portion.
  • the offset index of the lower part is the third threshold.
  • the third threshold is 0.
  • the offset index of the lower part is a fourth threshold.
  • the fourth threshold is 7.
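The two index derivations above (for the basic part signalled by flag 1 and for the lower part signalled by identifiers 2 to 5) follow the same pattern: clip a scaled neighbourhood sum and add an offset. The sketch below uses the example constants given in the surrounding text (preset value 2, divisor 4, clipping threshold 4, offsets 0 or 5 for the basic part and 0 or 7 for the lower part); integer division is an assumption where the text only says "divide".

```python
def context_index(neighbor_sum, offset, preset=2, divisor=4, cap=4):
    """Generic form of the index derivation: clip((neighbor_sum + preset) / divisor, cap) + offset."""
    return min((neighbor_sum + preset) // divisor, cap) + offset

# basic part (flag 1): neighbor_sum is the sum of the basic parts of decoded neighbouring coefficients
base_index = context_index(neighbor_sum=6, offset=0)     # offset 0 or 5 depending on position/size/scan/colour
# lower part (identifiers 2..5): neighbor_sum is the sum of basic + lower parts of neighbouring coefficients
lower_index = context_index(neighbor_sum=9, offset=7)    # offset 0 or 7 depending on position/size/scan/colour
```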
  • In some embodiments, the processing unit 13 is specifically configured to use the target context model corresponding to the basic part represented by identifier 1 in the first quantization coefficient to decode the basic part of the first quantization coefficient to obtain the decoded basic part, and to determine the first quantization coefficient according to the decoded basic part.
  • In some embodiments, if the first quantization coefficient also includes the lower part represented by identifiers 2 to 5, the processing unit 13 is specifically configured to use the target context model corresponding to the lower part to decode the lower part of the first quantization coefficient to obtain the decoded lower part, and to determine the first quantization coefficient based on the decoded basic part and the decoded lower part.
  • In some embodiments, the processing unit 13 is specifically configured to select at least one context model from the multiple context models corresponding to the target quantization coefficient according to at least one of the quantization parameter, transform type, transform block size, scan type and color component corresponding to the first quantization coefficient, and to determine the context model corresponding to the index among the at least one context model as the target context model corresponding to the target quantization coefficient.
  • In some embodiments, the processing unit 13 is further configured to initialize the multiple context models corresponding to the target quantization coefficient, and to determine the target context model corresponding to the target quantization coefficient from the initialized multiple context models corresponding to the target quantization coefficient.
  • the processing unit 13 is specifically configured to use equal probability values to initialize multiple context models corresponding to the target quantization coefficient; or, use convergence probability values to initialize multiple context models corresponding to the target quantization coefficient. Initialization is performed, where the convergence probability value is the convergence probability value corresponding to the context model when the test video is encoded using the context model.
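Either initialisation option can be expressed in a few lines. This is only a sketch (the function name and the representation of each context as a single probability value are simplifications introduced here): the converged probabilities would be obtained offline by coding test video with the same context models.

```python
def init_context_models(num_models, converged_probs=None):
    """Initialise the context models either with equal probability (0.5 each) or with
    pre-trained converged probabilities measured on test sequences."""
    if converged_probs is not None:
        assert len(converged_probs) == num_models
        return list(converged_probs)
    return [0.5] * num_models
```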
  • In some embodiments, the parity determination unit 12 is further configured to determine, based on at least one of the quantization parameter, transform type, transform block size, scan type and color component corresponding to the first quantization coefficient, whether the parity of the first quantization coefficient is allowed to be hidden; and, when it is determined that the parity of the first quantization coefficient is allowed to be hidden, to determine the parity of the quantization coefficient whose parity is hidden according to the P quantization coefficients.
  • the parity determination unit 12 is specifically configured to determine that the parity of the first quantization coefficient is not allowed to be hidden if the transform type of the current block is the first transform type.
  • the first transform type is used to indicate that at least one direction of the current block skips the transform.
  • the parity determination unit 12 is specifically configured to determine that the parity of the first quantization coefficient is not allowed to be hidden if the color component of the current block is the first component.
  • the first component is a chrominance component.
  • the parity determination unit 12 is specifically configured to determine the parity of the quantization coefficient whose parity is hidden according to the parity corresponding to the P quantization coefficients.
  • In some embodiments, the determination unit 14 is specifically configured to use a preset operation method to operate on the quantization coefficient whose parity is hidden to obtain a first operation result; process the first operation result according to the parity to obtain a second operation result; and obtain the second quantization coefficient based on the second operation result and the first quantization coefficient.
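One concrete reading of this three-step reconstruction, assuming the "divide by two and round" hiding rule mentioned later for the encoder side, is sketched below; the sign handling and the exact rounding convention are assumptions made for the example.

```python
def reconstruct_second_coefficient(decoded_first_coeff, parity_bit):
    """Invert an encoder-side 'divide by two and round (down)' step:
    double the decoded magnitude (first operation result), add the inferred parity bit
    (second operation result), and restore the sign."""
    sign = -1 if decoded_first_coeff < 0 else 1
    doubled = 2 * abs(decoded_first_coeff)      # preset operation on the hidden-parity coefficient
    restored = doubled + parity_bit             # apply the parity indicated by the P coefficients
    return sign * restored
```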
  • the parity determination unit 12 is specifically configured to determine the parity of the quantization coefficient whose parity is hidden based on the P quantization coefficients if the current area satisfies the preset condition.
  • the preset conditions include at least one of the following:
  • the number of non-zero quantization coefficients in the current region is greater than the first preset value
  • the distance between the first non-zero quantized coefficient and the last non-zero quantized coefficient in the decoding scanning order is greater than the second preset value
  • the distance between the first non-zero quantized coefficient and the last quantized coefficient in the decoding scanning order is greater than the third preset value
  • the sum of absolute values of non-zero quantization coefficients is greater than the fourth preset value
  • the color component of the current block is the second component
  • the transformation type of the current block is not a first transformation type, and the first transformation type is used to indicate at least one direction skip transformation of the current block.
  • the second component is a brightness component.
  • In some embodiments, the parity determination unit 12 is further configured to skip the step of determining, based on the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden in the first quantization coefficient, when the current block is transformed using a target transformation method.
  • the target transformation method includes secondary transformation, multiple transformations or a first transformation type, and the first transformation type is used to indicate at least one direction skip transformation of the current block.
  • In some embodiments, the parity determination unit 12 is further configured to decode the code stream to obtain at least one flag, where the at least one flag is used to indicate whether the parity of the quantized coefficients is allowed to be hidden; and, if it is determined according to the at least one flag that the parity of at least one quantization coefficient in the current block is allowed to be concealed, to determine the parity of the quantization coefficient whose parity is concealed according to the P quantization coefficients.
  • the at least one flag includes at least one of a sequence level flag, an image level flag, a slice level flag, a unit level flag and a block level flag.
  • the first quantization coefficient is a non-zero quantization coefficient located at the Kth position in the scanning order in the current area, and the K is less than or equal to the number of non-zero quantization coefficients in the current area.
  • In some embodiments, the decoding unit 11 is specifically configured to decode the code stream to obtain the decoded information of the current block; divide the current block into N regions, where N is a positive integer; and obtain the P quantization coefficients in the current region from the decoded information of the current block.
  • the device embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, they will not be repeated here.
  • the video decoding device 10 shown in FIG. 14 may correspond to the corresponding subject that performs the methods of the embodiments of the present application, and the aforementioned and other operations and/or functions of each unit in the video decoding device 10 are respectively intended to implement the corresponding processes in those methods; for the sake of brevity, they are not repeated here.
  • Figure 15 is a schematic block diagram of a video encoding device provided by an embodiment of the present application.
  • the video encoding device 20 includes:
  • the dividing unit 21 is used to divide the current block into N areas, where N is a positive integer;
  • the processing unit 22 is configured to determine the second quantization coefficient in the current region and perform parity hiding on part or all of the second quantization coefficient to obtain the first quantization coefficient, where the current region is a region among the N regions that includes at least one non-zero quantization coefficient;
  • the encoding unit 23 is configured to determine the target context model corresponding to the first quantization coefficient and use the target context model to encode the first quantization coefficient to obtain a code stream, where the parity of the quantization coefficient whose parity is hidden in the first quantization coefficient is indicated by P quantization coefficients in the current region, and P is a positive integer.
  • In some embodiments, the target context model corresponding to the first quantization coefficient is the same as at least one of the context models corresponding to other quantization coefficients whose parity is not hidden.
  • the encoding unit 23 is specifically configured to obtain multiple context models corresponding to the target quantization coefficients of other quantization coefficients whose parity is not hidden; from the multiple context models corresponding to the target quantization coefficients, Determine a target context model corresponding to the target quantization coefficient of the first quantization coefficient; use the target context model of the target quantization coefficient to encode the target quantization coefficient of the first quantization coefficient to obtain the code stream.
  • In some embodiments, the encoding unit 23 is specifically configured to determine the index of the target context model corresponding to the target quantization coefficient, and to determine, from the multiple context models corresponding to the target quantization coefficient, the target context model corresponding to the target quantization coefficient according to the index.
  • In some embodiments, the encoding unit 23 is specifically configured to: if the target quantization coefficient is the basic part represented by flag 1 in the first quantization coefficient, determine the index of the target context model corresponding to the basic part according to the sum of the basic parts of the encoded quantization coefficients around the first quantization coefficient.
  • In some embodiments, the encoding unit 23 is specifically configured to add the sum of the basic parts of the encoded quantization coefficients around the first quantization coefficient to the first preset value to obtain a first sum value; divide the first sum value by the first value to obtain a first ratio; and determine the index of the target context model corresponding to the basic part according to the first ratio.
  • the first value is 2.
  • the first preset value is 2.
  • the encoding unit 23 is specifically configured to determine the minimum value of the first ratio and the first preset threshold as a second value; according to the second value, determine the corresponding value of the basic part. Index of the target context model.
  • the first preset threshold is 4.
  • the encoding unit 23 is specifically configured to determine the offset index of the basic part; determine the sum of the second value and the offset index of the basic part as the target corresponding to the basic part. Index to the context model.
  • the encoding unit 23 is specifically configured to determine the position of the first quantization coefficient in the current block, the size of the current block, the scanning order of the current block, and the color of the current block. At least one of the components determines the offset index of the base portion.
  • the offset index of the basic part is the first threshold.
  • the offset index of the basic part is the second threshold.
  • the second threshold is 5.
  • In some embodiments, the encoding unit 23 is specifically configured to: if the target quantization coefficient is the lower part represented by identifiers 2 to 5 in the first quantization coefficient, determine the index of the target context model corresponding to the lower part according to the sum of the basic parts and the lower parts of the encoded quantization coefficients around the first quantization coefficient.
  • In some embodiments, the encoding unit 23 is specifically configured to add the sum of the basic parts and the lower parts of the encoded quantization coefficients around the first quantization coefficient to a second preset value to obtain a second sum value; divide the second sum value by the third value to obtain a second ratio; and determine the index of the target context model corresponding to the lower part according to the second ratio.
  • the third value is 2.
  • the second preset value is 2.
  • the encoding unit 23 is specifically configured to determine the minimum value of the second ratio and the second preset threshold as a fourth value; according to the fourth value, determine the lower part corresponding to Index of the target context model.
  • the second preset threshold is 4.
  • the encoding unit 23 is specifically configured to determine the offset index of the lower part; determine the sum of the fourth value and the offset index of the lower part as the lower part The index of the corresponding target context model.
  • the encoding unit 23 is specifically configured to determine the position of the first quantization coefficient in the current block, the size of the current block, the scanning order of the current block, and the color of the current block. At least one of the components determines the bias index of the lower portion.
  • the offset index of the lower part is the third threshold.
  • the third threshold is 0.
  • the offset index of the lower part is a fourth threshold.
  • the fourth threshold is 7.
  • In some embodiments, the encoding unit 23 is specifically configured to use the target context model corresponding to the basic part represented by identifier 1 in the first quantization coefficient to encode the basic part of the first quantization coefficient to obtain the encoded basic part, and to obtain the code stream according to the encoded basic part.
  • In some embodiments, if the first quantization coefficient also includes the lower part represented by identifiers 2 to 5, the encoding unit 23 is specifically configured to use the target context model corresponding to the lower part to encode the lower part of the first quantization coefficient to obtain the encoded lower part, and to determine the code stream based on the encoded basic part and the encoded lower part.
  • In some embodiments, the encoding unit 23 is specifically configured to select at least one context model from the multiple context models corresponding to the target quantization coefficient according to at least one of the quantization parameter, transform type, transform block size, scan type and color component corresponding to the first quantization coefficient, and to determine the context model corresponding to the index among the at least one context model as the target context model corresponding to the target quantization coefficient.
  • In some embodiments, the encoding unit 23 is further configured to initialize the multiple context models corresponding to the target quantization coefficient, and to determine the target context model corresponding to the target quantization coefficient of the first quantization coefficient from the initialized multiple context models corresponding to the target quantization coefficient.
  • the encoding unit 23 is specifically configured to use equal probability values to initialize multiple context models corresponding to the target quantization coefficient; or, use convergence probability values to initialize multiple context models corresponding to the target quantization coefficient. Initialization is performed, where the convergence probability value is the convergence probability value corresponding to the context model when the context model is used to encode the test video.
  • In some embodiments, the processing unit 22 is specifically configured to determine, according to at least one of the quantization parameter, transform type, transform block size, scan type and color component corresponding to the first quantization coefficient, whether the parity of the first quantization coefficient is allowed to be hidden; and, when it is determined that the parity of the first quantization coefficient is allowed to be hidden, to perform parity hiding on part or all of the second quantization coefficient to obtain the first quantization coefficient.
  • In some embodiments, the processing unit 22 is specifically configured to determine that the parity of the first quantization coefficient is not allowed to be hidden if the transform type of the current block is the first transform type, where the first transform type is used to indicate that at least one direction of the current block skips the transform.
  • the processing unit 22 is specifically configured to determine that the parity of the first quantization coefficient is not allowed to be hidden if the color component of the current block is the first component.
  • the first component is a chrominance component.
  • the parity of the quantization coefficient whose parity is hidden is indicated by the parity corresponding to the P quantization coefficients in the current region.
  • In some embodiments, the processing unit 22 is configured to adjust the P quantization coefficients so that the parity corresponding to the P quantization coefficients is consistent with the parity of the quantization coefficient whose parity is hidden.
  • the processing unit 22 is specifically configured to process part or all of the second quantization coefficient using a preset operation method to obtain the first quantization coefficient.
  • the preset operation method includes dividing part or all of the second quantization coefficient by two and rounding.
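A minimal sketch of this encoder-side step, paired with the decoder-side reconstruction shown earlier, might look as follows; rounding down and the sign convention are assumptions, and the dropped parity bit is what the P quantization coefficients must then indicate.

```python
def hide_parity(second_coeff):
    """Apply the 'divide by two and round' hiding rule to one coefficient.
    Returns the transmitted first coefficient and the parity bit to be hidden."""
    sign = -1 if second_coeff < 0 else 1
    magnitude = abs(second_coeff)
    first_coeff = sign * (magnitude // 2)       # smaller magnitude -> fewer coded bits
    parity_bit = magnitude % 2                  # conveyed implicitly via the P coefficients
    return first_coeff, parity_bit
```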
  • the processing unit 22 is specifically configured to perform parity hiding on part or all of the second quantization coefficient to obtain the first quantization coefficient if the current area satisfies a preset condition.
  • the preset conditions include at least one of the following:
  • the number of non-zero quantization coefficients in the current region is greater than the first preset value
  • the distance between the first non-zero quantization coefficient and the last non-zero quantization coefficient in the scanning order is greater than the second preset value
  • the distance between the first non-zero quantization coefficient and the last quantization coefficient in the scanning order is greater than the third preset value
  • the sum of absolute values of non-zero quantization coefficients is greater than the fourth preset value
  • the color component of the current block is the second component
  • the transformation type of the current block is not a first transformation type, and the first transformation type is used to indicate at least one direction skip transformation of the current block.
  • the second component is a brightness component.
  • the processing unit 22 is also configured to skip parity concealment of part or all of the second quantization coefficient to obtain the first quantization coefficient if the current block is transformed using a target transformation method. .
  • the target transformation method includes secondary transformation, multiple transformations or a first transformation type, and the first transformation type is used to indicate at least one direction skip transformation of the current block.
  • In some embodiments, the processing unit 22 is further configured to obtain at least one flag, where the at least one flag is used to indicate whether the parity of the quantized coefficients is allowed to be hidden; and, if it is determined according to the at least one flag that parity hiding is allowed for the current block, to perform parity hiding on part or all of the second quantization coefficient to obtain the first quantization coefficient.
  • the at least one flag includes at least one of a sequence level flag, an image level flag, a slice level flag, a unit level flag and a block level flag.
  • the first quantization coefficient is a non-zero quantization coefficient located at the Kth position in the scanning order in the current area, and the K is less than or equal to the number of non-zero quantization coefficients in the current area.
  • the device embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, they will not be repeated here.
  • the video encoding device 20 shown in FIG. 15 may correspond to the corresponding subject that performs the methods of the embodiments of the present application, and the aforementioned and other operations and/or functions of each unit in the video encoding device 20 are respectively intended to implement the corresponding processes in those methods; for the sake of brevity, they are not repeated here.
  • the software unit may be located in a mature storage medium in this field such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, register, etc.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps in the above method embodiment in combination with its hardware.
  • Figure 16 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 30 may be the video encoder or video decoder described in the embodiment of the present application.
  • the electronic device 30 may include:
  • a memory 33 and a processor 32, where the memory 33 is used to store the computer program 34 and transmit the program code of the computer program 34 to the processor 32.
  • the processor 32 can call and run the computer program 34 from the memory 33 to implement the method in the embodiment of the present application.
  • the processor 32 may be configured to perform steps in the above method according to instructions in the computer program 34 .
  • the processor 32 may include but is not limited to:
  • a Digital Signal Processor (DSP)
  • an Application Specific Integrated Circuit (ASIC)
  • a Field Programmable Gate Array (FPGA)
  • the memory 33 includes but is not limited to:
  • Non-volatile memory, which can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or flash memory. Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
  • By way of example but not limitation, many forms of RAM are available, such as:
  • static random access memory (Static RAM, SRAM)
  • dynamic random access memory (Dynamic RAM, DRAM)
  • synchronous dynamic random access memory (Synchronous DRAM, SDRAM)
  • double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM)
  • enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM)
  • synchlink dynamic random access memory (Synchlink DRAM, SLDRAM)
  • direct Rambus random access memory (Direct Rambus RAM, DR RAM)
  • the computer program 34 can be divided into one or more units, and the one or more units are stored in the memory 33 and executed by the processor 32 to complete the tasks provided by this application.
  • the one or more units may be a series of computer program instruction segments capable of completing specific functions. The instruction segments are used to describe the execution process of the computer program 34 in the electronic device 30 .
  • the electronic device 30 may also include:
  • a transceiver 33, where the transceiver 33 can be connected to the processor 32 or the memory 33.
  • the processor 32 can control the transceiver 33 to communicate with other devices. Specifically, it can send information or data to other devices, or receive information or data sent by other devices.
  • Transceiver 33 may include a transmitter and a receiver.
  • the transceiver 33 may further include an antenna, and the number of antennas may be one or more.
  • a bus system, where, in addition to the data bus, the bus system also includes a power bus, a control bus and a status signal bus.
  • Figure 17 is a schematic block diagram of the video encoding and decoding system 40 provided by the embodiment of the present application.
  • the video encoding and decoding system 40 may include a video encoder 41 and a video decoder 42, where the video encoder 41 is used to perform the video encoding method involved in the embodiments of the present application, and the video decoder 42 is used to perform the video decoding method involved in the embodiments of the present application.
  • This application also provides a computer storage medium on which a computer program is stored.
  • When the computer program is executed by a computer, the computer can perform the methods of the above method embodiments.
  • embodiments of the present application also provide a computer program product containing instructions, which when executed by a computer causes the computer to perform the method of the above method embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center in a wired manner (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or in a wireless manner (such as infrared, radio or microwave).
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the available media may be magnetic media (such as floppy disks, hard disks, magnetic tapes), optical media (such as digital video discs (DVD)), or semiconductor media (such as solid state disks (SSD)), etc.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separate.
  • a component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or it may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in various embodiments of the present application can be integrated into a processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.

Abstract

Embodiments of the present application provide a video encoding and decoding method, apparatus, device, system and storage medium. The parity of part or all of a second quantization coefficient in a current region is hidden by means of P quantization coefficients in the current region to obtain a first quantization coefficient, and the first quantization coefficient is encoded, which can reduce the number of bits required for encoding and lower the video compression cost. In addition, in the present application a target context model is re-determined for the first quantization coefficient for encoding and decoding, which enables accurate decoding of the first quantization coefficient whose parity is hidden.

Description

Video encoding and decoding method, apparatus, device, system and storage medium
Technical Field
The present application relates to the technical field of video encoding and decoding, and in particular to a video encoding and decoding method, apparatus, device, system and storage medium.
Background
Digital video technology can be incorporated into a variety of video apparatuses, such as digital televisions, smartphones, computers, e-readers or video players. With the development of video technology, video data involves a large amount of data; to facilitate its transmission, video apparatuses apply video compression technology so that the video data can be transmitted or stored more efficiently.
In the video compression process, the transform coefficients are quantized to facilitate encoding; the purpose of quantization is to scale the transform coefficients so that fewer bits are consumed when the coefficients are encoded. However, current quantization methods incur a relatively high coding cost.
Summary
Embodiments of the present application provide a video encoding and decoding method, apparatus, device, system and storage medium, so as to reduce the coding cost.
In a first aspect, the present application provides a video decoding method, comprising:
decoding a code stream to obtain P quantization coefficients in a current region, where the current region is a region of a current block that includes at least one non-zero quantization coefficient, and P is a positive integer;
determining, according to the P quantization coefficients, the parity of a quantization coefficient whose parity is hidden in a first quantization coefficient, where the first quantization coefficient is a quantization coefficient obtained after parity hiding is performed on all or part of a second quantization coefficient in the current region, and the second quantization coefficient is a quantization coefficient in the current region whose parity is not hidden;
determining a target context model corresponding to the first quantization coefficient, and using the target context model to decode the context-coded first quantization coefficient to obtain a decoded first quantization coefficient; and
determining the second quantization coefficient according to the parity of the quantization coefficient whose parity is hidden and the decoded first quantization coefficient.
In a second aspect, an embodiment of the present application provides a video encoding method, comprising:
dividing a current block into N regions, where N is a positive integer;
determining a second quantization coefficient in a current region, and performing parity hiding on part or all of the second quantization coefficient to obtain a first quantization coefficient, where the current region is a region of the N regions that includes at least one non-zero quantization coefficient; and
determining a target context model corresponding to the first quantization coefficient, and using the target context model to encode the first quantization coefficient to obtain a code stream, where the parity of a quantization coefficient whose parity is hidden in the first quantization coefficient is indicated by P quantization coefficients in the current region, and P is a positive integer.
In a third aspect, the present application provides a video encoder configured to perform the method in the above first aspect or its implementations. Specifically, the encoder includes functional units configured to perform the method in the above first aspect or its implementations.
In a fourth aspect, the present application provides a video decoder configured to perform the method in the above second aspect or its implementations. Specifically, the decoder includes functional units configured to perform the method in the above second aspect or its implementations.
In a fifth aspect, a video encoder is provided, including a processor and a memory. The memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to perform the method in the above first aspect or its implementations.
In a sixth aspect, a video decoder is provided, including a processor and a memory. The memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to perform the method in the above second aspect or its implementations.
In a seventh aspect, a video encoding and decoding system is provided, including a video encoder and a video decoder. The video encoder is configured to perform the method in the above first aspect or its implementations, and the video decoder is configured to perform the method in the above second aspect or its implementations.
In an eighth aspect, a chip is provided, configured to implement the method in any one of the above first to second aspects or their implementations. Specifically, the chip includes a processor configured to call and run a computer program from a memory, so that a device on which the chip is installed performs the method in any one of the above first to second aspects or their implementations.
In a ninth aspect, a computer-readable storage medium is provided, configured to store a computer program, where the computer program causes a computer to perform the method in any one of the above first to second aspects or their implementations.
In a tenth aspect, a computer program product is provided, including computer program instructions, where the computer program instructions cause a computer to perform the method in any one of the above first to second aspects or their implementations.
In an eleventh aspect, a computer program is provided which, when run on a computer, causes the computer to perform the method in any one of the above first to second aspects or their implementations.
In a twelfth aspect, a code stream is provided, including a code stream generated according to any implementation of the above second aspect.
Based on the above technical solutions, the decoder decodes the code stream to obtain P quantization coefficients in the current region, where the current region is a region of the current block that includes at least one non-zero quantization coefficient and P is a positive integer; determines, according to the P quantization coefficients, the parity of the quantization coefficient whose parity is hidden in the first quantization coefficient, where the first quantization coefficient is a quantization coefficient obtained after parity hiding is performed on all or part of the second quantization coefficient, and the second quantization coefficient is a quantization coefficient in the current region whose parity is not hidden; determines the target context model corresponding to the first quantization coefficient, and uses the target context model to decode the context-coded first quantization coefficient to obtain the decoded first quantization coefficient; and determines the second quantization coefficient according to the parity of the quantization coefficient whose parity is hidden and the decoded first quantization coefficient. In the embodiments of the present application, the parity of part or all of the second quantization coefficient in the current region is hidden by means of the P quantization coefficients in the current region to obtain the first quantization coefficient, and the first quantization coefficient is encoded during encoding, which can reduce the number of bits required for encoding and lower the video compression cost. In addition, in the embodiments of the present application, a target context model is re-determined for decoding the first quantization coefficient, which enables accurate decoding of the first quantization coefficient whose parity is hidden.
Brief Description of the Drawings
Figure 1 is a schematic block diagram of a video encoding and decoding system involved in an embodiment of the present application;
Figure 2 is a schematic block diagram of a video encoder provided by an embodiment of the present application;
Figure 3 is a schematic block diagram of a decoding framework provided by an embodiment of the present application;
Figures 4A to 4D are schematic diagrams of scanning orders involved in the present application;
Figures 5A to 5C are schematic diagrams of decoded coefficients around a first quantization coefficient involved in the present application;
Figures 5D to 5F are other schematic diagrams of decoded coefficients around a first quantization coefficient involved in the present application;
Figure 6 is a schematic flowchart of a video decoding method provided by an embodiment of the present application;
Figure 7 is a schematic diagram of region division involved in an embodiment of the present application;
Figure 8 is a schematic diagram of the scanning order of a transform block involved in an embodiment of the present application;
Figures 9A to 9C are schematic diagrams of decoded coefficients around a first quantization coefficient involved in the present application;
Figures 10A to 10C are other schematic diagrams of decoded coefficients around a first quantization coefficient involved in the present application;
Figure 11 is a schematic flowchart of a video decoding method provided by an embodiment of the present application;
Figure 12 is a schematic flowchart of a video encoding method provided by an embodiment of the present application;
Figure 13 is a schematic flowchart of a video encoding method provided by an embodiment of the present application;
Figure 14 is a schematic block diagram of a video decoding apparatus provided by an embodiment of the present application;
Figure 15 is a schematic block diagram of a video encoding apparatus provided by an embodiment of the present application;
Figure 16 is a schematic block diagram of an electronic device provided by an embodiment of the present application;
Figure 17 is a schematic block diagram of a video encoding and decoding system provided by an embodiment of the present application.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
本申请可应用于图像编解码领域、视频编解码领域、硬件视频编解码领域、专用电路视频编解码领域、实时视频编解码领域等。或者,本申请的方案可结合至其它专属或行业标准而操作,所述标准包含ITU-TH.261、ISO/IECMPEG-1Visual、ITU-TH.262或ISO/IECMPEG-2Visual、ITU-TH.263、ISO/IECMPEG-4Visual,ITU-TH.264(还称为ISO/IECMPEG-4AVC),包含可分级视频编解码(SVC)及多视图视频编解码(MVC)扩展。应理解,本申请的技术不限于任何特定编解码标准或技术。
为了便于理解,首先结合图1对本申请实施例涉及的视频编解码***进行介绍。
图1为本申请实施例涉及的一种视频编解码***的示意性框图。需要说明的是,图1只是一种示例,本申请实施例的视频编解码***包括但不限于图1所示。如图1所示,该视频编解码***100包含编码设备110和解码设备120。其中编码设备用于对视频数据进行编码(可以理解成压缩)产生码流,并将码流传输给解码设备。解码设备对编码设备编码产生的码流进行解码,得到解码后的视频数据。
本申请实施例的编码设备110可以理解为具有视频编码功能的设备,解码设备120可以理解为具有视频解码功能的设备,即本申请实施例对编码设备110和解码设备120包括更广泛的装置,例如包含智能手机、台式计算机、移动计算装置、笔记本(例如,膝上型)计算机、平板计算机、机顶盒、电视、相机、显示装置、数字媒体播放器、视频游戏控制台、车载计算机等。
在一些实施例中,编码设备110可以经由信道130将编码后的视频数据(如码流)传输给解码设备120。信道130可以包括能够将编码后的视频数据从编码设备110传输到解码设备120的一个或多个媒体和/或装置。
在一个实例中,信道130包括使编码设备110能够实时地将编码后的视频数据直接发射到解码设备120的一个或多个通信媒体。在此实例中,编码设备110可根据通信标准来调制编码后的视频数据,且将调制后的视频数据发射到解码设备120。其中通信媒体包含无线通信媒体,例如射频频谱,可选的,通信媒体还可以包含有线通信媒体,例如一根或多根物理传输线。
在另一实例中,信道130包括存储介质,该存储介质可以存储编码设备110编码后的视频数据。存储介质包含多种本地存取式数据存储介质,例如光盘、DVD、快闪存储器等。在该实例中,解码设备120可从该存储介质中获取编码后的视频数据。
在另一实例中,信道130可包含存储服务器,该存储服务器可以存储编码设备110编码后的视频数据。在此实例中,解码设备120可以从该存储服务器中下载存储的编码后的视频数据。可选的,该存储服务器可以存储编码后的视频数据且可以将该编码后的视频数据发射到解码设备120,例如web服务器(例如,用于网站)、文件传送协议(FTP)服务器等。
一些实施例中,编码设备110包含视频编码器112及输出接口113。其中,输出接口113可以包含调制器/解调器(调制解调器)和/或发射器。
在一些实施例中,编码设备110除了包括视频编码器112和输入接口113外,还可以包括视频源111。
视频源111可包含视频采集装置(例如,视频相机)、视频存档、视频输入接口、计算机图形***中的至少一个,其中,视频输入接口用于从视频内容提供者处接收视频数据,计算机图形***用于产生视频数据。
视频编码器112对来自视频源111的视频数据进行编码,产生码流。视频数据可包括一个或多个图像(picture)或图像序列(sequence of pictures)。码流以比特流的形式包含了图像或图像序列的编码信息。编码信息可以包含编 码图像数据及相关联数据。相关联数据可包含序列参数集(sequence parameter set,简称SPS)、图像参数集(picture parameter set,简称PPS)及其它语法结构。SPS可含有应用于一个或多个序列的参数。PPS可含有应用于一个或多个图像的参数。语法结构是指码流中以指定次序排列的零个或多个语法元素的集合。
视频编码器112经由输出接口113将编码后的视频数据直接传输到解码设备120。编码后的视频数据还可存储于存储介质或存储服务器上,以供解码设备120后续读取。
在一些实施例中,解码设备120包含输入接口121和视频解码器122。
在一些实施例中,解码设备120除包括输入接口121和视频解码器122外,还可以包括显示装置123。
其中,输入接口121包含接收器及/或调制解调器。输入接口121可通过信道130接收编码后的视频数据。
视频解码器122用于对编码后的视频数据进行解码,得到解码后的视频数据,并将解码后的视频数据传输至显示装置123。
显示装置123显示解码后的视频数据。显示装置123可与解码设备120整合或在解码设备120外部。显示装置123可包括多种显示装置,例如液晶显示器(LCD)、等离子体显示器、有机发光二极管(OLED)显示器或其它类型的显示装置。
此外,图1仅为实例,本申请实施例的技术方案不限于图1,例如本申请的技术还可以应用于单侧的视频编码或单侧的视频解码。
下面对本申请实施例涉及的视频编码器进行介绍。
图2是本申请实施例提供的视频编码器的示意性框图。应理解,该视频编码器200可用于对图像进行有损压缩(lossy compression),也可用于对图像进行无损压缩(lossless compression)。该无损压缩可以是视觉无损压缩(visually lossless compression),也可以是数学无损压缩(mathematically lossless compression)。
该视频编码器200可应用于亮度色度(YCbCr,YUV)格式的图像数据上。例如,YUV比例可以为4:2:0、4:2:2或者4:4:4,Y表示明亮度(Luma),Cb(U)表示蓝色色度,Cr(V)表示红色色度,U和V表示为色度(Chroma)用于描述色彩及饱和度。例如,在颜色格式上,4:2:0表示每4个像素有4个亮度分量,2个色度分量(YYYYCbCr),4:2:2表示每4个像素有4个亮度分量,4个色度分量(YYYYCbCrCbCr),4:4:4表示全像素显示(YYYYCbCrCbCrCbCrCbCr)。
例如,该视频编码器200读取视频数据,针对视频数据中的每帧图像,将一帧图像划分成若干个编码树单元(coding tree unit,CTU),在一些例子中,CTU可被称作“树型块”、“最大编码单元”(Largest Coding unit,简称LCU)或“编码树型块”(coding tree block,简称CTB)。每一个CTU可以与图像内的具有相等大小的像素块相关联。每一像素可对应一个亮度(luminance或luma)采样及两个色度(chrominance或chroma)采样。因此,每一个CTU可与一个亮度采样块及两个色度采样块相关联。一个CTU大小例如为128×128、64×64、32×32等。一个CTU又可以继续被划分成若干个编码单元(Coding Unit,CU)进行编码,CU可以为矩形块也可以为方形块。CU可以进一步划分为预测单元(prediction Unit,简称PU)和变换单元(transform unit,简称TU),进而使得编码、预测、变换分离,处理的时候更灵活。在一种示例中,CTU以四叉树方式划分为CU,CU以四叉树方式划分为TU、PU。
视频编码器及视频解码器可支持各种PU大小。假定特定CU的大小为2N×2N,视频编码器及视频解码器可支持2N×2N或N×N的PU大小以用于帧内预测,且支持2N×2N、2N×N、N×2N、N×N或类似大小的对称PU以用于帧间预测。视频编码器及视频解码器还可支持2N×nU、2N×nD、nL×2N及nR×2N的不对称PU以用于帧间预测。
在一些实施例中,如图2所示,该视频编码器200可包括:预测单元210、残差单元220、变换/量化单元230、反变换/量化单元240、重建单元250、环路滤波单元260、解码图像缓存270和熵编码单元280。需要说明的是,视频编码器200可包含更多、更少或不同的功能组件。
可选的,在本申请中,当前块可以称为当前编码单元(CU)或当前预测单元(PU)等。预测块也可称为预测图像块或图像预测块,重建图像块也可称为重建块或图像重建图像块。
在一些实施例中,预测单元210包括帧间预测单元211和帧内预测单元212。由于视频的一个帧中的相邻像素之间存在很强的相关性,在视频编解码技术中使用帧内预测的方法消除相邻像素之间的空间冗余。由于视频中的相邻帧之间存在着很强的相似性,在视频编解码技术中使用帧间预测方法消除相邻帧之间的时间冗余,从而提高编码效率。
帧间预测单元211可用于帧间预测,帧间预测可以参考不同帧的图像信息,帧间预测使用运动信息从参考帧中找到参考块,根据参考块生成预测块,用于消除时间冗余;帧间预测所使用的帧可以为P帧和/或B帧,P帧指的是向前预测帧,B帧指的是双向预测帧。运动信息包括参考帧所在的参考帧列表,参考帧索引,以及运动矢量。运动矢量可以是整像素的或者是分像素的,如果运动矢量是分像素的,那么需要再参考帧中使用插值滤波做出所需的分像素的块,这里把根据运动矢量找到的参考帧中的整像素或者分像素的块叫参考块。有的技术会直接把参考块作为预测块,有的技术会在参考块的基础上再处理生成预测块。在参考块的基础上再处理生成预测块也可以理解为把参考块作为预测块然后再在预测块的基础上处理生成新的预测块。
帧内预测单元212只参考同一帧图像的信息,预测当前码图像块内的像素信息,用于消除空间冗余。帧内预测所使用的帧可以为I帧。例如图5所示,白色的4×4块是当前块,当前块左边一行和上面一列的灰色的像素为当前块的参考像素,帧内预测使用这些参考像素对当前块进行预测。这些参考像素可能已经全部可得,即全部已经编解码。也可能有部分不可得,比如当前块是整帧的最左侧,那么当前块的左边的参考像素不可得。或者编解码当前块时,当前块左下方的部分还没有编解码,那么左下方的参考像素也不可得。对于参考像素不可得的情况,可以使用可得的参考像素或某些值或某些方法进行填充,或者不进行填充。
在一些实施例中,帧内预测方法还包括多参考行帧内预测方法(multiple reference line,MRL),MRL可以使用更多的参考像素从而提高编码效率。
帧内预测有多种预测模式，例如H.264中对4×4的块进行帧内预测共有9种模式。其中模式0是将当前块上面的像素按竖直方向复制到当前块作为预测值；模式1是将左边的参考像素按水平方向复制到当前块作为预测值；模式2(DC)是将A~D和I~L这8个点的平均值作为所有点的预测值，模式3至模式8是分别按某一个角度将参考像素复制到当前块的对应位置。因为当前块某些位置不能正好对应到参考像素，可能需要使用参考像素的加权平均值，或者说是插值的参考像素的分像素。
HEVC使用的帧内预测模式有平面模式(Planar)、DC和33种角度模式,共35种预测模式。VVC使用的帧内模式有Planar、DC和65种角度模式,共67种预测模式。AVS3使用的帧内模式有DC、Plane、Bilinear和63种角度模式,共66种预测模式。
需要说明的是,随着角度模式的增加,帧内预测将会更加精确,也更加符合对高清以及超高清数字视频发展的需求。
残差单元220可基于CU的像素块及CU的PU的预测块来产生CU的残差块。举例来说,残差单元220可产生CU的残差块,使得残差块中的每一采样具有等于以下两者之间的差的值:CU的像素块中的采样,及CU的PU的预测块中的对应采样。
变换/量化单元230可量化变换系数。变换/量化单元230可基于与CU相关联的量化参数(QP)值来量化与CU的TU相关联的变换系数。视频编码器200可通过调整与CU相关联的QP值来调整应用于与CU相关联的变换系数的量化程度。
反变换/量化单元240可分别将逆量化及逆变换应用于量化后的变换系数,以从量化后的变换系数重建残差块。
重建单元250可将重建后的残差块的采样加到预测单元210产生的一个或多个预测块的对应采样,以产生与TU相关联的重建图像块。通过此方式重建CU的每一个TU的采样块,视频编码器200可重建CU的像素块。
环路滤波单元260可执行消块滤波操作以减少与CU相关联的像素块的块效应。
在一些实施例中,环路滤波单元260包括去块滤波单元、样点自适应补偿SAO单元、自适应环路滤波ALF单元。
解码图像缓存270可存储重建后的像素块。帧间预测单元211可使用含有重建后的像素块的参考图像来对其它图像的PU执行帧间预测。另外,帧内预测单元212可使用解码图像缓存270中的重建后的像素块来对在与CU相同的图像中的其它PU执行帧内预测。
熵编码单元280可接收来自变换/量化单元230的量化后的变换系数。熵编码单元280可对量化后的变换系数执行一个或多个熵编码操作以产生熵编码后的数据。
本申请涉及的视频编码的基本流程如下:在编码端,将当前图像划分成块,针对当前块,预测单元210使用帧内预测或帧间预测产生当前块的预测块。残差单元220可基于预测块与当前块的原始块计算残差块,即预测块和当前块的原始块的差值,该残差块也可称为残差信息。该残差块经由变换/量化单元230变换与量化等过程,可以去除人眼不敏感的信息,以消除视觉冗余。可选的,经过变换/量化单元230变换与量化之前的残差块可称为时域残差块,经过变换/量化单元230变换与量化之后的时域残差块可称为频率残差块或频域残差块。熵编码单元280接收到变换量化单元230输出的量化后的变换系数,可对该量化后的变换系数进行熵编码,输出码流。例如,熵编码单元280可根据目标上下文模型以及二进制码流的概率信息消除字符冗余。
另外,视频编码器对变换量化单元230输出的量化后的变换系数进行反量化和反变换,得到当前块的残差块,再将当前块的残差块与当前块的预测块进行相加,得到当前块的重建块。随着编码的进行,可以得到当前图像中其他图像块对应的重建块,这些重建块进行拼接,得到当前图像的重建图像。由于编码过程中引入误差,为了降低误差,对重建图像进行滤波,例如,使用ALF对重建图像进行滤波,以减小重建图像中像素点的像素值与当前图像中像素点的原始像素值之间差异。将滤波后的重建图像存放在解码图像缓存270中,可以为后续的帧作为帧间预测的参考帧。
需要说明的是,编码端确定的块划分信息,以及预测、变换、量化、熵编码、环路滤波等模式信息或者参数信息等在必要时携带在码流中。解码端通过解析码流及根据已有信息进行分析确定与编码端相同的块划分信息,预测、变换、量化、熵编码、环路滤波等模式信息或者参数信息,从而保证编码端获得的解码图像和解码端获得的解码图像相同。
图3是本申请实施例提供的视频解码器的示意性框图。
如图3所示,视频解码器300包含:熵解码单元310、预测单元320、反量化/变换单元330、重建单元340、环路滤波单元350及解码图像缓存360。需要说明的是,视频解码器300可包含更多、更少或不同的功能组件。
视频解码器300可接收码流。熵解码单元310可解析码流以从码流提取语法元素。作为解析码流的一部分,熵解码单元310可解析码流中的经熵编码后的语法元素。预测单元320、反量化/变换单元330、重建单元340及环路滤波单元350可根据从码流中提取的语法元素来解码视频数据,即产生解码后的视频数据。
在一些实施例中,预测单元320包括帧间预测单元321和帧内预测单元322。
帧内预测单元322可执行帧内预测以产生PU的预测块。帧内预测单元322可使用帧内预测模式以基于空间相邻PU的像素块来产生PU的预测块。帧内预测单元322还可根据从码流解析的一个或多个语法元素来确定PU的帧内预测模式。
帧间预测单元321可根据从码流解析的语法元素来构造第一参考图像列表(列表0)及第二参考图像列表(列表1)。此外，如果PU使用帧间预测编码，则熵解码单元310可解析PU的运动信息。帧间预测单元321可根据PU的运动信息来确定PU的一个或多个参考块。帧间预测单元321可根据PU的一个或多个参考块来产生PU的预测块。
反量化/变换单元330可逆量化(即,解量化)与TU相关联的变换系数。反量化/变换单元330可使用与TU的CU相关联的QP值来确定量化程度。
在逆量化变换系数之后，反量化/变换单元330可将一个或多个逆变换应用于逆量化变换系数，以便产生与TU相关联的残差块。
重建单元340使用与CU的TU相关联的残差块及CU的PU的预测块以重建CU的像素块。例如,重建单元340可将残差块的采样加到预测块的对应采样以重建CU的像素块,得到重建图像块。
环路滤波单元350可执行消块滤波操作以减少与CU相关联的像素块的块效应。
在一些实施例中,环路滤波单元350包括去块滤波单元、样点自适应补偿SAO单元、自适应环路滤波ALF单元。
视频解码器300可将CU的重建图像存储于解码图像缓存360中。视频解码器300可将解码图像缓存360中的重建图像作为参考图像用于后续预测,或者,将重建图像传输给显示装置呈现。
本申请涉及的视频解码的基本流程如下:熵解码单元310可解析码流得到当前块的预测信息、量化系数矩阵等,预测单元320基于预测信息对当前块使用帧内预测或帧间预测产生当前块的预测块。反量化/变换单元330使用从码流得到的量化系数矩阵,对量化系数矩阵进行反量化、反变换得到残差块。重建单元340将预测块和残差块相加得到重建块。重建块组成重建图像,环路滤波单元350基于图像或基于块对重建图像进行环路滤波,得到解码图像。该解码图像也可以称为重建图像,该重建图像一方面可以被显示设备进行显示,另一方面可以存放在解码图像缓存360中,为后续的帧作为帧间预测的参考帧。
上述是基于块的混合编码框架下的视频编解码器的基本流程,随着技术的发展,该框架或流程的一些模块或步骤可能会被优化,本申请适用于该基于块的混合编码框架下的视频编解码器的基本流程,但不限于该框架及流程。
下面对本申请涉及的量化相关技术进行介绍。
量化、反量化与系数编码部分关系紧密,量化的目的是将变换系数缩放,从而使得编码系数时消耗的比特数减少。
在一些实施例中,存在跳过变换的情况,此时量化的对象为残差,即直接对残差进行缩放后,进行编码。
示例性的,可以采用如下公式(1)实现量化:
q_i=round(t_i/qstep)     (1)
其中，t_i为变换系数，qstep为量化步长，与配置文件中设置的量化参数相关，q_i为量化系数，round为取整过程，不限于上取整或下取整等，量化的过程受编码器控制。
示例性的,可以采用如下公式(2)实现反量化:
t′_i=q_i·qstep     (2)
其中，t′_i为重建的变换系数，由于取整过程中造成的精度损失，t′_i与t_i是不同的。
量化会使得变换系数的精度降低,并且精度的损失是不可逆的。编码器通常通过率失真代价函数来衡量量化的代价。
示例性的,采用如下公式(3)确定量化代价:
J=D+λ·R=(t_i-t′_i)^2+λ·B(q_i)   (3)
其中，D为失真，R为码率，λ为拉格朗日乘子，B()为编码器估计编码量化系数q_i所消耗的比特数。
理论上，无论编码器如何决定q_i的取值，解码端的反量化过程都是不变的，所以编码器可以更自由的决定q_i。通常的，编码器会根据当前块总的代价最小为原则，对每一个q_i进行调整以达到整体代价最优化，这样的过程叫做率失真优化量化，也被广泛使用在视频编码中。
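为便于理解上述量化、反量化与率失真代价的计算过程，下面给出一段示意性的Python代码草图。该草图仅用于说明公式(1)~(3)的含义，并非任何编码器的实际实现；其中的qstep、λ取值以及比特估计函数estimate_bits均为说明用的假设。

def quantize(t, qstep):
    # 公式(1)：对变换系数t按量化步长qstep缩放并取整(此处示意性地采用四舍五入)
    return int(round(t / qstep))

def dequantize(q, qstep):
    # 公式(2)：反量化得到重建的变换系数t'
    return q * qstep

def rd_cost(t, q, qstep, lam, estimate_bits):
    # 公式(3)：J = D + λ·R，其中失真D取重建误差的平方，estimate_bits对应文中的B()
    t_rec = dequantize(q, qstep)
    return (t - t_rec) ** 2 + lam * estimate_bits(q)

# 用法示意：假设qstep=8、λ=0.5，比特估计简单地用|q|+1近似(仅为演示)
t = 37.0
q = quantize(t, 8.0)                                   # q = 5
print(q, dequantize(q, 8.0), rd_cost(t, q, 8.0, 0.5, lambda x: abs(x) + 1))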
在一些实施例中,可以使用多符号的算数编码(Multi-symbol arithmetic coding)方式对量化系数进行编码或解码,每个量化系数都可以由一个或多个多符号的标识来指示,具体的,根据量化系数的大小,可分段由以下几种多符号的标识位表示。
标识1:表示0~3的部分,共4个符号(0,1,2,3),当标识1的符号为3时,则需要进一步编码/解码标识2。
标识2:表示3~6的部分,共4个符号(0,1,2,3),当标识2的符号为3时,则需要进一步编码/解码标识3。
标识3:表示6~9的部分,共4个符号(0,1,2,3),当标识3的符号为3时,则需要进一步编码/解码标识4。
标识4:表示9~12的部分,共4个符号(0,1,2,3),当标识4的符号为3时,则需要进一步编码/解码标识5。
标识5:表示12~15的部分,共4个符号(0,1,2,3),当标识5的符号为3时,则需要进一步编码/解码大于等于15的部分。
大于等于15的部分使用指数哥伦布编码/解码且不依赖上下文模型,而标识1~5则使用上下文模型,其中标识1有一套单独的上下文模型,而标识2~5公用一套上下文模型。另外若当前系数为非零系数时,还需要编码/解码正负号。各个标识在变换块中的编码/解码过程如下:
首先,按照从最后一个非零系数至变换块左上角的扫描顺序,编码/解码标识1-5。
其中,标识1表示的系数部分称为基本部分(Base Range,BR),标识2~5标识的系数部分称为较低部分(Lower Range,LR),大于等于15的部分称为较高部分(Higher Range,HR)。
解码得到的量化系数索引qIdx为标识1~5的总和加上超过15的部分。示例性的,如公式(4)所示:
qIdx=BR+∑LR     (4)
由于LR部分包括了标识2~5这4个标识，故这里用∑LR表示标识2~5部分的求和。
在没有引入奇偶隐藏技术时,量化系数的绝对值level=qIdx。
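为直观说明上述标识1~5与大于等于15部分的拆分关系，下面给出一段示意性的Python代码草图，将一个量化系数的绝对值拆分为各标识的符号值与剩余部分。该草图仅为按上文描述的示意，并非实际的熵编码实现。

def split_level_into_flags(abs_level):
    # 将量化系数绝对值拆分为标识1~5(每个标识的符号取值0~3)以及大于等于15的剩余部分
    flags = []
    remaining = abs_level
    for _ in range(5):                # 依次对应标识1~5
        sym = min(remaining, 3)
        flags.append(sym)
        remaining -= sym
        if sym < 3:                   # 当前标识不为3时，后续标识不再编码，默认取0
            flags.extend([0] * (5 - len(flags)))
            break
    return flags, remaining           # remaining即需用指数哥伦布方式编码的较高部分(HR)

# 用法示意：绝对值17拆分为标识1~5均为3(合计15)与剩余部分2；绝对值5拆分为3、2与0
print(split_level_into_flags(17))     # ([3, 3, 3, 3, 3], 2)
print(split_level_into_flags(5))      # ([3, 2, 0, 0, 0], 0)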
接着，分别按照从变换块左上角到最后一个非零系数的顺序，编码/解码非零系数的正负号和系数超过15的部分，其中，左上角系数若为非零，其正负号的编码/解码采用上下文模型编码，其余位置上的非零系数编码/解码采用等概率模型编码。
示例性的,编码语法如下表1所示:
表1
上述表1中S()为多符号的上下文模型编码,L(1)为旁路编码。
在一些实施例中,在编码量化系数时,会根据选中的变换模式不同而选择不同的系数编码顺序(即扫描顺序)。示例性的,变换共包含了以下16种,这16种变换与扫描顺序的对应关系如表2所示:
表2
表2中，Transform Type表示变换类型，Vertical Mode表示竖直方向的变换类型，Horizontal Mode表示水平方向的变换类型。Transform Type包括1D Transform Type和2D Transform Type，其中1D Transform Type的扫描方式会根据水平和垂直方向的不同分为行扫描和列扫描。2D Transform Type会分为Zig-Zag scan(之字形扫描)、Diagonal scan(对角线扫描)。
在一些实施例中,如图4A至图4D所示,系数编码和解码时,采用了Zig-Zag scan(之字形扫描)、Diagonal scan(对角线扫描)、Column scan(列扫描)和Row scan(行扫描)这四种扫描顺序,图中的数字表示扫描顺序的索引。由于变换可以将能量集中在变换块的左上角,实际的系数解码顺序定义为逆扫描顺序,即从变换块的右下角的第一个非零系数开始,按解码顺序进行顺序解码。例如图4A的Zig-Zag扫描顺序是从0,1…,15…,对应的解码顺序为…15,14…,1,0。如果按解码顺序的第一个非零系数位置是索引12,则系数实际解码顺序是12,11,…,1,0。
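作为对上述几种扫描顺序的一个直观说明，下面给出一段示意性的Python代码草图，用于生成一个矩形块的之字形扫描顺序并得到对应的逆序解码顺序。具体的索引走向(例如首条对角线的方向)在不同标准或图4A中可能略有差异，此处仅为一种常见的之字形扫描的示意。

def zigzag_scan_order(rows, cols):
    # 生成rows×cols块的之字形扫描顺序，返回按扫描索引排列的(行,列)位置列表
    order = []
    for s in range(rows + cols - 1):             # 逐条反对角线遍历
        diag = [(r, s - r) for r in range(rows) if 0 <= s - r < cols]
        if s % 2 == 0:
            diag.reverse()                        # 相邻对角线方向交替，形成"之"字形走向
        order.extend(diag)
    return order

# 用法示意：4×4块的扫描顺序，解码顺序为其逆序(从右下角向左上角)
scan = zigzag_scan_order(4, 4)
decode_order = list(reversed(scan))
print(scan[:6])                                   # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]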
系数解码的过程中，会根据扫描顺序的不同决定选取周围哪些已解码的系数部分作为参考，以选择当前系数的上下文模型。以Zig-Zag scan、Column scan和Row scan这3种扫描方式为例，在解码当前系数的标识1(即BR)部分的符号时，如图5A至图5C所示，根据当前系数周围已解码系数的绝对值小于等于3的部分，确定标识1(即BR)对应的上下文模型，并使用确定的标识1(即BR)对应的上下文模型对当前系数的标识1(即BR)部分进行解码。再例如，在解码当前系数的标识2~5(即LR)部分时，如图5D至图5F所示，也根据扫描顺序的不同选取不同的周围系数，并根据选取的周围系数的绝对值小于等于15部分，确定标识2~5(即LR)部分对应的上下文模型，以使用确定的标识2~5部分对应的上下文模型对当前系数的标识2~5(即LR)部分进行解码。
同时,在确定当前系数对应的上下文模型时,还需要考虑当前系数是否为最后一个非零系数,当前系数的位置距离变换块左上角的距离,变换块的大小等条件。
在一些实施例中，编解码标识1(即BR)时，共需要根据量化参数QP从4个QP段、5种变换块大小、亮度或色度分量、42种位置距离，以及周围已编解码系数绝对值小于等于3部分的和的大小，共4×5×2×42即1680个上下文模型中，选取一个模型进行编解码和模型概率值的更新。
在一些实施例中，编解码标识2~5(即LR)时，共需要根据量化参数QP从4个QP段、5种变换块大小、亮度或色度分量、21种位置距离，以及周围已编解码系数绝对值小于等于15部分的和的大小，共4×5×2×21即840个上下文模型中，选取一个模型进行编解码和模型概率值的更新。
在一些实施例中,确定上下文模型索引值的计算方式如表3所示:
表3
表3中，Θ为图5A至图5C中几种扫描方式下当前系数周围的5个已解码量化系数，Φ为图5D至图5F中几种扫描方式下当前系数周围的3个已解码量化系数。表3中DC表示位于变换块中左上角位置。根据上述表3所述的方式，确定出量化系数对应的上下文模型后，使用该上下文模型对量化系数进行编解码。
目前量化系数的编码方式通常是,对量化系数的正负号和绝对值进行完全编码,其占用比特位较多,编码代价大。
本申请实施例,在量化系数的编码过程中,根据当前区域中量化系数相关的奇偶性,对当前区域中的至少一个量化系数进行隐藏,以降低编码代价。
下面结合具体的实施例对本申请实施例提供的技术方案进行详细描述。
首先结合图6,以解码端为例进行介绍。
图6为本申请实施例提供的一种视频解码方法的流程示意图，本申请实施例应用于图1和图3所示视频解码器。如图6所示，本申请实施例的方法包括：
S401、解码码流,得到当前区域中的P个量化系数。
其中,当前区域为当前块中包括至少一个非零量化系数的区域。
本申请实施例,将当前块划分为一个或多个区域,例如划分为N个区域,N为正整数,当前区域为当前块的N个区域中包括至少一个非零量化系数的区域。
为了降低编码代价,编码端可以根据同一个区域中量化系数相关的奇偶性,例如根据该区域中量化系数的绝对值之和的奇偶性,对该区域中一个或多个量化系数的奇偶性进行隐藏,以降低奇偶性被隐藏的量化系数,例如,奇偶性待隐藏的第二量化系数为a1,对第二量化系数的全部或部分进行奇偶性隐藏,得到第一量化系数a2,a2小于a1,这样对a2进行编码相比于对a1进行编码,使用更少的比特数,降低了编码代价。
由上述可知,本申请实施例中,编码端对第二量化系数中的部分或全部的奇偶性进行隐藏,得到第一量化系数,此时,第一量化系数中包括奇偶性被隐藏的量化系数,本申请实施例,通过当前区域中P个量化系数来指示第一量化系数中奇偶性被隐藏的量化系数的奇偶性,以及解码端根据第一量化系数中奇偶性被隐藏的量化系数的奇偶性,对奇偶性被隐藏的量化系数进行重建,得到重建的第二量化系数。基于此,解码端在解码第一量化系数时,首先解码码流,得到当前区域中的P个量化系数。
在一种示例中,上述P个量化系数为当前区域中除第一量化系数外的所有量化系数。
在另一种示例中,上述P个量化系数为当前区域中的部分量化系数。
可选的,上述P个量化系数中不包括当前区域中的第一量化系数。
在一些实施例中,上述量化系数是对变换系数进行量化得到的。例如,编码端对当前块进行预测,得到当前块的 预测值,当前块的原始值与预测值进行相减,得到当前块的残差值。编码端对残差值进行变换,得到当前块的变换系数,接着,对变换系数进行量化,得到量化系数,再对量化系数进行编码,得到码流。这样,解码端接收到码流中,解码码流,得到量化系数,对量化系数进行反量化,得到当前块的变换系数,再对变换系数进行反变换,得到当前块的残差值。可选的,在该实例中,上述当前块也可以称为当前变换块。
在一些实施例中,上述量化系数是对残差值进行量化得到的。例如,编码端对当前块进行预测,得到当前块的预测值,当前块的原始值与预测值进行相减,得到当前块的残差值。编码端对残差值进行量化,得到量化系数,再对量化系数进行编码,得到码流。这样,解码端接收到码流中,解码码流,得到量化系数,对量化系数进行反量化,得到当前块的残差值。
在一些实施例中,量化系数可以理解为从码流中解码出的绝对值以及正负号组成的数值,该绝对值包括量化系数对应的标识位的值,若该量化系数的值超过15,则还包括超过15部分的绝对值。
本申请实施例中,可以将当前块划分为N个区域,这N个区域大小可以相同,也可以不同。
本申请实施例中,解码端和编码端采用相同的区域划分方式,将当前块划分为N个区域。
在一些实施例中,解码端和编码端均采用默认的区域划分方式,将当前块划分为N个区域。
在一些实施例中,编码端可以将当前块的区域划分方式指示给解码端。例如,在码流中编入一标志A,该标志A用于指示当前块的区域划分方式。这样,解码端通过解码码流,得到该标志A,并根据该标志A,确定当前块的区域划分方式。
可选的,该标志A可以为序列级标志,用于指示该序列中的所有解码块均可以采用该区域划分方式,将解码块划分为N个区域。
可选的,该标志A可以为帧级标志,用于指示该图像帧中的所有解码块均可以采用该区域划分方式,将解码块划分为N个区域。
可选的,该标志A可以为片级标志,用于指示该图像片中的所有解码块均可以采用该区域划分方式,将解码块划分为N个区域。
可选的,该标志A可以为块级标志,用于指示当前块可以采用该区域划分方式,将当前块划分为N个区域。
本申请实施例中,解码端在确定当前区域中的P个量化系数的过程包括但不限于如下几种示例:
示例1,解码端逐区域进行解码,以得到每个区域中奇偶性未被隐藏的量化系数,例如,解码端在解码出一个区域中奇偶性未被隐藏的量化系数之后,确定该区域的P个量化系数,接着,解码下一个区域中奇偶性未被隐藏的量化系数,并确定该下一个区域的P个量化系数。也就是说,在该实施例中,解码端可以在未完全解码出当前块的所有奇偶性未被隐藏的量化系数时,确定出当前区域中的P个量化系数。例如,区域的划分方式为将扫描方向上的K个像素点作为一个区域,这样,解码端在解码出K个量化系数中奇偶性未被隐藏的量化系数时,从这K个量化系数中奇偶性未被隐藏的量化系数中,确定出P个量化系数。
示例2,解码端在确定出当前块的所有奇偶性未被隐藏的量化系数之后,确定当前区域中的P个量化系数。具体包括如下步骤:
S401-A1、解码码流,得到当前块的已解码信息;
S401-A2、将当前块划分为N个区域,N为正整数;
S401-A3、从当前块的已解码信息中,得到当前区域中的P个量化系数。
在该示例2中,解码端首先解码码流,得到当前块的已解码信息,该已解码信息中包括当前块中不同区域的量化系数。接着,根据区域划分方式,将当前块划分为N个区域。假设当前区域为这N个区域中的第k个区域,则将当前块的已解码信息中,第k个区域对应的已解码信息,确定为当前区域的已解码信息。当前区域的已解码信息中包括当前区域中的P个量化系数。
上述S401-A2中将当前块划分为N个区域的具体方式包括但不限于如下几种:
方式1,根据扫描顺序,将当前块划分为N个区域。
可选的,这N个区域中至少两个区域所包括的量化系数个数相同。
可选的,这N个区域中至少两个区域所包括的量化系数个数不相同。
在一种示例中,按照扫描方向,将当前块中每隔M个非零量化系数划分为一个区域,得到N个区域。这N个区域中每个区域均包括M个非零量化系数。这N个区域中至少一个区域包括一个或多个隐藏系数。在该示例中,若最后一个区域所包括的非零量化系数不为M时,则将该最后一个区域划分为单独的一个区域,或者,将该最后一个区域与上一个区域合并为一个区域。
在另一种示例中,按照扫描方向,将当前块中每隔K个像素点划分为一个区域,得到N个区域。例如,对于一个使用反向ZigZag扫描顺序的8x8大小的变换块,当每个区域大小相等,即每个区域包含16个系数时,图7所示,将当前块划分为4个区域。在该示例中,划分得到的N个区域大小相同,每个区域均包括K个像素点,这N个区域中可能存在量化系数全为0的区域,也可能存在不包括隐藏系数的区域。也就是说,N个区域中至少一个区域包括一个或多个隐藏系数。在该示例中,若最后一个区域所包括的量化系数不为K时,则将该最后一个区域划分为单独的一个区域,或者,将该最后一个区域与上一个区域合并为一个区域。
方式2,根据空间位置,将当前块划分为N个区域。
在一种示例中,上述N个区域为当前块的子块,例如将当前块平均划分为N个子块,示例性的,每个子块的大小为4*4。在该示例中,N个区域中可能存在量化系数全为0的区域,也可能存在不包括隐藏系数的区域。也就是说,N个区域中至少一个区域包括一个或多个隐藏系数。
在另一种示例中,根据当前块中像素点的空间位置关系,将空间位置上相邻的多个像素点划分为一个子块,每个子块中至少包括一个非零量化系数。
本申请实施例中,将当前块划分为N个区域的方法除上述各示例外,还可以包括其他的方法,本申请实施例对此不做限制。
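以上述方式1中按扫描顺序将每K个位置划分为一个区域的做法为例，下面给出一段示意性的Python代码草图，用于得到各区域覆盖的扫描索引范围。其中的区域大小16仅沿用文中8×8变换块的示例取值，并非固定限定。

def split_into_regions(num_positions, region_size=16):
    # 按扫描顺序把变换块内的位置每region_size个划分为一个区域，返回各区域包含的扫描索引
    regions = []
    start = 0
    while start < num_positions:
        end = min(start + region_size, num_positions)
        regions.append(list(range(start, end)))
        start = end
    return regions

# 用法示意：8×8变换块共64个位置，划分为4个大小为16的区域(对应图7的划分结果)
print([(r[0], r[-1]) for r in split_into_regions(64)])   # [(0, 15), (16, 31), (32, 47), (48, 63)]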
本申请实施例的当前区域包括至少一个第一量化系数,在一些实施例中,第一量化系数也称为隐藏系数。
在一种示例中,第一量化系数可以为编解码端默认的当前区域中任意一个非零的量化系数。
在另一种示例中,第一量化系数可以为当前区域中绝对值最大的非零量化系数。
在另一种示例中,第一量化系数为当前区域中位于扫描顺序的第K个位置处的非零量化系数,该K小于或等于当前区域中非零量化系数个数。例如,如图7所示,当前区域中包含16个系数,可以将扫描顺序上的第16个非零量化系数和/或第15个非零量化系数作为当前区域的第一量化系数。
在一些实施例中,可以通过标志来指示当前块是否允许采用本申请实施例提供的量化系数奇偶性被隐藏的技术。可选的,本申请实施例提供的量化系数奇偶性被隐藏的技术也称为奇偶性隐藏技术。
示例性的,设置的至少一个标志可以是不同级别的标志,用于指示对应级别是否允许量化系数的奇偶性被隐藏。
可选的,上述至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志中的至少一个。
例如,上述至少一个标志包括序列级标志,该序列级标志用于指示当前序列是否允许量化系数的奇偶性被隐藏。例如,若该序列级标志的取值为1时,指示当前序列允许量化系数的奇偶性被隐藏,若该序列级标志的取值为0时,指示当前序列不允许量化系数的奇偶性被隐藏。
可选的,若至少一个标志包括序列级标志时,该序列级标志可以位于序列头(sequence header)中。
再例如,上述至少一个标志包括图像级标志,该图像级标志用于指示当前图像是否允许量化系数的奇偶性被隐藏。例如,若该图像级标志的取值为1时,指示当前图像允许量化系数的奇偶性被隐藏,若该图像级标志的取值为0时,指示当前图像不允许量化系数的奇偶性被隐藏。
可选的,若至少一个标志包括图像级标志时,该图像级标志可以位于图像头(picture header)中。
再例如,上述至少一个标志包括片级标志,该片级标志用于指示当前片(slice)是否允许量化系数的奇偶性被隐藏。例如,若该片级标志的取值为1时,指示当前片允许量化系数的奇偶性被隐藏,若该片级标志的取值为0时,指示当前片不允许量化系数的奇偶性被隐藏。
可选的,若至少一个标志包括片级标志时,该片级标志可以位于片头(slice header)中。
再例如,上述至少一个标志包括单元级标志,该单元级标志用于指示当前CTU是否允许量化系数的奇偶性被隐藏。例如,若该单元级标志的取值为1时,指示当前CTU允许量化系数的奇偶性被隐藏,若该单元级标志的取值为0时,指示当前CTU不允许量化系数的奇偶性被隐藏。
再例如,上述至少一个标志包括块级标志,该块级标志用于指示当前块是否允许量化系数的奇偶性被隐藏。例如,若该块级标志的取值为1时,指示当前块允许量化系数的奇偶性被隐藏,若该块级标志的取值为0时,指示当前块不允许量化系数的奇偶性被隐藏。
这样,解码端首先解码码流,得到上述至少一个标志,根据这至少一个标志,判断当前块是否允许量化系数的奇偶性被隐藏。若根据上述至少一个标志,确定当前块不允许量化系数的奇偶性被隐藏时,则跳过本申请实施例的方法,直接将上述解码出的量化系数进行反量化,得到变换系数。若根据上述至少一个标志,确定当前块允许使用本申请实施例提供的奇偶性隐藏技术时,则执行本申请实施例的方法。
例如,若上述至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志。此时,解码端首先解码码流,得到序列级标志,若该序列级标志指示当前序列不允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则跳过本申请实施例的方法,采用传统的方法,对当前块进行反量化。若该序列级标志指示当前序列允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则继续解码码流,得到图像级标志,若该图像级标志指示当前图像不允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则跳过本申请实施例的方法,采用传统的方法,对当前块进行反量化。若该图像级标志指示当前图像允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则继续解码码流,得到片级标志,若该片级标志指示当前片不允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则跳过本申请实施例的方法,采用传统的方法,对当前块进行反量化。若该片级标志指示当前片允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则继续解码码流,得到单元级标志,若该单元级标志指示当前CTU不允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则跳过本申请实施例的方法,采用传统的方法,对当前块进行反量化。若该单元级标志指示当前CTU允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则继续解码码流,得到块级标志,若该块级标志指示当前块不允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则跳过本申请实施例的方法,采用传统的方法,对当前块进行反量化。若该块级标志指示当前块允许使用本申请实施例提供的量化系数奇偶隐藏技术时,则执行本申请实施例。
在一些实施例中,本申请实施例提供的量化系数奇偶隐藏技术,与目标变换方式互斥,其中目标变换方式包括二次变换、多次变换或第一变换类型等,其中第一变换类型用于指示当前块的至少一个方向跳过变换。此时,解码端在确定当前块采用目标变换方式进行变换时,则跳过本申请实施例提供的技术方案,例如,跳过下面S402的步骤。
S402、根据P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性。
其中,第一量化系数为第二量化系数的全部或部分进行奇偶性隐藏后得到的量化系数,第二量化系数为当前区域中奇偶性未被隐藏的一个量化系数,例如对第二量化系数的绝对值中大于n的部分进行奇偶性隐藏。
举例说明,假设第二量化系数为45,对第二量化系数中大于10的部分(即35)进行奇偶性隐藏,示例性的对第二量化系数中的35部分进行奇偶性隐藏得到17。此时,第一量化系数为第二量化系数中奇偶性未隐藏部分(即10)与奇偶性被隐藏的量化系数(即17)的和,即第一量化系数为27,也就是说,第一量化系数中奇偶性未隐藏部分为10,奇偶性被隐藏的量化系数为17。在重建第一量化系数中奇偶性被隐藏的量化系数时,需要根据奇偶性被隐藏的 量化系数的奇偶性进行重建。
本申请实施例中,使用当前区域中P个量化系数,来指示奇偶性被隐藏的量化系数的奇偶性。这样,解码端根据上述步骤,得到当前区域中P个量化系数后,可以根据当前区域中P个量化系数,确定出奇偶性被隐藏的量化系数的奇偶性,进而根据奇偶性被隐藏的量化系数的奇偶性实现对奇偶性隐藏部分的准确重建。
本申请实施例对上述S402中,根据当前区域中P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性的方式不做限制。
例如,使用当前区域中P个量化系数的一种二值特性(0或1),确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性。
在一些实施例中,上述S402包括如下S402-A:
S402-A、根据P个量化系数对应的奇偶性,确定奇偶性被隐藏的量化系数的奇偶性。
上述S402-A的实现方式包括但不限于如下几种情况:
情况1，若P个量化系数对应的奇偶性通过P个量化系数的第一绝对值之和的奇偶性进行指示，此时，上述S402-A包括：根据P个量化系数的第一绝对值之和的奇偶性，确定奇偶性被隐藏的量化系数的奇偶性。
其中,量化系数的第一绝对值为该量化系数的部分或全部绝对值,例如第一绝对值为量化系数的绝对值中小于15的部分。
本申请实施例中,量化系数的绝对值用一个或多个标识指示,若该量化系数的绝对值大于15,则该量化系数的绝对值还包括超出15的部分。
基于此,本申请实施例中,量化系数的全部绝对值,指的是解码出的关于该量化系数的整个绝对值,包括各标识位的值,若该量化系数的绝对值大于15时,还包括超过15部分的值。
可选的,量化系数的绝对值中的部分绝对值指标识位中的全部或部分标识位的值,例如解码端根据P个量化系数的标识1之和的奇偶性,确定奇偶性被隐藏的量化系数的奇偶性。
例如,若P个量化系数的第一绝对值之和为偶数,则确定奇偶性被隐藏的量化系数为偶数。
需要说明的是,该情况1中,若第二量化系数中奇偶性被隐藏的量化系数的奇偶性与P个量化系数的第一绝对值之和的奇偶性不一致时,则编码端对P个量化系数中的至少一个系数的奇偶性进行修改。例如,若第二量化系数中奇偶性被隐藏的量化系数为奇数,而P个量化系数的第一绝对值之和为偶数,此时,将P个量化系数中最小量化系数加1或减1,以将P个量化系数的第一绝对值之和修改为奇数。再例如,若第二量化系数中奇偶性被隐藏的量化系数为偶数,而P个量化系数的第一绝对值之和为奇数,此时,将P个量化系数中最小量化系数加1或减1,以将P个量化系数的第一绝对值之和修改为偶数。
情况2,若P个量化系数对应的奇偶性为P个量化系数中目标量化系数的个数的奇偶性,此时,上述S402-A包括:确定P个量化系数中的目标量化系数,并根据P个量化系数中目标量化系数的个数奇偶性,确定该奇偶性被隐藏的量化系数的奇偶性。
上述目标量化系数为P个量化系数中的非零量化系数、值为偶数的非零量化系数、值为偶数的量化系数、值为奇数的量化系数中的任意一个。
在一种示例中,若目标量化系数为P个量化系数中的非零量化系数,则可以根据P个量化系数中非零量化系数的个数奇偶性,确定该奇偶性被隐藏的量化系数的奇偶性。
例如,若P个量化系数中非零量化系数的个数为奇数,则确定该奇偶性被隐藏的量化系数为奇数。
例如,若P个量化系数中非零量化系数的个数为偶数,则确定该奇偶性被隐藏的量化系数为偶数。
需要说明的是,该示例中,若第二量化系数中奇偶性被隐藏的量化系数的奇偶性与P个量化系数中非零量化系数的个数的奇偶性不一致时,则编码端对P个量化系数中的至少一个系数进行修改。例如,若第二量化系数中奇偶性被隐藏的量化系数为奇数,而P个量化系数中非零量化系数的个数为偶数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数的值调整为0,以使P个量化系数中非零量化系数的个数为奇数。再例如,若第二量化系数中奇偶性被隐藏的量化系数为偶数,而P个量化系数中非零量化系数的个数为奇数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个量化系数的值调整为0,以使P个量化系数中非零量化系数的个数为偶数。
在另一种示例中,若目标量化系数为P个量化系数中值为偶数的非零量化系数,则可以根据P个量化系数中值为偶数的非零量化系数的个数奇偶性,确定该奇偶性被隐藏的量化系数的奇偶性。
例如,若P个量化系数中值为偶数的非零量化系数的个数为奇数,则确定该奇偶性被隐藏的量化系数为奇数。
例如,若P个量化系数中值为偶数的非零量化系数的个数为偶数,则确定该奇偶性被隐藏的量化系数为偶数。
需要说明的是,该示例中,若第二量化系数中奇偶性被隐藏的量化系数的奇偶性与P个量化系数中值为偶数的非零量化系数的个数的奇偶性不一致时,则编码端对P个量化系数中的至少一个系数进行修改。例如,若第二量化系数中奇偶性被隐藏的量化系数为奇数,而P个量化系数中值为偶数的非零量化系数的个数为偶数,则可以将P个量化系数中的一个为零的量化系数调整为2,或者,将P个量化系数中最小的一个非零量化系数的值加1或减1,以使P个量化系数中值为偶数的非零量化系数的个数为奇数。再例如,若第二量化系数中奇偶性被隐藏的量化系数为偶数,而P个量化系数中值为偶数的非零量化系数的个数为奇数,则可以将P个量化系数中的一个为零的量化系数调整为2,或者,将P个量化系数中最小的一个非零量化系数的值加1或减1,以使P个量化系数中非零量化系数的个数为偶数。
在另一种示例中,若目标量化系数为P个量化系数中值为偶数的量化系数,则可以根据P个量化系数中值为偶数的量化系数的个数奇偶性,确定该奇偶性被隐藏的量化系数的奇偶性。其中,值为偶数的量化系数中包括值为0的量化系数。
例如,若P个量化系数中值为偶数的量化系数的个数为奇数,则确定该奇偶性被隐藏的量化系数为奇数。
例如,若P个量化系数中值为偶数的量化系数的个数为偶数,则确定该奇偶性被隐藏的量化系数为偶数。
需要说明的是,该示例中,若第二量化系数中奇偶性被隐藏的量化系数的奇偶性与P个量化系数中值为偶数的量化系数的个数的奇偶性不一致时,则编码端对P个量化系数中的至少一个系数进行修改。例如,若第二量化系数中奇偶性被隐藏的量化系数为奇数,而P个量化系数中值为偶数的量化系数的个数为偶数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数调整为0,以使P个量化系数中值为偶数的量化系数的个数为奇数。再例如,若第二量化系数中奇偶性被隐藏的量化系数为偶数,而P个量化系数中值为偶数的量化系数的个数为奇数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数调整为0,以使P个量化系数中量化系数的个数为偶数。
在另一种示例中,若目标量化系数为P个量化系数中值为奇数的量化系数,则可以根据P个量化系数中值为奇数的量化系数的个数奇偶性,确定该奇偶性被隐藏的量化系数的奇偶性。
例如,若P个量化系数中值为奇数的量化系数的个数为奇数,则确定该奇偶性被隐藏的量化系数为奇数。
例如,若P个量化系数中值为奇数的量化系数的个数为偶数,则确定该奇偶性被隐藏的量化系数为偶数。
需要说明的是,该示例中,若第二量化系数中奇偶性被隐藏的量化系数的奇偶性与P个量化系数中值为奇数的量化系数的个数的奇偶性不一致时,则编码端对P个量化系数中的至少一个系数进行修改。例如,若第二量化系数中奇偶性被隐藏的量化系数为奇数,而P个量化系数中值为奇数的量化系数的个数为偶数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数调整为0或加1或减1,以使P个量化系数中值为奇数的量化系数的个数为奇数。再例如,若第二量化系数中奇偶性被隐藏的量化系数为偶数,而P个量化系数中值为奇数的量化系数的个数为奇数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数调整为0或加1或减1,以使P个量化系数中量化系数的个数为偶数。
可选的,编码端使用率失真代价最小的调整方式,对P个量化系数中的至少一个系数进行调整。
由上述可知,本申请实施例,根据当前区域中其他量化系数对应的奇偶性,确定该奇偶性被隐藏的量化系数的奇偶性。
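以上述情况1为例，下面给出解码端根据P个量化系数的第一绝对值之和推导被隐藏奇偶性的一段示意性Python代码草图。其中第一绝对值示意性地取min(|c|,15)，具体的截取方式以上文定义为准。

def hidden_parity_from_sum(p_coeffs):
    # 情况1的示意：用P个量化系数的第一绝对值之和的奇偶性推导被隐藏量化系数的奇偶性，
    # 和为奇数表示被隐藏的量化系数为奇数，和为偶数表示其为偶数
    s = sum(min(abs(c), 15) for c in p_coeffs)
    return s % 2                                  # 返回1表示奇数，返回0表示偶数

# 用法示意：P个量化系数为[2, -3, 0, 7]时，第一绝对值之和为12，推导结果为偶数
print(hidden_parity_from_sum([2, -3, 0, 7]))      # 0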
S403、确定第一量化系数对应的目标上下文模型,并使用目标上下文模型,对基于上下文编码的第一量化系数进行解码,得到解码后的第一量化系数。
需要说明的是,上述S403与上述S402在实现过程中没有严格的先后顺序,也就是说,上述S403可以在上述S402之前执行,也可以在上述S402之后执行,还可以与上述S402同步执行,本申请实施例对此不做限制。
本申请实施例中,第一量化系数为对当前区域中的第二量化系数的全部或部分进行奇偶性隐藏得到的量化系数,该第一量化系数中包括奇偶性被隐藏的部分。
例如,对于AVM中的系数解码顺序,由于AVM中变换块大小通常为十六的倍数,故将系数从右下角第一个开始按解码顺序上每十六个系数分为一个区域。示例性的,如图8所示一个使用Zig-Zag扫描的8×8的变换块,解码顺序从63到0。该变换块可分为四个区域,分别是扫描索引48-63,32-47,16-31,0-15。示例性的,索引48,32,16,0四个位置上的系数的奇偶性被隐藏。实际解码从解码顺序上第一个非零系数开始,假定索引21为变换块中按解码顺序的第一个非零系数。系数解码则会跳过索引比21大的位置并默认其为零,按照21,20,….,0的顺序进行上下文模型标识位的解码。假设第一量化系数为图8索引为21位置处的系数,则使用第一量化系数对应的上下文模型对第一量化系数进行解码。
解码端在解码第一量化系数之前,首先需要确定第一量化系数对应的上下文模型。
方式一,由于奇偶性被隐藏后的量化系数,通常会比相同情况下的奇偶性未隐藏的量化系数要小,这使得使用同样的一组上下文模型去编码分布概率不同的两种系数(即奇偶性被隐藏的系数和奇偶性未被隐藏的系数)不再合理。基于此,本申请实施例中,解码端对于奇偶性被隐藏的量化系数和奇偶性未被隐藏的量化系数,使用不同的上下文模型进行解码。也就是说,第一量化系数对应的上下文模型与奇偶性未被隐藏的其他量化系数对应的上下文模型不同。
本申请实施例的第一量化系数通过一个或多个标识表示,例如,标识1表示0~3的部分,标识2表示3~6的部分,标识3表示6~9的部分,标识4表示9~12的部分,标识5表示12~15的部分。
本申请实施例中,可以为第一量化系数的各标识中的一个或多个标识确定一个上下文模型,以该一个或多个标识进行解码。也就是说,本申请实施例可以为第一量化系数中的目标量化系数确定一个目标上下文模型,并使用该目标上下文模型,对第一量化系数中的目标量化系数进行解码。
可选的，第一量化系数的目标量化系数可以为标识1，或者为标识2至标识5中的任意一个标识表示的量化系数，这样，解码端可以确定2个上下文模型，其中一个上下文模型用于解码标识1，另一个上下文模型用于解码标识2至标识5。
方式二，为了降低第一量化系数的解码复杂度，则本申请实施例中第一量化系数对应的上下文模型与奇偶性未被隐藏的其他量化系数对应的上下文模型中的至少一个上下文模型相同。也就是说，本申请实施例中，复用已有上下文模型，来对部分或全部奇偶性被隐藏的量化系数进行解码，进而降低了奇偶性被隐藏的量化系数的解码复杂度。
由上述可知，量化系数可以划分为至少一个部分，例如，将量化系数中标识1表示的0~3的部分称为BR，将量化系数中标识2~5表示的4~15的部分称为LR。
在一些实施例中，量化系数的不同部分所对应的上下文模型不同，此时，上述S403包括如下步骤：
S403-A、获取奇偶性未被隐藏的其他量化系数的目标量化系数对应的多个上下文模型。
S403-B、从目标量化系数对应的多个上下文模型中,确定目标量化系数对应的目标上下文模型,目标量化系数为量化系数的部分量化系数;
S403-C、使用目标量化系数对应的目标上下文模型,解码第一量化系数的目标量化系数,得到第一量化系数。
在方式二中,为了降低解码复杂度,复用奇偶性未被隐藏的其他量化系数对应的上下文模型,对奇偶性隐藏的第一量化系数进行解码。因此,在解码第一量化系数中的目标量化系数时,则使用奇偶性未被隐藏的其他量化系数的目标量化系数对应的上下文模型,对第一量化系数中的目标量化系数进行解码。
为了提高量化系数的解码准确性,则通常为量化系数中的各部分创建多个上下文模型,例如,为BR部分创建R个上下文模型,为LR部分创建Q个上下文模型,其中R、Q均为正整数。
这样在对第一量化系数的目标量化系数进行解码时,首先获取奇偶性未被隐藏的其他量化系数的目标量化系数对应的多个上下文模型,再从这目标量化系数对应的多个上下文模型中,确定该目标量化系数对应的目标上下文模型,并使用该目标上下文模型对第一量化系数中的目标量化系数进行解码。例如,目标量化系数为BR,假设BR对应R个上下文模型,这样,解码端从这R个上下模型中,选出一个上下文模型作为BR对应的目标上下文模型,并使用该BR对应的目标上下文模型对第一量化系数中的BR进行解码。同理,若目标量化系数为LR时,假设LR对应Q个上下文模型,这样,解码端从这Q个上下模型中,选出一个上下文模型作为LR对应的目标上下文模型,并使用该LR目标上下文模型对第一量化系数中的LR进行解码。
上述S403-B中从目标量化系数对应的多个上下文模型中,确定第一量化系数的目标量化系数对应的目标上下文模型的实现方式包括但不限于如下几种:
方式1,将上述目标量化系数对应的多个上下文模型中的任意一个上下文模型,确定为第一量化系数的目标量化系数对应的目标上下文模型。
方式2,上述S403-B包括如下S403-B1和S403-B2的步骤:
S403-B1、确定目标量化系数对应的目标上下文模型的索引;
S403-B2、根据索引从目标量化系数对应的多个上下文模型中,确定目标量化系数对应的目标上下文模型。
在该方式2中,上述目标量化系数对应的多个上下文模型中的每个上下文模型包括一个索引,这样解码端可以通过确定目标量化系数对应的目标上下文模型的索引,进而根据索引,从目标量化系数对应的多个上下文模型中选出该目标量化系数对应的目标上下文模型。
本申请实施例对确定目标量化系数对应的上下文模型的索引的具体实现方式不做限制。
本申请实施例中，第一量化系数包括BR，若第一量化系数大于3时，则第一量化系数还包括LR，其中确定BR对应的目标上下文模型的索引和确定LR对应的目标上下文模型的索引的方式不同。下面对确定BR对应的目标上下文模型的索引和确定LR对应的目标上下文模型的索引的过程分别进行介绍。
情况1,若目标量化系数为第一量化系数的BR时,则上述S403-B1包括如下步骤:
S403-B11、根据第一量化系数周围已解码的量化系数的BR之和,确定第一量化系数的BR对应的目标上下文模型的索引。
本申请实施例中,第一量化系数的BR与第一量化系数周围已解码的量化系数的BR相关,因此,本申请实施例,根据第一量化系数周围已解码的量化系数的BR之和,确定第一量化系数中BR对应的目标上下文模型的索引。
本申请实施例对上述S403-B11的具体实现方式不做限制。
在一种示例中,将第一量化系数周围已解码的量化系数的BR之和,确定为第一量化系数中BR对应的目标上下文模型的索引。
在另一种示例中,对第一量化系数周围已解码的量化系数的BR之和进行运算处理,得到第一量化系数中BR对应的目标上下文模型的索引。
在一些实施例中，上述S403-B11包括如下步骤：
S403-B111、将第一量化系数周围已解码的量化系数的BR之和、与第一预设值进行相加,得到第一和值;
S403-B112、将第一和值与第一数值进行相除,得到第一比值;
S403-B113、根据第一比值,确定第一量化系数的BR对应的目标上下文模型的索引。
在该实施例中,根据第一量化系数周围已解码的量化系数的BR之和,确定第一量化系数中BR对应的目标上下文模型的索引的方法可以是,将第一量化系数周围已解码的量化系数的BR之和、与第一预设值进行相加,得到第一和值,接着,将第一和值与第一数值进行相除,得到第一比值,最后根据该第一比值,确定第一量化系数中BR对应的目标上下文模型的索引。
本申请实施例对第一量化系数的周围已解码的量化系数可以理解为在扫描顺序上位于第一系数周围的J个已解码的量化系数。
在一种示例中,假设J为5,以Zig-Zag scan、Column scan和Row scan这3种扫描方式为例,第一量化系数周围已解码的5个量化系数如图9A至图9C所示,第一量化系数为图中的黑色部分,第一量化系数周围已解码的5个量化系数为灰色部分。也就是说,在确定BR对应的目标上下文模型的索引,参照的是如9A至图9C所示的第一量化系数周围5个已解码的量化系数。
本申请实施例对上述第一预设值和第一数值的具体取值不做限制。
可选的,第一预设值为1。
可选的,第一预设值可以为2。
可选的,第一数值为1。
可选的,第一数值为2。
本申请实施例对上述S403-B113中根据第一比值,确定第一量化系数中BR对应的目标上下文模型的索引的具体方式不做限制。
示例1,将第一比值,确定为第一量化系数中BR对应的目标上下文模型的索引。
示例2,对第一比值进行处理,得到第一量化系数中BR对应的目标上下文模型的索引。本申请实施例对第一比值进行处理的具体方式不做限制。
在示例2的一种可能的实现方式中,上述S403-B113包括:
S403-B113-1、将第一比值与第一预设阈值中的最小值,确定为第二数值;
S403-B113-2、根据第二数值,确定第一量化系数的BR对应的目标上下文模型的索引。
在该可能的实现方式中,将第一比值与第一预设阈值进行比较,将第一比值和第一预设阈值中的最小值,确定为第二数值,进而根据该第二数值,确定第一量化系数中BR对应的目标上下文模型的索引。
本申请实施例对上述S403-B113-2中根据第二数值,确定BR对应的目标上下文模型的索引的具体方式不做限制。
例如,将上述确定的第二数值,确定为第一量化系数中BR对应的目标上下文模型的索引。
再例如，确定BR的偏置索引offset_BR，将第二数值和第一量化系数中BR的偏置索引之和，确定为第一量化系数中BR对应的目标上下文模型的索引。
在一种示例中,可以根据如下公式(5),确定第一量化系数中BR对应的目标上下文模型的索引:
Index_BR=offset_BR+min(c1,((∑_Θ BR)+a1)>>b1)     (5)
其中，Index_BR为第一量化系数中BR对应的目标上下文模型的索引，offset_BR为BR的偏置索引，∑_Θ BR为第一量化系数周围量化系数的BR之和，a1为第一预设值，b1为以2为底第一数值的对数(即第一数值等于2^b1)，c1为第一预设阈值。
本申请实施例对上述第一预设值、第一数值以及第一预设阈值的具体取值不做限制。
在一种可能的实现方式中,假设第一预设值a1为1,第一数值为2,即b1=1,第一预设阈值为4时,则上述公式(5)可以具体表示为如下公式(6):
Index_BR=offset_BR+min(4,((∑_Θ BR)+1)>>1)    (6)
在另一种可能的实现方式中,假设第一预设值a1为2,第一数值为4,即b1=2,第一预设阈值为4时,则上述公式(5)可以具体表示为如下公式(7):
Index_BR=offset_BR+min(4,((∑_Θ BR)+2)>>2)     (7)
本申请实施例中,在复用原有上下文模型对第一量化系数的BR进行解码时,考虑到奇偶性被隐藏的量化系数与奇偶性未被隐藏的量化系数的大小整体上分布不同,例如奇偶性被隐藏的量化系数大小约为奇偶性未被隐藏的量化系数的一半。因此,本申请实施例中,在复用已有的上下文模型时,对上行文模型索引的确定过程进行调整,以选择适用于奇偶性被隐藏的第一量化系数的目标上下文模型,具体是对第一量化系数的周围量化系数的BR之和除以4,得到第一比值,根据该第一比值,确定第一量化系数中BR对应的目标上下文模型的索引。
进一步的,为了实现整除,在将第一量化系数的周围量化系数的BR之和从公式(6)所示的右移一位,调整为公式(7)所示的右移2位时,将第一预设值a1从公式(6)中的1,调整为公式(7)中的2,以实现四舍五入。
本申请实施例对上述确定BR的偏置索引offset BR的具体方式不做限制。
在一种示例中,根据第一量化系数在当前块中的位置、当前块的大小、当前块的扫描顺序、当前块的颜色分量中的至少一个,确定BR的偏置索引。
例如,在第一量化系数为当前块的左上方位置处的量化系数的情况下,则BR的偏置索引为第一阈值。
再例如，在第一量化系数非当前块的左上方位置处的量化系数的情况下，则BR的偏置索引为第二阈值。
本申请实施例对上述第一阈值和第二阈值的具体取值不做限制。
可选的,在当前块的颜色分量为亮度分量的情况下,则第一阈值为0。
可选的,在当前块的颜色分量为色度分量的情况下,则第一阈值为10。
可选的,在当前块的颜色分量为亮度分量的情况下,则第二阈值为5。
可选的,在当前块的颜色分量为色度分量的情况下,则第二阈值为15。
上文对确定第一量化系数中BR对应的目标上下文模型的索引的具体过程进行介绍。在一些实施例中，若第一量化系数还包括LR部分时，则本申请实施例还需要确定第一量化系数中LR对应的目标上下文模型的索引。
下面对确定第一量化系数中LR对应的目标上下文模型的索引的过程进行介绍。
情况2,若目标量化系数为第一量化系数的LR时,则上述S403-B1包括如下步骤:
S403-B21、根据第一量化系数周围已解码的量化系数的BR与LR之和,确定LR对应的目标上下文模型的索引。
本申请实施例中,第一量化系数的LR与第一量化系数周围已解码的量化系数的BR和LR相关,因此,本申请实施例,根据第一量化系数周围已解码的量化系数的BR与LR之和,确定第一量化系数中LR对应的目标上下文模型的索引。
本申请实施例对上述S403-B21的具体实现方式不做限制。
在一种示例中,将第一量化系数周围已解码的量化系数的BR与LR之和,确定为第一量化系数中LR对应的目标上下文模型的索引。
在另一种示例中,对第一量化系数周围已解码的量化系数的BR与LR之和进行运算处理,得到第一量化系数中LR对应的目标上下文模型的索引。
在一些实施例中，上述S403-B21包括如下步骤：
S403-B211、将第一量化系数周围已解码的量化系数的BR与LR之和、与第二预设值进行相加,得到第二和值;
S403-B212、将第二和值与第三数值进行相除,得到第二比值;
S403-B213、根据第二比值,确定LR对应的目标上下文模型的索引。
在该实施例中,根据第一量化系数周围已解码的量化系数的BR与LR之和,确定第一量化系数中LR对应的目 标上下文模型的索引的方法可以是,将第一量化系数周围已解码的量化系数的BR与LR之和、与第二预设值进行相加,得到第二和值,接着,将第二和值与第三数值进行相除,得到第二比值,最后根据该第二比值,确定第一量化系数中LR对应的目标上下文模型的索引。
本申请实施例对第一量化系数的周围已解码的量化系数可以理解为在扫描顺序上位于第一系数周围的J个已解码的量化系数。
在一种示例中,假设J为3,以Zig-Zag scan、Column scan和Row scan这3种扫描方式为例,第一量化系数周围已解码的3个量化系数如图10A至图10C所示,第一量化系数为图中的黑色部分,第一量化系数周围已解码的3个量化系数为灰色部分。也就是说,在确定LR对应的目标上下文模型的索引,参照的是如10A至图10C所示的第一量化系数周围3个已解码的量化系数。
本申请实施例对上述第二预设值和第三数值的具体取值不做限制。
可选的,第二预设值为1。
可选的,第二预设值可以为2。
可选的,第三数值为1。
可选的,第三数值为2。
本申请实施例对上述S403-B213中根据第二比值,确定第一量化系数中LR对应的目标上下文模型的索引的具体方式不做限制。
示例1,将第二比值,确定为第一量化系数中LR对应的目标上下文模型的索引。
示例2,对第二比值进行处理,得到第一量化系数中LR对应的目标上下文模型的索引。本申请实施例对第二比值进行处理的具体方式不做限制。
在示例2的一种可能的实现方式中，上述S403-B213包括：
S403-B213-1、将第二比值与第二预设阈值中的最小值,确定为第四数值;
S403-B213-2、根据第四数值,确定LR对应的目标上下文模型的索引。
在该可能的实现方式中,将第二比值与第二预设阈值进行比较,将第二比值和第二预设阈值中的最小值,确定为第四数值,进而根据该第四数值,确定第一量化系数中LR对应的目标上下文模型的索引。
本申请实施例对上述S403-B213-2中根据第四数值,确定LR对应的目标上下文模型的索引的具体方式不做限制。
例如,将上述确定的第四数值,确定为第一量化系数中LR对应的目标上下文模型的索引。
再例如，确定LR的偏置索引offset_LR，将第四数值和第一量化系数中LR的偏置索引之和，确定为第一量化系数中LR对应的目标上下文模型的索引。
在一种示例中,可以根据如下公式(8),确定第一量化系数中LR对应的目标上下文模型的索引:
Index_LR=offset_LR+min(c2,((∑_Φ(BR+∑LR))+a2)>>b2)    (8)
其中，Index_LR为第一量化系数中LR对应的目标上下文模型的索引，offset_LR为LR的偏置索引，∑_Φ(BR+∑LR)为第一量化系数周围量化系数中小于16部分之和，即第一量化系数周围量化系数的BR与LR之和，a2为第二预设值，b2为以2为底第三数值的对数(即第三数值等于2^b2)，c2为第二预设阈值。
本申请实施例对上述第二预设值、第三数值以及第二预设阈值的具体取值不做限制。
在一种可能的实现方式中,假设第二预设值a2为1,第三数值为2,即b2=1,第二预设阈值为6时,则上述公式(8)可以具体表示为如下公式(9):
Index_LR=offset_LR+min(6,((∑_Φ(BR+∑LR))+1)>>1)      (9)
在另一种可能的实现方式中，假设第二预设值a2为2，第三数值为4，即b2=2，第二预设阈值为6时，则上述公式(8)可以具体表示为如下公式(10)：
Index_LR=offset_LR+min(6,((∑_Φ(BR+∑LR))+2)>>2)    (10)
本申请实施例中,在复用原有上下文模型对第一量化系数的LR进行解码时,考虑到奇偶性被隐藏的量化系数与奇偶性未被隐藏的量化系数的大小整体上分布不同,例如奇偶性被隐藏的量化系数大小约为奇偶性未被隐藏的量化系数的一半。因此,本申请实施例中,在复用已有的上下文模型时,对上行文模型索引的确定过程进行调整,以选择适用于奇偶性被隐藏的第一量化系数的目标上下文模型,具体是对第一量化系数的周围量化系数的小于16部分之和除以4,得到第二比值,根据该第二比值,确定第一量化系数中LR对应的目标上下文模型的索引。
进一步的，为了实现整除，在将第一量化系数的周围量化系数的小于16部分之和从公式(9)所示的右移一位，调整为公式(10)所示的右移2位时，将第二预设值a2从公式(9)中的1，调整为公式(10)中的2，以实现四舍五入。
本申请实施例对上述确定LR的偏置索引offset_LR的具体方式不做限制。
在一种示例中,根据第一量化系数在当前块中的位置、当前块的大小、当前块的扫描顺序、当前块的颜色分量中的至少一个,确定LR的偏置索引。
例如,若第一量化系数为当前块的左上方位置处的量化系数时,则LR的偏置索引为第三阈值。
再例如，若第一量化系数非当前块的左上方位置处的量化系数时，则LR的偏置索引为第四阈值。
本申请实施例对上述第三阈值和第四阈值的具体取值不做限制。
可选的,在当前块的颜色分量为亮度分量的情况下,则第三阈值为0。
可选的,在当前块的颜色分量为色度分量的情况下,则第三阈值为14。
可选的,在当前块的颜色分量为亮度分量的情况下,则第四阈值为7。
可选的,在当前块的颜色分量为色度分量的情况下,则第四阈值为21。
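下面给出按公式(7)与公式(10)计算BR与LR对应目标上下文模型索引的一段示意性Python代码草图。其中偏置索引的默认值0以及阈值4、6仅沿用上文给出的示例取值，均为说明用的假设，并非实际实现。

def ctx_index_br(neighbor_br_sum, offset_br=0, a1=2, b1=2, c1=4):
    # 公式(7)的示意：Index_BR = offset_BR + min(c1, ((Σ_Θ BR) + a1) >> b1)
    return offset_br + min(c1, (neighbor_br_sum + a1) >> b1)

def ctx_index_lr(neighbor_br_lr_sum, offset_lr=0, a2=2, b2=2, c2=6):
    # 公式(10)的示意：Index_LR = offset_LR + min(c2, ((Σ_Φ(BR+ΣLR)) + a2) >> b2)
    return offset_lr + min(c2, (neighbor_br_lr_sum + a2) >> b2)

# 用法示意：周围5个已解码系数的BR之和为7，周围3个已解码系数的小于16部分之和为20
print(ctx_index_br(7))     # (7+2)>>2 = 2
print(ctx_index_lr(20))    # (20+2)>>2 = 5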
在一些实施例中,第一量化系数的上下文模型索引值的计算如下表4所示:
表4
表4中,Θ为图9A至图9C中几种扫描方式下第一量化系数周围的5个已解码量化系数,Φ为图10A至图10C中几种扫描方式下第一量化系数周围的3个已解码量化系数。
根据上述步骤,确定出第一量化系数中目标量化系数对应的目标上下文模型的索引后,根据该索引,从目标量化系数对应的多个上下文模型中,确定出第一量化系数中目标量化系数对应的目标上下文模型。例如,根据第一量化系数中BR对应的目标上下文模型的索引,从获取的奇偶性未被隐藏的其他量化系数的BR部分对应的多个上下文模型中,得到第一量化系数中BR对应的目标上下文模型,根据第一量化系数中LR对应的目标上下文模型的索引,从获取的奇偶性未被隐藏的其他量化系数的LR部分对应的多个上下文模型中,得到第一量化系数中LR对应的目标上下文模型。
由上述可知,上述多个上下文模型包括多个QP段(例如4个QP段)、2个分量(例如亮度分量和色度分量)下的不同索引的上下文模型,这样,上述S403-B2中根据索引从目标量化系数对应的多个上下文模型中,确定目标量化系数对应的目标上下文模型,包括:根据第一量化系数对应的量化参数、变换类型、变换块大小和颜色分量中的至少一个,从上述目标量化系数对应的多个上下文模型中,选出至少一个上下文模型;接着,将至少一个下上文模型中与该索引对应的上下文模型,确定为目标量化系数对应的目标上下文模型。
例如,根据上述步骤,确定出第一量化系数中的BR(即标识1)对应的索引为索引1,第一量化系数对应的QP段为QP段1,且第一量化系数为亮度分量下的量化系数。假设,标识1对应的上行文模型为S个,则首先从这S个上下文模型中,选出QP段1和亮度分量下的T个上下文模型,接着,将这T个上下文模型中索引1对应的上下文模型,确定为第一量化系数中的BR对应的目标上下文模型,进而使用该目标上下文模型对第一量化系数中的BR进行解码。
再例如,根据上述步骤,确定出第一量化系数中的LR(即标识2至标识5)对应的索引为索引2,第一量化系数对应的QP段为QP段1,且第一量化系数为亮度分量下的量化系数。假设,LR对应的上行文模型为U个,则首先从这U个上下文模型中,选出QP段1和亮度分量下的V个上下文模型,接着,将这V个上下文模型中索引2对应的上下文模型,确定为第一量化系数中的标识2至标识5对应的目标上下文模型,进而使用该目标上下文模型对第一量化系数中的LR进行解码。
在一些实施例中,解码端在从多个上下文模型中,确定目标上下文模型之前,需要对这多个上下文模型进行初始化,接着,从这些初始化后的多个上下文模型中,确定出目标上下文模型。
例如,对BR对应的多个上下文模型进行初始化,再从这些初始化后的多个上下文模型中,选择BR对应的目标上下文模型。
再例如，对LR对应的多个上下文模型进行初始化，再从这些初始化后的多个上下文模型中，选择LR对应的目标上下文模型。
也就是说,本申请实施例为了使奇偶性被隐藏系数对应的上下文模型中的概率值能够更快收敛,每个模型可以根据每个符号出现的概率,有一组对应的初始值。在AVM中,每一个上下文模型的概率根据该模型中的符号个数和每个符号出现的概率和累计分布函数(Cumulative Distribution Function)计算该上下文模型中出现符号小于某个值的概率,该概率值通过归一化用一个16bit的整数来表示。
本申请实施例中,对多个上下文模型进行初始化的方式包括但不限于如下几种:
方式1,使用等概率值对多个上下文模型进行初始化。例如,一个4符号的上下文模型,符号0,1,2,3出现的概率都为0.25,则出现小于1符号的概率为0.25,小于2符号的概率为0.5,小于3符号的概率为0.75,小于4符号的概率为1,将概率进行16bit取整得到出现小于1符号的概率为8192,小于2符号的概率为16384,小于3符号的概率为24576,小于4符号的概率为32768,这四个整数值构成该上下文模型的概率初始值。
方式2,使用收敛概率值对多个上下文模型进行初始化,其中收敛概率值为使用上下文模型对测试视频进行编码时,上下文模型对应的收敛概率值。示例性的,首先,给这些上下文模型设定一个初始的概率值例如等概率。接着,使用某些目标码率对应的量化系数对一系列测试视频进行编码,并将这些上下文模型最终的收敛概率值作为这些上下文模型的初始值。重复上述步骤可得到新的收敛值,选取合适的收敛值作为最终的初始值。
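以上述方式1的等概率初始化为例，下面给出生成16bit整数表示的累计分布初始值的一段示意性Python代码草图，其输出与上文给出的8192、16384、24576、32768一致。归一化到32768仅沿用上文的示例，并非对实际实现的限定。

def equal_probability_cdf(num_symbols, scale=32768):
    # 方式1的示意：为一个num_symbols符号的上下文模型按等概率生成归一化后的累计分布初始值
    return [(k + 1) * scale // num_symbols for k in range(num_symbols)]

# 用法示意：4符号模型，符号0~3出现概率均为0.25
print(equal_probability_cdf(4))    # [8192, 16384, 24576, 32768]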
根据上述方法,确定出第一量化系数中目标量化系数对应的目标上下文模型后,使用该目标上下文模型对目标量化系数进行解码,得到第一量化系数。
示例性的,根据第一量化系数中BR对应的目标上下文模型的索引,从获取的奇偶性未被隐藏的其他量化系数的BR部分对应的多个上下文模型中,得到第一量化系数中BR对应的目标上下文模型,并使用第一量化系数中BR对应的目标上下文模型,对第一量化系数中的BR进行解码,得到解码后的BR,进而根据解码后的BR,确定第一量化系数。
在一些实施例中,若第一量化系数只包括BR部分,不包括LR部分时,则将解码后的BR,确定为第一量化系数。
在一些实施例中,若第一量化系数还包括LR部分时,则根据第一量化系数中LR对应的目标上下文模型的索引,从获取的奇偶性未被隐藏的其他量化系数的LR部分对应的多个上下文模型中,得到第一量化系数中LR对应的目标上下文模型。接着,解码端使用LR对应的目标上下文模型,解码所述第一量化系数中的LR,得到解码后的LR,并根据解码后的BR和解码后的LR,确定第一量化系数。
例如,若第一量化系数不包括大于15的部分时,则将解码后的BR和解码后的LR的和,确定为第一量化系数的绝对值。
再例如,若第一量化系数还包括大于15的部分时,解码得到第一量化系数中大于15的部分,进而将解码后的BR、解码后的LR以及大于15部分的解码值之和,确定为第一量化系数的绝对值。
在一种示例中,第一量化系数的解码过程为:
解码标识1,此处用a表示标识1的值,a的取值为0~3,若解码的标识1为3,则解码标识2,否则默认标识2为0,此处用b表示标识2的值,b的取值为0~3。
若解码的标识2为3,则解码标识3,否则默认标识3为0,此处用c表示标识3的值,c的取值为0~3。
若解码的标识3为3,则解码标识4,否则默认标识4为0,此处用d表示标识4的值,d的取值为0~3。
若解码的标识4为3,则解码标识5,否则默认标识5为0,此处用e表示标识5的值,e的取值为0~3。
若解码的标识5为3,则解码remainder部分。
若标识1不是0或奇偶性为奇时,则解码第一量化系数的正负号。
根据标识1~5和remainder的值，可按如下公式(11)恢复出第一量化系数的绝对值：
Abslevel=(a+b+c+d+e+remainder)    (11)
其中,Abslevel为第一量化系数的绝对值,remainder为第一量化系数的绝对值中超过15的部分。
根据上述S403确定出第一量化系数,根据上述S402确定出第一量化系数中奇偶性被隐藏的量化系数的奇偶性,接着执行如下S404的步骤。
S404、根据奇偶性被隐藏的量化系数的奇偶性和解码后的第一量化系数,确定第二量化系数。
由上述可知，本申请实施例中，为了降低编码代价，编码端对当前区域中的第二量化系数的全部或部分进行奇偶性隐藏，得到第一量化系数，由于第二量化系数的全部或部分的奇偶性被隐藏，因此，通过当前区域中P个量化系数来指示被隐藏的奇偶性。这样解码端在解码当前区域时，首先解码码流，得到当前区域中的P个量化系数，并根据这P个量化系数，确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性，接着，确定第一量化系数对应的目标上下文模型，并用该目标上下文模型对基于上下文编码的第一量化系数进行解码，得到第一量化系数。最后，解码端可以根据解码出的第一量化系数，以及第一量化系数中奇偶性被隐藏的量化系数的奇偶性，重建出具有奇偶性的第二量化系数，即实现奇偶性被隐藏系数的重建。
在一些实施例中,解码端根据奇偶性被隐藏的量化系数的奇偶性不同,采用不同的方式重建第二量化系数。
方式1,若奇偶性被隐藏的量化系数为奇数,则采用第一运算方式对奇偶性被隐藏的量化系数进行运算,得到第一运算结果,根据该第一运算结果和第一量化系数,得到第二量化系数。
解码端采用的第一运算方式,与编码端确定第一量化系数的第三运算方式对应。
本申请实施例对上述第一运算方式和第三运算方式的具体形式不做限制,其中第一运算方式可以理解为第三运算方式的逆运算。
例如,对于非零量化系数,若编码端采用的第三运算方式为将第二量化系数中的奇偶性待隐藏部分的值加一除以 二。对应的,解码端采用的第一运算方式为对第一量化系数中奇偶性被隐藏的量化系数乘以二减一。
方式2,若第二量化系数为偶数,则采用第二运算方式对奇偶性被隐藏的量化系数进行运算,得到第一运算结果,根据该第一运算结果和第一量化系数,得到第二量化系数。
解码端采用的第二运算方式,与编码端确定第一量化系数的第四运算方式对应。
本申请实施例对上述第二运算方式和第四运算方式的具体形式不做限制,其中第二运算方式可以理解为第四运算方式的逆运算。
在一种可能的实现方式中,若编码端采用的第四运算方式为将第二量化系数中的奇偶性待隐藏部分的值除以二。对应的,解码端采用的第二运算方式为对第一量化系数中奇偶性被隐藏的量化系数乘以二。
在一些实施例中,解码端可以通过如下方式,确定出第二量化系数,即上述S404包括如下步骤:
S404-A1、使用预设运算方式对奇偶性被隐藏的量化系数进行运算,得到第一运算结果;
S404-A2、根据奇偶性,对第一运算结果进行处理,得到第二运算结果;
S404-A3、根据第二运算结果和第一量化系数,得到第二量化系数。
举例说明,编码端对第二量化系数中超出10的部分进行奇偶性隐藏,得到第一量化系数25,该第一量化系数中10的部分没有经过奇偶性隐藏,剩余15部分为奇偶性被隐藏的量化系数。假设根据上述方法,确定出奇偶性被隐藏的量化系数为奇数。这样,解码端首先使用预设运算方式对奇偶性被隐藏的15部分进行运算,得到第一运算结果,接着,根据奇偶性被隐藏的15部分的奇偶性,对第一运算结果进行处理,例如为第一运算结果增加一个数值或减取一个数值,具体与编码端的奇偶性隐藏方式对应,得到第二运算结果。最后,该第二运算结果和第一量化系数,得到最终的第二量化系数。
本申请实施例对上述预设运算方式的具体形式不做限制。
在一些实施例中,上述预设运算方式包括奇偶性被隐藏的量化系数乘以二,也就是说,将第一量化系数中奇偶性被隐藏的量化系数的值乘以2,得到第一运算结果。
接着,根据奇偶性被隐藏的量化系数的奇偶性和上述第一运算结果,得到第二运算结果,例如,将第一运算结果与奇偶值之和,确定为第二运算结果的值,其中,若奇偶性被隐藏的量化系数为奇数,则奇偶值为1,若奇偶性被隐藏的量化系数为偶数,则奇偶值为0。
最后,根据第二运算结果和第一量化系数,得到第二量化系数。
在一些实施例中,若编码端对第二量化系数的全部进行奇偶性隐藏,得到第一量化系数,也就是说,上述奇偶性被隐藏的量化系数为第一量化系数的全部,此时,则将该上述第二运算结果,确定为所述第二量化系数。
在一些实施例中,若编码端对第二量化系数的部分进行奇偶性隐藏,得到第一量化系数,也就是说,上述第一量化系数还包括奇偶性未被隐藏部分,此时,则将奇偶性未被隐藏部分与第二运算结果之和,确定为所述第二量化系数。
示例性的,解码端可以根据如下公式(12),确定出第二量化系数:
C=qIdx，若qIdx<n；C=(qIdx-n)×2+parity+n，若qIdx≥n     (12)
其中，C为第二量化系数，qIdx为第一量化系数，parity为奇偶性被隐藏的量化系数的奇偶值，若奇偶性被隐藏的量化系数为奇数时，则该parity=1，若奇偶性被隐藏的量化系数为偶数时，则该parity=0。(qIdx-n)为第一量化系数中奇偶性被隐藏的量化系数，(qIdx-n)×2为第一运算结果，(qIdx-n)×2+parity为第二运算结果。
由上述公式(12)可知,编码端对第二量化系数大于n的部分进行奇偶性隐藏,对于小于n的部分不进行奇偶性隐藏。此时,解码端解码出第一量化系数qIdx后,若该第一量化系数qIdx小于n时,则说明该第一量化系数中不包括奇偶性被隐藏的量化系数,则将该第一量化系数qIdx确定为第二量化系数C。
若该第一量化系数qIdx大于n时,则说明该第一量化系数中包括奇偶性被隐藏的量化系数,则对该第一量化系数中的奇偶性被隐藏的量化系数(qIdx-n)进行处理,得到第二运算结果(qIsx-n)×2+parity,接着,将该第二运算结果与第一量化系数中奇偶性未被隐藏部分n的和,确定为第二量化系数。
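下面给出按公式(12)重建第二量化系数的一段示意性Python代码草图。其中n与parity作为输入给出，分别对应奇偶性未被隐藏的部分与推导出的奇偶值；该草图仅为对公式(12)的直接翻译，并非实际解码器实现。

def recover_hidden_coeff(q_idx, parity, n):
    # 公式(12)的示意：由解码出的第一量化系数qIdx与奇偶值parity重建第二量化系数C
    if q_idx < n:
        return q_idx                              # 不包含奇偶性被隐藏的部分，直接返回
    return (q_idx - n) * 2 + parity + n           # 隐藏部分乘2并补回奇偶性，再加上未隐藏部分n

# 用法示意：沿用上文n=10、第一量化系数为27的例子，parity=1时重建得到(27-10)×2+1+10=45
print(recover_hidden_coeff(27, 1, 10))            # 45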
解码端,根据上述方法,可以确定出当前块中其他区域的第一量化系数对应的第二量化系数,当前块中各区域的第二量化系数,以及其他奇偶性未被隐藏的量化系数组成当前块的量化系数。接着,根据当前块的量化系数,确定当前块的重建值。
在一些实施例中，若编码端跳过变换步骤，直接对当前块的残差值进行量化。对应的，解码端对当前块的量化系数进行反量化，得到当前块的残差值。另外，采用帧内预测和/或帧间预测方法，确定出当前块的预测值，将当前块的预测值与残差值相加，得到当前块的重建值。
在一些实施例中，若编码端未跳过变换步骤，即编码端对当前块的残差值进行变换，得到变换系数，对变换系数进行量化。对应的，解码端对当前块的量化系数进行反量化，得到当前块的变换系数，对变换系数进行反变换，得到当前块的残差值。另外，采用帧内预测和/或帧间预测方法，确定出当前块的预测值，将当前块的预测值与残差值相加，得到当前块的重建值。
本申请实施例提供的视频解码方法,解码器解码码流,得到当前区域中的P个量化系数,当前区域为当前块中包括至少一个非零量化系数的区域,P为正整数;根据P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性,第一量化系数为对当前区域中的第二量化系数的全部或部分进行奇偶性隐藏后得到的量化系数;确定第一量化系数对应的目标上下文模型,并使用目标上下文模型,对基于上下文编码的第一量化系数进行解码,得到解码后的第一量化系数;根据第一量化系数中奇偶性被隐藏的量化系数的奇偶性和第一量化系数,确定具有奇偶性的第二量化系数。本申请实施例,通过当前区域中P个量化系数,对当前区域中的第二量化系数的部分或全部奇偶性进行隐藏,得到第一量化系数,并对第一量化系数进行编码,可以减少编码所需的比特数,降低视频压缩代价。另外,本申请实 施例对于第一量化系数重新确定目标上下文模型进行解码,可以实现对奇偶性被隐藏的第一量化系数的准确解码。
在上述图6所示实施例的基础上,下面结合图11,对本申请实施例提供的视频解码方法做进一步说明。
图11为本申请一实施例提供的视频解码方法流程示意图。如图11所示，本申请实施例的方法包括：
S601、解码码流,得到至少一个标志。
上述至少一个标志用于指示是否允许量化系数的奇偶性被隐藏。
可选的,上述至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志中的至少一个。
具体的,解码端首先解码码流,得到上述至少一个标志,根据这至少一个标志,判断当前块是否允许量化系数的奇偶性被隐藏。
若根据上述至少一个标志,确定当前块不允许量化系数的奇偶性被隐藏时,则执行如下S602和S609的步骤,即解码码流,得到当前块的已解码信息,该已解码信息中包括当前块的量化系数,此时,由于当前块中的量化系数未进行奇偶性隐藏,则执行进行后续的反量化过程。
若根据上述至少一个标志,确定当前块允许量化系数的奇偶性被隐藏时,则执行如下S602至S608的步骤。
S602、解码码流,得到当前块的已解码信息。
上述当前块的已解码信息中包括当前块中的量化系数。
示例性的，解码端首先解码标识1到5，接着按照从最后一个非零系数至变换块左上角的扫描顺序，解码每个量化系数的绝对值超出15的部分，最终得到当前块中各量化系数。
S603、将当前块划分为N个区域。
本申请实施例对当前块的区域划分方式不做限制,例如,根据解码扫描顺序,将当前块划分为N个区域,或者根据当前块中像素点的空间位置,将当前块划分为N个区域。
上述N个区域的大小可以相同,也可以不同,本申请实施例对此不做限制。
上述S603的具体实现过程可以参照上述S401-A2的相关描述,在此不再赘述。
S604、从当前块的已解码信息中,得到当前区域中的P个量化系数。
其中,当前区域为当前块的N个区域中待解码的一个区域。
上述当前块的已解码信息中包括当前块中的量化系数,这样可以从当前块的已解码信息中,得到当前区域所包括的P个量化系数。
在一些实施例中,上述P个量化系数为当前区域的所有量化系数。
在一些实施例中,上述P个量化系数为当前区域的部分量化系数。
上述S604的具体实现过程可以参照上述S401-A3的相关描述,在此不再赘述。
S605、判断当前区域是否满足条件。
该条件包括下面的预设条件和奇偶性隐藏技术开启条件中的至少一个。
本申请实施例中,对本申请提出的奇偶性隐藏技术进行限定。具体是,若当前区域满足设定的条件时,则说明当前区域使用本申请实施例提供的奇偶性隐藏技术可以得到显著的有益效果,此时,执行如下S606至S609的步骤。若当前区域不满足设定的条件时,说明当前区域使用本申请实施例提供的奇偶性隐藏技术无法达到显著的有益效果,此时,执行如下S610的步骤。
下面对奇偶性隐藏技术开启条件进行介绍。
在一些实施例中,可以根据量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个的不同,预设不同的奇偶性隐藏技术开启条件。
例如,若量化系数大于或等于某一预设值时,开启奇偶性隐藏技术,若量化系数小于某一预设值时,不开启奇偶性隐藏技术。
再例如,若变换类型为预设变换类型时,则开启奇偶性隐藏技术,若变换类型不为预设变换类型时,不开启奇偶性隐藏技术。例如,当前块的变换类型为第一变换类型时,则确定第一量化系数的奇偶性不允许被隐藏,该第一变换类型用于指示当前块的至少一个方向跳过变换,具体参照上述表2所示。
再例如,若变换块大小大于或等于预设大小时,则开启奇偶性隐藏技术,若变换块大小小于预设大小时,不开启奇偶性隐藏技术。
再例如,若当前块的颜色分量为第一分量时,则确定第一量化系数的奇偶性不允许被隐藏,即不开启奇偶性隐藏技术,若当前块的颜色分量不是第一分量时,则确定允许第一量化系数的奇偶性被隐藏,即开启奇偶性隐藏技术。本申请实施例对第一分量不做限制,在一种可能的实现方式中,第一分量为色度分量,也就是说,在亮度分量下,开启奇偶性隐藏技术,在色度分量下,不开启奇偶性隐藏技术。
再例如,若变换块使用预设扫描类型时,则开启奇偶性隐藏技术,若变换块不适用预设扫描类型时,不开启奇偶性隐藏技术。本申请对预设扫描类型不做限制,例如预设扫描类型为ZigZag扫描或对角线扫描。
需要说明的是，上述各举例只是一种示例性说明，上述各示例可以相互组合作为奇偶性隐藏技术开启条件。
下面对预设条件进行介绍。
本申请实施例对上述预设条件的具体内容不做限制,可以根据实际需要设定。
在一种可能的实现方式中,上述预设条件包括如下至少一个条件:
条件1,当前区域中非零量化系数的个数大于第一预设数值;
条件2,当前区域中,解码扫描顺序上的第一个非零量化系数与最后一个非零量化系数之间距离大于第二预设数值;
条件3,当前区域中,解码扫描顺序上的第一个非零量化系数与最后一个量化系数之间距离大于第三预设数值;
条件4,当前区域中,非零量化系数的绝对值之和大于第四预设数值;
条件5，当前块的颜色分量为第二分量，可选的，第二分量为亮度分量；
条件6,当前块的变换类型不是第一变换类型,该第一变换类型用于指示当前块的至少一个方向跳过变换。
在一些实施例中,上述6个条件还可以相互组合,形成新的约束条件。
本申请实施例对上述第一预设数值至第四预设数值的具体取值不做限制，只要第一预设数值至第四预设数值均为正整数即可。
在一种示例中,上述第一预设数值、第二预设数值、第三预设数值和第四预设数值中的至少一个为固定值。
在另一种示例中,上述第一预设数值、第二预设数值、第三预设数值和第四预设数值中的至少一个为非固定值,即为编码端根据当前编码信息确定的数值。
在一些实施例中,若第一预设数值、第二预设数值、第三预设数值和第四预设数值中的至少一个为非固定值时,则编码端将该非固定值写入码流。
在一些实施例中,解码端可以先根据第一量化系数对应的量化参数、变换类型、变换块大小和颜色分量中的至少一个,在确定第一量化系数允许量化系数的奇偶性被隐藏时,再判断当前区域是否满足上述预设条件。
在一些实施例中,解码端可以在确定当前区域满足上述预设条件时,判断第一量化系数是否满足该奇偶性隐藏技术开启条件。
也就是说,上述预设条件,与上述奇偶性隐藏技术开启条件可以单独使用,也可以相互组合使用。
S606、根据P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性。
例如,根据P个量化系数对应的奇偶性,确定奇偶性被隐藏的量化系数的奇偶性。
示例的,根据P个量化系数的第一绝对值之和的奇偶性,确定奇偶性被隐藏的量化系数的奇偶性。
示例的,根据P个量化系数中目标量化系数的个数奇偶性,确定奇偶性被隐藏的量化系数的奇偶性。
上述目标量化系数为P个量化系数中的非零量化系数、值为偶数的非零量化系数、值为偶数的量化系数、值为奇数的量化系数中的任意一个。
上述S606的具体实现过程可以参照上述S402的相关描述,在此不再赘述。
S607、确定第一量化系数对应的目标上下文模型,并使用目标上下文模型,对基于上下文编码的第一量化系数进行解码,得到解码后的第一量化系数。
可选的,第一量化系数对应的目标上下文模型与奇偶性未被隐藏的其他量化系数对应的上下文模型中的至少一个上下文模型相同。
上述S607的实现过程,参照上述S403的具体描述,在此不再赘述。
S608、根据奇偶性被隐藏的量化系数的奇偶性和解码后的第一量化系数,确定第二量化系数。
上述S608的实现过程,参照上述S404的具体描述,在此不再赘述。
S609、根据第二量化系数,确定当前块的量化系数。
根据上述方法,确定出当前块各区域中奇偶性被隐藏的量化系数,结合解码出的当前块中奇偶性未被隐藏的其他量化系数,得到当前块的量化系数。
S610、根据当前块的已解码信息,确定当前块的量化系数。
若当前区域不包括奇偶性被隐藏的量化系数,则将码流中解码出的量化系数,作为当前块最终的量化系数。
S611、根据当前块的量化系数,得到当前块的重建值。
根据上述方法,确定出当前块的量化系数,通过如下两种方式,确定当前块的重建值。
方式1，若编码端跳过变换步骤，直接对当前块的残差值进行量化。对应的，解码端对当前块的量化系数进行反量化，得到当前块的残差值。另外，采用帧内预测和/或帧间预测方法，确定出当前块的预测值，将当前块的预测值与残差值相加，得到当前块的重建值。
方式2，若编码端未跳过变换步骤，即编码端对当前块的残差值进行变换，得到变换系数，对变换系数进行量化。对应的，解码端对当前块的量化系数进行反量化，得到当前块的变换系数，对变换系数进行反变换，得到当前块的残差值。另外，采用帧内预测和/或帧间预测方法，确定出当前块的预测值，将当前块的预测值与残差值相加，得到当前块的重建值。
本申请实施例,为奇偶性隐藏技术设置条件,在当前区域满足条件时,确定当前区域中至少一个量化系数的奇偶性被隐藏,进而采用本申请提出的奇偶性隐藏技术对当前区域中的第一量化系数进行解码,提高解码准确性。
示例性的，本申请实施例的解码详细过程如下表5和表6所示：
表5
上表5为在序列头中增加标识位从而控制是否对当前序列打开奇偶隐藏技术。
enable_parityhiding等于1表示开启奇偶隐藏技术,enable_parityhiding等于0表示关闭奇偶隐藏技术,如果该语法元素没出现,其默认值为零。
表6
上表6为系数解码时的流程，变量enable_ph的值根据序列头中enable_parityhiding的值，以及当前使用的量化参数QP大小、当前变换块大小、当前变换块的颜色分量、当前变换块的扫描类型、当前变换块的变换类型中的至少一个确定，进而解码端根据该变量enable_ph的值，确定是否使用奇偶隐藏技术。plane_type==PLANE_TYPE_Y表示当前分量为亮度分量。SBBSIZE表示每个区域包含的位置个数，PHTHRESH为判断当前区域中是否打开奇偶隐藏技术的阈值，例如，SBBSIZE和PHTHRESH分别是16和3。
在一些实施例中，若本申请实施例中，奇偶性隐藏技术的开启条件包括enable_parityhiding等于1、当前区域包括的非零量化系数的个数大于阈值、当前分量为亮度分量、当前块的变换类型为2D类型，且不是2D类型中的IDTX，如表2所示，IDTX指示水平方向和竖直方向均跳过变换操作。此时，上述表6所示的语法表内isHidePar变量的值获取条件变为如表7所示：
表7
isHidePar=(enable_ph  
&&num_nz>PHTHRESH  
&&plane_type==PLANE_TYPE_Y  
&&tx_class==TX_CLASS_2D  
&&tx_type!=IDTX)  
其中,tx_class==TX_CLASS_2D表示当前变换类型为2D变换(见表2)。
tx_type!=IDTX表示当前变换类型不为identity transform,其中identity transform可以理解为跳过变换。
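下面给出对表7条件的一段示意性Python代码草图，用于判断当前区域是否使用奇偶隐藏技术。函数的参数名为说明用的假设，阈值3沿用上文PHTHRESH的示例取值。

def is_hide_parity(enable_ph, num_nz, plane_is_luma, tx_class_2d, tx_is_idtx, phthresh=3):
    # 表7条件的示意：序列级开关打开、区域内非零系数个数超过阈值、当前为亮度分量、
    # 变换类型为2D且不是IDTX(即未跳过变换)时，才对该区域隐藏奇偶性
    return (enable_ph and num_nz > phthresh and plane_is_luma
            and tx_class_2d and not tx_is_idtx)

# 用法示意
print(is_hide_parity(True, 5, True, True, False))   # True
print(is_hide_parity(True, 2, True, True, False))   # False，非零系数个数未超过阈值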
在一些实施例中，若本申请实施例中，奇偶性隐藏技术的开启条件包括enable_parityhiding等于1、当前区域包括的非零量化系数的个数大于阈值、当前块的变换类型为2D类型，且不是2D类型中的IDTX。此时，上述表6所示的语法表内isHidePar变量的值获取条件变为如表8所示：
表8
isHidePar=(enable_ph  
&&num_nz>PHTHRESH  
&&tx_class==TX_CLASS_2D  
&&tx_type!=IDTX)  
本申请实施例对确定isHidePar变量的值具体获取条件不做限制,具体根据实际需要确定。
上述isHidePar变量的值获取条件可以理解为上述S605中所述的预设条件。
需要说明的是,上述表6为一种示例,示出了根据当前区域中的前15个量化系数(即P个量化系数),确定当前区域中的第16个量化系数(即第一量化系数)的过程。
上述表6示出的解码过程主要包括如表9所示的步骤:
表9
循环1
解码每个区域前15个系数的标识1~5
根据前15个系数中非零系数个数判断第16个系数是否隐藏了奇偶
解码第16个系数
循环2
解码非零系数的正负号和大于15的部分(若是隐藏奇偶的系数,则解码大于30的部分)
本申请实施例,解码端在解码码当前区域时,首先判断当前区域是否满足条件,以确定当前区域采用本申请提供的奇偶性隐藏技术进行解码时,是否可以带来显著的有益技术效果,在确定采用该奇偶性隐藏技术具有显著的有益效果时,采用本申请的技术方案对当前区域进行解码,提高解码可靠性。
上文对本申请实施例涉及的视频解码方法进行了描述，在此基础上，下面针对编码端，对本申请涉及的视频编码方法进行描述。
图12为本申请实施例提供的视频编码方法的一种流程示意图。本申请实施例的执行主体可以为上述图1和图2所示的编码器。
如图12所示,本申请实施例的方法包括:
S701、将当前块划分为N个区域,N为正整数。
本申请实施例,将当前块划分为一个或多个区域,例如划分为N个区域,N为正整数。为了降低编码代价,可以根据同一个区域中量化系数相关的奇偶性,例如根据该区域中量化系数的绝对值之和的奇偶性,对该区域中一个或多个量化系数的奇偶性进行隐藏,以降低奇偶性被隐藏的量化系数,例如,第二量化系数的值为a1,对第二量化系数的全部或部分进行奇偶性隐藏,得到第一量化系数a2,a2小于a1,这样对a2进行编码相比于对a1进行编码,使用更少的比特数,降低了编码代价。
本申请实施例对当前块的区域划分方式不做限制。
可选的,这N个区域中至少两个区域所包括的量化系数个数相同。
可选的,这N个区域中至少两个区域所包括的量化系数个数不相同。
在一些实施例中,上述S701中将当前块划分为N个区域的具体方式包括但不限于如下几种:
方式1,根据扫描顺序,将当前块划分为N个区域。
在一种示例中,按照扫描方向,将当前块中每隔M个非零量化系数划分为一个区域,得到N个区域。这N个区域中每个区域均包括M个非零量化系数。这N个区域中至少一个区域包括一个或多个奇偶性可隐藏的第二量化系数。在该示例中,若最后一个区域所包括的非零量化系数不为M时,则将该最后一个区域划分为单独的一个区域,或者,将该最后一个区域与上一个区域合并为一个区域。
在另一种示例中,按照扫描方向,将当前块中每隔K个像素点划分为一个区域,得到N个区域。例如,对于一个使用反向ZigZag扫描顺序的8x8大小的变换块,当每个区域大小相等,即每个区域包含16个系数时,图7所示,将当前块划分为4个区域。在该示例中,若最后一个区域所包括的量化系数不为K时,则将该最后一个区域划分为单独的一个区域,或者,将该最后一个区域与上一个区域合并为一个区域。
方式2,根据空间位置,将当前块划分为N个区域。
在一种示例中,上述N个区域为当前块的子块,例如将当前块平均划分为N个子块,示例性的,每个子块的大小为4×4。
在另一种示例中,根据当前块中像素点的空间位置关系,将空间位置上相邻的多个像素点划分为一个子块,每个子块中至少包括一个非零量化系数。
本申请实施例中,将当前块划分为N个区域的方法除上述各示例外,还可以包括其他的方法,本申请实施例对此不做限制。
在一些实施例中，可以通过标志来指示当前块是否允许采用本申请实施例提供的量化系数奇偶性被隐藏的技术。在一些实施例中，本申请实施例提供的量化系数奇偶性被隐藏的技术也称为奇偶性隐藏技术。
示例性的,设置的至少一个标志可以是不同级别的标志,用于指示对应级别是否允许量化系数的奇偶性被隐藏。
可选的,上述至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志中的至少一个。
例如,上述至少一个标志包括序列级标志,该序列级标志用于指示当前序列是否允许量化系数的奇偶性被隐藏。
可选的,若至少一个标志包括序列级标志时,该序列级标志可以位于序列头(sequence header)中。
再例如,上述至少一个标志包括图像级标志,该图像级标志用于指示当前图像是否允许量化系数的奇偶性被隐藏。
可选的,若至少一个标志包括图像级标志时,该图像级标志可以位于图像头(picture header)中。
再例如,上述至少一个标志包括片级标志,该片级标志用于指示当前片(slice)是否允许量化系数的奇偶性被隐藏。
可选的,若至少一个标志包括片级标志时,该片级标志可以位于片头(slice header)中。
再例如,上述至少一个标志包括单元级标志,该单元级标志用于指示当前CTU是否允许量化系数的奇偶性被隐藏。
再例如,上述至少一个标志包括块级标志,该块级标志用于指示当前块是否允许量化系数的奇偶性被隐藏。
这样,编码端首先得到上述至少一个标志,根据这至少一个标志,判断当前块是否允许量化系数的奇偶性被隐藏。若根据上述至少一个标志,确定当前块不允许量化系数的奇偶性被隐藏时,则跳过本申请实施例的方法。若根据上述至少一个标志,确定当前块允许使用本申请实施例提供的奇偶性隐藏技术时,则执行本申请实施例的方法。
在一些实施例中,本申请实施例提供的量化系数奇偶隐藏技术,与目标变换方式互斥,其中目标变换方式包括二次变换或多次变换等。此时,编码端在确定当前块采用目标变换方式进行变换时,则跳过本申请实施例提供的技术方案。
S702、确定当前区域中的第二量化系数,对第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数。
其中,当前区域为N个区域中包括至少一个非零量化系数的区域。
本申请实施例,为了降低编码代价,对该第二量化系数的全部或部分进行奇偶性隐藏,得到第一量化系数,并将该第一量化系数编入码流,而不是将该第二量化系数编码码流。由于第一量化系数通常小于第二量化系数,因此,对第一量化系数进行编码,相比于编码第二量化系数,所需要的比特数减少,进而降低了编码代价。
本申请实施例对第二量化系数的全部或部分进行奇偶性隐藏,得到第一量化系数的方式不做限制。
在一些实施例中,可根据第二量化系数的奇偶性,选择不同的方式,确定第一量化系数,具体如情况1和情况2所示。
情况1，若第二量化系数为奇数，则采用第三运算方式对第二量化系数中的部分或全部进行运算，得到第一量化系数。
本申请实施例对上述第三运算方式的具体形式不做限制。
方式1,第三运算方式包括第一量化系数等于第二量化系数的部分或全部的值加一除以二。
情况2,若第二量化系数为偶数,则采用第四运算方式对第二量化系数的部分或全部进行运算,得到第一量化系数。
本申请实施例对上述第四运算方式的具体形式不做限制。
示例性的,第四运算方式包括第一量化系数等于第二量化系数的部分或全部值除以二。
在一些实施例中,则编码端采用预设运算方式对第二量化系数的部分或全部进行运算,得到第一量化系数。
本申请对预设运算方式的具体取值不做限制,例如预设运算方式为第二量化系数的部分或全部值除以大于1的正整数。
示例性的,预设运算方式包括第二量化系数中奇偶性待隐藏部分的值除以二取整,其中取整可以理解为取商值操作,例如第二量化系数的奇偶性待隐藏部分值为7,7除以2取整为3。
例如,编码端可以根据如下公式(13),确定出第一量化系数:
qIdx=C，若C≤n；qIdx=(C-n)//2+n，若C>n     (13)
其中,C为第二量化系数,qIdx为第一量化系数,(C-n)为第二量化系数中奇偶性被隐藏的量化系数。
由上述公式(13)可知,编码端对第二量化系数大于n的部分进行奇偶性隐藏,对于小于n的部分不进行奇偶性隐藏。
若该第二量化系数C大于n时,则说明该第二量化系数中包括奇偶性待隐藏部分,则对该第二量化系数中的奇偶性待隐藏部分(C-n)与2进行整除,得到奇偶性隐藏部分(C-n)//2,接着,将该奇偶性隐藏部分(C-n)//2与第二量化系数中奇偶性未被隐藏部分n的和,确定为第一量化系数。
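下面给出按公式(13)进行奇偶性隐藏的一段示意性Python代码草图，仅为公式的直接翻译；被隐藏的奇偶性需另由当前区域中P个量化系数的特性来指示，此处不再重复。

def hide_parity(c, n):
    # 公式(13)的示意：对第二量化系数C中大于n的部分做奇偶性隐藏，得到第一量化系数qIdx
    if c <= n:
        return c
    return (c - n) // 2 + n                       # 大于n的部分整除2后与未隐藏部分n相加

# 用法示意：n=10、C=45时得到qIdx=35//2+10=27，与解码端按公式(12)的重建互为逆过程
print(hide_parity(45, 10))                        # 27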
S703、确定第一量化系数对应的目标上下文模型,使用目标上下文模型,对第一量化系数进行编码,得到码流。
本申请实施例中,第一量化系数为对当前区域中的第二量化系数的全部或部分进行奇偶性隐藏得到的量化系数,该第一量化系数中包括奇偶性被隐藏的部分。
编码端在编码第一量化系数之前,首先需要确定第一量化系数对应的上下文模型。
方式一,编码端对于奇偶性被隐藏的量化系数和奇偶性未被隐藏的量化系数,使用不同的上下文模型进行编码。也就是说,第一量化系数对应的上下文模型与奇偶性未被隐藏的其他量化系数对应的上下文模型不同。
本申请实施例的第一量化系数通过一个或多个标识表示,例如,标识1表示0~3的部分,标识2表示3~6的部分,标识3表示6~9的部分,标识4表示9~12的部分,标识5表示12~15的部分。
本申请实施例中,可以为第一量化系数的各标识中的一个或多个标识确定一个上下文模型,以该一个或多个标识进行编码。也就是说,本申请实施例可以为第一量化系数中的目标量化系数确定一个目标上下文模型,并使用该目标上下文模型,对第一量化系数中的目标量化系数进行编码。
可选的,第一量化系数的目标量化系数可以为标识1,或者为标识2至标识5中任意一个标识表示的量化系数,这样,编码端可以确定2个上下文模型,其中一个上下为模型用于编码标识1,另一个上下文模型用于编码标识2至标识5。
方式二，为了降低第一量化系数的编码复杂度，则本申请实施例中第一量化系数对应的目标上下文模型与奇偶性未被隐藏的其他量化系数对应的上下文模型中的至少一个上下文模型相同。也就是说，本申请实施例中，复用已有的奇偶性未被隐藏的其他量化系数对应的上下文模型，来对部分或全部奇偶性被隐藏的量化系数进行编码，进而降低了奇偶性被隐藏的量化系数的编码复杂度。
由上述可知，量化系数可以划分为至少一个部分，例如，将量化系数中标识1表示的0~3的部分称为BR，将量化系数中标识2~5表示的4~15的部分称为LR。
在一些实施例中，量化系数的不同部分所对应的上下文模型不同，此时，上述S703包括如下步骤：
S703-A、获取奇偶性未被隐藏的其他量化系数的目标量化系数对应的多个上下文模型,目标量化系数为量化系数的部分量化系数。
S703-B、从目标量化系数对应的多个上下文模型中,确定第一量化系数的目标量化系数对应的目标上下文模型;
S703-C、使用目标量化系数对应的目标上下文模型,编码第一量化系数的目标量化系数,得到码流。
在方式二中,为了降低编码复杂度,复用奇偶性未被隐藏的其他量化系数对应的上下文模型,对奇偶性隐藏的第一量化系数进行编码。因此,在编码第一量化系数中的目标量化系数时,则使用奇偶性未被隐藏的其他量化系数的目标量化系数对应的上下文模型,对第一量化系数中的目标量化系数进行编码。
为了提高量化系数的编码准确性,则通常为量化系数中的各部分创建多个上下文模型,例如,为BR部分创建R个上下文模型,为LR部分创建Q个上下文模型,其中R、Q均为正整数。
这样在对第一量化系数的目标量化系数进行编码时,首先获取奇偶性未被隐藏的其他量化系数的目标量化系数对应的多个上下文模型,再从这目标量化系数对应的多个上下文模型中,确定该目标量化系数对应的目标上下文模型,并使用该目标上下文模型对第一量化系数中的目标量化系数进行编码。例如,目标量化系数为BR,假设BR对应R个上下文模型,这样,编码端从这R个上下模型中,选出一个上下文模型作为BR对应的目标上下文模型,并使用该BR对应的目标上下文模型对第一量化系数中的BR进行编码。同理,若目标量化系数为LR时,假设LR对应Q个上下文模型,这样,编码端从这Q个上下模型中,选出一个上下文模型作为LR对应的目标上下文模型,并使用该LR目标上下文模型对第一量化系数中的LR进行编码。
上述S703-B中从目标量化系数对应的多个上下文模型中,确定第一量化系数的目标量化系数对应的目标上下文模型的实现方式包括但不限于如下几种:
方式1,将上述目标量化系数对应的多个上下文模型中的任意一个上下文模型,确定为目标量化系数对应的目标上下文模型。
方式2,上述S703-B包括如下S703-B1和S703-B2的步骤:
S703-B1、确定目标量化系数对应的目标上下文模型的索引;
S703-B2、根据索引从目标量化系数对应的多个上下文模型中,确定目标量化系数对应的目标上下文模型。
在该方式2中,上述目标量化系数对应的多个上下文模型中的每个上下文模型包括一个索引,这样编码端可以通过确定目标量化系数对应的目标上下文模型的索引,进而根据索引,从目标量化系数对应的多个上下文模型中选出该目标量化系数对应的目标上下文模型。
本申请实施例对确定目标量化系数对应的上下文模型的索引的具体实现方式不做限制。
本申请实施例中，第一量化系数包括BR，若第一量化系数大于3时，则第一量化系数还包括LR，其中确定BR对应的目标上下文模型的索引和确定LR对应的目标上下文模型的索引的方式不同。下面对确定BR对应的目标上下文模型的索引和确定LR对应的目标上下文模型的索引的过程分别进行介绍。
情况1,若目标量化系数为第一量化系数的BR时,则上述S703-B1包括如下步骤:
S703-B11、根据第一量化系数周围已编码的量化系数的BR之和,确定BR对应的目标上下文模型的索引。
本申请实施例中,第一量化系数的BR与第一量化系数周围已编码的量化系数的BR相关,因此,本申请实施例,根据第一量化系数周围已编码的量化系数的BR之和,确定第一量化系数中BR对应的目标上下文模型的索引。
本申请实施例对上述S703-B11的具体实现方式不做限制。
在一种示例中,将第一量化系数周围已编码的量化系数的BR之和,确定为第一量化系数中BR对应的目标上下文模型的索引。
在另一种示例中,对第一量化系数周围已编码的量化系数的BR之和进行运算处理,得到第一量化系数中BR对应的目标上下文模型的索引。
在一些实施例中，上述S703-B11包括如下步骤：
S703-B111、将第一量化系数周围已编码的量化系数的BR之和、与第一预设值进行相加,得到第一和值;
S703-B112、将第一和值与第一数值进行相除,得到第一比值;
S703-B113、根据第一比值,确定BR对应的目标上下文模型的索引。
在该实施例中,根据第一量化系数周围已编码的量化系数的BR之和,确定第一量化系数中BR对应的目标上下文模型的索引的方法可以是,将第一量化系数周围已编码的量化系数的BR之和、与第一预设值进行相加,得到第一和值,接着,将第一和值与第一数值进行相除,得到第一比值,最后根据该第一比值,确定第一量化系数中BR对应的目标上下文模型的索引。
本申请实施例对第一量化系数的周围已编码的量化系数可以理解为在扫描顺序上位于第一系数周围的J个已编码的量化系数。
在一种示例中,假设J为5,以Zig-Zag scan、Column scan和Row scan这3种扫描方式为例,第一量化系数周围已编码的5个量化系数如图9A至图9C所示,第一量化系数为图中的黑色部分,第一量化系数周围已编码的5个量化系数为灰色部分。也就是说,在确定BR对应的目标上下文模型的索引,参照的是如9A至图9C所示的第一量化系数周围5个已编码的量化系数。
本申请实施例对上述第一预设值和第一数值的具体取值不做限制。
可选的,第一预设值为1。
可选的,第一预设值可以为2。
可选的,第一数值为1。
可选的,第一数值为2。
本申请实施例对上述S703-B113中根据第一比值,确定第一量化系数中BR对应的目标上下文模型的索引的具体方式不做限制。
示例1,将第一比值,确定为第一量化系数中BR对应的目标上下文模型的索引。
示例2,对第一比值进行处理,得到第一量化系数中BR对应的目标上下文模型的索引。本申请实施例对第一比值进行处理的具体方式不做限制。
在示例2的一种可能的实现方式中,上述S703-B113包括:
S703-B113-1、将第一比值与第一预设阈值中的最小值,确定为第二数值;
S703-B113-2、根据第二数值,确定BR对应的目标上下文模型的索引。
在该可能的实现方式中,将第一比值与第一预设阈值进行比较,将第一比值和第一预设阈值中的最小值,确定为第二数值,进而根据该第二数值,确定第一量化系数中BR对应的目标上下文模型的索引。
本申请实施例对上述S703-B113-2中根据第二数值,确定BR对应的目标上下文模型的索引的具体方式不做限制。
例如,将上述确定的第二数值,确定为第一量化系数中BR对应的目标上下文模型的索引。
再例如，确定BR的偏置索引offset_BR，将第二数值和第一量化系数中BR的偏置索引之和，确定为第一量化系数中BR对应的目标上下文模型的索引。
在一种示例中,可以根据上述公式(5),确定第一量化系数中BR对应的目标上下文模型的索引。
本申请实施例对上述第一预设值、第一数值以及第一预设阈值的具体取值不做限制。
在一种可能的实现方式中,假设第一预设值a1为1,第一数值为2,即b1=1,第一预设阈值为4时,则上述公式(5)可以具体表示为上述公式(6)。
在另一种可能的实现方式中,假设第一预设值a1为2,第一数值为4,即b1=2,第一预设阈值为4时,则上述公式(5)可以具体表示为上述公式(7)。
本申请实施例中,在复用原有上下文模型对第一量化系数的BR进行编码时,考虑到奇偶性被隐藏的量化系数与奇偶性未被隐藏的量化系数的大小整体上分布不同,例如奇偶性被隐藏的量化系数大小约为奇偶性未被隐藏的量化系数的一半。因此,本申请实施例中,在复用已有的上下文模型时,对上行文模型索引的确定过程进行调整,以选择适用于奇偶性被隐藏的第一量化系数的目标上下文模型,具体是对第一量化系数的周围量化系数的BR之和除以4,得到第一比值,根据该第一比值,确定第一量化系数中BR对应的目标上下文模型的索引。
进一步的,为了实现整除,在将第一量化系数的周围量化系数的BR之和从公式(6)所示的右移一位,调整为公式(7)所示的右移2位时,将第一预设值a1从公式(6)中的1,调整为公式(7)中的2,以实现四舍五入。
本申请实施例对上述确定BR的偏置索引offset BR的具体方式不做限制。
在一种示例中,根据第一量化系数在当前块中的位置、当前块的大小、当前块的扫描顺序、当前块的颜色分量中的至少一个,确定BR的偏置索引。
例如,若第一量化系数为当前块的左上方位置处的量化系数时,则BR的偏置索引为第一阈值。
再例如，若第一量化系数非当前块的左上方位置处的量化系数时，则BR的偏置索引为第二阈值。
本申请实施例对上述第一阈值和第二阈值的具体取值不做限制。
可选的,在当前块的颜色分量为亮度分量的情况下,则第一阈值为0。
可选的,在当前块的颜色分量为色度分量的情况下,则第一阈值为10。
可选的,在当前块的颜色分量为亮度分量的情况下,则第二阈值为5。
可选的,在当前块的颜色分量为色度分量的情况下,则第二阈值为15。
上文对确定第一量化系数中BR对应的目标上下文模型的索引的具体过程进行介绍。在一些实施例中，若第一量化系数还包括LR部分时，则本申请实施例还需要确定第一量化系数中LR对应的目标上下文模型的索引。
下面对确定第一量化系数中LR对应的目标上下文模型的索引的过程进行介绍。
情况2,若目标量化系数为第一量化系数的LR时,则上述S703-B1包括如下步骤:
S703-B21、根据第一量化系数周围已编码的量化系数的BR与LR之和,确定LR对应的目标上下文模型的索引。
本申请实施例中,第一量化系数的LR与第一量化系数周围已编码的量化系数的BR和LR相关,因此,本申请实施例,根据第一量化系数周围已编码的量化系数的BR与LR之和,确定第一量化系数中LR对应的目标上下文模型的索引。
本申请实施例对上述S703-B21的具体实现方式不做限制。
在一种示例中,将第一量化系数周围已编码的量化系数的BR与LR之和,确定为第一量化系数中LR对应的目标上下文模型的索引。
在另一种示例中,对第一量化系数周围已编码的量化系数的BR与LR之和进行运算处理,得到第一量化系数中LR对应的目标上下文模型的索引。
在一些实施例中，上述S703-B21包括如下步骤：
S703-B211、将第一量化系数周围已编码的量化系数的BR与LR之和、与第二预设值进行相加,得到第二和值;
S703-B212、将第二和值与第三数值进行相除,得到第二比值;
S703-B213、根据第二比值,确定LR对应的目标上下文模型的索引。
在该实施例中,根据第一量化系数周围已编码的量化系数的BR与LR之和,确定第一量化系数中LR对应的目标上下文模型的索引的方法可以是,将第一量化系数周围已编码的量化系数的BR与LR之和、与第二预设值进行相加,得到第二和值,接着,将第二和值与第三数值进行相除,得到第二比值,最后根据该第二比值,确定第一量化系数中LR对应的目标上下文模型的索引。
本申请实施例对第一量化系数的周围已编码的量化系数可以理解为在扫描顺序上位于第一系数周围的J个已编码的量化系数。
在一种示例中,假设J为3,以Zig-Zag scan、Column scan和Row scan这3种扫描方式为例,第一量化系数周围已编码的3个量化系数如图10A至图10C所示,第一量化系数为图中的黑色部分,第一量化系数周围已编码的3个量化系数为灰色部分。也就是说,在确定LR对应的目标上下文模型的索引,参照的是如10A至图10C所示的第一量化系数周围3个已编码的量化系数。
本申请实施例对上述第二预设值和第三数值的具体取值不做限制。
可选的,第二预设值为1。
可选的,第二预设值可以为2。
可选的,第三数值为1。
可选的,第三数值为2。
本申请实施例对上述S703-B213中根据第二比值,确定第一量化系数中LR对应的目标上下文模型的索引的具体方式不做限制。
示例1,将第二比值,确定为第一量化系数中LR对应的目标上下文模型的索引。
示例2,对第二比值进行处理,得到第一量化系数中LR对应的目标上下文模型的索引。本申请实施例对第二比值进行处理的具体方式不做限制。
在示例2的一种可能的实现方式中，上述S703-B213包括：
S703-B213-1、将第二比值与第二预设阈值中的最小值,确定为第四数值;
S703-B213-2、根据第四数值,确定LR对应的目标上下文模型的索引。
在该可能的实现方式中,将第二比值与第二预设阈值进行比较,将第二比值和第二预设阈值中的最小值,确定为第四数值,进而根据该第四数值,确定第一量化系数中LR对应的目标上下文模型的索引。
本申请实施例对上述S703-B213-2中根据第四数值,确定LR对应的目标上下文模型的索引的具体方式不做限制。
例如,将上述确定的第四数值,确定为第一量化系数中LR对应的目标上下文模型的索引。
再例如，确定LR的偏置索引offset_LR，将第四数值和第一量化系数中LR的偏置索引之和，确定为第一量化系数中LR对应的目标上下文模型的索引。
在一种示例中,可以根据上述公式(8),确定第一量化系数中LR对应的目标上下文模型的索引。
本申请实施例对上述第二预设值、第三数值以及第二预设阈值的具体取值不做限制。
在一种可能的实现方式中,假设第二预设值a2为1,第三数值为2,即b2=1,第二预设阈值为6时,则上述公式(8)可以具体表示为上述公式(9)。
在另一种可能的实现方式中，假设第二预设值a2为2，第三数值为4，即b2=2，第二预设阈值为6时，则上述公式(8)可以具体表示为上述公式(10)。
本申请实施例中,在复用原有上下文模型对第一量化系数的LR进行编码时,考虑到奇偶性被隐藏的量化系数与奇偶性未被隐藏的量化系数的大小整体上分布不同,例如奇偶性被隐藏的量化系数大小约为奇偶性未被隐藏的量化系数的一半。因此,本申请实施例中,在复用已有的上下文模型时,对上行文模型索引的确定过程进行调整,以选择适用于奇偶性被隐藏的第一量化系数的目标上下文模型,具体是对第一量化系数的周围量化系数的小于16部分之和除以4,得到第二比值,根据该第二比值,确定第一量化系数中LR对应的目标上下文模型的索引。
进一步的，为了实现整除，在将第一量化系数的周围量化系数的小于16部分之和从公式(9)所示的右移一位，调整为公式(10)所示的右移2位时，将第二预设值a2从公式(9)中的1，调整为公式(10)中的2，以实现四舍五入。
本申请实施例对上述确定LR的偏置索引offset LR的具体方式不做限制。
在一种示例中,根据第一量化系数在当前块中的位置、当前块的大小、当前块的扫描顺序、当前块的颜色分量中的至少一个,确定LR的偏置索引。
例如,若第一量化系数为当前块的左上方位置处的量化系数时,则LR的偏置索引为第三阈值。
再例如，若第一量化系数非当前块的左上方位置处的量化系数时，则LR的偏置索引为第四阈值。
本申请实施例对上述第三阈值和第四阈值的具体取值不做限制。
可选的,若当前块的颜色分量为亮度分量时,则第三阈值为0。
可选的,若当前块的颜色分量为色度分量时,则第三阈值为14。
可选的,若当前块的颜色分量为亮度分量时,则第四阈值为7。
可选的,若当前块的颜色分量为色度分量时,则第四阈值为21。
在一些实施例中,第一量化系数的上下文模型索引值的计算如下表4所示。
根据上述步骤,确定出第一量化系数中目标量化系数对应的目标上下文模型的索引后,根据该索引,从目标量化系数对应的多个上下文模型中,确定出第一量化系数中目标量化系数对应的目标上下文模型。例如,根据第一量化系数中BR对应的目标上下文模型的索引,从获取的奇偶性未被隐藏的其他量化系数的BR部分对应的多个上下文模型中,得到第一量化系数中BR对应的目标上下文模型,根据第一量化系数中LR对应的目标上下文模型的索引,从获取的奇偶性未被隐藏的其他量化系数的LR部分对应的多个上下文模型中,得到第一量化系数中LR对应的目标上下文模型。
由上述可知,上述多个上下文模型包括多个QP段(例如4个QP段)、2个分量(例如亮度分量和色度分量)下的不同索引的上下文模型,这样,上述S703-B2中根据索引从目标量化系数对应的多个上下文模型中,确定目标量化系数对应的目标上下文模型,包括:根据第一量化系数对应的量化参数、变换类型、变换块大小和颜色分量中的至少一个,从上述目标量化系数对应的多个上下文模型中,选出至少一个上下文模型;接着,将至少一个下上文模型中与该索引对应的上下文模型,确定为目标量化系数对应的目标上下文模型。
例如,根据上述步骤,确定出第一量化系数中的BR(即标识1)对应的索引为索引1,第一量化系数对应的QP段为QP段1,且第一量化系数为亮度分量下的量化系数。假设,标识1对应的上行文模型为S个,则首先从这S个上下文模型中,选出QP段1和亮度分量下的T个上下文模型,接着,将这T个上下文模型中索引1对应的上下文模型,确定为第一量化系数中的BR对应的目标上下文模型,进而使用该目标上下文模型对第一量化系数中的BR进行编码。
再例如,根据上述步骤,确定出第一量化系数中的LR(即标识2至标识5)对应的索引为索引2,第一量化系数对应的QP段为QP段1,且第一量化系数为亮度分量下的量化系数。假设,LR对应的上行文模型为U个,则首先从这U个上下文模型中,选出QP段1和亮度分量下的V个上下文模型,接着,将这V个上下文模型中索引2对应的上下文模型,确定为第一量化系数中的标识2至标识5对应的目标上下文模型,进而使用该目标上下文模型对第一量 化系数中的LR进行编码。
在一些实施例中,编码端在从多个上下文模型中,确定目标上下文模型之前,需要对这多个上下文模型进行初始化,接着,从这些初始化后的多个上下文模型中,确定出目标上下文模型。
例如,对BR对应的多个上下文模型进行初始化,再从这些初始化后的多个上下文模型中,选择BR对应的目标上下文模型。
再例如,对LR对应的多个上下文模型进行初始化,再从这些初始化后的多个上下文模型中,选择LR对应的目标上下文模型。
也就是说,本申请实施例为了使奇偶性被隐藏系数对应的上下文模型中的概率值能够更快收敛,每个模型可以根据每个符号出现的概率,有一组对应的初始值。在AVM中,每一个上下文模型的概率根据该模型中的符号个数和每个符号出现的概率和累计分布函数计算该上下文模型中出现符号小于某个值的概率,该概率值通过归一化用一个16bit的整数来表示。
本申请实施例中,对多个上下文模型进行初始化的方式包括但不限于如下几种:
方式1,使用等概率值对多个上下文模型进行初始化。例如,一个4符号的上下文模型,符号0,1,2,3出现的概率都为0.25,则出现小于1符号的概率为0.25,小于2符号的概率为0.5,小于3符号的概率为0.75,小于4符号的概率为1,将概率进行16bit取整得到出现小于1符号的概率为8192,小于2符号的概率为16384,小于3符号的概率为24576,小于4符号的概率为32768,这四个整数值构成该上下文模型的概率初始值。
方式2,使用收敛概率值对多个上下文模型进行初始化,其中收敛概率值为使用上下文模型对测试视频进行编码时,上下文模型对应的收敛概率值。示例性的,首先,给这些上下文模型设定一个初始的概率值例如等概率。接着,使用某些目标码率对应的量化系数对一系列测试视频进行编码,并将这些上下文模型最终的收敛概率值作为这些上下文模型的初始值。重复上述步骤可得到新的收敛值,选取合适的收敛值作为最终的初始值。
根据上述方法,确定出第一量化系数中目标量化系数对应的目标上下文模型后,使用该目标上下文模型对第一量化系数的目标量化系数进行编码,得到码流。
示例性的,根据第一量化系数中BR对应的目标上下文模型的索引,从获取的奇偶性未被隐藏的其他量化系数的BR部分对应的多个上下文模型中,得到第一量化系数中BR对应的目标上下文模型,并使用第一量化系数中BR对应的目标上下文模型,对第一量化系数中的BR进行编码,得到编码后的BR,进而根据编码后的BR,得到码流。
在一些实施例中,若第一量化系数只包括BR部分,不包括LR部分时,则将编码后的BR,作为码流输出。
在一些实施例中,若第一量化系数还包括LR部分时,则根据第一量化系数中LR对应的目标上下文模型的索引,从获取的奇偶性未被隐藏的其他量化系数的LR部分对应的多个上下文模型中,得到第一量化系数中LR对应的目标上下文模型。接着,编码端使用LR对应的目标上下文模型,编码所述第一量化系数中的LR,得到编码后的LR,并根据编码后的BR和编码后的LR,得到码流。
例如,若第一量化系数不包括大于15的部分时,则将编码后的BR和编码后的LR形成的比特流作为码流输出。
再例如,若第一量化系数还包括大于15的部分时,编码得到第一量化系数中大于15的部分,进而将编码后的BR、编码后的LR以及大于15部分的编码值组成的比特流作为码流输出。
本申请实施例,通过当前区域中P个量化系数对该第一量化系数中奇偶性被隐藏的量化系数的奇偶性进行指示。
例如,使用当前区域中P个量化系数的一种二值特性(0或1),指示奇偶性被隐藏的量化系数的奇偶性。
在一些实施例中,通过当前区域中P个量化系数对应的奇偶性指示奇偶性被隐藏的量化系数的奇偶性。
在一种示例中,上述P个量化系数为当前区域中的所有量化系数。
在另一种示例中,上述P个量化系数为当前区域中的部分量化系数。
示例性的,P个量化系数对应的奇偶性可以是P个量化系数中最小量化系数、P个量化系数中最大量化系数、P个量化系数的绝对值之和、P个量化系数中目标量化系数的个数等等的奇偶性。
本申请实施例使用当前区域中P个量化系数对应的奇偶性指示奇偶性被隐藏的量化系数的奇偶性,例如,使用P个量化系数对应的奇偶性为奇数,来指示奇偶性被隐藏的量化系数为奇数,使用P个量化系数对应的奇偶性为偶数,来指示奇偶性被隐藏的量化系数为偶数。
在一些实施例中,若P个量化系数对应的奇偶性,与奇偶性被隐藏的量化系数的奇偶性不一致时,则本申请实施例的方法还包括如下步骤1:
步骤1、对P个量化系数进行调整,以使P个量化系数对应的奇偶性,与奇偶性被隐藏的量化系数的奇偶性一致。
情况1,若奇偶性被隐藏的量化系数的奇偶性通过P个量化系数的第一绝对值之和的奇偶性进行指示,则上述步骤1包括:对P个量化系数的值进行调整,以使P个量化系数的第一绝对值之和的奇偶性,与奇偶性被隐藏的量化系数的奇偶性一致。
其中,量化系数的第一绝对值为量化系数的部分或全部绝对值,例如,量化系数的第一绝对值为小于15部分的绝对值。
上述情况1包括如下两种示例:
示例1,若奇偶性被隐藏的量化系数为奇数,且P个量化系数的第一绝对值之和为偶数时,则调整P个量化系数的值,以使调整后的P个量化系数的第一绝对值之和为奇数。
例如,若奇偶性被隐藏的量化系数为奇数,而P个量化系数的第一绝对值之和为偶数,此时,将P个量化系数中最小量化系数加1或减1,以将P个量化系数的第一绝对值之和修改为奇数。
示例2,若奇偶性被隐藏的量化系数为偶数,且P个量化系数的第一绝对值之和为奇数时,则调整P个量化系数的取值,以使调整后的P个量化系数的第一绝对值之和为偶数。
例如,奇偶性被隐藏的量化系数为偶数,而P个量化系数的第一绝对值之和为奇数,此时,将P个量化系数中最小量化系数加1或减1,以将P个量化系数的第一绝对值之和修改为偶数。
情况2,若奇偶性被隐藏的量化系数的奇偶性通过P个量化系数中目标量化系数的个数的奇偶性进行指示,则上述步骤1包括:对P个量化系数的值进行调整,以使P个量化系数中目标量化系数个数的奇偶性,与奇偶性被隐藏的量化系数的奇偶性一致。
上述情况2至少包括如下两种示例：
示例1，若奇偶性被隐藏的量化系数为奇数，且P个量化系数中目标量化系数的个数为偶数，则调整P个量化系数的取值，以使调整后的P个量化系数中目标量化系数的个数为奇数。
示例2，若奇偶性被隐藏的量化系数为偶数，且P个量化系数中目标量化系数的个数为奇数，则调整P个量化系数的取值，以使调整后的P个量化系数中目标量化系数的个数为偶数。
上述目标量化系数为P个量化系数中的非零量化系数、值为偶数的非零量化系数、值为偶数的量化系数、值为奇数的量化系数中的任意一个。
在一种示例中,若目标量化系数为P个量化系数中的非零量化系数,则编码端对P个量化系数中的至少一个系数进行修改。例如,奇偶性被隐藏的量化系数为奇数,而P个量化系数中非零量化系数的个数为偶数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个量化系数的值调整为0,以使P个量化系数中非零量化系数的个数为奇数。
再例如,奇偶性被隐藏的量化系数为偶数,而P个量化系数中非零量化系数的个数为奇数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数的值调整为0,以使P个量化系数中非零量化系数的个数为偶数。
在另一种示例中,若目标量化系数为P个量化系数中值为偶数的非零量化系数,则编码端对P个量化系数中的至少一个系数进行修改。例如,奇偶性被隐藏的量化系数为奇数,而P个量化系数中值为偶数的非零量化系数的个数为偶数,则可以将P个量化系数中的一个为零的量化系数调整为2,或者,将P个量化系数中最小的一个非零量化系数的值加1或减1,以使P个量化系数中值为偶数的非零量化系数的个数为奇数。
再例如,奇偶性被隐藏的量化系数为偶数,而P个量化系数中值为偶数的非零量化系数的个数为奇数,则可以将P个量化系数中的一个为零的量化系数调整为2,或者,将P个量化系数中最小的一个非零量化系数的值加1或减1,以使P个量化系数中非零量化系数的个数为偶数。编码端使用率失真代价最小的调整方式,对P个量化系数中的至少一个系数进行调整。
在另一种示例中,若目标量化系数为P个量化系数中值为偶数的量化系数,则编码端对P个量化系数中的至少一个系数进行修改。例如,奇偶性被隐藏的量化系数为奇数,而P个量化系数中值为偶数的量化系数的个数为偶数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数调整为0,以使P个量化系数中值为偶数的量化系数的个数为奇数。
再例如,奇偶性被隐藏的量化系数为偶数,而P个量化系数中值为偶数的量化系数的个数为奇数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数调整为0,以使P个量化系数中量化系数的个数为偶数。
在另一种示例中,若目标量化系数为P个量化系数中值为奇数的量化系数,则编码端对P个量化系数中的至少一个系数进行修改。例如,奇偶性被隐藏的量化系数为奇数,而P个量化系数中值为奇数的量化系数的个数为偶数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数调整为0或加1或减1,以使P个量化系数中值为奇数的量化系数的个数为奇数。
再例如,奇偶性被隐藏的量化系数为偶数,而P个量化系数中值为奇数的量化系数的个数为奇数,则可以将P个量化系数中的一个为零的量化系数调整为1,或者,将P个量化系数中最小的一个非零量化系数调整为0或加1或减1,以使P个量化系数中量化系数的个数为偶数。
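以上述情况1(用P个量化系数的第一绝对值之和的奇偶性进行指示)为例，下面给出编码端调整P个量化系数以匹配目标奇偶性的一段示意性Python代码草图。其中对调整对象与调整方向的选择仅为简单示意，实际编码器会按率失真代价最小的方式进行选择。

def adjust_for_parity(p_coeffs, target_parity):
    # 若P个量化系数的第一绝对值(此处取min(|c|,15))之和的奇偶性与target_parity不一致，
    # 则示意性地将绝对值最小的非零系数的绝对值加1，使和的奇偶性翻转
    coeffs = list(p_coeffs)
    if sum(min(abs(c), 15) for c in coeffs) % 2 == target_parity:
        return coeffs
    idx = min((i for i, c in enumerate(coeffs) if c != 0),
              key=lambda i: abs(coeffs[i]), default=0)
    coeffs[idx] += 1 if coeffs[idx] >= 0 else -1
    return coeffs

# 用法示意：[2, -3, 0, 7]的第一绝对值之和为12(偶)，需指示奇数时将2调整为3
print(adjust_for_parity([2, -3, 0, 7], 1))        # [3, -3, 0, 7]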
进一步的,将本申请提出的奇偶性隐藏技术实施在AVM参考软件上,使用的区域大小设置为16,奇偶性隐藏的开启非零系数个数阈值为4。当前区域中非零系数小于等于15部分的绝对值和为奇数,则表示该区域扫描顺序上最后一个系数(即第二量化系数)为奇数,为偶数表示该区域扫描顺序上最后一个系数(即第二量化系数)为偶数。
在一种示例中,若本申请的奇偶性隐藏技术的开启条件是,变换块的至少一个方向为identity transform时,则不开启奇偶性隐藏技术,变换块的两个方向均不是identity transform时,开启奇偶性隐藏技术,亮度分量下开启奇偶性隐藏技术,色度分量下不开启奇偶性隐藏技术。在该条件下,在通用测试条件CTC下对分辨率从270p~4K的序列分别进行all intra(全帧内)、random access(随机访问)和low delay(低延时)配置下进行测试,得到编解码性能变化分别如表10、表11和表12所示:
表10:all intra
表11:random access
表12:low delay
上述表10至表12中的PSNR为峰值信噪比(Peak signal-to-noise ratio),表示信号最大可能功率和影响它的表示精度的破坏性噪声功率的比值。由于许多信号都有非常宽的动态范围,峰值信噪比常用对数分贝单位来表示。在视频编解码中,PSNR用于来评价一幅图像压缩后和原图像相比质量的好坏,PSNR越高,压缩后失真越小。
上述表中的负号“-”表示性能增益。
由上述表10至表12可知,在变换块的至少一个方向为identity transform时,则不开启奇偶性隐藏技术,以及亮度分量下开启奇偶性隐藏技术,在色度分量下不开启奇偶性隐藏技术的条件下,在采用本申请实施例的奇偶性隐藏技术,在all intra(全帧内)、random access(随机访问)和low delay(低延时)配置下分别进行测试时,性能均有显著提升。
在另一种示例中,若变换块的至少一个方向为identity transform时,则不开启奇偶性隐藏技术,变换块的两个方向均不是identity transform时,开启奇偶性隐藏技术,且复用奇偶性未被隐藏的量化系数对应的上下文模型对奇偶性被隐藏的量化系数进行编码。在该条件下,在通用测试条件CTC下对分辨率从270p~4K的序列分别进行all intra(全帧内)和random access(随机访问)配置下进行测试,得到编解码性能变化分别如表13和表14所示:
表13:all intra
表14:random access
由上述表13和表14可知,在变换块的至少一个方向为identity transform时,则不开启奇偶性隐藏技术,以及复用奇偶性未被隐藏的量化系数对应的上下文模型对奇偶性被隐藏的量化系数进行编码的条件下,在采用本申请实施例 的奇偶性隐藏技术,在all intra(全帧内)和random access(随机访问)配置下分别进行测试时,性能均有显著提升。
本申请实施例提供的视频编码方法,编码端将当前块划分为N个区域,确定当前区域中的第二量化系数,并对第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,当前区域为N个区域中包括至少一个非零量化系数的区域;接着,确定第一量化系数对应的目标上下文模型,使用目标上下文模型,对第一量化系数进行编码,得到码流,第一量化系数中奇偶性被隐藏的量化系数的奇偶性通过当前区域中P个量化系数进行指示。即本申请实施例,编码端对第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,由于第一量化系数的绝对值小于第二量化系数,这样对第一量化系数进行编码相比于第二量化系数,可以减少编码比特数,进而降低编码代价。另外,通过单独确定第一量化系数对应的目标上下文模型对第一量化系数进行编码,提高第一量化系数的编码准确性。
图13为本申请实施例提供的视频编码方法的一种流程示意图。
在上述图12所示实施例的基础上,下面结合图13,对本申请实施例提供的视频编码方法做进一步说明。
图13为本申请一实施例提供的视频编码方法流程示意图。如图13所示,本申请实施例的方法包括:
S801、获得至少一个标志。
上述至少一个标志用于指示是否允许量化系数的奇偶性被隐藏。
可选的,至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志中的至少一个。
若根据至少一个标志,确定允许当前块的量化系数的奇偶性被隐藏时,则执行如下步骤S802。
若根据至少一个标志,确定不允许当前块的量化系数的奇偶性被隐藏时,则执行如下步骤S806。
S802、将当前块划分为N个区域。
本申请实施例对当前块的区域划分方式不做限制,例如,根据扫描顺序,将当前块划分为N个区域,或者根据当前块中像素点的空间位置,将当前块划分为N个区域。
上述N个区域的大小可以相同,也可以不同,本申请实施例对此不做限制。
上述S802的具体实现过程可以参照上述S701的相关描述,在此不再赘述。
S803、判断当前区域是否满足条件。
该条件包括下面的预设条件和奇偶性隐藏技术开启条件中的至少一个。
本申请实施例中,对本申请提出的奇偶性隐藏技术进行限定。具体是,若当前区域满足条件时,则说明当前区域使用本申请实施例提供的奇偶性隐藏技术可以得到显著的有益效果,此时,执行如下S804的步骤。
若当前区域不满足条件时,说明当前区域使用本申请实施例提供的奇偶性隐藏技术无法达到显著的有益效果,此时,执行如下S806的步骤。
下面对奇偶性隐藏技术开启条件进行介绍。
在一些实施例中,可以根据量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个的不同,预设不同的奇偶性隐藏技术开启条件。
例如,若量化系数大于或等于某一预设值时,开启奇偶性隐藏技术,若量化系数小于某一预设值时,不开启奇偶性隐藏技术。
再例如,若变换类型为预设变换类型时,则开启奇偶性隐藏技术,若变换类型不为预设变换类型时,不开启奇偶性隐藏技术。例如,当前块的变换类型为第一变换类型时,则确定第一量化系数的奇偶性不允许被隐藏,该第一变换类型用于指示当前块的至少一个方向跳过变换,具体参照上述表2所示。
再例如,若变换块大小大于或等于预设大小时,则开启奇偶性隐藏技术,若变换块大小小于预设大小时,不开启奇偶性隐藏技术。
再例如,若当前块的颜色分量为第一分量时,则确定第一量化系数的奇偶性不允许被隐藏,即不开启奇偶性隐藏技术,若当前块的颜色分量不是第一分量时,则确定允许第一量化系数的奇偶性被隐藏,即开启奇偶性隐藏技术。本申请实施例对第一分量不做限制,在一种可能的实现方式中,第一分量为色度分量,也就是说,在亮度分量下,开启奇偶性隐藏技术,在色度分量下,不开启奇偶性隐藏技术。
再例如,若变换块使用预设扫描类型时,则开启奇偶性隐藏技术,若变换块不适用预设扫描类型时,不开启奇偶性隐藏技术。本申请对预设扫描类型不做限制,例如预设扫描类型为ZigZag扫描或对角线扫描。
需要说明的是，上述各举例只是一种示例性说明，上述各示例可以相互组合作为奇偶性隐藏技术开启条件。
下面对预设条件进行介绍。
本申请实施例对上述预设条件的具体内容不做限制,可以根据实际需要设定。
在一种可能的实现方式中,上述预设条件包括如下至少一个条件:
条件1,当前区域中非零量化系数的个数大于第一预设数值;
条件2,当前区域中,解码扫描顺序上的第一个非零量化系数与最后一个非零量化系数之间距离大于第二预设数值;
条件3,当前区域中,解码扫描顺序上的第一个非零量化系数与最后一个量化系数之间距离大于第三预设数值;
条件4,当前区域中,非零量化系数的绝对值之和大于第四预设数值;
条件5，当前块的颜色分量为第二分量，可选的，第二分量为亮度分量；
条件6,当前块的变换类型不是第一变换类型,该第一变换类型用于指示当前块的至少一个方向跳过变换。
在一些实施例中,上述6个条件还可以相互组合,形成新的约束条件。
本申请实施例对上述第一预设数值至第四预设数值的具体取值不做限制，只要第一预设数值至第四预设数值均为正整数即可。
在一种示例中,上述第一预设数值、第二预设数值、第三预设数值和第四预设数值中的至少一个为固定值。
在另一种示例中,上述第一预设数值、第二预设数值、第三预设数值和第四预设数值中的至少一个为非固定值,即为编码端根据当前编码信息确定的数值。
在一些实施例中,若第一预设数值、第二预设数值、第三预设数值和第四预设数值中的至少一个为非固定值时,则编码端将该非固定值写入码流,解码端从码流中解析出该非固定值。
在一些实施例中,编码端可以先根据第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,在确定第一量化系数允许量化系数的奇偶性被隐藏时,再判断当前区域是否满足上述预设条件。
在一些实施例中,编码端可以在确定当前区域满足上述预设条件时,判断第一量化系数是否满足该奇偶性隐藏技术开启条件。
也就是说,上述预设条件,与上述奇偶性隐藏技术开启条件可以单独使用,也可以相互组合使用。
S804、确定当前区域中的第二量化系数,并对第二量化系数的全部或部分进行奇偶性隐藏,得到第一量化系数。
S805、确定第一量化系数对应的目标上下文模型,并使用目标上下文模型对第一量化系数进行编码,得到码流。
可选的,第一量化系数对应的目标上下文模型与奇偶性未被隐藏的其他量化系数对应的上下文模型中的至少一个上下文模型相同。
上述S804和S805的步骤,可以参照上述S702和S703的描述,在此不再赘述。
S806、对第二量化系数进行编码,得到码流。
本申请实施例,编码端在编码当前区域时,首先判断当前区域是否满足条件,来确定当前区域采用本申请提供的奇偶性隐藏技术进行编码时,是否可以带来显著的有益技术效果,在确定采用该奇偶性隐藏技术具有显著的有益效果时,采用本申请的技术方案对当前区域进行编码,提高编码可靠性。
应理解,图6至图13仅为本申请的示例,不应理解为对本申请的限制。
以上结合附图详细描述了本申请的优选实施方式,但是,本申请并不限于上述实施方式中的具体细节,在本申请的技术构思范围内,可以对本申请的技术方案进行多种简单变型,这些简单变型均属于本申请的保护范围。例如,在上述具体实施方式中所描述的各个具体技术特征,在不矛盾的情况下,可以通过任何合适的方式进行组合,为了避免不必要的重复,本申请对各种可能的组合方式不再另行说明。又例如,本申请的各种不同的实施方式之间也可以进行任意组合,只要其不违背本申请的思想,其同样应当视为本申请所公开的内容。
还应理解,在本申请的各种方法实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。另外,本申请实施例中,术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系。具体地,A和/或B可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
上文结合图6至图13,详细描述了本申请的方法实施例,下文结合图14至图17,详细描述本申请的装置实施例。
图14是本申请实施例提供的视频解码装置的示意性框图。
如图14所示,视频解码装置10包括:
解码单元11,用于解码码流,得到当前区域中的P个量化系数,所述当前区域为当前块中包括至少一个非零量化系数的区域,所述P为正整数;
奇偶性确定单元12,用于根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性,所述第一量化系数为对第二量化系数的全部或部分进行奇偶性隐藏后得到的量化系数,所述第二量化系数为所述当前区域中奇偶性未被隐藏的一个量化系数;
处理单元13,用于确定所述第一量化系数对应的目标上下文模型,并使用所述目标上下文模型,对基于上下文编码的所述第一量化系数进行解码,得到解码后的第一量化系数;
确定单元14,用于根据所述奇偶性被隐藏的量化系数的奇偶性和所述解码后的第一量化系数,确定具有奇偶性的第二量化系数。
在一些实施例中,所述第一量化系数对应的目标上下文模型与奇偶性未被隐藏的其他量化系数对应的上下文模型中的至少一个上下文模型相同。
在一些实施例中,处理单元13,具体用于获取所述奇偶性未被隐藏的其他量化系数的目标量化系数对应的多个上下文模型;从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型;使用所述目标量化系数对应的目标上下文模型,解码所述第一量化系数的目标量化系数,得到所述第一量化系数。
在一些实施例中,处理单元13,具体用于确定所述目标量化系数对应的目标上下文模型的索引;根据所述索引从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型。
在一些实施例中，处理单元13，具体用于若所述目标量化系数为所述第一量化系数中标识1表示的基本部分时，则根据所述第一量化系数周围已解码的量化系数的基本部分之和，确定所述基本部分对应的目标上下文模型的索引。
在一些实施例中,处理单元13,具体用于将所述第一量化系数周围已解码的量化系数的基本部分之和、与第一预设值进行相加,得到第一和值;将所述第一和值与第一数值进行相除,得到第一比值;根据所述第一比值,确定所述基本部分对应的目标上下文模型的索引。
可选的,所述第一数值为4。
可选的,所述第一预设值为2。
在一些实施例中,处理单元13,具体用于将所述第一比值与第一预设阈值中的最小值,确定为第二数值;根据所述第二数值,确定所述基本部分对应的目标上下文模型的索引。
可选的,所述第一预设阈值为4。
在一些实施例中,处理单元13,具体用于确定所述基本部分的偏置索引;将所述第二数值和所述基本部分的偏置索引之和,确定为所述基本部分对应的目标上下文模型的索引。
在一些实施例中,处理单元13,具体用于根据所述第一量化系数在所述当前块中的位置、所述当前块的大小、所述当前块的扫描顺序、所述当前块的颜色分量中的至少一个,确定所述基本部分的偏置索引。
在一些实施例中,在所述第一量化系数为所述当前块的左上方位置处的量化系数的情况下,则所述基本部分的偏置索引为第一阈值。
可选的,在所述当前块的颜色分量为亮度分量的情况下,则所述第一阈值为0。
在一些实施例中,在所述第一量化系数非所述当前块的左上方位置处的量化系数的情况下,则所述基本部分的偏置索引为第二阈值。
可选的,若所述当前块的颜色分量为亮度分量时,则所述第二阈值为5。
在一些实施例中,处理单元13,具体用于若所述目标量化系数为所述第一量化系数中标识2至标识5表示的较低部分时,则根据所述第一量化系数周围已解码的量化系数的基本部分与较低部分之和,确定所述较低部分对应的目标上下文模型的索引。
在一些实施例中,处理单元13,具体用于将所述第一量化系数周围已解码的量化系数的基本部分与较低部分之和、与第二预设值进行相加,得到第二和值;将所述第二和值与第三数值进行相除,得到第二比值;根据所述第二比值,确定所述较低部分对应的目标上下文模型的索引。
可选的,所述第三数值为4。
可选的,所述第二预设值为2。
在一些实施例中,处理单元13,具体用于将所述第二比值与第二预设阈值中的最小值,确定为第四数值;根据所述第四数值,确定所述较低部分对应的目标上下文模型的索引。
可选的,所述第二预设阈值为4。
在一些实施例中,处理单元13,具体用于确定所述较低部分的偏置索引;将所述第四数值和所述较低部分的偏置索引之和,确定为所述较低部分对应的目标上下文模型的索引。
在一些实施例中,处理单元13,具体用于根据所述第一量化系数在所述当前块中的位置、所述当前块的大小、所述当前块的扫描顺序、所述当前块的颜色分量中的至少一个,确定所述较低部分的偏置索引。
在一些实施例中,在所述第一量化系数为所述当前块的左上方位置处的量化系数的情况下,则所述较低部分的偏置索引为第三阈值。
可选的,在所述当前块的颜色分量为亮度分量的情况下,则所述第三阈值为0。
在一些实施例中,在所述第一量化系数非所述当前块的左上方位置处的量化系数的情况下,则所述较低部分的偏置索引为第四阈值。
可选的,在所述当前块的颜色分量为亮度分量的情况下,则所述第四阈值为7。
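下面给出一段示意性的代码草图(并非规范实现),按上文处理单元13的描述示意解码端基本部分与较低部分目标上下文模型索引的一种推导方式。其中假设第一预设值、第二预设值取2,第一数值、第三数值取4(编码端的示例中第一数值、第三数值为2),第一预设阈值、第二预设阈值取4,左上方位置处的偏置索引取0,其余位置在亮度分量下基本部分取5、较低部分取7;“相除”以整除示意,邻域之和等输入由调用者根据周围已解码的量化系数统计,均为本示例的假设。

```python
def ctx_index_base(neighbor_base_sum, is_top_left, offset_other=5):
    """基本部分(标识1)的目标上下文模型索引。"""
    ratio = (neighbor_base_sum + 2) // 4          # (基本部分之和 + 第一预设值) / 第一数值
    value = min(ratio, 4)                         # 与第一预设阈值取最小值,得到第二数值
    offset = 0 if is_top_left else offset_other   # 基本部分的偏置索引(以亮度分量为例)
    return value + offset

def ctx_index_lower(neighbor_base_lower_sum, is_top_left, offset_other=7):
    """较低部分(标识2至标识5)的目标上下文模型索引。"""
    ratio = (neighbor_base_lower_sum + 2) // 4    # (基本+较低部分之和 + 第二预设值) / 第三数值
    value = min(ratio, 4)                         # 与第二预设阈值取最小值,得到第四数值
    offset = 0 if is_top_left else offset_other   # 较低部分的偏置索引(以亮度分量为例)
    return value + offset
```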
在一些实施例中,处理单元13,具体用于使用所述第一量化系数中标识1表示的基本部分对应的目标上下文模型,解码所述第一量化系数中的基本部分,得到解码后的基本部分;根据所述解码后的基本部分,确定所述第一量化系数。
在一些实施例中,处理单元13,具体用于若所述第一量化系数还包括标识2至标识5表示的较低部分时,则使用所述较低部分对应的目标上下文模型,解码所述第一量化系数中的较低部分,得到解码后的较低部分;根据所述解码后的基本部分和解码后的较低部分,确定所述第一量化系数。
在一些实施例中,处理单元13,具体用于根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,从所述目标量化系数对应的多个上下文模型中,选出至少一个上下文模型;将所述至少一个上下文模型中与所述索引对应的上下文模型,确定为所述目标量化系数对应的目标上下文模型。
在一些实施例中,处理单元13,还用于对所述目标量化系数对应的多个上下文模型进行初始化;从初始化后的所述目标量化系数对应的多个上下文模型中,确定所述目标量化系数对应的目标上下文模型。
在一些实施例中,处理单元13,具体用于使用等概率值对所述目标量化系数对应的多个上下文模型进行初始化;或者,使用收敛概率值对所述目标量化系数对应的多个上下文模型进行初始化,其中所述收敛概率值为使用上下文模型对测试视频进行编码时,所述上下文模型对应的收敛概率值。
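下面是一段示意性的初始化草图(并非规范实现),示意“等概率初始化”与“收敛概率初始化”两种方式。其中 ContextModel 类及其概率表示方式均为本示例的假设,实际熵编码引擎中的概率状态表示可能不同。

```python
class ContextModel:
    """极简的上下文模型占位实现:仅记录符号取1的概率估计。"""
    def __init__(self, p_one=0.5):
        self.p_one = p_one

def init_context_models(num_models, converged_probs=None):
    if converged_probs is None:
        # 方式一:使用等概率值初始化
        return [ContextModel(0.5) for _ in range(num_models)]
    # 方式二:使用对测试视频编码时统计得到的收敛概率值初始化
    return [ContextModel(p) for p in converged_probs]
```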
在一些实施例中,奇偶性确定单元12,还用于根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,确定是否允许所述第一量化系数的奇偶性被隐藏;在确定允许所述第一量化系数的奇偶性被隐藏时,根据所述P个量化系数,确定所述奇偶性被隐藏的量化系数的奇偶性。
在一些实施例中,奇偶性确定单元12,具体用于若所述当前块的变换类型为第一变换类型时,则确定所述第一量化系数的奇偶性不允许被隐藏,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
在一些实施例中,奇偶性确定单元12,具体用于若所述当前块的颜色分量为第一分量时,则确定所述第一量化系数的奇偶性不允许被隐藏。
可选的,所述第一分量为色度分量。
在一些实施例中,奇偶性确定单元12,具体用于根据所述P个量化系数对应的奇偶性,确定所述奇偶性被隐藏的量化系数的奇偶性。
在一些实施例中,确定单元14,具体用于使用预设运算方式对所述奇偶性被隐藏的量化系数进行运算,得到第一运算结果;根据所述奇偶性,对所述第一运算结果进行处理,得到第二运算结果;根据所述第二运算结果和所述第一量化系数,得到所述第二量化系数。
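作为一个示意性的草图(并非规范实现),假设编码端对整个第二量化系数采用“除以二取整”的方式隐藏其奇偶性,且被隐藏的奇偶性由当前区域中P个量化系数之和的奇偶性指示,则确定单元14恢复第二量化系数的过程大致如下;其中以系数绝对值(level)为例,函数名与奇偶性的具体指示方式均为本示例的假设。

```python
def hidden_parity(p_levels):
    """由P个量化系数(绝对值)之和的奇偶性推导被隐藏的奇偶性(假设性指示方式)。"""
    return sum(p_levels) & 1

def recover_second_level(decoded_first_level, p_levels):
    parity = hidden_parity(p_levels)
    # 预设运算(乘2)得到第一运算结果,再按奇偶性修正得到第二运算结果,
    # 即恢复出具有奇偶性的第二量化系数(绝对值)
    return decoded_first_level * 2 + parity
```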
奇偶性确定单元12,具体用于若所述当前区域满足预设条件时,则根据所述P个量化系数,确定所述奇偶性被隐藏的量化系数的奇偶性。
在一些实施例中,所述预设条件包括如下至少一个:
所述当前区域中非零量化系数的个数大于第一预设数值;
所述当前区域中,解码扫描顺序上的第一个非零量化系数与最后一个非零量化系数之间距离大于第二预设数值;
所述当前区域中,解码扫描顺序上的第一个非零量化系数与最后一个量化系数之间距离大于第三预设数值;
所述当前区域中,非零量化系数的绝对值之和大于第四预设数值;
所述当前块的颜色分量为第二分量;
所述当前块的变换类型不是第一变换类型,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
可选的,所述第二分量为亮度分量。
在一些实施例中,奇偶性确定单元12,还用于若所述当前块采用目标变换方式进行变换时,则跳过根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性的步骤。
在一些实施例中,所述目标变换方式包括二次变换、多次变换或第一变换类型,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
在一些实施例中,奇偶性确定单元12,还用于解码所述码流,得到至少一个标志,所述至少一个标志用于指示是否允许量化系数的奇偶性被隐藏;若根据所述至少一个标志,确定允许所述当前块中至少一个量化系数的奇偶性被隐藏时,根据所述P个量化系数,确定所述奇偶性被隐藏的量化系数的奇偶性。
可选的,所述至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志中的至少一个。
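下面是一段示意性的标志判断草图(并非规范实现):按序列级、图像级、片级、单元级、块级逐级检查是否允许奇偶性隐藏,任一层级不允许即整体不允许;其中各标志的名称与缺省行为均为本示例的假设。

```python
def parity_hiding_allowed(flags):
    """flags 形如 {'sequence': 1, 'picture': 1, 'slice': 1, 'unit': 1, 'block': 1};
    未出现的层级在本示例中视为允许(假设性的缺省行为)。"""
    levels = ('sequence', 'picture', 'slice', 'unit', 'block')
    return all(flags.get(level, 1) == 1 for level in levels)
```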
在一些实施例中,所述第一量化系数为所述当前区域中位于扫描顺序中第K个位置的非零量化系数,所述K小于或等于所述当前区域中非零量化系数个数。
在一些实施例中,解码单元11,具体用于解码所述码流,得到所述当前块的已解码信息;将所述当前块划分为N个区域,所述N为正整数;从所述当前块的已解码信息中,得到所述当前区域中的P个量化系数。
应理解,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,图14所示的视频解码装置10可以对应于执行本申请实施例的方法中的相应主体,并且视频解码装置10中的各个单元的前述和其它操作和/或功能分别为了实现本申请各个方法实施例中的相应流程,为了简洁,在此不再赘述。
图15是本申请实施例提供的视频编码装置的示意性框图。
如图15所示,视频编码装置20包括:
划分单元21,用于将当前块划分为N个区域,所述N为正整数;
处理单元22,用于确定所述当前区域中的第二量化系数,并对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,所述当前区域为N个区域中包括至少一个非零量化系数的区域;
编码单元23,用于确定所述第一量化系数对应的目标上下文模型,使用所述目标上下文模型,对所述第一量化系数进行编码,得到码流,所述第一量化系数中奇偶性被隐藏的量化系数的奇偶性通过所述当前区域中P个量化系数进行指示,所述P为正整数。
在一些实施例中,所述第一量化系数对应的目标上下文模型与奇偶性未被隐藏的其他量化系数对应的上下文模型中的至少一个上下文模型相同。
在一些实施例中,编码单元23,具体用于获取所述奇偶性未被隐藏的其他量化系数的目标量化系数对应的多个上下文模型;从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型;使用所述目标量化系数的目标上下文模型,编码所述第一量化系数的目标量化系数,得到所述码流。
在一些实施例中,编码单元23,具体用于确定所述目标量化系数对应的目标上下文模型的索引;根据所述索引从所述目标量化系数对应的多个上下文模型中,确定所述目标量化系数对应的目标上下文模型。
在一些实施例中,编码单元23,具体用于若所述目标量化系数为所述第一量化系数中标识1表示的基本部分时,则根据所述第一量化系数周围已编码的量化系数的基本部分之和,确定所述基本部分对应的目标上下文模型的索引。
在一些实施例中,编码单元23,具体用于将所述第一量化系数周围已编码的量化系数的基本部分之和、与第一预设值进行相加,得到第一和值;将所述第一和值与第一数值进行相除,得到第一比值;根据所述第一比值,确定所述基本部分对应的目标上下文模型的索引。
可选的,所述第一数值为2。
可选的,所述第一预设值为2。
在一些实施例中,编码单元23,具体用于将所述第一比值与第一预设阈值中的最小值,确定为第二数值;根据所述第二数值,确定所述基本部分对应的目标上下文模型的索引。
可选的,所述第一预设阈值为4。
在一些实施例中,编码单元23,具体用于确定所述基本部分的偏置索引;将所述第二数值和所述基本部分的偏置索引之和,确定为所述基本部分对应的目标上下文模型的索引。
在一些实施例中,编码单元23,具体用于根据所述第一量化系数在所述当前块中的位置、所述当前块的大小、所述当前块的扫描顺序、所述当前块的颜色分量中的至少一个,确定所述基本部分的偏置索引。
在一些实施例中,在所述第一量化系数为所述当前块的左上方位置处的量化系数的情况下,则所述基本部分的偏置索引为第一阈值。
在一些实施例中,在所述第一量化系数非所述当前块的左上方位置处的量化系数的情况下,则所述基本部分的偏置索引为第二阈值。
可选的,在所述当前块的颜色分量为亮度分量的情况下,则所述第二阈值为5。
在一些实施例中,编码单元23,具体用于若所述目标量化系数为所述第一量化系数中标识2至标识5表示的较低部分时,则根据所述第一量化系数周围已编码的量化系数的基本部分与较低部分之和,确定所述较低部分对应的目标上下文模型的索引。
在一些实施例中,编码单元23,具体用于将所述第一量化系数周围已编码的量化系数的基本部分与较低部分之和、与第二预设值进行相加,得到第二和值;将所述第二和值与第三数值进行相除,得到第二比值;根据所述第二比值,确定所述较低部分对应的目标上下文模型的索引。
可选的,所述第三数值为2。
可选的,所述第二预设值为2。
在一些实施例中,编码单元23,具体用于将所述第二比值与第二预设阈值中的最小值,确定为第四数值;根据所述第四数值,确定所述较低部分对应的目标上下文模型的索引。
可选的,所述第二预设阈值为4。
在一些实施例中,编码单元23,具体用于确定所述较低部分的偏置索引;将所述第四数值和所述较低部分的偏置索引之和,确定为所述较低部分对应的目标上下文模型的索引。
在一些实施例中,编码单元23,具体用于根据所述第一量化系数在所述当前块中的位置、所述当前块的大小、所述当前块的扫描顺序、所述当前块的颜色分量中的至少一个,确定所述较低部分的偏置索引。
在一些实施例中,在所述第一量化系数为所述当前块的左上方位置处的量化系数的情况下,则所述较低部分的偏置索引为第三阈值。
可选的,在所述当前块的颜色分量为亮度分量的情况下,则所述第三阈值为0。
在一些实施例中,在所述第一量化系数非所述当前块的左上方位置处的量化系数的情况下,则所述较低部分的偏置索引为第四阈值。
可选的,在所述当前块的颜色分量为亮度分量的情况下,则所述第四阈值为7。
在一些实施例中,编码单元23,具体用于使用所述第一量化系数中标识1表示的基本部分对应的目标上下文模型,编码所述第一量化系数中的基本部分,得到编码后的基本部分;根据所述编码后的基本部分,得到所述码流。
在一些实施例中,编码单元23,具体用于若所述第一量化系数还包括标识2至标识5表示的较低部分时,则使用所述较低部分对应的目标上下文模型,编码所述第一量化系数中的较低部分,得到编码后的较低部分;根据所述编码后的基本部分和编码后的较低部分,确定所述码流。
在一些实施例中,编码单元23,具体用于根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,从所述目标量化系数对应的多个上下文模型中,选出至少一个上下文模型;将所述至少一个上下文模型中与所述索引对应的上下文模型,确定为所述目标量化系数对应的目标上下文模型。
在一些实施例中,编码单元23,还用于对所述目标量化系数对应的多个上下文模型进行初始化;从初始化后的所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型。
在一些实施例中,编码单元23,具体用于使用等概率值对所述目标量化系数对应的多个上下文模型进行初始化;或者,使用收敛概率值对所述目标量化系数对应的多个上下文模型进行初始化,其中所述收敛概率值为使用上下文模型对测试视频进行编码时,所述上下文模型对应的收敛概率值。
在一些实施例中,处理单元22,具体用于根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,确定是否允许所述第一量化系数的奇偶性被隐藏;在确定允许所述第一量化系数的奇偶性被隐藏时,对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数。
在一些实施例中,处理单元22,具体用于若所述当前块的变换类型为第一变换类型时,则确定所述第一量化系数的奇偶性不允许被隐藏,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
在一些实施例中,处理单元22,具体用于若所述当前块的颜色分量为第一分量时,则确定所述第一量化系数的奇偶性不允许被隐藏。
可选的,所述第一分量为色度分量。
在一些实施例中,所述奇偶性被隐藏的量化系数的奇偶性通过所述当前区域中P个量化系数对应的奇偶性进行指示。
在一些实施例中,若所述P个量化系数对应的奇偶性,与所述奇偶性被隐藏的量化系数的奇偶性不一致时,处理单元22,用于对所述P个量化系数进行调整,以使所述P个量化系数对应的奇偶性,与所述奇偶性被隐藏的量化系数的奇偶性一致。
在一些实施例中,处理单元22,具体用于采用预设运算方式对所述第二量化系数的部分或全部进行处理,得到所述第一量化系数。
在一些实施例中,所述预设运算方式包括所述第二量化系数的部分或全部除以二取整。
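下面给出一段示意性的编码端草图(并非规范实现):按“除以二取整”隐藏第二量化系数(以绝对值为例)的奇偶性,并在当前区域中P个量化系数之和的奇偶性与被隐藏的奇偶性不一致时,对其中一个量化系数作+1调整使二者一致。实际编码器通常结合率失真代价选择被调整的系数及调整方向,此处的调整策略与函数名均为本示例的假设。

```python
def hide_parity(second_level):
    """对第二量化系数(绝对值)进行奇偶性隐藏,返回第一量化系数及需指示的奇偶性。"""
    first_level = second_level // 2   # 预设运算方式:除以二取整
    parity = second_level & 1         # 被隐藏、需通过其余系数指示的奇偶性
    return first_level, parity

def align_indicated_parity(p_levels, target_parity):
    """若P个量化系数之和的奇偶性与目标奇偶性不一致,则调整其中一个系数(仅为示意)。"""
    if (sum(p_levels) & 1) == target_parity:
        return list(p_levels)                 # 奇偶性已一致,无需调整
    adjusted = list(p_levels)
    for i in range(len(adjusted) - 1, -1, -1):
        if adjusted[i] != 0:
            adjusted[i] += 1                  # 将一个非零系数加1以翻转和的奇偶性
            return adjusted
    adjusted[-1] = 1                          # 全零时的兜底处理(仅为示意)
    return adjusted
```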
在一些实施例中,处理单元22,具体用于若所述当前区域满足预设条件时,则对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数。
在一些实施例中,所述预设条件包括如下至少一个:
所述当前区域中非零量化系数的个数大于第一预设数值;
所述当前区域中,扫描顺序上的第一个非零量化系数与最后一个非零量化系数之间距离大于第二预设数值;
所述当前区域中,扫描顺序上的第一个非零量化系数与最后一个量化系数之间距离大于第三预设数值;
所述当前区域中,非零量化系数的绝对值之和大于第四预设数值;
所述当前块的颜色分量为第二分量;
所述当前块的变换类型不是第一变换类型,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
可选的,所述第二分量为亮度分量。
在一些实施例中,处理单元22,还用于若所述当前块采用目标变换方式进行变换时,则跳过对所述第二量化系数的部分或全部进行奇偶性隐藏得到第一量化系数的步骤。
在一些实施例中,所述目标变换方式包括二次变换、多次变换或第一变换类型,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
在一些实施例中,处理单元22,还用于获得至少一个标志,所述至少一个标志用于指示是否允许量化系数的奇偶性被隐藏;若根据所述至少一个标志,确定允许所述当前块的量化系数的奇偶性被隐藏时,对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数。
可选的,所述至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志中的至少一个。
在一些实施例中,所述第一量化系数为所述当前区域中位于扫描顺序中第K个位置的非零量化系数,所述K小于或等于所述当前区域中非零量化系数个数。
应理解,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,图15所示的视频编码装置20可以对应于执行本申请实施例的方法中的相应主体,并且视频编码装置20中的各个单元的前述和其它操作和/或功能分别为了实现本申请各个方法实施例中的相应流程,为了简洁,在此不再赘述。
上文中结合附图从功能单元的角度描述了本申请实施例的装置和系统。应理解,该功能单元可以通过硬件形式实现,也可以通过软件形式的指令实现,还可以通过硬件和软件单元组合实现。具体地,本申请实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路和/或软件形式的指令完成,结合本申请实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件单元组合执行完成。可选地,软件单元可以位于随机存储器,闪存、只读存储器、可编程只读存储器、电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法实施例中的步骤。
图16是本申请实施例提供的电子设备的示意性框图。
如图16所示,该电子设备30可以为本申请实施例所述的视频编码器,或者视频解码器,该电子设备30可包括:
存储器33和处理器32,该存储器33用于存储计算机程序34,并将该程序代码34传输给该处理器32。换言之,该处理器32可以从存储器33中调用并运行计算机程序34,以实现本申请实施例中的方法。
例如,该处理器32可用于根据该计算机程序34中的指令执行上述方法中的步骤。
在本申请的一些实施例中,该处理器32可以包括但不限于:
通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等等。
在本申请的一些实施例中,该存储器33包括但不限于:
易失性存储器和/或非易失性存储器。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。
在本申请的一些实施例中,该计算机程序34可以被分割成一个或多个单元,该一个或者多个单元被存储在该存储器33中,并由该处理器32执行,以完成本申请提供的方法。该一个或多个单元可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述该计算机程序34在该电子设备30中的执行过程。
如图16所示,该电子设备30还可包括:
收发器33,该收发器33可连接至该处理器32或存储器33。
其中,处理器32可以控制该收发器33与其他设备进行通信,具体地,可以向其他设备发送信息或数据,或接收其他设备发送的信息或数据。收发器33可以包括发射机和接收机。收发器33还可以进一步包括天线,天线的数量可以为一个或多个。
应当理解,该电子设备30中的各个组件通过总线系统相连,其中,总线系统除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。
图17是本申请实施例提供的视频编解码系统40的示意性框图。
如图17所示,该视频编解码系统40可包括:视频编码器41和视频解码器42,其中视频编码器41用于执行本申请实施例涉及的视频编码方法,视频解码器42用于执行本申请实施例涉及的视频解码方法。
本申请还提供了一种计算机存储介质,其上存储有计算机程序,该计算机程序被计算机执行时使得该计算机能够执行上述方法实施例的方法。或者说,本申请实施例还提供一种包含指令的计算机程序产品,该指令被计算机执行时使得计算机执行上述方法实施例的方法。
当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,该计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如数字视频光盘(digital video disc,DVD))、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。例如,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以该权利要求的保护范围为准。

Claims (103)

  1. 一种视频解码方法,其特征在于,包括:
    解码码流,得到当前区域中的P个量化系数,所述当前区域为当前块中包括至少一个非零量化系数的区域,所述P为正整数;
    根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性,所述第一量化系数为对第二量化系数的全部或部分进行奇偶性隐藏后得到的量化系数,所述第二量化系数为所述当前区域中奇偶性未被隐藏的一个量化系数;
    确定所述第一量化系数对应的目标上下文模型,并使用所述目标上下文模型,对基于上下文编码的所述第一量化系数进行解码,得到解码后的第一量化系数;
    根据所述奇偶性被隐藏的量化系数的奇偶性和所述解码后的第一量化系数,确定所述第二量化系数。
  2. 根据权利要求1所述的方法,其特征在于,所述第一量化系数对应的目标上下文模型、与奇偶性未被隐藏的其他量化系数对应的上下文模型中的至少一个上下文模型相同。
  3. 根据权利要求2所述的方法,其特征在于,所述确定所述第一量化系数对应的目标上下文模型,包括:
    获取所述奇偶性未被隐藏的其他量化系数的目标量化系数对应的多个上下文模型,所述目标量化系数为量化系数的部分量化系数;
    从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型;
    所述使用所述目标上下文模型,对基于上下文编码的所述第一量化系数进行解码,得到解码后的第一量化系数,包括:
    使用所述目标量化系数对应的目标上下文模型,解码所述第一量化系数的目标量化系数,得到所述解码后的第一量化系数。
  4. 根据权利要求3所述的方法,其特征在于,所述从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型,包括:
    确定所述目标量化系数对应的目标上下文模型的索引;
    根据所述索引从所述目标量化系数对应的多个上下文模型中,确定所述目标量化系数对应的目标上下文模型。
  5. 根据权利要求4所述的方法,其特征在于,所述确定所述目标量化系数对应的目标上下文模型的索引,包括:
    若所述目标量化系数为所述第一量化系数中标识1表示的基本部分时,则根据所述第一量化系数周围已解码的量化系数的基本部分之和,确定所述基本部分对应的目标上下文模型的索引。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述第一量化系数周围已解码的量化系数的基本部分之和,确定所述基本部分对应的目标上下文模型的索引,包括:
    将所述第一量化系数周围已解码的量化系数的基本部分之和、与第一预设值进行相加,得到第一和值;
    将所述第一和值与第一数值进行相除,得到第一比值;
    根据所述第一比值,确定所述基本部分对应的目标上下文模型的索引。
  7. 根据权利要求6所述的方法,其特征在于,所述第一数值为4。
  8. 根据权利要求6所述的方法,其特征在于,所述第一预设值为2。
  9. 根据权利要求7所述的方法,其特征在于,所述根据所述第一比值,确定所述基本部分对应的目标上下文模型的索引,包括:
    将所述第一比值与第一预设阈值中的最小值,确定为第二数值;
    根据所述第二数值,确定所述基本部分对应的目标上下文模型的索引。
  10. 根据权利要求9所述的方法,其特征在于,所述第一预设阈值为4。
  11. 根据权利要求9所述的方法,其特征在于,所述根据所述第二数值,确定所述基本部分对应的目标上下文模型的索引,包括:
    确定所述基本部分的偏置索引;
    将所述第二数值和所述基本部分的偏置索引之和,确定为所述基本部分对应的目标上下文模型的索引。
  12. 根据权利要求11所述的方法,其特征在于,所述确定所述基本部分的偏置索引,包括:
    根据所述第一量化系数在所述当前块中的位置、所述当前块的大小、所述当前块的扫描顺序、所述当前块的颜色分量中的至少一个,确定所述基本部分的偏置索引。
  13. 根据权利要求12所述的方法,其特征在于,在所述第一量化系数为所述当前块的左上方位置处的量化系数的情况下,则所述基本部分的偏置索引为第一阈值。
  14. 根据权利要求13所述的方法,其特征在于,在所述当前块的颜色分量为亮度分量的情况下,则所述第一阈值为0。
  15. 根据权利要求12所述的方法,其特征在于,在所述第一量化系数非所述当前块的左上方位置处的量化系数的情况下,则所述基本部分的偏置索引为第二阈值。
  16. 根据权利要求15所述的方法,其特征在于,在所述当前块的颜色分量为亮度分量的情况下,则所述第二阈值为5。
  17. 根据权利要求4所述的方法,其特征在于,所述确定所述目标量化系数对应的目标上下文模型的索引,包括:
    若所述目标量化系数为所述第一量化系数中标识2至标识5表示的较低部分时,则根据所述第一量化系数周围已解码的量化系数的基本部分和较低部分之和,确定所述较低部分对应的目标上下文模型的索引。
  18. 根据权利要求17所述的方法,其特征在于,所述根据所述第一量化系数周围已解码的量化系数的基本部分与较低部分之和,确定所述较低部分对应的目标上下文模型的索引,包括:
    将所述第一量化系数周围已解码的量化系数的基本部分与较低部分之和、与第二预设值进行相加,得到第二和值;
    将所述第二和值与第三数值进行相除,得到第二比值;
    根据所述第二比值,确定所述较低部分对应的目标上下文模型的索引。
  19. 根据权利要求18所述的方法,其特征在于,所述第三数值为4。
  20. 根据权利要求18所述的方法,其特征在于,所述第二预设值为2。
  21. 根据权利要求18所述的方法,其特征在于,所述根据所述第二比值,确定所述较低部分对应的目标上下文模型的索引,包括:
    将所述第二比值与第二预设阈值中的最小值,确定为第四数值;
    根据所述第四数值,确定所述较低部分对应的目标上下文模型的索引。
  22. 根据权利要求21所述的方法,其特征在于,所述第二预设阈值为4。
  23. 根据权利要求21所述的方法,其特征在于,所述根据所述第四数值,确定所述较低部分对应的目标上下文模型的索引,包括:
    确定所述较低部分的偏置索引;
    将所述第四数值和所述较低部分的偏置索引之和,确定为所述较低部分对应的目标上下文模型的索引。
  24. 根据权利要求23所述的方法,其特征在于,所述确定较低部分的偏置索引,包括:
    根据所述第一量化系数在所述当前块中的位置、所述当前块的大小、所述当前块的扫描顺序、所述当前块的颜色分量中的至少一个,确定所述较低部分的偏置索引。
  25. 根据权利要求24所述的方法,其特征在于,在所述第一量化系数为所述当前块的左上方位置处的量化系数的情况下,则所述较低部分的偏置索引为第三阈值。
  26. 根据权利要求25所述的方法,其特征在于,在所述当前块的颜色分量为亮度分量的情况下,则所述第三阈值为0。
  27. 根据权利要求24所述的方法,其特征在于,在所述第一量化系数非所述当前块的左上方位置处的量化系数的情况下,则所述较低部分的偏置索引为第四阈值。
  28. 根据权利要求27所述的方法,其特征在于,在所述当前块的颜色分量为亮度分量的情况下,则所述第四阈值为7。
  29. 根据权利要求3-28任一项所述的方法,其特征在于,所述使用所述目标量化系数的目标上下文模型,解码所述目标量化系数,得到所述第一量化系数,包括:
    使用所述第一量化系数中标识1表示的基本部分对应的目标上下文模型,解码所述第一量化系数中的基本部分,得到解码后的基本部分;
    根据所述解码后的基本部分,确定所述第一量化系数。
  30. 根据权利要求29所述的方法,其特征在于,所述根据所述解码后的基本部分,确定所述第一量化系数,包括:
    若所述第一量化系数还包括标识2至标识5表示的较低部分时,则使用所述较低部分对应的目标上下文模型,解码所述第一量化系数中的较低部分,得到解码后的较低部分;
    根据所述解码后的基本部分和解码后的较低部分,确定所述第一量化系数。
  31. 根据权利要求4-28任一项所述的方法,其特征在于,所述根据所述索引从所述目标量化系数对应的多个上下文模型中,确定所述目标量化系数对应的目标上下文模型,包括:
    根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,从所述目标量化系数对应的多个上下文模型中,选出至少一个上下文模型;
    将所述至少一个上下文模型中与所述索引对应的上下文模型,确定为所述目标量化系数对应的目标上下文模型。
  32. 根据权利要求3-28任一项所述的方法,其特征在于,所述方法还包括:
    对所述目标量化系数对应的多个上下文模型进行初始化;
    所述从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型,包括:
    从初始化后的所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型。
  33. 根据权利要求32所述的方法,其特征在于,所述对所述目标量化系数对应的多个上下文模型进行初始化,包括:
    使用等概率值对所述目标量化系数对应的多个上下文模型进行初始化;或者,
    使用收敛概率值对所述目标量化系数对应的多个上下文模型进行初始化,其中所述收敛概率值为使用上下文模型对测试视频进行编码时,所述上下文模型对应的收敛概率值。
  34. 根据权利要求1-28任一项所述的方法,其特征在于,所述方法还包括:
    根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,确定是否允许所述第一量化系数的奇偶性被隐藏;
    所述根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性,包括:
    在确定允许所述第一量化系数的奇偶性被隐藏时,根据所述P个量化系数,确定所述奇偶性被隐藏的量化系数的奇偶性。
  35. 根据权利要求34所述的方法,其特征在于,所述根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,确定是否允许所述第一量化系数的奇偶性被隐藏,包括:
    若所述当前块的变换类型为第一变换类型时,则确定所述第一量化系数的奇偶性不允许被隐藏,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
  36. 根据权利要求34所述的方法,其特征在于,所述根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,确定是否允许所述第一量化系数的奇偶性被隐藏,包括:
    若所述当前块的颜色分量为第一分量时,则确定所述第一量化系数的奇偶性不允许被隐藏。
  37. 根据权利要求36所述的方法,其特征在于,所述第一分量为色度分量。
  38. 根据权利要求1-28任一项所述的方法,其特征在于,所述根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性,包括:
    根据所述P个量化系数对应的奇偶性,确定所述奇偶性被隐藏的量化系数的奇偶性。
  39. 根据权利要求1-28任一项所述的方法,其特征在于,所述根据所述奇偶性被隐藏的量化系数的奇偶性和所述解码后的第一量化系数,确定具有奇偶性的第二量化系数,包括:
    使用预设运算方式对所述奇偶性被隐藏的量化系数进行运算,得到第一运算结果;
    根据所述奇偶性,对所述第一运算结果进行处理,得到第二运算结果;
    根据所述第二运算结果和所述第一量化系数,得到所述第二量化系数。
  40. 根据权利要求1-28任一项所述的方法,其特征在于,所述根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性,包括:
    若所述当前区域满足预设条件时,则根据所述P个量化系数,确定所述奇偶性被隐藏的量化系数的奇偶性。
  41. 根据权利要求40所述的方法,其特征在于,所述预设条件包括如下至少一个:
    所述当前区域中非零量化系数的个数大于第一预设数值;
    所述当前区域中,解码扫描顺序上的第一个非零量化系数与最后一个非零量化系数之间距离大于第二预设数值;
    所述当前区域中,解码扫描顺序上的第一个非零量化系数与最后一个量化系数之间距离大于第三预设数值;
    所述当前区域中,非零量化系数的绝对值之和大于第四预设数值;
    所述当前块的颜色分量为第二分量;
    所述当前块的变换类型不是第一变换类型,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
  42. 根据权利要求41所述的方法,其特征在于,所述第二分量为亮度分量。
  43. 根据权利要求1-28任一项所述的方法,其特征在于,所述方法还包括:
    若所述当前块采用目标变换方式进行变换时,则跳过根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性的步骤。
  44. 根据权利要求43所述的方法,其特征在于,所述目标变换方式包括二次变换、多次变换或第一变换类型,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
  45. 根据权利要求1-28任一项所述的方法,其特征在于,所述方法还包括:
    解码所述码流,得到至少一个标志,所述至少一个标志用于指示是否允许量化系数的奇偶性被隐藏;
    所述根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性,包括:
    若根据所述至少一个标志,确定允许所述当前块中至少一个量化系数的奇偶性被隐藏时,根据所述P个量化系数,确定所述奇偶性被隐藏的量化系数的奇偶性。
  46. 根据权利要求45所述的方法,其特征在于,所述至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志中的至少一个。
  47. 根据权利要求1-28任一项所述的方法,其特征在于,所述第一量化系数为所述当前区域中位于扫描顺序中第K个位置的非零量化系数,所述K小于或等于所述当前区域中非零量化系数个数。
  48. 根据权利要求1-28任一项所述的方法,其特征在于,所述解码码流,得到当前区域中的P个量化系数,包括:
    解码所述码流,得到所述当前块的已解码信息;
    将所述当前块划分为N个区域,所述N为正整数;
    从所述当前块的已解码信息中,得到所述当前区域中的P个量化系数。
  49. 一种视频编码方法,其特征在于,包括:
    将当前块划分为N个区域,所述N为正整数;
    确定所述当前区域中的第二量化系数,并对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,所述当前区域为N个区域中包括至少一个非零量化系数的区域;
    确定所述第一量化系数对应的目标上下文模型,使用所述目标上下文模型,对所述第一量化系数进行编码,得到码流,所述第一量化系数中奇偶性被隐藏的量化系数的奇偶性通过所述当前区域中P个量化系数进行指示,所述P为正整数。
  50. 根据权利要求49所述的方法,其特征在于,所述第一量化系数对应的目标上下文模型、与奇偶性未被隐藏的其他量化系数对应的上下文模型中的至少一个上下文模型相同。
  51. 根据权利要求50所述的方法,其特征在于,所述确定所述第一量化系数对应的目标上下文模型,使用所述目标上下文模型,对所述第一量化系数进行编码,得到码流,包括:
    获取所述奇偶性未被隐藏的其他量化系数的目标量化系数对应的多个上下文模型,所述目标量化系数为量化系数的部分量化系数;
    从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型;
    所述使用所述目标上下文模型,对所述第一量化系数进行编码,得到码流,包括:
    使用所述目标量化系数的目标上下文模型,编码所述第一量化系数的目标量化系数,得到所述码流。
  52. 根据权利要求51所述的方法,其特征在于,所述从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型,包括:
    确定所述目标量化系数对应的目标上下文模型的索引;
    根据所述索引从所述目标量化系数对应的多个上下文模型中,确定所述目标量化系数对应的目标上下文模型。
  53. 根据权利要求52所述的方法,其特征在于,所述确定所述目标量化系数对应的目标上下文模型的索引,包括:
    若所述目标量化系数为所述第一量化系数中标识1表示的基本部分时,则根据所述第一量化系数周围已解码的量化系数的基本部分之和,确定所述基本部分对应的目标上下文模型的索引。
  54. 根据权利要求53所述的方法,其特征在于,所述根据所述第一量化系数周围已解码的量化系数的基本部分之和,确定所述基本部分对应的目标上下文模型的索引,包括:
    将所述第一量化系数周围已解码的量化系数的基本部分之和、与第一预设值进行相加,得到第一和值;
    将所述第一和值与第一数值进行相除,得到第一比值;
    根据所述第一比值,确定所述基本部分对应的目标上下文模型的索引。
  55. 根据权利要求54所述的方法,其特征在于,所述第一数值为2。
  56. 根据权利要求54所述的方法,其特征在于,所述第一预设值为2。
  57. 根据权利要求54所述的方法,其特征在于,所述根据所述第一比值,确定所述基本部分对应的目标上下文模型的索引,包括:
    将所述第一比值与第一预设阈值中的最小值,确定为第二数值;
    根据所述第二数值,确定所述基本部分对应的目标上下文模型的索引。
  58. 根据权利要求57所述的方法,其特征在于,所述第一预设阈值为4。
  59. 根据权利要求57所述的方法,其特征在于,所述根据所述第二数值,确定所述基本部分对应的目标上下文模型的索引,包括:
    确定所述基本部分的偏置索引;
    将所述第二数值和所述基本部分的偏置索引之和,确定为所述基本部分对应的目标上下文模型的索引。
  60. 根据权利要求59所述的方法,其特征在于,所述确定所述基本部分的偏置索引,包括:
    根据所述第一量化系数在所述当前块中的位置、所述当前块的大小、所述当前块的扫描顺序、所述当前块的颜色分量中的至少一个,确定所述基本部分的偏置索引。
  61. 根据权利要求60所述的方法,其特征在于,在所述第一量化系数为所述当前块的左上方位置处的量化系数的情况下,则所述基本部分的偏置索引为第一阈值。
  62. 根据权利要求61所述的方法,其特征在于,在所述当前块的颜色分量为亮度分量的情况下,则所述第一阈值为0。
  63. 根据权利要求60所述的方法,其特征在于,在所述第一量化系数非所述当前块的左上方位置处的量化系数的情况下,则所述基本部分的偏置索引为第二阈值。
  64. 根据权利要求63所述的方法,其特征在于,在所述当前块的颜色分量为亮度分量的情况下,则所述第二阈值为5。
  65. 根据权利要求52所述的方法,其特征在于,所述确定所述目标量化系数对应的目标上下文模型的索引,包括:
    若所述目标量化系数为所述第一量化系数中标识2至标识5表示的较低部分时,则根据所述第一量化系数周围已编码的量化系数的基本部分与较低部分之和,确定所述较低部分对应的目标上下文模型的索引。
  66. 根据权利要求65所述的方法,其特征在于,所述根据所述第一量化系数周围已编码的量化系数的基本部分与较低部分之和,确定所述较低部分对应的目标上下文模型的索引,包括:
    将所述第一量化系数周围已编码的量化系数的基本部分与较低部分之和、与第二预设值进行相加,得到第二和值;
    将所述第二和值与第三数值进行相除,得到第二比值;
    根据所述第二比值,确定所述较低部分对应的目标上下文模型的索引。
  67. 根据权利要求66所述的方法,其特征在于,所述第三数值为2。
  68. 根据权利要求66所述的方法,其特征在于,所述第二预设值为2。
  69. 根据权利要求66所述的方法,其特征在于,所述根据所述第二比值,确定所述较低部分对应的目标上下文模型的索引,包括:
    将所述第二比值与第二预设阈值中的最小值,确定为第四数值;
    根据所述第四数值,确定所述较低部分对应的目标上下文模型的索引。
  70. 根据权利要求69所述的方法,其特征在于,所述第二预设阈值为4。
  71. 根据权利要求69所述的方法,其特征在于,所述根据所述第四数值,确定所述较低部分对应的目标上下文模型的索引,包括:
    确定所述较低部分的偏置索引;
    将所述第四数值和所述较低部分的偏置索引之和,确定为所述较低部分对应的目标上下文模型的索引。
  72. 根据权利要求71所述的方法,其特征在于,所述确定所述较低部分的偏置索引,包括:
    根据所述第一量化系数在所述当前块中的位置、所述当前块的大小、所述当前块的扫描顺序、所述当前块的颜色分量中的至少一个,确定所述较低部分的偏置索引。
  73. 根据权利要求72所述的方法,其特征在于,在所述第一量化系数为所述当前块的左上方位置处的量化系数的情况下,则所述较低部分的偏置索引为第三阈值。
  74. 根据权利要求73所述的方法,其特征在于,在所述当前块的颜色分量为亮度分量的情况下,则所述第三阈值为0。
  75. 根据权利要求72所述的方法,其特征在于,在所述第一量化系数非所述当前块的左上方位置处的量化系数的情况下,则所述较低部分的偏置索引为第四阈值。
  76. 根据权利要求75所述的方法,其特征在于,在所述当前块的颜色分量为亮度分量的情况下,则所述第四阈值为7。
  77. 根据权利要求51-76任一项所述的方法,其特征在于,所述使用所述目标量化系数的目标上下文模型,编码所述第一量化系数的目标量化系数,得到所述码流,包括:
    使用所述第一量化系数中标识1表示的基本部分对应的目标上下文模型,编码所述第一量化系数中的基本部分,得到编码后的基本部分;
    根据所述编码后的基本部分,得到所述码流。
  78. 根据权利要求77所述的方法,其特征在于,所述根据所述编码后的基本部分,得到所述码流,包括:
    若所述第一量化系数还包括标识2至标识5表示的较低部分时,则使用所述较低部分对应的目标上下文模型,编码所述第一量化系数中的较低部分,得到编码后的较低部分;
    根据所述编码后的基本部分和编码后的较低部分,确定所述码流。
  79. 根据权利要求52-76任一项所述的方法,其特征在于,所述根据所述索引从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型,包括:
    根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,从所述目标量化系数对应的多个上下文模型中,选出至少一个上下文模型;
    将所述至少一个上下文模型中与所述索引对应的上下文模型,确定为所述第一量化系数的目标量化系数对应的目标上下文模型。
  80. 根据权利要求51-76任一项所述的方法,其特征在于,所述方法还包括:
    对所述目标量化系数对应的多个上下文模型进行初始化;
    所述从所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型,包括:
    从初始化后的所述目标量化系数对应的多个上下文模型中,确定所述第一量化系数的目标量化系数对应的目标上下文模型。
  81. 根据权利要求80所述的方法,其特征在于,所述对所述目标量化系数对应的多个上下文模型进行初始化,包括:
    使用等概率值对所述目标量化系数对应的多个上下文模型进行初始化;或者,
    使用收敛概率值对所述目标量化系数对应的多个上下文模型进行初始化,其中所述收敛概率值为使用上下文模型对测试视频进行编码时,所述上下文模型对应的收敛概率值。
  82. 根据权利要求49-76任一项所述的方法,其特征在于,所述方法还包括:
    根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,确定是否允许所述第一量化系数的奇偶性被隐藏;
    所述对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,包括:
    在确定允许所述第一量化系数的奇偶性被隐藏时,对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数。
  83. 根据权利要求82所述的方法,其特征在于,所述根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,确定是否允许所述第一量化系数的奇偶性被隐藏,包括:
    若所述当前块的变换类型为第一变换类型时,则确定所述第一量化系数的奇偶性不允许被隐藏,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
  84. 根据权利要求83所述的方法,其特征在于,所述根据所述第一量化系数对应的量化参数、变换类型、变换块大小、扫描类型和颜色分量中的至少一个,确定是否允许所述第一量化系数的奇偶性被隐藏,包括:
    若所述当前块的颜色分量为第一分量时,则确定所述第一量化系数的奇偶性不允许被隐藏。
  85. 根据权利要求84所述的方法,其特征在于,所述第一分量为色度分量。
  86. 根据权利要求49-76任一项所述的方法,其特征在于,所述奇偶性被隐藏的量化系数的奇偶性通过所述当前区域中P个量化系数对应的奇偶性进行指示。
  87. 根据权利要求86所述的方法,其特征在于,若所述P个量化系数对应的奇偶性,与所述奇偶性被隐藏的量化系数的奇偶性不一致时,所述方法还包括:
    对所述P个量化系数进行调整,以使所述P个量化系数对应的奇偶性,与所述奇偶性被隐藏的量化系数的奇偶性一致。
  88. 根据权利要求49-76任一项所述的方法,其特征在于,所述对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,包括:
    采用预设运算方式对所述第二量化系数的部分或全部进行处理,得到所述第一量化系数。
  89. 根据权利要求88所述的方法,其特征在于,所述预设运算方式包括所述第二量化系数的部分或全部除以二取整。
  90. 根据权利要求49-76任一项所述的方法,其特征在于,所述对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,包括:
    若所述当前区域满足预设条件时,则对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数。
  91. 根据权利要求90所述的方法,其特征在于,所述预设条件包括如下至少一个:
    所述当前区域中非零量化系数的个数大于第一预设数值;
    所述当前区域中,扫描顺序上的第一个非零量化系数与最后一个非零量化系数之间距离大于第二预设数值;
    所述当前区域中,扫描顺序上的第一个非零量化系数与最后一个量化系数之间距离大于第三预设数值;
    所述当前区域中,非零量化系数的绝对值之和大于第四预设数值;
    所述当前块的颜色分量为第二分量;
    所述当前块的变换类型不是第一变换类型,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
  92. 根据权利要求91所述的方法,其特征在于,所述第二分量为亮度分量。
  93. 根据权利要求49-76任一项所述的方法,其特征在于,所述方法还包括:
    若所述当前块采用目标变换方式进行变换时,则跳过对所述第二量化系数的部分或全部进行奇偶性隐藏得到第一量化系数的步骤。
  94. 根据权利要求93所述的方法,其特征在于,所述目标变换方式包括二次变换、多次变换或第一变换类型,所述第一变换类型用于指示所述当前块的至少一个方向跳过变换。
  95. 根据权利要求49-76任一项所述的方法,其特征在于,所述方法还包括:
    获得至少一个标志,所述至少一个标志用于指示是否允许量化系数的奇偶性被隐藏;
    所述对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,包括:
    若根据所述至少一个标志,确定允许所述当前块的量化系数的奇偶性被隐藏时,对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数。
  96. 根据权利要求95所述的方法,其特征在于,所述至少一个标志包括序列级标志、图像级标志、片级标志、单元级标志和块级标志中的至少一个。
  97. 根据权利要求49-76任一项所述的方法,其特征在于,所述第一量化系数为所述当前区域中位于扫描顺序中第K个位置的非零量化系数,所述K小于或等于所述当前区域中非零量化系数个数。
  98. 一种视频解码装置,其特征在于,包括:
    解码单元,用于解码码流,得到当前区域中的P个量化系数,所述当前区域为当前块中包括至少一个非零量化系数的区域,所述P为正整数;
    奇偶性确定单元,用于根据所述P个量化系数,确定第一量化系数中奇偶性被隐藏的量化系数的奇偶性,所述第一量化系数为对第二量化系数的全部或部分进行奇偶性隐藏后得到的量化系数,所述第二量化系数为所述当前区域中奇偶性未被隐藏的一个量化系数;
    处理单元,用于确定所述第一量化系数对应的目标上下文模型,并使用所述目标上下文模型,对基于上下文编码的所述第一量化系数进行解码,得到解码后的第一量化系数;
    确定单元,用于根据所述奇偶性被隐藏的量化系数的奇偶性和所述解码后的第一量化系数,确定所述第二量化系数。
  99. 一种视频编码装置,其特征在于,包括:
    划分单元,用于将当前块划分为N个区域,所述N为正整数;
    处理单元,用于确定所述当前区域中的第二量化系数,并对所述第二量化系数的部分或全部进行奇偶性隐藏,得到第一量化系数,所述当前区域为N个区域中包括至少一个非零量化系数的区域;
    编码单元,用于确定所述第一量化系数对应的目标上下文模型,使用所述目标上下文模型,对所述第一量化系数进行编码,得到码流,所述第一量化系数中奇偶性被隐藏的量化系数的奇偶性通过所述当前区域中P个量化系数进行指示,所述P为正整数。
  100. 一种视频编解码系统,其特征在于,包括视频编码器和视频解码器;
    所述视频解码器用于执行如权利要求1-48任一项所述的视频解码方法;
    所述视频编码器用于执行如权利要求49-97任一项所述的视频编码方法。
  101. 一种电子设备,其特征在于,包括:存储器,处理器;
    所述存储器,用于存储计算机程序;
    所述处理器,用于执行所述计算机程序以实现如上述权利要求1至48或49至97任一项所述方法。
  102. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机执行指令,所述计算机执行指令被处理器执行时用于实现如权利要求1至48或49至97任一项所述的方法。
  103. 一种码流,其特征在于,包括如权利要求49至97任一项所述的方法得到的码流。
PCT/CN2022/093411 2022-05-17 2022-05-17 视频编解码方法、装置、设备、系统及存储介质 WO2023220946A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/093411 WO2023220946A1 (zh) 2022-05-17 2022-05-17 视频编解码方法、装置、设备、系统及存储介质

Publications (1)

Publication Number Publication Date
WO2023220946A1 true WO2023220946A1 (zh) 2023-11-23

Family

ID=88834399

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093411 WO2023220946A1 (zh) 2022-05-17 2022-05-17 视频编解码方法、装置、设备、系统及存储介质

Country Status (1)

Country Link
WO (1) WO2023220946A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103813167A (zh) * 2012-11-13 2014-05-21 联发科技股份有限公司 一种量化转换参数处理方法及装置
CN104380740A (zh) * 2012-06-29 2015-02-25 索尼公司 编码装置、编码方法、解码装置和解码方法
CN109788285A (zh) * 2019-02-27 2019-05-21 北京大学深圳研究生院 一种量化系数结束标志位的上下文模型选取方法及装置
CN111819854A (zh) * 2018-03-07 2020-10-23 华为技术有限公司 用于协调多符号位隐藏和残差符号预测的方法和装置
CN114342396A (zh) * 2019-09-06 2022-04-12 索尼集团公司 图像处理装置和方法


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22941988

Country of ref document: EP

Kind code of ref document: A1