WO2020168509A1 - Video encoding method, video decoding method, device, and electronic equipment - Google Patents

Video encoding method, video decoding method, device, and electronic equipment

Info

Publication number
WO2020168509A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
block
reference motion
unit
prediction
Prior art date
Application number
PCT/CN2019/075685
Other languages
English (en)
French (fr)
Inventor
蔡文婷
朱建清
Original Assignee
FUJITSU LIMITED (富士通株式会社)
蔡文婷
朱建清
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJITSU LIMITED (富士通株式会社), 蔡文婷, 朱建清 filed Critical FUJITSU LIMITED
Priority to PCT/CN2019/075685 priority Critical patent/WO2020168509A1/zh
Publication of WO2020168509A1 publication Critical patent/WO2020168509A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/109Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors

Definitions

  • the embodiments of the present invention relate to the field of video image technology, and in particular to a video encoding method, video decoding method, device, and electronic equipment.
  • inter-frame prediction exploits the temporal correlation of video images, using adjacent coded image blocks in a reference image to predict the current image block, which can effectively remove temporal redundancy in the video.
  • the principle is to find, among the coded image blocks in a previous reference image, the image block that best matches the current image block.
  • Matching block: also called the reference block.
  • MV: motion vector.
  • the video codec can use the reconstructed spatial and/or temporal motion vectors of neighboring blocks to generate a motion vector candidate list.
  • inter-frame prediction then uses an appropriate motion vector selected from the candidate list to predict the current image block.
  • the embodiments of the present invention provide a video encoding method, a video decoding method, a device, and an electronic device, which can improve the accuracy of motion vector correction, thereby effectively performing video encoding and decoding.
  • a video encoding device wherein the device includes:
  • An encoding unit configured to perform inter-frame prediction on a current block according to a reference motion vector to obtain a prediction block, and perform encoding according to the current block and the prediction block;
  • a reconstruction unit configured to obtain a reconstruction block of the current block according to the prediction block
  • a correction unit configured to correct the reference motion vector after the reconstruction unit obtains the reconstruction block
  • the coding unit uses the modified reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks.
  • a video decoding device wherein the device includes:
  • a decoding unit, which is used to decode the encoded data of the current block after encoding, and to perform inter-frame prediction according to the decoding result and a reference motion vector to obtain a prediction block;
  • a reconstruction unit configured to obtain a reconstruction block of the current block according to the prediction block
  • a correction unit configured to correct the reference motion vector after the reconstruction unit obtains the reconstruction block
  • the decoding unit uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks.
  • an electronic device including: an encoder, which includes the video encoding device as described in the first aspect; and/or a decoder, which includes the video decoding device as described in the second aspect.
  • the beneficial effect of the embodiment of the present invention is that the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video encoding and decoding.
  • FIG. 1 is a schematic diagram of a video encoding device according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of the structure of a correction unit in Embodiment 1 of the present invention.
  • Fig. 3 is an example diagram of a modified motion vector in embodiment 1 of the present invention.
  • FIG. 4 is an example diagram of a second predetermined number of pixels in Embodiment 1 of the present invention.
  • Fig. 5 is a diagram showing an example of bidirectional reference motion vector correction in Embodiment 1 of the present invention.
  • Fig. 6 is a schematic diagram of a video encoding device according to Embodiment 1 of the present invention.
  • Fig. 7 is a schematic diagram of an encoder according to Embodiment 2 of the present invention.
  • FIG. 8 is a schematic diagram of a video decoding device according to Embodiment 3 of the present invention.
  • FIG. 9 is a schematic diagram of a video decoding device according to Embodiment 3 of the present invention.
  • Figure 10 is a schematic diagram of a decoder according to Embodiment 4 of the present invention.
  • Figure 11 is a schematic diagram of a video encoding method according to Embodiment 5 of the present invention.
  • FIG. 12 is a schematic diagram of step 1103 of Embodiment 5 of the present invention.
  • Figure 13 is a schematic diagram of a video decoding method according to Embodiment 6 of the present invention.
  • FIG. 14 is a schematic diagram of an electronic device according to Embodiment 7 of the present invention.
  • the terms “first”, “second”, etc. are used to distinguish different elements by designation, but they do not indicate the spatial arrangement or temporal order of these elements, and these elements should not be limited by these terms.
  • the term “and/or” includes any and all combinations of one or more of the associated listed terms.
  • the terms “comprising”, “including”, “having” and the like refer to the existence of the stated features, elements, or components, but do not exclude the presence or addition of one or more other features, elements, or components.
  • Fig. 1 is a schematic diagram of a video encoding device according to an embodiment of the present invention. As shown in Figure 1, the device 100 includes:
  • the coding unit 101 is configured to perform inter-frame prediction on a current block according to a reference motion vector to obtain a prediction block, and perform coding according to the current block and the prediction block;
  • a reconstruction unit 102 configured to obtain a reconstruction block of the current block according to the prediction block
  • the correction unit 103 is configured to correct the reference motion vector after the reconstruction unit obtains the reconstruction block;
  • the encoding unit 101 uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks.
  • the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video coding and decoding.
  • the input video stream is composed of multiple consecutive frame images.
  • Each frame of image can be split into at least one processing unit block in advance.
  • the video encoding device splits the image into processing unit blocks and processes each processing unit block in turn.
  • the frame image can be composed of multiple coding tree units, and each coding tree unit can be split into coding units (CU) according to a quadtree or other structure, and the prediction unit (PU) or transformation unit (TU) can be further divided from the CU.
  • the processing unit block can be a prediction unit (PU), a transformation unit (TU) or a coding unit (CU).
  • the “block” mentioned in this embodiment can be a PU, a TU, or a CU.
  • the inter-frame prediction refers to predicting the current block of the current frame image (the processing unit block currently to be processed) with reference to at least one previous frame image and/or at least one subsequent frame image, thereby generating a prediction block.
  • the encoding unit 101 encodes the difference between the prediction block and the current block.
  • the frame referred to in encoding or decoding the current frame image is referred to as a reference frame.
  • the encoding unit 101 may use the inter-frame prediction and encoding methods in the prior art to process the current block, as described below with an example.
  • the existing motion estimation algorithm can be used to select a reference frame for the current block, and select the reference block corresponding to the current block in the reference frame.
  • the relative offset in spatial position between the reference block and the current block is called the motion vector.
  • the motion vector includes horizontal offset and vertical offset.
  • a motion vector competition mechanism is commonly used: the reference motion vector can be selected from a motion vector (MV) candidate list, where the motion vectors in the candidate list are a set of motion vectors of spatially adjacent blocks and/or motion vectors of corresponding blocks in temporally adjacent frames. Therefore, optionally, the device may also include a selection unit (optional, not shown) for selecting the reference motion vector from the motion vector candidate list.
  • the inter-frame prediction can use the merge mode or the advanced motion vector prediction mode.
  • an MV candidate list can be established for the current block.
  • the candidate list includes the motion vectors of spatially neighboring blocks and/or the motion vectors of corresponding blocks in temporally adjacent frames. Each MV in the candidate list is traversed, its rate-distortion cost is calculated, and the motion vector with the smallest cost is selected as the reference motion vector of the merge mode. When the advanced motion vector prediction mode is applied, the difference from the merge mode is that the candidate motion vectors need to be processed to filter out two motion vectors that constitute the MV candidate list, and motion estimation is performed to obtain the reference motion vector.
  • when the inter-frame prediction is unidirectional prediction (for example, a P frame), there is one candidate motion vector list, namely list0; when the inter-frame prediction is bidirectional prediction (for example, a B frame), there are two candidate motion vector lists, namely list0 and list1.
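The candidate-list selection described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the rate-distortion cost is replaced by a plain cost lookup, and `sad` merely shows the kind of distortion measure such a cost is built from; all names are illustrative.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def select_reference_mv(candidates, cost_of):
    """Traverse the MV candidate list and return the least-cost vector."""
    best_mv = min(candidates, key=cost_of)
    return best_mv, cost_of(best_mv)

# Toy usage: three candidate MVs with stand-in costs.
costs = {(0, 0): 120, (1, 0): 95, (0, -1): 130}
mv, cost = select_reference_mv(list(costs), lambda v: costs[v])  # (1, 0), 95
```

In a real codec the cost function would weigh both the prediction distortion and the bits needed to signal the candidate index.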
  • in the motion compensation process, the reference motion vector is applied to the reference block to obtain the prediction block.
  • the coding unit 101 subtracts the prediction block from the current block, that is, calculates the difference between the pixel values in the blocks, and thus generates the prediction residual.
  • the coding unit 101 performs transform processing on the prediction residual to obtain transform coefficients; for example, a discrete sine transform or discrete cosine transform is used to transform the prediction residual.
  • the coding unit 101 may also quantize the transformed prediction residual (i.e. the transform coefficients), and the coding unit 101 may perform entropy coding on the quantization result (for example, context-adaptive binary arithmetic coding, CABAC) and output a bitstream.
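A minimal sketch of this residual path, assuming blocks are plain lists of lists of integers. The DCT/DST transform stage is omitted for brevity, and the uniform quantizer with step size `q` is a hypothetical stand-in for the codec's quantization:

```python
def prediction_residual(current, prediction):
    """Pixel-wise difference between the current block and the prediction block."""
    return [[c - p for c, p in zip(cur_row, pred_row)]
            for cur_row, pred_row in zip(current, prediction)]

def quantize(residual, q):
    """Round-to-nearest uniform quantization (illustrative only)."""
    return [[round(v / q) for v in row] for row in residual]

current    = [[10, 12], [14, 16]]
prediction = [[ 9, 12], [15, 13]]
res = prediction_residual(current, prediction)   # [[1, 0], [-1, 3]]
lvl = quantize(res, 2)
```

The quantized levels `lvl` are what would be handed to the entropy coder.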
  • the encoding unit 101 may also encode the motion information required for decoding the bitstream.
  • the motion information includes a reference frame index, a reference motion vector index, or a motion vector difference (MVD).
  • the reconstruction unit 102 may perform inverse quantization and inverse transformation on the above prediction residual, thereby reconstructing the prediction residual, and add the prediction block generated by the inter-frame prediction to the inversely quantized and inversely transformed prediction residual to obtain the reconstructed block of the current block.
  • the processing of the inverse quantization and inverse transform can refer to the prior art, which will not be repeated here.
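A minimal sketch of the reconstruction step, assuming blocks are lists of lists and a hypothetical uniform quantizer with step size `q` (inverse transform omitted for brevity; all names are illustrative):

```python
def dequantize(levels, q):
    """Inverse of a uniform quantizer: scale levels back by the step size."""
    return [[v * q for v in row] for row in levels]

def reconstruct(prediction, levels, q):
    """Add the restored residual to the prediction block to get the reconstructed block."""
    residual = dequantize(levels, q)
    return [[p + r for p, r in zip(pred_row, res_row)]
            for pred_row, res_row in zip(prediction, residual)]

prediction = [[9, 12], [15, 13]]
levels     = [[0,  0], [ 0,  2]]   # quantized residual (hypothetical values)
recon = reconstruct(prediction, levels, q=2)   # [[9, 12], [15, 17]]
```

Because quantization is lossy, the reconstructed block approximates rather than equals the original current block; it is this reconstructed block that the correction unit works on.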
  • the correction unit 103 corrects the reference motion vector after the reconstruction unit 102 acquires the reconstructed block; in this way, since the reconstructed block is closer to the original current block, Compared with the method of correcting the reference motion vector before reconstructing the block, the accuracy of the motion vector correction can be further improved, thereby effectively performing video encoding and decoding.
  • the correction unit 103 will be described below with reference to FIGS. 2-3.
  • FIG. 2 is a schematic diagram of the structure of the correction unit 103
  • FIG. 3 is a schematic diagram of a corrected motion vector.
  • the correction unit 103 includes:
  • a processing module 201 configured to shift the reference block of the reference frame corresponding to the reference motion vector in at least one direction by a first predetermined number of pixel values
  • a calculation module 202 which is used to calculate the sum of absolute differences between the second predetermined number of pixels in the reconstructed block and the corresponding pixels of the offset reference block;
  • the correction module 203 is configured to determine the corrected reference motion vector according to the offset reference block corresponding to the smallest sum of absolute differences.
  • the reference block of the reference frame corresponding to the reference motion vector is determined according to the motion estimation and motion compensation in the aforementioned inter-frame prediction process, and the processing module 201 shifts the reference block in at least one direction by a first predetermined number of pixel values. As shown by the dashed box in Figure 3, the reference block can be shifted by one pixel value in the horizontal and/or vertical direction, but this embodiment is not limited to this; it can also be shifted by at least two pixel values, etc.
  • the calculation module 202 calculates the sum of absolute differences (SAD) between a second predetermined number of pixels in the reconstructed block and the corresponding pixels of the shifted reference block, and the calculation module 202 determines the second predetermined number of pixels according to the size of the reconstructed block. For example, when the size of the reconstructed block is greater than a predetermined area, the calculation module 202 uses the pixels in a predetermined area of the reconstructed block as the second predetermined number of pixels; the predetermined area may be an edge area or an area at a predetermined location, and this embodiment is not limited to this.
  • FIG. 4 is a schematic diagram of the second predetermined number of pixels. As shown in FIG. 4, the predetermined area is set to 16×16. When the size of the reconstructed block is less than 16×16, all pixels in the reconstructed block are regarded as the second predetermined number of pixels; when the size of the reconstructed block is greater than 16×16, the pixels in the edge area of the reconstructed block, with a width of 2 pixels at the top, bottom, left, and right, are taken as the second predetermined number of pixels.
  • the top, bottom, left, and right directions are only examples; in actual calculations, at least one of these directions may be used.
  • the 2-pixel-wide edge area is also an example; it may be, for example, 1 pixel wide or more than 2 pixels wide, and this embodiment is not limited to this.
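The correction procedure above can be sketched as follows. This is one illustrative reading, not the claimed implementation: frames are lists of lists of pixels, the reference block is shifted by one pixel in each of the eight neighbouring directions, and the SAD is computed over all pixels of the reconstructed block or, for blocks larger than 16×16, only over a 2-pixel-wide edge area; all function and variable names are hypothetical, and motion vectors are `(vertical, horizontal)` pairs.

```python
def select_positions(h, w, limit=16, border=2):
    """Pixel positions used for the SAD: all pixels for small blocks,
    only a border of width `border` for blocks larger than limit x limit."""
    if h <= limit and w <= limit:
        return [(y, x) for y in range(h) for x in range(w)]
    return [(y, x) for y in range(h) for x in range(w)
            if y < border or y >= h - border or x < border or x >= w - border]

def refine_mv(recon, ref_frame, mv, top, left):
    """Return the one-pixel-shifted MV whose reference block minimises the SAD
    against the reconstructed block located at (top, left)."""
    h, w = len(recon), len(recon[0])
    positions = select_positions(h, w)
    best_mv, best_sad = mv, None
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            cy, cx = top + mv[0] + dy, left + mv[1] + dx  # shifted block origin
            cand_sad = sum(abs(recon[y][x] - ref_frame[cy + y][cx + x])
                           for y, x in positions)
            if best_sad is None or cand_sad < best_sad:
                best_mv, best_sad = (mv[0] + dy, mv[1] + dx), cand_sad
    return best_mv

# Toy usage: the true match sits one pixel to the right of the initial MV.
ref_frame = [[10 * r + c for c in range(6)] for r in range(6)]
recon = [[23, 24], [33, 34]]
corrected = refine_mv(recon, ref_frame, (0, 0), 2, 2)   # (0, 1)
```

Because the search compares against the reconstructed block rather than the original, this mirrors the patent's point that the correction can run identically at the decoder without extra signalling.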
  • the apparatus 100 may further include:
  • a judging unit 104 (optional), which is used to judge whether the reference motion vector has been corrected; when the judgment result of the judging unit 104 is that it has not been corrected, the correction unit 103 corrects the reference motion vector. For example, the judging unit 104 can judge whether the reference motion vector has been corrected according to a set flag.
  • the device 100 may further include a setting unit (not shown in the figure).
  • the setting unit may set a correction flag for each reference motion vector, the correction flag indicating whether the reference motion vector has been corrected, for example, in the correction When the flag is 0, it indicates that the reference motion vector has not been corrected; when the correction flag is 1, it indicates that the reference motion vector has been corrected.
  • the judging unit 104 can check the correction flag of the reference motion vector: when the flag is 1, the reference motion vector is not corrected again; otherwise, the reference motion vector is corrected, as shown in Figure 3.
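A small sketch of this flag-based check, under the flag convention stated above (0 = not yet corrected, 1 = already corrected); the dict layout and function names are illustrative:

```python
def maybe_correct(entry, correct):
    """entry: dict with 'mv' and 'flag'; correct: function mv -> corrected mv.
    Runs the correction at most once per reference motion vector."""
    if entry["flag"] == 1:          # already corrected: leave as-is
        return entry
    entry["mv"] = correct(entry["mv"])
    entry["flag"] = 1
    return entry

entry = {"mv": (3, -1), "flag": 0}
maybe_correct(entry, lambda mv: (mv[0] + 1, mv[1]))   # corrected once
maybe_correct(entry, lambda mv: (mv[0] + 1, mv[1]))   # no-op the second time
```

The flag guarantees each candidate-list entry is refined exactly once, however many later blocks reference it.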
  • when the inter-frame prediction is unidirectional prediction, the reference motion vector is a unidirectional reference motion vector, and the correction unit 103 corrects the unidirectional reference motion vector (that is, the unidirectional reference motion vector in list0 is corrected).
  • when the inter-frame prediction is bidirectional prediction, the reference motion vector is a bidirectional reference motion vector (including a forward reference motion vector and a backward reference motion vector), and the correction unit 103 corrects both the forward reference motion vector and the backward reference motion vector.
  • Figure 5 is a schematic diagram of the two-way reference motion vector correction. As shown in Figure 5, the forward reference motion vector MV0 in list0 is corrected to obtain MV0', and the backward reference motion vector MV1 in list1 is corrected to obtain MV1'.
  • the corrected reference motion vector is used to replace the corresponding reference motion vector before correction in the motion vector candidate list; and the encoding unit 101 uses the corrected reference motion vector and the reconstructed block (as a reference block for inter-frame prediction of other blocks) to perform inter-frame prediction on other blocks.
  • the process of this inter-frame prediction is as described above and will not be repeated here.
  • the corrected reference motion vector and the reconstructed block (as a reference block for the inter-frame prediction of other blocks) can be used for inter-frame prediction of other blocks, but this does not mean that the corrected reference motion vector and the reconstructed block are used for inter-frame prediction of all remaining uncoded blocks.
  • for example, when the coding unit 101 selects the reconstructed block as the reference block and the selection unit selects the corrected reference motion vector, the encoding unit 101 uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on that block to obtain a prediction block.
  • the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video encoding and decoding.
  • the video encoding device may perform filtering processing on the reconstructed block to reduce the difference between the reconstructed block and the original current block.
  • Fig. 6 is a schematic diagram of the structure of a video encoding device in this embodiment. As shown in Fig. 6, the device 600 includes: an encoding unit 601, a reconstruction unit 602, a filtering unit 603, and a correction unit 604. The implementation of the encoding unit 601 and the reconstruction unit 602 is the same as that of the encoding unit 101 and the reconstruction unit 102 in FIG. 1, and will not be repeated here.
  • the filtering unit 603 is used to perform filtering processing on the reconstructed block; the filtering processing includes: deblocking filtering (deblock) and pixel adaptive compensation (SAO), the specific implementation of which can refer to the prior art
  • deblocking filtering can remove the block distortion generated at the boundary between each block in the reconstructed image.
  • SAO applies an offset to compensate the difference between the deblocking-filtered reconstructed image and the original image, and the offset can be applied in the form of a band offset, an edge offset, etc.
  • the correction unit 604 corrects the reference motion vector after the filtering unit 603 performs filtering processing on the reconstructed block; in another embodiment, the correction unit 604 corrects the reference motion vector after the reconstructed block is obtained but before the filtering unit 603 performs filtering processing on it.
  • the device 600 may further include: a judgment unit (optional, not shown), which is used to judge whether the reference motion vector has been corrected; and, when the judgment result of the judgment unit is that it has not been corrected, the correction unit 604 corrects the reference motion vector.
  • reference may be made to the judgment unit 104, which will not be repeated here.
  • the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video encoding and decoding.
  • Embodiment 2 of the present invention provides an encoder, which may be included in an electronic device used for video processing, or may be one or some parts or components of the electronic device; it includes the video encoding device in Embodiment 1, whose content is incorporated here and will not be repeated.
  • Embodiment 2 of the present invention also provides an encoder.
  • the encoder may be included in an electronic device used for video processing, or may be one or some parts or components of the electronic device. Content of Embodiment 2 that is the same as in Embodiment 1 will not be repeated.
  • Figure 7 is a schematic diagram of the encoder structure of the second embodiment.
  • the encoder 700 includes: a predictor 701, a transform and quantizer 702, an entropy encoder 703, an inverse transform and inverse quantizer 704, a filter 705 (optional), a corrector 706, a subtractor 707, an adder 708, and a memory 709.
  • the implementation of the predictor 701, transform and quantizer 702, entropy encoder 703, inverse transform and inverse quantizer 704, subtractor 707, and adder 708 can refer to the encoding unit 101 and the reconstruction unit 102 in Embodiment 1.
  • the predictor 701 includes an inter predictor that performs inter prediction and an intra predictor that performs intra prediction.
  • the predictor 701 performs prediction processing on the divided block of the frame image to generate a prediction block.
  • the predictor can determine whether the prediction performed on the corresponding block is inter-frame prediction or intra-frame prediction, and can determine the prediction mode of inter-frame prediction.
  • the prediction block and the current block are passed through the subtractor 707 to obtain the prediction residual; the prediction residual is input into the transform and quantizer 702, where transform coefficients are created and quantized, and the quantization result is input to the entropy encoder 703. Optionally, the motion information described in Embodiment 1 may also be input into the entropy encoder.
  • the entropy encoder 703 performs entropy coding on the input data and outputs a bit stream.
  • the specific entropy coding method can refer to the prior art, which will not be repeated here. The inverse transform and inverse quantizer 704 performs inverse quantization and inverse transform processing on the aforementioned quantization result, and the restored prediction residual is added to the prediction block by the adder 708 to generate a reconstructed block.
  • the implementation of the filter 705 can refer to the filtering unit 603 in Embodiment 1, performing the following processing on the reconstructed block: deblocking filtering (deblock) and pixel adaptive compensation (SAO); optionally, the filter 705 may not perform a filtering operation.
  • the implementation of the corrector 706 can refer to the correcting unit 103 or the correcting unit 604 in Embodiment 1, to correct the reference motion vector after the reconstructed block is obtained, or after the reconstructed block is filtered.
  • the corrector 706 may also include the function of the judging unit 104, which is not limited in this embodiment.
  • the specific correction method refer to Embodiment 1, which will not be repeated here.
  • the memory 709 may store the generated reconstructed block (or filtered reconstructed block) and the corrected reference motion vector; the reconstructed block and motion vector stored in the memory 709 may be provided to the predictor 701, which performs inter-frame prediction, for inter-frame prediction of other blocks.
  • the functions of the predictor 701, transform and quantizer 702, entropy encoder 703, inverse transform and inverse quantizer 704, filter 705, corrector 706, subtractor 707, and adder 708 can be integrated into a central processing unit, and video encoding is performed under the control of the processor.
  • the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video encoding and decoding.
  • FIG. 8 is a schematic diagram of the video decoding device according to an embodiment of the present invention. As shown in Figure 8, the device includes:
  • the decoding unit 801 is configured to decode the encoded data of the current block after encoding, and to perform inter-frame prediction according to the decoding result and the reference motion vector to obtain a prediction block;
  • a reconstruction unit 802 configured to obtain a reconstruction block of the current block according to the prediction block
  • the correction unit 803 is configured to correct the reference motion vector after the reconstruction unit obtains the reconstruction block;
  • the decoding unit 801 uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks.
  • the encoded data of the current block after encoding may be the bitstream after entropy encoding described in Embodiment 1, and the decoding unit 801 may perform entropy decoding on the bitstream to obtain the decoding result; the specific entropy decoding method corresponds to the entropy coding method, and for details, please refer to the prior art, which will not be repeated here.
  • the decoding unit 801 performs inter-frame prediction according to the decoding result and the reference motion vector to generate a prediction block.
  • the inter-frame prediction process at the decoding end can refer to the prior art; it can use the motion information of the inter-frame prediction of the current block provided by the video encoding device (for example, the reference motion vector used in the inter-frame prediction at the encoding end, the reference frame index, and other information) and the decoding result to generate the prediction block.
  • the reconstruction unit 802 performs inverse quantization and inverse transformation on the decoding result to restore the prediction residual, and adds the prediction residual to the prediction block to obtain the reconstructed block of the current block. Please refer to Embodiment 1 for the inverse quantization and inverse transformation processing, which will not be repeated here.
  • the apparatus 800 may further include:
  • the judging unit (optional, not shown) is used to judge whether the reference motion vector has been corrected; and when the judgment result of the judging unit is that it has not been corrected, the correction unit 803 corrects the reference motion vector.
  • for the specific implementation manner, reference may be made to the judging unit 104, which will not be repeated here.
  • the corrected reference motion vector is used to replace the corresponding reference motion vector before correction in the motion vector candidate list; and the decoding unit 801 uses the corrected reference motion vector and the reconstructed block (as a reference block for inter-frame prediction of other blocks) to perform inter-frame prediction on other blocks.
  • the process of this inter-frame prediction is as described above and will not be repeated here.
  • the corrected reference motion vector and the reconstructed block (as a reference block for the inter-frame prediction of other blocks) can be used for inter-frame prediction of other blocks, but this does not mean that the corrected reference motion vector and the reconstructed block are used for inter-frame prediction of all remaining undecoded blocks.
  • for example, when the decoding unit 801 selects the reconstructed block and the corrected reference motion vector, the decoding unit 801 uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on that block to obtain a prediction block.
  • Fig. 9 is a schematic diagram of a structure of a video decoding device in this embodiment.
  • the device 900 includes: a decoding unit 901, a reconstruction unit 902, a filtering unit 903, and a correction unit 904.
  • the implementation of the decoding unit 901 and the reconstruction unit 902 is the same as that of the decoding unit 801 and the reconstruction unit 802 in FIG. 8, and will not be repeated here.
  • the filtering unit 903 is used to perform filtering processing on the reconstructed block; the filtering processing includes: deblocking filtering (deblock) and pixel adaptive compensation (SAO), the specific implementation of which can refer to the prior art
  • deblocking filtering can remove the block distortion generated at the boundary between each block in the reconstructed image.
  • SAO applies the offset between the deblocking-filtered reconstructed block and the original image, in forms such as band offset and edge offset.
  • in one embodiment, the correction unit 904 corrects the reference motion vector after the filtering unit 903 performs filtering processing on the reconstructed block; in another embodiment, after the reconstructed block is obtained, the correction unit 904 corrects the reference motion vector before the filtering unit 903 performs filtering processing on the reconstructed block.
  • in order to avoid repeated correction, the apparatus 900 may further include:
  • the judging unit (optional, not shown) is used to judge whether the reference motion vector has been corrected; and when the judgment result of the judging unit is that it has not been corrected, the correction unit 904 corrects the reference motion vector.
  • for the specific implementation of this judging unit, reference may be made to the judging unit 104, which will not be repeated here.
  • the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video encoding and decoding.
  • Embodiment 4 of the present invention provides a decoder, which may be included in an electronic device used for video processing, or may be one or some parts or components of the electronic device; it includes the video decoding device in Embodiment 3, the content of which is incorporated here and will not be repeated.
  • Embodiment 4 of the present invention also provides a decoder.
  • the decoder may be included in an electronic device used for video processing, or may be a certain or some parts or components of the electronic device.
  • content of this Embodiment 4 that is the same as that of Embodiment 3 will not be repeated.
  • Figure 10 is a schematic diagram of the decoder structure of the fourth embodiment.
  • the decoder 1000 includes: an entropy decoder 1001, an inverse transform and inverse quantizer 1002, a predictor 1003, a filter 1004 (optional), a corrector 1005, an adder 1006, and a memory 1007.
  • the implementation of the entropy decoder 1001, the inverse transform and inverse quantizer 1002, the predictor 1003, and the adder 1006 can refer to the decoding unit 801 and the reconstruction unit 802 in Embodiment 3.
  • the entropy decoder 1001 decodes the encoded data (bit stream) of the encoded current block to generate a prediction residual with quantized frequency coefficients.
  • it can also decode the motion information of the current block.
  • based on the motion information, the predictor 1003 can use the same scheme as the predictor in the encoder of Embodiment 2 to predict the current block, and the obtained prediction block is input to the adder 1006.
  • the prediction residual obtained by the inverse transform and inverse quantizer 1002 through inverse quantization of the decoded data is, after inverse transform processing, input to the adder 1006 and superimposed with the prediction block to generate a reconstructed block of the current block.
  • the implementation of the filter 1004 can refer to the filtering unit 903 in Embodiment 3, performing the following processing on the reconstructed block: deblocking filtering (deblock) and sample adaptive offset (SAO).
  • the implementation of the corrector 1005 can refer to the correcting unit 803 or the correcting unit 904 in Embodiment 3: the reference motion vector is corrected after the reconstructed block is obtained, or after the reconstructed block is filtered.
  • the corrector 1005 may also include the function of the judging unit 104, which is not limited in this embodiment, and the specific correction method is described in Embodiment 1, which will not be repeated here.
  • the memory 1007 may store the filtered reconstructed block and the corrected reference motion vector, and the reconstructed blocks and motion vectors stored in the memory 1007 may be provided to the predictor 1003 that performs inter prediction, for use in the inter prediction of other blocks.
  • the functions of the aforementioned entropy decoder 1001, inverse transform and inverse quantizer 1002, predictor 1003, filter 1004 (optional), corrector 1005, and adder 1006 can be integrated into a central processing unit, with video decoding performed under the control of the processor.
  • the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video encoding and decoding.
  • FIG. 11 is a schematic diagram of the video encoding method of the embodiment of the present invention. As shown in Figure 11, the method includes:
  • Step 1101: Perform inter-frame prediction on the current block according to the reference motion vector to obtain a prediction block, and perform encoding according to the current block and the prediction block;
  • Step 1102: Obtain a reconstructed block of the current block according to the prediction block;
  • Step 1103: After obtaining the reconstructed block, correct the reference motion vector; and the corrected reference motion vector and the reconstructed block can be used for inter-frame prediction of other blocks.
  • the implementation of the above steps 1101-1103 can refer to the encoding unit 101, the reconstruction unit 102, and the correction unit 103 in the video encoding device in Embodiment 1, and will not be repeated here.
  • the above steps 1101-1103 are the encoding process of the current block. After completion, the steps 1101-1103 are repeated for the next block of the current block until the encoding of all the blocks divided in multiple frame images is completed.
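The per-block loop of steps 1101-1103 can be sketched as follows. This is only an illustration: the four callables are placeholders standing in for the encoding, reconstruction, and correction units, not interfaces defined by the patent.

```python
def encode_frame(blocks, predict, encode, reconstruct, correct_mv):
    """Run steps 1101-1103 for each block in turn until the frame is done.

    predict(cur)        -> (prediction block, reference MV)   # step 1101
    encode(cur, pred)   -> coded residual                     # step 1101
    reconstruct(p, bits)-> reconstructed block                # step 1102
    correct_mv(mv, rec) -> corrected MV (after reconstruction)# step 1103
    """
    out = []
    for cur in blocks:
        pred, mv = predict(cur)
        bits = encode(cur, pred)
        recon = reconstruct(pred, bits)
        mv = correct_mv(mv, recon)   # correction happens only after step 1102
        out.append((bits, recon, mv))
    return out
```

The point of the ordering is visible in the loop body: `correct_mv` runs strictly after `reconstruct`, so the correction sees the reconstructed block rather than the prediction alone.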
  • Fig. 12 is a flowchart of the correction step in step 1103 in this embodiment. As shown in Fig. 12, the correction step includes:
  • Step 1201: Shift the reference block of the reference frame corresponding to the reference motion vector in at least one direction by a first predetermined number of pixel values;
  • Step 1202: Calculate the sum of absolute differences between the second predetermined number of pixels in the reconstructed block and the corresponding pixels of the offset reference block;
  • Step 1203: Determine the corrected reference motion vector according to the offset reference block corresponding to the smallest sum of absolute differences.
  • for the implementation of steps 1201-1203, please refer to the processing module 201, the calculation module 202, and the correction module 203 in Embodiment 1, which will not be repeated here.
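As an illustration only, steps 1201-1203 can be sketched as follows. The function and variable names are assumptions; the search uses a single one-pixel offset in each of eight directions (the "first predetermined number" set to 1), and for brevity the SAD is computed over the whole reconstructed block rather than a selected pixel subset.

```python
import numpy as np

def refine_mv(recon, ref_frame, mv, pos):
    """Step 1201: shift the reference block one pixel in each direction.
    Step 1202: compute the SAD against the reconstructed block.
    Step 1203: return the MV adjusted by the offset with the smallest SAD."""
    h, w = recon.shape
    y, x = pos  # top-left corner of the current block in frame coordinates
    best_sad, best_off = None, (0, 0)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ry, rx = y + mv[0] + dy, x + mv[1] + dx
            if ry < 0 or rx < 0 or ry + h > ref_frame.shape[0] or rx + w > ref_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            blk = ref_frame[ry:ry + h, rx:rx + w].astype(int)
            sad = int(np.abs(recon.astype(int) - blk).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_off = sad, (dy, dx)
    return (mv[0] + best_off[0], mv[1] + best_off[1])
```

If the initial MV already points at the best match, the zero offset wins and the MV is returned unchanged; otherwise the MV moves one pixel toward the offset position with the smallest SAD.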
  • the method may further include (optional): step 1200, judging whether the reference motion vector has been corrected; when the judgment result is that it has not been corrected, steps 1201-1203 are executed; otherwise the correction step is skipped.
  • each reference motion vector has a correction flag, and the correction flag indicates whether the reference motion vector has been corrected.
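A minimal sketch of the step-1200 check, assuming the correction flag is kept alongside each candidate; the class and field names are hypothetical, not taken from the patent.

```python
class MvCandidateList:
    """Each entry pairs a candidate MV with a correction flag (0/1 in the text)."""

    def __init__(self, mvs):
        self.entries = [{"mv": mv, "corrected": False} for mv in mvs]

    def correct(self, idx, refine):
        entry = self.entries[idx]
        if entry["corrected"]:             # step 1200: already corrected, skip
            return entry["mv"]
        entry["mv"] = refine(entry["mv"])  # steps 1201-1203
        entry["corrected"] = True          # replace the pre-correction candidate
        return entry["mv"]
```

Calling `correct` a second time on the same candidate returns the stored corrected MV without re-running the refinement, which is the purpose of the flag.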
  • when the inter prediction is unidirectional prediction, the reference motion vector is a unidirectional reference motion vector, and the unidirectional reference motion vector is corrected in steps 1201-1203; when the inter prediction is bidirectional prediction, the reference motion vector is a bidirectional reference motion vector (including a forward reference motion vector and a backward reference motion vector), and the forward reference motion vector and the backward reference motion vector are corrected in steps 1201-1203.
  • the method may further include:
  • Step 1103': filter the reconstructed block; that is, this step 1103' is performed after the reconstructed block is obtained in step 1102.
  • the above step 1103 can be performed before step 1103' or after step 1103', that is, the reference motion vector may be corrected after the reconstructed block is filtered; this embodiment is not limited in this respect.
  • the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video encoding and decoding.
  • FIG. 13 is a schematic diagram of the video decoding method of the embodiment of the present invention. As shown in Figure 13, the method includes:
  • Step 1301: Decode the encoded data of the encoded current block; perform inter-frame prediction according to the decoding result and the reference motion vector to obtain a prediction block;
  • Step 1302: Obtain a reconstructed block of the current block according to the prediction block;
  • Step 1303: After the reconstructed block is obtained, correct the reference motion vector; and the corrected reference motion vector and the reconstructed block are used for inter-frame prediction of other blocks.
  • the implementation of the above steps 1301-1303 can refer to the decoding unit 801, the reconstruction unit 802, and the correction unit 803 in the video decoding device in Embodiment 3, which will not be repeated here.
  • steps 1301-1303 are the decoding process of the current block. After completion, steps 1301-1303 are repeated for the next block of the current block until the decoding of all the blocks divided in multiple frame images is completed.
  • for the specific implementation of the above step 1303, reference may be made to step 1103 in Embodiment 5 and FIG. 12, which will not be repeated here.
  • the method may further include:
  • Step 1303': perform filtering processing on the reconstructed block; that is, this step 1303' is performed after the reconstructed block is obtained in step 1302.
  • the above step 1303 can be performed before step 1303' or after step 1303', that is, the reference motion vector may be corrected after the reconstructed block is filtered; this embodiment is not limited in this respect.
  • the accuracy of the motion vector correction can be improved by performing the motion vector correction after the reconstructed block is obtained, thereby effectively performing video encoding and decoding.
  • the embodiment of the present invention also provides an electronic device that performs image processing or video processing, including the encoder in Embodiment 2 and/or the decoder in Embodiment 4, the content of which is incorporated here and will not be repeated.
  • Fig. 14 is a schematic diagram of an electronic device according to an embodiment of the present invention.
  • the electronic device 1400 may include: a processor 1401 and a memory 1402; the memory 1402 is coupled to the processor 1401.
  • the memory 1402 can store various data; in addition, it also stores an information processing program 1403, and the program 1403 is executed under the control of the processor 1401.
  • the electronic device 1400 may be used as an encoder, and the functions of the video encoding apparatus 100 or 600 may be integrated into the processor 1401.
  • the processor 1401 may be configured to implement the video encoding method described in Embodiment 5.
  • the processor 1401 may be configured to perform the following control: perform inter-frame prediction on the current block according to the reference motion vector to obtain a prediction block, and perform encoding according to the current block and the prediction block; obtain a reconstructed block of the current block according to the prediction block; after obtaining the reconstructed block, correct the reference motion vector; and the corrected reference motion vector and the reconstructed block are used for inter prediction of other blocks.
  • the electronic device 1400 may be used as a decoder, and the functions of the video decoding apparatus 800 or 900 may be integrated into the processor 1401.
  • the processor 1401 may be configured to implement the video decoding method described in Embodiment 6.
  • the processor 1401 may be configured to perform the following control: decode the encoded data of the encoded current block; perform inter-frame prediction according to the decoding result and the reference motion vector to obtain a prediction block; obtain a reconstructed block of the current block according to the prediction block; after obtaining the reconstructed block, correct the reference motion vector; and the corrected reference motion vector and the reconstructed block are used for inter prediction of other blocks.
  • the electronic device 1400 may further include: an input/output (I/O) device 1404, a display 1405, etc.; wherein the functions of the above-mentioned components are similar to those of the prior art, and will not be repeated here. It is worth noting that the electronic device 1400 does not necessarily include all the components shown in FIG. 14; in addition, the electronic device 1400 may also include components not shown in FIG. 14, and related technologies can be referred to.
  • An embodiment of the present invention provides a computer-readable program, wherein when the program is executed in an encoder or an electronic device, the program causes the encoder or an electronic device to execute the video encoding method described in Embodiment 5.
  • An embodiment of the present invention provides a storage medium storing a computer-readable program, wherein the computer-readable program enables an encoder or an electronic device to execute the video encoding method described in Embodiment 5.
  • An embodiment of the present invention provides a computer-readable program, wherein when the program is executed in a decoder or an electronic device, the program causes the decoder or an electronic device to execute the video decoding method described in Embodiment 6.
  • An embodiment of the present invention provides a storage medium storing a computer-readable program, wherein the computer-readable program enables a decoder or an electronic device to execute the video decoding method described in Embodiment 6.
  • the above devices and methods of the present invention can be implemented by hardware, or by hardware combined with software.
  • the present invention relates to such a computer-readable program: when the program is executed by a logic component, the logic component can implement the above-mentioned apparatus or constituent components, or can implement the above-mentioned methods or steps.
  • the present invention also relates to storage media for storing the above programs, such as hard disks, magnetic disks, optical disks, DVDs, flash memory, and the like.
  • the method/device described in conjunction with the embodiments of the present invention may be directly embodied as hardware, a software module executed by a processor, or a combination of the two.
  • one or more of the functional block diagrams and/or one or more combinations of the functional block diagrams shown in the figure may correspond to each software module of the computer program flow or each hardware module.
  • These software modules can respectively correspond to the steps shown in the figure.
  • These hardware modules can be implemented by curing these software modules by using a field programmable gate array (FPGA), for example.
  • the software module can be located in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM or any other form of storage medium known in the art.
  • a storage medium may be coupled to the processor, so that the processor can read information from the storage medium and write information to the storage medium; or the storage medium may be a component of the processor.
  • the processor and the storage medium may be located in the ASIC.
  • the software module can be stored in the memory of the mobile terminal, or can be stored in a memory card that can be inserted into the mobile terminal.
  • the software module can be stored in the MEGA-SIM card or a large-capacity flash memory device.
  • One or more of the functional blocks and/or one or more combinations of the functional blocks described in the drawings can be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any appropriate combination thereof for performing the functions described in the present invention.
  • One or more of the functional blocks described in the drawings and/or one or more combinations of the functional blocks can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video encoding method, a video decoding method, an apparatus, and an electronic device. The video encoding apparatus includes: an encoding unit, configured to perform inter-frame prediction on a current block according to a reference motion vector to obtain a prediction block, and to perform encoding according to the current block and the prediction block; a reconstruction unit, configured to obtain a reconstructed block of the current block according to the prediction block; and a correction unit, configured to correct the reference motion vector after the reconstruction unit obtains the reconstructed block. The encoding unit uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks. Performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.

Description

Video encoding method, video decoding method, apparatus, and electronic device

Technical Field

The embodiments of the present invention relate to the technical field of video images, and in particular to a video encoding method, a video decoding method, an apparatus, and an electronic device.

Background

In video encoding (also called image encoding) standards (for example, MPEG-2, H.264/AVC, H.265/HEVC, and so on), inter-frame prediction (also called motion-compensated prediction) exploits the temporal correlation of video images and predicts the current image block using neighboring already-encoded image blocks in a reference image, which can effectively remove temporal redundancy in the video. Its principle is to find a best matching block (reference block) for the current image block among the already-encoded image blocks of a previous reference image, where the displacement from the reference block in the reference image to the current image block is called a motion vector (MV); the prediction block of the current image block is determined according to the motion vector.

It should be noted that the above introduction to the technical background is set forth merely for the convenience of a clear and complete description of the technical solutions of the present invention and to facilitate the understanding of those skilled in the art. It should not be considered that the above technical solutions are well known to those skilled in the art merely because they are set forth in the background section of the present invention.
Summary of the Invention

At present, a video codec may generate a motion vector candidate list using the motion vectors of reconstructed spatially and/or temporally neighboring blocks; during inter-frame prediction, a suitable motion vector selected from the candidate list is used to predict the current image block.

The inventors found that in existing video coding standards a motion vector has to be selected from a limited set of candidates; if the selected motion vector is not suitable for the current image block, the prediction results of other image blocks may be affected.

In view of the above problem, the embodiments of the present invention provide a video encoding method, a video decoding method, an apparatus, and an electronic device, which can improve the accuracy of motion vector correction and thereby perform video encoding and decoding effectively.

According to a first aspect of the embodiments of the present invention, a video encoding apparatus is provided, wherein the apparatus includes:

an encoding unit, configured to perform inter-frame prediction on a current block according to a reference motion vector to obtain a prediction block, and to perform encoding according to the current block and the prediction block;

a reconstruction unit, configured to obtain a reconstructed block of the current block according to the prediction block;

a correction unit, configured to correct the reference motion vector after the reconstruction unit obtains the reconstructed block;

and the encoding unit uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks.

According to a second aspect of the embodiments of the present invention, a video decoding apparatus is provided, wherein the apparatus includes:

a decoding unit, configured to decode the encoded data of an encoded current block, and to perform inter-frame prediction according to the decoding result and a reference motion vector to obtain a prediction block;

a reconstruction unit, configured to obtain a reconstructed block of the current block according to the prediction block;

a correction unit, configured to correct the reference motion vector after the reconstruction unit obtains the reconstructed block;

and the decoding unit uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks.

According to a third aspect of the embodiments of the present invention, an electronic device is provided, including: an encoder including the video encoding apparatus of the first aspect; and/or a decoder including the video decoding apparatus of the second aspect.

A beneficial effect of the embodiments of the present invention is that performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.

With reference to the following description and drawings, specific embodiments of the present invention are disclosed in detail, indicating the manner in which the principles of the present invention may be employed. It should be understood that the embodiments of the present invention are not thereby limited in scope. Within the spirit and scope of the appended claims, the embodiments of the present invention include many changes, modifications, and equivalents.

Features described and/or illustrated for one embodiment may be used in the same or a similar manner in one or more other embodiments, may be combined with features in other embodiments, or may replace features in other embodiments.

It should be emphasized that the term "comprise/include", when used herein, refers to the presence of a feature, integer, step, or component, but does not exclude the presence or addition of one or more other features, integers, steps, or components.
Brief Description of the Drawings

Elements and features described in one drawing or one embodiment of the present invention may be combined with elements and features shown in one or more other drawings or embodiments. In addition, in the drawings, like reference numerals denote corresponding components in several drawings and may be used to indicate corresponding components used in more than one embodiment.

Fig. 1 is a schematic diagram of the video encoding apparatus of Embodiment 1 of the present invention;

Fig. 2 is a schematic diagram of a structure of the correction unit in Embodiment 1 of the present invention;

Fig. 3 is an example diagram of motion vector correction in Embodiment 1 of the present invention;

Fig. 4 is an example diagram of the second predetermined number of pixels in Embodiment 1 of the present invention;

Fig. 5 is an example diagram of bidirectional reference motion vector correction in Embodiment 1 of the present invention;

Fig. 6 is a schematic diagram of the video encoding apparatus of Embodiment 1 of the present invention;

Fig. 7 is a schematic diagram of the encoder of Embodiment 2 of the present invention;

Fig. 8 is a schematic diagram of the video decoding apparatus of Embodiment 3 of the present invention;

Fig. 9 is a schematic diagram of the video decoding apparatus of Embodiment 3 of the present invention;

Fig. 10 is a schematic diagram of the decoder of Embodiment 4 of the present invention;

Fig. 11 is a schematic diagram of the video encoding method of Embodiment 5 of the present invention;

Fig. 12 is a schematic diagram of step 1103 of Embodiment 5 of the present invention;

Fig. 13 is a schematic diagram of the video decoding method of Embodiment 6 of the present invention;

Fig. 14 is a schematic diagram of the electronic device of Embodiment 7 of the present invention.
Detailed Description of the Embodiments

The foregoing and other features of the present invention will become apparent from the following description with reference to the drawings. The description and drawings specifically disclose particular embodiments of the present invention, showing some of the embodiments in which the principles of the present invention may be employed. It should be understood that the present invention is not limited to the described embodiments; on the contrary, the present invention includes all modifications, variations, and equivalents falling within the scope of the appended claims.

In the embodiments of the present invention, the terms "first", "second", and so on are used to distinguish different elements by designation, but do not indicate a spatial arrangement or temporal order of these elements, and these elements should not be limited by these terms. The term "and/or" includes any one of and all combinations of one or more of the associated listed terms. The terms "contain", "include", "have", and so on refer to the presence of stated features, elements, components, or assemblies, but do not exclude the presence or addition of one or more other features, elements, components, or assemblies.

In the embodiments of the present invention, the singular forms "a", "the", and so on include plural forms and should be understood broadly as "a kind of" or "a class of" rather than being limited to the meaning of "one"; in addition, the term "the" should be understood to include both the singular form and the plural form, unless the context clearly indicates otherwise. Further, the term "according to" should be understood as "at least partially according to ...", and the term "based on" should be understood as "at least partially based on ...", unless the context clearly indicates otherwise.
Embodiment 1

An embodiment of the present invention provides a video encoding apparatus. Fig. 1 is a schematic diagram of the video encoding apparatus of this embodiment. As shown in Fig. 1, the apparatus 100 includes:

an encoding unit 101, configured to perform inter-frame prediction on a current block according to a reference motion vector to obtain a prediction block, and to perform encoding according to the current block and the prediction block;

a reconstruction unit 102, configured to obtain a reconstructed block of the current block according to the prediction block;

a correction unit 103, configured to correct the reference motion vector after the reconstruction unit obtains the reconstructed block;

and the encoding unit 101 uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks.

In this embodiment, performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.
In this embodiment, the input video stream consists of multiple consecutive frame images. Each frame image may be split in advance into at least one processing unit block, and the video encoding apparatus processes, taking one processing unit block as a unit, each of the processing unit blocks into which the image is split. A frame image may be composed of multiple coding tree units, each of which may be split into coding units (CU) according to a quadtree or another structure; prediction units (PU) or transform units (TU) may further be obtained by partitioning a CU. The processing unit block may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU), and this embodiment is not limited in this respect; for example, the "block" mentioned in this embodiment may be a PU, a TU, or a CU.

In this embodiment, the inter-frame prediction means predicting the current block (the processing unit block currently to be processed) of the current frame image with reference to at least one preceding frame image and/or at least one subsequent frame image, thereby generating a prediction block; the encoding unit 101 encodes the difference between the prediction block and the current block. In the following, a frame that is referenced when encoding or decoding the current frame image is called a reference frame.
In this embodiment, the encoding unit 101 may process the current block using inter-frame prediction and encoding methods in the prior art, as exemplified below.

In this embodiment, an existing motion estimation algorithm may be used to select a reference frame for the current block and to select, in the reference frame, a reference block corresponding to the current block. The relative spatial offset between the reference block and the current block is called a motion vector, which includes a horizontal offset and a vertical offset. To improve the compression efficiency of motion vector prediction and encoding, a motion vector competition mechanism is currently adopted, that is, the reference motion vector may be selected from a motion vector (MV) candidate list, where the motion vectors in the candidate list are a set of motion vectors of spatially neighboring blocks and/or motion vectors of corresponding blocks in temporally neighboring frames. Therefore, optionally, the apparatus may further include: a selection unit (optional, not shown), configured to select the reference motion vector from the motion vector candidate list.

In this embodiment, the inter-frame prediction may use a merge mode, an advanced motion vector prediction mode, or the like. For example, when the merge mode is applied, an MV candidate list may be built for the current block, including motion vectors of spatially neighboring blocks and/or motion vectors of corresponding blocks in temporally neighboring frames (blocks located at the same spatial position as the current block in the current frame, hereinafter called co-located blocks); each MV in the candidate list is traversed, a rate-distortion cost is calculated, and the motion vector with the smallest cost is selected as the reference motion vector of the merge mode. When the advanced motion vector prediction mode is applied, the difference from the merge mode is that the candidate motion vectors need to be processed to filter out two motion vectors forming the MV candidate list, motion estimation is performed to obtain the reference motion vector, and the motion vector difference (MVD) between the motion vector of the current block and the reference MV also needs to be calculated; for details, reference may be made to the prior art. The merge mode and the advanced motion vector prediction mode are described above as examples, but this embodiment is not limited thereto; other modes that use motion vectors for inter-frame prediction are also applicable to the embodiments of the present invention.

In this embodiment, when the inter-frame prediction is unidirectional prediction (for example, a P frame), there is one candidate motion vector list, list0; when the inter-frame prediction is bidirectional prediction (for example, a B frame), there are two candidate motion vector lists, list0 and list1.

In this embodiment, applying the reference motion vector to the reference block yields the prediction block in the motion compensation process. The encoding unit 101 subtracts the prediction block from the current block, that is, calculates the difference of the pixel values in the blocks, thereby generating a prediction residual. Optionally, the encoding unit 101 performs transform processing on the prediction residual to obtain transform coefficients, for example using a discrete sine transform or a discrete cosine transform; for details, reference may be made to the prior art, which will not be repeated here. Optionally, the encoding unit 101 may further quantize the transformed prediction residual (that is, the transform coefficients), perform entropy encoding on the quantization result (for example, context-adaptive binary arithmetic coding, CABAC), and output a bit stream. The above encoding process is merely an example, and this embodiment is not limited thereto. Optionally, the encoding unit 101 may also encode the motion information needed to decode the bit stream, for example the reference frame index, the reference motion vector index, or the MVD.

In this embodiment, the reconstruction unit 102 may reconstruct the prediction residual by performing inverse quantization and inverse transform processing on the above prediction residual, and add the prediction block generated by the inter-frame prediction to the inversely quantized and inversely transformed prediction residual to obtain the reconstructed block of the current block; for the inverse quantization and inverse transform processing, reference may be made to the prior art, which will not be repeated here.
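The prediction-residual encoding and reconstruction steps above can be sketched as follows. This is a minimal illustration assuming a simple scalar quantization step (`qstep`) and omitting the transform stage entirely; none of the names below come from the patent.

```python
def quantize_residual(cur, pred, qstep=8):
    """Encoder side: subtract the prediction block from the current block
    and quantize the residual (the transform stage is omitted for brevity)."""
    return [round((c - p) / qstep) for c, p in zip(cur, pred)]

def reconstruct_block(pred, qcoef, qstep=8):
    """Reconstruction: inverse-quantize the residual and add it back onto the
    prediction block, giving the reconstructed block later used for MV correction."""
    return [p + q * qstep for p, q in zip(pred, qcoef)]
```

The reconstructed block differs from the original by at most the quantization error, which is why it is a better basis for motion vector correction than the prediction block alone.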
In this embodiment, the correction unit 103 corrects the reference motion vector after the reconstruction unit 102 obtains the reconstructed block. Since the reconstructed block is closer to the original current block, compared with methods that correct the reference motion vector before the reconstructed block is obtained, the accuracy of the motion vector correction can be further improved, thereby enabling effective video encoding and decoding.

The correction unit 103 is described below with reference to Figs. 2-3.

Fig. 2 is a schematic diagram of the structure of the correction unit 103, and Fig. 3 is a schematic diagram of motion vector correction. As shown in Fig. 2, the correction unit 103 includes:

a processing module 201, configured to shift the reference block of the reference frame corresponding to the reference motion vector in at least one direction by a first predetermined number of pixel values;

a calculation module 202, configured to calculate the sum of absolute differences between a second predetermined number of pixels in the reconstructed block and the corresponding pixels of the offset reference block;

a correction module 203, configured to determine the corrected reference motion vector according to the offset reference block corresponding to the smallest sum of absolute differences.
In this embodiment, as shown in Fig. 3, the reference block of the reference frame corresponding to the reference motion vector is determined according to the motion estimation and motion compensation in the aforementioned inter-frame prediction process, and the processing module 201 shifts the reference block in at least one direction by a first predetermined number of pixel values. As shown by the dashed boxes in Fig. 3, the reference block may be shifted by one pixel value in the horizontal and/or vertical direction, but this embodiment is not limited thereto; it may also be shifted by at least two pixel values, and so on.

In this embodiment, the calculation module 202 calculates the sum of absolute differences (SAD) between a second predetermined number of pixels in the reconstructed block and the corresponding pixels of the offset reference block, and determines the second predetermined number of pixels according to the size of the reconstructed block. For example, when the size of the reconstructed block is larger than a predetermined area, the calculation module 202 takes the pixels in a predetermined region of the reconstructed block as the second predetermined number of pixels, the predetermined region being an edge region or a region at a predetermined position; this embodiment is not limited thereto. When the size of the reconstructed block is smaller than the predetermined area, the calculation module 202 takes all pixels in the reconstructed block as the second predetermined number of pixels. Fig. 4 is a schematic diagram of the second predetermined number of pixels. As shown in Fig. 4, the predetermined area is set to 16×16; when the size of the reconstructed block is smaller than 16×16, all pixels in the reconstructed block are used as the second predetermined number of pixels; when the size of the reconstructed block is larger than 16×16, the pixels in an edge region two pixels wide at the top, bottom, left, and right of the reconstructed block are used as the second predetermined number of pixels. The top, bottom, left, and right directions are merely an example, and in actual calculation at least one of these directions may be used; the two-pixel-wide edge region is also an example, and it may, for example, be one pixel wide or more than two pixels wide. This embodiment is not limited thereto.
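The pixel-selection rule above (all pixels for small blocks, only a border region for blocks larger than the predetermined 16×16 area) can be sketched as a boolean mask; the function name, the handling of blocks exactly at the threshold, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def sad_pixel_mask(h, w, area_thresh=16 * 16, border=2):
    """Return a boolean mask of the pixels entering the SAD: every pixel for
    blocks no larger than the predetermined area, otherwise only a border
    region (two pixels wide here, as in the Fig. 4 example)."""
    mask = np.ones((h, w), dtype=bool)
    if h * w > area_thresh and h > 2 * border and w > 2 * border:
        mask[border:h - border, border:w - border] = False  # drop the interior
    return mask
```

Restricting the SAD to the border keeps the per-candidate cost roughly constant as the block grows, since the number of masked-in pixels scales with the perimeter rather than the area.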
In this embodiment, since the reference motion vector may also be used for the inter-frame prediction of other blocks (that is, blocks other than the current block, for example other blocks of the current frame or the processing unit blocks into which other frames are split), it may already have been corrected after another block was reconstructed. To avoid repeated correction, the apparatus 100 may further include:

a judging unit 104 (optional), configured to judge whether the reference motion vector has been corrected; and, when the judgment result of the judging unit 104 is that it has not been corrected, the correction unit 103 corrects the reference motion vector. For example, the judging unit 104 may judge whether the reference motion vector has been corrected according to a set flag. In this case, the apparatus 100 may further include a setting unit (not shown), which may set a correction flag for each reference motion vector, the correction flag indicating whether the reference motion vector has been corrected. For example, when the correction flag is 0, it indicates that the reference motion vector has not been corrected; when the correction flag is 1, it indicates that the reference motion vector has been corrected. The judging unit 104 may check the correction flag of the reference motion vector; when the flag is 1, the reference motion vector is not corrected; otherwise, as shown in Fig. 3, the reference motion vector is corrected.

In this embodiment, when the inter-frame prediction is unidirectional prediction, the reference motion vector is a unidirectional reference motion vector, and the correction unit 103 corrects the unidirectional reference motion vector (that is, corrects the unidirectional reference motion vector in list0); when the inter-frame prediction is bidirectional prediction, the reference motion vector is a bidirectional reference motion vector (including a forward reference motion vector and a backward reference motion vector), and the correction unit 103 corrects the forward reference motion vector and the backward reference motion vector. Fig. 5 is a schematic diagram of bidirectional reference motion vector correction. As shown in Fig. 5, the forward reference motion vector MV0 in list0 is corrected to obtain MV0', and the backward reference motion vector MV1 in list1 is corrected to obtain MV1'.
In this embodiment, the corrected reference motion vector replaces the corresponding pre-correction reference motion vector in the motion vector candidate list, and the encoding unit 101 uses the corrected reference motion vector and the reconstructed block (as a reference block for the inter-frame prediction of other blocks) to perform inter-frame prediction on other blocks; the inter-frame prediction process is as described above and will not be repeated here. It should be noted that although the corrected reference motion vector and the reconstructed block (as a reference block for the inter-frame prediction of other blocks) can be used by other blocks for inter-frame prediction, this does not mean that they are used for the inter-frame prediction of all remaining unencoded blocks; only when, in the inter-frame prediction for encoding a certain block, the encoding unit 101 selects this reconstructed block as the reference block and the selection unit selects this corrected reference motion vector as the reference motion vector, does the encoding unit 101 use the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on that block and obtain a prediction block.

Thus, performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.
In this embodiment, in order to reduce the degree of distortion between the reconstructed block and the original current block, the video encoding apparatus may perform filtering processing on the reconstructed block so as to reduce the difference between them. Fig. 6 is a schematic diagram of a structure of the video encoding apparatus in this embodiment. As shown in Fig. 6, the apparatus 600 includes: an encoding unit 601, a reconstruction unit 602, a filtering unit 603, and a correction unit 604, where the implementation of the encoding unit 601 and the reconstruction unit 602 is the same as that of the encoding unit 101 and the reconstruction unit 102 in Fig. 1 and will not be repeated here.

In this embodiment, the filtering unit 603 is configured to perform filtering processing on the reconstructed block. The filtering processing includes deblocking filtering (deblock) and sample adaptive offset (SAO), the specific implementation of which may refer to the prior art. For example, deblocking filtering can remove the block distortion generated at the boundaries between blocks in the reconstructed image, and SAO applies the offset between the deblocking-filtered reconstructed block and the original image, in forms such as band offset and edge offset.

In one implementation, the correction unit 604 corrects the reference motion vector after the filtering unit 603 performs filtering processing on the reconstructed block; in another implementation, after the reconstructed block is obtained, the correction unit 604 corrects the reference motion vector before the filtering unit 603 performs filtering processing on the reconstructed block.

In this embodiment, in order to avoid repeated correction, the apparatus 600 may further include: a judging unit (optional, not shown), configured to judge whether the reference motion vector has been corrected; and, when the judgment result of the judging unit is that it has not been corrected, the correction unit 604 corrects the reference motion vector. For the specific implementation, reference may be made to the judging unit 104, which will not be repeated here.

Thus, performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.
Embodiment 2

Embodiment 2 of the present invention provides an encoder, which may be included in an electronic device used for video processing, or may be one or some parts or components of the electronic device; it includes the video encoding apparatus of Embodiment 1, the content of which is incorporated here and will not be repeated.

Embodiment 2 of the present invention also provides an encoder, which may be included in an electronic device used for video processing, or may be one or some parts or components of the electronic device; content of this Embodiment 2 that is the same as that of Embodiment 1 will not be repeated.

Fig. 7 is a schematic diagram of the encoder structure of Embodiment 2. As shown in Fig. 7, the encoder 700 includes: a predictor 701, a transform and quantizer 702, an entropy encoder 703, an inverse transform and inverse quantizer 704, a filter 705 (optional), a corrector 706, a subtractor 707, an adder 708, and a memory 709.

In this embodiment, for the implementation of the predictor 701, the transform and quantizer 702, the entropy encoder 703, the inverse transform and inverse quantizer 704, the subtractor 707, and the adder 708, reference may be made to the encoding unit 101 and the reconstruction unit 102 in Embodiment 1. The predictor 701 includes an inter predictor performing inter-frame prediction and an intra predictor performing intra-frame prediction; the predictor 701 performs prediction processing on the blocks into which the frame image is partitioned, so as to generate prediction blocks. The predictor may determine whether the prediction performed on a corresponding block is inter prediction or intra prediction, and may determine the prediction mode of the inter prediction; for the specific inter prediction method, please refer to Embodiment 1 and the prior art, which will not be repeated here.

In this embodiment, the prediction block and the current block pass through the subtractor 707 to obtain the prediction residual, which is input to the transform and quantizer 702, where transform coefficients are created and quantized; the quantization result is input to the entropy encoder 703, and optionally the motion information described in Embodiment 1 may also be input to the entropy encoder. The entropy encoder 703 performs entropy encoding on the input data and outputs a bit stream; for the specific entropy encoding method, reference may be made to the prior art, which will not be repeated here. After the inverse transform and inverse quantizer 704 performs inverse quantization and inverse transform processing on the aforementioned quantization result, the prediction residual is added through the adder 708 to generate a reconstructed block.

In this embodiment, for the implementation of the filter 705, reference may be made to the filtering unit 603 in Embodiment 1, performing the following processing on the reconstructed block: deblocking filtering (deblock) and sample adaptive offset (SAO); optionally, for a reconstructed block obtained by inter prediction, the filter 705 may skip the filtering operation.

In this embodiment, for the implementation of the corrector 706, reference may be made to the correction unit 103 or the correction unit 604 in Embodiment 1: the reference motion vector is corrected after the reconstructed block is obtained, or after filtering processing is performed on the reconstructed block. In addition, the corrector 706 may also include the function of the judging unit 104; this embodiment is not limited thereto, and the specific correction method is described in Embodiment 1 and will not be repeated here.

In this embodiment, the memory 709 may store the generated reconstructed block (or the filtered reconstructed block) and the corrected reference motion vector; the reconstructed blocks and motion vectors stored in the memory 709 may be provided to the predictor 701 performing inter-frame prediction, for use in the inter-frame prediction of other blocks.

In this embodiment, the functions of the above predictor 701, transform and quantizer 702, entropy encoder 703, inverse transform and inverse quantizer 704, filter 705, corrector 706, subtractor 707, and adder 708 can be integrated into a central processing unit, with video encoding performed under the control of the processor.

Thus, performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.
Embodiment 3

An embodiment of the present invention also provides a video decoding apparatus, configured to decode data generated by encoding with the video encoding apparatus of Embodiment 1. Fig. 8 is a schematic diagram of the video decoding apparatus of this embodiment. As shown in Fig. 8, the apparatus includes:

a decoding unit 801, configured to decode the encoded data of an encoded current block, and to perform inter-frame prediction according to the decoding result and a reference motion vector to obtain a prediction block;

a reconstruction unit 802, configured to obtain a reconstructed block of the current block according to the prediction block;

a correction unit 803, configured to correct the reference motion vector after the reconstruction unit obtains the reconstructed block;

and the decoding unit 801 uses the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on other blocks.

In this embodiment, the encoded data of the encoded current block may be the entropy-encoded bit stream described in Embodiment 1, and the decoding unit 801 may perform entropy decoding on the bit stream to obtain the decoding result; the specific entropy decoding method corresponds to the entropy encoding method, and reference may be made to the prior art, which will not be repeated here.

In this embodiment, the decoding unit 801 performs inter-frame prediction according to the decoding result and the reference motion vector to generate a prediction block. For the inter-frame prediction process on the decoding side, reference may be made to the prior art; the prediction block may be generated using the motion information of the inter-frame prediction of the current block provided by the video encoding apparatus (for example, the reference motion vector, the reference frame index, and other information used by the inter-frame prediction on the encoding side), together with the decoding result.

In this embodiment, the reconstruction unit 802 performs inverse quantization and inverse transform processing on the decoding result to recover the prediction residual, and adds the prediction residual to the prediction block to obtain the reconstructed block of the current block; for the inverse quantization and inverse transform processing, please refer to Embodiment 1, which will not be repeated here.

In this embodiment, for the implementation of the correction unit 803, please refer to the correction unit 103 in Embodiment 1, which will not be repeated here.

In this embodiment, in order to avoid repeated correction, the apparatus 800 may further include:

a judging unit (optional, not shown), configured to judge whether the reference motion vector has been corrected; and, when the judgment result of the judging unit is that it has not been corrected, the correction unit 803 corrects the reference motion vector. For the specific implementation, reference may be made to the judging unit 104, which will not be repeated here.

In this embodiment, the corrected reference motion vector replaces the corresponding pre-correction reference motion vector in the motion vector candidate list, and the decoding unit 801 uses the corrected reference motion vector and the reconstructed block (as a reference block for the inter-frame prediction of other blocks) to perform inter-frame prediction on other blocks; the inter-frame prediction process is as described above and will not be repeated here. It should be noted that although the corrected reference motion vector and the reconstructed block can be used by other blocks for inter-frame prediction, this does not mean that they are used for the inter-frame prediction of all remaining undecoded blocks; only when, in the inter-frame prediction for decoding a certain block, the decoding unit 801 selects this reconstructed block and this reference motion vector, does the decoding unit 801 use the corrected reference motion vector and the reconstructed block to perform inter-frame prediction on that block and obtain a prediction block.
In this embodiment, in order to reduce the degree of distortion between the reconstructed block and the original current block, the video decoding apparatus may perform filtering processing on the reconstructed block so as to reduce the difference between them. Fig. 9 is a schematic diagram of a structure of the video decoding apparatus in this embodiment. As shown in Fig. 9, the apparatus 900 includes: a decoding unit 901, a reconstruction unit 902, a filtering unit 903, and a correction unit 904, where the implementation of the decoding unit 901 and the reconstruction unit 902 is the same as that of the decoding unit 801 and the reconstruction unit 802 in Fig. 8 and will not be repeated here.

In this embodiment, the filtering unit 903 is configured to perform filtering processing on the reconstructed block. The filtering processing includes deblocking filtering (deblock) and sample adaptive offset (SAO), the specific implementation of which may refer to the prior art. For example, deblocking filtering can remove the block distortion generated at the boundaries between blocks in the reconstructed image, and SAO applies the offset between the deblocking-filtered reconstructed block and the original image, in forms such as band offset and edge offset.

In one implementation, the correction unit 904 corrects the reference motion vector after the filtering unit 903 performs filtering processing on the reconstructed block; in another implementation, after the reconstructed block is obtained, the correction unit 904 corrects the reference motion vector before the filtering unit 903 performs filtering processing on the reconstructed block.

In this embodiment, in order to avoid repeated correction, the apparatus 900 may further include:

a judging unit (optional, not shown), configured to judge whether the reference motion vector has been corrected; and, when the judgment result of the judging unit is that it has not been corrected, the correction unit 904 corrects the reference motion vector. For the specific implementation, reference may be made to the judging unit 104, which will not be repeated here.

Thus, performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.
Embodiment 4

Embodiment 4 of the present invention provides a decoder, which may be included in an electronic device used for video processing, or may be one or some parts or components of the electronic device; it includes the video decoding apparatus of Embodiment 3, the content of which is incorporated here and will not be repeated.

Embodiment 4 of the present invention also provides a decoder, which may be included in an electronic device used for video processing, or may be one or some parts or components of the electronic device; content of this Embodiment 4 that is the same as that of Embodiment 3 will not be repeated.

Fig. 10 is a schematic diagram of the decoder structure of Embodiment 4. As shown in Fig. 10, the decoder 1000 includes: an entropy decoder 1001, an inverse transform and inverse quantizer 1002, a predictor 1003, a filter 1004 (optional), a corrector 1005, an adder 1006, and a memory 1007.

In this embodiment, for the implementation of the entropy decoder 1001, the inverse transform and inverse quantizer 1002, the predictor 1003, and the adder 1006, reference may be made to the decoding unit 801 and the reconstruction unit 802 in Embodiment 3. The entropy decoder 1001 decodes the encoded data (bit stream) of the encoded current block to generate a prediction residual with quantized frequency coefficients, and may also decode the motion information of the current block; based on the motion information, the predictor 1003 can use the same scheme as the predictor in the encoder of Embodiment 2 to predict the current block, and the obtained prediction block is input to the adder 1006.

In this embodiment, the inverse transform and inverse quantizer 1002 inversely quantizes the decoded prediction residual, which is, after inverse transform processing, input to the adder 1006 and superimposed with the prediction block to generate a reconstructed block of the current block.

In this embodiment, for the implementation of the filter 1004, reference may be made to the filtering unit 903 in Embodiment 3, performing the following processing on the reconstructed block: deblocking filtering (deblock) and sample adaptive offset (SAO).

In this embodiment, for the implementation of the corrector 1005, reference may be made to the correction unit 803 or the correction unit 904 in Embodiment 3: the reference motion vector is corrected after the reconstructed block is obtained, or after filtering processing is performed on the reconstructed block. In addition, the corrector 1005 may also include the function of the judging unit 104; this embodiment is not limited thereto, and the specific correction method is described in Embodiment 1 and will not be repeated here.

In this embodiment, the memory 1007 may store the filtered reconstructed block and the corrected reference motion vector; the reconstructed blocks and motion vectors stored in the memory 1007 may be provided to the predictor 1003 performing inter-frame prediction, for use in the inter-frame prediction of other blocks.

In this embodiment, the functions of the above entropy decoder 1001, inverse transform and inverse quantizer 1002, predictor 1003, filter 1004 (optional), corrector 1005, and adder 1006 can be integrated into a central processing unit, with video decoding performed under the control of the processor.

Thus, performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.
Embodiment 5

An embodiment of the present invention also provides a video encoding method. Fig. 11 is a schematic diagram of the video encoding method of this embodiment. As shown in Fig. 11, the method includes:

Step 1101: performing inter-frame prediction on a current block according to a reference motion vector to obtain a prediction block, and performing encoding according to the current block and the prediction block;

Step 1102: obtaining a reconstructed block of the current block according to the prediction block;

Step 1103: after the reconstructed block is obtained, correcting the reference motion vector; and the corrected reference motion vector and the reconstructed block can be used for the inter-frame prediction of other blocks.

In this embodiment, for the implementation of the above steps 1101-1103, reference may be made to the encoding unit 101, the reconstruction unit 102, and the correction unit 103 of the video encoding apparatus of Embodiment 1, which will not be repeated here.

In this embodiment, the above steps 1101-1103 constitute the encoding process of the current block; after it is completed, steps 1101-1103 are repeated for the next block after the current block, until the encoding of all the blocks into which the multiple frame images are partitioned is completed.
Fig. 12 is a flowchart of the correction step in step 1103 of this embodiment. As shown in Fig. 12, the correction step includes:

Step 1201: shifting the reference block of the reference frame corresponding to the reference motion vector in at least one direction by a first predetermined number of pixel values;

Step 1202: calculating the sum of absolute differences between a second predetermined number of pixels in the reconstructed block and the corresponding pixels of the offset reference block;

Step 1203: determining the corrected reference motion vector according to the offset reference block corresponding to the smallest sum of absolute differences.

In this embodiment, for the implementation of steps 1201-1203, please refer to the processing module 201, the calculation module 202, and the correction module 203 in Embodiment 1, which will not be repeated here.

In this embodiment, the method may further include (optional): step 1200, judging whether the reference motion vector has been corrected; when the judgment result is that it has not been corrected, steps 1201-1203 are executed; otherwise the correction step is skipped. Each reference motion vector has a correction flag, which indicates whether the reference motion vector has been corrected.

In this embodiment, when the inter-frame prediction is unidirectional prediction, the reference motion vector is a unidirectional reference motion vector, and the unidirectional reference motion vector is corrected in steps 1201-1203; when the inter-frame prediction is bidirectional prediction, the reference motion vector is a bidirectional reference motion vector (including a forward reference motion vector and a backward reference motion vector), and the forward reference motion vector and the backward reference motion vector are corrected in steps 1201-1203.

In this embodiment, optionally, the method may further include:

Step 1103': performing filtering processing on the reconstructed block; that is, this step 1103' is performed after the reconstructed block is obtained in step 1102. The above step 1103 may be performed before step 1103' or after step 1103', that is, the reference motion vector may be corrected after filtering processing is performed on the reconstructed block; this embodiment is not limited in this respect.

Thus, performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.
Embodiment 6

An embodiment of the present invention also provides a video decoding method. Fig. 13 is a schematic diagram of the video decoding method of this embodiment. As shown in Fig. 13, the method includes:

Step 1301: decoding the encoded data of an encoded current block, and performing inter-frame prediction according to the decoding result and a reference motion vector to obtain a prediction block;

Step 1302: obtaining a reconstructed block of the current block according to the prediction block;

Step 1303: after the reconstructed block is obtained, correcting the reference motion vector; and the corrected reference motion vector and the reconstructed block are used for the inter-frame prediction of other blocks.

In this embodiment, for the implementation of the above steps 1301-1303, reference may be made to the decoding unit 801, the reconstruction unit 802, and the correction unit 803 of the video decoding apparatus of Embodiment 3, which will not be repeated here.

In this embodiment, the above steps 1301-1303 constitute the decoding process of the current block; after it is completed, steps 1301-1303 are repeated for the next block after the current block, until the decoding of all the blocks into which the multiple frame images are partitioned is completed.

In this embodiment, for the specific implementation of the above step 1303, reference may be made to step 1103 of Embodiment 5 and Fig. 12, which will not be repeated here.

In this embodiment, optionally, the method may further include:

Step 1303': performing filtering processing on the reconstructed block; that is, this step 1303' is performed after the reconstructed block is obtained in step 1302. The above step 1303 may be performed before step 1303' or after step 1303', that is, the reference motion vector may be corrected after filtering processing is performed on the reconstructed block; this embodiment is not limited in this respect.

Thus, performing motion vector correction after the reconstructed block is obtained improves the accuracy of the motion vector correction, thereby enabling effective video encoding and decoding.
Embodiment 7

An embodiment of the present invention also provides an electronic device that performs image processing or video processing, including the encoder of Embodiment 2 and/or the decoder of Embodiment 4, the content of which is incorporated here and will not be repeated.

Fig. 14 is a schematic diagram of the electronic device of this embodiment. As shown in Fig. 14, the electronic device 1400 may include: a processor 1401 and a memory 1402, the memory 1402 being coupled to the processor 1401. The memory 1402 can store various data and also stores an information processing program 1403, which is executed under the control of the processor 1401.

In one implementation, the electronic device 1400 may be used as an encoder, and the functions of the video encoding apparatus 100 or 600 may be integrated into the processor 1401; the processor 1401 may be configured to implement the video encoding method described in Embodiment 5.

For example, the processor 1401 may be configured to perform the following control: performing inter-frame prediction on the current block according to the reference motion vector to obtain a prediction block, and performing encoding according to the current block and the prediction block; obtaining a reconstructed block of the current block according to the prediction block; after the reconstructed block is obtained, correcting the reference motion vector; and the corrected reference motion vector and the reconstructed block are used for the inter-frame prediction of other blocks.

In one implementation, the electronic device 1400 may be used as a decoder, and the functions of the video decoding apparatus 800 or 900 may be integrated into the processor 1401; the processor 1401 may be configured to implement the video decoding method described in Embodiment 6.

For example, the processor 1401 may be configured to perform the following control: decoding the encoded data of the encoded current block; performing inter-frame prediction according to the decoding result and the reference motion vector to obtain a prediction block; obtaining a reconstructed block of the current block according to the prediction block; after the reconstructed block is obtained, correcting the reference motion vector; and the corrected reference motion vector and the reconstructed block are used for the inter-frame prediction of other blocks.

In addition, as shown in Fig. 14, the electronic device 1400 may further include: an input/output (I/O) device 1404, a display 1405, and the like, the functions of which are similar to the prior art and will not be repeated here. It is worth noting that the electronic device 1400 does not necessarily include all the components shown in Fig. 14; in addition, the electronic device 1400 may also include components not shown in Fig. 14, for which reference may be made to the related art.

An embodiment of the present invention provides a computer-readable program, wherein when the program is executed in an encoder or an electronic device, the program causes the encoder or the electronic device to execute the video encoding method described in Embodiment 5.

An embodiment of the present invention provides a storage medium storing a computer-readable program, wherein the computer-readable program causes an encoder or an electronic device to execute the video encoding method described in Embodiment 5.

An embodiment of the present invention provides a computer-readable program, wherein when the program is executed in a decoder or an electronic device, the program causes the decoder or the electronic device to execute the video decoding method described in Embodiment 6.

An embodiment of the present invention provides a storage medium storing a computer-readable program, wherein the computer-readable program causes a decoder or an electronic device to execute the video decoding method described in Embodiment 6.
The above apparatus and methods of the present invention may be implemented by hardware, or by hardware combined with software. The present invention relates to such a computer-readable program: when the program is executed by a logic component, the logic component can implement the above-described apparatus or constituent components, or can implement the above-described methods or steps. The present invention also relates to storage media for storing the above programs, such as hard disks, magnetic disks, optical disks, DVDs, flash memory, and the like.

The methods/apparatus described in conjunction with the embodiments of the present invention may be directly embodied as hardware, a software module executed by a processor, or a combination of the two. For example, one or more of the functional block diagrams and/or one or more combinations of the functional block diagrams shown in the figures may correspond to software modules of a computer program flow or to hardware modules. These software modules may respectively correspond to the steps shown in the figures. These hardware modules may be implemented, for example, by solidifying these software modules using a field programmable gate array (FPGA).

A software module may be located in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor so that the processor can read information from, and write information to, the storage medium; or the storage medium may be a component of the processor. The processor and the storage medium may be located in an ASIC. The software module may be stored in the memory of a mobile terminal or in a memory card insertable into the mobile terminal. For example, if a device (such as a mobile terminal) uses a large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module may be stored in the MEGA-SIM card or the large-capacity flash memory device.

One or more of the functional blocks and/or one or more combinations of the functional blocks described in the drawings may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any appropriate combination thereof for performing the functions described in the present invention. One or more of the functional blocks and/or one or more combinations of the functional blocks described in the drawings may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.

The present invention has been described above with reference to specific embodiments, but it should be clear to those skilled in the art that these descriptions are exemplary and do not limit the protection scope of the present invention. Those skilled in the art may make various variations and modifications to the present invention according to the spirit and principles of the present invention, and these variations and modifications also fall within the scope of the present invention.

Claims (20)

  1. A video encoding apparatus, wherein the apparatus comprises:
    an encoding unit configured to perform inter prediction on a current block according to a reference motion vector to obtain a prediction block, and to perform encoding according to the current block and the prediction block;
    a reconstruction unit configured to obtain a reconstructed block of the current block according to the prediction block; and
    a correction unit configured to correct the reference motion vector after the reconstruction unit obtains the reconstructed block;
    wherein the encoding unit uses the corrected reference motion vector and the reconstructed block to perform inter prediction on other blocks.
  2. The apparatus according to claim 1, wherein the apparatus further comprises:
    a filtering unit configured to filter the reconstructed block;
    wherein the correction unit corrects the reference motion vector after the filtering unit filters the reconstructed block.
  3. The apparatus according to claim 1, wherein the apparatus further comprises:
    a judging unit configured to judge whether the reference motion vector has been corrected;
    wherein, when the judging unit determines that the reference motion vector has not been corrected, the correction unit corrects the reference motion vector.
  4. The apparatus according to claim 3, wherein the reference motion vector has a correction flag, the correction flag indicating whether the reference motion vector has been corrected.
  5. The apparatus according to claim 1, wherein the apparatus further comprises:
    a selection unit configured to select the reference motion vector from a motion vector candidate list, wherein the motion vectors in the motion vector candidate list are a set of motion vectors of neighboring blocks and/or motion vectors of corresponding blocks in neighboring frames.
  6. The apparatus according to claim 1, wherein, when the inter prediction is unidirectional prediction, the reference motion vector is a unidirectional reference motion vector and the correction unit corrects the unidirectional reference motion vector; and
    when the inter prediction is bidirectional prediction, the reference motion vector is a bidirectional reference motion vector comprising a forward reference motion vector and a backward reference motion vector, and the correction unit corrects the forward reference motion vector and the backward reference motion vector.
  7. The apparatus according to claim 1, wherein the correction unit comprises:
    a processing module configured to offset a reference block of the reference frame corresponding to the reference motion vector by a first predetermined number of pixels in at least one direction;
    a calculation module configured to calculate a sum of absolute differences between a second predetermined number of pixels in the reconstructed block and the corresponding pixels of the offset reference block; and
    a correction module configured to determine the corrected reference motion vector according to the offset reference block corresponding to the smallest sum of absolute differences.
  8. The apparatus according to claim 7, wherein the calculation module determines the second predetermined number of pixels according to the size of the reconstructed block.
  9. The apparatus according to claim 8, wherein, when the size of the reconstructed block is larger than a predetermined area, the calculation module takes the pixels of a predetermined region in the reconstructed block as the second predetermined number of pixels.
  10. The apparatus according to claim 9, wherein the predetermined region is an edge region.
  11. A video decoding apparatus, wherein the apparatus comprises:
    a decoding unit configured to decode encoded data of an encoded current block, and to perform inter prediction according to the decoding result and a reference motion vector to obtain a prediction block;
    a reconstruction unit configured to obtain a reconstructed block of the current block according to the prediction block; and
    a correction unit configured to correct the reference motion vector after the reconstruction unit obtains the reconstructed block;
    wherein the decoding unit uses the corrected reference motion vector and the reconstructed block to perform inter prediction on other blocks.
  12. The apparatus according to claim 11, wherein the apparatus further comprises:
    a filtering unit configured to filter the reconstructed block;
    wherein the correction unit corrects the reference motion vector after the filtering unit filters the reconstructed block.
  13. The apparatus according to claim 11, wherein the apparatus further comprises:
    a judging unit configured to judge whether the reference motion vector has been corrected;
    wherein, when the judging unit determines that the reference motion vector has not been corrected, the correction unit corrects the reference motion vector.
  14. The apparatus according to claim 13, wherein the reference motion vector has a correction flag, the correction flag indicating whether the reference motion vector has been corrected.
  15. The apparatus according to claim 11, wherein the apparatus further comprises:
    a selection unit configured to select the reference motion vector from a motion vector candidate list, wherein the motion vectors in the motion vector candidate list are a set of motion vectors of neighboring blocks and/or motion vectors of corresponding blocks in neighboring frames.
  16. The apparatus according to claim 11, wherein, when the inter prediction is unidirectional prediction, the reference motion vector is a unidirectional reference motion vector and the correction unit corrects the unidirectional reference motion vector; and
    when the inter prediction is bidirectional prediction, the reference motion vector is a bidirectional reference motion vector comprising a forward reference motion vector and a backward reference motion vector, and the correction unit corrects the forward reference motion vector and the backward reference motion vector.
  17. The apparatus according to claim 11, wherein the correction unit comprises:
    a processing module configured to offset a reference block of the reference frame corresponding to the reference motion vector by a first predetermined number of pixels in at least one direction;
    a calculation module configured to calculate a sum of absolute differences between a second predetermined number of pixels in the reconstructed block and the corresponding pixels of the offset reference block; and
    a correction module configured to determine the corrected reference motion vector according to the offset reference block corresponding to the smallest sum of absolute differences.
  18. The apparatus according to claim 17, wherein the calculation module determines the second predetermined number of pixels according to the size of the reconstructed block.
  19. The apparatus according to claim 18, wherein, when the size of the reconstructed block is larger than a predetermined area, the calculation module takes the pixels of a predetermined region in the reconstructed block as the second predetermined number of pixels.
  20. An electronic device, wherein the electronic device comprises:
    an encoder comprising the video encoding apparatus according to claim 1; and/or
    a decoder comprising the video decoding apparatus according to claim 11.
PCT/CN2019/075685 2019-02-21 2019-02-21 Video encoding method, video decoding method, apparatus, and electronic device WO2020168509A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/075685 WO2020168509A1 (zh) 2019-02-21 2019-02-21 Video encoding method, video decoding method, apparatus, and electronic device


Publications (1)

Publication Number Publication Date
WO2020168509A1

Family

ID=72143310

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075685 WO2020168509A1 (zh) 2019-02-21 2019-02-21 视频编码方法、视频解码方法、装置以及电子设备

Country Status (1)

Country Link
WO (1) WO2020168509A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103329537A (zh) * 2011-01-21 2013-09-25 SK Telecom Co., Ltd. Apparatus and method for generating/recovering motion information based on predictive motion vector index encoding, and apparatus and method for video encoding/decoding using the motion information
US20160080770A1 (en) * 2008-03-07 2016-03-17 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
CN109155847A (zh) * 2016-03-24 2019-01-04 Intellectual Discovery Co., Ltd. Method and apparatus for encoding/decoding a video signal



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19915829; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19915829; Country of ref document: EP; Kind code of ref document: A1)