WO2019184556A1 - Bidirectional inter-frame prediction method and apparatus - Google Patents

Bidirectional inter-frame prediction method and apparatus

Info

Publication number
WO2019184556A1
WO2019184556A1 (PCT/CN2019/071471, CN2019071471W)
Authority
WO
WIPO (PCT)
Prior art keywords
reference frame
motion information
image block
current image
motion vector
Prior art date
Application number
PCT/CN2019/071471
Other languages
English (en)
French (fr)
Inventor
Huanbang Chen (陈焕浜)
Haitao Yang (杨海涛)
Jianle Chen (陈建乐)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to MX2020010174A priority Critical patent/MX2020010174A/es
Priority to CA3095220A priority patent/CA3095220C/en
Application filed by Huawei Technologies Co., Ltd.
Priority to BR112020018923-5A priority patent/BR112020018923A2/pt
Priority to KR1020207029639A priority patent/KR102525178B1/ko
Priority to KR1020237013341A priority patent/KR102622150B1/ko
Priority to RU2020134970A priority patent/RU2762262C1/ru
Priority to JP2020552815A priority patent/JP7143435B2/ja
Priority to AU2019240981A priority patent/AU2019240981B2/en
Priority to EP19776934.2A priority patent/EP3771211A4/en
Priority to SG11202009509VA priority patent/SG11202009509VA/en
Publication of WO2019184556A1 publication Critical patent/WO2019184556A1/zh
Priority to PH12020551544A priority patent/PH12020551544A1/en
Priority to US16/948,625 priority patent/US11350122B2/en
Priority to ZA2020/06408A priority patent/ZA202006408B/en
Priority to US17/731,109 priority patent/US11924458B2/en
Priority to US17/827,361 priority patent/US11838535B2/en
Priority to JP2022146320A priority patent/JP2022179516A/ja
Priority to US18/416,294 priority patent/US20240171765A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139 Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the embodiments of the present invention relate to the field of video image coding and decoding technologies, and in particular, to a bidirectional inter prediction method and apparatus.
  • the predicted image block of the current image block may be generated from only one reference image block (i.e., unidirectional inter prediction), or from at least two reference image blocks (i.e., bidirectional inter prediction); the at least two reference image blocks may be from the same reference frame or from different reference frames.
  • the encoding end needs to send the motion information of each image block to the decoding end in the code stream.
  • the motion information of the current image block includes an index value of a reference frame, a motion vector predictor (MVP) flag, and a motion vector difference (MVD).
  • the encoding end needs to send motion information of each image block in each direction to the decoding end.
  • because the motion information occupies substantial transmission resources, the effective utilization of transmission resources is reduced, the transmission rate decreases, and coding/decoding compression efficiency decreases.
  • the embodiments of the present invention provide a bidirectional inter-frame prediction method and apparatus, which can address the problem that motion information occupies large transmission resources, thereby reducing the effective utilization of transmission resources, the transmission rate, and the coding and decoding efficiency.
  • a bidirectional inter prediction method is provided: indication information is acquired, the indication information indicating that the second motion information is determined according to the first motion information, where the first motion information is motion information of the current image block in a first direction and the second motion information is motion information of the current image block in a second direction; the first motion information is obtained; the second motion information is determined according to the acquired first motion information; and a predicted pixel of the current image block is determined according to the first motion information and the determined second motion information.
  • the bidirectional inter prediction method provided by the present application determines the second motion information according to the first motion information after acquiring the indication information, so that the code stream includes only the indication information and the first motion information and does not need to include the second motion information.
  • compared with the prior art, in which the code stream includes motion information of each image block in each direction, this effectively reduces the motion information included in the code stream, improves the effective utilization of transmission resources, increases the transmission rate, and also increases the codec rate.
  • in a possible implementation, the method for determining the second motion information according to the first motion information is: acquiring an index value of the first reference frame from the first motion information, and determining the sequence number of the first reference frame according to the index value of the first reference frame and the first reference frame list, where the first reference frame is the reference frame of the current image block in the first direction and the index value of the first reference frame is the number of the first reference frame in the first reference frame list; acquiring an index value of the second reference frame, and determining the sequence number of the second reference frame according to the index value of the second reference frame and the second reference frame list, where the second reference frame is the reference frame of the current image block in the second direction and the index value of the second reference frame is the number of the second reference frame in the second reference frame list; determining the first motion vector according to the first motion vector difference and the first motion vector predictor flag in the first motion information, where the first motion vector is the motion vector of the current image block in the first direction; and determining the second motion vector in the second motion information according to the following formula:

    mv_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mv_lX

    where:
  • mv_lY represents the second motion vector
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame
  • mv_lX represents the first motion vector
  • the second motion vector is the motion vector of the current image block in the second direction.
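The POC-distance scaling above can be sketched in a few lines. This is an illustrative sketch only, with hypothetical names and plain floating-point arithmetic; a conformant codec uses fixed-point arithmetic with defined rounding and clipping.

```python
def scale_mv(mv_lx, poc_cur, poc_listx, poc_listy):
    """Derive the second motion vector from the first:
    mv_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mv_lX.
    mv_lx is an (x, y) motion vector; poc_* are picture order counts."""
    scale = (poc_cur - poc_listy) / (poc_cur - poc_listx)
    return (scale * mv_lx[0], scale * mv_lx[1])

# Current frame at POC 4, first (forward) reference at POC 0,
# second (backward) reference at POC 8: the temporal distances mirror
# each other, so the derived vector is the negated first vector.
print(scale_mv((8, -4), poc_cur=4, poc_listx=0, poc_listy=8))  # (-8.0, 4.0)
```

Note that when the two reference frames lie on the same side of the current frame, the scale factor is positive and the derived vector points the same way as the first.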
  • in a possible implementation, the method for determining the second motion information according to the first motion information is: acquiring an index value of the first reference frame from the first motion information, where the first reference frame is the reference frame of the current image block in the first direction and the index value of the first reference frame is the number of the first reference frame in the first reference frame list; acquiring an index value of the second reference frame, where the second reference frame is the reference frame of the current image block in the second direction and the index value of the second reference frame is the number of the second reference frame in the second reference frame list; determining the first motion vector according to the first motion vector difference and the first motion vector predictor flag in the first motion information, where the first motion vector is the motion vector of the current image block in the first direction; and, in the case where the first reference frame is a forward reference frame of the current image block and the second reference frame is a backward reference frame of the current image block, or the first reference frame is a backward reference frame and the second reference frame is a forward reference frame of the current image block, or the first reference frame and the second reference frame are both forward reference frames of the current image block, or the first reference frame and the second reference frame are both backward reference frames of the current image block, determining the second motion vector in the second motion information according to the following formula:

    mv_lY = -mv_lX

    where:
  • mv_lY represents a second motion vector
  • mv_lX represents a first motion vector
  • the second motion vector is a motion vector of a current image block in a second direction.
  • these cases cover the first reference frame being the forward reference frame of the current image block with the second reference frame being its backward reference frame, and the first reference frame being the backward reference frame with the second reference frame being the forward reference frame, where POC_Cur represents the sequence number of the current frame, POC_listX represents the sequence number of the first reference frame, and POC_listY represents the sequence number of the second reference frame.
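For these mirrored direction cases, no POC arithmetic is needed at all; a minimal illustrative sketch (the function name is hypothetical):

```python
def mirror_mv(mv_lx):
    """mv_lY = -mv_lX: the second motion vector is simply
    the negation of the first motion vector."""
    return (-mv_lx[0], -mv_lx[1])

print(mirror_mv((8, -4)))  # (-8, 4)
```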
  • in a possible implementation, the method for determining the second motion information according to the first motion information is: acquiring an index value of the first reference frame and the first motion vector difference from the first motion information, and determining the sequence number of the first reference frame according to the index value of the first reference frame and the first reference frame list, where the first reference frame is the reference frame of the current image block in the first direction and the index value of the first reference frame is the number of the first reference frame in the first reference frame list; acquiring an index value of the second reference frame, determining the sequence number of the second reference frame according to the index value of the second reference frame and the second reference frame list, and determining a second predicted motion vector according to the index value of the second reference frame and the second candidate predicted motion vector list, where the second predicted motion vector is the predicted motion vector of the current image block in the second direction, the second reference frame is the reference frame of the current image block in the second direction, and the index value of the second reference frame is the number of the second reference frame in the second reference frame list; and determining the second motion vector difference in the second motion information according to the following formula:

    mvd_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvd_lX

    where:
  • mvd_lY represents the second motion vector difference
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame
  • mvd_lX represents the first motion vector difference
  • the second motion vector is then determined according to the second predicted motion vector and the second motion vector difference; the second motion vector is the motion vector of the current image block in the second direction.
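The MVD-based derivation can be sketched the same way: the second motion vector difference is scaled from the first, then added to the second predicted motion vector. Again an illustrative floating-point sketch with hypothetical names, not conformant fixed-point codec arithmetic:

```python
def derive_second_mv(mvd_lx, mvp_ly, poc_cur, poc_listx, poc_listy):
    """mvd_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvd_lX,
    then mv_lY = mvp_lY + mvd_lY (predictor plus derived difference)."""
    scale = (poc_cur - poc_listy) / (poc_cur - poc_listx)
    mvd_ly = (scale * mvd_lx[0], scale * mvd_lx[1])
    return (mvp_ly[0] + mvd_ly[0], mvp_ly[1] + mvd_ly[1])

# Mirrored distances (references at POC 0 and 8 around POC 4)
# give mvd_lY = -mvd_lX, which is then added to the predictor.
print(derive_second_mv((2, -2), (1, 1), 4, 0, 8))  # (-1.0, 3.0)
```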
  • in a possible implementation, the method for determining the second motion information according to the first motion information is: acquiring an index value of the first reference frame and the first motion vector from the first motion information, where the first reference frame is the reference frame of the current image block in the first direction and the index value of the first reference frame is the number of the first reference frame in the first reference frame list; acquiring an index value of the second reference frame, and determining a second predicted motion vector according to the index value of the second reference frame and the second candidate predicted motion vector list, where the second predicted motion vector is the predicted motion vector of the current image block in the second direction, the second reference frame is the reference frame of the current image block in the second direction, and the index value of the second reference frame is the number of the second reference frame in the second reference frame list; and, in the case where the first reference frame is a forward reference frame of the current image block and the second reference frame is a backward reference frame of the current image block, or the first reference frame is a backward reference frame of the current image block and the second reference frame is a forward reference frame of the current image block, or the first reference frame and the second reference frame are both forward reference frames of the current image block, or the first reference frame and the second reference frame are both backward reference frames of the current image block, determining the second motion vector difference in the second motion information according to the following formula:

    mvd_lY = -mvd_lX

    where:
  • mvd_lY represents a second motion vector difference
  • mvd_lX represents a first motion vector difference
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame.
  • that is, in the bidirectional inter prediction method provided by the present application, the second motion vector may be determined according to the first motion vector, or the second motion vector difference may be determined according to the first motion vector difference and the second motion vector then determined according to the second motion vector difference.
  • in a possible implementation, the foregoing method for acquiring the index value of the second reference frame is: calculating a first sequence number according to the sequence number of the current frame and the sequence number of the first reference frame by using the formula POC_listY0 = 2 * POC_Cur - POC_listX, where POC_Cur represents the sequence number of the current frame, POC_listX represents the sequence number of the first reference frame, and POC_listY0 represents the first sequence number; and, in the case where the second reference frame list includes the first sequence number, determining the number of the reference frame characterized by the first sequence number in the second reference frame list as the index value of the second reference frame.
  • in a possible implementation, the foregoing method for acquiring the index value of the second reference frame is: calculating a second sequence number according to the sequence number of the current frame and the sequence number of the first reference frame by using the condition (POC_Cur - POC_listX) * (POC_listY0' - POC_Cur) > 0, where POC_listY0' represents the second sequence number; and, in the case where the second reference frame list includes the second sequence number, determining the number of the reference frame characterized by the second sequence number in the second reference frame list as the index value of the second reference frame.
  • in a possible implementation, the foregoing method for acquiring the index value of the second reference frame is: obtaining a third sequence number according to the sequence number of the current frame and the sequence number of the first reference frame by using the condition POC_listX ≠ POC_listY0'', where POC_listY0'' represents the third sequence number; the number of the reference frame characterized by the third sequence number in the second reference frame list is determined as the index value of the second reference frame.
  • the first sequence number is calculated by using the formula POC_listY0 = 2 * POC_Cur - POC_listX, where POC_Cur represents the sequence number of the current frame, POC_listX represents the sequence number of the first reference frame, and POC_listY0 represents the first sequence number.
  • in the case where the second reference frame list includes the first sequence number, the number of the reference frame characterized by the first sequence number in the second reference frame list is determined as the index value of the second reference frame.
  • the second sequence number is calculated according to the sequence number of the current frame and the sequence number of the first reference frame by using a formula (POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0.
  • POC_listY0' represents the second serial number.
  • the number of the reference frame characterized by the second sequence number in the second reference frame list is determined as the index value of the second reference frame.
  • the third sequence number is obtained by using the condition POC_listX ≠ POC_listY0'' according to the sequence number of the current frame and the sequence number of the first reference frame, where POC_listY0'' represents the third sequence number;
  • the number of the reference frame characterized by the third sequence number in the second reference frame list is determined as the index value of the second reference frame.
  • the method for “acquiring an index value of the second reference frame” is: parsing the code stream, and acquiring an index value of the second reference frame.
  • the method for obtaining the index value of the second reference frame needs to be determined according to actual requirements or presets.
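The three candidate sequence numbers above amount to a short, ordered search over the second reference frame list. A sketch under the assumption that the list is given as POCs ordered by reference index (illustrative, hypothetical names):

```python
def second_ref_index(poc_cur, poc_listx, ref_list_y):
    """ref_list_y: POCs of the second reference frame list, ordered by index."""
    # First candidate: the mirrored POC, POC_listY0 = 2*POC_Cur - POC_listX.
    mirrored = 2 * poc_cur - poc_listx
    if mirrored in ref_list_y:
        return ref_list_y.index(mirrored)
    # Second candidate: any POC on the opposite side of the current frame,
    # i.e. (POC_Cur - POC_listX) * (POC_listY0' - POC_Cur) > 0.
    for i, poc in enumerate(ref_list_y):
        if (poc_cur - poc_listx) * (poc - poc_cur) > 0:
            return i
    # Third candidate: any POC different from the first reference frame,
    # i.e. POC_listX != POC_listY0''.
    for i, poc in enumerate(ref_list_y):
        if poc != poc_listx:
            return i
    return None  # no usable second reference frame found

print(second_ref_index(4, 0, [8, 2]))  # 0: POC 8 mirrors POC 0 around POC 4
print(second_ref_index(4, 0, [2, 6]))  # 1: POC 6 lies on the opposite side
```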
  • a bidirectional inter prediction apparatus comprising an acquisition unit and a determination unit.
  • the acquiring unit is configured to acquire indication information, where the indication information is used to indicate that the second motion information is determined according to the first motion information, the first motion information being motion information of the current image block in the first direction and the second motion information being motion information of the current image block in the second direction, and to acquire the first motion information.
  • the determining unit is configured to determine second motion information according to the first motion information acquired by the acquiring unit, and to determine a predicted pixel of the current image block according to the first motion information and the second motion information.
  • in a possible implementation, the determining unit is specifically configured to: obtain an index value of the first reference frame from the first motion information, and determine the sequence number of the first reference frame according to the index value of the first reference frame and the first reference frame list, where the first reference frame is the reference frame of the current image block in the first direction and the index value of the first reference frame is the number of the first reference frame in the first reference frame list; obtain an index value of the second reference frame, and determine the sequence number of the second reference frame according to the index value of the second reference frame and the second reference frame list, where the second reference frame is the reference frame of the current image block in the second direction and the index value of the second reference frame is the number of the second reference frame in the second reference frame list; determine the first motion vector according to the first motion vector difference and the first motion vector predictor flag in the first motion information, where the first motion vector is the motion vector of the current image block in the first direction; and determine the second motion vector in the second motion information according to the following formula:

    mv_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mv_lX

    where:
  • mv_lY represents the second motion vector
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame
  • mv_lX represents the first motion vector
  • the second motion vector is the motion vector of the current image block in the second direction.
  • in a possible implementation, the determining unit is specifically configured to: obtain an index value of the first reference frame from the first motion information, where the first reference frame is the reference frame of the current image block in the first direction and the index value of the first reference frame is the number of the first reference frame in the first reference frame list; obtain an index value of the second reference frame, where the second reference frame is the reference frame of the current image block in the second direction and the index value of the second reference frame is the number of the second reference frame in the second reference frame list; determine the first motion vector according to the first motion vector difference and the first motion vector predictor flag in the first motion information, where the first motion vector is the motion vector of the current image block in the first direction; and, in the case where the first reference frame is a forward reference frame of the current image block and the second reference frame is a backward reference frame of the current image block, or the first reference frame is a backward reference frame and the second reference frame is a forward reference frame of the current image block, or both reference frames are forward reference frames of the current image block, or both are backward reference frames of the current image block, determine the second motion vector in the second motion information according to the following formula:

    mv_lY = -mv_lX

    where:
  • mv_lY represents a second motion vector
  • mv_lX represents a first motion vector
  • the second motion vector is a motion vector of a current image block in a second direction.
  • in a possible implementation, the determining unit is specifically configured to: obtain an index value of the first reference frame and the first motion vector difference from the first motion information, and determine the sequence number of the first reference frame according to the index value of the first reference frame and the first reference frame list, where the first reference frame is the reference frame of the current image block in the first direction and the index value of the first reference frame is the number of the first reference frame in the first reference frame list; obtain an index value of the second reference frame, determine the sequence number of the second reference frame according to the index value of the second reference frame and the second reference frame list, and determine a second predicted motion vector according to the index value of the second reference frame and the second candidate predicted motion vector list, where the second predicted motion vector is the predicted motion vector of the current image block in the second direction, the second reference frame is the reference frame of the current image block in the second direction, and the index value of the second reference frame is the number of the second reference frame in the second reference frame list; and determine the second motion vector difference in the second motion information according to the following formula:

    mvd_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvd_lX

    where:
  • mvd_lY represents the second motion vector difference
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame
  • mvd_lX represents the first motion vector difference
  • in a possible implementation, the determining unit is specifically configured to: obtain an index value of the first reference frame and the first motion vector from the first motion information, where the first reference frame is the reference frame of the current image block in the first direction and the index value of the first reference frame is the number of the first reference frame in the first reference frame list; obtain an index value of the second reference frame, and determine a second predicted motion vector according to the index value of the second reference frame and the second candidate predicted motion vector list, where the second predicted motion vector is the predicted motion vector of the current image block in the second direction, the second reference frame is the reference frame of the current image block in the second direction, and the index value of the second reference frame is the number of the second reference frame in the second reference frame list; and, in the case where the first reference frame is a forward reference frame of the current image block and the second reference frame is a backward reference frame of the current image block, or the first reference frame is a backward reference frame and the second reference frame is a forward reference frame of the current image block, or the first reference frame and the second reference frame are both forward reference frames of the current image block, or the first reference frame and the second reference frame are both backward reference frames of the current image block, determine the second motion vector difference in the second motion information according to the following formula:

    mvd_lY = -mvd_lX

    where:
  • mvd_lY represents the second motion vector difference
  • mvd_lX represents the first motion vector difference
  • in a possible implementation, the acquiring unit is specifically configured to: calculate a second sequence number according to the sequence number of the current frame and the sequence number of the first reference frame by using the condition (POC_Cur - POC_listX) * (POC_listY0' - POC_Cur) > 0, where POC_listY0' represents the second sequence number; and, in the case where the second reference frame list includes the second sequence number, determine the number of the reference frame characterized by the second sequence number in the second reference frame list as the index value of the second reference frame.
• the acquiring unit is specifically configured to: calculate, according to the sequence number of the current frame and the sequence number of the first reference frame, a third sequence number by using the condition POC_listX ≠ POC_listY0″, where POC_listY0″ represents the third sequence number; and determine, as the index value of the second reference frame, the number, in the second reference frame list, of the reference frame characterized by the third sequence number.
  • a bidirectional inter prediction method is provided. There are several implementations of this bidirectional inter prediction method:
• One implementation manner is: parsing a code stream and acquiring a first identifier, where the first identifier is used to indicate whether the second motion information is determined according to the first motion information, the first motion information is motion information of the current image block in a first direction, and the second motion information is motion information of the current image block in a second direction; if the value of the first identifier is a first preset value, acquiring the first motion information and determining the second motion information according to the first motion information; and determining a predicted pixel of the current image block according to the first motion information and the second motion information.
  • the other implementation manner is: parsing the code stream, and acquiring the second identifier, where the second identifier is used to indicate whether the motion information of the current image block is calculated by using the motion information derivation algorithm; if the value of the second identifier is the second preset value, Obtaining a third identifier, where the third identifier is used to indicate whether the second motion information is determined according to the first motion information, where the first motion information is motion information of the current image block in the first direction, and the second motion information is that the current image block is in the second a motion information of the direction; if the third identifier is a third preset value, acquiring the first motion information, and determining the second motion information according to the first motion information; determining the current according to the first motion information and the second motion information The predicted pixel of the image block.
  • the other implementation manner is: parsing the code stream, and acquiring the second identifier, where the second identifier is used to indicate whether the motion information of the current image block is calculated by using the motion information derivation algorithm; if the value of the second identifier is the second preset value, Acquiring the first motion information, and determining the second motion information according to the first motion information, where the first motion information is motion information of the current image block in the first direction, and the second motion information is motion information of the current image block in the second direction; Determining predicted pixels of the current image block based on the first motion information and the second motion information.
• Another implementation manner is: parsing a code stream and acquiring a fourth identifier, where the fourth identifier is used to indicate whether the motion information of the current image block is calculated by using a motion information derivation algorithm; if the value of the fourth identifier is a fourth preset value, determining, according to the first reference frame list and the second reference frame list, an index value of the first reference frame and an index value of the second reference frame, where the first reference frame list is a reference frame list of the current image block in the first direction, the second reference frame list is a reference frame list of the current image block in the second direction, the first reference frame is a reference frame of the current image block in the first direction, and the second reference frame is a reference frame of the current image block in the second direction; acquiring a first motion vector difference and a first motion vector predictor flag, and determining second motion information according to the first motion information, where the first motion information includes the index value of the first reference frame, the first motion vector difference, and the first motion vector predictor flag, and the second motion information is motion information of the current image block in the second direction; and determining a predicted pixel of the current image block according to the first motion information and the second motion information.
• Another implementation manner is: parsing a code stream and acquiring a first identifier, where the first identifier is used to indicate whether the second motion information is determined according to the first motion information, the first motion information is motion information of the current image block in the first direction, and the second motion information is motion information of the current image block in the second direction; if the value of the first identifier is an eighth preset value, acquiring a fifth identifier, where the fifth identifier is used to indicate whether the first motion information is determined according to the second motion information; if the value of the fifth identifier is a fifth preset value, acquiring the second motion information and determining the first motion information according to the second motion information; and determining a predicted pixel of the current image block according to the first motion information and the second motion information.
  • the other implementation manner is: parsing the code stream, and acquiring the second identifier, where the second identifier is used to indicate whether the motion information of the current image block is calculated by using the motion information derivation algorithm; if the value of the second identifier is the second preset value, Obtaining a third identifier, where the third identifier is used to indicate whether the second motion information is determined according to the first motion information, where the first motion information is motion information of the current image block in the first direction, and the second motion information is that the current image block is in the second a motion information of the direction; if the third identifier is a sixth preset value, acquiring second motion information, and determining first motion information according to the second motion information; determining current according to the first motion information and the second motion information The predicted pixel of the image block.
• Compared with an approach in which the code stream includes motion information of each image block in each direction, the foregoing implementation manners effectively reduce the motion information included in the code stream, improve the effective utilization of transmission resources, increase the transmission rate, and increase the codec rate.
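The identifier-driven decoding flows listed above share one skeleton: a parsed flag tells the decoder whether the second-direction motion information is derived instead of read from the stream. A simplified control-flow sketch (the stream class, flag positions, and preset value 1 are placeholders, not the actual syntax elements):

```python
class BitSource:
    """Toy stand-in for a parsed code stream (illustrative only)."""
    def __init__(self, items):
        self.items = list(items)

    def read(self):
        return self.items.pop(0)

def decode_motion_info(stream, derive):
    """Skeleton of the first implementation manner: the first identifier
    indicates whether the second-direction motion information is derived
    from the first-direction information rather than parsed."""
    first_id = stream.read()          # the first identifier
    mi_first = stream.read()          # first-direction motion information
    if first_id == 1:                 # first preset value (assumed: 1)
        mi_second = derive(mi_first)  # derived, saving stream bits
    else:
        mi_second = stream.read()     # otherwise parsed explicitly
    return mi_first, mi_second
```

The bit saving comes from the derived branch: in that case the second-direction motion information never appears in the stream at all.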
  • a bidirectional inter prediction apparatus comprising an acquisition unit and a determination unit.
• the acquiring unit is configured to: parse the code stream and obtain a first identifier, where the first identifier is used to indicate whether the second motion information is determined according to the first motion information, the first motion information is motion information of the current image block in the first direction, and the second motion information is motion information of the current image block in the second direction; and obtain the first motion information if the value of the first identifier is a first preset value.
  • the determining unit is configured to determine second motion information according to the first motion information acquired by the acquiring unit, and to determine a predicted pixel of the current image block according to the first motion information and the second motion information.
• the acquiring unit is configured to: parse the code stream and obtain a second identifier, where the second identifier is used to indicate whether the motion information of the current image block is calculated by using a motion information derivation algorithm; if the value of the second identifier is a second preset value, obtain a third identifier, where the third identifier is used to indicate whether the second motion information is determined according to the first motion information, the first motion information is motion information of the current image block in the first direction, and the second motion information is motion information of the current image block in the second direction; and obtain the first motion information if the value of the third identifier is a third preset value.
  • the determining unit is configured to determine second motion information according to the first motion information acquired by the acquiring unit, and to determine a predicted pixel of the current image block according to the first motion information and the second motion information.
• the acquiring unit is configured to: parse the code stream and obtain a second identifier, where the second identifier is used to indicate whether the motion information of the current image block is calculated by using a motion information derivation algorithm; and obtain the first motion information if the value of the second identifier is a second preset value.
  • the determining unit is configured to determine second motion information according to the first motion information acquired by the acquiring unit, where the first motion information is motion information of the current image block in the first direction, and the second motion information is that the current image block is in the second The motion information of the direction, and for determining the predicted pixel of the current image block according to the first motion information and the second motion information.
  • the acquiring unit is configured to parse the code stream, and obtain a fourth identifier, where the fourth identifier is used to indicate whether the motion information of the current image block is calculated by using the motion information derivation algorithm.
• the determining unit is configured to: if the value of the fourth identifier acquired by the acquiring unit is a fourth preset value, determine, according to the first reference frame list and the second reference frame list, an index value of the first reference frame and an index value of the second reference frame, where the first reference frame list is a reference frame list of the current image block in the first direction, the second reference frame list is a reference frame list of the current image block in the second direction, the first reference frame is a reference frame of the current image block in the first direction, and the second reference frame is a reference frame of the current image block in the second direction.
  • the acquiring unit is further configured to acquire the first motion vector difference and the first motion vector predictor flag.
  • the determining unit is further configured to determine second motion information according to the first motion information, where the first motion information includes an index value of the first reference frame, a first motion vector difference, and a first motion vector predictor flag, where the second motion information is The motion information of the current image block in the second direction; determining the predicted pixel of the current image block according to the first motion information and the second motion information.
  • a terminal comprising: one or more processors, a memory, and a communication interface.
• The memory and the communication interface are coupled to the one or more processors; the memory is configured to store computer program code, the computer program code includes instructions, and when the one or more processors execute the instructions, the terminal performs the bidirectional inter prediction method described in the first aspect and any possible implementation manner thereof, or the bidirectional inter prediction method described in the third aspect and any possible implementation manner thereof.
• A video decoder is provided, including a non-volatile storage medium and a processor, where the non-volatile storage medium stores an executable program, and the processor is connected to the non-volatile storage medium and executes the executable program to implement the bidirectional inter prediction method described in the first aspect and any possible implementation manner thereof, or the bidirectional inter prediction method described in the third aspect and any possible implementation manner thereof.
  • a decoder comprising the bidirectional inter prediction apparatus and the reconstruction module in the second aspect, wherein the reconstruction module is configured to use a prediction pixel obtained according to the bidirectional inter prediction apparatus Determining a reconstructed pixel value of a current image block; or the decoder comprises the bidirectional inter prediction apparatus and the reconstruction module of the fourth aspect described above, wherein the reconstruction module is configured to obtain according to the bidirectional inter prediction apparatus The predicted pixel determines the reconstructed pixel value of the current image block.
• A computer-readable storage medium is provided, storing instructions that, when run on the terminal described in the fifth aspect, cause the terminal to perform the bidirectional inter prediction method described in the first aspect and any possible implementation manner thereof.
• According to a ninth aspect, a computer program product comprising instructions is provided; when the computer program product runs on the terminal described in the fifth aspect, the terminal is caused to perform the bidirectional inter prediction method described in the first aspect and any possible implementation manner thereof.
• The names of the above-mentioned bidirectional inter prediction apparatuses do not limit the devices or function modules themselves. In actual implementation, these devices or function modules may appear under other names. As long as the functions of the respective devices or function modules are similar to those in the present application, they fall within the scope of the claims and their equivalents.
  • FIG. 1 is a schematic structural diagram of a video codec system according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a video encoder according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a video decoder in an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a bidirectional inter-frame prediction method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram 1 of a bidirectional inter-frame prediction apparatus according to an embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram 2 of a bidirectional inter-frame prediction apparatus according to an embodiment of the present disclosure.
  • the words “exemplary” or “such as” are used to mean an example, illustration, or illustration. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of the words “exemplary” or “such as” is intended to present the concepts in a particular manner.
• Image encoding: the process of compressing a sequence of images into a code stream.
• Image decoding: the process of restoring a code stream into a reconstructed image according to specific syntax rules and processing methods.
• The encoding process of a video image is as follows: the encoding end first divides a frame of the original image into a plurality of non-overlapping parts, and each part can be used as an image block; then, for each image block, the encoding end performs operations such as prediction (Prediction), transform, and quantization to obtain a code stream corresponding to the image block, where the prediction is performed to obtain a prediction block of the image block, so that only the difference between the image block and its prediction block (referred to as a residual or a residual block) needs to be encoded and transmitted, thereby saving transmission overhead; finally, the encoding end sends the code stream corresponding to the image block to the decoding end.
• After receiving the code stream, the decoding end performs a video decoding process. Specifically, the decoding end performs operations such as prediction, inverse quantization, and inverse transform on the received code stream to obtain a reconstructed image block (also referred to as a reconstruction block); this process is called an image reconstruction process. Then, the decoding end assembles the reconstruction block of each image block in the original image to obtain a reconstructed image of the original image, and plays the reconstructed image.
  • Existing video image coding and decoding techniques include intra prediction and inter prediction.
• Inter prediction refers to prediction performed, in units of coded/decoded image blocks, by using the correlation between the current frame and its reference frame.
  • the predicted image block of the current image block is generated according to the pixels in the reference frame of the current image block.
  • the predicted image block of the current image block may be generated from only one reference image block, or the predicted image block of the current image block may be generated according to at least two reference image blocks.
• Generating the predicted image block of the current image block from one reference image block is referred to as unidirectional prediction, and generating the predicted image block of the current image block from at least two reference image blocks is referred to as bidirectional inter prediction.
  • At least two reference image blocks in bi-directional inter prediction may be from the same reference frame or different reference frames. That is to say, the "direction" referred to in the present application is a broad definition. One direction in the present application corresponds to one reference image block.
• The first direction and the second direction in the following correspond to different reference image blocks, which may both be included in the forward reference frame or the backward reference frame of the current image block, or one may be included in the forward reference frame of the current image block while the other is included in the backward reference frame of the current image block.
• Specifically, bidirectional inter prediction may refer to inter prediction performed by using both the correlation between the current video frame and a video frame that is encoded and played before it, and the correlation between the current video frame and a video frame that is encoded before it but played after it.
  • forward inter-frame prediction refers to inter-prediction using the correlation between the current video frame and a video frame that was previously encoded and played before it.
• Backward inter prediction refers to inter prediction performed by using the correlation between the current video frame and a video frame that is encoded before it but played after it.
  • the forward inter prediction corresponds to the forward reference frame list L0
  • the backward inter prediction corresponds to the backward reference frame list L1.
  • the number of reference frames included in the two reference frame lists may be the same or different.
  • Motion Compensation is a process of predicting a current image block using a reference image block.
  • the video sequence includes a series of pictures that are divided into at least one slice, each of which is in turn divided into image blocks.
• Video encoding/decoding is performed in units of image blocks; encoding/decoding processing can be performed line by line, from left to right and from top to bottom, starting from the upper left corner of the image.
  • the image block may be a macro block (MB) in the video codec standard H.264, or may be a coding unit (CU) in the High Efficiency Video Coding (HEVC) standard. This embodiment of the present application does not specifically limit this.
  • an image block in which encoding/decoding processing is being performed is referred to as a current image block
  • an image in which the current image block is located is referred to as a current frame.
  • the current frame may be a unidirectional prediction frame (P frame) or a bidirectional prediction frame (B frame).
• If the current frame is a unidirectional prediction frame, it has one reference frame list; if the current frame is a bidirectional prediction frame, it has two reference frame lists, generally referred to as L0 and L1, respectively.
  • Each reference frame list contains at least one reconstructed frame that serves as a reference frame for the current frame.
  • the reference frame is used to provide reference pixels for inter prediction of the current frame.
  • the image blocks adjacent to the current image block may have completed the encoding/decoding process, resulting in reconstructed images, which are referred to as reconstructed image blocks.
  • the information of reconstructing the coding mode of the image block, reconstructing pixels, and the like is available.
  • a frame that has completed encoding/decoding processing before encoding/decoding of the current frame is referred to as a reconstructed frame.
  • Motion Vector is an important parameter in the inter prediction process, which represents the spatial displacement of the encoded image block relative to the current image block.
  • a Motion Estimation (ME) method such as a motion search, can be used to acquire a motion vector.
  • the encoding end transmits the motion vector of the current image block in the code stream, so that the decoding end reproduces the predicted pixel of the current image block, thereby obtaining the reconstructed block.
• In order to make the decoding end and the encoding end use the same reference image block, the encoding end needs to send the motion information of each image block to the decoding end in the code stream. If the encoding end directly encoded the motion vector of each image block, a large amount of transmission resources would be consumed. Since the motion vectors of spatially adjacent image blocks are strongly correlated, the motion vector of the current image block can be predicted from the motion vectors of adjacent encoded image blocks; the predicted motion vector is called the MVP (motion vector predictor), and the difference between the motion vector of the current image block and the MVP is called the MVD (motion vector difference).
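The MVP/MVD relationship amounts to a simple identity at the decoder: the motion vector is the predictor plus the transmitted difference. A minimal sketch:

```python
def reconstruct_mv(mvp, mvd):
    """MV = MVP + MVD: the decoder adds the transmitted motion vector
    difference to the predicted motion vector to recover the motion
    vector of the current image block (vectors as (x, y) tuples)."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```

Because the MVD is usually much smaller in magnitude than the motion vector itself, it costs fewer bits to encode.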
  • the video codec standard H.264 uses multi-reference frame prediction in the motion estimation process to improve the prediction accuracy, that is, to establish a buffer for storing multiple reconstructed frames, and to find the optimal reference image block in all reconstructed frames in the buffer. Motion compensation is performed to better remove the redundancy of the time domain.
  • the interframe prediction of the video codec standard H.264 uses two buffers, reference frame list 0 (reference list 0) and reference frame list 1 (reference list 1).
  • the reference frame in which the best reference block in each list is located is indicated by an index value, namely ref_idx_l0 and ref_idx_l1.
  • the motion information of the reference picture block includes an index value (ref_idx_l0 or ref_idx_l1) of the reference frame, an MVP flag, and an MVD.
  • the decoding end can find the correct reference image block in the selected reference frame according to the index value of the reference frame, the MVP flag and the MVD.
• Inter prediction modes often used in the HEVC standard include the Advanced Motion Vector Prediction (AMVP) mode, the merge mode, and the non-translational motion model prediction mode.
• In the AMVP mode, the encoding end constructs a candidate motion vector list by using the motion information of spatially or temporally adjacent encoded image blocks of the current image block, and determines an optimal motion vector from the candidate motion vector list according to the rate-distortion cost; the optimal motion vector serves as the MVP of the current image block.
  • the encoding end performs motion search in the neighborhood centered on the MVP to obtain the motion vector of the current image block.
  • the encoding end passes the index value of the MVP in the candidate motion vector list (ie, the above MVP flag), the index value of the reference frame, and the MVD to the decoding end.
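At the decoding end, AMVP reconstruction reduces to selecting the MVP by its index and adding the MVD; a sketch (names are illustrative, and the candidate list is taken as already constructed):

```python
def amvp_decode(candidates, mvp_flag, mvd):
    """AMVP at the decoder: the MVP flag indexes the candidate motion
    vector list; the selected MVP plus the transmitted MVD gives the
    motion vector of the current image block."""
    mvp = candidates[mvp_flag]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```

The reference frame index, transmitted alongside, then identifies which reference frame the resulting motion vector points into.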
• In the merge mode, the encoding end constructs a candidate motion information list by using the motion information of spatially or temporally adjacent encoded image blocks of the current image block, and determines the optimal motion information from the candidate motion information list according to the rate-distortion cost as the motion information of the current image block. The encoding end passes the index value of the position of the optimal motion information in the candidate motion information list to the decoding end.
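In contrast to AMVP, the merge mode transmits only that index; the decoder reuses the selected candidate's complete motion information. A sketch (dictionary layout of a candidate is an assumption for illustration):

```python
def merge_decode(candidate_motion_list, merge_index):
    """Merge mode at the decoder: the signalled index selects complete
    motion information (motion vector plus reference index) from the
    candidate list; no MVD or reference index is transmitted."""
    return candidate_motion_list[merge_index]
```

This is why merge mode is cheaper in bits than AMVP: the motion vector and reference index are inherited rather than signalled.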
• In the non-translational motion model prediction mode, the encoding end and the decoding end use the same motion model to derive the motion information of each sub-block in the current image block, and perform motion compensation based on the motion information of all sub-blocks to obtain the predicted image block, thereby improving prediction efficiency.
  • the motion model commonly used at the codec side is a 4-parameter affine model, a 6-parameter affine transformation model or an 8-parameter bilinear model.
  • the 4-parameter affine transformation model can be represented by the motion vector of two pixel points and its coordinates relative to the upper left vertex pixel of the current image block.
  • a pixel point for representing a motion model parameter is referred to as a control point.
• If the motion vectors of the upper left vertex and the upper right vertex of the current image block are (vx0, vy0) and (vx1, vy1), respectively,
  • the motion information of each sub-block in the current image block is obtained according to the following formula (1).
  • (x, y) in the following formula (1) is the coordinate of the sub-block relative to the upper left vertex pixel of the current image block
  • (vx, vy) is the motion vector of the sub-block
  • W is the width of the current image block.
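Formula (1) is not reproduced in this excerpt; the standard 4-parameter affine derivation from the two control-point motion vectors has the form below, consistent with the symbols defined above (taking the usual formulation is an assumption):

```python
def affine_4param_mv(x, y, cp0, cp1, w):
    """Standard 4-parameter affine model: derive the motion vector of a
    sub-block at (x, y), relative to the upper left vertex pixel, from
    the upper-left control point cp0 = (vx0, vy0) and upper-right control
    point cp1 = (vx1, vy1); w is the width of the current image block."""
    vx0, vy0 = cp0
    vx1, vy1 = cp1
    vx = (vx1 - vx0) / w * x - (vy1 - vy0) / w * y + vx0
    vy = (vy1 - vy0) / w * x + (vx1 - vx0) / w * y + vy0
    return (vx, vy)
```

A sanity check on the form: evaluating at the two control points (0, 0) and (W, 0) reproduces cp0 and cp1 exactly.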
• The 6-parameter affine transformation model can be represented by the motion vectors of three pixel points and their coordinates relative to the upper left vertex pixel of the current image block. If the upper left vertex (0, 0), the upper right vertex (W, 0), and the lower left vertex (0, H) of the current image block are control points, and the motion vectors of the upper left vertex, the upper right vertex, and the lower left vertex are (vx0, vy0), (vx1, vy1), and (vx2, vy2), respectively, the motion information of each sub-block in the current image block is obtained according to the following formula (2).
  • (x, y) in the following formula (2) is the coordinate of the sub-block relative to the upper left vertex pixel of the current image block
  • (vx, vy) is the motion vector of the sub-block
• W and H are the width and the height of the current image block, respectively.
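Formula (2) is likewise not reproduced here; the standard 6-parameter affine derivation from the three control-point motion vectors has the form below, consistent with the symbols above (treating it as the usual formulation is an assumption):

```python
def affine_6param_mv(x, y, cp0, cp1, cp2, w, h):
    """Standard 6-parameter affine model: derive the motion vector of a
    sub-block at (x, y) from three control-point motion vectors at the
    upper left (cp0), upper right (cp1), and lower left (cp2) vertices;
    w and h are the width and height of the current image block."""
    vx0, vy0 = cp0
    vx1, vy1 = cp1
    vx2, vy2 = cp2
    vx = (vx1 - vx0) / w * x + (vx2 - vx0) / h * y + vx0
    vy = (vy1 - vy0) / w * x + (vy2 - vy0) / h * y + vy0
    return (vx, vy)
```

As before, the form can be checked by evaluating at the three control points (0, 0), (W, 0), and (0, H), which reproduce cp0, cp1, and cp2.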
• The 8-parameter bilinear model can be represented by the motion vectors of four pixel points and their coordinates relative to the upper left vertex pixel of the current image block. If the upper left vertex (0, 0), the upper right vertex (W, 0), the lower left vertex (0, H), and the lower right vertex (W, H) of the current image block are control points, and the motion vectors of the upper left vertex, the upper right vertex, the lower left vertex, and the lower right vertex are (vx0, vy0), (vx1, vy1), (vx2, vy2), and (vx3, vy3), respectively, the motion information of each sub-block in the current image block is obtained according to the following formula (3).
  • (x, y) in the following formula (3) is the coordinate of the sub-block relative to the upper left vertex pixel of the current image block
  • (vx, vy) is the motion vector of the sub-block
• W and H are the width and the height of the current image block, respectively.
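Formula (3) is not reproduced in this excerpt either; sketched as standard bilinear interpolation of the four corner motion vectors (this standard form is an assumption):

```python
def bilinear_8param_mv(x, y, cp0, cp1, cp2, cp3, w, h):
    """8-parameter bilinear model sketched as bilinear interpolation of
    the four corner motion vectors: upper left cp0, upper right cp1,
    lower left cp2, lower right cp3; w and h are the block width/height."""
    a, b = x / w, y / h                      # normalized coordinates
    def lerp2(c0, c1, c2, c3):
        return ((1 - a) * (1 - b) * c0 + a * (1 - b) * c1
                + (1 - a) * b * c2 + a * b * c3)
    vx = lerp2(cp0[0], cp1[0], cp2[0], cp3[0])
    vy = lerp2(cp0[1], cp1[1], cp2[1], cp3[1])
    return (vx, vy)
```

Evaluating at the four corners (0, 0), (W, 0), (0, H), and (W, H) reproduces the four control-point motion vectors, which is the defining property of the bilinear form.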
• In bidirectional inter prediction, the encoding end needs to send the motion information of each image block in each direction to the decoding end. The motion information occupies a large amount of transmission resources, which reduces the effective utilization of transmission resources, reduces the transmission rate, and reduces coding and decoding compression efficiency.
• To address this, the present application provides a bidirectional inter prediction method: the encoding end sends the motion information of the current image block in a first direction to the decoding end; after receiving the motion information of the current image block in the first direction, the decoding end calculates the motion information of the current image block in a second direction according to the motion information in the first direction, so that the predicted pixels of the current image block can be calculated according to the motion information of the current image block in the first direction and the motion information of the current image block in the second direction.
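The second-direction derivation at the heart of this method can be illustrated with a temporal-distance (POC) scaling sketch; the exact derivation formula belongs to the claims and is not reproduced in this excerpt, so the linear scaling below is an assumption for illustration only:

```python
def derive_second_direction_mv(mv_lx, poc_cur, poc_listx, poc_listy):
    """Illustrative derivation of the second-direction motion vector by
    scaling the first-direction motion vector with the ratio of temporal
    distances (POC differences) to the two reference frames. The linear
    scaling here is an assumption, not the patent's normative formula."""
    scale = (poc_cur - poc_listy) / (poc_cur - poc_listx)
    return (mv_lx[0] * scale, mv_lx[1] * scale)
```

For example, with the current frame at POC 8, a forward reference at POC 4, and a backward reference at POC 12, the second-direction vector is the first-direction vector mirrored, since the two temporal distances are equal and opposite.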
  • the bidirectional inter prediction method provided by the present application may be performed by a bidirectional inter prediction apparatus, a video codec apparatus, a video codec, and other devices having a video codec function.
  • the bidirectional inter prediction method provided by the present application is applicable to a video codec system.
• The video encoder 100 and the video decoder 200 of the video codec system are used to calculate the motion information of the current image block according to the bidirectional inter prediction method examples proposed in the present application. Specifically, the motion information of the current image block in the second direction is calculated according to the motion information of the current image block in the first direction, so that the predicted pixel of the current image block is determined according to the motion information of the current image block in the first direction and the motion information of the current image block in the second direction. In this way, only the motion information of the current image block in the first direction needs to be transmitted between the video encoder 100 and the video decoder 200, which effectively improves the utilization of transmission resources and improves codec compression efficiency.
  • Figure 1 shows the structure of a video codec system.
  • the video codec system 1 includes a source device 10 and a destination device 20.
• The source device 10 generates encoded video data; the source device 10 may also be referred to as a video encoding device or a video encoding apparatus. The destination device 20 may decode the encoded video data generated by the source device 10; the destination device 20 may also be referred to as a video decoding device or a video decoding apparatus.
  • Source device 10 and/or destination device 20 may include at least one processor and a memory coupled to the at least one processor.
  • the memory may include, but is not limited to, a read-only memory (ROM), a random access memory (RAM), and an electrically erasable programmable read-only memory (EEPROM).
  • flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures accessible by a computer; this is not specifically limited herein.
  • Source device 10 and destination device 20 may comprise various devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, on-board computers, or the like.
  • Link 30 may include one or more media and/or devices capable of moving encoded video data from source device 10 to destination device 20.
  • link 30 can include one or more communication media that enable source device 10 to transmit encoded video data directly to destination device 20 in real time.
  • source device 10 may modulate the encoded video data in accordance with a communication standard (eg, a wireless communication protocol) and may transmit the modulated video data to destination device 20.
  • the one or more communication media may include wireless and/or wired communication media, such as a radio frequency (RF) spectrum, one or more physical transmission lines.
  • the one or more communication media described above may form part of a packet-based network (e.g., a local area network, a wide area network, or a global network such as the Internet).
  • the one or more communication media described above may include routers, switches, base stations, or other devices that enable communication from source device 10 to destination device 20.
  • the encoded video data can be output from output interface 140 to storage device 40.
  • the encoded video data can be accessed from storage device 40 via input interface 240.
  • the storage device 40 can include a variety of locally accessible data storage media, such as a Blu-ray disc, a high-density digital video disc (DVD), a compact disc read-only memory (CD-ROM), flash memory, or other suitable digital storage medium for storing encoded video data.
  • storage device 40 may correspond to a file server or another intermediate storage device that stores encoded video data generated by source device 10.
  • destination device 20 may retrieve its stored video data from storage device 40 via streaming or download.
  • the file server can be any type of server capable of storing encoded video data and transmitting the encoded video data to the destination device 20.
  • the file server may include a World Wide Web (Web) server (for example, for a website), a File Transfer Protocol (FTP) server, a Network Attached Storage (NAS) device, or a local disk drive.
  • the destination device 20 can access the encoded video data over any standard data connection (e.g., an internet connection).
  • the instance type of the data connection includes a wireless channel, a wired connection (e.g., a cable modem, etc.), or a combination of both, suitable for accessing the encoded video data stored on the file server.
  • the transmission of the encoded video data from the file server may be streaming, downloading, or a combination of both.
  • the bidirectional inter prediction method of the present application is not limited to a wireless application scenario.
  • the bidirectional inter prediction method of the present application can be applied to video codec supporting multiple multimedia applications such as the following applications: aerial television broadcasting, cable television transmission, Satellite television transmission, streaming video transmission (e.g., via the Internet), encoding of video data stored on a data storage medium, decoding of video data stored on a data storage medium, or other application.
  • video codec system 1 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
  • the video codec system 1 shown in FIG. 1 is only an example of a video codec system, and is not a limitation on the video codec system in this application.
  • the bidirectional inter prediction method provided by the present application is also applicable to a scenario where there is no data communication between the encoding device and the decoding device.
  • the video data to be encoded or the encoded video data may be retrieved from a local memory, streamed over a network, or the like.
  • the video encoding apparatus may encode the encoded video data and store the encoded video data to a memory, and the video decoding apparatus may also acquire the encoded video data from the memory and decode the encoded video data.
  • source device 10 includes a video source 101, a video encoder 102, and an output interface 103.
  • output interface 103 can include a modulator/demodulator (modem) and/or a transmitter.
  • Video source 101 may include a video capture device (e.g., a video camera), a video archive containing previously captured video data, a video input interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such video data sources.
  • Video encoder 102 may encode video data from video source 101.
  • source device 10 transmits the encoded video data directly to destination device 20 via output interface 103.
  • the encoded video data may also be stored on storage device 40 for later access by destination device 20 for decoding and/or playback.
  • destination device 20 includes display device 201, video decoder 202, and input interface 203.
  • input interface 203 includes a receiver and/or a modem.
  • Input interface 203 can receive encoded video data via link 30 and/or from storage device 40.
  • Display device 201 can be integrated with destination device 20 or can be external to destination device 20. Generally, the display device 201 displays the decoded video data.
  • Display device 201 can include a variety of display devices, such as liquid crystal displays, plasma displays, organic light emitting diode displays, or other types of display devices.
  • video encoder 102 and video decoder 202 may each be integrated with an audio encoder and decoder, and may include appropriate multiplexer-demultiplexer units or other hardware and software to handle encoding of both audio and video in a common data stream or in separate data streams.
  • the video encoder 102 and the video decoder 202 may include at least one microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), discrete logic, hardware, or any combination thereof. If the bidirectional inter prediction method provided by the present application is implemented in software, the instructions for the software may be stored in a suitable non-transitory computer-readable storage medium, and at least one processor may be used to execute the instructions to implement the present application. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered at least one processor.
  • Video encoder 102 may be included in an encoder, and video decoder 202 may be included in a decoder; the encoder or decoder may be part of a combined encoder/decoder (codec) in a respective device.
  • the video encoder 102 and the video decoder 202 in this application may operate according to a video compression standard (for example, HEVC), and may also operate according to other industry standards, which is not specifically limited in the present application.
  • the video encoder 102 is configured to perform bidirectional motion estimation on the current image block, determine motion information of the current image block in the first direction, and calculate motion information of the current image block in the second direction according to the motion information of the current image block in the first direction, such that video encoder 102 determines a predicted image block of the current image block based on the motion information of the current image block in the first direction and the motion information of the current image block in the second direction.
  • video encoder 102 performs operations such as transform and quantization on the residual between the current image block and its predicted image block, generates a code stream, and transmits the code stream to video decoder 202.
  • the code stream includes motion information of the current image block in the first direction and indication information for indicating the second motion information according to the first motion information.
  • the indication information can be represented by different identifiers. The method of indicating the indication information can refer to the subsequent description.
  • the method for the video encoder 102 to calculate the motion information of the current image block in the second direction according to the motion information of the current image block in the first direction may be: the video encoder 102 determines the motion vector of the current image block in the second direction according to the motion vector of the current image block in the first direction; or the video encoder 102 determines the motion vector difference of the current image block in the second direction according to the motion vector difference of the current image block in the first direction, and determines the motion vector of the current image block in the second direction according to the motion vector difference of the current image block in the second direction and the predicted motion vector of the current image block in the second direction.
  • the video decoder 202 is configured to: acquire a code stream, and parse the code stream to obtain indication information for indicating that the second motion information is determined according to the first motion information (S400), that is, to determine from the motion information of which direction the motion information of the other direction is calculated, where the first motion information includes motion information of the current image block in the first direction, the second motion information includes motion information of the current image block in the second direction, and the first direction is different from the second direction; acquire the first motion information (S401); and determine the second motion information according to the acquired first motion information (S402), so that the video decoder 202 can determine the predicted pixels of the current image block based on the first motion information and the second motion information (S403).
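  • Step S403 combines the two unidirectional predictions into the final predicted pixels. A common way to do this (assumed here; this passage does not fix the combination rule) is a rounded average of the two motion-compensated predictions:

```python
def bi_predict(pred_l0, pred_l1):
    """Combine two unidirectional predictions by rounded averaging (sketch)."""
    return [(a + b + 1) >> 1 for a, b in zip(pred_l0, pred_l1)]

print(bi_predict([100, 102, 99], [104, 106, 101]))  # [102, 104, 100]
```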
  • the method for the video decoder 202 to calculate the motion information of the current image block in the second direction according to the motion information of the current image block in the first direction may be: the video decoder 202 determines the motion vector of the current image block in the second direction according to the motion vector of the current image block in the first direction; or the video decoder 202 determines the motion vector difference of the current image block in the second direction according to the motion vector difference of the current image block in the first direction, and determines the motion vector of the current image block in the second direction according to the motion vector difference of the current image block in the second direction and the predicted motion vector of the current image block in the second direction.
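  • The second alternative above (deriving the second-direction motion vector via a motion vector difference) can be sketched as follows; taking the second-direction MVD as the sign-flipped first-direction MVD is one plausible derivation consistent with mirrored references, and is an assumption here:

```python
def derive_mv_via_mvd(mvd_l0, mvp_l1):
    """Second-direction MV = second-direction predictor + derived MVD,
    where the derived MVD is the sign-flipped first-direction MVD (assumed)."""
    mvd_l1 = (-mvd_l0[0], -mvd_l0[1])
    return (mvp_l1[0] + mvd_l1[0], mvp_l1[1] + mvd_l1[1])

print(derive_mv_via_mvd(mvd_l0=(2, -1), mvp_l1=(5, 5)))  # (3, 6)
```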
  • FIG. 2 is a schematic structural diagram of a video encoder 102 in the embodiment of the present application.
  • video encoder 102 is operative to output video to post-processing entity 41.
  • Post-processing entity 41 represents an example of a video entity that can process encoded video data from video encoder 102, such as a media-aware network element (MANE) or a splicing/editing device.
  • post-processing entity 41 may be an instance of a network entity.
  • post-processing entity 41 and video encoder 102 may be parts of separate devices, while in other cases, the functionality described with respect to post-processing entity 41 may be performed by the same device that includes video encoder 102.
  • in some cases, post-processing entity 41 is an instance of storage device 40 of FIG. 1.
  • the video encoder 102 may be configured to calculate motion information of the current image block in the second direction according to the motion information of the current image block in the first direction, and further determine a predicted image block of the current image block according to the motion information of the current image block in the first direction and the motion information of the current image block in the second direction, thereby completing bidirectional inter-frame prediction coding.
  • video encoder 102 includes a transformer 301, a quantizer 302, an entropy encoder 303, a filter 306, a memory 307, a prediction processing unit 308, and a summer 312.
  • the prediction processing unit 308 includes an intra predictor 309 and an inter predictor 310.
  • video encoder 102 also includes an inverse quantizer 304, an inverse transformer 305, and a summer 311.
  • Filter 306 is intended to represent one or more loop filters, such as a deblocking filter, an adaptive loop filter, and a sample adaptive offset filter.
  • Memory 307 can store video data encoded by components of video encoder 102.
  • the video data stored in the memory 307 can be obtained from the video source 101.
  • Memory 307 can be a reference image memory that stores reference video data for encoding video data by video encoder 102 in an intra, inter coding mode.
  • the memory 307 can be a dynamic random access memory (DRAM) including synchronous DRAM (SDRAM), a magnetoresistive RAM (MRAM), a resistive RAM (RRAM), or another type of memory device.
  • Video encoder 102 receives the video data and stores the video data in a video data store.
  • the segmentation unit divides the video data into a plurality of image blocks, and the image blocks may be further divided into smaller blocks, such as image block segmentation based on a quadtree structure or a binary tree structure. This segmentation may also include segmentation into slices, tiles, or other larger cells.
  • Video encoder 102 generally illustrates components that encode image blocks within a video strip to be encoded.
  • the strip may be divided into a plurality of image blocks (and possibly into sets of image blocks called tiles).
  • the intra predictor 309 within the prediction processing unit 308 can perform intra-predictive coding of the current image block with respect to one or more adjacent image blocks in the same frame or strip as the current image block to remove spatial redundancy.
  • Inter predictor 310 within prediction processing unit 308 can perform inter-predictive encoding of the current image block relative to one or more predicted image blocks in one or more reference images to remove temporal redundancy.
  • Prediction processing unit 308 can provide the resulting intra-coded or inter-coded predicted image block to summer 312 to generate a residual block, and to summer 311 to reconstruct the coded block for use as part of a reference image.
  • the video encoder 102 forms a residual image block by subtracting the predicted image block from the current image block to be encoded.
  • Summer 312 represents one or more components that perform this subtraction.
  • the residual video data in the residual block may be included in one or more transform units (TUs) and applied to the transformer 301.
  • the transformer 301 transforms the residual video data into residual transform coefficients using a transform such as a discrete cosine transform (DCT) or a conceptually similar transform.
  • the transformer 301 can convert the residual video data from a pixel value domain to a transform domain, such as a frequency domain.
  • the transformer 301 can send the resulting transform coefficients to the quantizer 302.
  • Quantizer 302 quantizes the transform coefficients to further reduce the bit rate.
  • quantizer 302 can then perform a scan of the matrix containing the quantized transform coefficients.
  • the entropy encoder 303 can perform a scan.
  • After quantization, the entropy encoder 303 entropy encodes the quantized transform coefficients. For example, the entropy encoder 303 may perform context-adaptive variable-length coding (CAVLC), context-based adaptive binary arithmetic coding (CABAC), or another entropy coding method or technique. After entropy encoding by entropy encoder 303, the encoded code stream may be sent to video decoder 202, or archived for later transmission or retrieval by video decoder 202. Entropy encoder 303 may also entropy encode syntax elements of the current image block to be encoded.
  • the inverse quantizer 304 and the inverse transformer 305 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain, for example, for later use as a reference block of the reference image.
  • the summer 311 adds the reconstructed residual block to the predicted image block produced by the inter predictor 310 or the intra predictor 309 to produce a reconstructed image block.
  • processing a reference image block of one image block may obtain a predicted image block of the image block.
  • video encoder 102 may directly quantize the residual signal without processing by transformer 301, and accordingly without processing by inverse transformer 305; or, for some image blocks or image frames, video encoder 102 does not generate residual data, and accordingly does not need processing by transformer 301, quantizer 302, inverse quantizer 304, and inverse transformer 305; alternatively, video encoder 102 may store the reconstructed image block directly as a reference block without processing by filter 306; alternatively, quantizer 302 and inverse quantizer 304 in video encoder 102 may be combined.
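  • The default encoder path described above (residual, quantization, inverse quantization, reconstruction) can be illustrated with a toy scalar quantizer; real encoders apply a transform such as a DCT before quantization, which is omitted here for brevity:

```python
def encode_and_reconstruct(cur, pred, qstep):
    """Toy pipeline: residual -> scalar quantization -> dequantize -> reconstruct."""
    residual = [c - p for c, p in zip(cur, pred)]                # summer 312
    levels = [round(r / qstep) for r in residual]                # quantizer 302 (toy)
    recon = [p + l * qstep for p, l in zip(pred, levels)]        # summer 311
    return levels, recon

levels, recon = encode_and_reconstruct([100, 108], [98, 100], qstep=4)
# residual = [2, 8]; levels = [0, 2]; recon = [98, 108]
```

  • Note that reconstruction happens from the quantized levels, so the encoder's reference blocks match what the decoder will be able to rebuild.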
  • FIG. 3 is a schematic structural diagram of a video decoder 202 in the embodiment of the present application.
  • video decoder 202 includes an entropy decoder 401, an inverse quantizer 402, an inverse transformer 403, a filter 404, a memory 405, a prediction processing unit 406, and a summer 409.
  • Prediction processing unit 406 includes an intra predictor 407 and an inter predictor 408.
  • video decoder 202 may perform a decoding process that is substantially reciprocal to the encoding process described with respect to video encoder 102 in FIG. 2.
  • Video decoder 202 receives the code stream from video encoder 102 during the decoding process.
  • Video decoder 202 may receive video data from network entity 42 and, optionally, may store the video data in a video data store (not shown).
  • the video data store may store video data to be decoded by components of video decoder 202, such as an encoded code stream.
  • the video data stored in the video data store can be obtained, for example, from storage device 40, from a local video source such as a camera, via wired or wireless network communication of video data, or by accessing a physical data storage medium.
  • although the video data memory is not illustrated in FIG. 3, the video data memory and the memory 405 may be the same memory or may be separately provided.
  • the video data memory and memory 405 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM) including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices.
  • the video data store can be integrated on-chip with other components of video decoder 200, or off-chip with respect to those components.
  • Network entity 42 may be, for example, a server, a MANE, a video editor/splicer, or other such device for implementing one or more of the techniques described above.
  • Network entity 42 may or may not include a video encoder, such as video encoder 102.
  • network entity 42 may implement portions of the techniques described in this application.
  • network entity 42 and video decoder 202 may be parts of separate devices, while in other cases, the functionality described with respect to network entity 42 may be performed by the same device that includes video decoder 202.
  • in some cases, network entity 42 may be an instance of storage device 40 of FIG. 1.
  • Entropy decoder 401 of video decoder 202 entropy decodes the code stream to produce quantized coefficients and some syntax elements. Entropy decoder 401 forwards the syntax elements to prediction processing unit 406.
  • Video decoder 202 may receive syntax elements at the video stripe level and/or image block level.
  • the syntax element herein may include indication information related to a current image block, the indication information being used to indicate that the second motion information is determined according to the first motion information.
  • video encoder 102 may signal a particular syntax element to indicate whether the second motion information is determined based on the first motion information.
  • the inverse quantizer 402 inverse quantizes, ie dequantizes, the quantized transform coefficients provided in the code stream and decoded by the entropy decoder 401.
  • the inverse quantization process may include using the quantization parameters calculated by video encoder 102 for each of the video slices to determine the degree of quantization that should be applied and likewise determine the degree of inverse quantization that should be applied.
  • the inverse transformer 403 applies an inverse transform to transform coefficients, such as inverse DCT, inverse integer transform, or a conceptually similar inverse transform process, to generate residual blocks in the pixel domain.
  • the video decoder 202 sums the residual block from the inverse transformer 403 with the corresponding predicted image block generated by the prediction processing unit 406 to obtain a reconstructed block, that is, a decoded image block. Summer 409 (also known as reconstructor 409) represents the component that performs this summation operation.
  • Filters can be used to smooth pixel transitions or otherwise improve video quality when needed.
  • Filter 404 can be one or more loop filters, such as a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) filter.
  • other structural variations of video decoder 202 may be used to decode the code stream. For example, for some image blocks or image frames, the entropy decoder 401 of video decoder 202 does not decode quantized coefficients, and accordingly the block does not need to be processed by inverse quantizer 402 and inverse transformer 403.
  • inverse quantizer 402 and inverse transformer 403 in video decoder 202 can be combined.
  • FIG. 4 is a schematic flowchart diagram of a bidirectional inter-frame prediction method according to an embodiment of the present application. The method illustrated in Figure 4 is performed by a bi-directional inter prediction device.
  • the bi-directional inter prediction device may be the video decoder 202 of FIG. 1.
  • FIG. 4 illustrates a video decoder 202 as an example of a bidirectional inter prediction apparatus.
  • the bidirectional inter prediction method in the embodiment of the present application may include the following steps:
  • the video decoder 202 parses the obtained code stream and obtains the indication information.
  • the video decoder 202 parses the code stream and determines an inter prediction mode for inter prediction of the current image block in the current frame according to the value of the syntax element in the code stream.
  • the video decoder 202 acquires the indication information.
  • the video decoder 202 may receive the encoded code stream sent by the video encoder 102, or may obtain the encoded code stream from the storage device 40.
  • the video decoder 202 in the embodiment of the present application determines an inter prediction mode used for inter prediction of the current image block in the current frame according to the value of the syntax element inter_pred_idc.
  • inter prediction includes unidirectional inter prediction and bidirectional inter prediction.
  • the video decoder 202 determines that the inter prediction mode for inter prediction of the current picture block in the current frame is forward inter prediction.
  • the video decoder 202 determines that the inter prediction mode for inter prediction of the current picture block in the current frame is backward inter prediction.
  • the video decoder 202 determines that the inter prediction mode for inter prediction of the current picture block in the current frame is bidirectional inter prediction.
  • the video decoder 202 acquires indication information for indicating that the second motion information is determined according to the first motion information.
  • the first motion information is motion information of the current image block in the first direction
  • the second motion information is motion information of the current image block in the second direction, where the first direction is different from the second direction.
  • the image block involved in the present application may be a basic unit for performing video coding or video decoding, for example, a coding unit (CU), or may be a basic unit for performing a prediction operation, such as a prediction unit (PU).
  • the embodiment of the present application does not specifically limit this.
  • the current image block in the embodiment of the present application includes at least one sub-block.
  • the first motion information includes motion information of each of the at least one sub-block of the current image block in the first direction, and the second motion information includes motion information of each of the at least one sub-block of the current image block in the second direction. The indication information may be used to indicate that the motion information of a sub-block in the second direction is determined according to the motion information of that sub-block in the first direction.
  • Video decoder 202 can obtain the indication information in a variety of ways.
  • video decoder 202 parses the first identifier.
  • the video decoder 202 determines to parse the first motion information, and determines the second motion information according to the first motion information, that is, the video decoder 202 acquires the indication information.
  • the video decoder 202 parses the code stream to obtain the fifth identifier.
  • the video decoder 202 determines to parse the second motion information, and calculates the first motion information according to the second motion information.
  • the video decoder 202 acquires the first motion information and the second motion information.
  • the first preset value and the fifth preset value may be the same or different, and the embodiment of the present application does not specifically limit this.
  • the first identifier is mv_derived_flag_l0
  • the fifth identifier is mv_derived_flag_l1.
  • the first preset value and the fifth preset value are both 1
  • the eighth preset value and the ninth preset value are both 0.
  • Video decoder 202 parses mv_derived_flag_l0 first. When the value of mv_derived_flag_l0 is 1, the video decoder 202 parses the first motion information, and determines the second motion information according to the first motion information. When the value of mv_derived_flag_l0 is 0, the video decoder 202 parses mv_derived_flag_l1.
  • the video decoder 202 parses the second motion information, and calculates the first motion information based on the second motion information.
  • the video decoder 202 parses the first motion information and the second motion information.
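  • The parsing order for mv_derived_flag_l0 and mv_derived_flag_l1 described above can be sketched as follows; the dictionary-based stand-in for already-decoded syntax elements is a simplification for illustration:

```python
def select_parse_mode(flags):
    """Decide which motion information to parse, following the described order.
    `flags` stands in for already-decoded syntax elements (simplification)."""
    if flags["mv_derived_flag_l0"] == 1:          # first preset value
        return "parse L0, derive L1"
    if flags.get("mv_derived_flag_l1") == 1:      # fifth preset value
        return "parse L1, derive L0"
    return "parse both L0 and L1"

print(select_parse_mode({"mv_derived_flag_l0": 0, "mv_derived_flag_l1": 1}))
# parse L1, derive L0
```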
  • video decoder 202 parses the second identifier.
  • the video decoder 202 determines to calculate the motion information of the current image block by using the motion information derivation algorithm.
  • Video decoder 202 then parses the third identifier.
  • the video decoder 202 determines to parse the first motion information, and determines the second motion information according to the first motion information, that is, the video decoder 202 acquires the indication information.
  • the video decoder 202 determines to parse the second motion information, and calculates the first motion information according to the second motion information.
  • the second identifier is derived_mv_flag
  • the third identifier is derived_mv_direction
  • the third preset value is 1
  • the sixth preset value is 0.
  • Video decoder 202 parses derived_mv_flag first. When the value of derived_mv_flag is 1, the video decoder 202 determines to calculate the motion information of the current image block using the motion information derivation algorithm. When the value of derived_mv_flag is 0, the video decoder 202 parses the first motion information and the second motion information. When the value of derived_mv_direction is 1, the video decoder 202 parses the first motion information and determines the second motion information based on the first motion information. When the value of derived_mv_direction is 0, the video decoder 202 determines to parse the second motion information, and calculates the first motion information according to the second motion information.
  • video decoder 202 parses the second identifier.
  • the video decoder 202 determines to calculate the motion information of the current image block by using the motion information derivation algorithm. Then, the video decoder 202 determines to parse the first motion information according to the preset derivation direction, and determines the second motion information according to the first motion information, that is, the video decoder 202 acquires the indication information. That is to say, in the present implementation manner, "determining the second motion information according to the first motion information" is preset.
  • when the value of the second identifier is the seventh preset value, the first motion information and the second motion information are parsed.
  • the second identifier is derived_mv_flag
  • the second preset value is 1
  • the seventh preset value is 0.
  • Video decoder 202 parses derived_mv_flag. When the value of derived_mv_flag is 1, the video decoder 202 determines to calculate the motion information of the current image block using the motion information derivation algorithm. Further, the video decoder 202 determines to parse the first motion information and determines the second motion information based on the first motion information. When the value of derived_mv_flag is 0, the video decoder 202 parses the first motion information and the second motion information.
  • video decoder 202 parses a fourth identifier (eg, mv_derived_flag_l0).
  • the video decoder 202 determines to calculate the motion information of the current image block by using the motion information derivation algorithm, and calculates the variable derived_ref_num according to the first reference frame list and the second reference frame list.
  • This variable represents the number of reference frame pairs, each formed by a first reference frame and a second reference frame, that satisfy the mirror/linear relationship.
  • the video decoder 202 directly determines the index value of the reference frame.
  • the video decoder 202 determines to parse the first motion information according to the preset derivation direction, and determines the second motion information according to the first motion information, that is, the video decoder 202 acquires the indication information.
  • the first reference frame list is a reference frame list of the current image block in the first direction
  • the second reference frame list is a reference frame list of the current image block in the second direction
  • the first reference frame is a reference frame of the current image block in the first direction
  • the second reference frame is a reference frame of the current image block in the second direction.
  • the index values of the reference frames in the embodiment of the present application are the numbers of the reference frames in the corresponding reference frame list.
  • For example, assuming that the current frame has a sequence number of 4, the first reference frame list is {2, 0}, and the second reference frame list is {6, 7}, it is determined according to the foregoing condition B or condition C that
  • the reference frame with sequence number 2 in the first reference frame list and the reference frame with sequence number 6 in the second reference frame list can form a reference frame pair. Therefore, the index values of the first reference frame and the second reference frame are both 0.
  • Assuming that the sequence number of the current frame is 4, the first reference frame list is {2, 0}, and the second reference frame list is {6, 8}, it is determined according to the above condition B or condition C that the reference frame with sequence number 2 in the first reference frame list
  • and the reference frame with sequence number 6 in the second reference frame list can form a reference frame pair, and the reference frame with sequence number 0 in the first reference frame list and the reference frame with sequence number 8 in the second reference frame list can also form a reference frame pair. In this case,
  • video decoder 202 needs to parse the index value of the reference frame.
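The counting of mirrored reference frame pairs behind derived_ref_num can be illustrated with plain integer sequence numbers (POCs). This is a simplification: a real decoder operates on reference picture list structures, and the pairing condition shown is the mirror relation of condition B.

```python
def count_mirror_pairs(poc_cur, list0, list1):
    """Count reference frame pairs (one POC from each list) satisfying the
    mirror relation POC_Cur - POC_listX == POC_listY - POC_Cur; the count
    corresponds to the variable derived_ref_num described above."""
    pairs = [(x, y) for x in list0 for y in list1 if poc_cur - x == y - poc_cur]
    return len(pairs), pairs

# First example from the text: lists {2, 0} and {6, 7} around POC 4 give a
# single pair (2, 6), so the reference frame index values need not be parsed.
n_single, _ = count_mirror_pairs(4, [2, 0], [6, 7])
# Second example: lists {2, 0} and {6, 8} give two pairs, (2, 6) and (0, 8),
# so the index value of the reference frame must be parsed from the stream.
n_double, _ = count_mirror_pairs(4, [2, 0], [6, 8])
```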
  • the video decoder 202 may also determine whether the feature information of the current frame satisfies a preset condition. Thus, when the feature information of the current frame satisfies a preset condition, the video decoder 202 acquires the indication information.
  • the process of S401 may be specifically: when it is determined that the inter prediction mode is a bidirectional inter prediction mode, and the feature information of the current frame satisfies the first preset condition, the video decoder 202 acquires the indication information.
  • the feature information of the current frame includes at least one of a sequence number, a Temporal Level ID (TID), and a number of reference frames.
  • the code stream acquired by the video decoder 202 includes a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), a slice header or a slice segment header, and encoded image data. Thereafter, the video decoder 202 parses the code stream to acquire feature information of the current frame.
  • the above preset conditions include at least one of the following conditions:
  • Condition A: the current image block has at least two reference frames.
  • Condition B: the sequence number of the current frame, the sequence number of the first reference frame, and the sequence number of the second reference frame satisfy the formula POC_Cur - POC_listX = POC_listY - POC_Cur, where:
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame
  • the first reference frame is the reference frame of the current image block in the first direction
  • the second reference frame is the current image. The reference frame of the block in the second direction.
  • Condition C: the sequence number of the current frame, the sequence number of the first reference frame, and the sequence number of the second reference frame satisfy the formula (POC_Cur - POC_listX) * (POC_listY - POC_Cur) > 0, where POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame
  • the first reference frame is the reference frame of the current image block in the first direction
  • the second reference frame is the current image. The reference frame of the block in the second direction.
  • Condition D: the TID of the current frame is greater than or equal to a preset value.
  • the preset condition in the embodiment of the present application may be preset, or may be specified in a high-level syntax, such as SPS, PPS, slice header, or slice segment header.
  • the video decoder 202 obtains the sequence number of the reference frame from the first reference frame list and the second reference frame list, and determines the sequence number of the obtained reference frame and the sequence number of the current frame. Whether the above condition B or condition C is satisfied. In the case where the above condition B (or condition C) is satisfied, the indication information is acquired.
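The preset conditions A-D can be checked as in the sketch below. The function name and parameterization are illustrative; the formula used for condition C is the one quoted later in the text ((POC_Cur - POC_listX) * (POC_listY - POC_Cur) > 0), and the TID threshold of condition D is left as a free parameter because its value is encoder/profile dependent.

```python
def check_conditions(poc_cur, poc_list_x, poc_list_y, num_refs, tid, tid_min):
    """Evaluate the preset conditions A-D on the current frame's feature
    information. POC values are sequence numbers (picture order counts)."""
    return {
        # Condition A: the current image block has at least two reference frames.
        "A": num_refs >= 2,
        # Condition B: mirrored POC distances around the current frame.
        "B": (poc_cur - poc_list_x) == (poc_list_y - poc_cur),
        # Condition C: one reference frame before and one after the current frame.
        "C": (poc_cur - poc_list_x) * (poc_list_y - poc_cur) > 0,
        # Condition D: temporal level ID at or above a preset threshold.
        "D": tid >= tid_min,
    }

# Per the text, the indication information is acquired when at least one of
# the preset conditions holds (here: current POC 4, references at POC 2 and 6).
conds = check_conditions(poc_cur=4, poc_list_x=2, poc_list_y=6,
                         num_refs=2, tid=1, tid_min=0)
```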
  • Here, the method in which the video decoder 202 acquires the indication information is the same as the method described above.
  • Prediction_unit() is a syntax structure of a prediction image block, and describes a method of determining motion information of each sub-block in the current image block.
  • x0 and y0 respectively represent the horizontal coordinate offset and the vertical coordinate offset of the sub-block in the current image block with respect to the upper left vertex of the current image block
  • nPbW represents the width of the current image block
  • nPbH represents the height of the current image block.
  • when the value of mv_derived_flag_l0 is the first preset value or the value of mv_derived_flag_l1[x0][y0] is the fifth preset value, the motion information of the sub-block of the current image block is determined, that is, the reference frame index value ref_idx_l0[x0][y0], the motion vector predictor flag mvp_l0_flag[x0][y0], and the motion vector difference mvd_coding(x0, y0, 0) are determined.
  • a syntax table for the case where the video decoder 202 acquires the indication information by using the third implementation manner described above.
  • a syntax table for the case where the video decoder 202 acquires the indication information by using the fourth implementation manner described above.
  • the foregoing first identifier, the second identifier, the third identifier, and the fourth identifier may all be preset, or may be specified in a high-level syntax, such as an SPS, a PPS, a slice header, or a slice segment header.
  • SPS: Sequence Parameter Set
  • PPS: Picture Parameter Set
  • the video decoder 202 obtains the indication information when the inter prediction mode is determined to be the bidirectional inter prediction mode, and the feature information of the current frame satisfies the preset condition, thereby effectively improving the decoding rate of the video decoder 202 and reducing information redundancy.
  • the video decoder 202 acquires first motion information.
  • the video decoder 202 parses the code stream, and obtains an index value of the first reference frame, a first motion vector predictor flag, and a first motion vector difference, that is, acquires the first motion information.
  • the first motion vector predictor flag is used to indicate an index value of the first predicted motion vector in the first candidate motion vector list, where the first predicted motion vector is a predicted motion vector of the current image block in the first direction, and the first motion vector difference is a difference between the first predicted motion vector and the first motion vector, where the first motion vector is a motion vector of the current image block in the first direction.
  • the video decoder 202 determines the motion information of the sub-blocks of the current image block in the first direction.
  • the video decoder 202 determines the second motion information according to the first motion information.
  • the video decoder 202 determines the second motion information by: selecting the index value of the first reference frame from the first motion information, and determining the sequence number of the first reference frame according to the index value of the first reference frame and the first reference frame list; calculating the sequence number of the second reference frame according to the sequence number of the current frame and the sequence number of the first reference frame by using a preset formula (for example, POC_listY = 2 * POC_Cur - POC_listX); determining the index value of the second reference frame according to the sequence number of the second reference frame and the second reference frame list; and determining the second motion information according to the first motion information and the index value of the second reference frame.
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame.
  • the video decoder 202 determines that the index value ref_lY_idx of the second reference frame is zero.
  • the preset formula may also be (POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0. It should be noted that if the sequence numbers of multiple reference frames in the second reference frame list satisfy the formula, the video decoder 202 first selects the reference frame with the smallest abs((POC_listY-POC_Cur)-(POC_Cur-POC_listX)), and then selects the reference frame with the smallest abs(POC_listY-POC_Cur), to determine the index value of the second reference frame. Here, abs is the absolute value function.
  • the video decoder 202 determines that the index value ref_lY_idx of the second reference frame is 0.
  • the preset formula can also be POC_listX≠POC_listY. It should be noted that if the sequence numbers of multiple reference frames in the second reference frame list satisfy the formula, the video decoder 202 first selects the reference frame with the smallest abs((POC_listY-POC_Cur)-(POC_Cur-POC_listX)), and then selects the reference frame with the smallest abs(POC_listY-POC_Cur), to determine the index value of the second reference frame. Here, abs is the absolute value function.
  • the sequence number of the second reference frame is determined according to the formula POC_listX≠POC_listY.
  • the video decoder 202 determines that the index value ref_lY_idx of the second reference frame is zero.
  • the video decoder 202 calculates a first sequence number according to the sequence number of the current frame and the sequence number of the first reference frame by using the formula POC_listY0 = 2 * POC_Cur - POC_listX, where POC_listY0 represents the first sequence number; when the second reference frame list includes the first sequence number, the number of the reference frame characterized by the first sequence number in the second reference frame list is determined as the index value of the second reference frame;
  • when the second reference frame list does not include the first sequence number, a second sequence number is calculated according to the sequence number of the current frame and the sequence number of the first reference frame by using the formula (POC_Cur - POC_listX) * (POC_listY0' - POC_Cur) > 0, where POC_listY0' represents the second sequence number; when the second reference frame list includes the second sequence number, the number of the reference frame characterized by the second sequence number in the second reference frame list is determined as the index value of the second reference frame;
  • when the second reference frame list does not include the second sequence number, a third sequence number is calculated according to the sequence number of the current frame and the sequence number of the first reference frame by using the formula POC_listX ≠ POC_listY0", where POC_listY0" represents the third sequence number, and the number of the reference frame characterized by the third sequence number in the second reference frame list is determined as the index value of the second reference frame.
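The staged fallback for deriving the index value of the second reference frame might look like the sketch below. The mirror form POC_listY0 = 2 * POC_Cur - POC_listX is inferred from the worked example in the text (current frame 4, first reference frame 2, derived second reference frame 6), and the tie-breaking follows the abs(...) rules quoted above; the function name and plain-list representation are assumptions for illustration.

```python
def derive_second_ref_idx(poc_cur, poc_list_x, list1):
    """Derive the index value of the second reference frame in list1 (the
    second reference frame list as plain POCs) from the POC of the first
    reference frame, using the staged formulas described above."""
    # Stage 1: first sequence number, mirrored around the current frame:
    # POC_listY0 = 2 * POC_Cur - POC_listX.
    mirrored = 2 * poc_cur - poc_list_x
    if mirrored in list1:
        return list1.index(mirrored)
    # Stage 2: second sequence number, any POC on the opposite temporal side:
    # (POC_Cur - POC_listX) * (POC_listY0' - POC_Cur) > 0. Ties are broken by
    # the smallest abs((POC_listY - POC_Cur) - (POC_Cur - POC_listX)),
    # then the smallest abs(POC_listY - POC_Cur).
    cands = [p for p in list1 if (poc_cur - poc_list_x) * (p - poc_cur) > 0]
    if cands:
        best = min(cands, key=lambda p: (abs((p - poc_cur) - (poc_cur - poc_list_x)),
                                         abs(p - poc_cur)))
        return list1.index(best)
    # Stage 3: third sequence number, any POC different from the first frame's:
    # POC_listX != POC_listY0''.
    for i, p in enumerate(list1):
        if p != poc_list_x:
            return i
    return None
```

With the example from the text, derive_second_ref_idx(4, 2, [6, 7]) hits stage 1 and returns index 0 (the frame with sequence number 6).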
  • the video decoder 202 determines the second motion information by: parsing the code stream to acquire the index value of the second reference frame, and determining the second motion information according to the first motion information and the index value of the second reference frame.
  • the index value of the second reference frame may also be pre-defined, or may be specified in a parameter set such as an SPS, a PPS, a slice header, or a slice segment header, which is not specifically limited in this embodiment of the present application.
  • the video decoder 202 determines the second motion information according to the first motion information and the index value of the second reference frame.
  • the video decoder 202 may calculate all motion information of the current image block in the second direction, and may also calculate partial motion information of the current image block in the second direction.
  • the method for the video decoder 202 to determine the second motion information according to the first motion information and the index value of the second reference frame may be: acquiring an index value of the first reference frame in the first motion information, and Determining a sequence number of the first reference frame according to the index value of the first reference frame and the first reference frame list; acquiring an index value of the second reference frame, and determining, according to the index value of the second reference frame and the second reference frame list a sequence number of the second reference frame; determining, according to the first motion vector difference and the first motion vector predictor flag in the first motion information, the first motion vector (the motion vector of the current image block in the first direction);
  • and determining the second motion vector in the second motion information according to the following formula: mv_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mv_lX, where:
  • mv_lY represents the second motion vector
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame
  • mv_lX represents the first motion vector
  • the second motion vector is the motion vector of the current image block in the second direction.
  • the video decoder 202 constructs a candidate motion information list in the same manner as the encoding end constructs its candidate motion information list in the AMVP mode or the merge mode, and determines the first predicted motion vector in the candidate motion information list according to the first motion vector predictor flag, so that the video decoder 202 may determine the sum of the first predicted motion vector and the first motion vector difference as the first motion vector.
  • the first reference frame is a forward reference frame of the current image block
  • the second reference frame is a backward reference frame of the current image block
  • or the first reference frame is a backward reference frame of the current image block and the second reference frame is a forward reference frame of the current image block;
  • the first reference frame is the forward reference frame of the current image block
  • the second reference frame is a backward reference frame of the current image block
  • or the first reference frame is a backward reference frame of the current image block and the second reference frame is a forward reference frame of the current image block; in these cases, the formula may be simplified to mv_lY = -mv_lX.
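Putting the scaling formula and the mirrored special case together, a minimal sketch could be as follows. Floating-point arithmetic is used for clarity; real codecs use fixed-point scaling with rounding and clipping, so this is an illustration of the relationship rather than a bit-exact implementation.

```python
def derive_second_mv(mv_lx, poc_cur, poc_list_x, poc_list_y):
    """Derive the motion vector in the second direction from the one in the
    first direction: mv_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mv_lX.
    When one reference frame is forward and the other backward at equal POC
    distance (the mirrored case), the scale is -1 and mv_lY = -mv_lX."""
    scale = (poc_cur - poc_list_y) / (poc_cur - poc_list_x)
    return (mv_lx[0] * scale, mv_lx[1] * scale)

# Mirrored case: current POC 4, references at POC 2 and 6 -> mv_lY = -mv_lX.
mv_ly = derive_second_mv((8, -4), poc_cur=4, poc_list_x=2, poc_list_y=6)
```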
  • the method for the video decoder 202 to determine the second motion information according to the first motion information and the index value of the second reference frame may be: acquiring the index value of the first reference frame and the first motion vector difference in the first motion information, and determining the sequence number of the first reference frame according to the index value of the first reference frame and the first reference frame list; acquiring the index value of the second reference frame, and determining the sequence number of the second reference frame according to the index value of the second reference frame and the second reference frame list; determining a second predicted motion vector according to the index value of the second reference frame and the second candidate predicted motion vector list, where the second predicted motion vector is the predicted motion vector of the current image block in the second direction; and determining the second motion vector difference in the second motion information according to the following formula: mvd_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvd_lX, where:
  • mvd_lY represents the second motion vector difference
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame
  • POC_listY represents the sequence number of the second reference frame
  • mvd_lX represents the first motion vector difference
  • the first reference frame is a forward reference frame of the current image block
  • the second reference frame is a backward reference frame of the current image block
  • or the first reference frame is a backward reference frame of the current image block and the second reference frame is a forward reference frame of the current image block; in these cases, the formula may be simplified to mvd_lY = -mvd_lX.
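The same POC-distance scaling applied to the motion vector difference, followed by the AMVP-style predictor-plus-difference reconstruction, can be sketched as below (floating-point for clarity; function names are illustrative):

```python
def derive_second_mvd(mvd_lx, poc_cur, poc_list_x, poc_list_y):
    """Derive the second motion vector difference:
    mvd_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvd_lX,
    which reduces to mvd_lY = -mvd_lX in the mirrored forward/backward case."""
    scale = (poc_cur - poc_list_y) / (poc_cur - poc_list_x)
    return (mvd_lx[0] * scale, mvd_lx[1] * scale)

def second_motion_vector(mvp_ly, mvd_ly):
    """AMVP-style reconstruction: motion vector = predictor + difference."""
    return (mvp_ly[0] + mvd_ly[0], mvp_ly[1] + mvd_ly[1])
```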
  • the video decoder 202 determines, according to the first motion information and the second motion information, a predicted pixel of the current image block.
  • the video decoder 202 determines the first motion vector and the second motion vector in S402, so that the video decoder 202 may determine the first reference image block according to the first motion vector and the first reference frame list, determine the second reference image block according to the second motion vector and the second reference frame list, and in turn determine the predicted pixels of the current image block based on the first reference image block and the second reference image block; that is, the video decoder 202 completes the motion compensation process.
  • the method for the video decoder 202 to determine the predicted pixel of the current image block according to the first reference image block and the second reference image block may refer to any existing method, which is not specifically limited in this embodiment of the present application.
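The text leaves the combination of the two reference image blocks to "any existing method". One common choice, shown purely as an illustration (weighted variants also exist), is a rounded average of the two motion-compensated blocks:

```python
def bi_predict(block0, block1):
    """Combine two motion-compensated reference blocks into the predicted
    block by a rounded average; blocks are lists of pixel rows."""
    return [[(a + b + 1) >> 1 for a, b in zip(row0, row1)]
            for row0, row1 in zip(block0, block1)]
```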
  • the video decoder 202 needs to obtain only the first motion information from the encoded code stream; after acquiring the first motion information, the video decoder 202 calculates the second motion information according to the first motion information, and further determines the predicted pixels of the current image block based on the first motion information and the second motion information.
  • the method provided by the present application does not need to transmit the motion information of each image block in every direction, thereby effectively reducing the amount of transmitted motion information, improving the effective utilization of transmission resources, increasing the transmission rate, and improving the coding and decoding compression efficiency.
  • the bidirectional inter prediction method shown in FIG. 4 is described for the current image block, that is, it can be understood that the current image block performs inter prediction based on the AMVP mode.
  • the bi-directional inter-frame prediction method provided by the present application is also applicable to a non-translational motion model prediction mode, such as a 4-parameter affine transformation motion model, a 6-parameter affine transformation motion model, an 8-parameter bilinear motion model, and the like.
  • the current image block includes at least one sub-block
  • the motion information of the current image block includes motion information of each of the sub-blocks of the current image block.
  • The method in which video decoder 202 determines the motion information of each sub-block (the motion information in the first direction and the motion information in the second direction) is similar to the method in which video decoder 202 determines the motion information of the current image block.
  • the video decoder 202 calculates the motion vector of the i-th control point in the second direction according to the motion vector of the i-th control point in the first direction by using the following formula: mvi_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvi_lX, where:
  • mvi_lY represents the motion vector of the i-th control point in the second direction
  • mvi_lX represents the motion vector of the i-th control point in the first direction
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame.
  • POC_listY indicates the sequence number of the second reference frame.
  • the video decoder 202 calculates the motion vector difference of the i-th control point in the second direction according to the motion vector difference of the i-th control point in the first direction by using the following formula: mvdi_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvdi_lX, where:
  • mvdi_lY represents the motion vector difference of the i-th control point in the second direction
  • mvdi_lX represents the motion vector difference of the i-th control point in the first direction
  • POC_Cur represents the sequence number of the current frame
  • POC_listX represents the sequence number of the first reference frame, and POC_listY represents the sequence number of the second reference frame.
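For the non-translational (affine) case, the per-control-point derivation can be sketched as below, applying the same POC-distance scale independently to every control point motion vector. Floating-point is used for clarity, and the function name is an illustrative assumption.

```python
def derive_second_direction_cpmvs(cpmvs_lx, poc_cur, poc_list_x, poc_list_y):
    """Derive the second-direction control point motion vectors of an affine
    model: mvi_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvi_lX,
    applied to each control point i of the 4-, 6-, or 8-parameter model."""
    scale = (poc_cur - poc_list_y) / (poc_cur - poc_list_x)
    return [(mx * scale, my * scale) for (mx, my) in cpmvs_lx]
```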
  • the video encoder 102 in the embodiment of the present application performs bidirectional motion estimation on the current image block to determine the motion information of the current image block in the first direction, and calculates the motion information of the current image block in the second direction according to the motion information of the current image block in the first direction.
  • the video encoder 102 determines the predicted image block of the current image block according to the motion information of the current image block in the first direction and the motion information of the current image block in the second direction. Thereafter, video encoder 102 performs operations such as transform and quantization on the residual between the current image block and its predicted image block, generates a code stream, and transmits the code stream to video decoder 202.
  • the code stream includes motion information of the current image block in the first direction.
  • for the method of "the video encoder 102 calculates the motion information of the current image block in the second direction according to the motion information of the current image block in the first direction", refer to the method of "the video decoder 202 determines the second motion information according to the first motion information", that is, the description of S402 above; details are not repeated in this application.
  • the bi-directional inter-frame prediction method does not need to transmit the motion information of each image block in every direction; it transmits motion information only in a certain direction, thereby effectively reducing the amount of transmitted motion information, improving the effective utilization of transmission resources, increasing the transmission rate, and improving the coding and decoding compression efficiency.
  • the embodiment of the present application provides a bidirectional inter prediction apparatus, which may be a video decoder.
  • the bidirectional inter prediction apparatus is configured to perform the steps performed by the video decoder 202 in the above bidirectional inter prediction method.
  • the bidirectional inter prediction apparatus provided by the embodiment of the present application may include a module corresponding to the corresponding step.
  • the embodiment of the present application may divide the bidirectional inter prediction apparatus into function modules according to the foregoing method example.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the division of modules in the embodiments of the present application is schematic and is only a logical function division; there may be another division manner in actual implementation.
  • FIG. 5 shows a possible structural diagram of the bidirectional inter prediction apparatus involved in the above embodiment.
  • the bidirectional inter prediction apparatus 5 includes an acquisition unit 50 and a determination unit 51.
  • the obtaining unit 50 is for supporting the bidirectional inter prediction apparatus to perform S400, S401, etc. in the above embodiments, and/or other processes for the techniques described herein.
  • the determining unit 51 is for supporting the bidirectional inter prediction apparatus to perform S402, S403, etc. in the above embodiments, and/or other processes for the techniques described herein.
  • the bidirectional inter prediction apparatus provided by the embodiment of the present application includes, but is not limited to, the foregoing module.
  • the bidirectional inter prediction apparatus may further include a storage unit 52.
  • the storage unit 52 can be used to store program codes and data of the bi-directional inter prediction device.
  • the bidirectional inter prediction apparatus 6 includes a processing module 60 and a communication module 61.
  • the processing module 60 is for controlling management of the actions of the bi-directional inter prediction apparatus, for example, performing the steps performed by the acquisition unit 50 and the determining unit 51 described above, and/or other processes for performing the techniques described herein.
  • the communication module 61 is for supporting interaction between the bidirectional inter prediction device and other devices.
  • the bidirectional inter prediction apparatus may further include a storage module 62 for storing program codes and data of the bidirectional inter prediction apparatus, for example, storing the content held by the storage unit 52.
  • the processing module 60 may be a processor or a controller, such as a central processing unit (CPU), a general purpose processor, a digital signal processor (DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication module 61 can be a transceiver, an RF circuit or a communication interface or the like.
  • the storage module 62 can be a memory.
  • the bidirectional inter prediction apparatus 5 and the bidirectional inter prediction apparatus 6 can perform the bidirectional inter prediction method shown in FIG. 4, and may specifically be video decoding apparatuses or other devices with a video codec function. The bidirectional inter prediction apparatus 5 and the bidirectional inter prediction apparatus 6 can be used for image prediction in the decoding process.
  • the application also provides a terminal, the terminal comprising: one or more processors, a memory, and a communication interface.
  • the memory and the communication interface are coupled to the one or more processors; the memory is for storing computer program code, and the computer program code includes instructions; when the one or more processors execute the instructions, the terminal performs the bidirectional inter prediction method of the embodiments of the present application.
  • the terminals here can be video display devices, smart phones, laptops, and other devices that can process video or play video.
  • the present application also provides a video decoder, including a nonvolatile storage medium and a central processing unit, where the nonvolatile storage medium stores an executable program, the central processing unit is connected to the nonvolatile storage medium, and the executable program is executed to implement the bidirectional inter prediction method of the embodiments of the present application.
  • the present application further provides a decoder, which includes the bidirectional inter prediction apparatus (the bidirectional inter prediction apparatus 5 or the bidirectional inter prediction apparatus 6) of the embodiments of the present application and a reconstruction module, where the reconstruction module is configured to determine reconstructed pixel values of the current image block according to the predicted pixels obtained by the bidirectional inter prediction apparatus.
  • Another embodiment of the present application also provides a computer readable storage medium including one or more program codes, where the one or more programs include instructions; when a processor in a terminal executes the program code, the terminal performs the bidirectional inter prediction method shown in FIG. 4.
  • a computer program product is also provided, comprising computer-executable instructions stored in a computer readable storage medium; at least one processor of the terminal reads the computer-executable instructions from the storage medium, and executing the instructions causes the terminal to perform the steps performed by the video decoder 202 in the bidirectional inter prediction method shown in FIG. 4.
  • All or part of the above embodiments may be implemented by software, hardware, firmware, or any combination thereof.
  • When implemented by a software program, the implementation may occur in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer readable storage medium or transferred from one computer readable storage medium to another; for example, the computer instructions can be transferred from a website site, computer, server, or data center to another website site, computer, server, or data center by wired (eg, coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (eg, infrared, radio, microwave) means.
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that includes one or more available media.
  • the usable medium may be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium (eg, a solid state disk (SSD)).
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may be one physical unit or multiple physical units, that is, may be located in one place, or may be distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product, and the software product is stored in a storage medium.
  • a number of instructions are included to cause a device (which may be a microcontroller, chip, etc.) or a processor to perform all or part of the steps of the methods described in various embodiments of the present application.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

本申请实施例公开了一种双向帧间预测方法及装置,涉及视频图像编解码技术领域,提高了编解码效率。该方法包括:获取指示信息,指示信息用于指示根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息;获取第一运动信息;根据第一运动信息,确定第二运动信息;根据第一运动信息和第二运动信息,确定当前图像块的预测像素。

Description

一种双向帧间预测方法及装置
本申请要求于2018年03月29日提交中国专利局、申请号为201810274457.X、发明名称为“一种双向帧间预测方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及视频图像编解码技术领域,尤其涉及一种双向帧间预测方法及装置。
背景技术
在视频编解码技术中,对于当前图像块而言,可以仅根据一个参考图像块生成当前图像块的预测图像块(即单向帧间预测),也可以根据至少两个参考图像块生成当前图像块的预测图像块(即双向帧间预测),上述至少两个参考图像块可来自于同一个参考帧或者不同的参考帧。
为了使得解码端与编码端使用相同的参考图像块,编码端需要在码流中向解码端发送各个图像块的运动信息。一般的,当前图像块的运动信息包括参考帧的索引值、运动矢量预测值(motion Vector Predictor,MVP)标志和运动矢量差(Motion Vector Difference,MVD)。解码端根据参考帧索引值、MVP标志和MVD,即可以在选定的参考帧中找到正确的参考图像块。
相应的,对于双向帧间预测而言,编码端需要向解码端发送每个图像块在每一个方向的运动信息。这样,运动信息占用的传输资源较大,降低了传输资源的有效利用率,降低了传输速率,且降低了编解码压缩效率。
发明内容
本申请实施例提供一种双向帧间预测方法及装置,能够解决运动信息占用的传输资源较大,降低了传输资源的有效利用率,降低了传输速率,且降低了编解码压缩效率的问题。
为达到上述目的,本申请实施例采用如下技术方案:
第一方面,提供一种双向帧间预测方法,获取用于指示根据第一运动信息确定第二运动信息的指示信息,这里,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息;获取第一运动信息;根据获取到的第一运动信息,确定第二运动信息,这样,即可根据获取到的第一运动信息和确定出的第二运动信息,确定当前图像块的预测像素。
本申请提供的双向帧间预测方法在获取到指示信息后,根据第一运动信息确定第二运动信息,这样,码流中仅包括指示信息和第一运动信息即可,无需再包括第二运动信息。与现有技术中,码流包括每个图像块在每个方向的运动信息相比,有效的减少了码流包括的运动信息,提高了传输资源的有效利用率,提高了传输速率,相应的,也提高了编解码速率。
可选的,在本申请的一种可能的实现方式中,上述“根据第一运动信息,确定第 二运动信息”的方法为:获取第一运动信息中的第一参考帧的索引值,并根据第一参考帧的索引值和第一参考帧列表,确定第一参考帧的序号,这里,第一参考帧为当前图像块在第一方向的参考帧,第一参考帧的索引值为第一参考帧在第一参考帧列表中的编号;获取第二参考帧的索引值,并根据第二参考帧的索引值和第二参考帧列表,确定第二参考帧的序号,这里,第二参考帧为当前图像块在第二方向的参考帧,第二参考帧的索引值为第二参考帧在第二参考帧列表中的编号;根据第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量,第一运动矢量为当前图像块在第一方向的运动矢量;根据下述公式确定第二运动信息中的第二运动矢量:
mv_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)×mv_lX
该公式中,mv_lY表示第二运动矢量,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号,mv_lX表示第一运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
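为便于理解上述按时域距离进行比例缩放的推导方式,下面给出一段示意性的Python代码(仅为依照本文公式的演示草图,函数名、运动矢量的元组表示以及取整方式均为本文假设,标准实现中的定点化细节从略):

```python
def derive_mv_ly(mv_lx, poc_cur, poc_listx, poc_listy):
    """按 mv_lY = (POC_Cur-POC_listY)/(POC_Cur-POC_listX) × mv_lX 推导第二运动矢量。
    mv_lx 以 (水平分量, 垂直分量) 元组表示第一运动矢量。"""
    factor = (poc_cur - poc_listy) / (poc_cur - poc_listx)
    return tuple(round(factor * c) for c in mv_lx)

# 当前帧序号为4,第一参考帧序号为2,第二参考帧序号为6(镜像对称)时,
# 缩放因子为 (4-6)/(4-2) = -1,即第二运动矢量为第一运动矢量的反向。
print(derive_mv_ly((8, -4), 4, 2, 6))  # (-8, 4)
```

当两个参考帧到当前帧的时域距离不相等时,缩放因子的绝对值不为1,例如第二参考帧序号为8时因子为-2。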
可选的,在本申请的另一种可能的实现方式中,上述“根据第一运动信息,确定第二运动信息”的方法为:获取第一运动信息中的第一参考帧的索引值,第一参考帧为当前图像块在第一方向的参考帧,第一参考帧的索引值为第一参考帧在第一参考帧列表中的编号;获取第二参考帧的索引值,第二参考帧为当前图像块在第二方向的参考帧,第二参考帧的索引值为第二参考帧在第二参考帧列表中的编号;根据第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量,第一运动矢量为当前图像块在第一方向的运动矢量;第一参考帧为当前图像块的前向参考帧,第二参考帧为当前图像块的后向参考帧的情况下,或者,在第一参考帧为当前图像块的后向参考帧,第二参考帧为当前图像块的前向参考帧的情况下,或者,在第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况下,或者,在第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况下,根据下述公式确定第二运动信息中的第二运动矢量:
mv_lY=-mv_lX
该公式中,mv_lY表示第二运动矢量,mv_lX表示第一运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
上述“第一参考帧为当前图像块的前向参考帧,第二参考帧为当前图像块的后向参考帧的情况”,或者,“第一参考帧为当前图像块的后向参考帧,第二参考帧为当前图像块的前向参考帧的情况”均可以用公式(POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0表示,或者用公式POC_listY=2*POC_Cur-POC_listX表示,本申请对此不作具体限定。
此外,上述“第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况”或者“第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况”均可以用公式(POC_Cur-POC_listX)*(POC_listY-POC_Cur)<0表示。
上述POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号。
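上述各种位置关系的判别条件均为对序号的简单运算,可按下述Python片段判别(仅为演示性草图,函数名为本文假设):

```python
def refs_on_both_sides(poc_cur, poc_listx, poc_listy):
    # 两个参考帧分别位于当前帧两侧(一前一后),对应 (POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0
    return (poc_cur - poc_listx) * (poc_listy - poc_cur) > 0

def refs_on_same_side(poc_cur, poc_listx, poc_listy):
    # 两个参考帧位于当前帧同一侧(均为前向或均为后向参考帧),对应乘积小于0
    return (poc_cur - poc_listx) * (poc_listy - poc_cur) < 0

print(refs_on_both_sides(4, 2, 6))  # True:序号2在当前帧之前、序号6在其后
print(refs_on_same_side(4, 2, 3))   # True:序号2与3均在当前帧之前
```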
可选的,在本申请的另一种可能的实现方式中,上述“根据第一运动信息,确定 第二运动信息”的方法为:获取第一运动信息中的第一参考帧的索引值和第一运动矢量差,并根据第一参考帧的索引值和第一参考帧列表,确定第一参考帧的序号,第一参考帧为当前图像块在第一方向的参考帧,第一参考帧的索引值为第一参考帧在第一参考帧列表中的编号;获取第二参考帧的索引值,并根据第二参考帧的索引值和第二参考帧列表确定第二参考帧的序号,根据第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量,第二预测运动矢量为当前图像块在第二方向的预测运动矢量,第二参考帧为当前图像块在第二方向的参考帧,第二参考帧的索引值为第二参考帧在第二参考帧列表中的编号;根据下述公式确定第二运动信息中的第二运动矢量差:
mvd_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)×mvd_lX
该公式中,mvd_lY表示第二运动矢量差,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号,mvd_lX表示第一运动矢量差;根据第二预测运动矢量和第二运动矢量差,确定第二运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
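下面用一段示意性的Python代码演示"先按时域距离缩放运动矢量差、再与预测运动矢量相加"的过程(函数名与元组表示为本文假设,仅为演示草图):

```python
def derive_second_mv_from_mvd(mvp_ly, mvd_lx, poc_cur, poc_listx, poc_listy):
    """mvd_lY = (POC_Cur-POC_listY)/(POC_Cur-POC_listX) × mvd_lX;mv_lY = mvp_lY + mvd_lY。"""
    factor = (poc_cur - poc_listy) / (poc_cur - poc_listx)
    mvd_ly = tuple(round(factor * c) for c in mvd_lx)
    return tuple(p + d for p, d in zip(mvp_ly, mvd_ly))

# 镜像对称(序号4、2、6)时因子为-1:mvd_lX=(2,-1) 被缩放为 (-2,1),再与 mvp_lY=(5,5) 相加。
print(derive_second_mv_from_mvd((5, 5), (2, -1), 4, 2, 6))  # (3, 6)
```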
可选的,在本申请的另一种可能的实现方式中,上述“根据第一运动信息,确定第二运动信息”的方法为:获取第一运动信息中的第一参考帧的索引值和第一运动矢量,第一参考帧为当前图像块在第一方向的参考帧,第一参考帧的索引值为第一参考帧在第一参考帧列表中的编号;获取第二参考帧的索引值,根据第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量,第二预测运动矢量为当前图像块在第二方向的预测运动矢量,第二参考帧为当前图像块在第二方向的参考帧,第二参考帧的索引值为第二参考帧在第二参考帧列表中的编号;在第一参考帧为当前图像块的前向参考帧,第二参考帧为当前图像块的后向参考帧的情况下,或者,在第一参考帧为当前图像块的后向参考帧,第二参考帧为当前图像块的前向参考帧的情况下,或者,在第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况下,或者,在第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况下,根据下述公式确定第二运动信息中的第二运动矢量差:
mvd_lY=-mvd_lX
该公式中,mvd_lY表示第二运动矢量差,mvd_lX表示第一运动矢量差;这样,根据第二预测运动矢量和第二运动矢量差,确定第二运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
同理,上述“第一参考帧为当前图像块的前向参考帧,第二参考帧为当前图像块的后向参考帧的情况”,或者,“第一参考帧为当前图像块的后向参考帧,第二参考帧为当前图像块的前向参考帧的情况”均可以用公式(POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0表示,或者用公式POC_listY=2*POC_Cur-POC_listX表示,本申请对此不作具体限定。
上述“第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况”或者“第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况”均可以用公式(POC_Cur-POC_listX)*(POC_listY-POC_Cur)<0表示。
上述POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号。
可以看出,本申请提供的双向帧间预测方法可以为根据第一运动矢量确定第二运动矢量,也可以为根据第一运动矢量差确定第二运动矢量差,并根据第二运动矢量差确定第二运动矢量。
可选的,在本申请的另一种可能的实现方式中,上述“获取第二参考帧的索引值”的方法为:根据当前帧的序号和第一参考帧的序号,通过公式POC_listY0=2*POC_Cur-POC_listX,计算第一序号,其中,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY0表示第一序号;在第二参考帧列表包括第一序号的情况下,将第一序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。
可选的,在本申请的另一种可能的实现方式中,上述“获取第二参考帧的索引值”的方法为:根据当前帧的序号和第一参考帧的序号,通过公式(POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0,计算第二序号,其中,POC_listY0'表示第二序号;在第二参考帧列表包括第二序号的情况下,将第二序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。
可选的,在本申请的另一种可能的实现方式中,上述“获取第二参考帧的索引值”的方法为:根据当前帧的序号和第一参考帧的序号,通过公式POC_listX≠POC_listY0″,计算第三序号,其中,POC_listY0″表示第三序号;将第三序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。
可选的,在本申请的另一种可能的实现方式中,上述“获取第二参考帧的索引值”的方法为:根据当前帧的序号和第一参考帧的序号,通过公式POC_listY0=2*POC_Cur-POC_listX,计算第一序号,其中,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY0表示第一序号。在第二参考帧列表包括第一序号的情况下,将第一序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。在第二参考帧列表不包括第一序号的情况下,根据当前帧的序号和第一参考帧的序号,通过公式(POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0,计算第二序号,其中,POC_listY0'表示第二序号。在第二参考帧列表包括第二序号的情况下,将第二序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。在第二参考帧列表不包括第二序号的情况下,根据当前帧的序号和第一参考帧的序号,通过公式POC_listX≠POC_listY0″,计算第三序号,其中,POC_listY0″表示第三序号;将第三序号表征的参考帧在第二参考帧列表中的编号确定为所述第二参考帧的索引值。
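上述三步回退式的索引推导过程可以概括为如下Python草图(列表以按索引顺序排列的参考帧序号表示;当某一步存在多个满足条件的序号时,文中另有按abs()取最小值的细化规则,此处简化为取首个命中项,函数名为本文假设):

```python
def derive_second_ref_idx(poc_cur, poc_listx, list_y):
    """在第二参考帧列表 list_y(按索引顺序给出各参考帧的序号)中确定第二参考帧的索引值。"""
    # 第一步:镜像序号 POC_listY0 = 2*POC_Cur - POC_listX
    poc_y0 = 2 * poc_cur - poc_listx
    if poc_y0 in list_y:
        return list_y.index(poc_y0)
    # 第二步:取满足 (POC_Cur-POC_listX)*(POC_listY0'-POC_Cur) > 0 的参考帧
    for idx, poc in enumerate(list_y):
        if (poc_cur - poc_listx) * (poc - poc_cur) > 0:
            return idx
    # 第三步:取满足 POC_listX ≠ POC_listY0'' 的参考帧
    for idx, poc in enumerate(list_y):
        if poc != poc_listx:
            return idx
    return None

print(derive_second_ref_idx(4, 2, [6, 8]))  # 0:第一步即命中序号6
print(derive_second_ref_idx(4, 2, [5, 7]))  # 0:第一步未命中,第二步命中序号5
```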
可选的,在本申请的另一种可能的实现方式中,上述“获取第二参考帧的索引值”的方法为:解析码流,获取第二参考帧的索引值。
可以看出,本申请中“获取第二参考帧的索引值”的方法可以有多种,具体采用哪一种方法获取第二参考帧的索引值需要根据实际需求或预先设定确定。
第二方面,提供一种双向帧间预测装置,该双向帧间预测装置包括获取单元和确定单元。
具体的,上述获取单元,用于获取指示信息,指示信息用于指示根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息,以及获取第一运动信息。上述确定单元,用于根据获取单元获取到的第一运动信息,确定第二运动信息,以及用于根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
可选的,在本申请的一种可能的实现方式中,上述确定单元具体用于:获取第一运动信息中的第一参考帧的索引值,并根据第一参考帧的索引值和第一参考帧列表,确定第一参考帧的序号,第一参考帧为当前图像块在第一方向的参考帧,第一参考帧的索引值为第一参考帧在第一参考帧列表中的编号;获取第二参考帧的索引值,并根据第二参考帧的索引值和第二参考帧列表,确定第二参考帧的序号,第二参考帧为当前图像块在第二方向的参考帧,第二参考帧的索引值为第二参考帧在第二参考帧列表中的编号;根据第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量,第一运动矢量为当前图像块在第一方向的运动矢量;根据下述公式确定第二运动信息中的第二运动矢量:
mv_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)×mv_lX
其中,mv_lY表示第二运动矢量,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号,mv_lX表示第一运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
可选的,在本申请的另一种可能的实现方式中,上述确定单元具体用于:获取第一运动信息中的第一参考帧的索引值,第一参考帧为当前图像块在第一方向的参考帧,第一参考帧的索引值为第一参考帧在第一参考帧列表中的编号;获取第二参考帧的索引值,第二参考帧为当前图像块在第二方向的参考帧,第二参考帧的索引值为第二参考帧在第二参考帧列表中的编号;根据第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量,第一运动矢量为当前图像块在第一方向的运动矢量;第一参考帧为当前图像块的前向参考帧,第二参考帧为当前图像块的后向参考帧的情况下,或者,在第一参考帧为当前图像块的后向参考帧,第二参考帧为当前图像块的前向参考帧的情况下,或者,在第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况下,或者,在第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况下,根据下述公式确定第二运动信息中的第二运动矢量:
mv_lY=-mv_lX
该公式中,mv_lY表示第二运动矢量,mv_lX表示第一运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
可选的,在本申请的另一种可能的实现方式中,上述确定单元具体用于:获取第一运动信息中的第一参考帧的索引值和第一运动矢量差,并根据第一参考帧的索引值和第一参考帧列表,确定第一参考帧的序号,第一参考帧为当前图像块在第一方向的参考帧,第一参考帧的索引值为第一参考帧在第一参考帧列表中的编号;获取第二参考帧的索引值,并根据第二参考帧的索引值和第二参考帧列表确定第二参考帧的序号,根据第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量,第二 预测运动矢量为当前图像块在第二方向的预测运动矢量,第二参考帧为当前图像块在第二方向的参考帧,第二参考帧的索引值为第二参考帧在第二参考帧列表中的编号;根据下述公式确定第二运动信息中的第二运动矢量差:
mvd_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)×mvd_lX
其中,mvd_lY表示第二运动矢量差,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号,mvd_lX表示第一运动矢量差;根据第二预测运动矢量和第二运动矢量差,确定第二运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
可选的,在本申请的另一种可能的实现方式中,上述确定单元具体用于:获取第一运动信息中的第一参考帧的索引值和第一运动矢量,第一参考帧为当前图像块在第一方向的参考帧,第一参考帧的索引值为第一参考帧在第一参考帧列表中的编号;获取第二参考帧的索引值,根据第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量,第二预测运动矢量为当前图像块在第二方向的预测运动矢量,第二参考帧为当前图像块在第二方向的参考帧,第二参考帧的索引值为第二参考帧在第二参考帧列表中的编号;在第一参考帧为当前图像块的前向参考帧,第二参考帧为当前图像块的后向参考帧的情况下,或者,在第一参考帧为当前图像块的后向参考帧,第二参考帧为当前图像块的前向参考帧的情况下,或者,在第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况下,或者,在第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况下,根据下述公式确定第二运动信息中的第二运动矢量差:
mvd_lY=-mvd_lX
其中,mvd_lY表示第二运动矢量差,mvd_lX表示第一运动矢量差;这样,根据第二预测运动矢量和第二运动矢量差,确定第二运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
可选的,在本申请的另一种可能的实现方式中,上述获取单元具体用于:根据当前帧的序号和第一参考帧的序号,通过公式POC_listY0=2*POC_Cur-POC_listX,计算第一序号,其中,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY0表示第一序号;在第二参考帧列表包括第一序号的情况下,将第一序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。
可选的,在本申请的另一种可能的实现方式中,上述获取单元具体用于:根据当前帧的序号和第一参考帧的序号,通过公式(POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0,计算第二序号,其中,POC_listY0'表示第二序号;在第二参考帧列表包括第二序号的情况下,将第二序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。
可选的,在本申请的另一种可能的实现方式中,上述获取单元具体用于:根据当前帧的序号和第一参考帧的序号,通过公式POC_listX≠POC_listY0″,计算第三序号,其中,POC_listY0″表示第三序号;将第三序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。
第三方面,提供一种双向帧间预测方法。该双向帧间预测方法存在多种实现方式:
一种实现方式为:解析码流,获取第一标识,第一标识用于指示是否根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息;若第一标识的取值为第一预设值,获取第一运动信息,并根据第一运动信息确定第二运动信息;根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
另一种实现方式为:解析码流,获取第二标识,第二标识用于指示是否采用运动信息推导算法计算当前图像块的运动信息;若第二标识的取值为第二预设值,获取第三标识,第三标识用于指示是否根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息;若第三标识取值为第三预设值,获取第一运动信息,并根据第一运动信息确定第二运动信息;根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
另一种实现方式为:解析码流,获取第二标识,第二标识用于指示是否采用运动信息推导算法计算当前图像块的运动信息;若第二标识的取值为第二预设值,获取第一运动信息,并根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息;根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
另一种实现方式为:解析码流,获取第四标识,第四标识用于指示是否采用运动信息推导算法计算当前图像块的运动信息;若第四标识的取值为第四预设值,根据第一参考帧列表和第二参考帧列表,确定第一参考帧的索引值和第二参考帧的索引值,第一参考帧列表为当前图像块在第一方向的参考帧列表,第二参考帧列表为当前图像块在第二方向的参考帧列表,第一参考帧为当前图像块在第一方向的参考帧,第二参考帧为当前图像块在第二方向的参考帧;获取第一运动矢量差和第一运动矢量预测值标志,并根据第一运动信息确定第二运动信息,第一运动信息包括第一参考帧的索引值、第一运动矢量差和第一运动矢量预测值标志,第二运动信息为当前图像块在第二方向的运动信息;根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
另一种实现方式为:解析码流,获取第一标识,第一标识用于指示是否根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息;若第一标识的取值为第八预设值,获取第五标识,该第五标识用于指示是否根据第二运动信息确定第一运动信息;若第五标识的取值为第五预设值,获取第二运动信息,并根据第二运动信息确定第一运动信息;根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
另一种实现方式为:解析码流,获取第二标识,第二标识用于指示是否采用运动信息推导算法计算当前图像块的运动信息;若第二标识的取值为第二预设值,获取第三标识,第三标识用于指示是否根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息;若第三标识取值为第六预设值,获取第二运动信息,并根据第二运动信息确定第一运动信息;根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
上述第一标识~第四标识的具体描述可以参考后续描述。
本申请提供的双向帧间预测方法中,在解析码流获取到某一标识后,根据该标识的取值,确定是否根据第一运动信息确定第二运动信息。在确定需要根据第一运动信息确定第二运动信息后,获取第一运动信息,进而根据获取到的第一运动信息确定第二运动信息,这样,码流中仅包括相应标识和第一运动信息即可,无需再包括第二运动信息。与现有技术中,码流包括每个图像块在每个方向的运动信息相比,有效的减少了码流包括的运动信息,提高了传输资源的有效利用率,提高了传输速率,相应的,也提高了编解码速率。
第四方面,提供一种双向帧间预测装置,该双向帧间预测装置包括获取单元和确定单元。
具体的,在一种实现方式中,上述获取单元,用于解析码流,获取第一标识,第一标识用于指示是否根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息,以及用于若第一标识的取值为第一预设值,获取第一运动信息。上述确定单元,用于根据上述获取单元获取到的第一运动信息确定第二运动信息,以及用于根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
在另一种实现方式中,上述获取单元,用于解析码流,获取第二标识,第二标识用于指示是否采用运动信息推导算法计算当前图像块的运动信息,以及用于若第二标识的取值为第二预设值,获取第三标识,第三标识用于指示是否根据第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息,以及用于若第三标识取值为第三预设值,获取第一运动信息。上述确定单元,用于根据上述获取单元获取到的第一运动信息确定第二运动信息,以及用于根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
在另一种实现方式中,上述获取单元,用于解析码流,获取第二标识,第二标识用于指示是否采用运动信息推导算法计算当前图像块的运动信息,以及用于若第二标识的取值为第二预设值,获取第一运动信息。上述确定单元,用于根据上述获取单元获取到的第一运动信息确定第二运动信息,第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息,以及用于根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
在另一种实现方式中,上述获取单元,用于解析码流,获取第四标识,第四标识用于指示是否采用运动信息推导算法计算当前图像块的运动信息。上述确定单元,用于若上述获取单元获取到的第四标识的取值为第四预设值,根据第一参考帧列表和第二参考帧列表,确定第一参考帧的索引值和第二参考帧的索引值,第一参考帧列表为当前图像块在第一方向的参考帧列表,第二参考帧列表为当前图像块在第二方向的参考帧列表,第一参考帧为当前图像块在第一方向的参考帧,第二参考帧为当前图像块在第二方向的参考帧。上述获取单元,还用于获取第一运动矢量差和第一运动矢量预测值标志。上述确定单元,还用于根据第一运动信息确定第二运动信息,第一运动信息包括第一参考帧的索引值、第一运动矢量差和第一运动矢量预测值标志,第二运动信息为当前图像块在第二方向的运动信息;根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
第五方面,提供一种终端,该终端包括:一个或多个处理器、存储器、通信接口。该存储器、通信接口与一个或多个处理器耦合;存储器用于存储计算机程序代码,计算机程序代码包括指令,当一个或多个处理器执行指令时,终端执行如上述第一方面及其任意一种可能的实现方式所述的双向帧间预测方法或执行如上述第三方面及其任意一种可能的实现方式所述的双向帧间预测方法。
第六方面,提供一种视频解码器,包括非易失性存储介质以及中央处理器,所述非易失性存储介质存储有可执行程序,所述中央处理器与所述非易失性存储介质连接,并执行所述可执行程序以实现如上述第一方面及其任意一种可能的实现方式所述的双向帧间预测方法或如上述第三方面及其任意一种可能的实现方式所述的双向帧间预测方法。
第七方面,提供一种解码器,所述解码器包括上述第二方面中的双向帧间预测装置以及重建模块,其中,所述重建模块用于根据所述双向帧间预测装置得到的预测像素确定当前图像块的重建像素值;或者,所述解码器包括包括上述第四方面中的双向帧间预测装置以及重建模块,其中,所述重建模块用于根据所述双向帧间预测装置得到的预测像素确定当前图像块的重建像素值。
第八方面,提供一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当所述指令在上述第五方面所述的终端上运行时,使得所述终端执行如上述第一方面及其任意一种可能的实现方式所述的双向帧间预测方法或执行如上述第三方面及其任意一种可能的实现方式所述的双向帧间预测方法。
第九方面,提供一种包含指令的计算机程序产品,当该计算机程序产品在上述第五方面所述的终端上运行时,使得所述终端执行如上述第一方面及其任意一种可能的实现方式所述的双向帧间预测方法或执行如上述第三方面及其任意一种可能的实现方式所述的双向帧间预测方法。
在本申请中,上述双向帧间预测装置的名字对设备或功能模块本身不构成限定,在实际实现中,这些设备或功能模块可以以其他名称出现。只要各个设备或功能模块的功能和本申请类似,属于本申请权利要求及其等同技术的范围之内。
本申请中第五方面到第九方面及其各种实现方式的具体描述,可以参考第一方面及其各种实现方式中的详细描述或第三方面及其各种实现方式中的详细描述;并且,第五方面到第九方面及其各种实现方式的有益效果,可以参考第一方面及其各种实现方式中的有益效果分析或第三方面及其各种实现方式中的有益效果分析,此处不再赘述。
本申请的这些方面或其他方面在以下的描述中会更加简明易懂。
附图说明
图1为本申请实施例中视频编解码***的结构示意图;
图2为本申请实施例中视频编码器的结构示意图;
图3为本申请实施例中视频解码器的结构示意图;
图4为本申请实施例提供的双向帧间预测方法流程示意图;
图5为本申请实施例提供的双向帧间预测装置的结构示意图一;
图6为本申请实施例提供的双向帧间预测装置的结构示意图二。
具体实施方式
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同对象,而不是用于限定特定顺序。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
为了便于理解本申请实施例,首先在此介绍本申请实施例涉及到的相关要素。
图像编码(image encoding):将图像序列压缩成码流的处理过程。
图像解码(image decoding):将码流按照特定的语法规则和处理方法恢复成重建图像的处理过程。
目前,视频图像的编码过程为:编码端首先将一帧原始图像划分成互不重叠的多个部分,每一部分即可作为一个图像块;然后,编码端针对每个图像块执行预测(Prediction)、变换(Transform)和量化(Quantization)等操作,以得到该图像块对应的码流;其中,预测是为了得到图像块的预测块,从而可以仅对该图像块与其预测块之间的差值(或称为残差或残差块)进行编码和传输,进而节省传输开销;最后,编码端将该图像块对应的码流发送给解码端。
相应的,解码端在接收到该码流之后,执行视频解码过程。具体的,解码端对接收到的码流进行预测、反量化和反变换等操作,得到经重建的图像块(或称为重建图像块),该过程称为图像重建过程(或图像重构过程);然后,解码端对上述原始图像中的每个图像块的重建块进行组装,得到该原始图像的经重建的图像,并播放经重建的图像。
现有的视频图像编解码技术包括帧内预测与帧间预测。其中,帧间预测是指以编码图像块/解码图像块为单位,利用当前帧与其参考帧之间的相关性完成的预测。当前帧可以存在一个或多个参考帧。具体的,根据当前图像块的参考帧中的像素,生成当前图像块的预测图像块。
一般的,对于当前图像块而言,可以仅根据一个参考图像块生成当前图像块的预测图像块,也可以根据至少两个参考图像块生成当前图像块的预测图像块。上述根据一个参考图像块生成当前图像块的预测图像块称为单向预测,上述根据至少两个参考图像块生成当前图像块的预测图像块称为双向帧间预测。双向帧间预测中的至少两个参考图像块可来自于同一个参考帧或者不同的参考帧。也就是说,本申请涉及到的“方向”为一个广义的定义。本申请中的一个方向与一个参考图像块对应。下文中的第一方向和第二方向对应不同的参考图像块,这两个参考图像块可以均被包含于当前图像块的前向参考帧/后向参考帧中,也可以一个被包含于当前图像块的前向参考帧,另一个包含于当前图像块的后向参考帧。
可选的,双向帧间预测可以是指利用当前视频帧与在其之前编码且在其之前播放的视频帧之间的相关性,和当前视频帧与在其之前编码且在其之后播放的视频帧之间的相关性进行的帧间预测。
可以看出,上述双向帧间预测涉及两个方向的帧间预测,一般称为:前向帧间预测和后向帧间预测。前向帧间预测是指利用当前视频帧与在其之前编码且在其之前播放的视频帧之间的相关性进行的帧间预测。后向帧间预测是指利用当前视频帧与在其之前编码且在其之后播放的视频帧之间的相关性进行的帧间预测。
前向帧间预测对应前向参考帧列表L0,后向帧间预测对应后向参考帧列表L1,两个参考帧列表中所包含的参考帧的数量可以相同,也可以不同。
运动补偿(Motion Compensation,MC)为利用参考图像块对当前图像块进行预测的过程。
在大多数的编码框架中,视频序列包括一系列图像(picture),图像被划分为至少一个条带(slice),每个条带又被划分为图像块(block)。视频编码/解码以图像块为单位,可从图像的左上角位置开始从左到右、从上到下、一行一行进行编码/解码处理。这里,图像块可以为视频编解码标准H.264中的宏块(macro block,MB),也可以为高效视频编码(High Efficiency Video Coding,HEVC)标准中的编码单元(Coding Unit,CU),本申请实施例对此不作具体限定。
本申请中,正在进行编码/解码处理的图像块称为当前图像块(current block),当前图像块所在的图像称为当前帧。
一般的,当前帧可以为单向预测帧(P帧),也可以为双向预测帧(B帧)。在当前帧为P帧的情况下,当前帧具有一个参考帧列表。在当前帧为B帧的情况下,当前帧具有两个参考帧列表,这两个列表通常分别称为L0和L1。每个参考帧列表均包含至少一个用作当前帧的参考帧的重建帧。参考帧用于为当前帧的帧间预测提供参考像素。
在当前帧中,与当前图像块相邻的(例如位于当前块的左侧、上侧或右侧)的图像块可能已经完成了编码/解码处理,得到了重建图像,它们称为重建图像块;重建图像块的编码模式、重建像素等信息是可以获得的(available)。
在当前帧进行编码/解码之前已经完成编码/解码处理的帧称为重建帧。
运动矢量(Motion Vector,MV)是帧间预测过程中的一个重要参数,其表示已编码的图像块相对于当前图像块的空间位移。一般的,可以使用运动估计(Motion Estimation,ME)的方法,诸如运动搜索来获取运动矢量。初期的帧间预测技术,编码端在码流中传输当前图像块的运动矢量,以使得解码端再现当前图像块的预测像素,进而得到重建块。为了进一步的改善编码效率,后来又提出使用参考运动矢量差分地编码运动矢量,即仅仅编码运动矢量差(Motion Vector Difference,MVD)。
为了使得解码端与编码端使用相同的参考图像块,编码端需要在码流中向解码端发送各个图像块的运动信息。若编码端直接对每个图像块的运动矢量进行编码,则会消耗大量的传输资源。由于空间域相邻的图像块的运动矢量具有很强的相关性,因此,当前图像块的运动矢量可以根据邻近已编码图像块的运动矢量进行预测,预测所得到的运动矢量称为MVP,当前图像块的运动矢量和MVP之间的差值称为MVD。
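MVP与MVD的关系可以用如下最简Python片段说明(数值仅为示意):

```python
mvp = (12, -3)  # 由空域相邻已编码块的运动矢量预测得到的 MVP
mv = (14, -1)   # 运动估计得到的当前图像块的实际运动矢量
mvd = tuple(a - b for a, b in zip(mv, mvp))  # 编码端只需编码差值 MVD
print(mvd)  # (2, 2)

# 解码端用 MVP + MVD 恢复运动矢量
assert tuple(p + d for p, d in zip(mvp, mvd)) == mv
```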
视频编解码标准H.264在运动估计过程中采用了多参考帧预测来提高预测精度,即建立储存多个重建帧的缓存,并在缓存内的所有的重建帧中寻找最优的参考图像块进行运动补偿,以便更好地去除时间域的冗余度。视频编解码标准H.264的帧间预测使用两个缓存,即参考帧列表0(reference list0)和参考帧列表1(reference list1)。 每一个列表中最优的参考块所在的参考帧用索引值标明,即ref_idx_l0和ref_idx_l1。每一参考帧列表中,参考图像块的运动信息包括参考帧的索引值(ref_idx_l0或ref_idx_l1)、MVP标志和MVD。解码端根据参考帧的索引值、MVP标志和MVD,即可以在选定的参考帧中找到正确的参考图像块。
目前,在HEVC标准中常使用帧间预测模式为高级运动矢量预测(Advanced Motion Vector Prediction,AMVP)模式、合并(Merge)模式和非平动运动模型预测模式。
对于AMVP模式,编码端通过当前图像块空域或者时域相邻的已编码的图像块的运动信息构建候选运动矢量列表,并根据率失真代价从该候选运动矢量列表中确定最优的运动矢量作为当前图像块的MVP。此外,编码端在以MVP为中心的邻域内进行运动搜索获得当前图像块的运动矢量。编码端将MVP在候选运动矢量列表中的索引值(即上述MVP标志)、参考帧的索引值以及MVD传递到解码端。
对于合并模式,编码端通过当前图像块空域或者时域相邻的已编码图像块的运动信息,构建候选运动信息列表,并根据率失真代价从该候选运动信息列表中确定最优的运动信息作为当前图像块的运动信息。编码端将最优的运动信息在候选运动信息列表中位置的索引值传递到解码端。
对于非平动运动模型预测模式,编解码端使用相同的运动模型推导出当前图像块中每一个子块的运动信息,并根据所有子块的运动信息进行运动补偿,得到预测图像块,从而提高预测效率。其中,编解码端常用的运动模型为4参数仿射模型、6参数仿射变换模型或8参数的双线性模型。
示例性的,4参数仿射变换模型可以通过两个像素点的运动矢量及其相对于当前图像块左上顶点像素的坐标来表示。这里,将用于表示运动模型参数的像素点称为控制点。若当前图像块的左上顶点(0,0)和右上顶点(W,0)像素点为控制点,当前图像块的左上顶点和右上顶点的运动矢量分别为(vx 0,vy 0)和(vx 1,vy 1),则根据下述公式(1)得到当前图像块中每一个子块的运动信息。下述公式(1)中的(x,y)为子块相对于当前图像块的左上顶点像素的坐标,(vx,vy)为子块的运动矢量,W为当前图像块的宽。
vx=(vx1-vx0)/W×x-(vy1-vy0)/W×y+vx0,vy=(vy1-vy0)/W×x+(vx1-vx0)/W×y+vy0  ……(1)
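以公式(1)的4参数仿射模型为例,下面的Python草图按左上、右上两个控制点的运动矢量计算子块在(x,y)处的运动矢量。此处采用常见的4参数仿射模型表达式,与原文图片中的公式在书写形式上可能略有出入;函数名与浮点实现为本文假设,标准中的定点化处理从略:

```python
def affine_4param_mv(x, y, v0, v1, w):
    """由左上(0,0)与右上(W,0)控制点的运动矢量 v0、v1 计算 (x,y) 处子块的运动矢量:
    vx = (vx1-vx0)/W*x - (vy1-vy0)/W*y + vx0
    vy = (vy1-vy0)/W*x + (vx1-vx0)/W*y + vy0
    """
    vx0, vy0 = v0
    vx1, vy1 = v1
    vx = (vx1 - vx0) / w * x - (vy1 - vy0) / w * y + vx0
    vy = (vy1 - vy0) / w * x + (vx1 - vx0) / w * y + vy0
    return vx, vy

# 在控制点处,模型应严格还原控制点自身的运动矢量
print(affine_4param_mv(0, 0, (1.0, 2.0), (3.0, 2.0), 16))   # (1.0, 2.0)
print(affine_4param_mv(16, 0, (1.0, 2.0), (3.0, 2.0), 16))  # (3.0, 2.0)
```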
示例性的,6参数仿射变换模型可以通过三个像素点的运动矢量及其相对于当前图像块左上顶点像素的坐标来表示。若当前图像块的左上顶点(0,0)、右上顶点(W,0)和左下顶点(0,H)像素点为控制点,当前图像块的左上顶点、右上顶点和左下顶点的运动矢量分别为(vx 0,vy 0)、(vx 1,vy 1)、(vx 2,vy 2),则根据下述公式(2)得到当前图像块中每一个子块的运动信息。下述公式(2)中的(x,y)为子块相对于当前图像块的左上顶点像素的坐标,(vx,vy)为子块的运动矢量,W和H分别为当前图像块的宽和高。
vx=(vx1-vx0)/W×x+(vx2-vx0)/H×y+vx0,vy=(vy1-vy0)/W×x+(vy2-vy0)/H×y+vy0  ……(2)
示例性的,8参数双线性模型可以通过四个像素点的运动矢量及其相对于当前图像块左上顶点像素的坐标来表示。若当前图像块的左上顶点(0,0)、右上顶点(W,0)、左下顶点(0,H)和右下顶点(W,H)像素点为控制点,当前图像块的左上顶点、右上顶点、左下顶点和右下顶点的运动矢量分别为(vx 0,vy 0)、(vx 1,vy 1)、(vx 2,vy 2)、(vx 3,vy 3),则根据下述公式(3)得到当前图像块中每一个子块的运动信息。下述公式(3)中的(x,y)为子块相对于当前图像块的左上顶点像素的坐标,(vx,vy)为子块的运动矢量,W和H分别为当前图像块的宽和高。
vx=(vx1-vx0)/W×x+(vx2-vx0)/H×y+(vx3+vx0-vx1-vx2)/(W×H)×x×y+vx0,vy=(vy1-vy0)/W×x+(vy2-vy0)/H×y+(vy3+vy0-vy1-vy2)/(W×H)×x×y+vy0  ……(3)
容易看出,对于上述任一帧间预测模式,若帧间预测为双向帧间预测,则编码端需要向解码端发送每个图像块在每一个方向的运动信息。这样,运动信息占用的传输资源较大,降低了传输资源的有效利用率,降低了传输速率,且降低了编解码压缩效率。
针对上述问题,本申请提供一种双向帧间预测方法,对于双向帧间预测,编码端向解码端发送当前图像块在第一方向的运动信息,解码端在接收到当前图像块在第一方向的运动信息后,根据所述当前图像块在第一方向的运动信息计算当前图像块在第二方向的运动信息,这样,可根据当前图像块在第一方向的运动信息和当前图像块在第二方向的运动信息计算当前图像块的预测像素。
本申请提供的双向帧间预测方法可以由双向帧间预测装置、视频编解码装置、视频编解码器以及其它具有视频编解码功能的设备来执行。
本申请提供的双向帧间预测方法适用于视频编解码***。视频编解码***的视频编码器102和视频解码器202用于根据本申请提出的双向帧间预测方法实例实现当前图像块的运动信息的计算。具体的,根据当前图像块在第一方向的运动信息计算当前图像块在第二方向的运动信息,从而根据当前图像块在第一方向的运动信息以及当前图像块在第二方向的运动信息确定当前图像块的预测像素,这样,视频编码器102与视频解码器202之间只需传输当前图像块在第一方向的运动信息即可,有效的提高了传输资源的利用率,提高了编解码压缩效率。
图1示出了视频编解码***的结构。如图1所示,视频编解码***1包含源装置10和目的装置20。源装置10产生经过编码后的视频数据,源装置10也可以被称为视频编码装置或视频编码设备,目的装置20可以对源装置10产生的经过编码后的视频数据进行解码,目的装置20也可以被称为视频解码装置或视频解码设备。源装置10和/或目的装置20可包含至少一个处理器以及耦合到所述至少一个处理器的存储器。所述存储器可包含但不限于只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、带电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、快闪存储器或可用于以可由计算机存取的指令或数据结构的形式存储所要的程序代码的任何其它媒体,本申请对此不作具体限定。
源装置10和目的装置20可以包括各种装置,包含桌上型计算机、移动计算装置、 笔记本(例如,膝上型)计算机、平板计算机、机顶盒、例如所谓的“智能”电话等电话手持机、电视机、相机、显示装置、数字媒体播放器、视频游戏控制台、车载计算机或其类似者。
目的装置20可经由链路30从源装置10接收经编码视频数据。链路30可包括能够将经编码视频数据从源装置10移动到目的装置20的一个或多个媒体和/或装置。在一个实例中,链路30可包括使得源装置10能够实时地将编码后的视频数据直接发射到目的装置20的一个或多个通信媒体。在此实例中,源装置10可根据通信标准(例如:无线通信协议)来调制编码后的视频数据,并且可以将调制后的视频数据发射到目的装置20。上述一个或多个通信媒体可包含无线和/或有线通信媒体,例如:射频(Radio Frequency,RF)频谱、一个或多个物理传输线。上述一个或多个通信媒体可形成基于分组的网络的一部分,基于分组的网络(例如,局域网、广域网或全球网络(例如,因特网))的部分。上述一个或多个通信媒体可以包含路由器、交换器、基站,或者实现从源装置10到目的装置20的通信的其它设备。
在另一实例中,可将编码后的视频数据从输出接口140输出到存储装置40。类似地,可通过输入接口240从存储装置40存取编码后的视频数据。存储装置40可包含多种本地存取式数据存储媒体,例如蓝光光盘、高密度数字视频光盘(Digital Video Disc,DVD)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、快闪存储器,或用于存储经编码视频数据的其它合适数字存储媒体。
在另一实例中,存储装置40可对应于文件服务器或存储由源装置10产生的编码后的视频数据的另一中间存储装置。在此实例中,目的装置20可经由流式传输或下载从存储装置40获取其存储的视频数据。文件服务器可为任何类型的能够存储经编码的视频数据并且将经编码的视频数据发射到目的装置20的服务器。例如,文件服务器可以包含全球广域网(World Wide Web,Web)服务器(例如,用于网站)、文件传送协议(File Transfer Protocol,FTP)服务器、网络附加存储(Network Attached Storage,NAS)装置以及本地磁盘驱动器。
目的装置20可通过任何标准数据连接(例如,因特网连接)存取编码后的视频数据。数据连接的实例类型包含适合于存取存储于文件服务器上的编码后的视频数据的无线信道、有线连接(例如,缆线调制解调器等),或两者的组合。编码后的视频数据从文件服务器的发射可为流式传输、下载传输或两者的组合。
本申请的双向帧间预测方法不限于无线应用场景,示例性的,本申请的双向帧间预测方法可以应用于支持以下应用等多种多媒体应用的视频编解码:空中电视广播、有线电视发射、***发射、流式传输视频发射(例如,经由因特网)、存储于数据存储媒体上的视频数据的编码、存储于数据存储媒体上的视频数据的解码,或其它应用。在一些实例中,视频编解码***1可经配置以支持单向或双向视频发射,以支持例如视频流式传输、视频播放、视频广播及/或视频电话等应用。
需要说明的是,图1示出的视频编解码***1仅仅是视频编解码***的示例,并不是对本申请中视频编解码***的限定。本申请提供的双向帧间预测方法还可适用于编码装置与解码装置之间无数据通信的场景。在其它实例中,待编码视频数据或编码后的视频数据可以从本地存储器检索,也可以在网络上流式传输等。视频编码装置可 对待编码视频数据进行编码并且将编码后的视频数据存储到存储器,视频解码装置也可从存储器中获取编码后的视频数据并且对该编码后的视频数据进行解码。
在图1中,源装置10包含视频源101、视频编码器102和输出接口103。在一些实例中,输出接口103可包含调节器/解调器(调制解调器)和/或发射器。视频源101可包括视频捕获装置(例如,摄像机)、含有先前捕获的视频数据的视频存档、用以从视频内容提供者接收视频数据的视频输入接口,和/或用于产生视频数据的计算机图形***,或视频数据的此些来源的组合。
视频编码器102可对来自视频源101的视频数据进行编码。在一些实例中,源装置10经由输出接口103将编码后的视频数据直接发射到目的装置20。在其它实例中,编码后的视频数据还可存储到存储装置40上,供目的装置20稍后存取来用于解码和/或播放。
在图1的实例中,目的装置20包含显示装置201、视频解码器202以及输入接口203。在一些实例中,输入接口203包含接收器和/或调制解调器。输入接口203可经由链路30和/或从存储装置40接收编码后的视频数据。显示装置201可与目的装置20集成或可在目的装置20外部。一般来说,显示装置201显示解码后的视频数据。显示装置201可包括多种显示装置,例如,液晶显示器、等离子显示器、有机发光二极管显示器或其它类型的显示装置。
可选的,视频编码器102和视频解码器202可各自与音频编码器和解码器集成,且可包含适当的多路复用器-多路分用器单元或其它硬件和软件,以处置共同数据流或单独数据流中的音频和视频两者的编码。
视频编码器102和视频解码器202可以包括至少一个微处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application-Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、离散逻辑、硬件或其任何组合。若本申请提供的双向帧间预测方法采用软件实现,则可将用于软件的指令存储在合适的非易失性计算机可读存储媒体中,且可使用至少一个处理器在硬件中执行所述指令从而实施本申请。前述内容(包含硬件、软件、硬件与软件的组合等)中的任一者可被视为至少一个处理器。视频编码器102可以包含在编码器中,视频解码器202可包含在解码器中,所述编码器或解码器可以为相应装置中组合编码器/解码器(编码解码器)中的一部分。
本申请中的视频编码器102和视频解码器202可以根据视频压缩标准(例如HEVC)操作,也可根据其它业界标准操作,本申请对此不作具体限定。
视频编码器102用于对当前图像块进行双向运动估计,以确定当前图像块在第一方向的运动信息,并根据当前图像块在第一方向的运动信息计算当前图像块在第二方向的运动信息,这样,视频编码器102根据当前图像块在第一方向的运动信息以及当前图像块在第二方向的运动信息确定当前图像块的预测图像块。进而,视频编码器102对当前图像块与其预测图像块之间的残差执行变换和量化等操作,生成码流并向视频解码器202发送该码流。该码流包括当前图像块在第一方向的运动信息以及用于指示根据第一运动信息确定第二运动信息的指示信息。该指示信息可以采用不同的标识表示。该指示信息的表示方法可以参考后续描述。
可选的,上述“视频编码器102根据当前图像块在第一方向的运动信息计算当前图像块在第二方向的运动信息”的方法可以为视频编码器102根据当前图像块在第一方向的运动矢量确定当前图像块在第二方向的运动矢量,也可以为视频编码器102根据当前图像块在第一方向的运动矢量差值确定当前图像块在第二方向的运动矢量差值,并根据当前图像块在第二方向的运动矢量差值以及当前图像块在第二方向的预测运动矢量确定当前图像块在第二方向的运动矢量。
参考下述图4,视频解码器202用于:获取码流,并解析该码流,以获取用于指示根据第一运动信息确定第二运动信息的指示信息(S400),即确定从哪一个方向的运动信息推导计算另一方向的运动信息,第一运动信息包括当前图像块在第一方向的运动信息,第二运动信息包括当前图像块在第二方向的运动信息,这里,第一方向与第二方向不同;获取第一运动信息(S401),并根据获取到的第一运动信息,确定第二运动信息(S402),这样,视频解码器202即可根据第一运动信息和第二运动信息确定当前图像块的预测像素(S403)。
上述“视频解码器202根据当前图像块在第一方向的运动信息计算当前图像块在第二方向的运动信息”的方法可以为视频解码器202根据当前图像块在第一方向的运动矢量确定当前图像块在第二方向的运动矢量,也可以为视频解码器202根据当前图像块在第一方向的运动矢量差值确定当前图像块在第二方向的运动矢量差值,并根据当前图像块在第二方向的运动矢量差值以及当前图像块在第二方向的预测运动矢量确定当前图像块在第二方向的运动矢量。
图2是本申请实施例中视频编码器102的结构示意图。如图2所示,视频编码器102用于将视频输出到后处理实体41。后处理实体41表示可处理来自视频编码器102的经编码视频数据的视频实体的实例,例如媒体感知网络元件(MANE)或拼接/编辑装置。在一些情况下,后处理实体41可为网络实体的实例。在一些视频编码***中,后处理实体41和视频编码器102可为单独装置的若干部分,而在其它情况下,相对于后处理实体41所描述的功能性可由包括视频编码器102的相同装置执行。在某一实例中,后处理实体41是图1的存储装置40的实例。
视频编码器102可根据当前图像块在第一方向的运动信息推导计算当前图像块在第二方向的运动信息,进而根据当前图像块在第一方向的运动信息以及当前图像块在第二方向的运动信息确定当前图像块的预测图像块,从而完成双向帧间预测编码。
如图2所示,视频编码器102包括变换器301、量化器302、熵编码器303、滤波器306、存储器307、预测处理单元308、求和器312。预测处理单元308包括帧内预测器309和帧间预测器310。为了图像块重构,视频编码器102还包含反量化器304、反变换器305以及求和器311。滤波器306既定表示一或多个环路滤波器,例如去块滤波器、自适应环路滤波器和样本自适应偏移滤波器。
存储器307可存储由视频编码器102的组件编码的视频数据。可从视频源101获得存储在存储器307中的视频数据。存储器307可为参考图像存储器,其存储用于由视频编码器102在帧内、帧间译码模式中对视频数据进行编码的参考视频数据。存储器307可以为同步DRAM(synchronous DRAM,SDRAM)的动态随机存取存储器(dynamic RAM,DRAM)、磁阻式RAM(magnetic RAM,MRAM)、电阻式RAM (resistive RAM,RRAM),或其它类型的存储器装置。
视频编码器102接收视频数据,并将所述视频数据存储在视频数据存储器中。分割单元将所述视频数据分割成若干图像块,而且这些图像块可以被进一步分割为更小的块,例如基于四叉树结构或者二叉树结构的图像块分割。此分割还可包含分割成条带(slice)、片(tile)或其它较大单元。视频编码器102通常说明编码待编码的视频条带内的图像块的组件。所述条带可分成多个图像块(并且可能分成被称作片的图像块集合)。
预测处理单元308内的帧内预测器309可相对于与当前图像块在相同帧或条带中的一或多个相邻图像块执行当前图像块的帧内预测性编码,以去除空间冗余。预测处理单元308内的帧间预测器310可相对于一或多个参考图像中的一或多个预测图像块执行当前图像块的帧间预测性编码以去除时间冗余。
预测处理单元308可将所得经帧内、帧间译码的图像块提供给求和器312以产生残差块,且提供给求和器311以重建用作参考图像的编码块。
在预测处理单元308经由帧间预测、帧内预测产生当前图像块的预测图像块之后,视频编码器102通过从待编码的当前图像块减去所述预测图像块来形成残差图像块。求和器312表示执行此减法运算的一或多个组件。所述残差块中的残差视频数据可包含在一或多个变换单元(transform unit,TU)中,并应用于变换器301。变换器301使用例如离散余弦变换(discrete cosine transform,DCT)或概念上类似的变换等变换将残差视频数据变换成残差变换系数。变换器301可将残差视频数据从像素值域转换到变换域,例如频域。
变换器301可将所得变换系数发送到量化器302。量化器302量化所述变换系数以进一步减小位速率。在一些实例中,量化器302可接着执行对包含经量化的变换系数的矩阵的扫描。或者,熵编码器303可执行扫描。
在量化之后,熵编码器303对经量化变换系数进行熵编码。举例来说,熵编码器303可执行上下文自适应可变长度编码(context adaptive variable length coding,CAVLC)、上下文自适应二进制算术编码(context based adaptive binary arithmetic coding,CABAC)或另一熵编码方法或技术。在由熵编码器303熵编码之后,可将编码后的码流发送到视频解码器202,或经存档以供稍后发送或由视频解码器202检索。熵编码器303还可对待编码的当前图像块的语法元素进行熵编码。
反量化器304和反变换器305分别应用逆量化和逆变换以在像素域中重构所述残差块,例如以供稍后用作参考图像的参考块。求和器311将经重构的残差块添加到由帧间预测器310或帧内预测器309产生的预测图像块,以产生经重构图像块。其中,对一个图像块的参考图像块进行处理(例如插值等处理)可以得到该图像块的预测图像块。
应当理解的是,视频编码器102的其它的结构变化可用于编码视频流。例如,对于某些图像块或者图像帧,视频编码器102可以直接地量化残差信号而不需要经变换器301处理,相应地也不需要经反变换器305处理;或者,对于某些图像块或者图像帧,视频编码器102没有产生残差数据,相应地不需要经变换器301、量化器302、反量化器304和反变换器305处理;或者,视频编码器102可以将经重构图像块作为参考块直接地进行存储而不需要经滤波器306处理;或者,视频编码器102中量化器302和反量化器304可以合并在一起。
图3是本申请实施例中视频解码器202的结构示意图。如图3所示,视频解码器202包括熵解码器401、反量化器402、反变换器403、滤波器404、存储器405、预测处理单元406以及求和器409。预测处理单元406包括帧内预测器407和帧间预测器408。在一些实例中,视频解码器202可执行大体上与相对于来自图2的视频编码器102描述的编码过程互逆的解码过程。
在解码过程中,视频解码器202从视频编码器102接收码流。视频解码器202可从网络实体42接收视频数据,可选的,还可以将所述视频数据存储在视频数据存储器(图中未示意)中。视频数据存储器可存储待由视频解码器202的组件解码的视频数据,例如编码后的码流。存储在视频数据存储器中的视频数据,例如可从存储装置40、从相机等本地视频源、经由视频数据的有线或无线网络通信或者通过存取物理数据存储媒体而获得。尽管在图3中没有示意出视频数据存储器,但视频数据存储器和存储器405可以是同一个的存储器,也可以是单独设置的存储器。视频数据存储器和存储器405可由多种存储器装置中的任一者形成,例如:包含同步DRAM(SDRAM)的动态随机存取存储器(DRAM)、磁阻式RAM(MRAM)、电阻式RAM(RRAM),或其它类型的存储器装置。在各种实例中,视频数据存储器可与视频解码器200的其它组件一起集成在芯片上,或相对于那些组件设置在芯片外。
网络实体42可例如为服务器、MANE、视频编辑器/剪接器,或用于实施上文所描述的技术中的一或多者的其它此装置。网络实体42可包括或可不包括视频编码器,例如视频编码器102。在网络实体42将码流发送到视频解码器202之前,网络实体42可实施本申请中描述的技术中的部分。在一些视频解码***中,网络实体42和视频解码器202可为单独装置的部分,而在其它情况下,相对于网络实体42描述的功能性可由包括视频解码器202的相同装置执行。在一些情况下,网络实体42可为图1的存储装置40的实例。
视频解码器202的熵解码器401对码流进行熵解码以产生经量化的系数和一些语法元素。熵解码器401将语法元素转发到滤波器404。视频解码器202可接收在视频条带层级和/或图像块层级处的语法元素。本申请中,在一种示例下,这里的语法元素可以包括与当前图像块相关的指示信息,该指示信息用于指示根据第一运动信息确定第二运动信息。另外,在一些实例中,可以是视频编码器102发信号通知指示是否根据第一运动信息确定第二运动信息的特定语法元素。
反量化器402将在码流中提供且由熵解码器401解码的经量化变换系数逆量化,即去量化。逆量化过程可包括:使用由视频编码器102针对视频条带中的每个图像块计算的量化参数来确定应施加的量化程度以及同样地确定应施加的逆量化程度。反变换器403将逆变换应用于变换系数,例如逆DCT、逆整数变换或概念上类似的逆变换过程,以便产生像素域中的残差块。
在预测处理单元406产生用于当前图像块或当前图像块的子块的预测图像块之后,视频解码器202通过将来自反变换器403的残差块与由预测处理单元406产生的对应预测图像块求和以得到重建的块,即经解码图像块。求和器409(亦称为重建器409) 表示执行此求和操作的组件。在需要时,还可使用滤波器(在解码环路中或在解码环路之后)使像素转变平滑或者以其它方式改进视频质量。滤波器404可以为一或多个环路滤波器,例如去块效应滤波器、自适应环路滤波器(ALF)以及样本自适应偏移(SAO)滤波器等。
应当理解的是,视频解码器202的其它结构变化可用于码流的解码。例如,对于某些图像块或者图像帧,视频解码器202的熵解码器401没有解码出经量化的系数,相应地不需要经反量化器402和反变换器403处理。例如,视频解码器202中反量化器402和反变换器403可以合并在一起。
以下,结合上述图1示出的视频编解码***1、图2示出的视频编码器102以及图3示出的视频解码器202对本申请提供的双向帧间预测方法进行详细描述。
图4为本申请实施例中双向帧间预测方法的流程示意图。图4所示的方法由双向帧间预测装置执行。该双向帧间预测装置可以是图1中的视频解码器202。图4以双向帧间预测装置为视频解码器202为例进行说明。
如图4所示,本申请实施例中双向帧间预测方法可以包括下述步骤:
S400、视频解码器202解析获取到的码流,并获取指示信息。
可选的,视频解码器202解析码流,并根据码流中语法元素的值确定用于当前帧中当前图像块进行帧间预测的帧间预测模式。在帧间预测模式为双向帧间预测模式的情况下,视频解码器202获取指示信息。
其中,视频解码器202可以接收视频编码器102发送的编码后的码流,也可以从存装置40获取编码后的码流。
可选的,本申请实施例中的视频解码器202根据语法元素inter_pred_idc的值确定用于当前帧中当前图像块进行帧间预测的帧间预测模式。从上面描述可知,帧间预测包括单向帧间预测和双向帧间预测。当语法元素inter_pred_idc的值为0时,视频解码器202确定用于当前帧中当前图像块进行帧间预测的帧间预测模式为前向帧间预测。当语法元素inter_pred_idc的值为1时,视频解码器202确定用于当前帧中当前图像块进行帧间预测的帧间预测模式为后向帧间预测。当语法元素inter_pred_idc的值为2时,视频解码器202确定用于当前帧中当前图像块进行帧间预测的帧间预测模式为双向帧间预测。
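inter_pred_idc取值与帧间预测方向的对应关系可以概括为如下示意性草图(函数名为本文假设):

```python
def inter_pred_mode(inter_pred_idc):
    """按文中约定:0表示前向帧间预测,1表示后向帧间预测,2表示双向帧间预测。"""
    modes = {0: "前向帧间预测", 1: "后向帧间预测", 2: "双向帧间预测"}
    return modes[inter_pred_idc]

# 仅当取值为2(双向帧间预测)时,视频解码器才进一步获取指示信息
print(inter_pred_mode(2))  # 双向帧间预测
```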
可选的,视频解码器202在确定语法元素inter_pred_idc的值为2后,获取用于指示根据第一运动信息确定第二运动信息的指示信息。第一运动信息为当前图像块在第一方向的运动信息,第二运动信息为当前图像块在第二方向的运动信息,第一方向与第二方向不同。
本申请所涉及到的图像块可以为执行视频编码或视频解码的基本单元,例如:编码单元(Coding Unit,CU),也可以为执行预测操作的基本单元,例如预测单元(Prediction Unit,PU),本申请实施例对此不作具体限定。
若图像块为执行视频编码或视频解码的基本单元,则本申请实施例中的当前图像块包括至少一个子块。相应的,第一运动信息包括当前图像块的至少一个子块中每个子块在第一方向的运动信息,第二运动信息包括当前图像块的至少一个子块中每个子块在第二方向的运动信息,指示信息可以用于指示根据某一子块在第一方向的运动信息确定该子块在第二方向的运动信息。
视频解码器202可以采用多种方式获取指示信息。
在第一种实现方式中,视频解码器202解析第一标识。当第一标识的取值为第一预设值时,视频解码器202确定解析第一运动信息,并根据第一运动信息确定第二运动信息,即视频解码器202获取指示信息。当第一标识的取值为第八预设值时,视频解码器202解析码流获取第五标识。当第五标识的取值为第五预设值时,视频解码器202确定解析第二运动信息,并根据第二运动信息计算第一运动信息。当第五标识的取值为第九预设值时,视频解码器202获取第一运动信息和第二运动信息。其中,第一预设值与第五预设值可以相同,也可以不同,本申请实施例对此不作具体限定。
示例性的,第一标识为mv_derived_flag_l0,第五标识为mv_derived_flag_l1,第一预设值与第五预设值均为1,第八预设值和第九预设值均为0。视频解码器202先解析mv_derived_flag_l0。mv_derived_flag_l0的取值为1时,视频解码器202解析第一运动信息,并根据第一运动信息确定第二运动信息。mv_derived_flag_l0的取值为0时,视频解码器202解析mv_derived_flag_l1。mv_derived_flag_l1的取值为1时,视频解码器202解析第二运动信息,并根据第二运动信息计算第一运动信息。mv_derived_flag_l0和mv_derived_flag_l1的取值均为0时,视频解码器202解析第一运动信息和第二运动信息。
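第一种实现方式中两个标志的判定顺序可以概括为如下Python草图(返回的字符串仅用于说明解析/推导动作,函数名为本文假设;按文中示例,第一预设值与第五预设值均取1,第八、第九预设值均取0):

```python
def resolve_parsing_plan(mv_derived_flag_l0, mv_derived_flag_l1=0):
    """先判定 mv_derived_flag_l0,再判定 mv_derived_flag_l1,决定运动信息的解析与推导方式。"""
    if mv_derived_flag_l0 == 1:
        return "解析第一运动信息,并推导第二运动信息"
    if mv_derived_flag_l1 == 1:
        return "解析第二运动信息,并推导第一运动信息"
    return "解析第一运动信息和第二运动信息"

print(resolve_parsing_plan(1))     # 解析第一运动信息,并推导第二运动信息
print(resolve_parsing_plan(0, 0))  # 解析第一运动信息和第二运动信息
```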
在第二种实现方式中,视频解码器202解析第二标识。当第二标识的取值为第二预设值时,视频解码器202确定采用运动信息推导算法计算当前图像块的运动信息。然后,视频解码器202解析第三标识。当第三标识的取值为第三预设值时,视频解码器202确定解析第一运动信息,并根据第一运动信息确定第二运动信息,即视频解码器202获取指示信息。当第三标识的取值为第六预设值时,视频解码器202确定解析第二运动信息,并根据第二运动信息计算第一运动信息。
示例性的,第二标识为derived_mv_flag,第三标识为derived_mv_direction,第三预设值为1,第六预设值为0。视频解码器202先解析derived_mv_flag。derived_mv_flag的取值为1时,视频解码器202确定采用运动信息推导算法计算当前图像块的运动信息。derived_mv_flag的取值为0时,视频解码器202解析第一运动信息和第二运动信息。derived_mv_direction的取值为1时,视频解码器202解析第一运动信息,并根据第一运动信息确定第二运动信息。derived_mv_direction的取值为0时,视频解码器202确定解析第二运动信息,并根据第二运动信息计算第一运动信息。
在第三种实现方式中,视频解码器202解析第二标识。当第二标识的取值为第二预设值时,视频解码器202确定采用运动信息推导算法计算当前图像块的运动信息。然后,视频解码器202根据预设的推导方向,确定解析第一运动信息,并根据第一运动信息确定第二运动信息,即视频解码器202获取指示信息。也就是说,本实现方式中,“根据第一运动信息确定第二运动信息”是预先设置好的。当第二标识的取值为第七预设值时,解析第一运动信息和第二运动信息。
示例性的,第二标识为derived_mv_flag,第二预设值为1,第七预设值为0。视频解码器202解析derived_mv_flag。derived_mv_flag的取值为1时,视频解码器202确定采用运动信息推导算法计算当前图像块的运动信息。进一步地,视频解码器202 确定解析第一运动信息,并根据第一运动信息确定第二运动信息。derived_mv_flag的取值为0时,视频解码器202解析第一运动信息和第二运动信息。
在第四种实现方式中,视频解码器202解析第四标识(例如mv_derived_flag_l0)。当第四标识的取值为第四预设值时,视频解码器202确定采用运动信息推导算法计算当前图像块的运动信息,并根据第一参考帧列表和第二参考帧列表,计算变量derived_ref_num。该变量表示第一参考帧和第二参考帧能组成镜像/线性的参考帧对的个数。当参考帧对的个数为1时,视频解码器202直接确定参考帧的索引值。然后,视频解码器202根据预设的推导方向,确定解析第一运动信息,并根据第一运动信息确定第二运动信息,即视频解码器202获取指示信息。第一参考帧列表为当前图像块在第一方向的参考帧列表,第二参考帧列表为当前图像块在第二方向的参考帧列表,第一参考帧为当前图像块在第一方向的参考帧,第二参考帧为当前图像块在第二方向的参考帧。本申请实施例中的参考帧的索引值均为该参考帧在对应的参考帧列表中的编号。
示例性的,当前帧的序号为4,第一参考帧列表为{2,0},第二参考帧列表为{6,7},根据上述条件B或条件C,确定第一参考帧列表中序号为2的参考帧和第二参考帧列表中序号为6的参考帧能组成参考帧对。因此,第一参考帧和第二参考帧的索引值均为0。
若当前帧的序号为4,第一参考帧列表为{2,0},第二参考帧列表为{6,8},根据上述条件B或条件C,确定第一参考帧列表中序号为2的参考帧和第二参考帧列表中序号为6的参考帧能组成参考帧对,第一参考帧列表中序号为0的参考帧和第二参考帧列表中序号为8的参考帧也能组成参考帧对。此时,视频解码器202需要解析参考帧的索引值。
进一步地,视频解码器202在确定帧间预测模式为双向帧间预测模式的情况下,还可以确定当前帧的特征信息是否满足预设条件。这样,当所述当前帧的特征信息满足预设条件时,视频解码器202获取指示信息。即S401的过程具体可以为:在确定帧间预测模式为双向帧间预测模式、且当前帧的特征信息满足第一预设条件的情况下,视频解码器202获取指示信息。
当前帧的特征信息包括序号、时域分层级别(Temporal Level ID,TID)和参考帧数量中的至少一个。视频解码器202获取到的码流包括序列参数集(Sequence Parameter Set,SPS)、图像参数集(Picture Parameter Set,PPS)、条带头(slice header)或条带片段头(slice segment header)、以及编码后的图像数据。之后,视频解码器202解析该码流,获取当前帧的特征信息。
上述预设条件包括以下条件中的至少一个:
条件A、当前图像块存在至少两个参考帧。
条件B、当前帧的序号、第一参考帧的序号以及第二参考帧的序号满足下述公式:
POC_Cur-POC_listX=POC_listY-POC_Cur
其中,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号,第一参考帧为当前图像块在第一方向的参考帧,第二参考帧为当前图像块在第二方向的参考帧。
条件C、当前帧的序号、第一参考帧的序号以及第二参考帧的序号满足下述公式:
(POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0
其中,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号,第一参考帧为当前图像块在第一方向的参考帧,第二参考帧为当前图像块在第二方向的参考帧。
条件D、当前帧的TID大于或等于预设值。
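其中条件B与条件C均为对参考帧序号的简单判别,可按下述Python草图实现(函数名为本文假设,仅为演示):

```python
def meets_condition_b(poc_cur, poc_listx, poc_listy):
    # 条件B:POC_Cur-POC_listX == POC_listY-POC_Cur,即当前帧恰为两参考帧序号的中点
    return poc_cur - poc_listx == poc_listy - poc_cur

def meets_condition_c(poc_cur, poc_listx, poc_listy):
    # 条件C:(POC_Cur-POC_listX)*(POC_listY-POC_Cur) > 0,即两参考帧位于当前帧两侧
    return (poc_cur - poc_listx) * (poc_listy - poc_cur) > 0

print(meets_condition_b(4, 2, 6))  # True:4-2 == 6-4
print(meets_condition_c(4, 2, 7))  # True:(4-2)*(7-4) > 0
```

可以看出,满足条件B必然满足条件C,条件C是条件B的放宽形式。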
本申请实施例中的预设条件可以是预先设定的,也可以在高层语法,例如:SPS、PPS、条带头(slice header)或条带片段头(slice segment header)等参数集中指定,本申请实施例对此不作具体限定。
具体的,针对上述条件B(或条件C),视频解码器202从第一参考帧列表和第二参考帧列表各自获取一个参考帧的序号,判断获取到的参考帧的序号和当前帧的序号是否满足上述条件B或条件C。在满足上述条件B(或条件C)的情况下,获取指示信息。
本申请实施例中,“在确定帧间预测模式为双向帧间预测模式、且当前帧的特征信息满足预设条件的情况下,视频解码器202获取指示信息”的方法与上述“在确定帧间预测模式为双向帧间预测模式的情况下,视频解码器202获取指示信息”的方法相同。
结合上述描述,表1为在确定帧间预测模式为双向帧间预测模式、且当前帧的特征信息满足预设条件的情况下,视频解码器202采用上述第一种实现方式获取指示信息的语法表。prediction_unit()为预测图像块的语法结构体,描述确定当前图像块中每一子块的运动信息的方法。
表1中,x0和y0分别表示当前图像块中的子块相对于当前图像块左上顶点的水平坐标偏移和竖直坐标偏移,nPbW表示当前图像块的宽,nPbH表示当前图像块的高。inter_pred_idc[x0][y0]的取值为PRED_L0时,表示当前子块的帧间预测为前向预测。inter_pred_idc[x0][y0]的取值为PRED_L1时,表示当前子块的帧间预测为后向预测。inter_pred_idc[x0][y0]的取值为PRED_BI时,表示当前子块的帧间预测为双向预测。
在双向帧间预测(即inter_pred_idc[x0][y0]==PRED_BI)的情况下,若满足预设条件(conditions),则解析mv_derived_flag_l0[x0][y0]。若mv_derived_flag_l0的取值不为第一预设值,则解析mv_derived_flag_l1[x0][y0]。在mv_derived_flag_l0的取值为第一预设值或者mv_derived_flag_l1[x0][y0]的取值为第五预设值的情况下,确定当前图像块的子块的运动信息,即确定参考帧的索引值ref_idx_l0[x0][y0]、运动矢量预测值标志mvp_l0_flag[x0][y0]以及运动矢量差值mvd_coding(x0,y0,0)。
表1
(表1的语法表内容在原公开文本中以图像形式给出。)
结合上述描述,表2为在确定帧间预测模式为双向帧间预测模式、且当前帧的特征信息满足预设条件的情况下,视频解码器202采用上述第三种实现方式获取指示信息的语法表。
表2中,在双向帧间预测(即inter_pred_idc[x0][y0]==PRED_BI)的情况下,若满足预设条件(conditions),则解析derived_mv_flag[x0][y0]。若derived_mv_flag[x0][y0]的取值为第二预设值,则确定当前图像块的子块的运动信息,即确定参考帧的索引值ref_idx_lx[x0][y0]、运动矢量预测值标志mvp_lx_flag[x0][y0]以及运动矢量差值mvd_coding(x0,y0,x)。
表2
(表2的语法表内容在原公开文本中以图像形式给出。)
结合上述描述,表3为在确定帧间预测模式为双向帧间预测模式、且当前帧的特征信息满足第一预设条件的情况下,视频解码器202采用上述第四种实现方式获取指示信息的语法表。
表3中,在双向帧间预测(即inter_pred_idc[x0][y0]==PRED_BI)的情况下,若满足预设条件(conditions),则解析derived_mv_flag[x0][y0]。若derived_mv_flag[x0][y0]的取值为第四预设值,则确定derived_ref_num,并在derived_ref_num的数值大于1的情况下,确定当前图像块的子块的运动信息,即确定参考帧的索引值ref_idx_lx[x0][y0]、运动矢量预测值标志mvp_lx_flag[x0][y0]以及运动矢量差值mvd_coding(x0,y0,x)。
表3
(表3的语法表内容在原公开文本中以图像形式给出。)
上述第一标识、第二标识、第三标识、第四标识均可以为预先设定的,也可以在高层语法,例如:SPS、PPS、条带头(slice header)或条带片段头(slice segment header)等参数集中指定,本申请实施例对此不作具体限定。
视频解码器202在确定帧间预测模式为双向帧间预测模式、且当前帧的特征信息满足预设条件的情况下,获取指示信息,有效的提高视频解码器202的解码速率,减少信息冗余。
S401、视频解码器202获取第一运动信息。
可选的,视频解码器202解析码流,获取第一参考帧的索引值、第一运动矢量预测值标志和第一运动矢量差,即获取第一运动信息。第一运动矢量预测值标志用于指示第一预测运动矢量在第一候选运动矢量列表的索引值,第一预测运动矢量为当前图像块在第一方向的预测运动矢量,第一运动矢量差为第一运动矢量与第一预测运动矢量的差值,第一运动矢量为当前图像块在第一方向的运动矢量。
上述表1~表3所示的语法表中,视频解码器202均确定出了当前图像块的子块在第一方向的运动信息。
S402、视频解码器202根据第一运动信息,确定第二运动信息。
在第一种实现方式中，视频解码器202确定第二运动信息的方法为：视频解码器202从第一运动信息中选取第一参考帧的索引值，并根据该第一参考帧的索引值和第一参考帧列表，确定第一参考帧的序号；根据当前帧的序号和第一参考帧的序号，通过预设公式，计算第二参考帧的序号；根据第二参考帧的序号和第二参考帧列表，确定第二参考帧的索引值；根据第一运动信息和第二参考帧的索引值，确定第二运动信息。
这里,预设公式可以为POC_listY=2*POC_Cur-POC_listX。其中,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号。
示例性的,若当前帧的序号为4,第一参考帧的序号为2,第二参考帧列表为[6,8],根据公式POC_listY=2*POC_Cur-POC_listX,确定第二参考帧的序号为6,则视频解码器202确定第二参考帧的索引值ref_lY_idx为0。
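上述镜像推导过程可以用如下示意性 Python 片段表示（仅为便于理解的示例，函数名与以序号列表表示参考帧列表的方式均为本文假设，并非规范实现）：

```python
def derive_ref_ly_idx(poc_cur, poc_list_x, second_ref_list):
    # 按预设公式 POC_listY = 2*POC_Cur - POC_listX 计算第二参考帧的序号
    poc_list_y = 2 * poc_cur - poc_list_x
    if poc_list_y in second_ref_list:
        # 索引值即该序号表征的参考帧在第二参考帧列表中的编号
        return second_ref_list.index(poc_list_y)
    return None  # 第二参考帧列表不包含该序号


# 文中示例：当前帧序号为4，第一参考帧序号为2，第二参考帧列表为[6, 8]
assert derive_ref_ly_idx(4, 2, [6, 8]) == 0
```

当镜像序号不在第二参考帧列表中时，该片段返回 None，对应后文所述的回退到其他预设公式的情形。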
可选的,预设公式还可以为(POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0。需要说明的是,若第二参考帧列表中有多个参考帧的序号满足该公式,则视频解码器202先选择出abs((POC_listY-POC_Cur)-(POC_Cur-POC_listX))最小的参考帧,再选择abs(POC_listY-POC_Cur)最小的参考帧,以确定第二参考帧的索引值。其中,abs为绝对值函数。
示例性的,若当前帧的序号为4,第一参考帧的序号为2,第二参考帧列表为[5,7,8],根据公式(POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0确定第二参考帧的序号为5,则视频解码器202确定第二参考帧的索引值ref_lY_idx为0。
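该筛选与两级最小化的选择过程可以用如下示意性片段表达（假设性实现，abs 即绝对值函数）：

```python
def select_opposite_ref_idx(poc_cur, poc_list_x, second_ref_list):
    # 筛选满足 (POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0 的参考帧序号
    cands = [(i, poc) for i, poc in enumerate(second_ref_list)
             if (poc_cur - poc_list_x) * (poc - poc_cur) > 0]
    if not cands:
        return None
    # 多个序号满足时：先取 abs((POC_listY-POC_Cur)-(POC_Cur-POC_listX)) 最小者，
    # 仍有并列时再取 abs(POC_listY-POC_Cur) 最小者
    idx, _ = min(cands, key=lambda t: (abs((t[1] - poc_cur) - (poc_cur - poc_list_x)),
                                       abs(t[1] - poc_cur)))
    return idx


# 文中示例：当前帧序号为4，第一参考帧序号为2，第二参考帧列表为[5, 7, 8]
assert select_opposite_ref_idx(4, 2, [5, 7, 8]) == 0
```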
可选的,预设公式还可以为POC_listX≠POC_listY。需要说明的是,若第二参考帧列表中有多个参考帧的序号满足该公式,则视频解码器202先选择出abs((POC_listY-POC_Cur)-(POC_Cur-POC_listX))最小的参考帧,再选择abs(POC_listY-POC_Cur)最小的参考帧,以确定第二参考帧的索引值。其中,abs为绝对值函数。
示例性的,若当前帧的序号为4,第一参考帧的序号为2,第二参考帧列表为[3,2,1,0],根据公式POC_listX≠POC_listY确定第二参考帧的序号为3,则视频解码器202确定第二参考帧的索引值ref_lY_idx为0。
可选的，预设公式还可以为POC_listY0=2*POC_Cur-POC_listX、(POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0以及POC_listX≠POC_listY0″。在这种情况下，视频解码器202确定第二参考帧的索引值的方法具体为：根据当前帧的序号和第一参考帧的序号，通过公式POC_listY0=2*POC_Cur-POC_listX，计算第一序号，其中，POC_Cur表示当前帧的序号，POC_listX表示第一参考帧的序号，POC_listY0表示第一序号；在第二参考帧列表包括第一序号的情况下，将第一序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值；在第二参考帧列表不包括第一序号的情况下，根据当前帧的序号和第一参考帧的序号，通过(POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0，计算第二序号，其中，POC_listY0'表示第二序号；在第二参考帧列表包括第二序号的情况下，将第二序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值；在第二参考帧列表不包括第二序号的情况下，根据当前帧的序号和第一参考帧的序号，通过公式POC_listX≠POC_listY0″，计算第三序号，其中，POC_listY0″表示第三序号；将第三序号表征的参考帧在第二参考帧列表中的编号确定为第二参考帧的索引值。
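上述三级回退过程可以整理为如下示意性片段（为便于理解而简化，未包含多个序号满足同一条件时的进一步筛选规则）：

```python
def derive_second_ref_idx(poc_cur, poc_list_x, second_ref_list):
    # 第一步：按 POC_listY0 = 2*POC_Cur - POC_listX 计算第一序号
    poc_y0 = 2 * poc_cur - poc_list_x
    if poc_y0 in second_ref_list:
        return second_ref_list.index(poc_y0)
    # 第二步：取满足 (POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0 的第二序号
    for i, poc in enumerate(second_ref_list):
        if (poc_cur - poc_list_x) * (poc - poc_cur) > 0:
            return i
    # 第三步：取满足 POC_listX ≠ POC_listY0'' 的第三序号
    for i, poc in enumerate(second_ref_list):
        if poc != poc_list_x:
            return i
    return None


assert derive_second_ref_idx(4, 2, [6, 8]) == 0        # 第一步命中
assert derive_second_ref_idx(4, 2, [5, 7, 8]) == 0     # 第二步命中
assert derive_second_ref_idx(4, 2, [3, 2, 1, 0]) == 0  # 第三步命中
```

三条断言分别对应文中三个示例：镜像序号命中、异侧序号命中、以及仅能保证两参考帧不同的兜底情形。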
在第二种实现方式中,视频解码器202确定第二运动信息的方法为:视频解码器202解析码流,获取第二参考帧的索引值,并根据第一运动信息和第二参考帧的索引值,确定第二运动信息。第二参考帧的索引值也可以为预先定义的,或者在SPS、PPS、条带头(slice header)或条带片段头(slice segment header)等参数集中指定,本申请实施例对此不作具体限定。
可以看出,在第一种实现方式和第二种实现方式中,视频解码器202均根据第一运动信息和第二参考帧的索引值,确定第二运动信息。
可选的,视频解码器202可以计算当前图像块在第二方向的所有运动信息,也可以计算当前图像块在第二方向的部分运动信息。
下面对视频解码器202根据第一运动信息和第二参考帧的索引值，确定第二运动信息的过程进行描述。
可选的,“视频解码器202根据第一运动信息和第二参考帧的索引值,确定第二运动信息”的方法可以为:获取第一运动信息中的第一参考帧的索引值,并根据第一参考帧的索引值和第一参考帧列表,确定第一参考帧的序号;获取第二参考帧的索引值,并根据第二参考帧的索引值和第二参考帧列表,确定第二参考帧的序号;根据第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量(当前图像块在第一方向的运动矢量);根据下述公式确定第二运动信息中的第二运动矢量:
mv_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)*mv_lX
其中,mv_lY表示第二运动矢量,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号,mv_lX表示第一运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
其中，视频解码器202采用与上述AMVP模式或合并模式中编码端构建候选运动信息列表相同的方式构建候选运动信息列表，并根据第一运动矢量预测值标志在该候选运动信息列表中确定第一预测运动矢量，这样，视频解码器202可将第一预测运动矢量与第一运动矢量差的和确定为第一运动矢量。
可选的，在第一参考帧为当前图像块的前向参考帧，第二参考帧为当前图像块的后向参考帧的情况下，或者，在第一参考帧为当前图像块的后向参考帧，第二参考帧为当前图像块的前向参考帧的情况下，或者，在第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况下，或者，在第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况下，视频解码器202可直接令mv_lY=-mv_lX。
示例性的，“第一参考帧为当前图像块的前向参考帧，第二参考帧为当前图像块的后向参考帧的情况”，或者，“第一参考帧为当前图像块的后向参考帧，第二参考帧为当前图像块的前向参考帧的情况”均可以用公式(POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0表示，或者用公式POC_listY=2*POC_Cur-POC_listX表示。
“第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况”或者“第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况”均可以用公式(POC_Cur-POC_listX)*(POC_listY-POC_Cur)<0表示。
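第二运动矢量的缩放推导可以整理为如下示意性片段（以 (x, y) 二元组表示运动矢量；浮点乘除仅为示意，实际编解码器中通常采用定点缩放）：

```python
def derive_mv_ly(mv_lx, poc_cur, poc_list_x, poc_list_y):
    # mv_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mv_lX
    scale = (poc_cur - poc_list_y) / (poc_cur - poc_list_x)
    return tuple(round(scale * c) for c in mv_lx)


# 镜像参考帧（POC_listY = 2*POC_Cur - POC_listX）时缩放因子为 -1，
# 即退化为 mv_lY = -mv_lX
assert derive_mv_ly((8, -4), 4, 2, 6) == (-8, 4)
```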
可选的,“视频解码器202根据第一运动信息和第二参考帧的索引值,确定第二运动信息”的方法可以为:获取第一运动信息中的第一参考帧的索引值和第一运动矢量差,并根据第一参考帧的索引值和第一参考帧列表,确定第一参考帧的序号;获取第二参考帧的索引值,并根据第二参考帧的索引值和第二参考帧列表确定第二参考帧的序号,根据第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量,第二预测运动矢量为当前图像块在第二方向的预测运动矢量;根据下述公式确定第二运动信息中的第二运动矢量差:
mvd_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)*mvd_lX
其中,mvd_lY表示第二运动矢量差,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号,mvd_lX表示第一运动矢量差;根据第二预测运动矢量和第二运动矢量差,确定第二运动矢量,第二运动矢量为当前图像块在第二方向的运动矢量。
可选的，在第一参考帧为当前图像块的前向参考帧，第二参考帧为当前图像块的后向参考帧的情况下，或者，在第一参考帧为当前图像块的后向参考帧，第二参考帧为当前图像块的前向参考帧的情况下，或者，在第一参考帧和第二参考帧均为当前图像块的前向参考帧的情况下，或者，在第一参考帧和第二参考帧均为当前图像块的后向参考帧的情况下，视频解码器202可直接令mvd_lY=-mvd_lX。
示例性的,若(POC_Cur-POC_listX)*(POC_listY-POC_Cur)>0、POC_listY=2*POC_Cur-POC_listX或(POC_Cur-POC_listX)*(POC_listY-POC_Cur)<0,视频解码器202直接令mvd_lY=-mvd_lX。
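相应地，第二运动矢量差的缩放及第二运动矢量的合成可示意如下（同样为假设性浮点实现）：

```python
def derive_mvd_ly(mvd_lx, poc_cur, poc_list_x, poc_list_y):
    # mvd_lY = (POC_Cur - POC_listY) / (POC_Cur - POC_listX) * mvd_lX
    scale = (poc_cur - poc_list_y) / (poc_cur - poc_list_x)
    return tuple(round(scale * c) for c in mvd_lx)


def second_mv(mvp_ly, mvd_ly):
    # 第二运动矢量 = 第二预测运动矢量 + 第二运动矢量差
    return tuple(p + d for p, d in zip(mvp_ly, mvd_ly))


mvd_ly = derive_mvd_ly((6, -2), 4, 2, 6)  # 镜像情形，缩放因子为 -1
assert mvd_ly == (-6, 2)
assert second_mv((1, 1), mvd_ly) == (-5, 3)
```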
S403、视频解码器202根据第一运动信息和第二运动信息,确定当前图像块的预测像素。
可选的,视频解码器202在S402中确定出第一运动矢量和第二运动矢量,这样,视频解码器202可根据第一运动矢量和第一参考帧列表确定第一参考图像块,根据第二运动矢量和第二参考帧列表确定第二参考图像块,进而根据第一参考图像块和第二参考图像块,确定当前图像块的预测像素,即视频解码器202完成运动补偿过程。
视频解码器202根据第一参考图像块和第二参考图像块确定当前图像块的预测像素的方法可以参考现有的任意一种方法,本申请实施例对此不作具体限定。
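作为其中一种常见做法的示意（并非本申请限定的方法），双向预测像素可取两个参考图像块像素的加权平均，例如等权平均：

```python
def bi_predict(ref_block0, ref_block1):
    # 等权平均：(P0 + P1 + 1) >> 1，+1 用于四舍五入
    return [[(a + b + 1) >> 1 for a, b in zip(r0, r1)]
            for r0, r1 in zip(ref_block0, ref_block1)]


assert bi_predict([[100, 50]], [[101, 52]]) == [[101, 51]]
```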
本申请实施例提供的双向帧间预测方法中，视频解码器202从编码后的码流中仅可获取到第一运动信息；在获取到第一运动信息后，视频解码器202根据该第一运动信息计算出第二运动信息，进而根据该第一运动信息和第二运动信息确定出当前图像块的预测像素。与现有技术相比，本申请提供的方法无需再传输每一图像块在所有方向的运动信息，有效地减少了运动信息的传输数量，提高了传输资源的有效利用率，提高了传输速率，且提高了编解码压缩效率。
图4所示的双向帧间预测方法是针对当前图像块进行描述的,即可以理解为当前图像块基于AMVP模式进行帧间预测。
容易理解的是，本申请提供的双向帧间预测方法也适用于非平动运动模型预测模式，如4参数仿射变换运动模型、6参数仿射变换运动模型、8参数双线性运动模型等。在这种场景中，当前图像块包括至少一个子块，当前图像块的运动信息包括当前图像块的所有子块中每个子块的运动信息。视频解码器202确定每个子块的运动信息（在第一方向的运动信息和在第二方向的运动信息）的方法，与视频解码器202确定当前图像块的运动信息的方法类似。
对于非平动运动模型预测模式,视频解码器202根据第i个控制点在第一方向的运动矢量,采用下述公式计算第i个控制点在第二方向的运动矢量:
mvi_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)*mvi_lX
该公式中,mvi_lY表示第i个控制点在第二方向的运动矢量,mvi_lX表示第i个控制点在第一方向的运动矢量,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号。
相应的,视频解码器202根据第i个控制点在第一方向的运动矢量差,采用下述公式计算第i个控制点在第二方向的运动矢量差:
mvdi_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)*mvdi_lX
该公式中,mvdi_lY表示第i个控制点在第二方向的运动矢量差,mvdi_lX表示第i个控制点在第一方向的运动矢量差,POC_Cur表示当前帧的序号,POC_listX表示第一参考帧的序号,POC_listY表示第二参考帧的序号。
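对于非平动运动模型，各控制点运动矢量可逐一套用同一缩放关系，示意如下（假设性实现，以二元组列表表示各控制点的运动矢量）：

```python
def derive_ctrl_mvs_ly(ctrl_mvs_lx, poc_cur, poc_list_x, poc_list_y):
    # 对每个控制点 i：mvi_lY = (POC_Cur-POC_listY)/(POC_Cur-POC_listX) * mvi_lX
    scale = (poc_cur - poc_list_y) / (poc_cur - poc_list_x)
    return [tuple(round(scale * c) for c in mv) for mv in ctrl_mvs_lx]


# 4参数仿射模型的两个控制点，镜像参考帧情形（缩放因子为 -1）
assert derive_ctrl_mvs_ly([(4, 0), (4, 2)], 4, 2, 6) == [(-4, 0), (-4, -2)]
```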
与视频解码器202相对,本申请实施例中视频编码器102对当前图像块进行双向运动估计,以确定当前图像块在第一方向的运动信息,并根据当前图像块在第一方向的运动信息计算当前图像块在第二方向的运动信息,这样,视频编码器102根据当前图像块在第一方向的运动信息以及当前图像块在第二方向的运动信息确定当前图像块的预测图像块。之后,视频编码器102对当前图像块与其预测图像块之间的残差执行变换和量化等操作,生成码流并向视频解码器202发送该码流。该码流包括当前图像块在第一方向的运动信息。
“视频编码器102根据当前图像块在第一方向的运动信息计算当前图像块在第二方向的运动信息”的方法可以参考上述“视频解码器202根据第一运动信息,确定第二运动信息”方法,即参考上述S402的描述,本申请对此不再进行详细赘述。
综上，对于双向帧间预测，本申请提供的双向帧间预测方法无需传输每一图像块在所有方向的运动信息，仅传输某一方向的运动信息即可，有效地减少了运动信息的传输数量，提高了传输资源的有效利用率，提高了传输速率，且提高了编解码压缩效率。
本申请实施例提供一种双向帧间预测装置，该双向帧间预测装置可以为视频解码器。具体的，双向帧间预测装置用于执行以上双向帧间预测方法中的视频解码器202所执行的步骤。本申请实施例提供的双向帧间预测装置可以包括相应步骤所对应的模块。
本申请实施例可以根据上述方法示例对双向帧间预测装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图5示出上述实施例中所涉及的双向帧间预测装置的一种可能的结构示意图。如图5所示,双向帧间预测装置5包括获取单元50和确定单元51。
获取单元50用于支持该双向帧间预测装置执行上述实施例中的S400、S401等,和/或用于本文所描述的技术的其它过程。
确定单元51用于支持该双向帧间预测装置执行上述实施例中的S402、S403等,和/或用于本文所描述的技术的其它过程。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
当然,本申请实施例提供的双向帧间预测装置包括但不限于上述模块,例如:双向帧间预测装置还可以包括存储单元52。
存储单元52可以用于存储该双向帧间预测装置的程序代码和数据。
在采用集成的单元的情况下,本申请实施例提供的双向帧间预测装置的结构示意图如图6所示。在图6中,双向帧间预测装置6包括:处理模块60和通信模块61。处理模块60用于对双向帧间预测装置的动作进行控制管理,例如,执行上述获取单元50和确定单元51执行的步骤,和/或用于执行本文所描述的技术的其它过程。通信模块61用于支持双向帧间预测装置与其他设备之间的交互。如图6所示,双向帧间预测装置还可以包括存储模块62,存储模块62用于存储双向帧间预测装置的程序代码和数据,例如存储上述存储单元52所保存的内容。
其中,处理模块60可以是处理器或控制器,例如可以是中央处理器(Central Processing Unit,CPU),通用处理器,数字信号处理器(Digital Signal Processor,DSP),ASIC,FPGA或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块61可以是收发器、RF电路或通信接口等。存储模块62可以是存储器。
其中,上述方法实施例涉及的各场景的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
上述双向帧间预测装置5和双向帧间预测装置6均可执行上述图4所示的双向帧间预测方法,双向帧间预测装置5和双向帧间预测装置6具体可以是视频解码装置或者其他具有视频编解码功能的设备。双向帧间预测装置5和双向帧间预测装置6可以用于在解码过程中进行图像预测。
本申请还提供一种终端，该终端包括：一个或多个处理器、存储器、通信接口。该存储器、通信接口与一个或多个处理器耦合；存储器用于存储计算机程序代码，计算机程序代码包括指令，当一个或多个处理器执行指令时，终端执行本申请实施例的双向帧间预测方法。
这里的终端可以是视频显示设备,智能手机,便携式电脑以及其它可以处理视频或者播放视频的设备。
本申请还提供一种视频解码器,包括非易失性存储介质,以及中央处理器,所述非易失性存储介质存储有可执行程序,所述中央处理器与所述非易失性存储介质连接,并执行所述可执行程序以实现本申请实施例的双向帧间预测方法。
本申请还提供一种解码器，所述解码器包括本申请实施例中的双向帧间预测装置（双向帧间预测装置5或双向帧间预测装置6）以及重建模块，其中，所述重建模块用于根据所述双向帧间预测装置得到的预测像素确定当前图像块的重建像素值。
本申请另一实施例还提供一种计算机可读存储介质,该计算机可读存储介质包括一个或多个程序代码,该一个或多个程序包括指令,当终端中的处理器在执行该程序代码时,该终端执行如图4所示的双向帧间预测方法。
在本申请的另一实施例中,还提供一种计算机程序产品,该计算机程序产品包括计算机执行指令,该计算机执行指令存储在计算机可读存储介质中;终端的至少一个处理器可以从计算机可读存储介质读取该计算机执行指令,至少一个处理器执行该计算机执行指令使得终端实施执行图4所示的双向帧间预测方法中的视频解码器202的步骤。
在上述实施例中,可以全部或部分的通过软件,硬件,固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式出现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。
所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质,(例如,软盘,硬盘、磁带)、光介质(例如,DVD)或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中，应该理解到，所揭露的装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，所述模块或单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个装置，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (23)

  1. 一种双向帧间预测方法,其特征在于,包括:
    获取指示信息,所述指示信息用于指示根据第一运动信息确定第二运动信息,所述第一运动信息为当前图像块在第一方向的运动信息,所述第二运动信息为所述当前图像块在第二方向的运动信息;
    获取所述第一运动信息;
    根据所述第一运动信息,确定所述第二运动信息;
    根据所述第一运动信息和所述第二运动信息,确定所述当前图像块的预测像素。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一运动信息,确定第二运动信息,具体包括:
    获取所述第一运动信息中的第一参考帧的索引值,并根据所述第一参考帧的索引值和第一参考帧列表,确定所述第一参考帧的序号,所述第一参考帧为所述当前图像块在所述第一方向的参考帧,所述第一参考帧的索引值为所述第一参考帧在所述第一参考帧列表中的编号;
    获取第二参考帧的索引值,并根据所述第二参考帧的索引值和第二参考帧列表,确定所述第二参考帧的序号,所述第二参考帧为所述当前图像块在所述第二方向的参考帧,所述第二参考帧的索引值为所述第二参考帧在所述第二参考帧列表中的编号;
    根据所述第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量,所述第一运动矢量为所述当前图像块在所述第一方向的运动矢量;
    根据下述公式确定所述第二运动信息中的第二运动矢量:
    mv_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)*mv_lX
    其中,mv_lY表示所述第二运动矢量,POC_Cur表示所述当前帧的序号,POC_listX表示所述第一参考帧的序号,POC_listY表示所述第二参考帧的序号,mv_lX表示所述第一运动矢量,所述第二运动矢量为所述当前图像块在所述第二方向的运动矢量。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述第一运动信息,确定第二运动信息,具体包括:
    获取所述第一运动信息中的第一参考帧的索引值,所述第一参考帧为所述当前图像块在所述第一方向的参考帧,所述第一参考帧的索引值为所述第一参考帧在所述第一参考帧列表中的编号;
    获取第二参考帧的索引值,所述第二参考帧为所述当前图像块在所述第二方向的参考帧,所述第二参考帧的索引值为所述第二参考帧在所述第二参考帧列表中的编号;
    根据所述第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量,所述第一运动矢量为所述当前图像块在所述第一方向的运动矢量;
    所述第一参考帧为所述当前图像块的前向参考帧，所述第二参考帧为所述当前图像块的后向参考帧的情况下，或者，在所述第一参考帧为所述当前图像块的后向参考帧，所述第二参考帧为所述当前图像块的前向参考帧的情况下，或者，在所述第一参考帧和所述第二参考帧均为所述当前图像块的前向参考帧的情况下，或者，在所述第一参考帧和所述第二参考帧均为所述当前图像块的后向参考帧的情况下，根据下述公式确定所述第二运动信息中的第二运动矢量：
    mv_lY=-mv_lX
    其中,mv_lY表示所述第二运动矢量,mv_lX表示所述第一运动矢量,所述第二运动矢量为所述当前图像块在所述第二方向的运动矢量。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述第一运动信息,确定第二运动信息,具体包括:
    获取所述第一运动信息中的第一参考帧的索引值和第一运动矢量差,并根据所述第一参考帧的索引值和第一参考帧列表,确定所述第一参考帧的序号,所述第一参考帧为所述当前图像块在所述第一方向的参考帧,所述第一参考帧的索引值为所述第一参考帧在所述第一参考帧列表中的编号;
    获取第二参考帧的索引值,并根据所述第二参考帧的索引值和第二参考帧列表确定所述第二参考帧的序号,根据所述第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量,所述第二预测运动矢量为所述当前图像块在所述第二方向的预测运动矢量,所述第二参考帧为所述当前图像块在所述第二方向的参考帧,所述第二参考帧的索引值为所述第二参考帧在所述第二参考帧列表中的编号;
    根据下述公式计算所述第二运动信息中的第二运动矢量差:
    mvd_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)*mvd_lX
    其中,mvd_lY表示所述第二运动矢量差,POC_Cur表示所述当前帧的序号,POC_listX表示所述第一参考帧的序号,POC_listY表示所述第二参考帧的序号,mvd_lX表示所述第一运动矢量差;
    根据所述第二预测运动矢量和所述第二运动矢量差,确定第二运动矢量,所述第二运动矢量为所述当前图像块在所述第二方向的运动矢量。
  5. 根据权利要求1所述的方法,其特征在于,所述根据所述第一运动信息,确定第二运动信息,具体包括:
    获取所述第一运动信息中的第一参考帧的索引值和第一运动矢量,所述第一参考帧为所述当前图像块在所述第一方向的参考帧,所述第一参考帧的索引值为所述第一参考帧在所述第一参考帧列表中的编号;
    获取第二参考帧的索引值,根据所述第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量,所述第二预测运动矢量为所述当前图像块在所述第二方向的预测运动矢量,所述第二参考帧为所述当前图像块在所述第二方向的参考帧,所述第二参考帧的索引值为所述第二参考帧在所述第二参考帧列表中的编号;
    在所述第一参考帧为所述当前图像块的前向参考帧,所述第二参考帧为所述当前图像块的后向参考帧的情况下,或者,在所述第一参考帧为所述当前图像块的后向参考帧,所述第二参考帧为所述当前图像块的前向参考帧的情况下,或者,在所述第一参考帧和所述第二参考帧均为所述当前图像块的前向参考帧的情况下,或者,在所述第一参考帧和所述第二参考帧均为所述当前图像块的后向参考帧的情况下,根据下述公式计算所述第二运动信息中的第二运动矢量差:
    mvd_lY=-mvd_lX
    其中,mvd_lY表示所述第二运动矢量差,mvd_lX表示所述第一运动矢量差;
    根据所述第二预测运动矢量和所述第二运动矢量差,确定第二运动矢量,所述第二运动矢量为所述当前图像块在所述第二方向的运动矢量。
  6. 根据权利要求2-5中任意一项所述的方法,其特征在于,所述获取第二参考帧的索引值,具体包括:
    根据所述当前帧的序号和所述第一参考帧的序号,通过公式POC_listY0=2*POC_Cur-POC_listX,计算第一序号,其中,POC_Cur表示所述当前帧的序号,POC_listX表示所述第一参考帧的序号,POC_listY0表示所述第一序号;
    在所述第二参考帧列表包括所述第一序号的情况下,将所述第一序号表征的参考帧在所述第二参考帧列表中的编号确定为所述第二参考帧的索引值。
  7. 根据权利要求2-5中任意一项所述的方法,其特征在于,所述获取第二参考帧的索引值,具体包括:
    根据所述当前帧的序号和所述第一参考帧的序号,通过公式(POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0,计算第二序号,其中,POC_listY0'表示所述第二序号;
    在所述第二参考帧列表包括所述第二序号的情况下,将所述第二序号表征的参考帧在所述第二参考帧列表中的编号确定为所述第二参考帧的索引值。
  8. 根据权利要求2-5中任意一项所述的方法,其特征在于,所述获取第二参考帧的索引值,具体包括:
    根据所述当前帧的序号和所述第一参考帧的序号,通过公式POC_listX≠POC_listY0″,计算第三序号,其中,POC_listY0″表示所述第三序号;
    将所述第三序号表征的参考帧在所述第二参考帧列表中的编号确定为所述第二参考帧的索引值。
  9. 一种双向帧间预测方法,其特征在于,包括:
    解析码流，获取第一标识，所述第一标识用于指示是否根据第一运动信息确定第二运动信息，所述第一运动信息为当前图像块在第一方向的运动信息，所述第二运动信息为所述当前图像块在第二方向的运动信息；若所述第一标识的取值为第一预设值，获取所述第一运动信息，并根据所述第一运动信息确定所述第二运动信息；根据所述第一运动信息和所述第二运动信息，确定所述当前图像块的预测像素；
    或者,
    解析码流，获取第二标识，所述第二标识用于指示是否采用运动信息推导算法计算所述当前图像块的运动信息；若所述第二标识的取值为第二预设值，获取第三标识，所述第三标识用于指示是否根据第一运动信息确定第二运动信息，所述第一运动信息为当前图像块在第一方向的运动信息，所述第二运动信息为所述当前图像块在第二方向的运动信息；若所述第三标识取值为第三预设值，获取所述第一运动信息，并根据所述第一运动信息确定所述第二运动信息；根据所述第一运动信息和所述第二运动信息，确定所述当前图像块的预测像素；
    或者,
    解析码流，获取第二标识，所述第二标识用于指示是否采用运动信息推导算法计算所述当前图像块的运动信息；若所述第二标识的取值为第二预设值，获取第一运动信息，并根据所述第一运动信息确定所述第二运动信息，所述第一运动信息为当前图像块在第一方向的运动信息，所述第二运动信息为所述当前图像块在第二方向的运动信息；根据所述第一运动信息和所述第二运动信息，确定所述当前图像块的预测像素；
    或者,
    解析码流，获取第四标识，所述第四标识用于指示是否采用运动信息推导算法计算所述当前图像块的运动信息；若所述第四标识的取值为第四预设值，根据第一参考帧列表和第二参考帧列表，确定第一参考帧的索引值和第二参考帧的索引值，所述第一参考帧列表为所述当前图像块在所述第一方向的参考帧列表，所述第二参考帧列表为所述当前图像块在所述第二方向的参考帧列表，所述第一参考帧为所述当前图像块在所述第一方向的参考帧，所述第二参考帧为所述当前图像块在所述第二方向的参考帧；获取第一运动矢量差和第一运动矢量预测值标志，并根据第一运动信息确定第二运动信息，所述第一运动信息包括所述第一参考帧的索引值、所述第一运动矢量差和所述第一运动矢量预测值标志，所述第二运动信息为所述当前图像块在第二方向的运动信息；根据所述第一运动信息和所述第二运动信息，确定所述当前图像块的预测像素。
  10. 一种双向帧间预测装置,其特征在于,包括:
    获取单元,用于获取指示信息,所述指示信息用于指示根据第一运动信息确定第二运动信息,所述第一运动信息为当前图像块在第一方向的运动信息,所述第二运动信息为所述当前图像块在第二方向的运动信息,以及获取所述第一运动信息;
    确定单元,用于根据所述获取单元获取到的所述第一运动信息,确定所述第二运动信息,以及用于根据所述第一运动信息和所述第二运动信息,确定所述当前图像块的预测像素。
  11. 根据权利要求10所述的装置,其特征在于,所述确定单元具体用于:
    获取所述第一运动信息中的第一参考帧的索引值,并根据所述第一参考帧的索引值和第一参考帧列表,确定所述第一参考帧的序号,所述第一参考帧为所述当前图像块在所述第一方向的参考帧,所述第一参考帧的索引值为所述第一参考帧在所述第一参考帧列表中的编号;
    获取第二参考帧的索引值,并根据所述第二参考帧的索引值和第二参考帧列表,确定所述第二参考帧的序号,所述第二参考帧为所述当前图像块在所述第二方向的参考帧,所述第二参考帧的索引值为所述第二参考帧在所述第二参考帧列表中的编号;
    根据所述第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量,所述第一运动矢量为所述当前图像块在所述第一方向的运动矢量;
    根据下述公式确定所述第二运动信息中的第二运动矢量:
    mv_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)*mv_lX
    其中,mv_lY表示所述第二运动矢量,POC_Cur表示所述当前帧的序号,POC_listX表示所述第一参考帧的序号,POC_listY表示所述第二参考帧的序号,mv_lX表示所述第一运动矢量,所述第二运动矢量为所述当前图像块在所述第二方向的运动矢量。
  12. 根据权利要求10所述的装置,其特征在于,所述确定单元具体用于:
    获取所述第一运动信息中的第一参考帧的索引值,所述第一参考帧为所述当前图像块在所述第一方向的参考帧,所述第一参考帧的索引值为所述第一参考帧在所述第一参考帧列表中的编号;
    获取第二参考帧的索引值,所述第二参考帧为所述当前图像块在所述第二方向的参考帧,所述第二参考帧的索引值为所述第二参考帧在所述第二参考帧列表中的编号;
    根据所述第一运动信息中的第一运动矢量差和第一运动矢量预测值标志,确定第一运动矢量,所述第一运动矢量为所述当前图像块在所述第一方向的运动矢量;
    所述第一参考帧为所述当前图像块的前向参考帧,所述第二参考帧为所述当前图像块的后向参考帧的情况下,或者,在所述第一参考帧为所述当前图像块的后向参考帧,所述第二参考帧为所述当前图像块的前向参考帧的情况下,或者,在所述第一参考帧和所述第二参考帧均为所述当前图像块的前向参考帧的情况下,或者,在所述第一参考帧和所述第二参考帧均为所述当前图像块的后向参考帧的情况下,根据下述公式计算所述第二运动信息中的第二运动矢量:
    mv_lY=-mv_lX
    其中,mv_lY表示所述第二运动矢量,mv_lX表示所述第一运动矢量,所述第二运动矢量为所述当前图像块在所述第二方向的运动矢量。
  13. 根据权利要求10所述的装置,其特征在于,所述确定单元具体用于:
    获取所述第一运动信息中的第一参考帧的索引值和第一运动矢量差,并根据所述第一参考帧的索引值和第一参考帧列表,确定所述第一参考帧的序号,所述第一参考帧为所述当前图像块在所述第一方向的参考帧,所述第一参考帧的索引值为所述第一参考帧在所述第一参考帧列表中的编号;
    获取第二参考帧的索引值,并根据所述第二参考帧的索引值和第二参考帧列表确定所述第二参考帧的序号,根据所述第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量,所述第二预测运动矢量为所述当前图像块在所述第二方向的预测运动矢量,所述第二参考帧为所述当前图像块在所述第二方向的参考帧,所述第二参考帧的索引值为所述第二参考帧在所述第二参考帧列表中的编号;
    根据下述公式计算所述第二运动信息中的第二运动矢量差:
    mvd_lY=(POC_Cur-POC_listY)/(POC_Cur-POC_listX)*mvd_lX
    其中,mvd_lY表示所述第二运动矢量差,POC_Cur表示所述当前帧的序号,POC_listX表示所述第一参考帧的序号,POC_listY表示所述第二参考帧的序号,mvd_lX表示所述第一运动矢量差;
    根据所述第二预测运动矢量和所述第二运动矢量差,确定第二运动矢量,所述第二运动矢量为所述当前图像块在所述第二方向的运动矢量。
  14. 根据权利要求10所述的装置,其特征在于,所述确定单元具体用于:
    获取所述第一运动信息中的第一参考帧的索引值和第一运动矢量,所述第一参考帧为所述当前图像块在所述第一方向的参考帧,所述第一参考帧的索引值为所述第一参考帧在所述第一参考帧列表中的编号;
    获取第二参考帧的索引值，根据所述第二参考帧的索引值和第二候选预测运动矢量列表确定第二预测运动矢量，所述第二预测运动矢量为所述当前图像块在所述第二方向的预测运动矢量，所述第二参考帧为所述当前图像块在所述第二方向的参考帧，所述第二参考帧的索引值为所述第二参考帧在所述第二参考帧列表中的编号；
    在所述第一参考帧为所述当前图像块的前向参考帧,所述第二参考帧为所述当前图像块的后向参考帧的情况下,或者,在所述第一参考帧为所述当前图像块的后向参考帧,所述第二参考帧为所述当前图像块的前向参考帧的情况下,或者,在所述第一参考帧和所述第二参考帧均为所述当前图像块的前向参考帧的情况下,或者,在所述第一参考帧和所述第二参考帧均为所述当前图像块的后向参考帧的情况下,根据下述公式计算所述第二运动信息中的第二运动矢量差:
    mvd_lY=-mvd_lX
    其中,mvd_lY表示所述第二运动矢量差,mvd_lX表示所述第一运动矢量差;
    根据所述第二预测运动矢量和所述第二运动矢量差,确定第二运动矢量,所述第二运动矢量为所述当前图像块在所述第二方向的运动矢量。
  15. 根据权利要求11-14中任意一项所述的装置,其特征在于,所述获取单元具体用于:
    根据所述当前帧的序号和所述第一参考帧的序号,通过公式POC_listY0=2*POC_Cur-POC_listX,计算第一序号,其中,POC_Cur表示所述当前帧的序号,POC_listX表示所述第一参考帧的序号,POC_listY0表示所述第一序号;
    在所述第二参考帧列表包括所述第一序号的情况下,将所述第一序号表征的参考帧在所述第二参考帧列表中的编号确定为所述第二参考帧的索引值。
  16. 根据权利要求11-14中任意一项所述的装置,其特征在于,所述获取单元具体用于:
    根据所述当前帧的序号和所述第一参考帧的序号,通过公式(POC_Cur-POC_listX)*(POC_listY0'-POC_Cur)>0,计算第二序号,其中,POC_listY0'表示所述第二序号;
    在所述第二参考帧列表包括所述第二序号的情况下,将所述第二序号表征的参考帧在所述第二参考帧列表中的编号确定为所述第二参考帧的索引值。
  17. 根据权利要求11-14中任意一项所述的装置,其特征在于,所述获取单元具体用于:
    根据所述当前帧的序号和所述第一参考帧的序号,通过公式POC_listX≠POC_listY0″,计算第三序号,其中,POC_listY0″表示所述第三序号;
    将所述第三序号表征的参考帧在所述第二参考帧列表中的编号确定为所述第二参考帧的索引值。
  18. 一种双向帧间预测装置,其特征在于,包括:
    获取单元，用于解析码流，获取第一标识，所述第一标识用于指示是否根据第一运动信息确定第二运动信息，所述第一运动信息为当前图像块在第一方向的运动信息，所述第二运动信息为所述当前图像块在第二方向的运动信息，以及用于若所述第一标识的取值为第一预设值，获取所述第一运动信息；确定单元，用于根据所述获取单元获取到的所述第一运动信息确定所述第二运动信息，以及用于根据所述第一运动信息和所述第二运动信息，确定所述当前图像块的预测像素；
    或者,
    获取单元，用于解析码流，获取第二标识，所述第二标识用于指示是否采用运动信息推导算法计算所述当前图像块的运动信息，以及用于若所述第二标识的取值为第二预设值，获取第三标识，所述第三标识用于指示是否根据第一运动信息确定第二运动信息，所述第一运动信息为当前图像块在第一方向的运动信息，所述第二运动信息为所述当前图像块在第二方向的运动信息，以及用于若所述第三标识取值为第三预设值，获取所述第一运动信息；处理单元，用于根据所述获取单元获取到的所述第一运动信息确定所述第二运动信息，以及用于根据所述第一运动信息和所述第二运动信息，确定所述当前图像块的预测像素；
    或者,
    获取单元，用于解析码流，获取第二标识，所述第二标识用于指示是否采用运动信息推导算法计算所述当前图像块的运动信息，以及用于若所述第二标识的取值为第二预设值，获取第一运动信息；确定单元，用于根据所述获取单元获取到的所述第一运动信息确定所述第二运动信息，所述第一运动信息为当前图像块在第一方向的运动信息，所述第二运动信息为所述当前图像块在第二方向的运动信息，以及用于根据所述第一运动信息和所述第二运动信息，确定所述当前图像块的预测像素；
    或者,
    获取单元，用于解析码流，获取第四标识，所述第四标识用于指示是否采用运动信息推导算法计算所述当前图像块的运动信息；确定单元，用于若所述获取单元获取到的所述第四标识的取值为第四预设值，根据第一参考帧列表和第二参考帧列表，确定第一参考帧的索引值和第二参考帧的索引值，所述第一参考帧列表为所述当前图像块在所述第一方向的参考帧列表，所述第二参考帧列表为所述当前图像块在所述第二方向的参考帧列表，所述第一参考帧为所述当前图像块在所述第一方向的参考帧，所述第二参考帧为所述当前图像块在所述第二方向的参考帧；所述获取单元，还用于获取第一运动矢量差和第一运动矢量预测值标志；所述确定单元，还用于根据第一运动信息确定第二运动信息，所述第一运动信息包括所述第一参考帧的索引值、所述第一运动矢量差和所述第一运动矢量预测值标志，所述第二运动信息为所述当前图像块在第二方向的运动信息；根据所述第一运动信息和所述第二运动信息，确定所述当前图像块的预测像素。
  19. 一种终端,其特征在于,所述终端包括:一个或多个处理器、存储器和通信接口;
    所述存储器、所述通信接口与所述一个或多个处理器连接;所述终端通过所述通信接口与其他设备通信,所述存储器用于存储计算机程序代码,所述计算机程序代码包括指令,当所述一个或多个处理器执行所述指令时,所述终端执行如权利要求1-8中任意一项所述的双向帧间预测方法或者执行如权利要求9所述的双向帧间预测方法。
  20. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在终端上运行时,使得所述终端执行如权利要求1-8中任意一项所述的双向帧间预测方法或者执行如权利要求9所述的双向帧间预测方法。
  21. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在终端上运行时,使得所述终端执行如权利要求1-8中任意一项所述的双向帧间预测方法或者执行如权利要求9所述的双向帧间预测方法。
  22. 一种视频解码器,包括非易失性存储介质以及中央处理器,其特征在于,所述非易失性存储介质存储有可执行程序,所述中央处理器与所述非易失性存储介质连接,当所述中央处理器执行所述可执行程序时,所述视频解码器执行如权利要求1-8中任意一项所述的双向帧间预测方法或者执行如权利要求9所述的双向帧间预测方法。
  23. 一种解码器,其特征在于,所述解码器包括如权利要求10-17中任意一项所述的双向帧间预测装置以及重建模块,其中,所述重建模块用于根据所述双向帧间预测装置得到的预测像素确定当前图像块的重建像素值;或者,所述解码器包括如权利要求18所述的双向帧间预测装置以及重建模块,其中,所述重建模块用于根据所述双向帧间预测装置得到的预测像素确定当前图像块的重建像素值。
PCT/CN2019/071471 2018-03-29 2019-01-11 一种双向帧间预测方法及装置 WO2019184556A1 (zh)


Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810274457.X 2018-03-29
CN201810274457.XA CN110324637B (zh) 2018-03-29 2018-03-29 一种双向帧间预测方法及装置




