CN112040242A - Inter-frame prediction method, device and equipment based on advanced motion vector expression - Google Patents


Info

Publication number
CN112040242A
CN112040242A (application CN202010753396.2A)
Authority
CN
China
Prior art keywords
motion information
offset
list
motion
current block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010753396.2A
Other languages
Chinese (zh)
Inventor
陈秀丽
江东
曾飞洋
林聚财
殷俊
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010753396.2A
Publication of CN112040242A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The method constructs a basic motion information list for the current block using enhanced duplicate checking, the basic motion information list comprising at least one piece of basic motion information; determines a plurality of offset motion vectors using an offset distance list and an offset direction list; offsets the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; and calculates an offset prediction value of the current block using the plurality of pieces of offset motion information. By checking the basic motion information for duplicates in the enhanced manner, the method improves duplicate-checking accuracy, reduces duplicate basic motion information caused by omissions, reduces the amount of data computation, and improves efficiency.

Description

Inter-frame prediction method, device and equipment based on advanced motion vector expression
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for inter-frame prediction based on advanced motion vector expression.
Background
A video coding and decoding system mainly comprises three parts: coding, transmission and decoding. Because the data volume of video images is large, the main function of video coding is to compress video pixel data (RGB, YUV, etc.) into a video code stream, thereby reducing the data volume of the video, lowering the network bandwidth required during transmission and reducing storage space.
Video coding mainly comprises video acquisition, prediction, transform and quantization, and entropy coding. Prediction is divided into intra-frame prediction and inter-frame prediction, which remove the spatial and temporal redundancy of video images, respectively.
Generally, the luminance and chrominance values of pixels in temporally adjacent frames are close and strongly correlated. Inter-frame prediction uses methods such as motion search to find, in a reference frame, the matching block closest to the current block, and records motion information such as the Motion Vector (MV) and the reference frame index between the current block and the matching block. The motion information is encoded and transmitted to the decoding end. At the decoding end, once the MV of the current block is parsed from the corresponding syntax elements, the decoder can locate the matching block of the current block and copy its pixel values to the current block as the inter-frame prediction value of the current block.
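The block-matching idea described above can be sketched as follows. This is an illustrative full-search implementation with a sum-of-absolute-differences cost, not the codec's actual search; the function names, parameters and search range are hypothetical choices for the sketch.

```python
# Illustrative full-search block matching: find the best-matching block in a
# reference frame for a block of the current frame, and return the motion
# vector (displacement) between them.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def motion_search(cur, ref, x, y, size, search_range):
    """Full search in [-search_range, search_range] around (x, y).

    Returns (mvx, mvy): the displacement of the best match in `ref`.
    """
    cur_block = [row[x:x + size] for row in cur[y:y + size]]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            rx, ry = x + dx, y + dy
            # Skip candidates that fall outside the reference frame.
            if rx < 0 or ry < 0 or ry + size > len(ref) or rx + size > len(ref[0]):
                continue
            cand = [row[rx:rx + size] for row in ref[ry:ry + size]]
            cost = sad(cur_block, cand)
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv
```

At the decoder, only the signaled MV is needed to locate and copy the matching block, which is why encoding the motion information (rather than the pixels) saves bits.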
Disclosure of Invention
The technical problem mainly solved by the present application is inaccurate duplicate checking; to this end, the application provides an inter-frame prediction method, device and equipment based on advanced motion vector expression.
To solve the above technical problem, the present application adopts a first technical solution: an inter-frame prediction method based on advanced motion vector expression, comprising: constructing a basic motion information list for the current block using enhanced duplicate checking, the basic motion information list comprising at least one piece of basic motion information; determining a plurality of offset motion vectors using an offset distance list and an offset direction list; offsetting the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; and calculating an offset prediction value of the current block using the plurality of pieces of offset motion information.
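The offsetting step above can be sketched as the cross product of a distance list and a direction list, in the style of MMVD/UMVE-type signaling. The particular distance and direction values below are illustrative assumptions, not the patent's tables.

```python
# Offset motion vectors as the cross product of an offset distance list and
# an offset direction list, then applied to a base motion vector.

OFFSET_DISTANCES = [1, 2, 4, 8]                         # assumed quarter-pel units
OFFSET_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # +x, -x, +y, -y

def offset_motion_vectors():
    """Every (distance, direction) combination as an offset vector."""
    return [(d * dx, d * dy) for d in OFFSET_DISTANCES
            for dx, dy in OFFSET_DIRECTIONS]

def apply_offsets(base_mv):
    """Offset a base motion vector by every candidate offset vector."""
    bx, by = base_mv
    return [(bx + ox, by + oy) for ox, oy in offset_motion_vectors()]
```

With 4 distances and 4 directions, each base motion vector yields 16 offset candidates; the encoder evaluates the prediction value of each and signals the chosen indices.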
The present application also includes a second technical solution: an inter-frame prediction device based on advanced motion vector expression, comprising a construction module, an offset motion vector determination module, an offset motion information acquisition module and a calculation module. The construction module is used for constructing a basic motion information list for the current block using enhanced duplicate checking, the basic motion information list comprising at least one piece of basic motion information; the offset motion vector determination module is used for determining a plurality of offset motion vectors using an offset distance list and an offset direction list; the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; and the calculation module is used for calculating an offset prediction value of the current block using the plurality of pieces of offset motion information.
The present application further includes a third technical solution: an inter-frame prediction method based on advanced motion vector expression, comprising: constructing a basic motion information list for the current block, the list comprising at least two pieces of basic motion information; calculating new basic motion information from the at least two pieces of basic motion information and filling the new basic motion information into the basic motion information list; determining a plurality of offset motion vectors using an offset distance list and an offset direction list; offsetting the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; and calculating an offset prediction value of the current block using the plurality of pieces of offset motion information.
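The third scheme derives new base motion information from at least two existing entries. As a hedged illustration, one plausible combination is the component-wise average of two base motion vectors; the patent text does not fix the formula, so the averaging rule and the simple duplicate check below are assumptions.

```python
# Fill a base motion information list with averaged candidates (assumed
# combination rule) until it reaches a target length.

def average_mv(mv_a, mv_b):
    """Component-wise average with integer truncation toward zero."""
    return (int((mv_a[0] + mv_b[0]) / 2), int((mv_a[1] + mv_b[1]) / 2))

def fill_base_list(base_list, max_len):
    """Append averaged candidates of successive pairs until max_len,
    skipping candidates already present (simple duplicate check)."""
    i, j = 0, 1
    while len(base_list) < max_len and j < len(base_list):
        cand = average_mv(base_list[i], base_list[j])
        if cand not in base_list:
            base_list.append(cand)
        i, j = j, j + 1
    return base_list
```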
The application also comprises a fourth technical scheme, wherein the inter-frame prediction device based on the advanced motion vector expression comprises a construction module, a filling module, an offset motion vector determination module, an offset motion information acquisition module and a calculation module, wherein the construction module is used for constructing a basic motion information list for the current block, and the basic motion information list comprises at least two pieces of basic motion information; the filling module is used for calculating by utilizing at least two pieces of basic motion information to obtain new basic motion information and filling the new basic motion information into a basic motion information list; the offset motion vector determining module is used for determining a plurality of offset motion vectors by using the offset distance list and the offset direction list; the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; the calculation module is used for calculating an offset prediction value of the current block by using a plurality of offset motion information.
The present application further includes a fifth technical solution: an inter-frame prediction method based on advanced motion vector expression, comprising: constructing a basic motion information list for the current block based on candidate motion information of the current block, the basic motion information list comprising at least one piece of basic motion information, wherein the sources of the candidate motion information include a temporal candidate block of the current block, and the temporal candidate block includes a temporal co-located block of at least one sub-block of the current block; determining a plurality of offset motion vectors using an offset distance list and an offset direction list; offsetting the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; and calculating an offset prediction value of the current block using the plurality of pieces of offset motion information.
The application also comprises a sixth technical scheme, wherein the inter-frame prediction device based on the high-level motion vector expression comprises a construction module, an offset motion vector determination module, an offset motion information acquisition module and a calculation module, wherein the construction module is used for constructing a basic motion information list for the current block based on the candidate motion information of the current block, the basic motion information list comprises at least one piece of basic motion information, the source of the candidate motion information comprises a time domain candidate block of the current block, and the time domain candidate block comprises a time domain co-located block of at least one sub-block of the current block; an offset motion vector determination module to determine a plurality of offset motion vectors using an offset distance list and an offset direction list; the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; the calculation module is configured to calculate an offset prediction value of the current block using the plurality of offset motion information.
The present application further includes a seventh technical solution, where an inter-frame prediction method based on advanced motion vector expression includes: constructing a basic motion information list for the current block based on the candidate motion information of the current block, wherein the basic motion information list comprises at least one piece of basic motion information, and the source of the candidate motion information comprises at least one historical motion vector; determining a plurality of offset motion vectors using the offset distance list and the offset direction list; shifting the basic motion vector in each basic motion information by using the plurality of shifting motion vectors to obtain a plurality of shifting motion information; an offset predictor of the current block is calculated using the plurality of offset motion information.
The application also comprises an eighth technical scheme, wherein the inter-frame prediction device based on the high-level motion vector expression comprises a construction module, an offset motion vector determination module, an offset motion information acquisition module and a calculation module, wherein the construction module is used for constructing a basic motion information list for the current block based on the candidate motion information of the current block, the basic motion information list comprises at least one piece of basic motion information, and the source of the candidate motion information comprises at least one historical motion vector; the offset motion vector determining module is used for determining a plurality of offset motion vectors by utilizing an offset distance list and an offset direction list; the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; the calculation module is used for calculating an offset prediction value of the current block by using a plurality of offset motion information.
The present application further includes a ninth technical solution, where an inter-frame prediction method based on advanced motion vector expression includes: constructing a base motion information list for the current block, the base motion information list including at least one base motion information; scaling the basic motion information to a specified precision, and filling the scaled basic motion information into a basic motion information list; determining a plurality of offset motion vectors using the offset distance list and the offset direction list; shifting the basic motion vector in each basic motion information by using a plurality of shift motion vectors to obtain a plurality of shift motion information; an offset predictor of the current block is calculated using the plurality of offset motion information.
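The ninth scheme scales base motion information to a specified precision before filling it into the list. The sketch below assumes MVs are stored in fine (e.g. 1/16-pel) units and are rounded to a coarser multiple with round-half-away-from-zero; the storage unit and rounding rule are assumptions for illustration.

```python
# Round each MV component to a multiple of 2**shift while keeping the fine
# storage scale (assumed 1/16-pel units).

def round_to_precision(mv, shift):
    """Round half away from zero to a multiple of 2**shift."""
    def r(v):
        offset = 1 << (shift - 1) if shift > 0 else 0
        if v >= 0:
            return ((v + offset) >> shift) << shift
        return -(((-v + offset) >> shift) << shift)
    return (r(mv[0]), r(mv[1]))
```

For example, with `shift=2` (multiples of 4 storage units), an MV of (5, -5) rounds to (4, -4), so a base MV inherited at one precision can be reused consistently at another.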
The application also comprises a tenth technical scheme, wherein the inter-frame prediction device based on the advanced motion vector expression comprises a construction module, a filling module, an offset motion vector determination module, an offset motion information acquisition module and a calculation module, wherein the construction module is used for constructing a basic motion information list for the current block, and the basic motion information list comprises at least one piece of basic motion information; the filling module is used for scaling the basic motion information to specified precision and filling the scaled basic motion information into a basic motion information list; the offset motion vector determining module is used for determining a plurality of offset motion vectors by utilizing the offset distance list and the offset direction list; the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; the calculation module is used for calculating an offset prediction value of the current block by using a plurality of offset motion information.
The present application further includes an eleventh technical solution: an inter-frame prediction method based on advanced motion vector expression, comprising: constructing a basic motion information list for the current block, the list comprising at least one piece of basic motion information; determining a plurality of offset motion vectors using an offset distance list and an offset direction list; offsetting the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; calculating offset prediction values of the current block using the plurality of pieces of offset motion information; performing rough selection on the plurality of pieces of offset motion information according to the offset prediction values to obtain a rough selection result; taking the rough selection result as a starting point, performing motion search at different preset motion vector precisions to obtain a plurality of search prediction values; and selecting the motion information corresponding to the search prediction value with the smallest evaluation index as the advanced motion vector expression motion information of the current block.
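The two-stage selection above can be sketched as follows: rank the offset candidates by a cost function (standing in here for the offset-prediction evaluation), keep the best few, then refine each survivor with small perturbations at successively finer step sizes. The function names, kept-candidate count and step sizes are illustrative assumptions, not the patent's values.

```python
# Rough selection followed by multi-precision greedy refinement.

def rough_select(candidates, cost_fn, keep=2):
    """Keep the `keep` lowest-cost candidates (the rough-selection result)."""
    return sorted(candidates, key=cost_fn)[:keep]

def refine(start, cost_fn, steps=(4, 2, 1)):
    """At each precision (step size), move to the best of the current
    position and its four axis-aligned neighbors."""
    best = start
    for step in steps:
        neighbors = [best] + [(best[0] + dx * step, best[1] + dy * step)
                              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        best = min(neighbors, key=cost_fn)
    return best

def search(candidates, cost_fn):
    """Rough selection, then motion search at the preset precisions."""
    refined = [refine(c, cost_fn) for c in rough_select(candidates, cost_fn)]
    return min(refined, key=cost_fn)
```

Refining only the rough-selection survivors keeps the fine-precision search cheap while still improving on the best raw offset candidate.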
The present application also includes a twelfth technical solution: an inter-frame prediction device based on advanced motion vector expression, comprising a construction module, an offset motion vector determination module, an offset motion information acquisition module, a calculation module, a rough selection module, a search prediction value acquisition module and an advanced motion vector expression motion information acquisition module. The construction module is used for constructing a basic motion information list for the current block, the list comprising at least one piece of basic motion information; the offset motion vector determination module is used for determining a plurality of offset motion vectors using an offset distance list and an offset direction list; the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; the calculation module is used for calculating offset prediction values of the current block using the plurality of pieces of offset motion information; the rough selection module is used for performing rough selection on the plurality of pieces of offset motion information according to the offset prediction values to obtain a rough selection result; the search prediction value acquisition module is used for performing motion search at different preset motion vector precisions, taking the rough selection result as a starting point, to obtain a plurality of search prediction values; and the advanced motion vector expression motion information acquisition module is used for selecting the motion information corresponding to the search prediction value with the smallest evaluation index as the advanced motion vector expression motion information of the current block.
The present application further includes a thirteenth technical solution: an inter-frame prediction method based on advanced motion vector expression, comprising: constructing a basic motion information list for the current block, the list comprising at least one piece of basic motion information; determining a plurality of offset motion vectors using an offset distance list and an offset direction list; offsetting the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; calculating offset prediction values of the current block using the plurality of pieces of offset motion information; selecting target motion information from the plurality of pieces of offset motion information according to the offset prediction values; correcting the motion vector in the target motion information with a plurality of correction motion vectors to obtain a plurality of pieces of correction motion information; calculating correction prediction values of the current block using the plurality of pieces of correction motion information; and selecting the motion information corresponding to the correction prediction value with the smallest evaluation index as the advanced motion vector expression motion information of the current block.
The present application also includes a fourteenth technical solution: an inter-frame prediction device based on advanced motion vector expression, comprising a construction module, an offset motion vector determination module, an offset motion information acquisition module, a calculation module, a selection module, a correction module, a correction calculation module and an advanced motion vector expression motion information acquisition module. The construction module is used for constructing a basic motion information list for the current block, the list comprising at least one piece of basic motion information; the offset motion vector determination module is used for determining a plurality of offset motion vectors using an offset distance list and an offset direction list; the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information with the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; the calculation module is used for calculating offset prediction values of the current block using the plurality of pieces of offset motion information; the selection module is used for selecting target motion information from the plurality of pieces of offset motion information according to the offset prediction values; the correction module is used for correcting the motion vector in the target motion information with a plurality of correction motion vectors to obtain a plurality of pieces of correction motion information; the correction calculation module is used for calculating correction prediction values of the current block using the plurality of pieces of correction motion information; and the advanced motion vector expression motion information acquisition module is used for selecting the motion information corresponding to the correction prediction value with the smallest evaluation index as the advanced motion vector expression motion information of the current block.
The present application further includes a fifteenth technical solution: an inter-frame prediction method based on advanced motion vector expression, comprising: constructing a basic motion information list for the current block, the list comprising at least one piece of basic motion information; acquiring a plurality of initial offset values using an offset distance list; if the motion vector in the basic motion information is a bidirectional motion vector, scaling the initial offset value using the picture display order difference to obtain a motion vector offset value, and if the motion vector in the basic motion information is unidirectional, using the initial offset value as the motion vector offset value; determining a plurality of offset motion vectors by combining the motion vector offset values with the plurality of offset directions in an offset direction list; offsetting the basic motion vector in each piece of basic motion information with the plurality of motion vector offset values to obtain a plurality of pieces of offset motion information; and calculating an offset prediction value of the current block using the plurality of pieces of offset motion information.
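The offset scaling in the fifteenth scheme can be sketched as follows: for a bidirectional base MV, the initial offset is kept for one reference list and scaled for the other in proportion to the picture-order-count (display order) differences, while a unidirectional MV uses the initial offset unchanged. The exact scaling rule is an assumption modeled on MMVD-style derivation, not the patent's formula.

```python
# Per-reference-list motion vector offset values for one offset distance.

def mv_offset_values(init_offset, poc_cur, poc_ref0, poc_ref1=None):
    """Return per-list offset values; poc_ref1 is None for a unidirectional MV."""
    if poc_ref1 is None:
        return (init_offset,)            # unidirectional: use the offset as-is
    d0 = poc_cur - poc_ref0              # display-order distance to list-0 ref
    d1 = poc_cur - poc_ref1              # display-order distance to list-1 ref
    # Scale the list-1 offset by the ratio of display-order distances.
    scaled = init_offset * d1 // d0 if d0 != 0 else init_offset
    return (init_offset, scaled)
```

Note that when the two references lie on opposite sides of the current picture, the scaled offset naturally flips sign, mirroring the offset for the backward reference.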
The present application also includes a sixteenth technical solution: an inter-frame prediction device based on advanced motion vector expression, comprising a construction module, an initial offset value acquisition module, a motion vector offset value determination module, an offset motion vector determination module, an offset motion information acquisition module and a calculation module. The construction module is used for constructing a basic motion information list for the current block, the list comprising at least one piece of basic motion information; the initial offset value acquisition module is used for acquiring a plurality of initial offset values using an offset distance list; the motion vector offset value determination module is used for scaling an initial offset value using the picture display order difference to obtain a motion vector offset value when the motion vector in the basic motion information is a bidirectional motion vector, and for using the initial offset value as the motion vector offset value when the motion vector in the basic motion information is unidirectional; the offset motion vector determination module determines a plurality of offset motion vectors by combining the motion vector offset values with the plurality of offset directions in an offset direction list; the offset motion information acquisition module offsets the basic motion vector in each piece of basic motion information with the plurality of motion vector offset values to obtain a plurality of pieces of offset motion information; and the calculation module is used for calculating an offset prediction value of the current block using the plurality of pieces of offset motion information.
The present application further includes a seventeenth technical solution, where an inter-frame prediction method based on advanced motion vector expression includes: constructing a base motion information list for the current block, the base motion information list including at least one base motion information; determining a plurality of offset motion vectors using the offset distance list and the offset direction list; shifting the basic motion vector in each basic motion information by using the plurality of shifting motion vectors to obtain a plurality of shifting motion information; calculating an offset predictor of the current block using the plurality of offset motion information; acquiring advanced motion vector expression motion information of the current block from a plurality of offset motion information according to the offset predicted value; and performing inter-frame filtering on the predicted value corresponding to the high-level motion vector expression motion information.
The application also comprises an eighteenth technical scheme, wherein the interframe prediction device based on the high-level motion vector expression comprises a construction module, an offset motion vector determination module, an offset motion information acquisition module, a calculation module, a high-level motion vector expression motion information acquisition module and a filtering module, wherein the construction module is used for constructing a basic motion information list for the current block, and the basic motion information list comprises at least one piece of basic motion information; an offset motion vector determination module to determine a plurality of offset motion vectors using an offset distance list and an offset direction list; the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; the calculation module is used for calculating an offset predicted value of the current block by using a plurality of offset motion information; the high-level motion vector expression motion information acquisition module is used for acquiring high-level motion vector expression motion information of the current block from a plurality of offset motion information according to the offset prediction value; and the filtering module performs interframe filtering on the predicted value corresponding to the high-level motion vector expression motion information.
The present application further includes a nineteenth technical solution, where an inter-frame prediction method based on advanced motion vector expression includes: constructing a base motion information list for the current block, the base motion information list including at least one base motion information; determining a plurality of offset motion vectors using the offset distance list and the offset direction list; shifting the basic motion vector in each basic motion information by using the plurality of shifting motion vectors to obtain a plurality of shifting motion information; calculating an offset predictor of the current block using the plurality of offset motion information; acquiring advanced motion vector expression motion information of the current block from a plurality of offset motion information according to the offset predicted value; and performing bidirectional gradient correction on the predicted value corresponding to the high-level motion vector expression motion information.
The present application also includes a twentieth technical solution, in which an inter-frame prediction apparatus based on advanced motion vector expression includes a construction module, an offset motion vector determination module, an offset motion information acquisition module, a calculation module, an advanced motion vector expression motion information acquisition module and a correction module. The construction module is configured to construct a base motion information list for the current block, the base motion information list including at least one piece of base motion information; the offset motion vector determination module is configured to determine a plurality of offset motion vectors using an offset distance list and an offset direction list; the offset motion information acquisition module is configured to offset the base motion vector in each piece of base motion information using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; the calculation module is configured to calculate an offset predictor of the current block using the plurality of pieces of offset motion information; the advanced motion vector expression motion information acquisition module is configured to acquire the advanced motion vector expression motion information of the current block from the plurality of pieces of offset motion information according to the offset predictor; and the correction module is configured to perform bi-directional gradient correction on the predictor corresponding to the advanced motion vector expression motion information.
The present application further includes a twenty-first technical solution, in which an inter-frame prediction method based on advanced motion vector expression includes: constructing a base motion information list for the current block, the base motion information list including at least one piece of base motion information; determining whether the current frame in which the current block is located satisfies a list update condition; if so, computing the average of the offset values of all blocks in a specified number of previous frames, and determining an offset distance list using the average; determining a plurality of offset motion vectors using the offset distance list and an offset direction list; offsetting the base motion vector in each piece of base motion information using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; and calculating an offset predictor of the current block using the plurality of pieces of offset motion information.
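The list-update step of the twenty-first solution can be sketched as follows. This is a hypothetical Python illustration: the default distance list, the scaling rule, and the helper name `update_offset_distance_list` are assumptions, since the solution only states that the average offset value of blocks in a specified number of previous frames is used to determine the new offset distance list.

```python
def update_offset_distance_list(prev_frame_offsets, base_list=(1, 2, 4, 8, 16)):
    """Hypothetical sketch of the list-update step.

    prev_frame_offsets: offset values (assumed quarter-pel units) chosen by
    all blocks in a specified number of previously coded frames.  When the
    list-update condition is not met (no statistics), the default list is
    kept unchanged.
    """
    if not prev_frame_offsets:            # list-update condition not met
        return list(base_list)
    avg = sum(prev_frame_offsets) / len(prev_frame_offsets)
    # Assumed rule: rescale the default distances so their mean matches avg.
    base_mean = sum(base_list) / len(base_list)
    scale = avg / base_mean
    return [max(1, round(d * scale)) for d in base_list]
```

The rescaling keeps the relative spacing of the distances while adapting their magnitude to the motion statistics of recent frames.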
The present application also includes a twenty-second technical solution, in which an inter-frame prediction apparatus based on advanced motion vector expression includes a construction module, a judgment module, an offset distance list determination module, an offset motion vector determination module, an offset motion information acquisition module and a calculation module. The construction module is configured to construct a base motion information list for the current block, the base motion information list including at least one piece of base motion information; the judgment module is configured to determine whether the current frame in which the current block is located satisfies a list update condition; the offset distance list determination module is configured to, if the list update condition is satisfied, compute the average of the offset values of all blocks in a specified number of previous frames and determine an offset distance list using the average; the offset motion vector determination module is configured to determine a plurality of offset motion vectors using the offset distance list and an offset direction list; the offset motion information acquisition module is configured to offset the base motion vector in each piece of base motion information using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; and the calculation module is configured to calculate an offset predictor of the current block using the plurality of pieces of offset motion information.
The application also includes a twenty-third technical scheme, and the method for inter-frame prediction based on the advanced motion vector expression comprises the following steps: constructing a base motion information list for the current block, the base motion information list including at least one base motion information; comparing the value of the motion vector in the basic motion information in at least one direction with a preset threshold value to obtain a comparison result; determining an offset distance list according to the comparison result; determining a plurality of offset motion vectors using the offset distance list and the offset direction list; shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information; an offset predictor of the current block is calculated using the plurality of offset motion information.
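The threshold comparison of the twenty-third solution might be sketched as follows. The two candidate lists, the threshold value, and the selection rule (larger motion selects coarser distances) are illustrative assumptions; the solution itself only states that the offset distance list is determined from the comparison result.

```python
def select_offset_distance_list(base_mv, threshold=16):
    """Hypothetical sketch: compare the base motion vector's component in at
    least one direction (x or y, assumed quarter-pel units) against a preset
    threshold, and pick an offset distance list from the comparison result.
    """
    mv_x, mv_y = base_mv
    large_motion = abs(mv_x) > threshold or abs(mv_y) > threshold
    # Assumed rule: larger motion -> coarser, wider-ranging offset distances.
    return [2, 4, 8, 16, 32] if large_motion else [1, 2, 4, 8, 16]
```

A block with a large base motion vector is likely to need large corrections, so a coarser list covers its search range with the same number of signalled indices.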
The present application further includes a twenty-fourth technical solution, in which an inter-frame prediction apparatus based on advanced motion vector expression includes a construction module, a comparison module, an offset distance list determination module, an offset motion vector determination module, an offset motion information determination module and a calculation module. The construction module is configured to construct a base motion information list for the current block, the base motion information list including at least one piece of base motion information; the comparison module is configured to compare the value of the motion vector in the base motion information in at least one direction with a preset threshold to obtain a comparison result; the offset distance list determination module is configured to determine an offset distance list according to the comparison result; the offset motion vector determination module is configured to determine a plurality of offset motion vectors using the offset distance list and an offset direction list; the offset motion information determination module is configured to offset the base motion vector in each piece of base motion information using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information; and the calculation module is configured to calculate an offset predictor of the current block using the plurality of pieces of offset motion information.
The application also comprises a twenty-fifth technical scheme, wherein the encoder comprises a processor and a memory connected with the processor, and the memory stores a computer program; the processor is used to execute the computer program stored in the memory to implement the method described above.
The present application further includes a twenty-sixth technical solution, which is a storage medium storing a computer program, and when the computer program is executed, the method is implemented.
The application further comprises a twenty-seventh technical scheme, and the electronic equipment comprises the encoder.
The beneficial effect of this application is: in contrast to the prior art, the inter-frame prediction method of the embodiments of the present application applies enhanced duplicate checking to the base motion information, which improves the duplicate-checking accuracy, reduces the probability that base motion information is duplicated due to missed checks, reduces the amount of computation, and improves efficiency.
Drawings
FIG. 1 is a schematic flow chart diagram of a first embodiment of an inter-frame prediction method based on advanced motion vector expression according to the present application;
FIG. 2 is a flow diagram illustrating an embodiment of constructing a base motion information list for a current block using enhanced repetition checking;
FIG. 3 is a schematic diagram of an embodiment of a current block and neighboring blocks of the present application;
FIG. 4 is a flowchart illustrating another embodiment of constructing a base motion information list for a current block using enhanced repetition checking;
FIG. 5 is a flowchart illustrating another embodiment of the present application for constructing a base motion information list for a current block using enhanced repetition checking;
FIG. 6 is a flowchart illustrating another embodiment of the present application for constructing a base motion information list for a current block by using enhanced repetition checking;
FIG. 7A is a flowchart illustrating an embodiment of determining whether the image display order of the reference frames in the two candidate motion information is the same and the motion vectors are the same;
FIG. 7B is a flowchart illustrating another embodiment of the present application for determining whether the image display order of the reference frames in the two candidate motion information is the same and the motion vectors are the same;
FIG. 8 is a schematic flow chart diagram of a second embodiment of the inter-frame prediction method based on advanced motion vector expression according to the present application;
FIG. 9 is a schematic flow chart diagram of a third embodiment of the inter-frame prediction method based on advanced motion vector expression according to the present application;
FIG. 10 is a schematic flow chart diagram illustrating a fourth embodiment of the inter-frame prediction method based on advanced motion vector expression according to the present application;
FIG. 11 is a schematic flow chart diagram illustrating a fifth embodiment of an inter-frame prediction method based on advanced motion vector expression according to the present application;
FIG. 12 is a flowchart illustrating a sixth embodiment of the inter prediction method based on advanced motion vector representation according to the present application;
FIG. 13 is a schematic flow chart diagram illustrating a seventh embodiment of an inter-frame prediction method based on advanced motion vector expression according to the present application;
FIG. 14 is a schematic diagram of the present application searching for a forward modified motion vector within the search range of the forward first prediction block;
FIG. 15 is a flowchart illustrating an eighth embodiment of the inter prediction method based on advanced motion vector representation according to the present application;
FIG. 16 is a flow chart showing a ninth embodiment of the inter prediction method based on advanced motion vector expression according to the present application;
FIG. 17 is a schematic flow chart diagram illustrating a tenth embodiment of an inter-frame prediction method based on advanced motion vector expression according to the present application;
FIG. 18 is a schematic diagram of BGC (bi-directional gradient correction) according to the present application;
FIG. 19 is a flowchart illustrating an eleventh embodiment of the inter prediction method based on advanced motion vector representation according to the present application;
FIG. 20 is a flowchart illustrating a twelfth embodiment of the inter prediction method based on advanced motion vector representation according to the present application;
FIG. 21 is a schematic structural diagram of a first embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 22 is a schematic structural diagram of a second embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 23 is a schematic structural diagram of a third embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 24 is a schematic structural diagram of a fourth embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 25 is a schematic structural diagram of a fifth embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 26 is a schematic structural diagram of a sixth embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 27 is a schematic structural diagram of a seventh embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 28 is a schematic structural diagram of an eighth embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 29 is a schematic structural diagram of a ninth embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 30 is a schematic structural diagram of a tenth embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 31 is a schematic structural diagram of an eleventh embodiment of an inter prediction apparatus based on advanced motion vector representation according to the present application;
FIG. 32 is a schematic structural diagram of a twelfth embodiment of an inter prediction apparatus based on advanced motion vector expression according to the present application;
FIG. 33 is a schematic block diagram of an embodiment of an encoder of the present application;
FIG. 34 is a schematic structural diagram of an embodiment of a storage medium of the present application;
FIG. 35 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
Detailed Description
In order to make the purpose, technical solutions and effects of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly understand that embodiments described herein may be combined with other embodiments without conflict.
An embodiment of the present application provides an inter-frame prediction method based on advanced motion vector expression, as shown in fig. 1, including:
s110: and constructing a basic motion information list for the current block by adopting the enhanced repetition checking, wherein the basic motion information list comprises at least one piece of basic motion information.
The current block may also be referred to as a current coding block, i.e., a block to be currently coded, and in some cases, a coding block may be referred to as a Coding Unit (CU). The video frame in which the current block is located may be referred to as the current frame.
The base motion information is selected from motion information of spatial candidate blocks adjacent to the current block, motion information of temporal candidate blocks of the current block, zero motion information, and the like. The motion information may include motion vectors and reference frame index information (forward reference frame index information and/or backward reference frame index information).
The current block has adjacent spatial candidate blocks in a plurality of angular directions; the current block and its adjacent spatial candidate blocks all belong to the current frame, and the adjacent spatial candidate blocks are located on the already-encoded side of the current block. For example, when the encoding direction is from left to right and from top to bottom, the adjacent spatial candidate blocks are located on the left side and the upper side of the current block. The motion information of an adjacent spatial candidate block is referred to as spatial candidate block motion information.
Temporal motion information refers to the motion information (obtained by a fixed calculation) of the co-located block in a reference frame (typically the first frame in the reference list), i.e., the block at the same position as the current block or a sub-block of the current block.
Pieces of base motion information may duplicate one another; checking them through enhanced duplicate checking improves the duplicate-checking accuracy and reduces the probability that base motion information is duplicated due to missed checks. For example, the enhanced duplicate checking of the embodiments of the present application may include any one or a combination of sampled duplicate checking, full duplicate checking, duplicate checking of the base motion information, and the like.
S120: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
Specifically, for example, in an embodiment of the present application, as shown in table 1, an offset Distance list is provided, where the offset Distance list includes an offset Distance index (Distance IDX) and an offset Distance (Pixel Distance) corresponding to the offset Distance index, and the offset Distance is in units of pixels (pel).
Table 1 offset distance list.
(Table 1 is provided only as an image in the original document; it lists the five offset distance indices and their corresponding pixel distances.)
As shown in table 2, an offset direction list is provided, where Direction IDX denotes the offset direction index, indicating the direction of the motion vector offset relative to the starting point.
Table 2 list of offset directions.
Direction IDX | 00 | 01 | 10 | 11
x-axis | + | − | N/A | N/A
y-axis | N/A | N/A | + | −
In the embodiment of the present application, the offset distance list (table 1) includes five offset distances, and the offset direction list includes four offset direction indexes.
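The combination of the two lists in S120 can be sketched as follows. The five placeholder distances stand in for Table 1, which is available only as an image in the original document; the direction mapping follows Table 2.

```python
# Table 2: Direction IDX -> unit offset direction (x, y).
DIRECTIONS = {0b00: (1, 0), 0b01: (-1, 0), 0b10: (0, 1), 0b11: (0, -1)}

def offset_motion_vectors(distances=(1, 2, 4, 8, 16)):
    """Enumerate every candidate offset motion vector as the Cartesian
    product of the offset distance list and the offset direction list.
    The `distances` default is an assumed placeholder for Table 1."""
    return [(dx * d, dy * d)
            for d in distances
            for (dx, dy) in DIRECTIONS.values()]
```

With five distances and four directions this yields 5 × 4 = 20 offset motion vectors per base motion vector.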
S130: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
In the embodiment of the present application, each offset motion vector is applied to the base motion vector. The number of pieces of offset motion information equals the number of offset motion vectors multiplied by the number of pieces of base motion information, and the reference frame index in each piece of offset motion information is that of the corresponding base motion information.
When the motion vector in the base motion information is a unidirectional motion vector, the offset distance used for the offset is the value of Table 1 amplified four times; when the motion vector in the base motion information is a bidirectional motion vector, refer to the following embodiments.
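For a unidirectional base motion vector, the offset step above can be sketched as follows. Interpreting the four-times amplification as a conversion from whole-pel Table 1 distances to quarter-pel motion-vector precision is an assumption, consistent with common video-codec conventions.

```python
def apply_offset_unidirectional(base_mv, offset_mv):
    """Sketch of S130 for a unidirectional base motion vector: the offset
    (in whole-pel Table 1 units) is amplified four times before being added,
    i.e., converted to assumed quarter-pel precision.  Bidirectional base
    motion vectors are handled by later embodiments and omitted here."""
    bx, by = base_mv
    ox, oy = offset_mv
    return (bx + 4 * ox, by + 4 * oy)
```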
S140: an offset predictor of the current block is calculated using the plurality of offset motion information.
In the embodiment of the application, the offset motion information is used for performing motion compensation on the current block, so that a plurality of offset predicted values can be obtained.
According to the inter-frame prediction method, the basic motion information is subjected to duplicate checking by adopting the enhanced duplicate checking, the duplicate checking accuracy can be improved, the probability of repetition of the basic motion information caused by omission is reduced, the data calculation amount is reduced, and the efficiency is improved.
This embodiment further extends S110 above by performing full duplicate checking. Specifically, constructing the base motion information list for the current block using enhanced duplicate checking, as shown in fig. 2, includes:
s210: whether the candidate motion information of the current block is available is sequentially determined, and if the candidate motion information is available, S220 is performed.
The current block has a plurality of adjacent blocks in a plurality of angular directions, the current block and the adjacent blocks both belong to the current frame, the adjacent blocks are located on the encoded side of the current block, and the motion information of the adjacent blocks of the current block belongs to the candidate motion information of the current block. For example, in the embodiment of the present application, as shown in fig. 3, current neighboring blocks F, A, D, G and C are selected, wherein the current block has a neighboring block F at the bottom left corner, neighboring blocks a and D at the top left corner, and neighboring blocks G and C at the top right corner, and wherein the motion information of the neighboring blocks F, A, D, G and C is referred to as candidate motion information of the current block.
According to the position of the current block, the adjacent blocks may be the blocks which really exist in the current frame, and may also be the blocks which do not exist beyond the boundary of the current frame; for the real existing adjacent blocks, it may be coded blocks or uncoded blocks; the prediction mode adopted by the encoded neighboring blocks may be inter prediction, Intra prediction, or Intra Block Copy (IBC), and so on. Neighboring blocks that have been encoded inter-predicted may be referred to herein as available neighboring blocks, and remaining neighboring blocks (e.g., non-existent blocks, unencoded blocks, intra blocks, etc.) may be referred to as unavailable neighboring blocks.
Specifically, in the embodiment of the present application, availability is determined according to whether the neighboring block is inside the picture, has been encoded, and is intra-coded; if a neighboring block is inside the picture, has been encoded, and is not intra-coded, its motion information is initially considered available.
S220: and performing duplicate checking with all previous candidate motion information, and if the duplicate checking is passed, performing S230.
In case candidate motion information is available, the candidate motion information is checked for duplication with previous candidate motion information. If the candidate motion information is not available, the duplicate checking of the candidate motion information is not needed, and the situation that the basic motion information in the basic motion information list is repeated due to the missing checking can be reduced by checking the currently available candidate motion information and all the candidate motion information.
It should be noted that S210 and S220 may also occur sequentially, or S210 and S220 may occur alternately, and the present application takes the alternate occurrence of S210 and S220 as an example, specifically as follows:
in this embodiment, it is assumed that the adjacent blocks F, A, D, G and C are ordered in the order of F- > G- > C- > a- > D, and the number of motion information of the base motion information list is 2. The duplication checking of the embodiment of the present application specifically includes:
(I): Determine the availability of neighboring block F. If neighboring block F is available, its motion information can be filled into the base motion information list as base motion information; otherwise, its motion information is unavailable.
(II): Determine the availability of neighboring block G. If the motion information of both neighboring block F and neighboring block G is available, the availability of the subsequent neighboring blocks is not judged; otherwise, proceed to step (III).
(III): judging the availability of the adjacent block C: if the adjacent block C is unavailable, setting the adjacent block C to be unavailable, and entering the next step, otherwise, if the adjacent block C is available, further judging:
judging whether the adjacent block G is available, if so, judging whether the adjacent block C and the adjacent block G are repeated;
judging whether the adjacent block F is available, if so, judging whether the adjacent block C and the adjacent block F are repeated;
Condition 1: neighboring block G is unavailable, or neighboring block G is available and the motion information of neighboring block C does not duplicate that of neighboring block G;
Condition 2: neighboring block F is unavailable, or neighboring block F is available and the motion information of neighboring block C does not duplicate that of neighboring block F.
If both conditions are satisfied, neighboring block C is finally available; otherwise, neighboring block C is unavailable. If two pieces of motion information are now available, the availability of the subsequent neighboring blocks is not judged; otherwise, proceed to step (IV).
(IV): judging the availability of the adjacent block A: if the adjacent block A is unavailable, setting the adjacent block A to be unavailable, and entering the next step, otherwise, if the adjacent block A is available, further judging:
judging whether the adjacent block C is available, and if so, judging whether the adjacent block A and the adjacent block C are repeated;
judging whether the adjacent block G is available, and if so, judging whether the adjacent block A and the adjacent block G are repeated;
judging whether the adjacent block F is available, and if so, judging whether the adjacent block A and the adjacent block F are repeated;
Condition 1: neighboring block C is unavailable, or neighboring block C is available and the motion information of neighboring block A does not duplicate that of neighboring block C;
Condition 2: neighboring block G is unavailable, or neighboring block G is available and the motion information of neighboring block A does not duplicate that of neighboring block G;
Condition 3: neighboring block F is unavailable, or neighboring block F is available and the motion information of neighboring block A does not duplicate that of neighboring block F.
If all three conditions are satisfied, neighboring block A is finally available; otherwise, neighboring block A is unavailable. If two pieces of motion information are now available, the availability of the subsequent neighboring blocks is not judged; otherwise, proceed to step (V).
(V): judging the availability of the adjacent block D: if the adjacent block D is not available, setting the adjacent block D to be unavailable, and finishing the availability judgment, otherwise, if the adjacent block D is available, further judging:
judging whether the adjacent block A is available, and if so, judging whether the adjacent block D and the adjacent block A are repeated;
judging whether the adjacent block C is available, and if so, judging whether the adjacent block D and the adjacent block C are repeated;
judging whether the adjacent block G is available, and if so, judging whether the adjacent block D and the adjacent block G are repeated;
judging whether the adjacent block F is available, and if so, judging whether the adjacent block D and the adjacent block F are repeated;
Condition 1: neighboring block A is unavailable, or neighboring block A is available and the motion information of neighboring block D does not duplicate that of neighboring block A;
Condition 2: neighboring block C is unavailable, or neighboring block C is available and the motion information of neighboring block D does not duplicate that of neighboring block C;
Condition 3: neighboring block G is unavailable, or neighboring block G is available and the motion information of neighboring block D does not duplicate that of neighboring block G;
Condition 4: neighboring block F is unavailable, or neighboring block F is available and the motion information of neighboring block D does not duplicate that of neighboring block F.
If all four conditions are satisfied, neighboring block D is finally available; otherwise, neighboring block D is unavailable.
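The availability-and-duplication cascade of steps (I)–(V) can be condensed into the following sketch. Motion information is modeled as plain tuples and the function name is hypothetical; unavailable blocks (outside the picture, not yet coded, or intra-coded) are represented as `None`.

```python
def build_base_motion_list(neighbors, order=("F", "G", "C", "A", "D"), max_len=2):
    """Sketch of the full duplicate-checking cascade: scan neighbors in the
    fixed order F -> G -> C -> A -> D, compare each available candidate
    against all earlier kept candidates, and stop once `max_len` distinct
    pieces of base motion information have been collected."""
    base_list = []
    for name in order:
        if len(base_list) == max_len:
            break                          # enough base motion information
        info = neighbors.get(name)
        if info is None:
            continue                       # block unavailable
        if all(info != kept for kept in base_list):
            base_list.append(info)         # available and not a duplicate
    return base_list
```

Because each new candidate is compared against every candidate already kept, a duplicate can never enter the list through a missed pairwise check.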
S230: and filling the candidate motion information serving as basic motion information into a basic motion information list until the number of the basic motion information in the basic motion information list reaches a preset value.
In the above duplication checking, the motion information of the available and non-repetitive neighboring blocks is used as the candidate motion information and filled in the base motion information list as the base motion information. For example, in the implementation of the present application, the preset value of the number of basic motion information is 2, and when 2 available and non-repeated candidate motion information are found and filled into the basic motion information list; if only one available candidate motion information is searched, and the candidate motion information is not repeated, filling the candidate motion information into the basic motion information list. In the embodiment of the present application, the preset value is 2 for example, and in other embodiments, the preset value may also be 3.
If only one available and non-repeated candidate motion information is found, the motion information is filled into the base motion information list, and the candidate motion information is the motion information of the spatial candidate block, then step (VI) is performed.
(VI): motion vector information of a temporal candidate block in a Skip or Direct temporal (temporal) mode is added to a base motion information list, and whether the motion vector information of the temporal candidate block overlaps with the motion information of the spatial candidate block is determined.
If repeated, then (VII): zero motion vector information is padded into the base motion information list.
In the embodiment of the present application, it is also considered that the zero motion vector information and the motion information of the spatial candidate block constitute 2 pieces of non-repetitive base motion information.
In another embodiment, (VIII) may also be performed: determine whether the zero motion vector information duplicates the motion information of the spatial candidate block; if they duplicate, the base motion information list contains only one piece of base motion information; if not, the list contains two non-duplicated pieces of base motion information.
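The list-completion logic of steps (VI)–(VIII) might be sketched as follows. Modeling motion information as tuples and the zero motion information as `(0, 0)` are simplifying assumptions for illustration.

```python
def complete_base_motion_list(base_list, temporal_info, max_len=2):
    """Sketch of steps (VI)-(VII): when fewer than `max_len` spatial
    candidates were found, try the temporal candidate's motion information
    (Skip/Direct temporal mode); if it duplicates what is already in the
    list (or is unavailable), pad with zero motion vector information
    instead, as in step (VII)."""
    zero_info = (0, 0)
    if len(base_list) < max_len:
        if temporal_info is not None and temporal_info not in base_list:
            base_list.append(temporal_info)
        else:
            base_list.append(zero_info)    # step (VII): zero MV padding
    return base_list
```

Step (VIII), which additionally checks the zero motion information against the spatial candidate before counting the list length, is omitted here for brevity.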
In another embodiment, the scheme of S110 may be extended to sampled duplicate checking. Constructing the base motion information list for the current block using enhanced duplicate checking in S110, as shown in fig. 4, then includes:
S310: Sequentially determine whether the candidate motion information of the current block is available; if the candidate motion information is available and comes from a spatial candidate block of the current block, perform S320.
The current block has a plurality of adjacent blocks in a plurality of angular directions, the current block and the adjacent blocks both belong to the current frame, the adjacent blocks are located on the encoded side of the current block, and the motion information of the adjacent blocks of the current block belongs to the candidate motion information of the current block. For example, in the embodiment of the present application, the current block has the neighboring block F at the bottom left corner, the neighboring blocks a and D at the top left corner, and the neighboring blocks G and C at the top right corner, wherein the motion information of the neighboring blocks F, A, D, G and C is referred to as the candidate motion information of the current block.
According to the position of the current block, an adjacent block may be a block that actually exists in the current frame, or a block beyond the boundary of the current frame that does not exist; an actually existing adjacent block may be encoded or unencoded; the prediction mode adopted by an encoded adjacent block may be inter prediction, intra prediction, Intra Block Copy (IBC), and so on. Adjacent blocks that have been encoded with inter prediction are referred to herein as available adjacent blocks, and the remaining adjacent blocks (e.g., non-existent blocks, unencoded blocks, intra blocks, etc.) are referred to as unavailable adjacent blocks.
Specifically, in the embodiment of the present application, availability is determined according to whether the adjacent block is inside the image, whether it has been encoded, and whether it is intra-coded; if an adjacent block is inside the image, already encoded, and not intra-coded, its motion information is initially considered available.
Specifically, in the embodiment of the present application, the candidate motion information includes motion information from a spatial candidate block of the current block.
S320: at least one of the candidate motion information from the other spatial candidate blocks of the current block is extracted for duplication checking.
For example, as shown in fig. 3, suppose the motion information of the adjacent blocks F, A, D, G and C serves as the candidate motion information, and the candidate motion information currently being checked is that of the adjacent block G. At least one piece of motion information may be extracted from the other spatial candidate blocks (F, A, D and C) of the current block for checking against the adjacent block G; for example, the adjacent block C may be extracted and its motion information compared with that of the adjacent block G. If the motion information of the adjacent blocks C and G repeats, only one of them is selected for duplicate checking against the motion information of the other adjacent blocks (F, A and D); if it does not repeat, either one of the adjacent blocks C and G is selected for that check, or both C and G are checked against the motion information of the other adjacent blocks (F, A and D).
If the duplicate check passes, S330 is performed: fill the candidate motion information, as base motion information, into the base motion information list until the number of pieces of base motion information in the list reaches a preset value.
In the above duplicate checking, the motion information of available and non-repeating adjacent blocks is taken as the candidate motion information and filled into the base motion information list as base motion information. For example, in the embodiment of the present application, the preset value of the number of pieces of base motion information is 2: when 2 available and non-repeating pieces of candidate motion information are found, they are filled into the base motion information list; if only one available piece of candidate motion information is found, and it does not repeat, that piece is filled into the list. The preset value of 2 is only an example; in other embodiments, the preset value may also be 3.
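The fill-until-preset loop of S330 can be sketched as below; the scan order and the tuple-based motion information are illustrative assumptions, with `None` marking an unavailable neighbor.

```python
def construct_base_list(candidates, preset=2):
    """candidates: iterable of motion information (or None) in the scan
    order F -> G -> C -> A -> D; None marks an unavailable neighbor.
    Available, non-repeating entries fill the base motion information
    list until the preset count is reached."""
    base = []
    for info in candidates:
        if info is None:
            continue               # unavailable neighbor, skip
        if info in base:
            continue               # duplicate check against filled entries
        base.append(info)
        if len(base) == preset:
            break
    return base
```

If fewer than `preset` available, non-repeating candidates exist, the list simply stays shorter, matching the single-entry case described above.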
As a preferred scheme, S320, extracting at least one piece of candidate motion information from the other spatial candidate blocks of the current block for duplicate checking, as shown in fig. 5, includes:
S321: Candidate motion information from other spatial candidate blocks adjacent to the spatial candidate block is selected for duplicate checking.
In the embodiment of the present application, for example, as shown in fig. 3, when the motion information of the adjacent blocks F, A, D, G and C serves as the candidate motion information, the candidate motion information of the spatial candidate blocks of the current block may be grouped according to the region in which each candidate block is located: for example, the adjacent blocks G and C located at the top right corner form one group, and the adjacent blocks A and D located at the top left corner form another group. Assume that the adjacent blocks G, C, A and D are all available.
Check the motion information of the adjacent blocks G and C against each other. If it repeats, select one of G and C and check it against the motion information of the adjacent blocks selected by the other groups or the ungrouped motion information; if it does not repeat, either select one of the adjacent blocks G and C for that check, or check both G and C against the motion information of the adjacent blocks selected by the other groups or the ungrouped motion information.
Check the motion information of the adjacent blocks D and A against each other. If it repeats, select one of D and A and check it against the motion information of the adjacent blocks selected by the other groups or the ungrouped motion information; if it does not repeat, either select one of the adjacent blocks D and A for that check, or check both D and A against the motion information of the adjacent blocks selected by the other groups or the ungrouped motion information.
For example, when the motion information of the adjacent blocks G and C repeats and the motion information of the adjacent blocks D and A repeats, the motion information of the adjacent blocks C and D and the motion information of the ungrouped adjacent block F may be selected, and the motion information of C, D and F may then be duplicate-checked according to the prior-art checking method or the checking method of the previous embodiment.
S322: candidate motion information from other spatial candidate blocks not adjacent to the spatial candidate block is selected for duplicate checking.
In the embodiment of the present application, check the motion information of the adjacent blocks G and C against each other. If it repeats, select one of G and C and check it against the motion information of the adjacent blocks selected by the other groups or the ungrouped motion information; if it does not repeat, either select one of the adjacent blocks G and C for that check, or check both G and C against the motion information of the adjacent blocks selected by the other groups or the ungrouped motion information.
Check the motion information of the adjacent blocks D and A against each other. If it repeats, select one of D and A and check it against the motion information of the adjacent blocks selected by the other groups or the ungrouped motion information; if it does not repeat, either select one of the adjacent blocks D and A for that check, or check both D and A against the motion information of the adjacent blocks selected by the other groups or the ungrouped motion information.
For example, when the motion information of the adjacent blocks G and C repeats and the motion information of the adjacent blocks D and A repeats, the motion information of the adjacent blocks C and D and the motion information of the ungrouped adjacent block F may be selected, and the motion information of the adjacent blocks C, D and F may then be duplicate-checked according to the prior-art checking method or the checking method of the previous embodiment.
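A possible sketch of the grouped checking of S321/S322, assuming motion information is represented as comparable tuples. The grouping of (G, C) and (D, A) with the ungrouped block F follows the example above; the function name and structure are illustrative.

```python
def grouped_check(groups, ungrouped):
    """Within each group (e.g. [G, C] at the top right, [D, A] at the
    top left), drop repeats so only distinct entries survive; the
    survivors are then duplicate-checked against the entries of the
    other groups and the ungrouped entries (e.g. F)."""
    survivors = []
    for group in groups:
        kept = []
        for info in group:
            if info is not None and info not in kept:
                kept.append(info)     # within-group duplicate check
        survivors.extend(kept)
    result = []
    for info in survivors + [m for m in ungrouped if m is not None]:
        if info not in result:        # cross-group / ungrouped check
            result.append(info)
    return result
```

When G repeats C and D repeats A, only one survivor per group is compared with F, which reduces the number of pairwise comparisons relative to a full check.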
In another embodiment of the present application, fig. 6 shows a specific embodiment of constructing a base motion information list for the current block using enhanced duplicate checking; as shown in fig. 6, it includes:
S410: A base motion information list is constructed for the current block.
In the embodiment of the present application, the base motion information list for the current block may be constructed using a prior-art scheme, or using sampled duplicate checking, full duplicate checking, and the like. When the list constructed with the existing scheme contains 2 pieces of base motion information, repeated base motion information caused by a missed check may occur.
For example, when a current block (CU) and the adjacent blocks around it (F, G, C, A and D being adjacent blocks of the current block) are used to construct a base motion information list, the following steps are included:
(i) According to the scanning order F -> G -> C -> A -> D, obtain the available motion vector information of the first two positions as the base motion vector information of advanced motion vector expression (UMVE); this process requires judging the availability of the motion vectors of the adjacent blocks.
(ii) When judging whether an adjacent motion vector is available, availability is determined according to whether the adjacent block is inside the image, whether it has been encoded, and whether it is intra-coded; if an adjacent block is inside the image, already encoded, and not intra-coded, its motion information is initially considered available.
(iii) According to the availability information acquired above, further judge whether the MV of the adjacent block repeats; if not, set it as available and add it to the base motion information list, otherwise set it as unavailable. The method specifically comprises the following steps:
(iv) Judge the availability of the adjacent block F: if the adjacent block F is available, set its motion information as available and add it to the base motion information list; otherwise, the motion information of the adjacent block F is unavailable, and the next step (v) is entered.
(v) Judge the availability of the adjacent block G: if the adjacent block G is not available, set it as unavailable and enter the next step (vi); otherwise, if the adjacent block G is available, further judge:
judge whether the adjacent block F is available; if not, set the adjacent block G as available and add it to the base motion information list;
otherwise, when the adjacent block F is available, compare whether the motion information of the adjacent block F and the adjacent block G repeats; if not, set the adjacent block G as available, otherwise the adjacent block G is unavailable.
(vi) Judge the availability of the adjacent block C: if the adjacent block C is not available, set it as unavailable and enter the next step (vii); otherwise, if the adjacent block C is available, further judge:
judge whether the adjacent block G is available; if not, set the adjacent block C as available and add it to the base motion information list;
if the adjacent block G is available, compare whether the motion information of the adjacent block C and the adjacent block G repeats; if not, set the adjacent block C as available, otherwise the adjacent block C is unavailable.
(vii) Judge the availability of the adjacent block A: if the adjacent block A is not available, set it as unavailable and enter the next step (viii); otherwise, if the adjacent block A is available, further judge:
judge whether the adjacent block F is available; if not, set the adjacent block A as available and add it to the base motion information list;
if the adjacent block F is available, compare whether the motion information of the adjacent block A and the adjacent block F repeats; if not, set the adjacent block A as available, otherwise the adjacent block A is unavailable.
(viii) Judge the availability of the adjacent block D: if the adjacent block D is not available, set it as unavailable and finish the availability judgment; otherwise, if the adjacent block D is available, further judge:
judge whether the adjacent block A is available; if the adjacent block A is unavailable, initialize its motion information as unavailable; otherwise, acquire the motion information of the adjacent block D and the adjacent block A, and judge whether it repeats;
judge whether the adjacent block G is available; if not, initialize its motion information as unavailable; otherwise, acquire the motion information of the adjacent block D and the adjacent block G, and judge whether it repeats;
Condition one: the adjacent block A is unavailable, or the adjacent block A is available and the motion information of the adjacent block D and the adjacent block A does not repeat.
Condition two: the adjacent block G is unavailable, or the adjacent block G is available and the motion information of the adjacent block D and the adjacent block G does not repeat.
If the above two conditions are satisfied simultaneously, the adjacent block D is finally available; otherwise the adjacent block D is unavailable.
(ix) In the order F -> G -> C -> A -> D, if a block is available and its reference frame index is available, add its motion information to the base motion information list, stopping when the number of pieces of motion information reaches two.
(x) When fewer than 2 pieces of peripheral spatial motion vector information are available: add the motion vector information of the temporal co-located block of the current block to the base motion information list.
Judge whether the motion vector information of the temporal co-located block of the current block repeats the spatial motion vector information; if it repeats, fill in zero motion vector information, so that two base motion vectors are finally filled in.
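Steps (iv) through (x) can be sketched as follows, assuming tuple-based motion vectors and omitting the reference-frame-index check of step (ix) for brevity; the function name and data layout are illustrative, not from the patent. Note that the pairwise scheme only compares G with F, C with G, A with F, and D with A and G, which is exactly the missed-check situation described above (e.g. C can repeat F when G is unavailable).

```python
def build_umve_base_list(neighbors, temporal=None, list_size=2):
    """neighbors: dict mapping the labels F, G, C, A, D to a motion
    vector tuple or None (unavailable). Follows the pairwise checks of
    steps (iv)-(viii), then pads with the temporal co-located MV and
    zero MV per steps (ix)-(x)."""
    F, G, C, A, D = (neighbors.get(k) for k in "FGCAD")
    avail = {}
    avail["F"] = F is not None
    avail["G"] = G is not None and (not avail["F"] or G != F)
    avail["C"] = C is not None and (not avail["G"] or C != G)
    avail["A"] = A is not None and (not avail["F"] or A != F)
    avail["D"] = (D is not None
                  and (not avail["A"] or D != A)    # condition one
                  and (not avail["G"] or D != G))   # condition two
    base = [neighbors[k] for k in "FGCAD" if avail[k]][:list_size]
    if len(base) < list_size and temporal is not None and temporal not in base:
        base.append(temporal)                       # step (x)
    while len(base) < list_size:
        base.append((0, 0))                         # zero MV padding
    return base[:list_size]
```

The third assertion below reproduces the missed check: with G unavailable, C is never compared with F, so a repeated entry can slip into the list, which is what the duplicate re-check of S420 addresses.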
S420: Duplicate-check the base motion information in the base motion information list.
In the embodiment of the present application, duplicate-checking the two pieces of base motion information in the base motion information list reduces the occurrence of repeated base motion information in the list.
In other embodiments of the present application, enhanced duplicate checking such as sampled duplicate checking, full duplicate checking, or re-checking the base motion information is adopted to construct a base motion information list for the current block, where the current block has at least two pieces of candidate motion information. The duplicate checking specifically includes:
Determine whether the picture order count (POC) and the motion vectors in the two pieces of candidate motion information are the same, in order to duplicate-check them.
Specifically, as shown in fig. 7A, whether the picture order count (POC) and the motion vectors in the two pieces of candidate motion information are the same is determined in order to judge whether the two pieces of candidate motion information are the same. The two pieces of candidate motion information comprise first candidate motion information and second candidate motion information; each piece of motion information contains reference frame information, including a reference index and a picture order count (POC), as well as motion vector information.
The method specifically comprises the following steps:
S510: Judge whether the reference indexes of the reference frames in the first reference frame direction of the current frame are the same; if not, directly return that they are different; if so, enter step S520.
S520: Judge whether the reference index of the reference frame in the first reference frame direction of the current frame is available; if so, directly return that they are different; otherwise, enter step S530.
S530: Judge whether the reference indexes of the reference frames in the second reference frame direction of the current frame are the same; if not, directly return that they are different; otherwise, enter step S540.
S540: Judge whether the reference index of the reference frame in the second reference frame direction of the current frame is available; if so, directly return that they are different; otherwise, enter step S550.
The above steps determine whether the first candidate motion information and the second candidate motion information repeat by judging whether the reference indexes of the reference frames are the same.
As a further alternative, in the case where the reference indexes of the reference frames are the same, steps S550-S590 are performed.
S550: and judging whether the first candidate motion information is available in the first reference frame direction of the current frame or not and whether the second candidate motion information is available in the second reference frame direction of the current frame or not. If the first candidate motion information is not available in the first reference frame direction of the current frame and the second candidate motion information is not available in the second reference frame direction of the current frame, step S560 is performed. If the first candidate motion information and the second candidate motion information are available in both the first reference frame direction of the current frame and the second reference frame direction of the current frame, the process proceeds to step S570.
S560: it is determined whether a first image display order of the first candidate motion information and a second image display order of the second candidate motion information are the same. If yes, the process proceeds to step S580.
S570: and judging whether the motion vector information of the first candidate motion information in the first reference frame direction of the current frame and the first reference frame direction of the current frame is the same as the motion vector information of the second candidate motion information in the first reference frame direction of the current frame or the first reference frame direction of the current frame. If yes, the process proceeds to step S590.
In this embodiment, step S570 specifically includes: judging whether the motion vector information of the first candidate motion information in the first reference direction of the current frame is the same as the motion vector information of the second candidate motion information in the second reference direction of the current frame, and judging whether the motion vector information of the first candidate motion information in the second reference direction of the current frame is the same as the motion vector information of the second candidate motion information in the first reference direction of the current frame. If both are the same, proceed to step S590.
S580: and judging whether the motion vector information of the first candidate motion information in the second reference frame direction of the current frame is the same as the motion vector information of the second candidate motion information in the first reference frame direction of the current frame, if so, entering the step S590.
S590: the first candidate motion information and the second candidate motion information are the same.
It should be noted that, in other embodiments, judging whether the two pieces of motion information are the same may be done only by judging whether the reference indexes of their reference frames are the same, i.e., by performing only S510-S540. It is also possible to directly judge whether the picture order count (POC) and the motion vectors in the candidate motion information are the same, i.e., to perform only S550-S590 without judging whether the reference indexes of the reference frames of the two pieces of motion information are the same (without performing S510-S540).
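One possible reading of the fig. 7A flow is sketched below: availability, picture order count (POC), and motion vectors are compared per reference direction. The dictionary layout is an assumption for illustration, and the sketch simplifies the cross-direction comparisons of S570/S580 into a per-direction comparison.

```python
def same_motion_info(a, b):
    """a, b: dicts with keys 'poc' (pair of POC values for the two
    reference directions), 'avail' (pair of availability flags), and
    'mv' (pair of (x, y) motion vectors)."""
    # availability must match in both reference directions (cf. S550)
    if a["avail"] != b["avail"]:
        return False
    # the POC of the referenced frames must match (cf. S560)
    if a["poc"] != b["poc"]:
        return False
    # motion vectors must match in every available direction (cf. S570/S580)
    for i in (0, 1):
        if a["avail"][i] and a["mv"][i] != b["mv"][i]:
            return False
    return True
```

Two pieces of candidate motion information are treated as repeating only when availability, POC, and every in-use motion vector all agree, which matches the intent of returning "different" at the first mismatching check.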
In another embodiment, whether the picture order count (POC) and the motion vectors in the two pieces of candidate motion information are the same is determined; as shown in fig. 7B, the method specifically includes:
S550': Judge whether the first candidate motion information is available in the first reference frame direction of the current frame and whether the second candidate motion information is available in the second reference frame direction of the current frame. If both are available or neither is available, proceed to step S560'.
S560': Determine whether the first picture order count of the first candidate motion information and the second picture order count of the second candidate motion information are the same. If so, proceed to step S580'.
S580': and judging whether the motion vector information of the first candidate motion information in the second reference frame direction of the current frame is the same as the motion vector information of the second candidate motion information in the first reference frame direction of the current frame, if so, entering step S590'.
S590': The first candidate motion information and the second candidate motion information are the same.
In the embodiment of the present application, the candidate motion information of the current block includes motion information from a spatial candidate block adjacent to the current block.
As shown in fig. 3, the spatial candidate blocks adjacent to the current block include:
a first spatial candidate block (adjacent block F), which is adjacent to the pixel at the lower left corner of the current block, and the bottom edge of the first spatial candidate block and the bottom edge of the current block are positioned on the same straight line;
a second spatial candidate block (adjacent block G) which is adjacent to the pixel at the upper right corner of the current block and the right side of which is on the same straight line with the right side of the current block;
a third spatial candidate block (adjacent block A), which is adjacent to the pixel at the upper left corner of the current block, and the upper edge of the third spatial candidate block and the upper edge of the current block are on the same straight line;
a fourth spatial candidate block (adjacent block C), which is adjacent to the pixel at the top right corner of the current block, and the left side of which is on the same line with the right side of the current block;
and a fifth spatial candidate block (adjacent block D) which is adjacent to the pixel at the upper left corner of the current block, and whose bottom side is on the same line as the upper side of the current block.
In the embodiment of the present application, new spatial candidate blocks may be added, or some of the original spatial candidate blocks may be deleted, so as to form a different number of spatial candidate blocks.
Specifically, in one embodiment, the number of spatial candidate blocks is increased by adding a sixth spatial candidate block (adjacent block B) and/or a seventh spatial candidate block (adjacent block E). The sixth spatial candidate block is adjacent to the pixel at the top left corner of the current block, and its left side is on the same straight line as the left side of the current block; the seventh spatial candidate block is adjacent to the first spatial candidate block and the current block respectively, its bottom edge is on the same straight line as the top edge of the first spatial candidate block, and its right edge is on the same straight line as the left edge of the current block. The number N of spatial candidate blocks thus becomes 7.
In another embodiment, the number of spatial candidate blocks adjacent to the current block is reduced by deleting some of the spatial candidate blocks adjacent to the current block.
Specifically, in the embodiment of the present application, the fourth spatial candidate block (adjacent block C) and the fifth spatial candidate block (adjacent block D) may be removed, so that the number N of spatial candidate blocks is 3. In other embodiments, only the fourth spatial candidate block (adjacent block C) or only the fifth spatial candidate block (adjacent block D) may be removed, so that the number N of spatial candidate blocks is 4.
The scanning order of the spatial candidate blocks in this embodiment may be any free combination; for example, for N = 7 spatial candidate blocks, it may be F -> G -> C -> A -> D -> B -> E, or C -> D -> A -> B -> E -> F -> G, and so on.
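The configurable candidate set (N = 3, 5 or 7) and the free scanning order can be sketched as follows; the mapping from block labels to motion information and the helper name are illustrative assumptions.

```python
ORDER_N5 = ["F", "G", "C", "A", "D"]                  # the five blocks of fig. 3
ORDER_N7 = ["F", "G", "C", "A", "D", "B", "E"]        # with blocks B and E added
ORDER_N3 = ["F", "G", "A"]                            # with blocks C and D removed

def scan_candidates(neighbors, order):
    """Return the available motion information in the given scanning
    order; `neighbors` maps a block label to its motion information,
    or None when the block is unavailable."""
    return [neighbors[k] for k in order if neighbors.get(k) is not None]
```

Any permutation of the chosen block set is a legal scanning order, so C -> D -> A -> B -> E -> F -> G simply reorders `ORDER_N7`.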
As shown in fig. 8, a second embodiment of an inter prediction method based on advanced motion vector expression according to the present application includes:
S610: A base motion information list is constructed for the current block, the base motion information list including at least two pieces of base motion information.
In the embodiment of the present application, the base motion information list constructed for the current block includes two or more pieces of base motion information; during construction, the list may be built using a prior-art duplicate checking method or using the enhanced duplicate checking method of the embodiments of the present application.
S611: New base motion information is calculated from at least two pieces of base motion information and filled into the base motion information list.
In the embodiment of the present application, new base motion information is calculated from at least two pieces of base motion information, and the new base motion information does not repeat the original at least two pieces.
In one embodiment, a weighted average of the at least two pieces of base motion information may be used as the new base motion information. The result of filtering the at least two pieces may also be used as the new base motion information, or the filtering results of the at least two pieces may be added or weighted to obtain it. New base motion information calculated in any one or more of these ways may be added.
During filling, the at least two pieces of base motion information are filled first, with no particular order between them, and the new base motion information is placed after them.
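The weighted-average variant of S611 can be sketched as below; integer (floor) division and the function names are illustrative assumptions, since the patent does not fix a rounding rule.

```python
def averaged_base_mv(mv_list, weights=None):
    """Weighted average of at least two base motion vectors; with no
    weights given, a uniform average is used."""
    if weights is None:
        weights = [1] * len(mv_list)
    total = sum(weights)
    x = sum(w * mv[0] for w, mv in zip(weights, mv_list)) // total
    y = sum(w * mv[1] for w, mv in zip(weights, mv_list)) // total
    return (x, y)

def extend_base_list(base):
    """Append the derived motion vector after the original entries,
    skipping it if it repeats an existing entry."""
    new = averaged_base_mv(base[:2])
    if new not in base:
        base.append(new)
    return base
```

Placing the derived vector at the end preserves the original indices of the first two entries, matching the filling order described above.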
S620: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
For further details of the steps of this embodiment, reference is made to the previous embodiment, which is not repeated here.
S630: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
For further details of the steps of this embodiment, reference is made to the previous embodiment, which is not repeated here.
S640: an offset predictor of the current block is calculated using the plurality of offset motion information.
According to the embodiment of the present application, adding the new base motion information to the base motion information list increases the number of entries the list can provide, thereby improving the accuracy of the offset prediction value.
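An MMVD-style sketch of S620/S630 is given below. The actual offset distance and offset direction lists are defined in the earlier embodiment of the patent and are not given in this section, so the values here are illustrative assumptions.

```python
# Illustrative lists; the real values come from the earlier embodiment.
OFFSET_DISTANCES = [1, 2, 4, 8]                          # assumed units
OFFSET_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # +x, -x, +y, -y

def offset_motion_vectors():
    """S620: every (distance, direction) pair yields one offset MV."""
    return [(d * dx, d * dy)
            for d in OFFSET_DISTANCES
            for (dx, dy) in OFFSET_DIRECTIONS]

def apply_offsets(base_mv):
    """S630: add each offset MV to the base MV to obtain the offset
    motion information candidates."""
    return [(base_mv[0] + ox, base_mv[1] + oy)
            for (ox, oy) in offset_motion_vectors()]
```

With 4 distances and 4 directions, each piece of base motion information produces 16 offset candidates, from which the offset prediction value of the current block is then calculated (S640).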
As shown in fig. 9, a third embodiment of an inter prediction method based on advanced motion vector expression according to the present application includes:
S710: A base motion information list is constructed for the current block based on the candidate motion information of the current block; the base motion information list comprises at least one piece of base motion information, the sources of the candidate motion information include a temporal candidate block of the current block, and the temporal candidate block comprises the temporal co-located block of at least one sub-block of the current block.
The temporal candidate block comprises the temporal co-located block of at least one sub-block of the current block, i.e., the block at the same position as the sub-block of the current block in a given reference frame, typically the first frame in the reference list.
In the embodiment of the present application, the motion information of the temporal co-located blocks of at least two sub-blocks of the current block may be averaged to serve as candidate motion information of the current block, so that the obtained candidate motion information of the current block is more accurate.
By adding the motion information of the temporal co-located block of at least one sub-block of the current block when constructing the base motion information, the situation where the base motion information list is not fully filled can be improved, and the sources of base motion information for constructing the base motion information list become wider.
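The sub-block averaging described above can be sketched as follows; floor division stands in for an unspecified rounding rule, and the function name is an illustrative assumption.

```python
def temporal_candidate(subblock_mvs):
    """Average the motion vectors of the temporal co-located blocks of
    at least two sub-blocks of the current block; the result serves as
    one piece of candidate motion information."""
    n = len(subblock_mvs)
    x = sum(mv[0] for mv in subblock_mvs) // n   # floor division; illustrative
    y = sum(mv[1] for mv in subblock_mvs) // n
    return (x, y)
```

The averaged vector smooths out per-sub-block noise in the co-located motion field, which is why the patent argues it yields more accurate candidate motion information.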
S720: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
For further details of the steps of this embodiment, reference is made to the previous embodiment, which is not repeated here.
S730: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
For further details of the steps of this embodiment, reference is made to the previous embodiment, which is not repeated here.
S740: an offset predictor of the current block is calculated using the plurality of offset motion information.
As shown in fig. 10, a fourth embodiment of the inter prediction method based on advanced motion vector expression of the present application includes:
S810: A base motion information list is constructed for the current block based on the candidate motion information of the current block, the base motion information list including at least one piece of base motion information, and the sources of the candidate motion information including at least one historical motion vector.
In the embodiment of the application, historical motion information is added when the basic motion information list is constructed, so that the basic motion information in the list has wider sources. For example, the sources of the basic motion information may include motion information of spatial candidate blocks, motion information of temporal candidate blocks, zero motion information, and historical motion information, and these sources may fill the basic motion information list in any order, for example: historical motion information, motion information of spatial candidate blocks, motion information of temporal candidate blocks, and zero motion information; or motion information of temporal candidate blocks, motion information of spatial candidate blocks, historical motion information, and zero motion information; or zero motion information, motion information of spatial candidate blocks, motion information of temporal candidate blocks, and historical motion information.
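A minimal sketch of this configurable fill order, assuming hypothetical per-source candidate lists, a simple duplicate check, and an illustrative list length (none of these details are fixed by the text):

```python
def build_base_motion_list(sources, order, max_len=4):
    """Fill the base motion information list from several candidate sources in
    a configurable order, skipping exact duplicates.

    sources: dict mapping source name -> list of candidate MVs (illustrative);
    order: iterable of source names giving the fill order.
    """
    base_list = []
    for name in order:
        for mv in sources.get(name, []):
            if mv in base_list:
                continue  # repetition check: keep each candidate only once
            base_list.append(mv)
            if len(base_list) == max_len:
                return base_list
    return base_list
```

Changing `order` reproduces the alternative orderings described above without changing the construction logic.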
S820: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
S830: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
S840: an offset predictor of the current block is calculated using the plurality of offset motion information.
As shown in fig. 11, a fifth embodiment of the inter prediction method based on advanced motion vector expression of the present application includes:
S910: a base motion information list is constructed for the current block, the base motion information list including at least one base motion information.
S911: and scaling the basic motion information to a specified precision, and filling the basic motion information scaled to the specified precision into the basic motion information list.
In an embodiment of the present application, the basic motion information is scaled to a specified precision to construct the basic motion information list: a piece of basic motion information is scaled down or up to form new basic motion information that is added to the list, which mitigates the case where the basic motion information list is not fully filled.
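A sketch of forming new base motion information by power-of-two precision scaling; the actual precision values and the round-to-nearest rule are not specified by the text and are assumptions here:

```python
def scale_mv(mv, shift):
    """Scale a motion vector by 2**shift: positive shift enlarges it,
    negative shift shrinks it with round-to-nearest (assumed rounding)."""
    x, y = mv
    if shift >= 0:
        return (x << shift, y << shift)
    s = -shift
    r = 1 << (s - 1)  # rounding offset for the downscale
    return ((x + r) >> s, (y + r) >> s)
```

The scaled vector would then be appended to the base motion information list if it is not already present.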
S920: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
S930: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
S940: an offset predictor of the current block is calculated using the plurality of offset motion information.
As shown in fig. 12, a sixth embodiment of the inter prediction method based on advanced motion vector expression of the present application includes:
S1010: a base motion information list is constructed for the current block, the base motion information list including at least one base motion information.
S1020: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
S1030: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
S1040: an offset predictor of the current block is calculated using the plurality of offset motion information.
S1050: and carrying out rough selection on the plurality of offset motion information according to the offset prediction values to obtain rough selection results.
In the embodiment of the present application, the offset prediction value of each offset motion information is encoded at a fixed precision, for example a fixed 1/4-pel precision, and a plurality of best results are roughly selected, for example the 2 best results.
S1060: and taking the roughing result as a starting point, and performing motion search according to preset different motion vector precisions to obtain a plurality of search predicted values.
In one embodiment, the roughly selected best results, e.g., the 2 best results, are each refined over several precisions, e.g., 5 precisions: 1/4-, 1/2-, 1-, 2- and 4-pel, and a motion search is performed at each precision to obtain a plurality of search prediction values, which improves accuracy and reduces distortion.
S1070: and selecting the motion information corresponding to the search prediction value with the minimum evaluation index as the high-level motion vector expression motion information of the current block.
According to the method and the device, roughly selecting among the plurality of offset motion information according to the offset prediction values reduces the amount of computation, and performing motion search at different preset motion vector precisions starting from the rough selection result improves the accuracy of motion information prediction.
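The two-stage procedure of S1050-S1070 can be sketched as follows, with `cost_fn` standing in for the evaluation index (e.g. SAD) and `refine_fn` for a motion search at one precision; both are placeholders, not patent-defined functions:

```python
def two_stage_select(candidates, cost_fn, refine_fn,
                     precisions=(0.25, 0.5, 1, 2, 4), keep=2):
    """Roughly select the `keep` best candidates at a fixed precision, then
    refine each over several motion vector precisions and return the refined
    result with the smallest cost."""
    coarse = sorted(candidates, key=cost_fn)[:keep]  # rough selection
    best, best_cost = None, float("inf")
    for start in coarse:
        for prec in precisions:
            refined = refine_fn(start, prec)  # motion search at this precision
            c = cost_fn(refined)
            if c < best_cost:
                best, best_cost = refined, c
    return best
```

Only the `keep` coarse winners are refined, which is where the computation saving described above comes from.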
As shown in fig. 13, a seventh embodiment of an inter prediction method based on advanced motion vector expression according to the present application includes:
S1110: a base motion information list is constructed for the current block, the base motion information list including at least one base motion information.
S1120: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
Specific ways of determining this can be found in other examples in the present application.
S1130: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
S1140: an offset predictor of the current block is calculated using the plurality of offset motion information.
S1150: target motion information is selected from the plurality of offset motion information according to the offset prediction value.
The motion information includes motion vectors. The motion information may include forward motion information and backward motion information; the forward motion information may include a forward motion vector and a forward reference frame index, and the backward motion information may include a backward motion vector and a backward reference frame index. In the embodiment of the application, at least one piece of optimal offset motion information can be selected as the target motion information.
S1160: and correcting the motion vector in the target motion information by using the plurality of corrected motion vectors to obtain a plurality of corrected motion information.
In the case where the picture display order of the forward reference frame and the picture display order of the backward reference frame of the current block are the same, the motion vector of the target motion information of the current block can be corrected using the decoder-side motion vector refinement (DMVR) technique.
Specifically, in B-frame bidirectional prediction, a forward first prediction block is obtained by forward prediction and a backward first prediction block is obtained by backward prediction, and a plurality of modified motion vectors of the current block can be searched within the search ranges of the forward/backward first prediction blocks.
The search range of the forward first prediction block may be an N×N pixel range centered on a vertex of the forward first prediction block (the point where the upper-left pixel is located). During the search, all pixel points in the search range may be traversed in a predetermined order, for example in raster-scan order from left to right and from top to bottom, to obtain N² forward modified motion vectors ΔMV1, where ΔMV1 is a motion vector pointing from the vertex to a search pixel, or ΔMV1 is the motion vector pointing from the forward first prediction block to the forward second prediction block (see the description below).
The search range of the backward first prediction block may be an N×N pixel range centered on a vertex of the backward first prediction block (the point where the lower-right pixel is located). During the search, all pixel points in the search range may be traversed in a predetermined order, for example in raster-scan order from right to left and from bottom to top, to obtain N² backward modified motion vectors ΔMV2, where ΔMV2 is a motion vector pointing from the vertex to a search pixel, or ΔMV2 is the motion vector pointing from the backward first prediction block to the backward second prediction block.
For simplicity of description, the forward modified motion vector Δ MV1 searched in the search range of the forward first prediction block will be described with reference to fig. 14 as an example:
After finding the search point C, the forward first prediction block is translated so that the vertex A coincides with the search point C to obtain the forward second prediction block; alternatively, when traversing to point C, a forward second prediction block with point C as its vertex and the same size as the current block is formed, and its prediction value is calculated. ΔMV1 is the motion vector pointing from vertex A to search point C, or equivalently the motion vector pointing from the forward first prediction block at vertex A to the forward second prediction block at search point C.
The forward correction motion vector and the backward correction motion vector corresponding to each group are equal in size and opposite in direction.
The forward motion vector of the current block can be corrected by using the forward correction motion vector to obtain a corrected forward motion vector, and the backward motion vector of the current block can be corrected by using the backward correction motion vector to obtain a corrected backward motion vector.
For example, if the motion vector of the current block is (MV1, MV2), the modified motion vector of the current block is (MV1+ΔMV1, MV2+ΔMV2).
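Since the forward and backward corrections of each group are equal in magnitude and opposite in direction, applying a found forward delta ΔMV1 can be sketched as follows (a minimal illustration; the MV tuple representation is hypothetical):

```python
def apply_symmetric_correction(mv_fwd, mv_bwd, delta_fwd):
    """Correct a bidirectional MV pair with a symmetric delta:
    dMV2 = -dMV1, so the corrected pair is (MV1 + dMV1, MV2 - dMV1)."""
    dx, dy = delta_fwd
    return ((mv_fwd[0] + dx, mv_fwd[1] + dy),
            (mv_bwd[0] - dx, mv_bwd[1] - dy))
```

Each searched ΔMV1 thus yields one candidate corrected pair, and the pair with the smallest evaluation index is kept.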
S1170: an offset predictor of the current block is calculated using the plurality of modified motion information.
The offset prediction value is obtained by performing motion compensation by using the offset motion information.
S1180: and selecting the motion information corresponding to the offset predicted value with the minimum evaluation index as the high-level motion vector expression motion information of the current block.
The evaluation index may include the mean absolute difference (MAD), the sum of absolute differences (SAD), the sum of squared differences (SSD), the mean squared difference (MSD), normalized cross-correlation (NCC), the sequential similarity detection algorithm (SSDA), or the sum of absolute transformed differences (SATD, based on the Hadamard transform); the sum of absolute differences (SAD) is used as the example in the embodiment of the present application.
Based on Sum of Absolute Differences (SAD), obtaining an evaluation index between the forward offset predicted value and the backward offset predicted value of the current block, and selecting the corrected motion vector information corresponding to the forward offset predicted value and the backward offset predicted value with the minimum SAD as the high-level motion vector expression motion information of the current block.
As shown in fig. 15, an eighth embodiment of the inter-frame prediction method based on advanced motion vector expression of the present application includes:
S1210: a base motion information list is constructed for the current block, the base motion information list including at least one base motion information.
S1220: and acquiring a plurality of offset initial values by using the offset distance list. If the motion vector in the basic motion information is a bidirectional motion vector, executing S1221; if the motion vector in the base motion information is one-way, the process goes to S1222.
S1221: calculating a first picture order count (POC) difference and a second POC difference between the current frame and the two reference frames, wherein the first POC difference is greater than or equal to the second POC difference; scaling the offset initial value by the first POC difference to obtain a first offset value, and scaling it by the second POC difference to obtain a second offset value, wherein the first offset value is greater than or equal to the offset initial value, the second offset value is less than or equal to the offset initial value, and the first and second offset values form the motion vector offset value.
The MV offset value (ref_mvd) is calculated in different ways depending on the availability of the two directional MVs of each base MV (umve_base_pmv):
When both MVs are available, the MV offset value ref_mvd is calculated from the POCs of the reference frames pointed to by the two MVs; note that the two MV directions are allowed here to be different or the same.
The weight corresponding to the MV of the direction with the larger POC difference delta_poc is set to a fixed value (1 << MV_SCALE_PREC), where delta_poc is the difference between the POC of the reference frame pointed to by the MV and the POC of the current frame.
Regarding the calculation of delta_poc:
delta_poc1=abs(poc1-cur_poc);
delta_poc0=abs(poc0-cur_poc);
where poc0 is the POC of the reference frame in the first reference frame list pointed to by the MV; poc1 is the POC of the reference frame in the second reference frame list pointed to by the MV; cur_poc is the POC of the current frame.
Here MV_SCALE_PREC is 14. The MV of the other direction, which has the smaller delta_poc, is assigned a smaller weight; thus the calculated MV offset value ref_mvd for the smaller delta_poc is smaller:
ref_mvd = ref_mvd * weight, where ref_mvd initially takes the values of Table 1 expanded by a factor of 4.
For example:
if abs(poc1-cur_poc) >= abs(poc0-cur_poc), the weight is:
list0_weight=(1<<MV_SCALE_PREC)/(abs(poc1-cur_poc))*abs(poc0-cur_poc);
first reference frame list direction MV offset value:
ref_mvd0=(list0_weight*ref_mvd0+(1<<(MV_SCALE_PREC-1)))>>MV_SCALE_PREC;
In another embodiment of the present application, the weight corresponding to the MV of the direction with the smaller POC difference delta_poc, where delta_poc is the difference between the POC of the reference frame pointed to by the MV and the POC of the current frame, is instead set to the fixed value (1 << MV_SCALE_PREC).
Regarding the calculation of delta_poc:
delta_poc1=abs(poc1-cur_poc);
delta_poc0=abs(poc0-cur_poc);
where poc0 is the POC of the reference frame in the first reference frame list pointed to by the MV; poc1 is the POC of the reference frame in the second reference frame list pointed to by the MV; cur_poc is the POC of the current frame.
Here MV_SCALE_PREC is 14.
The MV of the other direction, which has the larger delta_poc, is assigned a larger weight; thus the calculated MV offset value ref_mvd for the larger delta_poc is larger.
ref_mvd = ref_mvd * weight, where ref_mvd initially takes the values of Table 1 expanded by a factor of 4.
For example:
if abs(poc1-cur_poc) <= abs(poc0-cur_poc), the weight is:
list0_weight=(1<<MV_SCALE_PREC)/(abs(poc1-cur_poc))*abs(poc0-cur_poc);
first reference frame list direction MV offset value:
ref_mvd0=(list0_weight*ref_mvd0+(1<<(MV_SCALE_PREC-1)))>>MV_SCALE_PREC;
In another embodiment, neither end is fixed: based on the original offset value, the MV offset value of the direction with the smaller delta_poc is scaled, for example to be smaller than the original MV offset value, and the MV offset value of the direction with the larger delta_poc is scaled, for example to be larger than the original MV offset value.
And (3) weighting:
list1_weight=(1<<MV_SCALE_PREC)/(abs(poc0-cur_poc))*abs(poc1-cur_poc);
list0_weight=(1<<MV_SCALE_PREC)/(abs(poc1-cur_poc))*abs(poc0-cur_poc).
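Putting the weight formulas together, the scaling of an initial offset for the direction with the smaller POC difference can be sketched as follows; the integer arithmetic mirrors the C-style expressions above, with `//` replacing C integer division:

```python
MV_SCALE_PREC = 14

def scale_offset(ref_mvd, delta_poc_small, delta_poc_large):
    """Scale the initial MV offset of the smaller-POC-difference direction;
    the larger-POC-difference direction keeps the fixed weight
    1 << MV_SCALE_PREC, i.e. its offset is unchanged."""
    weight = (1 << MV_SCALE_PREC) // delta_poc_large * delta_poc_small
    # round-to-nearest back down to MV units
    return (weight * ref_mvd + (1 << (MV_SCALE_PREC - 1))) >> MV_SCALE_PREC
```

For example, with delta_poc values 4 and 8 an initial offset of 16 is halved to 8, matching the intent that the nearer reference frame receives the smaller offset.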
S1222: the offset initial value is used as the motion vector offset value.
S1223: determining a plurality of offset motion vectors by combining the motion vector offset value with the plurality of offset directions in the offset direction list.
S1230: shifting the basic motion vector in each piece of basic motion information by using the plurality of motion vector offset values to obtain a plurality of pieces of offset motion information.
S1240: an offset predictor of the current block is calculated using the plurality of offset motion information.
According to the scheme of the embodiment of the application, the weights are more accurate, so the obtained offset prediction value is more accurate.
As shown in fig. 16, a ninth embodiment of the inter prediction method based on advanced motion vector expression of the present application includes:
S1310: a base motion information list is constructed for the current block, the base motion information list including at least one base motion information.
S1320: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
S1330: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
S1340: calculating an offset predictor of the current block using the plurality of offset motion information.
S1350: and acquiring the high-level motion vector expression motion information of the current block from the plurality of offset motion information according to the offset prediction value.
S1360: and performing inter-frame filtering on the predicted value corresponding to the high-level motion vector expression motion information.
Inter prediction filtering removes the spatial discontinuity between a prediction block and its surrounding pixels caused by inter prediction. It is divided into two modes: inter prediction filtering (inter prediction filter) and enhanced inter prediction filtering. The two modes are described separately below.
In inter prediction filtering, the filtering is performed between the inter prediction process and the reconstruction process. An inter prediction filtering flag is transmitted in the bitstream to indicate whether the current block uses inter prediction filtering. If the flag indicates that inter prediction filtering is used, the decoder filters the inter prediction block with the same filtering method after obtaining it by motion compensation; otherwise the reconstruction process is invoked directly to superimpose the residual.
The inter prediction filtering first constructs an intra prediction value from the adjacent reconstructed reference pixels above, to the left, at the lower left and at the upper right of the current pixel, and then weights the intra prediction value with the inter prediction value to obtain the final inter prediction value.
the method specifically comprises two processes:
The intra prediction block Pred_Q is first obtained using the following equations:
Pred_Q(x,y)=(Pred_V(x,y)+Pred_H(x,y)+1)>>1;
Pred_V(x,y)=((h-1-y)*Recon(x,-1)+(y+1)*Recon(-1,h)+(h>>1))>>log2(h);
Pred_H(x,y)=((w-1-x)*Recon(-1,y)+(x+1)*Recon(w,-1)+(w>>1))>>log2(w);
where w and h are the width and height of the current block, x and y are the relative coordinates within the current block, and Recon (x, y) is the surrounding reconstructed pixel values.
The final prediction block is obtained by weighting the inter prediction block Pred_inter and the intra prediction block Pred_Q at a ratio of 5:3:
Pred(x,y)=(Pred_inter(x,y)*5+Pred_Q(x,y)*3+4)>>3;
Where Pred_inter(x, y) is the predicted pixel value from inter prediction.
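A direct transcription of these formulas; the neighbour naming is an assumption (`top[x]` = Recon(x,-1), `left[y]` = Recon(-1,y), `bottom_left` = Recon(-1,h), `top_right` = Recon(w,-1)), and Pred_Q averages the two already pixel-scale predictors with a shift of 1:

```python
def interpf_filter(pred_inter, top, left, bottom_left, top_right, w, h):
    """Blend a planar-style intra predictor with the inter predictor 3:5.

    pred_inter: h x w list of inter-predicted pixel values (illustrative layout).
    """
    log2w, log2h = w.bit_length() - 1, h.bit_length() - 1  # w, h powers of two
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            pred_v = ((h - 1 - y) * top[x] + (y + 1) * bottom_left + (h >> 1)) >> log2h
            pred_h = ((w - 1 - x) * left[y] + (x + 1) * top_right + (w >> 1)) >> log2w
            pred_q = (pred_v + pred_h + 1) >> 1  # average of the two predictors
            out[y][x] = (pred_inter[y][x] * 5 + pred_q * 3 + 4) >> 3
    return out
```

As a sanity check, when all neighbours and the inter prediction share one value, the filtered output keeps that value.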
Enhanced inter prediction filtering (enhanced interpf) is applied in inter prediction and can effectively remove noise and discontinuity after motion compensation.
A 3-tap filter mode is added to the inter prediction filtering. The inter prediction filtering flag still indicates whether inter prediction filtering is used, and an additional index flag indicates the selected filtering mode, so that the decoder can decode the filtering flag and the filtering mode index to determine the final filtering process.
The specific filtering method comprises the following steps:
P’(x,y)=f(x)*P(-1,y)+f(y)*P(x,-1)+(1-f(x)-f(y))*P(x,y);
wherein:
P(x, y) is the predicted value of point (x, y) before filtering, and P'(x, y) is the predicted value of point (x, y) after filtering;
f(x) and f(y) are filter coefficients obtained by looking up the following table, where P(x, -1) is the reference pixel directly above and P(-1, y) is the reference pixel directly to the left.
The inter prediction filtering mode index syntax and meaning are as shown in table 4.
Table 3. Filter coefficient table.
Block width/height    Filter coefficients (decreasing with row/column index)
4                     24, 6, 2, 0
8                     44, 25, 14, 8, 4, 2, 1, 1
16                    40, 27, 19, 13, 9, 6, 4, 3, 2, 1
32                    36, 27, 21, 16, 12, 9, 7, 5, 4, 3
64                    52, 44, 37, 31, 26, 22, 18, 15, 13, 11
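A sketch of filtering one pixel with these coefficients; the normalization (coefficients interpreted in 1/128 units) and the use of coefficient 0 for positions beyond the listed entries are assumptions, since the text does not state them:

```python
FILTER_COEFF = {
    4: [24, 6, 2, 0],
    8: [44, 25, 14, 8, 4, 2, 1, 1],
    16: [40, 27, 19, 13, 9, 6, 4, 3, 2, 1],
    32: [36, 27, 21, 16, 12, 9, 7, 5, 4, 3],
    64: [52, 44, 37, 31, 26, 22, 18, 15, 13, 11],
}

def three_tap_filter(pred, left_ref, top_ref, x, y, w, h):
    """P'(x,y) = f(x)*P(-1,y) + f(y)*P(x,-1) + (1-f(x)-f(y))*P(x,y),
    with f(x), f(y) read from the table by distance from the left/top edge."""
    fx = FILTER_COEFF[w][x] if x < len(FILTER_COEFF[w]) else 0
    fy = FILTER_COEFF[h][y] if y < len(FILTER_COEFF[h]) else 0
    return (fx * left_ref + fy * top_ref + (128 - fx - fy) * pred + 64) >> 7
```

The influence of the left and top reference pixels decays with distance from the block edge, which is what removes the boundary discontinuity.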
Table 4 inter prediction filtering syntax.
As shown in fig. 17, a tenth embodiment of an inter prediction method based on advanced motion vector expression according to the present application includes:
S1410: a base motion information list is constructed for the current block, the base motion information list including at least one base motion information.
S1420: determining a plurality of offset motion vectors using the offset distance list and the offset direction list.
S1430: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
S1440: an offset predictor of the current block is calculated using the plurality of offset motion information.
S1450: and acquiring high-level motion vector expression motion information of the current block from a plurality of offset motion information according to the offset prediction value.
S1460: and performing bidirectional gradient correction on the predicted value corresponding to the high-level motion vector expression motion information.
Motion compensation is performed on the obtained advanced motion vector expression motion information to obtain a prediction value, and then bidirectional gradient correction (BGC) is performed on the prediction value to obtain the best prediction value. Note the BGC-related syntax: whether BGC is enabled, and which BGC mode is selected for prediction to obtain the prediction value.
In bidirectional inter prediction, the current prediction block is usually obtained from the two motion-compensated reference prediction blocks by bidirectional optical flow (BIO) or an ordinary weighted average. However, the errors of the two reference prediction blocks relative to the current block differ randomly, so the simple average of the two prediction blocks is not necessarily the best choice; a bidirectional gradient correction (BGC) mode is therefore adopted.
The predicted value correction calculation mainly has three modes, and the specific formula is as follows:
pred = predBI, if IbgFlag is 0;
pred = predBI + ((pred1 - pred0) >> k) or predBI + ((pred0 - pred1) >> k), if IbgFlag is 1, depending on IbgIdx.
wherein: pred0 denotes the prediction value in the first reference frame direction; pred1 denotes the prediction value in the second reference frame direction; predBI denotes the average of the prediction values of the two reference frame directions, calculated as (pred0+pred1)>>1; k denotes the correction strength and is set to a fixed value of 3; IbgFlag equal to 0 indicates that no gradient correction is performed, and IbgFlag equal to 1 indicates that gradient correction is performed; IbgIdx equal to 0 indicates forward gradient correction, and IbgIdx equal to 1 indicates backward gradient correction; pred is the corrected prediction value.
By comparing the costs of the three calculation modes, the optimal mode is selected for coding, and the syntax identifying the mode is written into the bitstream. A schematic diagram is shown in fig. 18, where V2 is predBI; V1 is predBI + ((pred1-pred0) >> k); V3 is predBI + ((pred0-pred1) >> k).
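The three candidate values V2, V1, V3 can be sketched as follows (the mapping of IbgIdx values to the two correction signs follows the V1/V3 labels and is an assumption here):

```python
def bgc_candidates(pred0, pred1, k=3):
    """Return the three BGC candidate values (V2, V1, V3): the plain average
    and the two gradient-corrected predictions; the encoder picks the one
    with minimum cost and signals the choice in the bitstream."""
    pred_bi = (pred0 + pred1) >> 1            # V2: ordinary average
    v1 = pred_bi + ((pred1 - pred0) >> k)     # correction toward pred1
    v3 = pred_bi + ((pred0 - pred1) >> k)     # correction toward pred0
    return pred_bi, v1, v3
```

With k = 3, the correction nudges the average by one eighth of the difference between the two directional predictions.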
As shown in fig. 19, an eleventh embodiment of an inter prediction method based on advanced motion vector expression according to the present application includes:
S1510: a base motion information list is constructed for the current block, the base motion information list including at least one base motion information.
S1511: and judging whether the current frame of the current block meets the list updating condition.
It is judged whether the interval between the current frame where the current block is located and the last update frame reaches a specified number of frames; for example, with the specified number N = 25, it is judged whether the interval is 25 frames, and if the interval is less than 25 frames, no update is performed. In other embodiments, the specified number of frames may be set as needed.
If yes, S1521: and counting the average value of the offset values of all the blocks in the previously specified number of frames, and determining an offset distance list by using the average value.
If the interval between the current frame where the current block is located and the last update frame reaches the specified number of frames, that is, in the embodiment of the present application, if the interval equals 25 frames, the offset values of the blocks in those 25 frames are counted and their average value is calculated.
S1520: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
S1530: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
S1540: an offset predictor of the current block is calculated using the plurality of offset motion information.
In the prior art, the average of the statistical offset values is updated every frame, so the statistics are time-consuming. By updating the offset distance list only once every specified number of frames, the present application reduces the statistical workload caused by per-frame updates, which effectively reduces the statistical computation of inter prediction and improves inter prediction efficiency.
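The update condition of S1511/S1521 can be sketched as follows; deriving the new offset distance list from the mean (here, scaling a base list) is an illustrative assumption, not the patent's stated rule:

```python
def update_offset_list(frame_idx, last_update_idx, block_offsets, interval=25):
    """Return a new offset distance list only when `interval` frames have
    elapsed since the last update; otherwise return None (keep the old list).

    block_offsets: offset values recorded for blocks since the last update.
    """
    if frame_idx - last_update_idx < interval:
        return None
    mean = sum(block_offsets) / len(block_offsets)
    return [mean * s for s in (0.25, 0.5, 1, 2, 4)]  # illustrative derivation
```

Frames between updates simply reuse the previously derived list, which is where the statistical saving comes from.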
As shown in fig. 20, a twelfth embodiment of an inter prediction method based on advanced motion vector expression according to the present application includes:
S1610: a base motion information list is constructed for the current block, the base motion information list including at least one base motion information.
S1611: and comparing the value of the motion vector in the basic motion information in at least one direction with a preset threshold value to obtain a comparison result.
In this embodiment of the present application, the motion vector in the basic motion information has components in both the X and Y directions. For example, the magnitude in the X direction is compared with a preset threshold; the preset threshold may be set as needed, and its value is not limited by this embodiment. Specifically, with a preset threshold of 8, the magnitude of the X component of the motion vector in the basic motion information is compared with 8 to judge whether it is greater than the preset threshold.
In other embodiments, the magnitude of the motion vector in the Y direction in the basic motion information may also be compared with a preset threshold 8 to obtain a comparison result. Or simultaneously, the amplitudes of the motion vectors in the X direction and the Y direction in the basic motion information are respectively compared with a preset threshold value 8 to obtain a comparison result.
More specifically, in the embodiment of the present application, the values of the motion vector of the basic motion information in the X or Y direction may be classified, for example into a preset number Q of segments (Q >= 2). In the above embodiment they are divided into 2 segments with one preset threshold, each segment corresponding to an offset distance list; in other embodiments they may be divided into 3 segments with two preset thresholds, each segment again corresponding to one offset distance list.
S1612: and determining an offset distance list according to the comparison result.
For example, in the embodiment of the present application, if the magnitude of the motion vector in the base motion information in the X direction is smaller than or equal to the preset threshold 8, the list with the smaller offset value is selected, such as {1/4,1/2,1,2,4 }.
If the magnitude of the motion vector in the base motion information in the X direction is greater than the preset threshold 8, a list with a larger offset value, such as {1/4,1/2,1,2,4,8,16,32}, is selected.
And/or, if the magnitude of the motion vector in the basic motion information in the Y direction is less than or equal to the preset threshold 8, the list with smaller offset values, such as {1/4,1/2,1,2,4}, is selected.
If the magnitude of the motion vector in the basic motion information in the Y direction is greater than the preset threshold 8, a list with a larger offset value, such as {1/4,1/2,1,2,4,8,16,32}, is selected.
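The selection rule of S1611/S1612 can be sketched as follows; the threshold 8 and the two example lists come from the text, while checking both components together is one of the described variants:

```python
def select_offset_distance_list(mv, threshold=8):
    """Pick the smaller offset distance list when both MV components are
    within the threshold, and the larger list when either exceeds it."""
    small = [0.25, 0.5, 1, 2, 4]
    large = [0.25, 0.5, 1, 2, 4, 8, 16, 32]
    if abs(mv[0]) > threshold or abs(mv[1]) > threshold:
        return large
    return small
```

Because the rule depends only on the base MV, the decoder can derive the same list without any extra transmitted syntax.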
S1620: a plurality of offset motion vectors are determined using the offset distance list and the offset direction list.
S1630: and offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
S1640: an offset predictor of the current block is calculated using the plurality of offset motion information.
According to the method and the device, selecting among different offset distance lists according to the magnitude of the motion vector of the basic motion information to correct that motion vector can improve the correction accuracy and thus the accuracy of the prediction value, and the inter prediction method needs no extra transmitted syntax.
Fig. 21 is a schematic structural diagram of a first embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 21, the apparatus may include: a construction module 11, an offset motion vector determination module 12, an offset motion information acquisition module 13 and a calculation module 14.
The construction module 11 is configured to construct a base motion information list for the current block using enhanced duplicate checking, where the base motion information list includes at least one piece of base motion information.
The offset motion vector determination module 12 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 13 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
The calculation module 14 is configured to calculate an offset predictor of the current block using the plurality of offset motion information.
Fig. 22 is a schematic structural diagram of a second embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 22, the apparatus may include: a construction module 21, a filling module 25, an offset motion vector determining module 22, an offset motion information obtaining module 23 and a calculating module 24.
The construction module 21 is configured to construct a base motion information list for the current block, where the base motion information list includes at least two pieces of base motion information.
The filling module 25 is configured to fill the base motion information list with new base motion information calculated by using at least two base motion information.
The offset motion vector determination module 22 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 23 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors, so as to obtain a plurality of pieces of offset motion information.
The calculation module 24 is configured to calculate an offset predictor of the current block using the plurality of offset motion information.
Fig. 23 is a schematic structural diagram of a third embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 23, the apparatus may include: a construction module 31, an offset motion vector determination module 32, an offset motion information acquisition module 33, and a calculation module 34.
The construction module 31 is configured to construct a base motion information list for the current block based on the candidate motion information of the current block, where the base motion information list includes at least one piece of base motion information, a source of the candidate motion information includes a temporal candidate block of the current block, and the temporal candidate block includes a temporal co-located block of at least one sub-block of the current block.
Offset motion vector determination module 32 is operative to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 33 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
The calculation module 34 is configured to calculate an offset predictor of the current block using the plurality of offset motion information.
Fig. 24 is a schematic structural diagram of a fourth embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 24, the apparatus may include: a construction module 41, an offset motion vector determination module 42, an offset motion information acquisition module 43 and a calculation module 44.
The construction module 41 is configured to construct a base motion information list for the current block based on the candidate motion information of the current block, where the base motion information list includes at least one base motion information, and a source of the candidate motion information includes at least one historical motion vector.
The offset motion vector determination module 42 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 43 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors, so as to obtain a plurality of pieces of offset motion information.
The calculation module 44 is configured to calculate an offset predictor of the current block using the plurality of offset motion information.
Fig. 25 is a schematic structural diagram of a fifth embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 25, the apparatus may include: a construction module 51, a padding module 55, an offset motion vector determination module 52, an offset motion information acquisition module 53 and a calculation module 54.
The construction module 51 is configured to construct a base motion information list for the current block, where the base motion information list includes at least one base motion information.
The padding module 55 is configured to scale the base motion information to a specified precision, and pad the scaled base motion information into a base motion information list.
The offset motion vector determination module 52 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 53 is configured to offset the base motion vector in each piece of base motion information by using a plurality of offset motion vectors, so as to obtain a plurality of pieces of offset motion information.
The calculation module 54 is configured to calculate an offset predictor of the current block using the plurality of offset motion information.
Fig. 26 is a schematic structural diagram of a sixth embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 26, the apparatus may include: a construction module 61, an offset motion vector determination module 62, an offset motion information acquisition module 63, a calculation module 64, a roughing module 65, a search predictor acquisition module 66, and an advanced motion vector expression motion information acquisition module 67.
The construction module 61 is configured to construct a base motion information list for the current block, where the base motion information list includes at least one base motion information.
The offset motion vector determination module 62 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 63 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors, so as to obtain a plurality of pieces of offset motion information.
The calculation module 64 is used to calculate an offset predictor of the current block using a plurality of offset motion information.
The rough selection module 65 is configured to perform rough selection on the multiple offset motion information according to the offset prediction value to obtain a rough selection result.
The search prediction value obtaining module 66 is configured to perform motion search according to preset different motion vector precisions with the rough selection result as a starting point to obtain a plurality of search prediction values.
The advanced motion vector expression motion information acquisition module 67 is configured to select the motion information corresponding to the search prediction value having the smallest evaluation index as the advanced motion vector expression motion information of the current block.
Fig. 27 is a schematic structural diagram of a seventh embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 27, the apparatus may include: a construction module 71, an offset motion vector determination module 72, an offset motion information acquisition module 73, a calculation module 74, a selection module 75, a correction module 76, a correction calculation module 77, and an advanced motion vector expression motion information acquisition module 78.
The construction module 71 is configured to construct a base motion information list for the current block, where the base motion information list includes at least one base motion information.
The offset motion vector determination module 72 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 73 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors, so as to obtain a plurality of pieces of offset motion information.
The calculation module 74 is configured to calculate an offset predictor of the current block using the plurality of offset motion information.
The selection module 75 is configured to select the target motion information from the plurality of offset motion information according to the offset prediction value.
The correcting module 76 is configured to correct the motion vector in the target motion information by using a plurality of corrected motion vectors, so as to obtain a plurality of corrected motion information.
The modified calculating module 77 is used for calculating a modified prediction value of the current block using the plurality of modified motion information.
The advanced motion vector expression motion information obtaining module 78 is configured to select the motion information corresponding to the modified prediction value with the smallest evaluation index as the advanced motion vector expression motion information of the current block.
Fig. 28 is a schematic structural diagram of an eighth embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 28, the apparatus may include: a building module 81, an offset initial value obtaining module 85, a motion vector offset value determining module 86, an offset motion vector determining module 82, an offset motion information obtaining module 83, and a calculating module 84.
The construction module 81 is configured to construct a base motion information list for the current block, where the base motion information list includes at least one base motion information.
The offset initial value obtaining module 85 is configured to obtain a plurality of offset initial values by using the offset distance list.
The motion vector offset value determining module 86 is configured to, when the motion vector in the basic motion information is a bidirectional motion vector, scale the offset initial value by using the image display order difference to obtain the motion vector offset value; and when the motion vector in the basic motion information is unidirectional, use the offset initial value as the motion vector offset value.
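The scaling performed by modules 85 and 86 can be sketched as follows for one MV component. This is an illustrative sketch: the particular linear rule of keeping the offset on the reference with the larger picture-display-order distance and scaling the other proportionally is an assumption consistent with common designs, as the text only states that the image display order difference is used.

```python
def scale_offset(offset, cur_poc, ref_poc_l0, ref_poc_l1=None):
    """Return (offset_l0, offset_l1) for a bidirectional MV, else one offset.

    cur_poc / ref_poc_*: image display order of the current frame and of the
    reference frames in the two reference frame directions.
    """
    if ref_poc_l1 is None:      # unidirectional: the initial value is used as-is
        return offset
    d0 = cur_poc - ref_poc_l0   # signed display-order distance to each reference
    d1 = cur_poc - ref_poc_l1
    # Keep the offset on the farther reference; scale the other one in
    # proportion to its display-order distance (sign included, so references
    # on opposite sides of the current frame get opposite-sign offsets).
    if abs(d0) >= abs(d1):
        return offset, offset * d1 / d0
    return offset * d0 / d1, offset
```

For example, with the current frame midway between its two references, the backward offset is simply the mirrored forward offset.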
The offset motion vector determination module 82 is configured to determine a plurality of offset motion vectors in combination with the motion vector offset value and the plurality of offset directions in the list of offset directions.
The offset motion information obtaining module 83 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of motion vector offset values to obtain a plurality of pieces of offset motion information.
The calculation module 84 is used to calculate an offset predictor of the current block using a plurality of offset motion information.
Fig. 29 is a schematic structural diagram of a ninth embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 29, the apparatus may include: a construction module 91, an offset motion vector determination module 92, an offset motion information acquisition module 93, a calculation module 94, an advanced motion vector expression motion information acquisition module 95, and a filtering module 96.
The construction module 91 is configured to construct a base motion information list for the current block, where the base motion information list includes at least one base motion information.
The offset motion vector determination module 92 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 93 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
The calculation module 94 is configured to calculate an offset predictor of the current block using the plurality of offset motion information.
The advanced motion vector expression motion information obtaining module 95 is configured to obtain advanced motion vector expression motion information of the current block from a plurality of offset motion information according to the offset predictor.
The filtering module 96 is configured to perform inter-frame filtering on the prediction value corresponding to the advanced motion vector expressed motion information.
Fig. 30 is a schematic structural diagram of a tenth embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 30, the apparatus may include: a construction module 101, an offset motion vector determination module 102, an offset motion information acquisition module 103, a calculation module 104, an advanced motion vector expression motion information acquisition module 105, and a correction module 106.
The construction module 101 is configured to construct a base motion information list for the current block, where the base motion information list includes at least one base motion information.
The offset motion vector determination module 102 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 103 is configured to offset the basic motion vector in each piece of basic motion information by using a plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
The calculation module 104 is configured to calculate an offset predictor of the current block using a plurality of offset motion information.
The advanced motion vector expression motion information acquisition module 105 is configured to acquire advanced motion vector expression motion information of the current block from a plurality of offset motion information according to the offset predictor.
The correction module 106 is configured to perform bi-directional gradient correction on the prediction value corresponding to the advanced motion vector expression motion information.
Fig. 31 is a schematic structural diagram of an eleventh embodiment of an inter-frame prediction apparatus based on advanced motion vector representation according to the present application. As shown in fig. 31, the apparatus may include: a construction module 111, a judgment module 115, an offset distance list determination module 116, an offset motion vector determination module 112, an offset motion information acquisition module 113, and a calculation module 114.
The construction module 111 is configured to construct a base motion information list for the current block, where the base motion information list includes at least one base motion information.
The judging module 115 is configured to judge whether a current frame where the current block is located meets a list updating condition.
The offset distance list determining module 116 is configured to, if the list updating condition is satisfied, count an average value of the offset values of blocks in a specified number of previous frames, and determine the offset distance list using the average value.
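The adaptive update of modules 115 and 116 can be sketched as follows. This is an illustrative sketch only: the update condition (every `period` frames), the window of `n_frames` previous frames, the average threshold, and the two candidate lists are all assumed example parameters, since the text leaves them open.

```python
SMALL_OFFSETS = [0.25, 0.5, 1, 2, 4]
LARGE_OFFSETS = [0.25, 0.5, 1, 2, 4, 8, 16, 32]

def update_offset_list(frame_index, history, n_frames=2, period=4,
                       avg_threshold=4):
    """history: one list per coded frame of the offset values its blocks used.

    Returns the new offset distance list, or None when the list-update
    condition is not met and the current list should be kept (module 115).
    """
    if frame_index % period != 0 or len(history) < n_frames:
        return None
    # Module 116: average the offsets actually used in the previous frames
    # and choose the list whose range matches that statistic.
    recent = [v for frame in history[-n_frames:] for v in frame]
    average = sum(recent) / len(recent)
    return LARGE_OFFSETS if average > avg_threshold else SMALL_OFFSETS
```

In this way, scenes whose recent blocks mostly chose large offsets keep the wide list available, while mostly static scenes fall back to the finer one.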
The offset motion vector determination module 112 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 113 is configured to offset the base motion vector in each piece of base motion information by using a plurality of offset motion vectors, so as to obtain a plurality of pieces of offset motion information.
The calculation module 114 is used for calculating an offset predictor of the current block using a plurality of offset motion information.
Fig. 32 is a schematic structural diagram of a twelfth embodiment of an inter-frame prediction apparatus based on advanced motion vector expression according to the present application. As shown in fig. 32, the apparatus may include: a construction module 121, a comparison module 125, an offset distance list determination module 126, an offset motion vector determination module 122, an offset motion information acquisition module 123, and a calculation module 124.
The construction module 121 is configured to construct a base motion information list for the current block, where the base motion information list includes at least one base motion information.
The comparing module 125 is configured to compare the value of the motion vector in the base motion information in at least one direction with a preset threshold to obtain a comparison result.
The offset distance list determining module 126 is configured to determine an offset distance list according to the comparison result.
The offset motion vector determination module 122 is configured to determine a plurality of offset motion vectors using the offset distance list and the offset direction list.
The offset motion information obtaining module 123 is configured to offset the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information.
The calculation module 124 is used to calculate an offset predictor of the current block using a plurality of offset motion information.
FIG. 33 is a block diagram of an embodiment of an encoder of the present application. As shown in fig. 33, the encoder includes a processor 131 and a memory 132 coupled to the processor 131.
The memory 132 stores a computer program for implementing the method of any of the above embodiments, and the processor 131 is configured to execute the computer program stored in the memory 132 to implement the steps of the above method embodiments. The processor 131 may also be referred to as a CPU (Central Processing Unit). The processor 131 may be an integrated circuit chip having signal processing capabilities, and may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
FIG. 34 is a schematic structural diagram of an embodiment of a storage medium according to the present application. As shown in fig. 34, the storage medium 140 of the embodiment of the present application stores a computer program 141, and the computer program 141, when executed, implements the method provided by the above embodiments of the present application. The computer program 141 may form a program file stored in the storage medium 140 in the form of a software product, so as to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium 140 includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, as well as terminal devices such as a computer, a server, a mobile phone, or a tablet.
Fig. 35 is a schematic structural diagram of an embodiment of an electronic device according to the present application. As shown in fig. 35, the electronic device 150 may include, but is not limited to, an encoder 151 (the encoder mentioned above); other encoders capable of implementing the above method steps may also be used, and no particular limitation is imposed here.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only one type of division of a logical function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely embodiments of the present application, and not intended to limit the scope of the present application, and all equivalent structures or equivalent flow transformations that may be implemented by using the contents of the specification and drawings, or that may be directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (52)

1. An inter prediction method based on advanced motion vector representation, the method comprising:
constructing a basic motion information list for the current block by adopting enhanced duplicate checking, wherein the basic motion information list comprises at least one piece of basic motion information;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information.
2. The method of claim 1,
the constructing of the basic motion information list for the current block by adopting enhanced duplicate checking comprises the following steps:
sequentially judging whether the candidate motion information of the current block is available;
if the candidate motion information is available, performing duplicate checking with all the previous candidate motion information;
and if the duplicate checking is passed, filling the candidate motion information serving as the basic motion information into a basic motion information list until the number of the basic motion information in the basic motion information list reaches a preset value.
3. The method of claim 1,
the constructing of the basic motion information list for the current block by adopting enhanced duplicate checking comprises the following steps:
sequentially judging whether the candidate motion information of the current block is available;
if the candidate motion information is available and is from a spatial candidate block of the current block, extracting at least one candidate motion information from other spatial candidate blocks of the current block for duplication checking;
and if the duplicate checking is passed, filling the candidate motion information serving as the basic motion information into a basic motion information list until the number of the basic motion information in the basic motion information list reaches a preset value.
4. The method of claim 3,
the extracting at least one candidate motion information from other spatial candidate blocks of the current block for duplicate checking comprises:
selecting candidate motion information from other spatial candidate blocks adjacent to the spatial candidate block for duplicate checking;
and selecting candidate motion information from other spatial candidate blocks which are not adjacent to the spatial candidate block for duplicate checking.
5. The method of claim 1,
the constructing of the basic motion information list for the current block by adopting enhanced duplicate checking comprises the following steps:
constructing a base motion information list for the current block;
and checking the basic motion information in the basic motion information list.
6. The method according to any one of claims 2 to 5,
the duplicate checking comprises the following steps:
judging whether the image display order of the reference frames in the two candidate motion information is the same and whether the motion vectors are the same.
7. The method according to claim 6, wherein said determining whether the image display order of the reference frames in the two candidate motion information is the same and the motion vector is the same comprises: wherein the two candidate motion information comprise a first candidate motion information and a second candidate motion information;
judging whether the first candidate motion information is available in a first reference frame direction of the current frame and whether the second candidate motion information is available in a second reference frame direction of the current frame;
if the first candidate motion information is unavailable in the first reference frame direction of the current frame and the second candidate motion information is unavailable in the second reference frame direction of the current frame, judging whether the display sequence of a first image of the first candidate motion information is the same as the display sequence of a second image of the second candidate motion information;
if so, judging whether the motion vector information of the first candidate motion information in the second reference frame direction of the current frame is the same as the motion vector information of the second candidate motion information in the first reference frame direction of the current frame;
if so, the first candidate motion information and the second candidate motion information are the same;
if the first candidate motion information and the second candidate motion information are available in both the first reference frame direction of the current frame and the second reference frame direction of the current frame, judging whether the motion vector information of the first candidate motion information in the first reference frame direction and the second reference frame direction of the current frame is the same as the motion vector information of the second candidate motion information in the first reference frame direction and the second reference frame direction of the current frame;
if so, the first candidate motion information and the second candidate motion information are the same.
8. The method according to claim 6, wherein said determining whether the image display order of the reference frames in the two candidate motion information is the same and the motion vector is the same comprises: wherein the two candidate motion information comprise a first candidate motion information and a second candidate motion information;
judging whether the first candidate motion information is available in a first reference frame direction of the current frame and whether the second candidate motion information is available in a second reference frame direction of the current frame;
if the first candidate motion information is unavailable or the second candidate motion information is unavailable, judging whether a first image display order of the first candidate motion information is the same as a second image display order of the second candidate motion information;
if so, judging whether the motion vector information of the first candidate motion information in the second reference frame direction of the current frame is the same as the motion vector information of the second candidate motion information in the first reference frame direction of the current frame;
if so, the first candidate motion information and the second candidate motion information are the same.
9. The method of claim 1, wherein the constructing the basic motion information list for the current block by adopting enhanced duplicate checking comprises:
obtaining candidate motion information of a current block, wherein sources of the candidate motion information of the current block comprise spatial candidate blocks adjacent to the current block;
wherein, the original spatial candidate block adjacent to the current block comprises:
the first spatial candidate block is adjacent to the pixel at the lower left corner of the current block, and the bottom edge of the first spatial candidate block and the bottom edge of the current block are positioned on the same straight line;
the second spatial candidate block is adjacent to the pixel at the upper right corner of the current block, and the right edge of the second spatial candidate block and the right edge of the current block are on the same straight line;
the third spatial candidate block is adjacent to the pixel at the upper left corner of the current block, and the upper edge of the third spatial candidate block and the upper edge of the current block are on the same straight line;
the fourth spatial candidate block is adjacent to the pixel at the upper right corner of the current block, and the left side of the fourth spatial candidate block and the right side of the current block are on the same straight line;
the fifth spatial candidate block is adjacent to the pixel at the upper left corner of the current block, and the bottom edge of the fifth spatial candidate block and the upper edge of the current block are on the same straight line;
and a second number of spatial candidate blocks is formed by adding a new spatial candidate block to the original spatial candidate blocks or deleting part of the original spatial candidate blocks.
10. The method of claim 9, wherein the adding a new spatial candidate block comprises:
by adding a sixth spatial candidate block and/or a seventh spatial candidate block; the sixth spatial candidate block is adjacent to the pixel at the upper left corner of the current block, and the left side of the sixth spatial candidate block and the left side of the current block are on the same straight line; the seventh spatial candidate block is respectively adjacent to the first spatial candidate block and the current block, the bottom edge of the seventh spatial candidate block and the upper edge of the first spatial candidate block are positioned on the same straight line, and the right edge of the seventh spatial candidate block and the left edge of the current block are positioned on the same straight line.
11. The method of claim 9, wherein the deleting portions of the original spatial candidate blocks comprises:
deleting the fourth spatial candidate block and/or the fifth spatial candidate block.
12. The method of any of claims 9-11, wherein the spatial candidate blocks in the second number of spatial candidate blocks can be ordered in any combination.
13. The method of claim 1, wherein the base motion information list comprises at least two base motion information;
the constructing a base motion information list for the current block further comprises:
calculating new basic motion information using at least two pieces of the basic motion information, and filling the new basic motion information into the basic motion information list.
14. The method of claim 13,
the calculating of the new basic motion information by using at least two pieces of basic motion information includes:
calculating a weighted average of at least two of the base motion information as the new base motion information; and/or
and calculating a filtering result of at least two pieces of the basic motion information as the new basic motion information.
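The candidate derivation of claims 13-14 can be illustrated with a minimal Python sketch; the equal 1/2 weights, the integer rounding, and all names here are illustrative assumptions rather than details taken from the claims:

```python
# Hypothetical sketch of claim 14: derive new basic motion information as a
# weighted average of two existing base motion vectors, then fill it into
# the base motion information list. Weights and rounding are assumptions.

def weighted_average_mv(mv_a, mv_b, w_a=0.5, w_b=0.5):
    """Combine two (x, y) base motion vectors into a new candidate."""
    return (round(w_a * mv_a[0] + w_b * mv_b[0]),
            round(w_a * mv_a[1] + w_b * mv_b[1]))

base_motion_list = [(4, -2), (8, 6)]
new_mv = weighted_average_mv(base_motion_list[0], base_motion_list[1])
base_motion_list.append(new_mv)  # fill the derived candidate into the list
```

A filtering result (claim 14's alternative) could be substituted for the weighted average without changing the list-filling step.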
15. The method of claim 1, wherein constructing a base motion information list for the current block, the base motion information list comprising at least one base motion information comprises:
constructing a base motion information list for a current block based on candidate motion information of the current block, the base motion information list including at least one base motion information, a source of the candidate motion information including a temporal candidate block of the current block, the temporal candidate block including a temporal co-located block of at least one sub-block of the current block.
16. The method of claim 1, wherein constructing a base motion information list for the current block, the base motion information list comprising at least one base motion information comprises:
a base motion information list is constructed for the current block based on candidate motion information of the current block, the base motion information list including at least one base motion information, a source of the candidate motion information including at least one historical motion vector.
17. The method of claim 1, wherein constructing a base motion information list for the current block, the base motion information list comprising at least one base motion information comprises:
scaling the basic motion information to a specified precision, and filling the scaled basic motion information into the basic motion information list.
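Claim 17's precision alignment can be sketched as a shift between motion-vector storage precisions; the 1/4-pel and 1/16-pel precisions below, and the rounding rule, are illustrative assumptions:

```python
# Hypothetical sketch of claim 17: scale basic motion information to a
# specified precision before filling it into the base motion information
# list. A vector at 1/(1 << src_shift)-pel precision is rescaled to
# 1/(1 << dst_shift)-pel precision, rounding away from zero on downscaling.

def scale_mv_to_precision(mv, src_shift, dst_shift):
    diff = dst_shift - src_shift
    if diff >= 0:
        return tuple(c << diff for c in mv)   # finer precision: shift up
    r = 1 << (-diff - 1)                      # rounding offset
    return tuple((c + r) >> -diff if c >= 0
                 else -((-c + r) >> -diff) for c in mv)

# e.g. a quarter-pel (shift 2) vector restated at 1/16-pel (shift 4) precision
aligned = scale_mv_to_precision((3, -1), 2, 4)
```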
18. The method of claim 1, wherein after calculating the offset predictor of the current block using the plurality of offset motion information, further comprising:
performing rough selection on the plurality of offset motion information according to the offset prediction value to obtain rough selection results;
taking the rough selection result as a starting point, and performing motion search according to preset different motion vector precisions to obtain a plurality of search predicted values;
and selecting the motion information corresponding to the search prediction value with the minimum evaluation index as the advanced motion vector expression motion information of the current block.
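One way to read the two-stage selection of claim 18 is a coarse pick followed by a small local search at several motion vector precisions; the step sizes, the four-neighbour search pattern, and the cost function are assumptions for illustration:

```python
# Hypothetical sketch of claim 18: roughly select the best offset candidate
# by its prediction cost, then motion-search around it at several preset
# precisions and keep the candidate with the minimum evaluation index.

def refine_by_search(offset_candidates, cost_fn, precisions=(1.0, 0.5, 0.25)):
    start = min(offset_candidates, key=cost_fn)      # rough selection result
    best, best_cost = start, cost_fn(start)
    for step in precisions:                          # preset MV precisions
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            cand = (start[0] + dx, start[1] + dy)
            c = cost_fn(cand)
            if c < best_cost:
                best, best_cost = cand, c
    return best
```

In a codec the cost function would typically be a distortion measure such as SAD against the reference block; here it is left abstract.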
19. The method of claim 1, wherein after calculating the offset predictor of the current block using the plurality of offset motion information, further comprising:
selecting target motion information from the plurality of offset motion information according to the offset prediction value;
correcting the motion vector in the target motion information by using a plurality of corrected motion vectors to obtain a plurality of corrected motion information;
calculating a corrected prediction value of the current block using the plurality of pieces of corrected motion information;
and selecting the motion information corresponding to the corrected prediction value with the minimum evaluation index as the advanced motion vector expression motion information of the current block.
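The refinement of claim 19 can similarly be sketched as picking a target candidate and testing a small set of correction vectors around it; the correction set and the cost function are illustrative assumptions:

```python
# Hypothetical sketch of claim 19: select target motion information by its
# offset prediction cost, apply several correction motion vectors, and keep
# the corrected motion vector with the minimum evaluation index.

def refine_target_mv(offset_candidates, cost_fn,
                     corrections=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
    target = min(offset_candidates, key=cost_fn)     # target motion information
    corrected = [(target[0] + dx, target[1] + dy) for dx, dy in corrections]
    return min(corrected, key=cost_fn)               # best corrected candidate
```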
20. The method of claim 1, wherein determining the plurality of offset motion vectors using the offset distance list and the offset direction list comprises:
acquiring a plurality of offset initial values by using an offset distance list;
if the motion vector in the basic motion information is a bidirectional motion vector, calculating a first image display sequence difference value and a second image display sequence difference value between a current frame and two reference frames, wherein the first image display sequence difference value is greater than or equal to the second image display sequence difference value, scaling the offset initial value by using the first image display sequence difference value to obtain a first offset value, scaling the offset initial value by using the second image display sequence difference value to obtain a second offset value, wherein the first offset value is greater than or equal to the offset initial value, the second offset value is less than or equal to the offset initial value, and the first offset value and the second offset value form the motion vector offset value;
and if the motion vector in the basic motion information is unidirectional, using the offset initial value as the motion vector offset value.
21. The method of claim 1, wherein after calculating the offset predictor of the current block using the plurality of offset motion information, the method comprises:
acquiring advanced motion vector expression motion information of the current block from the plurality of offset motion information according to the offset prediction value;
and performing inter-frame filtering on the predicted value corresponding to the advanced motion vector expression motion information.
22. The method of claim 1, wherein after calculating the offset predictor of the current block using the plurality of offset motion information, the method comprises:
acquiring advanced motion vector expression motion information of the current block from the plurality of offset motion information according to the offset prediction value;
and performing bidirectional gradient correction on the predicted value corresponding to the advanced motion vector expression motion information.
23. The method of claim 1, wherein prior to determining the plurality of offset motion vectors using the offset distance list and the offset direction list, comprising:
judging whether the current frame of the current block meets a list updating condition;
and if so, calculating the average of the offset values of all the blocks in a specified number of previous frames, and determining the offset distance list using the average.
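The adaptive list update of claim 23 can be sketched as follows; the periodic update condition, the base distance list, and the scaling rule are all assumptions made for illustration:

```python
# Hypothetical sketch of claim 23: when the current frame meets a list
# update condition (assumed periodic here), rescale the offset distance
# list using the average offset value of the blocks in a specified number
# of preceding frames.

def update_offset_distance_list(frame_index, recent_offsets,
                                update_period=8, base_list=(1, 2, 4, 8, 16)):
    if frame_index % update_period != 0 or not recent_offsets:
        return list(base_list)               # condition not met: keep default
    avg = sum(recent_offsets) / len(recent_offsets)
    scale = max(1, round(avg))               # centre the list on the average
    return [d * scale for d in base_list]
```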
24. The method of claim 1, wherein prior to determining the plurality of offset motion vectors using the offset distance list and the offset direction list, comprising:
comparing the value of the motion vector in the basic motion information in at least one direction with a preset threshold value to obtain a comparison result;
and determining an offset distance list according to the comparison result.
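Claim 24's threshold-based list selection can be sketched in a few lines; the threshold value and the two candidate lists are illustrative assumptions:

```python
# Hypothetical sketch of claim 24: compare the base motion vector's
# magnitude in either direction against a preset threshold and choose the
# offset distance list accordingly.

def select_offset_distance_list(base_mv, threshold=16,
                                small_list=(1, 2, 4, 8),
                                large_list=(4, 8, 16, 32)):
    if max(abs(base_mv[0]), abs(base_mv[1])) > threshold:
        return list(large_list)              # large motion: larger offsets
    return list(small_list)                  # small motion: finer offsets
```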
25. An inter-frame prediction apparatus based on an advanced motion vector expression, comprising:
a construction module for constructing a base motion information list for the current block using enhanced repetition checking, the base motion information list including at least one base motion information;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module to calculate an offset predictor of the current block using the plurality of offset motion information.
26. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list comprising at least two base motion information;
calculating new basic motion information using at least two pieces of the basic motion information, and filling the new basic motion information into the basic motion information list;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information.
27. The method of claim 26,
the calculating of the new basic motion information by using at least two pieces of basic motion information includes:
calculating a weighted average of at least two of the base motion information as the new base motion information; and/or
and calculating a filtering result of at least two pieces of the basic motion information as the new basic motion information.
28. An inter-frame prediction apparatus based on an advanced motion vector expression, comprising:
a construction module configured to construct a base motion information list for a current block, the base motion information list including at least two pieces of base motion information;
the filling module is used for calculating to obtain new basic motion information by utilizing at least two pieces of basic motion information and filling the new basic motion information into the basic motion information list;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module to calculate an offset predictor of the current block using the plurality of offset motion information.
29. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block based on candidate motion information of the current block, the base motion information list including at least one piece of base motion information, a source of the candidate motion information including a temporal candidate block of the current block, the temporal candidate block including a temporal co-located block of at least one sub-block of the current block;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information.
30. An inter-frame prediction apparatus based on an advanced motion vector expression, comprising:
a construction module, configured to construct a base motion information list for a current block based on candidate motion information of the current block, where the base motion information list includes at least one piece of base motion information, a source of the candidate motion information includes a temporal candidate block of the current block, and the temporal candidate block includes a temporal co-located block of at least one sub-block of the current block;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module to calculate an offset predictor of the current block using the plurality of offset motion information.
31. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for the current block based on candidate motion information of the current block, wherein the base motion information list comprises at least one piece of base motion information, and a source of the candidate motion information comprises at least one historical motion vector;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information.
32. An inter-frame prediction apparatus based on an advanced motion vector expression, comprising:
a construction module configured to construct a base motion information list for a current block based on candidate motion information of the current block, the base motion information list including at least one base motion information, a source of the candidate motion information including at least one historical motion vector;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module to calculate an offset predictor of the current block using the plurality of offset motion information.
33. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
scaling the basic motion information to a specified precision, and filling the scaled basic motion information into the basic motion information list;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information.
34. An inter-frame prediction apparatus based on an advanced motion vector expression, comprising:
a construction module for constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
the filling module is used for scaling the basic motion information to specified precision and filling the scaled basic motion information into the basic motion information list;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module to calculate an offset predictor of the current block using the plurality of offset motion information.
35. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information;
performing rough selection on the plurality of offset motion information according to the offset prediction value to obtain rough selection results;
taking the rough selection result as a starting point, and performing motion search according to preset different motion vector precisions to obtain a plurality of search predicted values;
and selecting the motion information corresponding to the search prediction value with the minimum evaluation index as the advanced motion vector expression motion information of the current block.
36. An inter-frame prediction apparatus based on an advanced motion vector expression, comprising:
a construction module for constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module for calculating an offset predictor of the current block using the plurality of offset motion information;
the rough selection module is used for performing rough selection on the plurality of offset motion information according to the offset prediction value to obtain rough selection results;
the search predicted value acquisition module is used for carrying out motion search according to preset different motion vector precisions by taking the rough selection result as a starting point to obtain a plurality of search predicted values;
and the advanced motion vector expression motion information acquisition module is used for selecting the motion information corresponding to the search prediction value with the minimum evaluation index as the advanced motion vector expression motion information of the current block.
37. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information;
selecting target motion information from the plurality of offset motion information according to the offset prediction value;
correcting the motion vector in the target motion information by using a plurality of corrected motion vectors to obtain a plurality of corrected motion information;
calculating a corrected prediction value of the current block using the plurality of pieces of corrected motion information;
and selecting the motion information corresponding to the corrected prediction value with the minimum evaluation index as the advanced motion vector expression motion information of the current block.
38. The inter prediction method of claim 37, wherein the determining a plurality of offset motion vectors using the offset distance list and the offset direction list comprises:
if the motion vector in the basic motion information is a bidirectional motion vector, calculating a first image display sequence difference value and a second image display sequence difference value between the current frame and two reference frames; fixing the weight corresponding to the smaller of the two image display sequence difference values, and scaling the motion vector offset value using the larger image display sequence difference value to obtain a second offset value; calculating a first offset value based on the weight corresponding to the smaller image display sequence difference value; the first offset value and the second offset value constitute the motion vector offset value;
determining a plurality of offset motion vectors in combination with the motion vector offset value and a plurality of offset directions in an offset direction list.
39. An apparatus for inter prediction based on an advanced motion vector expression, the apparatus comprising:
a construction module for constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module for calculating an offset predictor of the current block using the plurality of offset motion information;
a selection module for selecting target motion information from the plurality of offset motion information according to the offset prediction value;
the correction module is used for correcting the motion vector in the target motion information by using a plurality of corrected motion vectors to obtain a plurality of corrected motion information;
a corrected prediction module for calculating a corrected prediction value of the current block using the plurality of pieces of corrected motion information;
and the advanced motion vector expression motion information acquisition module is used for selecting the motion information corresponding to the corrected predicted value with the minimum evaluation index as the advanced motion vector expression motion information of the current block.
40. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
acquiring a plurality of offset initial values by using an offset distance list;
if the motion vector in the basic motion information is a bidirectional motion vector, scaling an offset initial value by using an image display sequence difference value to obtain a motion vector offset value, and if the motion vector in the basic motion information is unidirectional, using the offset initial value as the motion vector offset value;
determining a plurality of offset motion vectors in combination with the motion vector offset value and a plurality of offset directions in an offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of motion vector shift values to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information.
41. An inter-frame prediction apparatus based on an advanced motion vector expression, comprising:
a construction module for constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
the offset initial value acquisition module is used for acquiring a plurality of offset initial values by using the offset distance list;
a motion vector offset value determining module, configured to scale an offset initial value using an image display sequence difference value to obtain a motion vector offset value when the motion vector in the basic motion information is a bidirectional motion vector, and to use the offset initial value as the motion vector offset value when the motion vector in the basic motion information is unidirectional;
an offset motion vector determination module for determining a plurality of offset motion vectors in combination with the motion vector offset value and a plurality of offset directions in an offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of motion vector offset values to obtain a plurality of pieces of offset motion information;
a calculation module to calculate an offset predictor of the current block using the plurality of offset motion information.
42. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information;
acquiring advanced motion vector expression motion information of the current block from the plurality of offset motion information according to the offset prediction value;
and performing inter-frame filtering on the predicted value corresponding to the advanced motion vector expression motion information.
43. An apparatus for inter prediction based on an advanced motion vector expression, the apparatus comprising:
a construction module for constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module for calculating an offset predictor of the current block using the plurality of offset motion information;
an advanced motion vector expression motion information obtaining module, configured to obtain advanced motion vector expression motion information of the current block from the multiple offset motion information according to the offset prediction value;
and the filtering module is used for performing inter-frame filtering on the predicted value corresponding to the advanced motion vector expression motion information.
44. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information;
acquiring advanced motion vector expression motion information of the current block from the plurality of offset motion information according to the offset prediction value;
and performing bidirectional gradient correction on the predicted value corresponding to the advanced motion vector expression motion information.
45. An apparatus for inter prediction based on an advanced motion vector expression, the apparatus comprising:
a construction module for constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module for calculating an offset predictor of the current block using the plurality of offset motion information;
an advanced motion vector expression motion information obtaining module, configured to obtain advanced motion vector expression motion information of the current block from the multiple offset motion information according to the offset prediction value;
and the correction module is used for performing bidirectional gradient correction on the predicted value corresponding to the advanced motion vector expression motion information.
46. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
judging whether the current frame of the current block meets a list updating condition;
if so, calculating the average of the offset values of all the blocks in a specified number of previous frames, and determining an offset distance list using the average;
determining a plurality of offset motion vectors using the offset distance list and offset direction list;
shifting the basic motion vector in each basic motion information by using the plurality of shifted motion vectors to obtain a plurality of shifted motion information;
calculating an offset predictor of the current block using the plurality of offset motion information.
47. An apparatus for inter prediction based on an advanced motion vector expression, the apparatus comprising:
a construction module for constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
the judging module is used for judging whether the current frame where the current block is located meets a list updating condition;
the offset distance list determining module is used for, if the list updating condition is met, calculating the average of the offset values of all the blocks in a specified number of previous frames and determining an offset distance list using the average;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
the offset motion information acquisition module is used for offsetting the basic motion vector in each piece of basic motion information by using the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module to calculate an offset predictor of the current block using the plurality of offset motion information.
48. An inter prediction method based on advanced motion vector expression, the method comprising:
constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
comparing the value, in at least one direction, of the motion vector in the base motion information with a preset threshold to obtain a comparison result;
determining an offset distance list according to the comparison result;
determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
offsetting the base motion vector in each piece of base motion information by the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
calculating an offset predictor of the current block using the plurality of pieces of offset motion information.
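Claim 48 selects the offset distance list from a threshold test on the base motion vector itself. A minimal sketch, assuming a single threshold on either component and two hypothetical candidate lists (neither the threshold value nor the lists come from the patent):

```python
def select_offset_distance_list(base_mv, threshold=16):
    """Illustrative sketch of claim 48's comparison step: if either
    component of the base motion vector exceeds a preset threshold,
    choose a coarser offset distance list; otherwise a finer one.
    Threshold and list contents are hypothetical."""
    mvx, mvy = base_mv
    if abs(mvx) > threshold or abs(mvy) > threshold:
        return [2, 4, 8, 16]   # larger base motion -> coarser distances
    return [1, 2, 4, 8]        # smaller base motion -> finer distances
```

Unlike claim 47's frame-level statistics, this adaptation is driven only by the current base motion information, so it needs no history to be maintained across frames.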
49. An apparatus for inter prediction based on an advanced motion vector representation, the apparatus comprising:
a construction module for constructing a base motion information list for a current block, the base motion information list including at least one base motion information;
a comparison module for comparing the value, in at least one direction, of the motion vector in the base motion information with a preset threshold to obtain a comparison result;
an offset distance list determining module for determining an offset distance list according to the comparison result;
an offset motion vector determination module for determining a plurality of offset motion vectors using the offset distance list and the offset direction list;
an offset motion information determining module for offsetting the base motion vector in each piece of base motion information by the plurality of offset motion vectors to obtain a plurality of pieces of offset motion information;
a calculation module to calculate an offset predictor of the current block using the plurality of offset motion information.
50. An encoder comprising a processor and a memory coupled to the processor, wherein:
the memory stores a computer program;
the processor is configured to execute the memory-stored computer program to implement the method of any of claims 1-24, 26-27, 29, 31, 33, 35, 37, 38, 40, 42, 44, 46, or 48.
51. A storage medium storing a computer program which, when executed, implements the method of any one of claims 1-24, 26-27, 29, 31, 33, 35, 37, 38, 40, 42, 44, 46 or 48.
52. An electronic device comprising the encoder of claim 50.
CN202010753396.2A 2020-07-30 2020-07-30 Inter-frame prediction method, device and equipment based on advanced motion vector expression Pending CN112040242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010753396.2A CN112040242A (en) 2020-07-30 2020-07-30 Inter-frame prediction method, device and equipment based on advanced motion vector expression


Publications (1)

Publication Number Publication Date
CN112040242A true CN112040242A (en) 2020-12-04

Family

ID=73583638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010753396.2A Pending CN112040242A (en) 2020-07-30 2020-07-30 Inter-frame prediction method, device and equipment based on advanced motion vector expression

Country Status (1)

Country Link
CN (1) CN112040242A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201217455D0 (en) * 2012-09-28 2012-11-14 Canon Kk Method, device, and computer program for motion vector prediction in scalable video encoder and decoder
KR20160051344A (en) * 2014-11-03 2016-05-11 세종대학교산학협력단 A method and an apparatus for searching motion information of a multi-layer video
US20200154135A1 (en) * 2017-08-29 2020-05-14 Kt Corporation Method and device for video signal processing
KR20190087329A (en) * 2018-01-15 2019-07-24 김기백 Image encoding/decoding method and apparatus for performing intra prediction using a plurality of prediction mode candidates
CN110719482A (en) * 2018-07-13 2020-01-21 腾讯美国有限责任公司 Video coding and decoding method, device, equipment and storage medium
CN109862369A (en) * 2018-12-28 2019-06-07 杭州海康威视数字技术股份有限公司 A kind of decoding method and its equipment
CN111385581A (en) * 2018-12-28 2020-07-07 杭州海康威视数字技术股份有限公司 Coding and decoding method and equipment thereof
WO2020141915A1 (en) * 2019-01-01 2020-07-09 엘지전자 주식회사 Method and device for processing video signals on basis of history-based motion vector prediction
CN110809161A (en) * 2019-03-11 2020-02-18 杭州海康威视数字技术股份有限公司 Motion information candidate list construction method and device
CN110312130A (en) * 2019-06-25 2019-10-08 浙江大华技术股份有限公司 Inter-prediction, method for video coding and equipment based on triangle model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN ZHEN; HE JIANJUN: "H.264 temporal error concealment algorithm based on motion vector recovery for variable-size blocks", Journal of Image and Graphics, no. 07, 15 July 2008 (2008-07-15) *
WEI XING, ***, QI MEIBIN: "Adaptive motion estimation algorithm based on selective prediction", Journal of Image and Graphics, no. 07, 25 July 2005 (2005-07-25) *

Similar Documents

Publication Publication Date Title
CN110419220B (en) Method and apparatus for image motion compensation
TWI815974B (en) Modification of motion vector with adaptive motion vector resolution
CN110809887B (en) Method and apparatus for motion vector modification for multi-reference prediction
CN109644272B (en) Geometric priority for constructing candidate lists
CN110121883B (en) Method and apparatus for decoding image in image coding system
CN111630859B (en) Method and apparatus for image decoding based on inter prediction in image coding system
TW201904299A (en) Motion vector prediction
TWI734147B (en) Motion prediction based on updated motion vectors
CN111818342B (en) Inter-frame prediction method and prediction device
TW202017375A (en) Symmetric bi-prediction mode for video coding
CN114080810A (en) Image coding and decoding method and device based on inter-frame prediction
CN114303378A (en) Image encoding method and apparatus using motion vector difference
CN114051726A (en) Method and apparatus for image coding using motion vector difference
CN114270833A (en) Method and apparatus for removing overlay signaling in a video/image coding system
US11595640B2 (en) Video or image coding for inducing weight index information for bi-prediction
CN112040242A (en) Inter-frame prediction method, device and equipment based on advanced motion vector expression
CN115398910A (en) Image decoding method and apparatus for the same
CN115462084A (en) Image decoding method and apparatus therefor
RU2820303C1 (en) Method and device for encoding images using motion vector differences
RU2784417C1 (en) Method and device for encoding images using motion vector differences
RU2807635C2 (en) Method and device for image encoding using motion vector differences
CN115699758A (en) Image decoding method and device
CN114080813A (en) Method and apparatus for image coding using motion vector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination