WO2020057556A1 - Motion information candidate list construction method and apparatus, and readable storage medium - Google Patents

Motion information candidate list construction method and apparatus, and readable storage medium

Info

Publication number
WO2020057556A1
WO2020057556A1 PCT/CN2019/106473 CN2019106473W
Authority
WO
WIPO (PCT)
Prior art keywords
motion information
encoded
decoded
block
coded
Prior art date
Application number
PCT/CN2019/106473
Other languages
English (en)
French (fr)
Inventor
徐丽英
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2020057556A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 - Selection of coding mode or of prediction mode
    • H04N19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 - Motion estimation or motion compensation
    • H04N19/513 - Processing of motion vectors
    • H04N19/517 - Processing of motion vectors by encoding
    • H04N19/52 - Processing of motion vectors by encoding by predictive encoding
    • H04N19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present application relates to video coding technologies, and in particular, to a method, a device, and a readable storage medium for constructing a motion information candidate list.
  • Inter-prediction refers to the use of the time-domain correlation of video to predict the pixels of the current image using the pixels of neighboring coded images in order to effectively remove the video time-domain redundancy.
  • the inter-prediction part of the main video coding standards uses block-based motion compensation technology.
  • The main principle is to find, for each pixel block of the current image, a best matching block in a previously encoded image; this process is called motion estimation (ME).
  • The image used for prediction is called the reference image frame, the displacement from the reference block to the current pixel block is called the motion vector (MV), and the difference between the current pixel block and the reference block is called the prediction residual.
  • The motion information of spatially neighboring blocks is strongly correlated, and motion information is also correlated to a certain extent in the time domain.
  • By using the motion information of spatially or temporally neighboring blocks to predict the motion information of the current block and thus obtain the predicted pixel values, only the residual needs to be encoded, which can greatly reduce the number of bits spent on encoding motion information.
  • For motion information prediction, the current video coding standards propose the merge technology (Merge), Advanced Motion Vector Prediction (AMVP), and Affine technology.
  • These techniques use spatial and temporal motion information prediction: a motion information candidate list is established, the best candidate is selected from the list as the prediction information of the current unit according to preset rules, and only an index (such as Merge_idx) needs to be transmitted.
  • the present application provides a method, a device, and a readable storage medium for constructing a candidate list of motion information.
  • A method for constructing a motion information candidate list includes: acquiring existing motion information of a current image block, where the existing motion information includes at least a motion vector; transforming the existing motion information; and adding the transformed motion information as candidate motion information to the motion information candidate list of the current image block.
  • A method for constructing a motion information candidate list includes: filtering the coded/decoded blocks before the current image block according to preset filtering conditions, and building a coded/decoded block motion information list based on the filtered coded/decoded blocks; and selecting candidate motion information from the coded/decoded block motion information list to add to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
  • A method for constructing a motion information candidate list includes: classifying the coded/decoded blocks before the current image block; adding each coded/decoded block to the corresponding coded/decoded block motion information list according to its category, where different categories correspond to different coded/decoded block motion information lists; and selecting candidate motion information from the coded/decoded block motion information lists to add to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
  • A method for constructing a motion information candidate list includes: constructing a coded/decoded block motion information list according to the motion information of the coded/decoded blocks before the current image block; reordering the motion information of the coded/decoded blocks in the coded/decoded block motion information list; and selecting candidate motion information from the reordered coded/decoded block motion information list to add to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
  • A method for constructing a motion information candidate list includes: constructing a coded/decoded block motion information list, where the coded/decoded block motion information list includes the motion information of the coded/decoded blocks before the current image block; and, when the prediction mode of the current image block is the Affine mode, selecting candidate motion information from the motion information of the coded/decoded blocks to add to the motion information candidate list of the current image block.
  • A motion information candidate list constructing apparatus includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is configured to store a computer program; and the processor is configured to implement the foregoing method for constructing a motion information candidate list when executing the program stored in the memory.
  • A computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the foregoing method for constructing a motion information candidate list is implemented.
  • By transforming the existing motion information of the current image block and adding the transformed motion information as candidate motion information to the motion information candidate list of the current image block, the richness of the candidate samples is increased.
  • The method for constructing a motion information candidate list in the embodiments of the present application filters, classifies, and/or reorders the coded/decoded blocks before the current image block when constructing the coded/decoded block motion information list, which, on the basis of increasing the richness of the candidate samples, improves the accuracy of constructing the coded/decoded block motion information list.
  • FIG. 1A- (a) to FIG. 1A- (f) are schematic diagrams of block division shown in an exemplary embodiment of the present application;
  • FIG. 1B is a schematic diagram of block division according to another exemplary embodiment of the present application.
  • FIG. 2A is a schematic flowchart of a method for constructing a motion information candidate list according to an exemplary embodiment of the present application
  • FIG. 2B to FIG. 2D are schematic diagrams of a motion information candidate list shown in an exemplary embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for constructing a motion information candidate list according to an exemplary embodiment of the present application;
  • FIG. 4 is a schematic flowchart of a method for constructing a motion information candidate list according to another exemplary embodiment of the present application;
  • FIG. 5 is a schematic flowchart of a method for constructing a motion information candidate list according to another exemplary embodiment of the present application;
  • FIG. 6A is a schematic flowchart of a method for constructing a motion information candidate list according to still another exemplary embodiment of the present application;
  • FIG. 6B is a schematic diagram of a motion information candidate list according to an exemplary embodiment of the present application.
  • FIG. 7A is a schematic diagram of scaling existing motion information according to an exemplary embodiment of the present application.
  • FIG. 7B is a schematic diagram illustrating a correspondence between frame motion intensity and scaling amplitude according to an exemplary embodiment of the present application;
  • FIG. 7C is a schematic diagram illustrating a correspondence between the similarity of neighboring blocks at different positions and the scaling amplitude, according to an exemplary embodiment of the present application;
  • FIG. 7D is a schematic diagram of filtering an encoded / decoded block according to an exemplary embodiment of the present application.
  • FIG. 7E is a schematic diagram of filtering a coded / decoded block according to another exemplary embodiment of the present application.
  • FIG. 7F is a schematic diagram of filtering a coded / decoded block according to another exemplary embodiment of the present application.
  • FIG. 7G is a schematic diagram of classifying coded / decoded blocks according to an exemplary embodiment of the present application.
  • FIG. 7H is a schematic diagram illustrating selecting candidate motion information from a plurality of encoded / decoded block motion information lists, according to an exemplary embodiment of the present application.
  • FIG. 7I is a schematic diagram of classifying encoded / decoded blocks according to another exemplary embodiment of the present application.
  • FIG. 7J is a schematic diagram illustrating classification of encoded / decoded blocks according to another exemplary embodiment of the present application.
  • FIG. 7K is a schematic diagram of reordering motion information of encoded / decoded blocks in a motion information list of encoded / decoded blocks according to an exemplary embodiment of the present application;
  • FIG. 7L is a schematic diagram of reordering motion information of encoded / decoded blocks in a motion information list of encoded / decoded blocks according to another exemplary embodiment of the present application.
  • FIG. 7M is a schematic diagram of constructing a coded/decoded block motion information list for motion information prediction in the Affine mode, according to an exemplary embodiment of the present application;
  • FIG. 7N is a schematic diagram illustrating classification of encoded / decoded blocks in an Affine mode according to an exemplary embodiment of the present application.
  • FIG. 7O is a schematic diagram illustrating storage of encoded / decoded block motion information according to an exemplary embodiment of the present application.
  • Fig. 8 is a schematic diagram of a hardware structure of a device for constructing a motion information candidate list, according to an exemplary embodiment of the present application.
  • A CTU (Coding Tree Unit) is recursively divided into CUs (Coding Units) using a quadtree, and whether intra coding or inter coding is used is determined at the leaf-node CU level.
  • A CU can be further divided into two or four PUs (Prediction Units), and the same prediction information is used within one PU; a CU can also be further quad-divided into multiple TUs (Transform Units).
  • The current image block in this application is a PU.
  • A partition structure combining a binary tree, a ternary tree, and a quadtree replaces the original division mode, removes the original distinction between the CU, PU, and TU concepts, and supports more flexible CU partitioning.
  • A CU may be a square or a rectangular partition.
  • A CTU is first partitioned using a quadtree, and the leaf nodes of the quadtree can then be further partitioned using a binary tree or a ternary tree.
  • As shown in FIG. 1A, there are five CU partition types: FIG. 1A-(a) represents a CU, FIG. 1A-(b) represents a quadtree partition, FIG. 1A-(c) represents a horizontal binary tree partition, FIG. 1A-(d) represents a vertical binary tree partition, FIG. 1A-(e) represents a horizontal ternary tree partition, and FIG. 1A-(f) represents a vertical ternary tree partition.
  • The CU partitions within a CTU may be any combination of the above five partition types. It can be seen that the different division methods result in PUs of different shapes, such as rectangles and squares of different sizes.
  • H.265/HEVC proposes the merge technology (motion information prediction in Merge mode) and the AMVP technology (motion information prediction in AMVP mode); both use spatial and temporal motion information prediction.
  • An optimal motion information candidate is selected from the motion information candidate list as the predicted motion information of the current data block.
  • In Merge mode, the motion information of the current data block is directly predicted from the motion information of spatially or temporally adjacent data blocks, and no motion vector difference (MVD) exists. If the encoder and the decoder construct the motion information candidate list in the same way, the encoder only needs to transmit the index of the predicted motion information in the motion information candidate list, which can greatly reduce the number of bits spent on encoding motion information.
  • the motion information prediction of the AMVP mode also uses the correlation of the motion information of adjacent data blocks in the spatial and temporal domains to establish a motion information candidate list for the current data block.
  • The encoder selects the optimal predicted motion information from the list and performs differential encoding on the motion information.
  • The decoder only needs to establish the same motion information candidate list and receive the motion vector residual together with the index of the predicted motion information in the list to calculate the motion information of the current data block.
  • the length of the motion information candidate list in the AMVP mode is two.
  • Affine mode is a new inter prediction mode introduced by H.266. It has a good prediction effect for rotated and zoomed scenes.
  • There are two types of Affine mode in the JEM (Joint Exploration Model): one is Affine Inter (corresponding to Affine AMVP), and the other is Affine Merge.
  • Affine Merge traverses the candidate image blocks to find the first candidate coded in Affine mode. In the Affine Merge method, no additional index value needs to be transmitted; only a flag indicating whether Affine Merge is used is transmitted.
  • the method for constructing the motion information candidate list described in this article can be applied to both the encoding end device and the decoding end device.
  • When applied to the encoding end device, the encoded/decoded block described herein refers to an encoded block; when applied to the decoding end device, the encoded/decoded block described herein refers to a decoded block. This will not be repeated below.
  • new candidate motion information can be obtained by transforming existing motion information to increase the richness of the candidate samples.
  • FIG. 2A is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • The method for constructing a motion information candidate list may include the following steps.
  • Step S200 Acquire existing motion information of the current image block.
  • the current image block refers to a data block on which motion information prediction is currently performed.
  • When applied to the encoding end, the data block may be a data block to be encoded (referred to herein as an encoding block); when applied to the decoding end, the data block may be a data block to be decoded (referred to herein as a decoding block).
  • the existing motion information may include, but is not limited to, motion information of a spatial domain candidate, motion information of a time domain candidate, and / or motion information of an encoded / decoded block.
  • The motion information of the spatial domain candidates and the motion information of the time domain candidates are candidate motion information already located in the motion information candidate list of the current image block.
  • For example, the motion information candidate list of the current image block includes spatial domain candidates, time domain candidates, zero motion information, and the like.
  • The amount of spatial domain candidate motion information and the amount of time domain candidate motion information in the motion information candidate list of the current image block may be different.
  • The spatial domain candidates of the current image block may include all spatially adjacent blocks, or only some spatially adjacent blocks, and may also include some spatially non-adjacent blocks.
  • The spatial domain candidate blocks of the current image block may include spatially adjacent blocks in the same CTU as the current image block, or spatially adjacent blocks in a CTU different from the one in which the current image block is located.
  • The time domain candidate of the current image block may be an image block in a reference image frame of the current image block, in particular a temporally adjacent block in the reference frame, including the image block in the reference image frame whose position is the same as that of the current image block, and the spatially adjacent blocks of that image block.
  • The motion information of the spatial domain candidates in the motion information candidate list of the current image block includes the motion information of some spatially neighboring blocks of the current image block.
  • The motion information of the time domain candidates in the motion information candidate list of the current image block includes the motion information of the image block in the reference image frame of the current image block that has the same position as the current image block, and of the spatially adjacent blocks of that image block.
  • The motion information of the coded/decoded blocks refers to the motion information of coded/decoded blocks other than the spatial domain candidates and the time domain candidates in the motion information candidate list of the current image block.
  • For example, the coded/decoded blocks may include the remaining spatially neighboring blocks of the current image block other than the spatial domain candidates, and/or the remaining temporally neighboring blocks of the current image block other than the time domain candidates in the motion information candidate list.
  • The current image block may be any image unit; the image unit may be, but is not limited to, a CTU, a block or unit obtained by further dividing a CTU, or a block or unit larger than a CTU.
  • the existing motion information includes at least a motion vector.
  • The existing motion information is not limited to motion vectors and may also include coding information other than motion vectors; accordingly, the transformation of the existing motion information may also be applied to information other than motion vectors. Details thereof are not described in this embodiment of the present application.
  • Step S210 Transform the acquired existing motion information.
  • the existing motion information may be transformed to obtain new candidate motion information.
  • the transforming the acquired existing motion information may include: scaling the acquired existing motion information in a specified direction.
  • the existing motion information can be transformed by scaling the existing motion information to obtain new candidate motion information.
  • the acquired existing motion information may be scaled in a specified direction.
  • The specified direction may be the motion direction of the motion vector of the current image block, that is, the direction of the MV (Motion Vector) included in the motion information.
  • the current image block may be a bidirectional inter prediction block or a unidirectional inter prediction block.
  • the scaling of the existing motion information is not limited to scaling in the MV direction, and may also be scaling in other specified directions, and the specific implementation thereof will not be repeated here.
  • the scale of scaling can be flexibly adjusted according to the actual scene.
  • In order to improve the efficiency of constructing the motion information candidate list, the existing motion information can be scaled at the frame level, slice level, or line level, that is, existing motion information belonging to the same frame, the same slice, or the same line can be scaled with the same amplitude.
  • a frame image can be divided into one or more slices; a slice can include one or more CTUs.
  • The foregoing scaling of the acquired existing motion information in a specified direction may include: determining the amplitude for scaling the existing motion information in the specified direction according to the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs; and scaling the existing motion information in the specified direction according to the determined amplitude.
  • the adjustment unit may include a frame, a slice, or a line. Accordingly, when scaling existing motion information in a specified direction, frame-level, slice-level, or line-level syntax control may be adopted.
  • The amplitude for scaling the existing motion information can be determined according to the motion intensity of the adjustment unit.
  • the intensity of motion can be characterized by the proportion of zero motion vectors.
  • the proportion of the zero motion vectors of the adjustment unit is the ratio of the number of zero motion vectors in the adjustment unit to the number of all motion vectors in the adjustment unit.
  • The amplitude for scaling the existing motion information in the specified direction may be determined according to the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs.
  • The amplitude for scaling the existing motion information in the specified direction is negatively correlated with the proportion of zero motion vectors, that is, the higher the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs, the smaller the determined scaling amplitude.
  • Taking the adjustment unit as a frame (that is, frame-level adjustment) as an example, the correspondence between the proportion of zero motion vectors in the previous frame of the frame to which the current image block belongs and the determined amplitude for scaling the existing motion information in the specified direction can be shown in the following table:
  • When the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs exceeds a relatively large proportion threshold (that is, the motion of the previous adjustment unit is very weak), the probability that zero motion information is selected as the final predicted motion information will be relatively large. In this case, the amplitude for scaling the existing motion information in the specified direction can be zero, that is, the existing motion information is not scaled, so as to increase the probability that zero motion information is added to the final motion information candidate list.
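  • As an illustration of the frame-level adjustment described above, the sketch below derives a scaling amplitude from the zero-motion-vector proportion of the previous frame and stretches an MV along its own direction; the proportion thresholds and amplitude values are hypothetical placeholders, not values given by this application.

```cpp
// Minimal sketch, assuming frame-level control and placeholder thresholds.
struct MotionVector { double x; double y; };

// Negative correlation: the higher the zero-MV proportion (weaker motion in the
// previous frame), the smaller the scaling amplitude; above a large threshold the
// amplitude is zero, i.e. the existing motion information is not scaled.
double ScalingAmplitude(double zeroMvProportionPrevFrame) {
    if (zeroMvProportionPrevFrame > 0.8) return 0.0;   // very weak motion: do not scale
    if (zeroMvProportionPrevFrame > 0.5) return 0.1;   // weak motion: small amplitude
    return 0.25;                                        // strong motion: larger amplitude
}

// Scale an existing MV along its own direction by a factor of (1 + amplitude).
MotionVector ScaleAlongOwnDirection(const MotionVector& mv, double amplitude) {
    return { mv.x * (1.0 + amplitude), mv.y * (1.0 + amplitude) };
}
```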
  • Alternatively, the existing motion information may be scaled at the block level, that is, the amplitude for scaling the existing motion information may be determined separately for each data block.
  • In this case, the above-mentioned scaling of the acquired existing motion information in a specified direction may include: determining the amplitude for scaling the existing motion information in the specified direction based on the similarity of the motion information of spatially adjacent blocks at different positions of the current image block; and scaling the existing motion information in the specified direction according to the determined amplitude.
  • That is, the amplitude for scaling the existing motion information may be determined according to the similarity of the motion information of the spatially neighboring blocks at different positions of the current image block, and the existing motion information is then scaled in the specified direction according to the determined amplitude. The determined amplitude can be zero, in which case the existing motion information is not scaled.
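  • The sketch below shows one way such a block-level amplitude could be derived from the similarity of two neighboring MVs; the similarity measure (an L1 distance), the direction of the mapping (more similar neighbors giving a smaller amplitude), and the numeric thresholds are all assumptions made for illustration.

```cpp
// Minimal sketch, assuming two neighbor MVs are available for the current block.
#include <cstdlib>

struct MotionVector { int x; int y; };

// Similarity measured here via the L1 distance between two neighbor MVs.
int L1Distance(const MotionVector& a, const MotionVector& b) {
    return std::abs(a.x - b.x) + std::abs(a.y - b.y);
}

// One plausible mapping: the more similar the neighboring MVs, the smaller the
// amplitude, down to zero (existing motion information left unscaled).
double BlockLevelAmplitude(const MotionVector& leftNeighborMv,
                           const MotionVector& aboveNeighborMv) {
    const int dist = L1Distance(leftNeighborMv, aboveNeighborMv);
    if (dist == 0) return 0.0;   // neighbors agree: do not scale
    if (dist < 4)  return 0.1;
    return 0.25;
}
```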
  • In another example, transforming the acquired existing motion information may include: weighting at least two pieces of acquired existing motion information.
  • the existing motion information can be transformed by weighting the existing motion information to obtain new candidate motion information.
  • At least two of the acquired existing motion information may be weighted, that is, a weighted average of the at least two acquired existing motion information may be determined.
  • The weighting coefficients used when weighting the at least two pieces of acquired existing motion information can be adaptively adjusted according to the characteristics of the source blocks of the existing motion information (the existing motion information being the motion information of those source blocks).
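  • A minimal sketch of the weighting transform follows: a new candidate is formed as a weighted average of two existing motion vectors. The integer rounding and the example weights are placeholders; as noted above, the coefficients can be adapted to the source blocks.

```cpp
// Minimal sketch, assuming integer MV components and caller-supplied weights.
struct MotionVector { int x; int y; };

MotionVector WeightedAverage(const MotionVector& a, const MotionVector& b,
                             int weightA, int weightB) {
    const int sum = weightA + weightB;
    // Adds sum/2 before dividing so positive values round to the nearest integer
    // (exact rounding of negative values is omitted in this sketch).
    return { (a.x * weightA + b.x * weightB + sum / 2) / sum,
             (a.y * weightA + b.y * weightB + sum / 2) / sum };
}

// Example: WeightedAverage(mv1, mv2, 1, 1) gives the simple average of two candidates.
```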
  • Step S220 Add the transformed motion information as candidate motion information to a motion information candidate list of the current image block.
  • the transformed motion information may be added as candidate motion information to the motion information candidate list of the current image block.
  • Taking motion information prediction in the merge mode as an example, the motion information candidate list constructed for merge-mode motion information prediction may include spatial domain candidates, time domain candidates, and zero motion information.
  • In the embodiment of the present application, it may also include candidate motion information obtained by transforming existing motion information; a schematic diagram thereof is shown in FIG. 2C.
  • The transformation of the existing motion information may include, but is not limited to, scaling or weighting the existing motion information.
  • In the motion information candidate list, the candidate motion information obtained by transforming the existing motion information may be located before or after the combined candidates, but is not limited to these examples.
  • In FIG. 2C, there is a combined candidate, and the candidate motion information obtained by transforming the existing motion information is located before the combined candidate (that is, after the time domain candidates and before the combined candidates).
  • Taking motion information prediction in the AMVP mode as another example, the motion information candidate list constructed for AMVP-mode motion information prediction may include, in addition to spatial domain candidates, time domain candidates, and zero motion information, candidate motion information obtained by transforming the existing motion information. The candidate motion information obtained by transforming the existing motion information may be located in the motion information candidate list after the time domain candidates and before the zero motion information; a schematic diagram thereof is shown in FIG. 2D.
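  • The sketch below illustrates the candidate ordering of FIG. 2C for the merge case: transformed candidates are appended after the time domain candidates and before the combined candidates, followed by zero motion information (for AMVP, per FIG. 2D, the transformed candidates would simply precede the zero motion information). The types and helper names are hypothetical.

```cpp
// Minimal sketch of the merge-list ordering described above.
#include <vector>

struct MotionInfo { int mvx; int mvy; /* reference index etc. omitted */ };

std::vector<MotionInfo> BuildMergeList(const std::vector<MotionInfo>& spatial,
                                       const std::vector<MotionInfo>& temporal,
                                       const std::vector<MotionInfo>& transformed,
                                       const std::vector<MotionInfo>& combined,
                                       const MotionInfo& zero) {
    std::vector<MotionInfo> list;
    list.insert(list.end(), spatial.begin(), spatial.end());         // spatial domain candidates
    list.insert(list.end(), temporal.begin(), temporal.end());       // time domain candidates
    list.insert(list.end(), transformed.begin(), transformed.end()); // transformed candidates (FIG. 2C)
    list.insert(list.end(), combined.begin(), combined.end());       // combined candidates
    list.push_back(zero);                                             // zero motion information
    return list;
}
```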
  • In this way, the existing motion information is transformed and the transformed motion information is added as candidate motion information to the motion information candidate list of the current image block, thereby increasing the richness of the candidate samples and the flexibility of selecting motion information candidates.
  • In the embodiments of the present application, for motion information prediction in a mode such as the merge mode or the AMVP mode, one or more of screening, classification, and reordering may also be used to construct a coded/decoded block motion information list based on the motion information of coded/decoded blocks, and candidate motion information is selected from the coded/decoded block motion information list to join the motion information candidate list of the current image block. On the basis of increasing the richness of the candidate samples, this improves the accuracy of constructing the coded/decoded block motion information list.
  • FIG. 3 is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • The method for constructing a motion information candidate list may include the following steps.
  • Step S300 Filter the coded / decoded blocks before the current image block according to the preset filtering conditions, and construct a coded / decoded block motion information list based on the filtered coded / decoded blocks.
  • In the embodiment of the present application, a coded/decoded block motion information list may be constructed based on the motion information of the coded/decoded blocks before the current image block, and candidate motion information may be selected from the coded/decoded block motion information list to be added to the motion information candidate list of the current image block.
  • The motion information of some coded/decoded blocks has only a small probability of being selected as the final predicted motion information. Therefore, in order to improve the accuracy of constructing the coded/decoded block motion information list, the coded/decoded blocks before the current image block can be filtered when constructing the list, and the coded/decoded block motion information list is constructed based on the filtered coded/decoded blocks.
  • In an example, the filtering of the coded/decoded blocks before the current image block according to preset filtering conditions, and the construction of the coded/decoded block motion information list based on the filtered coded/decoded blocks, may be performed according to the number of non-zero coefficients in the residual coefficients of the coded/decoded blocks.
  • The number of non-zero coefficients in the residual coefficients of a coded/decoded block intuitively reflects whether its motion information prediction is accurate: the greater the number of non-zero coefficients, the worse the prediction accuracy of the motion information of that coded/decoded block.
  • Accordingly, for any coded/decoded block before the current image block, the number of non-zero coefficients in its residual coefficients can be determined and compared with a preset number threshold (which can be set according to the actual scene). When the number of non-zero coefficients is greater than the preset number threshold, the motion information of the coded/decoded block is refused to be added to the coded/decoded block motion information list; otherwise, the motion information of the coded/decoded block is added to the coded/decoded block motion information list.
  • In this way, the motion information of coded/decoded blocks with poor prediction accuracy is eliminated, which improves the accuracy of constructing the coded/decoded block motion information list.
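  • A minimal sketch of this screening condition, assuming hypothetical field names and an example threshold:

```cpp
// Minimal sketch of screening by the number of non-zero residual coefficients.
#include <vector>

struct CodedBlockInfo {
    int mvx, mvy;                 // motion vector of the coded/decoded block
    int numNonZeroResidualCoeff;  // number of non-zero coefficients in its residual
};

void AddIfAccurate(const CodedBlockInfo& blk,
                   std::vector<CodedBlockInfo>& codedBlockMotionList,
                   int presetNumberThreshold /* scene dependent, e.g. 8 */) {
    if (blk.numNonZeroResidualCoeff > presetNumberThreshold) {
        return;  // many non-zero coefficients: poor prediction accuracy, refuse to add
    }
    codedBlockMotionList.push_back(blk);  // otherwise keep its motion information
}
```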
  • In another example, the filtering may be based on the width and height of the coded/decoded blocks: the probability that the motion information of a coded/decoded block whose width and height are both too large is selected as the final prediction information is very low, so when constructing the coded/decoded block motion information list, the coded/decoded blocks can be filtered according to their width and height.
  • For any coded/decoded block before the current image block, it is determined whether the width of the coded/decoded block is greater than or equal to a first preset threshold (which can be set according to the actual scene) and whether the height of the coded/decoded block is greater than or equal to a second preset threshold (which can also be set according to the actual scene). When the width is greater than or equal to the first preset threshold and the height is greater than or equal to the second preset threshold, the motion information of the coded/decoded block is refused to be added to the coded/decoded block motion information list; when the width is less than the first preset threshold and/or the height is less than the second preset threshold, the motion information of the coded/decoded block is added to the coded/decoded block motion information list.
  • In yet another example, the filtering of the coded/decoded blocks according to preset filtering conditions and the construction of the coded/decoded block motion information list based on the filtered coded/decoded blocks may include: for any coded/decoded block before the current image block, when the quantization step size of the motion information of the coded/decoded block is greater than or equal to a preset step size threshold, refusing to add the motion information of the coded/decoded block to the coded/decoded block motion information list; and when the quantization step size of the motion information of the coded/decoded block is less than the preset step size threshold, adding the motion information of the coded/decoded block to the coded/decoded block motion information list.
  • The quantization step size of the motion information of a coded/decoded block intuitively reflects the accuracy of that motion information, so when constructing the coded/decoded block motion information list, the coded/decoded blocks can be filtered according to the quantization step size of their motion information.
  • For example, for parameters 5, 6, 7, and 8: when the quantization step size is 1, the quantized values are 5, 6, 7, and 8; when the quantization step size is 2, the quantized values are 3 (corresponding to parameters 5 and 6) and 4 (corresponding to parameters 7 and 8); when the quantization step size is 4, the quantized value is 2. That is, the larger the quantization step size, the more parameters are quantized to the same value and the lower the accuracy; the quantization step size is therefore inversely related to accuracy.
  • Accordingly, the quantization step size of the motion information of a coded/decoded block can be determined and compared with the preset step size threshold (which can be set according to the actual scene): when it is greater than or equal to the threshold, the motion information of the coded/decoded block is refused to be added to the coded/decoded block motion information list; when it is less than the threshold, the motion information of the coded/decoded block is added to the coded/decoded block motion information list.
  • In this way, the coded/decoded blocks are filtered based on the quantization step size of their motion information, and the motion information of coded/decoded blocks with too low accuracy is eliminated, which improves the accuracy of constructing the coded/decoded block motion information list.
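  • The sketch below expresses the width/height and quantization-step screening conditions described above as simple predicates; the struct layout and the notion of a per-block MV quantization step are assumptions made for illustration.

```cpp
// Minimal sketch of the width/height and quantization-step screening conditions.
struct CodedBlockInfo {
    int width, height;
    int mvQuantStep;   // quantization step size of the block's motion information
};

bool PassesSizeFilter(const CodedBlockInfo& blk, int widthThreshold, int heightThreshold) {
    // Refuse only when BOTH width and height reach their thresholds.
    return !(blk.width >= widthThreshold && blk.height >= heightThreshold);
}

bool PassesQuantStepFilter(const CodedBlockInfo& blk, int stepThreshold) {
    // A large quantization step means low motion information accuracy.
    return blk.mvQuantStep < stepThreshold;
}
```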
  • Step S330 Select candidate motion information from the encoded / decoded block motion information list and add it to the motion information candidate list of the current image block, where the candidate motion information includes at least a motion vector.
  • After the coded/decoded block motion information list is constructed, candidate motion information may be selected from the coded/decoded block motion information list to be added to the motion information candidate list of the current image block.
  • In an example, the above-mentioned selection of candidate motion information from the coded/decoded block motion information lists to join the motion information candidate list of the current image block may include: in descending order of the priorities of the coded/decoded block motion information lists corresponding to the categories, sequentially selecting candidate motion information from each coded/decoded block motion information list to add to the motion information candidate list of the current image block, where different categories correspond to different priorities.
  • That is, different categories of filtered coded/decoded blocks have different priorities when their motion information is used as candidate motion information.
  • Selecting candidate motion information from the coded/decoded block motion information lists in order of priority from high to low ensures that the motion information of coded/decoded blocks with high accuracy is included in the motion information candidate list, and that the motion information of coded/decoded blocks with higher accuracy is ranked higher in the motion information candidate list.
  • It should be noted that the candidate motion information is not limited to motion vectors and may also include coding information other than motion vectors; accordingly, the screening, classification, and/or reordering of the motion information may also apply to coding information other than motion vectors, which is not described further in this embodiment of the present application.
  • In another example, selecting candidate motion information from the coded/decoded block motion information lists to add to the motion information candidate list of the current image block may include: determining the coded/decoded block motion information list that matches the current image block; and selecting candidate motion information from the coded/decoded block motion information list that matches the current image block to join the motion information candidate list of the current image block.
  • Determining the coded/decoded block motion information list that matches the current image block may include: determining the category of the current image block according to the shape of the current image block, and determining the coded/decoded block motion information list that matches that category.
  • That is, the category of the current image block may be determined first according to the shape of the current image block; the coded/decoded block motion information list matching that category is then determined, and candidate motion information is selected from that coded/decoded block motion information list to be added to the motion information candidate list.
  • For example, assume that the coded/decoded blocks are divided into three categories based on their shapes (the first category, the second category, and the third category, respectively).
  • When candidate motion information needs to be selected from the coded/decoded block motion information lists to add to the motion information candidate list of the current image block: if the aspect ratio of the current image block is greater than 1, candidate motion information is selected from the coded/decoded block motion information list corresponding to the first category of coded/decoded blocks; if the aspect ratio of the current image block is less than 1, candidate motion information is selected from the coded/decoded block motion information list corresponding to the second category of coded/decoded blocks; and if the aspect ratio of the current image block is equal to 1, candidate motion information is selected from the coded/decoded block motion information list corresponding to the third category of coded/decoded blocks.
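  • A minimal sketch of this shape-based matching, assuming the first/second/third categories above correspond to wide, tall, and square coded/decoded blocks respectively; the enum and parameter names are hypothetical:

```cpp
// Minimal sketch of selecting the category-matched coded/decoded block motion list.
#include <vector>

struct MotionInfo { int mvx; int mvy; };

enum class ShapeCategory { WideBlocks, TallBlocks, SquareBlocks };

ShapeCategory MatchCategory(int curWidth, int curHeight) {
    if (curWidth > curHeight) return ShapeCategory::WideBlocks;   // aspect ratio > 1
    if (curWidth < curHeight) return ShapeCategory::TallBlocks;   // aspect ratio < 1
    return ShapeCategory::SquareBlocks;                           // aspect ratio == 1
}

const std::vector<MotionInfo>& SelectMatchingList(
        int curWidth, int curHeight,
        const std::vector<MotionInfo>& wideList,    // list built from first-category blocks
        const std::vector<MotionInfo>& tallList,    // list built from second-category blocks
        const std::vector<MotionInfo>& squareList)  // list built from third-category blocks
{
    switch (MatchCategory(curWidth, curHeight)) {
        case ShapeCategory::WideBlocks: return wideList;
        case ShapeCategory::TallBlocks: return tallList;
        default:                        return squareList;
    }
}
```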
  • In the embodiment of the present application, the coded/decoded blocks can also be classified, with different coded/decoded block motion information lists constructed according to the categories of the coded/decoded blocks, and/or the motion information of the coded/decoded blocks in the constructed coded/decoded block motion information list can be reordered, which further improves the accuracy of constructing the coded/decoded block motion information list and can improve video coding performance.
  • That is, the filtered coded/decoded blocks are classified, and different categories of filtered coded/decoded blocks are added to different coded/decoded block motion information lists.
  • In an example, step S310 is further included: classifying the filtered coded/decoded blocks, and adding the filtered coded/decoded blocks to the corresponding coded/decoded block motion information lists according to their categories.
  • In the embodiment of the present application, the filtered coded/decoded blocks can be classified according to characteristics of the coded/decoded blocks such as shape, size, or prediction mode, and each filtered coded/decoded block is added to the corresponding coded/decoded block motion information list according to its category.
  • the above-mentioned classification of the filtered encoded / decoded blocks may include: classifying the filtered encoded / decoded blocks according to the shape of the filtered encoded / decoded blocks.
  • Classifying the filtered coded/decoded blocks according to their shapes may include: when the aspect ratio of a filtered coded/decoded block is greater than 1, dividing it into the first category; and/or, when the aspect ratio of a filtered coded/decoded block is less than 1, dividing it into the second category.
  • the shape of the encoded / decoded block can be characterized by the aspect ratio of the encoded / decoded block.
  • The different shapes of an image block can be characterized by comparing its width-to-height ratio with 1: if the width-to-height ratio of the current image block is greater than 1, the current image block is a rectangle whose width is greater than its height; if the width-to-height ratio is equal to 1, the current image block is square; if the width-to-height ratio is less than 1, the current image block is a rectangle whose width is less than its height.
  • the filtered encoded / decoded blocks can be divided into different categories according to the aspect ratio of the filtered encoded / decoded blocks.
  • The category to which the filtered coded/decoded blocks whose aspect ratio is greater than 1 belong is referred to as the first category, and the category to which the filtered coded/decoded blocks whose aspect ratio is less than 1 belong is referred to as the second category.
  • A filtered coded/decoded block whose aspect ratio is equal to 1 may be divided into the first category, the second category, or a new category.
  • the category to which the filtered coded / decoded block having an aspect ratio equal to 1 belongs may be referred to as a third category.
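  • A minimal sketch of this shape-based classification (with an aspect ratio equal to 1 placed in a third category, one of the options mentioned above; the category numbering is a placeholder):

```cpp
// Minimal sketch of classifying a filtered coded/decoded block by its aspect ratio.
int ShapeCategoryOf(int width, int height) {
    if (width > height) return 1;   // aspect ratio > 1: first category
    if (width < height) return 2;   // aspect ratio < 1: second category
    return 3;                       // aspect ratio == 1: third category (one option)
}
```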
  • In another example, classifying the filtered coded/decoded blocks may include: when the product of the width and height of a filtered coded/decoded block is greater than a preset threshold, dividing it into the first category; and when the product of the width and height of a filtered coded/decoded block is less than or equal to the preset threshold, dividing it into the second category.
  • the filtered encoded / decoded blocks can be divided into different categories according to the size (ie, the product of width and height) of the filtered encoded / decoded blocks.
  • The category to which a filtered coded/decoded block whose product of width and height is greater than the preset threshold (which can be set according to the actual scene) belongs is called the first category; the category to which a filtered coded/decoded block whose product of width and height is less than or equal to the preset threshold belongs is called the second category.
  • When the filtered coded/decoded blocks are classified according to the product of their width and height, two or more preset thresholds may also be used to divide the product of width and height into three or more intervals, with one category corresponding to each interval.
  • For example, with thresholds Ta and Tb (Ta < Tb), the coded/decoded blocks whose product of width and height is less than or equal to Ta can be divided into the first category, the coded/decoded blocks whose product of width and height is greater than Ta and less than or equal to Tb are divided into the second category, and the coded/decoded blocks whose product of width and height is greater than Tb are divided into the third category.
  • It should be noted that when the coded/decoded blocks have been filtered based on their width and height (that is, based on the width being greater than or equal to the first preset threshold and the height being greater than or equal to the second preset threshold, as described in the above embodiment), the threshold used for classifying the filtered coded/decoded blocks by size needs to be less than the product of the first preset threshold and the second preset threshold.
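  • A minimal sketch of the size-based classification with the two thresholds Ta and Tb described above (the specific threshold values are left as parameters):

```cpp
// Minimal sketch of classifying a filtered coded/decoded block by its area.
int SizeCategoryOf(int width, int height, int Ta, int Tb) {  // requires Ta < Tb
    const int area = width * height;
    if (area <= Ta) return 1;   // first category: product of width and height <= Ta
    if (area <= Tb) return 2;   // second category: Ta < product <= Tb
    return 3;                   // third category: product > Tb
}
```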
  • In yet another example, classifying the filtered coded/decoded blocks may include: classifying the filtered coded/decoded blocks according to their prediction modes.
  • Classifying the filtered coded/decoded blocks according to their prediction modes includes: when the prediction mode of a filtered coded/decoded block is the merge mode, dividing it into the first category; and when the prediction mode of a filtered coded/decoded block is the AMVP mode, dividing it into the second category.
  • That is, the category to which a filtered coded/decoded block whose prediction mode is the merge mode belongs can be referred to as the first category, and the category to which a filtered coded/decoded block whose prediction mode is the AMVP mode belongs can be referred to as the second category.
  • The classification is not limited to the above two categories; filtered coded/decoded blocks of other prediction modes may also be divided into other categories (such as a third category, a fourth category, and so on), and the specific implementation is not repeated here.
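  • The sketch below illustrates prediction-mode-based classification with one coded/decoded block motion information list maintained per category; the mode set, category numbering, and container choice are assumptions for illustration.

```cpp
// Minimal sketch: dispatch a block's motion information to the per-category list.
#include <map>
#include <vector>

struct MotionInfo { int mvx; int mvy; };
enum class PredMode { Merge, Amvp, Other };

void Dispatch(PredMode mode, const MotionInfo& mi,
              std::map<int, std::vector<MotionInfo>>& listsByCategory) {
    int category;
    switch (mode) {
        case PredMode::Merge: category = 1; break;  // first category
        case PredMode::Amvp:  category = 2; break;  // second category
        default:              category = 3; break;  // further categories are possible
    }
    listsByCategory[category].push_back(mi);  // each category has its own list
}
```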
  • When the predicted motion information finally selected from the motion information candidate list is ranked near the front of the motion information candidates, the number of bits required for encoding the index value can be reduced, which improves video coding performance; moreover, during motion information prediction, when the finally selected motion information candidate is ranked near the front of the candidates, the cost of selecting the predicted motion information can be reduced. In other words, ranking highly relevant information near the front at the same index coding cost is beneficial to video coding performance.
  • Therefore, in the embodiment of the present application, the motion information of the coded/decoded blocks in the coded/decoded block motion information list can be reordered so that motion information with a higher probability of being selected as the final prediction information is placed toward the back of the coded/decoded block motion information list (candidate motion information is selected from the back of the coded/decoded block motion information list toward the front), so that the candidate motion information selected as the final prediction information can be ranked as high as possible among the motion information candidates.
  • After the coded/decoded block motion information list is constructed based on the filtered coded/decoded blocks (either a single coded/decoded block motion information list constructed directly after filtering the coded/decoded blocks, or multiple coded/decoded block motion information lists corresponding to different categories constructed after filtering and then classifying the coded/decoded blocks), the motion information of the filtered coded/decoded blocks in the coded/decoded block motion information list is reordered; when there are multiple lists, the motion information of the filtered coded/decoded blocks in each coded/decoded block motion information list may be reordered.
  • the method further includes step S320: reordering the motion information of the filtered encoded / decoded blocks based on the residual coefficients of the filtered encoded / decoded blocks.
  • Step S320 may follow step S310, in which case step S320 may also be replaced by: reordering the motion information of the classified coded/decoded blocks based on the residual coefficients of the classified coded/decoded blocks.
  • the above reordering of the motion information of the filtered encoded / decoded blocks based on their residual coefficients may include: reordering the motion information of the filtered encoded / decoded blocks in order of the number of non-zero residual coefficients from most to least.
  • when candidate motion information is selected from the encoded / decoded block motion information list, it is usually selected in order from the rear of the list to the front; therefore, during reordering, the motion information of the encoded / decoded blocks with a higher probability of being selected as the final predicted motion information should be placed at the rear of the list.
  • since a larger number of non-zero residual coefficients indicates less accurate prediction, the number of non-zero residual coefficients can be used for this ordering: after reordering, the motion information of the filtered encoded / decoded block with the most non-zero residual coefficients is ranked first in the encoded / decoded block motion information list, and the motion information of the filtered encoded / decoded block with the fewest non-zero residual coefficients is ranked last; a minimal sketch of this reordering is given below.
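A minimal sketch of the residual-based reordering described above. The entry structure and field name `nonzero_residuals` are assumptions made for illustration only; they are not taken from the patent text.

```python
from dataclasses import dataclass

@dataclass
class CodedBlockEntry:
    mv: tuple                 # motion vector (mv_x, mv_y)
    nonzero_residuals: int    # number of non-zero residual coefficients of the block

def reorder_by_residuals(entries):
    """Most non-zero residual coefficients first, fewest last.

    Because candidates are picked from the rear of the list forward,
    this places the most accurate motion information (fewest non-zero
    coefficients) where it is selected first."""
    return sorted(entries, key=lambda e: e.nonzero_residuals, reverse=True)

if __name__ == "__main__":
    hmvp_list = [CodedBlockEntry((4, -2), 3),
                 CodedBlockEntry((1, 1), 0),
                 CodedBlockEntry((-3, 5), 7)]
    for e in reorder_by_residuals(hmvp_list):
        print(e)
```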
  • reordering the motion information of the filtered encoded / decoded blocks in the encoded / decoded block motion information list may also include: reordering the motion information based on the shape of the current image block and the relative position of each filtered encoded / decoded block with respect to the current image block.
  • that is, after the encoded / decoded block motion information list is constructed based on the motion information of the filtered encoded / decoded blocks, the motion information of the filtered encoded / decoded blocks can be reordered based on the shape of the current image block and the relative positions of the filtered encoded / decoded blocks and the current image block.
  • the above reordering based on shape and relative position may include: when the aspect ratio (width to height) of the current image block is greater than 1, reordering the motion information with the filtered encoded / decoded blocks on the left side of the current image block placed first and the filtered encoded / decoded blocks on the upper side of the current image block placed last.
  • for a data block whose width is larger than its height, the correlation between the surrounding blocks on its upper side and the data block is higher than the correlation between the surrounding blocks on its left side and the data block; ranking the motion information of highly correlated candidate blocks first in the motion information candidate list reduces the index-coding overhead on the one hand and, on the other hand, raises the probability that this motion information is selected as the final predicted motion information, thereby improving the performance of video coding.
  • therefore, the motion information of the filtered encoded / decoded blocks can be reordered with the blocks on the left side of the current image block first and the blocks on the upper side of the current image block last, so that when candidate motion information is selected from the encoded / decoded block motion information list (from the rear forward), the motion information of the filtered encoded / decoded blocks on the upper side of the current image block is selected first and consequently ranks higher in the motion information candidate list than that of the filtered encoded / decoded blocks on the left side.
  • the above reordering based on shape and relative position may further include: when the aspect ratio of the current image block is less than 1, reordering the motion information with the filtered encoded / decoded blocks on the upper side of the current image block placed first and the filtered encoded / decoded blocks on the left side of the current image block placed last.
  • for a data block whose height is larger than its width, the correlation between the surrounding blocks on its left side and the data block is higher than the correlation between the surrounding blocks on its upper side and the data block.
  • therefore, the motion information of the filtered encoded / decoded blocks can be reordered with the blocks on the upper side of the current image block first and the blocks on the left side last, so that when candidate motion information is selected from the encoded / decoded block motion information list, the motion information of the filtered encoded / decoded blocks on the left side of the current image block is selected first and consequently ranks higher in the motion information candidate list than that of the filtered encoded / decoded blocks on the upper side.
  • when the aspect ratio of the current image block is equal to 1, either ordering may be used: the filtered encoded / decoded block motion information in the encoded / decoded block motion information list may be reordered according to the rule described above for an aspect ratio greater than 1, or according to the rule described above for an aspect ratio less than 1; a sketch of this shape-dependent reordering follows.
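A minimal sketch of the shape-dependent reordering, under the assumption (made for illustration only) that each list entry records whether its source block lies to the left of or above the current image block:

```python
def reorder_by_shape(entries, cur_width, cur_height):
    """entries: list of (motion_info, position) pairs, position in {"left", "above"}.

    Wide blocks (aspect ratio > 1): left-side entries first, above entries last,
    so the above entries (selected from the rear forward) enter the candidate
    list earlier.  Tall or square blocks: the opposite ordering is used here,
    which for a square block is one of the two permitted choices."""
    if cur_width > cur_height:
        first, last = "left", "above"
    else:
        first, last = "above", "left"
    return ([e for e in entries if e[1] == first] +
            [e for e in entries if e[1] not in (first, last)] +
            [e for e in entries if e[1] == last])

entries = [((1, 0), "above"), ((2, -1), "left"), ((0, 3), "left")]
print(reorder_by_shape(entries, cur_width=32, cur_height=8))
```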
  • when the filtered encoded / decoded blocks are classified in the manner described in step S310 and are added to the encoded / decoded block motion information lists corresponding to their categories, the priority of each encoded / decoded block motion information list can be determined separately.
  • taking the classification by prediction mode described in step S310 as an example: since the accuracy of the motion information of an encoded / decoded block in the AMVP mode is higher than that of an encoded / decoded block in the merge mode, the priority of the encoded / decoded block motion information list holding the motion information of the filtered encoded / decoded blocks whose prediction mode is the AMVP mode can be set higher than the priority of the list holding the motion information of the filtered encoded / decoded blocks whose prediction mode is the merge mode.
  • alternatively, when the filtered encoded / decoded blocks are classified and added to the lists corresponding to their categories, the encoded / decoded block motion information list that matches the current image block may be determined first, and the candidate motion information is then selected from that matching list and added to the motion information candidate list of the current image block.
  • in the embodiments of the present application, the encoded / decoded blocks may also be classified without prior filtering, and encoded / decoded block motion information lists corresponding to the different categories may be constructed respectively, to improve the accuracy of the encoded / decoded block motion information list construction.
  • FIG. 4 is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • the method for constructing a motion information candidate selection list may include the following steps.
  • Step S400 Classify the encoded / decoded blocks before the current image block.
  • Step S410 Add the coded / decoded block to the corresponding coded / decoded block motion information list according to the type of the coded / decoded block; wherein different categories correspond to different coded / decoded block motion information lists.
  • Step S420 Select candidate motion information from the coded / decoded block motion information list and add it to the motion information candidate list of the current image block, where the candidate motion information includes at least a motion vector.
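A minimal sketch of steps S400 to S420, under assumed data structures (dicts with `mv` and `mode` fields) and an assumed maximum candidate count; the classification rule shown (by prediction mode) is only one of the rules described in this application.

```python
from collections import defaultdict

def build_category_lists(coded_blocks, classify):
    """Steps S400/S410: put each encoded/decoded block's motion information
    into the encoded/decoded block motion information list of its category."""
    lists = defaultdict(list)
    for blk in coded_blocks:
        lists[classify(blk)].append(blk["mv"])
    return lists

def fill_candidates(category_lists, priority_order, max_candidates):
    """Step S420: take candidates category by category, higher priority first,
    reading each list from the rear forward."""
    candidates = []
    for category in priority_order:
        for mv in reversed(category_lists.get(category, [])):
            if len(candidates) == max_candidates:
                return candidates
            if mv not in candidates:       # simple duplicate pruning (assumption)
                candidates.append(mv)
    return candidates

classify_by_mode = lambda blk: blk["mode"]   # "amvp" or "merge"
blocks = [{"mv": (1, 0), "mode": "merge"}, {"mv": (2, -1), "mode": "amvp"}]
lists = build_category_lists(blocks, classify_by_mode)
print(fill_candidates(lists, priority_order=["amvp", "merge"], max_candidates=4))
```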
  • for the specific implementation of classifying the encoded / decoded blocks before the current image block and constructing the encoded / decoded block motion information lists corresponding to the different categories, reference may be made to the related description of the method flow shown in FIG. 3; the difference is that the filtered encoded / decoded blocks classified in the method flow shown in FIG. 3 are replaced here by unfiltered encoded / decoded blocks, while the classification method itself is the same as that in step S310, and details are not repeated in this embodiment of the present application.
  • before the classification, the encoded / decoded blocks may also be filtered first, and the encoded / decoded block motion information lists may then be constructed based on the filtered encoded / decoded blocks.
  • in addition, the motion information of the encoded / decoded blocks in each encoded / decoded block motion information list may also be reordered; for the specific implementation, reference may be made to the related description of the method flow shown in FIG. 3, which is not repeated in this embodiment of the present application.
  • it should be noted that the candidate motion information is not limited to motion vectors and may also include coding information other than motion vectors; accordingly, the classification may also involve coding information other than motion vectors, which is not described in detail in the embodiments of the present application.
  • in the embodiments of the present application, when the encoded / decoded block motion information list is constructed based on the motion information of the encoded / decoded blocks, the motion information of the encoded / decoded blocks in the list may be reordered to ensure that the motion information most likely to be selected as the final predicted motion information ranks high in the motion information candidate list, so as to improve encoding performance.
  • FIG. 5 is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • the method for constructing a motion information candidate selection list may include the following steps:
  • Step S500 Construct an encoded / decoded block motion information list according to the motion information of the encoded / decoded blocks before the current image block.
  • the motion information of all the coded / decoded blocks may be directly added to the same coded / decoded block motion information list, or the coded / decoded block may be first After the blocks are filtered and / or classified, a motion information list of the encoded / decoded blocks is constructed based on the filtered and / or classified encoded / decoded blocks. The specific implementation is not described herein.
  • Step S510 Reorder the motion information of the encoded / decoded blocks in the encoded / decoded block motion information list.
  • Step S520 Select candidate motion information from the reordered encoded / decoded block motion information list to add a motion information candidate list of the current image block, where the candidate motion information includes at least a motion vector.
  • the specific implementation of reordering the motion information of the coded / decoded blocks in the coded / decoded block motion information list can refer to the related description in the method flow shown in FIG. 3, and the embodiment of the present application is here Do not go into details.
  • it should be noted that, when constructing the final motion information candidate list, the candidate motion information obtained by transforming the existing motion information, or the candidate motion information selected from the encoded / decoded block motion information list, can be ranked behind the spatial domain candidates (if they exist) and the temporal domain candidates (if they exist); that is, when the number of spatial and temporal candidates does not meet the requirement, the candidate motion information obtained by transforming the existing motion information, or selected from the encoded / decoded block motion information list, is added to the final motion information candidate list.
  • if, after adding the candidate motion information obtained by transforming the existing motion information, or the candidate motion information selected from the encoded / decoded block motion information list, the number of candidates still does not meet the requirement, combined candidates (for the merge mode) and zero motion information can further be added to the final motion information candidate list.
  • likewise, the candidate motion information is not limited to motion vectors and may also include encoding information other than motion vectors; accordingly, the ordering may also involve encoding information other than motion vectors, which is not described in detail in the embodiments of the present application.
  • in the embodiments of the present application, for the motion information prediction of the Affine mode, an encoded / decoded block motion information list may also be constructed based on the motion information of the encoded / decoded blocks, and candidate motion information is selected from this list and added to the motion information candidate list of the current image block, to increase the richness of the candidate samples.
  • the Affine mode may include the Affine Merge mode or the Affine AMVP mode.
  • the parameter model of the general Affine mode is composed of six parameters: a, b, c, d, e, and f.
  • in this mode, the inter prediction block is divided into several small regions of equal size; within each small region (i.e. sub-block) the motion is assumed to be uniform, and the motion compensation model of each small region is still a planar translation model (the sub-block is only translated in the image plane without changing shape or size, so the motion of a sub-block can still be described by a single motion vector).
  • when the scaling ratio between any two points of the affine object is consistent (the angle formed by any two straight lines remains unchanged), the six-parameter affine motion model of a, b, c, d, e, f degenerates into a four-parameter affine motion model, that is, there is a fixed relationship among the four parameters a, b, c, d.
  • accordingly, only two known (x, y) → (x', y') point correspondences (four equations) are needed to derive these four parameters.
  • FIG. 6A is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • the method for constructing a motion information candidate selection list may include the following steps.
  • Step S600 Construct a coded / decoded block motion information list, where the coded / decoded block motion information list includes motion information of the coded / decoded block before the current image block.
  • a coded / decoded block motion information list may be constructed based on the motion information of the coded / decoded blocks, and candidate motion information is selected from the coded / decoded block motion information list. Join the motion information candidate list of the current image block.
  • Step S610 When the prediction mode of the current image block is the Affine mode, select candidate motion information from the motion information of the encoded / decoded block and add it to the motion information candidate list of the current image block.
  • that is, when the prediction mode of the current image block is the Affine mode, candidate motion information may also be selected from the motion information of the encoded / decoded blocks and added to the motion information candidate list of the current image block, to increase the richness of the candidate samples for the motion information prediction of the Affine mode.
  • for the Affine mode, the motion information may include a motion vector, a reference frame index, a motion direction, and a parameter model.
  • for a 4-parameter model, the control point motion information of the current image block can be determined according to the motion information of the control points (including the upper-left control point and the upper-right control point) of the encoded / decoded block, and then, according to the control point motion information of the current image block (denoted V0 (V x0 , V y0 ) and V1 (V x1 , V y1 )), formula (5) is used to obtain the motion information of each sub-block of the current image block; under the 4-parameter model, the motion information can represent the angle and speed of MV rotation in the plane.
  • for a 6-parameter model, the control point motion information of the current image block can be determined based on the motion information of the control points (including the upper-left control point, the upper-right control point and the lower-left control point) of the encoded / decoded block, and then, based on the control point motion information of the current image block (denoted V0 (V x0 , V y0 ), V1 (V x1 , V y1 ) and V2 (V x2 , V y2 )), formula (3) is used to obtain the motion information of each sub-block of the current image block; under the 6-parameter model, the motion information can represent the angle, speed, and direction of MV rotation in stereo space (a reconstruction of the usual form of these formulas is given below).
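Formulas (3) and (5) are not reproduced in this excerpt; the following is an assumed reconstruction of the commonly used sub-block motion vector derivation for the 4-parameter and 6-parameter affine models, with W and H the width and height of the current image block and (x, y) the position of a sub-block relative to the upper-left control point.

```latex
% 4-parameter affine model (control points V0 at the top-left, V1 at the top-right);
% assumed reconstruction of formula (5):
\begin{aligned}
v_x &= \frac{v_{x1}-v_{x0}}{W}\,x \;-\; \frac{v_{y1}-v_{y0}}{W}\,y \;+\; v_{x0},\\
v_y &= \frac{v_{y1}-v_{y0}}{W}\,x \;+\; \frac{v_{x1}-v_{x0}}{W}\,y \;+\; v_{y0}.
\end{aligned}

% 6-parameter affine model (additional control point V2 at the bottom-left);
% assumed reconstruction of formula (3):
\begin{aligned}
v_x &= \frac{v_{x1}-v_{x0}}{W}\,x \;+\; \frac{v_{x2}-v_{x0}}{H}\,y \;+\; v_{x0},\\
v_y &= \frac{v_{y1}-v_{y0}}{W}\,x \;+\; \frac{v_{y2}-v_{y0}}{H}\,y \;+\; v_{y0}.
\end{aligned}
```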
  • the motion information candidate list constructed for the motion information prediction of the Affine mode may include, in addition to airspace candidates and zero motion information, a list of motion information from coded / decoded blocks.
  • a schematic diagram of the motion information candidate list that includes candidate motion information selected from the encoded / decoded block motion information list may be as shown in FIG. 6B.
  • in one example, selecting candidate motion information from the motion information of the encoded / decoded blocks and adding it to the motion information candidate list of the current image block includes: selecting candidate motion information from the motion information of the encoded / decoded blocks whose prediction mode is the Affine Merge mode and adding it to the motion information candidate list of the current image block; or, selecting candidate motion information from the motion information of the encoded / decoded blocks whose prediction mode is the Affine AMVP mode and adding it to the motion information candidate list of the current image block.
  • the motion information prediction of the Affine mode differs from that of the merge mode or the AMVP mode; therefore, for the motion information prediction of the Affine mode, when candidate motion information needs to be taken from the encoded / decoded blocks, it may be selected from the motion information of the encoded / decoded blocks whose prediction mode is the Affine mode and added to the motion information candidate list of the current image block.
  • for example, when the prediction mode of the current image block is the Affine mode, candidate motion information may be selected from the motion information of the encoded / decoded blocks whose prediction mode is the Affine Merge mode and added to the motion information candidate list of the current image block; alternatively, candidate motion information may be selected from the motion information of the encoded / decoded blocks whose prediction mode is the Affine AMVP mode and added to the motion information candidate list of the current image block.
  • it should be noted that, for the motion information prediction of the Affine mode, the selection of candidate motion information from the motion information of the encoded / decoded blocks is not limited to encoded / decoded blocks whose prediction mode is the Affine mode; the motion information of encoded / decoded blocks in non-Affine modes may also be selected.
  • when the selected candidate motion information is the motion information of a non-Affine mode encoded / decoded block, the motion information of the upper-left corner and the upper-right corner (for a 4-parameter model), or of the upper-left corner, the upper-right corner and the lower-left corner (for a 6-parameter model), can be used to derive the control point motion information; its specific implementation is not repeated here.
  • in the embodiments of the present application, when constructing the encoded / decoded block motion information list based on the encoded / decoded blocks whose prediction mode is the Affine mode, these encoded / decoded blocks may be classified, and each encoded / decoded block whose prediction mode is the Affine mode is added, according to its category, to the corresponding encoded / decoded block motion information list; different categories correspond to different encoded / decoded block motion information lists.
  • in one example, the encoded / decoded blocks whose prediction mode is the Affine mode can be classified according to their parameter model.
  • for example, when the parameter model of an encoded / decoded block whose prediction mode is the Affine mode is a 2-parameter (translational) model, the block can be divided into the first category; when the parameter model is a 4-parameter model, the block can be classified into the second category; when the parameter model is a 6-parameter model, the block can be divided into the third category.
  • in this case, selecting candidate motion information from the motion information of the encoded / decoded blocks and adding it to the motion information candidate list of the current image block may include: selecting candidate motion information in turn from the encoded / decoded block motion information lists corresponding to the categories, in descending order of the priority of the lists, and adding it to the motion information candidate list of the current image block; different categories correspond to different priorities.
  • that is, candidate motion information can be selected from the encoded / decoded block motion information lists in order of priority from high to low, to ensure that the motion information of encoded / decoded blocks with higher accuracy ranks higher in the motion information candidate list than the motion information of encoded / decoded blocks with lower accuracy.
  • for example, when the encoded / decoded blocks of the Affine mode are classified according to their parameter model (see the related description in the above embodiment), candidate motion information can be selected from the encoded / decoded block motion information lists in the order List2 (corresponding to the 6-parameter model), List1 (corresponding to the 4-parameter model) and List0 (corresponding to the 2-parameter model), and added to the motion information candidate list of the current image block.
  • in another example, selecting candidate motion information from the encoded / decoded block motion information list and adding it to the motion information candidate list of the current image block may include: determining the encoded / decoded block motion information list that matches the current image block, and selecting candidate motion information from that matching list to add to the motion information candidate list of the current image block.
  • for example, the numbers of spatial domain candidates of the current image block that use the 2-parameter model, the 4-parameter model and the 6-parameter model can be determined respectively, and candidate motion information is then selected from the encoded / decoded block motion information list corresponding to the parameter model with the largest number of spatial domain candidates and added to the motion information candidate list of the current image block.
  • for instance, if most of the spatial domain candidates of the current image block use the 4-parameter model, candidate motion information may be selected from List1 (corresponding to the 4-parameter model) and added to the motion information candidate list of the current image block.
  • in the embodiments of the present application, when constructing the encoded / decoded block motion information list, the motion information of the encoded / decoded blocks in the list may be updated in a first-in-first-out (FIFO) manner; that is, when the number of pieces of motion information of encoded / decoded blocks in the list has reached a preset maximum number and new encoded / decoded block motion information needs to be added, the motion information of the encoded / decoded block that was added to the list earliest can be deleted, and the motion information of the new encoded / decoded block is then added (a minimal sketch of this update is given below).
  • it should be noted that the above FIFO method is only one specific implementation of updating the motion information of the encoded / decoded blocks in the encoded / decoded block motion information list and is not a limitation on the protection scope of the present application; the motion information of the encoded / decoded blocks in the list may also be updated in other ways, and the specific implementation is not described herein.
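A minimal sketch of the FIFO update, assuming an illustrative maximum list length and optional duplicate pruning (both assumptions, not values taken from the patent):

```python
from collections import deque

class CodedBlockMotionList:
    """FIFO update of an encoded/decoded block motion information list.

    When the list is full, the oldest entry is dropped before the new
    motion information is appended."""

    def __init__(self, max_size=6):
        self.entries = deque(maxlen=max_size)   # deque drops the oldest item itself

    def add(self, motion_info):
        # optional pruning of exact duplicates (an assumption for this sketch)
        if motion_info in self.entries:
            self.entries.remove(motion_info)
        self.entries.append(motion_info)

hist = CodedBlockMotionList(max_size=3)
for mv in [(1, 0), (2, 1), (0, 0), (4, -2)]:
    hist.add(mv)
print(list(hist.entries))   # [(2, 1), (0, 0), (4, -2)]
```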
  • in the embodiments of the present application, new candidate motion information can also be added by scaling the existing motion information in a specified direction.
  • the existing motion information may include the motion information of a spatial domain candidate, the motion information of a temporal domain candidate, and the motion information of an encoded / decoded block; that is, the existing motion information in the motion information candidate list can be scaled in a specified direction to obtain new candidate motion information.
  • accordingly, besides spatial domain candidates, temporal domain candidates, combined candidates and zero motion information, the constructed motion information candidate list can also include candidate motion information obtained by scaling existing motion information in a specified direction.
  • for example, a motion vector can be scaled along its own direction: for a motion vector (mx, my), the scaled motion vector 702 is (mx + delta_mv_x, my + delta_mv_y), where delta_mv_x is the scaling offset in the x direction and delta_mv_y is the scaling offset in the y direction.
  • it should be noted that the scaling of the existing motion information is not limited to scaling along the direction of the motion vector and may be performed in other directions, which is not described in the embodiments of the present application; a sketch of scaling along the motion vector direction is given below.
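A minimal sketch of scaling a motion vector along its own direction; the `amplitude` parameter corresponds to the scaling amplitude discussed in this application, and the normalisation by the vector length is an assumption made so that the amplitude is expressed in pixels.

```python
import math

def scale_along_direction(mv, amplitude):
    """Scale a motion vector along its own direction by `amplitude`.

    Returns (mx + delta_mv_x, my + delta_mv_y) where the deltas point
    along the original vector; a zero vector is returned unchanged."""
    mx, my = mv
    norm = math.hypot(mx, my)
    if norm == 0:
        return mv
    delta_mv_x = amplitude * mx / norm
    delta_mv_y = amplitude * my / norm
    return (mx + delta_mv_x, my + delta_mv_y)

print(scale_along_direction((3.0, 4.0), amplitude=1.0))   # (3.6, 4.8)
```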
  • when scaling the existing motion information, frame-level, slice-level or row-level scaling may be performed; for example, the amplitude of scaling the existing motion information can be determined according to the complexity and the intensity of motion of the frame preceding the frame to which the existing motion information belongs, where the complexity and the intensity of motion of a frame can be characterized by the proportion of zero motion vectors in that frame.
  • alternatively, when scaling the existing motion information, block-level scaling can be performed; for any data block (which can be a spatial domain candidate, a temporal domain candidate, or an encoded / decoded block), whether the block needs to be scaled, and by what amplitude, can be determined according to the similarity of the motion information of the neighboring blocks of that block; when the determined amplitude is 0, it can be determined that no scaling needs to be performed.
  • for example, for data block 1 the determined scaling amplitude is delta_mv0, its neighboring blocks being A0, A1, B0, B1 and B2; the similarity of the neighboring blocks at different positions of data block 2 is low and the determined scaling amplitude is delta_mv1 (the illustration of its neighboring blocks is omitted in the figure); the similarity of the neighboring blocks at different positions of data block 3 is high and the determined scaling amplitude is delta_mv2 (the illustration of its neighboring blocks is likewise omitted); then delta_mv2 < delta_mv1 < delta_mv0.
  • in the embodiments of the present application, new candidate motion information can also be obtained by weighting at least two pieces of existing motion information, where the existing motion information may include the motion information of spatial domain candidates, temporal domain candidates and encoded / decoded blocks; the weighting ratio of each piece of existing motion information can be adaptively adjusted according to the characteristics of its source block (which can be a spatial domain candidate, a temporal domain candidate, or an encoded / decoded block).
  • for example, assume that source block A and source block B of the existing motion information both refer to the same frame, the motion vector of source block A is (amx, amy) and the motion vector of source block B is (bmx, bmy), the weighting coefficient of source block A is W0 and the weighting coefficient of source block B is W1; the weighted motion vector (Mv_x, Mv_y) can then be calculated by formulas (6) and (7), a sketch of which is given below.
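Formulas (6) and (7) are not reproduced in this excerpt; the following is only an assumed weighted-average form consistent with the description above (the normalisation by W0 + W1 is an assumption):

```python
def weighted_motion_vector(mv_a, mv_b, w0, w1):
    """Weighted combination of two motion vectors referring to the same frame.

    mv_a = (amx, amy), mv_b = (bmx, bmy); w0 and w1 are the weighting
    coefficients of source block A and source block B."""
    amx, amy = mv_a
    bmx, bmy = mv_b
    mv_x = (w0 * amx + w1 * bmx) / (w0 + w1)
    mv_y = (w0 * amy + w1 * bmy) / (w0 + w1)
    return (mv_x, mv_y)

print(weighted_motion_vector((4, 2), (8, -2), w0=1, w1=3))   # (7.0, -1.0)
```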
  • in the embodiments of the present application, an encoded / decoded block motion information list may be constructed based on the motion information of the encoded / decoded blocks, and candidate motion information is selected from the encoded / decoded block motion information list and added to the motion information candidate list of the current image block; when constructing this list, the encoded / decoded blocks can be filtered first and the encoded / decoded block motion information list is then built based on the motion information of the filtered encoded / decoded blocks.
  • the number of non-zero coefficients among the residual coefficients of an encoded / decoded block intuitively reflects the accuracy of its prediction; therefore, the encoded / decoded blocks can be filtered based on this number to remove encoded / decoded blocks with poor prediction accuracy: for any encoded / decoded block, it can be determined whether the number of non-zero coefficients among its residual coefficients is greater than or equal to a preset number threshold; if so, the motion information of the encoded / decoded block is refused admission to the encoded / decoded block motion information list; otherwise, the motion information of the encoded / decoded block is added to the encoded / decoded block motion information list.
  • the encoded / decoded blocks may also be filtered based on their width and height, to eliminate encoded / decoded blocks whose motion information has a low probability of being selected as the final predicted motion information: for any encoded / decoded block, before adding it to the encoded / decoded block motion information list, it can be determined whether its width is greater than or equal to a first preset threshold (taking 64 as an example) and whether its height is greater than or equal to a second preset threshold (taking 64 as an example); if the width is greater than or equal to 64 and the height is greater than or equal to 64, the motion information of the encoded / decoded block is refused admission to the encoded / decoded block motion information list; otherwise (the width is less than 64 and/or the height is less than 64), the motion information of the encoded / decoded block is added to the encoded / decoded block motion information list.
  • the quantization step size of an encoded / decoded block intuitively reflects the accuracy of its motion information; therefore, the encoded / decoded blocks can be filtered based on the quantization step size of their motion information, to remove motion information that is too inaccurate: for any encoded / decoded block, before adding it to the encoded / decoded block motion information list, it can be determined whether the quantization step size of its motion information is greater than or equal to a preset step-size threshold (taking 2 as an example); if so, the motion information of the encoded / decoded block is refused admission to the encoded / decoded block motion information list; otherwise, it is added to the encoded / decoded block motion information list; a sketch of these filter rules follows.
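A minimal sketch of the three filter rules above. The thresholds 64 (width/height) and 2 (quantization step) come from the examples in the text; the residual-count threshold of 8 and the dict-based block representation are assumptions made for illustration.

```python
def admit_to_list(block,
                  max_nonzero_residuals=8,     # assumed threshold
                  max_width=64, max_height=64, # example thresholds from the text
                  max_quant_step=2):           # example threshold from the text
    """Return True if the block's motion information may enter the
    encoded/decoded block motion information list.

    A block is rejected when its residual has too many non-zero coefficients,
    when both its width and height are too large, or when its motion
    information is quantized too coarsely."""
    if block["nonzero_residuals"] >= max_nonzero_residuals:
        return False
    if block["width"] >= max_width and block["height"] >= max_height:
        return False
    if block["quant_step"] >= max_quant_step:
        return False
    return True

blk = {"nonzero_residuals": 2, "width": 32, "height": 64, "quant_step": 1}
print(admit_to_list(blk))   # True
```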
  • in the embodiments of the present application, when constructing the encoded / decoded block motion information lists, the encoded / decoded blocks may first be classified, and encoded / decoded blocks of different categories are added to different encoded / decoded block motion information lists.
  • taking classification according to the shape of the encoded / decoded block (using the aspect ratio as an example) as an example: the shape of each encoded / decoded block can be determined, and when the aspect ratio of the encoded / decoded block is greater than or equal to 1, its motion information is added to encoded / decoded block motion information list 1 (which may be called List0); when the aspect ratio of the encoded / decoded block is less than 1, its motion information is added to encoded / decoded block motion information list 2 (which may be called List1).
  • in this case, the matching encoded / decoded block motion information list may be determined according to the shape of the current image block, and the candidate motion information is selected from the matching list and added to the motion information candidate list of the current image block: when candidate motion information needs to be selected, the aspect ratio of the current image block is determined; if the aspect ratio of the current image block is greater than or equal to 1, candidate motion information is selected from List0 and added to the motion information candidate list of the current image block; if the aspect ratio of the current image block is less than 1, candidate motion information is selected from List1 and added to the motion information candidate list of the current image block; a sketch of this shape-matched classification and selection is given below.
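A minimal sketch of the shape-based classification into List0/List1 and the shape-matched selection for the current image block; the dict-based block representation and the rear-first reading order are assumptions made for illustration.

```python
def classify_by_shape(block):
    """List0 holds blocks with aspect ratio >= 1, List1 those with ratio < 1."""
    return "List0" if block["width"] >= block["height"] else "List1"

def build_shape_lists(coded_blocks):
    lists = {"List0": [], "List1": []}
    for blk in coded_blocks:
        lists[classify_by_shape(blk)].append(blk["mv"])
    return lists

def select_for_current_block(lists, cur_width, cur_height, num_needed):
    """Pick candidates from the list whose shape class matches the current block."""
    key = "List0" if cur_width >= cur_height else "List1"
    return list(reversed(lists[key]))[:num_needed]   # rear of the list first

blocks = [{"mv": (1, 0), "width": 32, "height": 16},
          {"mv": (0, 2), "width": 8,  "height": 32}]
lists = build_shape_lists(blocks)
print(select_for_current_block(lists, cur_width=64, cur_height=16, num_needed=1))
```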
  • alternatively, the encoded / decoded blocks can be classified according to their size (taking the product of width and height as an example): the product of the width and height of each encoded / decoded block can be determined, and it is judged whether this product is greater than or equal to a preset threshold (taking 2048 as an example); if so, the motion information of the encoded / decoded block is added to encoded / decoded block motion information list 1 (which may be called List0); otherwise, it is added to encoded / decoded block motion information list 2 (which may be called List1).
  • in this case, candidate motion information can be selected from the encoded / decoded block motion information lists in order of priority from high to low and added to the motion information candidate list of the current image block; assuming that the priority of List1 is higher than the priority of List0, when candidate motion information needs to be selected from the encoded / decoded block motion information lists, it is selected from List1 and then from List0, in that order, and added to the motion information candidate list of the current image block.
  • the encoded / decoded blocks can also be classified according to their prediction mode: the prediction mode of each encoded / decoded block is determined; when the prediction mode of the encoded / decoded block is the merge mode, its motion information is added to encoded / decoded block motion information list 1 (which may be called List0); when the prediction mode of the encoded / decoded block is the AMVP mode, its motion information is added to encoded / decoded block motion information list 2 (which may be called List1).
  • in this case, candidate motion information can likewise be selected from the encoded / decoded block motion information lists in order of priority from high to low and added to the motion information candidate list of the current image block; since the priority of List1 is higher than the priority of List0, when candidate motion information needs to be selected from the encoded / decoded block motion information lists, it is selected from List1 and then from List0, in that order, and added to the motion information candidate list of the current image block.
  • in the embodiments of the present application, after the encoded / decoded block motion information list is constructed based on the motion information of the encoded / decoded blocks, the motion information of the encoded / decoded blocks in the list can be reordered; for example, it can be reordered based on the residual coefficients of the encoded / decoded blocks, i.e. in order of the number of non-zero residual coefficients from most to least.
  • for an HMVP (History-based Motion Vector Prediction) list, when candidate motion information is selected from the encoded / decoded block motion information list and added to the motion information candidate list of the current image block, the selection is performed in order from the end of the list to the head (that is, the later an entry was added, the earlier it is selected).
  • the motion information of the encoded / decoded blocks can also be reordered based on the shape of the current image block and the relative position of each encoded / decoded block with respect to the current image block; for example, if encoded / decoded block A is an encoded / decoded block on the left side of the current image block and encoded / decoded block B is an encoded / decoded block on the upper side of the current image block, then, when the aspect ratio of the current image block is greater than 1, the motion information of encoded / decoded block A may be placed before the motion information of encoded / decoded block B, and when the aspect ratio of the current image block is less than 1, the motion information of encoded / decoded block A may be arranged behind the motion information of encoded / decoded block B.
  • in the embodiments of the present application, for the motion information prediction of the Affine mode, an encoded / decoded block motion information list can also be constructed based on the motion information of the encoded / decoded blocks, and candidate motion information is selected from this list and added to the motion information candidate list of the current image block.
  • the motion parameter model (Motion model) information of the coded / decoded blocks in the Affine mode can be stored as a candidate list (that is, the coded / decoded block motion information list).
  • the length of the list is L.
  • the list member (Motion model information member (referred to as MMIC)) is updated in the FIFO method or other methods, and its schematic diagram can be as shown in FIG. 7M.
  • the candidate motion information may be selected from the encoded / decoded block motion information list (also may be referred to as candidate motion parameter model information in this embodiment).
  • when constructing the encoded / decoded block motion information lists, the encoded / decoded blocks of the Affine mode can be classified according to their parameter model, and encoded / decoded blocks of the Affine mode belonging to different categories are added to different encoded / decoded block motion information lists.
  • for example, when the parameter model of an encoded / decoded block of the Affine mode is a 2-parameter (translational) model, its motion parameter model information is added to encoded / decoded block motion information list 1 (which may be called List0); when the parameter model is a 4-parameter model, its motion parameter model information is added to encoded / decoded block motion information list 2 (which may be called List1); when the parameter model is a 6-parameter model, its motion parameter model information is added to encoded / decoded block motion information list 3 (which may be called List2).
  • in this case, when selecting candidate motion information, the encoded / decoded block motion information list that matches the current image block may be selected, or the lists may be visited in order of priority from high to low and candidate motion information is selected from each list in turn; for the specific implementation, reference may be made to the related description of the method flow shown in FIG. 6A, which is not repeated here in this embodiment of the present application.
  • in addition, considering that the encoded / decoded block motion information list usually includes only the motion information of the encoded / decoded blocks in the CTU to which the current image block belongs and in the CTUs to its left, for other encoded / decoded blocks that are adjacent to the current image block but whose encoding order is much earlier than the current image block (such as the encoded / decoded blocks above the current image block), the motion information is no longer in the encoded / decoded block motion information list.
  • therefore, in the embodiments of the present application, candidate motion information may also be selected from the motion information of the encoded / decoded blocks in the CTUs in one or several rows above the CTU to which the current image block belongs.
  • for example, the motion information of the encoded / decoded blocks in each CTU of the row above the CTU to which the current image block belongs can be stored separately, the number of stored pieces of motion information per CTU not exceeding L2 (L1 and L2 may be the same or different), and the motion information of the encoded / decoded blocks in each CTU can be stored in their encoding order.
  • in this way, candidate motion information can also be selected from the motion information of the encoded / decoded blocks in the CTU directly above the CTU to which the current image block belongs, further increasing the richness of the candidate samples.
  • FIG. 8 is a schematic diagram of a hardware structure of a device for constructing a motion information candidate list provided by an embodiment of the present application.
  • the motion information candidate list construction device may include a processor 801, a communication interface 802, a memory 803, and a communication bus 804.
  • the processor 801, the communication interface 802, and the memory 803 complete communication with each other through a communication bus 804.
  • a computer program is stored in the memory 803; when executing the computer program stored in the memory 803, the processor 801 can implement the method for constructing a motion information candidate list corresponding to FIG. 2A, FIG. 3, FIG. 4, FIG. 5 or FIG. 6A.
  • the memory 803 mentioned herein may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information such as executable instructions and data.
  • for example, the memory 803 may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), a solid-state disk, any type of storage disc (such as an optical disc or DVD), or a similar storage medium, or a combination thereof.
  • the apparatus for constructing the motion information candidate list may be an encoding device or a decoding device.
  • An embodiment of the present application further provides a machine-readable storage medium storing a computer program, such as the memory 803 in FIG. 8; the computer program may be executed by the processor 801 in the motion information candidate list construction apparatus shown in FIG. 8 to implement the method for constructing a motion information candidate list corresponding to FIG. 2A, FIG. 3, FIG. 4, FIG. 5 or FIG. 6A.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application provides a motion information candidate list construction method and apparatus, and a readable storage medium. The method includes: obtaining existing motion information of a current image block, the existing motion information including at least a motion vector; transforming the existing motion information; and adding the transformed motion information, as candidate motion information, to a motion information candidate list of the current image block.

Description

Motion information candidate list construction method and apparatus, and readable storage medium — Technical Field
This application relates to video coding technology, and in particular to a motion information candidate list construction method and apparatus, and a readable storage medium.
Background
Inter prediction refers to using the correlation in the temporal domain of a video to predict the pixels of the current image from the pixels of neighboring encoded images, so as to effectively remove temporal redundancy from the video.
At present, the inter prediction part of the main video coding standards adopts block-based motion compensation. Its main principle is to find, for every pixel block of the current image, a best matching block in a previously encoded image; this process is called motion estimation (ME). The image used for prediction is called the reference frame, the displacement from the reference block to the current pixel block is called the motion vector (MV), and the difference between the current pixel block and the reference block is called the prediction residual.
Since the motion information of spatially neighboring blocks is strongly correlated, and motion information is also correlated to some extent in the temporal domain, if the motion information of neighboring blocks in the spatial or temporal domain is used to predict the motion information of the current block and obtain the predicted pixel values, only the residual needs to be encoded, which greatly saves the number of bits used for coding motion information. By constructing the same motion information candidate list at the encoder and the decoder, the motion information does not need to be encoded directly; only the index (e.g. Merge_idx) of the finally selected candidate in the list needs to be encoded to express the motion information, which greatly reduces the number of coded bits.
To exploit the spatial and temporal correlation of neighboring blocks, current video coding standards propose, for motion information prediction, the merge technique (Merge), the advanced motion vector prediction technique (AMVP) and the affine technique (Affine). All of them use spatial-domain and temporal-domain motion information prediction: a motion information candidate list is built, and the best candidate is selected from the list according to a preset rule as the prediction information of the current unit.
When constructing the motion information candidate list in existing video coding standards, candidate motion information is selected only from the spatial candidate list and the temporal candidate list, so situations easily arise in which no usable motion information exists or the amount of usable motion information is insufficient.
Summary
In view of this, the present application provides a motion information candidate list construction method and apparatus, and a readable storage medium.
Specifically, the present application is implemented by the following technical solutions.
According to a first aspect of the embodiments of the present application, a motion information candidate list construction method is provided, including: obtaining existing motion information of a current image block, the existing motion information including at least a motion vector; transforming the existing motion information; and adding the transformed motion information, as candidate motion information, to a motion information candidate list of the current image block.
According to a second aspect of the embodiments of the present application, a motion information candidate list construction method is provided, including: filtering, according to a preset filter condition, the encoded/decoded blocks before the current image block, and constructing an encoded/decoded block motion information list based on the filtered encoded/decoded blocks; and selecting candidate motion information from the encoded/decoded block motion information list and adding it to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
According to a third aspect of the embodiments of the present application, a motion information candidate list construction method is provided, including: classifying the encoded/decoded blocks before the current image block; adding the encoded/decoded blocks to corresponding encoded/decoded block motion information lists according to their categories, where different categories correspond to different encoded/decoded block motion information lists; and selecting candidate motion information from the encoded/decoded block motion information lists and adding it to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
According to a fourth aspect of the embodiments of the present application, a motion information candidate list construction method is provided, including: constructing an encoded/decoded block motion information list according to the motion information of the encoded/decoded blocks before the current image block; reordering the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list; and selecting candidate motion information from the reordered encoded/decoded block motion information list and adding it to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
According to a fifth aspect of the embodiments of the present application, a motion information candidate list construction method is provided, including: constructing an encoded/decoded block motion information list, which includes the motion information of the encoded/decoded blocks before the current image block; and, when the prediction mode of the current image block is the affine (Affine) mode, selecting candidate motion information from the motion information of the encoded/decoded blocks and adding it to the motion information candidate list of the current image block.
According to a sixth aspect of the embodiments of the present application, a motion information candidate list construction apparatus is provided, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with one another through the communication bus; the memory is configured to store a computer program; and the processor, when executing the program stored in the memory, implements the above motion information candidate list construction method.
According to a seventh aspect of the embodiments of the present application, a computer-readable storage medium is provided, in which a computer program is stored; when the computer program is executed by a processor, the above motion information candidate list construction method is implemented.
In the motion information candidate list construction method of the embodiments of the present application, the existing motion information of the current image block is transformed, and the transformed existing motion information is added, as candidate motion information, to the motion information candidate list of the current image block, which increases the richness of the candidate samples.
In the motion information candidate list construction method of the embodiments of the present application, when constructing the encoded/decoded block motion information list, the encoded/decoded blocks before the current image block are filtered, classified and/or reordered, which, on the basis of increasing the richness of the candidate samples, improves the accuracy of the construction of the encoded/decoded block motion information list.
In the motion information candidate list construction method of the embodiments of the present application, when the prediction mode of the current image block is the Affine mode, candidate motion information is selected from the motion information list of the encoded/decoded blocks and added to the motion information candidate list of the current image block, which increases the richness of the candidate samples for motion information prediction in the Affine mode.
Brief Description of the Drawings
FIG. 1A-(a) to FIG. 1A-(f) are schematic diagrams of block partitioning shown in an exemplary embodiment of the present application;
FIG. 1B is a schematic diagram of block partitioning shown in another exemplary embodiment of the present application;
FIG. 2A is a schematic flowchart of a motion information candidate list construction method shown in an exemplary embodiment of the present application;
FIG. 2B to FIG. 2D are schematic diagrams of motion information candidate lists shown in exemplary embodiments of the present application;
FIG. 3 is a schematic flowchart of a motion information candidate list construction method shown in an exemplary embodiment of the present application;
FIG. 4 is a schematic flowchart of a motion information candidate list construction method shown in another exemplary embodiment of the present application;
FIG. 5 is a schematic flowchart of a motion information candidate list construction method shown in yet another exemplary embodiment of the present application;
FIG. 6A is a schematic flowchart of a motion information candidate list construction method shown in still another exemplary embodiment of the present application;
FIG. 6B is a schematic diagram of a motion information candidate list shown in an exemplary embodiment of the present application;
FIG. 7A is a schematic diagram of scaling existing motion information shown in an exemplary embodiment of the present application;
FIG. 7B is a schematic diagram of the correspondence between frame motion intensity and scaling amplitude shown in an exemplary embodiment of the present application;
FIG. 7C is a schematic diagram of the correspondence between the similarity of neighboring blocks at different positions and the scaling amplitude shown in an exemplary embodiment of the present application;
FIG. 7D is a schematic diagram of filtering encoded/decoded blocks shown in an exemplary embodiment of the present application;
FIG. 7E is a schematic diagram of filtering encoded/decoded blocks shown in another exemplary embodiment of the present application;
FIG. 7F is a schematic diagram of filtering encoded/decoded blocks shown in yet another exemplary embodiment of the present application;
FIG. 7G is a schematic diagram of classifying encoded/decoded blocks shown in an exemplary embodiment of the present application;
FIG. 7H is a schematic diagram of selecting candidate motion information from multiple encoded/decoded block motion information lists shown in an exemplary embodiment of the present application;
FIG. 7I is a schematic diagram of classifying encoded/decoded blocks shown in another exemplary embodiment of the present application;
FIG. 7J is a schematic diagram of classifying encoded/decoded blocks shown in yet another exemplary embodiment of the present application;
FIG. 7K is a schematic diagram of reordering the motion information of the encoded/decoded blocks in an encoded/decoded block motion information list shown in an exemplary embodiment of the present application;
FIG. 7L is a schematic diagram of reordering the motion information of the encoded/decoded blocks in an encoded/decoded block motion information list shown in another exemplary embodiment of the present application;
FIG. 7M is a schematic diagram of constructing the encoded/decoded block motion information list in the motion information prediction of the Affine mode shown in an exemplary embodiment of the present application;
FIG. 7N is a schematic diagram of classifying encoded/decoded blocks of the Affine mode shown in an exemplary embodiment of the present application;
FIG. 7O is a schematic diagram of storing encoded/decoded block motion information shown in an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a hardware structure of a motion information candidate list construction apparatus shown in an exemplary embodiment of the present application.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; on the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said" and "the" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
To help those skilled in the art better understand the technical solutions provided by the embodiments of the present application, the block partitioning techniques in existing video coding standards are briefly described first.
In HEVC (High Efficiency Video Coding), a CTU (Coding Tree Unit) is recursively partitioned into one or more CUs (Coding Units) using a quadtree. Whether intra coding or inter coding is used is determined at the leaf-node CU level. A CU can be further divided into two or four PUs (Prediction Units), and the same prediction information is used within one PU. After prediction is completed and the residual information is obtained, a CU can further be quad-partitioned into multiple TUs (Transform Units). For example, the current image block in the present application is one PU.
However, the block partitioning technique has changed considerably in the newly proposed VVC (Versatile Video Coding). A partitioning structure mixing binary-tree / ternary-tree / quadtree partitioning replaces the previous partitioning modes, abolishes the previous distinction among the concepts of CU, PU and TU, and supports more flexible CU partitioning, in which a CU can be a square or a rectangular partition. The CTU is first partitioned by a quadtree, and the leaf nodes of the quadtree partition can then be further partitioned by a binary tree or a ternary tree. As shown in FIG. 1A, there are five partition types for a CU: FIG. 1A-(a) shows a CU, FIG. 1A-(b) shows quadtree partitioning, FIG. 1A-(c) shows horizontal binary-tree partitioning, FIG. 1A-(d) shows vertical binary-tree partitioning, FIG. 1A-(e) shows horizontal ternary-tree partitioning, and FIG. 1A-(f) shows vertical ternary-tree partitioning. As shown in FIG. 1B, the CU partitioning within a CTU can be any combination of the above five partition types. It can be seen that the different partitioning manners make the shapes of the PUs different, such as rectangles and squares of different sizes.
The prediction modes used in the present application are introduced as follows.
1. Merge mode
H.265/HEVC proposes, for motion information prediction, the merge technique (motion information prediction in the Merge mode) and the AMVP technique (i.e. motion information prediction in the AMVP mode). Both use spatial-domain and temporal-domain motion information prediction: a candidate motion information list is built, and the optimal motion information candidate is selected, based on the rate-distortion cost, as the predicted motion information of the current data block.
In the Merge mode, the motion information of the current data block is predicted directly from the motion information of spatially or temporally neighboring data blocks, and there is no motion vector difference (MVD). If the encoder and the decoder construct the motion information candidate list in the same way, the encoder only needs to transmit the index of the predicted motion information in the motion information candidate list, which greatly saves the number of bits used for coding the motion information.
2. AMVP mode
The motion information prediction of the AMVP mode also uses the correlation of the motion information of spatially and temporally neighboring data blocks, building a motion information candidate list for the current data block. The encoder selects the optimal predicted motion information from the list and differentially encodes the motion information. By building the same motion information candidate list, the decoder only needs the motion vector residual and the index of the predicted motion information in the list to compute the motion information of the current data block. The length of the motion information candidate list in the AMVP mode is 2.
3. Affine mode
The Affine mode is an inter prediction mode newly introduced in H.266, which gives good prediction for rotation and zoom scenes.
In the JEM (Joint Exploration Model), the Affine mode is divided into two kinds: Affine Inter (corresponding to Affine AMVP) and Affine Merge. Affine Merge only needs to traverse the candidate image blocks and find the first candidate coded in the Affine mode. In the Affine Merge mode, no additional index value needs to be transmitted; only a flag indicating whether Affine Merge is used needs to be transmitted.
To make the above objects, features and advantages of the embodiments of the present application more apparent and understandable, the technical solutions in the embodiments of the present application are further described in detail below with reference to the accompanying drawings.
It should be noted that the motion information candidate list construction method described herein can be applied to an encoder-side device or to a decoder-side device. When it is applied to an encoder-side device, the encoded/decoded block described herein refers to an encoded block; when it is applied to a decoder-side device, the encoded/decoded block described herein refers to a decoded block. This will not be repeated below.
In the embodiments of the present application, for the motion information prediction of modes such as the merge mode or the AMVP mode, new candidate motion information can be obtained by transforming the existing motion information, so as to increase the richness of the candidate samples.
Referring to FIG. 2A, which is a schematic flowchart of a motion information candidate list construction method provided by an embodiment of the present application, as shown in FIG. 2A, the motion information candidate list construction method may include the following steps.
Step S200: Obtain the existing motion information before the current image block.
In the embodiments of the present application, the current image block refers to the data block currently undergoing motion information prediction. At the encoder side, the data block can be a data block to be encoded (which may be called an encoding block herein); at the decoder side, the data block can be a data block to be decoded (which may be called a decoding block herein).
In the embodiments of the present application, the existing motion information may include, but is not limited to, the motion information of spatial domain candidates, the motion information of temporal domain candidates, and/or the motion information of encoded/decoded blocks.
The motion information of the spatial domain candidates and the motion information of the temporal domain candidates are candidate motion information located in the motion information candidate list of the current image block. As shown in FIG. 2B, the motion information candidate list of the current image block includes spatial domain candidates, temporal domain candidates, zero motion information, and so on; for different modes, the numbers of spatial domain candidates and temporal domain candidates in the motion information candidate list of the current image block are different.
The spatial domain candidates of the current image block can include spatially adjacent blocks, can also include some spatially sub-adjacent blocks, and can further include some spatially non-adjacent blocks. The spatial candidate blocks of the current image block can include spatially adjacent blocks located in the same CTU as the current image block, and can also include spatially adjacent blocks located in a CTU different from that of the current image block. The temporal domain candidates of the current image block can be image blocks in the reference frame of the current image block, especially temporally adjacent blocks in the reference frame, including the intermediate image block at the same position as the current image block in the reference frame of the current image block, as well as the spatially adjacent blocks of this intermediate image block.
The motion information of the spatial domain candidates in the motion information candidate list of the current image block includes the motion information of some spatially adjacent blocks of the current image block. The motion information of the temporal domain candidates in the motion information candidate list of the current image block includes the motion information of the intermediate image block at the same position as the current image block in the reference frame of the current image block and of the spatially adjacent blocks of this intermediate image block.
It should be noted that, in the embodiments of the present application, unless otherwise specified, the motion information of the encoded/decoded blocks refers to the motion information of encoded image blocks other than the spatial domain candidates and the temporal domain candidates in the motion information candidate list of the current image block; this will not be repeated later in the embodiments of the present application.
In one example, the encoded image blocks may include the spatially adjacent blocks of the current image block other than the spatial domain candidates; or/and the temporally adjacent blocks of the current image block other than the temporal domain candidates in the motion information candidate list.
The current image block can be any image unit; an image unit can be, but is not limited to, a CTU, can also be a block or unit obtained by further partitioning a CTU, and can also be a unit of a block larger than a CTU.
The existing motion information includes at least a motion vector.
It should be recognized, however, that the existing motion information is not limited to motion vectors and may also be coding information other than motion vectors; accordingly, the transformation of the existing motion information may also include the transformation of motion information other than motion vectors, which is not described in detail in the embodiments of the present application.
Step S210: Transform the obtained existing motion information.
In the embodiments of the present application, in order to increase the richness of the candidate samples, the existing motion information can be transformed to obtain new candidate motion information.
In one embodiment of the present application, transforming the obtained existing motion information may include: scaling the obtained existing motion information in a specified direction.
In this embodiment, the transformation of the existing motion information can be implemented by scaling the existing motion information, so as to obtain new candidate motion information.
Accordingly, after the existing motion information is obtained, the obtained existing motion information can be scaled in a specified direction.
In one example, the specified direction can be the motion direction of the motion vector of the current image block, i.e. the direction of the MV (Motion Vector) included in the motion information. The current image block can be a bidirectional inter prediction block or a unidirectional inter prediction block.
It should be recognized, however, that scaling the existing motion information is not limited to scaling along the MV direction; scaling can also be performed in other specified directions, the specific implementation of which is not described here.
Further, in this embodiment, in order to improve the flexibility and rationality of scaling the existing motion information, the scaling amplitude can be flexibly adjusted for different pieces of existing motion information according to the actual scene.
As one implementation of this embodiment, in order to improve the efficiency of constructing the motion information candidate list, the scaling of the existing motion information can be frame-level, slice-level or row-level scaling; that is, the scaling amplitude of the existing motion information of the same frame, the same slice or the same row can be the same.
A frame of image can be divided into one or more slices; a slice can include one or more CTUs.
In one example, scaling the obtained existing motion information in a specified direction may include: determining, according to the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs, the amplitude by which the existing motion information is scaled in the specified direction; and scaling the existing motion information in the specified direction according to the determined amplitude.
In this example, the adjustment unit can include a frame, a slice or a row; accordingly, when scaling the existing motion information in the specified direction, frame-level, slice-level or row-level syntax control can be used.
In this example, the amplitude of scaling the existing motion information can be determined according to the intensity of motion of the adjustment unit.
The intensity of motion can be characterized by the proportion of zero motion vectors: the higher the proportion of zero motion vectors, the lower the intensity of motion; the lower the proportion of zero motion vectors, the higher the intensity of motion. The proportion of zero motion vectors of an adjustment unit is the ratio of the number of zero motion vectors in the adjustment unit to the total number of motion vectors in the adjustment unit.
In this example, when the existing motion information needs to be scaled in the specified direction, the amplitude of scaling the existing motion information in the specified direction can be determined according to the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs.
The amplitude of scaling the existing motion information in the specified direction is negatively correlated with this proportion of zero motion vectors: the higher the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs, the smaller the determined scaling amplitude; the lower the proportion of zero motion vectors in the previous adjustment unit, the larger the determined scaling amplitude.
For example, taking the adjustment unit being a frame (i.e. frame-level adjustment) as an example, the correspondence between the proportion of zero motion vectors in the frame preceding the frame to which the current image block belongs and the determined amplitude of scaling the existing motion information in the specified direction can be as shown in the following table:

Proportion of zero motion vectors | Amplitude
(T1, T2) | A1
(T2, T3) | A2
(T3, T4) | A3

where 0 ≤ T1 < T2 < T3 < T4 ≤ 100%, and A1 > A2 > A3 ≥ 0.
It should be noted that, in this example, when the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs exceeds a relatively large proportion threshold (i.e. the motion of that previous adjustment unit is very weak), the probability that zero motion information is selected as the final predicted motion information when performing inter prediction on the current image block will be relatively high; in this case, the amplitude of scaling the existing motion information in the specified direction can be zero, i.e. the existing motion information is not scaled, so as to increase the probability that zero motion information is added to the final motion information candidate list.
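A minimal sketch of the frame-level mapping described above. The thresholds T1..T4 and amplitudes A1..A3 are symbolic in the text; the sample values passed in, and the handling of ratios below T1, are assumptions made for illustration.

```python
def frame_level_amplitude(zero_mv_ratio, thresholds, amplitudes):
    """Map the previous frame's zero-motion-vector ratio to a scaling amplitude.

    thresholds = (T1, T2, T3, T4) with 0 <= T1 < T2 < T3 < T4 <= 1.0;
    amplitudes = (A1, A2, A3) with A1 > A2 > A3 >= 0.
    A higher zero-MV ratio (calmer motion) gives a smaller amplitude;
    above T4 no scaling is applied at all."""
    t1, t2, t3, t4 = thresholds
    a1, a2, a3 = amplitudes
    if zero_mv_ratio >= t4:
        return 0.0
    if zero_mv_ratio >= t3:
        return a3
    if zero_mv_ratio >= t2:
        return a2
    return a1   # ratios in (T1, T2), and below T1 (assumption), use the largest amplitude

print(frame_level_amplitude(0.35, thresholds=(0.1, 0.3, 0.6, 0.9),
                            amplitudes=(2.0, 1.0, 0.5)))   # 1.0
```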
As another implementation of this embodiment, in order to improve the accuracy of scaling the existing motion information, the scaling of the existing motion information can be block-level scaling; that is, the scaling amplitude of the existing motion information can be determined separately for each data block.
In one example, scaling the obtained existing motion information in a specified direction may include: determining, according to the similarity of the motion information of the spatially adjacent blocks at different positions of the current image block, the amplitude by which the existing motion information is scaled in the specified direction; and scaling the existing motion information in the specified direction according to the determined amplitude.
In this example, the amplitude of scaling the existing motion information can be determined according to the similarity of the motion information of the blocks adjacent to the current image block.
In this example, when the existing motion information needs to be scaled in the specified direction, the scaling amplitude can be determined according to the similarity of the motion information of the spatially adjacent blocks at different positions of the current image block.
The higher the similarity of the motion information of the spatially adjacent blocks at different positions of the current image block, the smaller the determined amplitude of scaling the existing motion information in the specified direction; the lower the similarity, the larger the determined amplitude.
It should be noted that, in this example, when the similarity of the motion information of the spatially adjacent blocks at any two different positions of the current image block exceeds a preset similarity threshold, the amplitude of scaling the existing motion information in the specified direction can be zero, i.e. the existing motion information is not scaled.
In another embodiment of the present application, transforming the obtained existing motion information may include: weighting at least two pieces of the obtained existing motion information.
In this embodiment, the transformation of the existing motion information can be implemented by weighting the existing motion information, so as to obtain new candidate motion information.
Accordingly, after the existing motion information is obtained, at least two pieces of the obtained existing motion information can be weighted, i.e. a weighted average of the at least two pieces of obtained existing motion information is determined.
The weighting coefficients used when weighting the at least two pieces of obtained existing motion information can be adaptively adjusted according to the characteristics of the source block of each piece of existing motion information (the existing motion information being the motion information of that source block).
Step S220: Add the transformed motion information, as candidate motion information, to the motion information candidate list of the current image block.
In the embodiments of the present application, after the obtained existing motion information is transformed, the transformed motion information can be added, as candidate motion information, to the motion information candidate list of the current image block.
For example, taking the motion information prediction of the merge mode as an example, in the embodiments of the present application, the motion information candidate list constructed for the motion information prediction of the merge mode can include, in addition to spatial domain candidates, temporal domain candidates and zero motion information, candidate motion information obtained by transforming the existing motion information, a schematic diagram of which can be as shown in FIG. 2C.
The transformation of the existing motion information may include, but is not limited to, scaling or weighting the existing motion information, etc.
It should be noted that, for the motion information prediction of the merge mode, when combined candidates exist, the candidate motion information obtained by transforming the existing motion information can be placed before the combined candidates or after the combined candidates, but is not limited to the above examples. Taking FIG. 2C as an example, combined candidates exist, and the candidate motion information obtained by transforming the existing motion information is placed before the combined candidates (i.e. after the temporal domain candidates and before the combined candidates).
As another example, taking the motion information prediction of the AMVP mode as an example, in the embodiments of the present application, the motion information candidate list constructed for the motion information prediction of the AMVP mode can include, in addition to spatial domain candidates, temporal domain candidates and zero motion information, candidate motion information obtained by transforming the existing motion information; in the motion information candidate list, this candidate motion information can be placed after the temporal domain candidates and before the zero motion information, a schematic diagram of which can be as shown in FIG. 2D.
It can be seen that, in the method flow shown in FIG. 2A, by transforming the existing motion information and adding the transformed motion information, as candidate motion information, to the motion information candidate list of the current image block, the richness of the candidate samples is increased and the flexibility of motion information candidate selection is improved.
本申请实施例中,对于合并模式或AMVP模式等模式的运动信息预测,除了可以按照图2A所示的方法对已有运动信息进行变换,以增加候选者样本的丰富性之外,还可以通过筛选、分类以及排序等方式中的一种或多种方式基于已编码/解码块的运动信息构建已编码/解码块运动信息列表,并从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,在增加候选者样本丰富性的基础上,提高已编码/解码块运动信息列表构建的精准度。
请参见图3，为本申请实施例提供的一种运动信息候选者列表构建方法的流程示意图，如图3所示，该运动信息候选者列表构建方法可以包括以下步骤。
步骤S300、根据预设过滤条件对当前图像块之前的已编码/解码块进行筛选,并基于筛选后的已编码/解码块构建已编码/解码块运动信息列表。
本申请实施例中,为了增加候选者样本的丰富性,可以基于当前图像块之前的已编码/解码块的运动信息构建已编码/解码块运动信息列表,并从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
本申请实施例中,对于某些已编码/解码块的运动信息,其被选中为最终的预测运动信息的概率很小,因此,为了提高已编码/解码块运动信息列表构建的精准度,构建已编码/解码块运动信息列表时,可以对当前图像块之前的已编码/解码块进行筛选,并基于筛选后的已编码/解码块构建已编码/解码块运动信息列表。
在本申请其中一个实施例中,上述根据预设过滤条件对当前图像块之前的已编 码/解码块进行筛选,并基于筛选后的已编码/解码块构建已编码/解码块运动信息列表,可以包括:对于当前图像块之前的任一已编码/解码块,当该已编码/解码块的残差系数中的非零系数个数大于等于预设数量阈值时,拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表;当该已编码/解码块的残差系数中的非零系数个数小于所述预设数量阈值时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
在该实施例中,已编码/解码块的残差系数中的非零系数个数可以直观地体现运动信息预测是否准确,因此,在构建已编码/解码块运动信息列表时,可以基于已编码/解码块的残差系数中的非零系数个数对已编码/解码块进行筛选。其中,已编码/解码块的残差系数中的非零系数个数越多,已编码/解码块的运动信息的预测准确性越差。
相应地，在该实施例中，对于当前图像块之前的任一已编码/解码块，可以确定该已编码/解码块的残差系数中的非零系数个数，并判断该已编码/解码块的残差系数中的非零系数个数是否大于等于预设数量阈值（该预设数量阈值可以根据实际场景设定）。
当该已编码/解码块的残差系数中的非零系数个数大于等于预设数量阈值时,拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。当该已编码/解码块的残差系数中的非零系数个数小于预设数量阈值时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
在该实施例中,通过基于已编码/解码块的残差系数中非零系数个数对已编码/解码块进行过滤,剔除预测准确性过差的已编码/解码块的运动信息,提高了已编码/解码块运动信息列表构建的精准度。
在本申请另一个实施例中,上述根据预设过滤条件对当前图像块之前的已编码/解码块进行筛选,并基于筛选后的已编码/解码块构建已编码/解码块运动信息列表,可以包括:对于当前图像块之前的任一已编码/解码块,当该已编码/解码块的宽大于等于第一预设阈值,且该已编码/解码块的高大于等于第二预设阈值时,拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表;当该已编码/解码块的宽小于所述第一预设阈值,和/或该已编码/解码块的高小于第二预设阈值时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
在该实施例中,宽高均过大的已编码/解码块的运动信息被选中为最终的预测信息的概率会很低,因此,在构建已编码/解码块运动信息列表时,可以基于已编码/解码块的宽和高对已编码/解码块进行筛选。
相应地,在该实施例中,对于当前图像块之前的任一已编码/解码块,可以判断该已编码/解码块的宽是否大于等于第一预设阈值(该第一预设阈值可以根据实际场景设定),以及该已编码/解码块的高是否大于等于第二预设阈值(该第二预设阈值可以根据实际场景设定)。
当该已编码/解码块的宽大于等于第一预设阈值,且该已编码/解码块的高大于等于第二预设阈值时,拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。当该已编码/解码块的宽小于第一预设阈值,和/或该已编码/解码块的高小于第二预设阈值时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
在该实施例中,通过基于已编码/解码块的宽和高对已编码/解码块进行过滤,剔除宽和高均过大的已编码/解码块的运动信息,提高了已编码/解码块运动信息列表构建的精准度。
在本申请又一个实施例中,上述根据预设过滤条件对已编码/解码块进行筛选,并基于筛选后的已编码/解码块构建已编码/解码块运动信息列表,可以包括:对于当前图像块之前的任一已编码/解码块,当该已编码/解码块的运动信息的量化步长大于等于预设步长阈值时,拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表;当该已编码/解码块的运动信息的量化步长小于预设步长阈值时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
在该实施例中,已编码/解码块的运动信息的量化步长可以直观地体现已编码/解码块的运动信息的精度,因此,在构建已编码/解码块运动信息列表时,可以基于已编码/解码块的运动信息的量化步长对已编码/解码块进行筛选。
其中,已编码/解码块的运动信息的量化步长越大,已编码/解码块的运动信息的精度越低,相应地,其运动信息预测准确性越低。
举例来说，对于参数5、6、7和8，当量化步长为1时，其量化值分别为5、6、7和8；当量化步长为2时，其量化值分别为3（对应参数5和6）和4（对应参数7和8）；当量化步长为4时，其量化值均为2，即量化步长越大，会有越多参数被量化成同一个量化值，其精度相应下降。因此，量化步长与精度负相关。
相应地,在该实施例中,对于当前图像块之前的任一已编码/解码块,可以确定该已编码/解码块的运动信息的量化步长,并判断该已编码/解码块的运动信息的量化步长是否大于等于预设步长阈值(该预设步长阈值可以根据实际场景设定)。
当该已编码/解码块的运动信息的量化步长大于等于预设步长阈值时，拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。当该已编码/解码块的运动信息的量化步长小于预设步长阈值时，将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
在该实施例中,通过基于已编码/解码块的运动信息的量化步长对已编码/解码块进行过滤,剔除预测精度过低的已编码/解码块的运动信息,提高了已编码/解码块运动信息列表构建的精准度。
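为便于理解，下面的示意性Python草图将上述按残差非零系数个数、按宽高以及按量化步长进行筛选的三个判断放在同一个函数中（各阈值与字段名均为本文假设，实际可按需要单独采用其中任一条件）：

```python
# 示例性阈值（假设值）
NONZERO_COEF_THRESHOLD = 16   # 预设数量阈值
WIDTH_THRESHOLD = 64          # 第一预设阈值（宽）
HEIGHT_THRESHOLD = 64         # 第二预设阈值（高）
QP_STEP_THRESHOLD = 2         # 预设步长阈值

def allow_into_mv_list(block):
    """block为描述已编码/解码块的字典，返回其运动信息是否允许加入已编码/解码块运动信息列表。"""
    if block["nonzero_coef_count"] >= NONZERO_COEF_THRESHOLD:
        return False   # 残差非零系数过多，预测准确性差，剔除
    if block["width"] >= WIDTH_THRESHOLD and block["height"] >= HEIGHT_THRESHOLD:
        return False   # 宽和高均过大，被选中为最终预测运动信息的概率低，剔除
    if block["qp_step"] >= QP_STEP_THRESHOLD:
        return False   # 量化步长过大，运动信息精度低，剔除
    return True

def push_block(mv_list, block, max_size=6):
    """通过筛选后将该块的运动信息加入列表，并以先进先出方式维持列表长度。"""
    if allow_into_mv_list(block):
        if len(mv_list) >= max_size:
            mv_list.pop(0)
        mv_list.append(block["mv"])
```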
步骤S330、从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,该候选运动信息至少包括运动矢量。
本申请实施例中,基于筛选后的已编码/解码块构建已编码/解码块运动信息列表之后,可以从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
在本申请其中一个实施例中,上述从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,可以包括:按照各类别对应的已编码/解码块运动信息列表的优先级从高到低的顺序,依次从各已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表;其中,不同的类别对应不同 的优先级。
相应地,在该实施例中,不同类别的筛选后的已编码/解码块作为候选运动信息的优先级不同。其中,筛选后的已编码/解码块的运动信息的精度越高,筛选后的已编码/解码块作为候选运动信息的优先级越高。
当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以按照各类别对应的已编码/解码块运动信息列表的优先级从高到低的顺序,依次从各已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,以保证精度高的已编码/解码块的运动信息在运动信息候选者列表中的排序比精度低的已编码/解码块的运动信息在运动信息候选者列表中的排序靠前。
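按类别优先级从各已编码/解码块运动信息列表中依次选取候选运动信息的过程，可以用如下示意性草图表示（列表个数、优先级顺序与所需候选数量均为本文假设）：

```python
def select_by_priority(lists_by_priority, max_candidates):
    """lists_by_priority已按优先级从高到低排列；精度高的类别先被选取，从而在候选者列表中排序靠前。"""
    candidates = []
    for mv_list in lists_by_priority:
        for mv in mv_list:
            if len(candidates) >= max_candidates:
                return candidates
            if mv not in candidates:   # 简单查重，避免加入重复候选
                candidates.append(mv)
    return candidates

# 用法示例：List1（如精度较高的类别）优先于List0
list1, list0 = [(2, 1), (0, 3)], [(5, 5)]
merged = select_by_priority([list1, list0], max_candidates=6)
```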
需要说明的是，在本申请实施例中，候选运动信息并不限于运动矢量，也可以是运动矢量之外的其他编码信息，相应地，对运动信息的筛选、分类和/或排序也可以包括对运动矢量之外的其他编码信息的筛选、分类和/或排序，本申请实施例对此不做赘述。
在本申请另一个实施例中,上述从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,可以包括:确定与当前图像块匹配的已编码/解码块运动信息列表;从与当前图像块匹配的已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
在一个示例中,上述确定与当前图像块匹配的已编码/解码块运动信息列表,可以包括:根据当前图像块的形状确定当前图像块的类别;根据当前图像块的类别确定与当前图像块的类别匹配的已编码/解码块运动信息列表。
在该示例中,当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以先根据当前图像块的形状确定当前图像块的类别,进而,可以根据当前图像块的类别确定与当前图像块的类别匹配的已编码/解码块运动信息列表,并从该已编码/解码块运动信息列表中选择候选运动信息加入运动信息候选者列表。
举例来说,假设对筛选后的已编码/解码块进行分类时是基于已编码/解码块的形状将已编码/解码块划分为三个类别(分别为第一类别、第二类别和第三类别,具体实现可以参见步骤S310中的相关描述),则需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,若当前图像块的宽高比大于1,则从第一类别的已编码/解码块对应的已编码/解码块运动信息列表选择候选运动信息加入当前图像块的运动信息候选者列表;若当前图像块的宽高比小于1,则从第二类别的已编码/解码块对应的已编码/解码块运动信息列表选择候选运动信息加入当前图像块的运动信息候选者列表;若当前图像块的宽高比等于1,则从第三类别的已编码/解码块对应的已编码/解码块运动信息列表选择候选运动信息加入当前图像块的运动信息候选者列表。
在图3所示方法流程中,通过在构建已编码/解码块运动信息列表时,对已编码/解码块进行筛选,剔除被选中为最终的预测运动信息的概率过低的已编码/解码块的运动信息,提高了已编码/解码块运动信息列表构建的精准度。
本申请实施例中,在对已编码/解码块进行筛选的基础上,还可以对已编码/解码块进行分类,并根据已编码/解码块的类别构建不同的已编码/解码块运动信息列表,和/或,对所构建的已编码/解码块运动信息列表中的已编码/解码块的运动信息进行重排序,进一步提高了已编码/解码块运动信息列表构建的精准度,并可以提高视频编码的性能。
作为又一种可选示例,在本申请实施例中,为了进一步提高运动信息候选者列表构建的精确度,在上述步骤S300对已编码/解码块进行筛选的基础上,还可以对筛选后的已编码/解码块进行分类,并将不同类别的筛选后的已编码/解码块加入到不同的已编码/解码块运动信息列表。
相应地，在本申请其中一个实施例中，在上述步骤S300之后，还包括步骤S310：对筛选后的已编码/解码块进行分类，并根据筛选后的已编码/解码块的类别将筛选后的已编码/解码块加入到对应的已编码/解码块运动信息列表。
在该实施例中,按照上述实施例中描述的方式对已编码/解码块进行筛选之后,还可以根据已编码/解码块的特性,如形状、尺寸或预测模式等,对筛选后的已编码/解码块进行分类,并根据筛选后的已编码/解码块的类别将筛选后的已编码/解码块加入到对应的已编码/解码块运动信息列表。
其中,不同的类别对应不同的已编码/解码块运动信息列表。
作为该实施例的一种实施方式,上述对筛选后的已编码/解码块进行分类,可以包括:根据筛选后的已编码/解码块的形状对筛选后的已编码/解码块进行分类。
在一个示例中,根据筛选后的已编码/解码块的形状对筛选后的已编码/解码块进行分类,可以包括:当筛选后的已编码/解码块的宽高比大于1时,将筛选后的已编码/解码块划分为第一类别;和/或,当筛选后的已编码/解码块的宽高比小于1时,将筛选后的已编码/解码块划分为第二类别。
在该示例中,已编码/解码块的形状可以通过已编码/解码块的宽高比表征。
例如,可以分别通过当前图像块的宽高比与1的大小关系来表征当前图像块的不同形状。其中,若当前图像块的宽高比大于1,则当前图像块的形状为宽大于高的矩形;若当前图像块的宽高比等于1,则当前图像块的形状为正方形;若当前图像块的宽高比小于1,则当前图像块的形状为宽小于高的矩形。
相应地,在该示例中,可以根据筛选后的已编码/解码块的宽高比将筛选后的已编码/解码块划分为不同类别。在该示例中,将宽高比大于1的筛选后的已编码/解码块所属的类别称为第一类别;将宽高比小于1的筛选后的已编码/解码块所属的类别称为第二类别。
进一步地，在该示例中，对于宽高比等于1的筛选后的已编码/解码块，可以将其划分为第一类别，或，可以将其划分为第二类别，或，可以将其划分为一个新的类别。其中，当宽高比等于1的筛选后的已编码/解码块被划分为一个新的类别时，可以将宽高比等于1的筛选后的已编码/解码块所属的类别称为第三类别。
作为该实施例的另一种实施方式，上述对筛选后的已编码/解码块进行分类，可以包括：当筛选后的已编码/解码块的宽与高的乘积大于预设阈值时，将筛选后的已编码/解码块划分为第一类别；当筛选后的已编码/解码块的宽与高的乘积小于等于预设阈值时，将筛选后的已编码/解码块划分为第二类别。
在该实施方式中,可以根据筛选后的已编码/解码块的尺寸(即宽与高的乘积)将筛选后的已编码/解码块划分为不同类别。
在一个示例中,可以将宽与高的乘积大于预设阈值(该预设阈值可以根据实际场景设定)的筛选后的已编码/解码块所属的类别称为第一类别;将宽与高的乘积小于等于预设阈值的筛选后的已编码/解码块所属的类别称为第二类别。
需要说明的是，在本申请实施例中，当根据筛选后的已编码/解码块的宽与高的乘积对筛选后的已编码/解码块进行分类时，也可以通过两个或两个以上的预设阈值将宽与高的乘积划分为三个或三个以上的区间，并为每个区间分别划分一个类别。
举例来说,假设预设阈值包括Ta和Tb(Tb>Ta),则可以将宽与高的乘积小于等于Ta的已编码/解码块划分为第一类别,将宽与高的乘积大于Ta,且小于等于Tb的已编码/解码块划分为第二类别,将宽与高的乘积大于Tb的已编码/解码块划分为第三类别。
此外，在本申请实施例中，若对已编码/解码块进行筛选时是基于已编码/解码块的宽和高对已编码/解码块进行筛选（即上述实施例中描述的，将宽大于等于第一预设阈值且高大于等于第二预设阈值的已编码/解码块剔除），则根据已编码/解码块的尺寸对筛选后的已编码/解码块进行分类时，分类使用的阈值需要小于第一预设阈值与第二预设阈值的乘积。
作为该实施例又一种实施方式,上述对筛选后的已编码/解码块进行分类,可以包括:根据筛选后的已编码/解码块的预测模式,对筛选后的已编码/解码块进行分类。
在一个示例中,上述根据筛选后的已编码/解码块的预测模式对筛选后的已编码/解码块进行分类,包括:当筛选后的已编码/解码块的预测模式为合并模式时,将筛选后的已编码/解码块划分为第一类别;当筛选后的已编码/解码块的预测模式为AMVP模式时,将筛选后的已编码/解码块划分为第二类别。
在该示例中，可以将预测模式为合并模式的筛选后的已编码/解码块所属的类别称为第一类别；将预测模式为AMVP模式的筛选后的已编码/解码块所属的类别称为第二类别。
需要说明的是,在本申请实施例中,根据筛选后的已编码/解码块的预测模式对筛选后的已编码/解码块进行分类时,并不限于划分上述两个类别,还可以将其他预测模式的筛选后的已编码/解码块划分为其他类别(如第三类别、第四类别等),其具体实现在此不做赘述。
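上述按形状、尺寸或预测模式进行分类的几种方式可以用如下示意性Python草图汇总（类别编号与各阈值均为本文假设的取值）：

```python
def classify_by_shape(width, height):
    """按宽高比分类：大于1为第一类别，小于1为第二类别，等于1此处示例性地划为第三类别。"""
    if width > height:
        return 1
    if width < height:
        return 2
    return 3

def classify_by_size(width, height, thresholds=(1024, 4096)):
    """按宽与高的乘积分类，thresholds即文中的Ta、Tb（假设值），三个区间对应三个类别。"""
    area = width * height
    ta, tb = thresholds
    if area <= ta:
        return 1
    if area <= tb:
        return 2
    return 3

def classify_by_mode(pred_mode):
    """按预测模式分类：合并模式为第一类别，AMVP模式为第二类别，其余模式示例性地归入第三类别。"""
    if pred_mode == "merge":
        return 1
    if pred_mode == "amvp":
        return 2
    return 3
```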
进一步地，运动信息候选者列表中最终选中的预测运动信息在运动信息候选者列表中排序靠前时，可以减少进行索引值编码时所需比特位，提高视频编码的性能；此外，在进行运动信息预测时，运动信息候选者列表中最终选中的预测运动信息在运动信息候选者列表中排序靠前时可以减少进行预测运动信息选择的消耗，即在同样编码索引的消耗下，将相关性高的排在前面有利于提高视频编码的性能。因此，在构建已编码/解码块运动信息列表时，可以对已编码/解码块运动信息列表中的已编码/解码块运动信息进行重排序，将被选中为最终的预测信息的概率更高的已编码/解码块的运动信息排在已编码/解码块运动信息列表的后面（从已编码/解码块运动信息列表中选择候选运动信息时，按照从后往前的顺序从已编码/解码块运动信息列表中选择已编码/解码块的运动信息），以便被选中为最终的预测信息的候选运动信息能够尽量在运动信息候选者列表中排序靠前。
作为又一种可选示例,在本申请实施例中,在基于筛选后的已编码/解码块构建已编码/解码块运动信息列表(对已编码/解码块进行筛选后直接构建已编码/解码块运动信息列表,或者对已编码/解码块进行筛选,并对筛选后的已编码/解码块进行分类后,构建对应不同类别的多个已编码/解码块运动信息列表)之后,可以对已编码/解码块运动信息列表中的筛选后的已编码/解码块的运动信息进行重排序。
其中,当构建了对应不同类别的多个已编码/解码块运动信息列表时,可以分别对各已编码/解码块运动信息列表中的筛选后的已编码/解码块的运动信息进行重排序。
在本申请其中一个实施例中，在上述步骤S300之后，还包括步骤S320：基于筛选后的已编码/解码块的残差系数对筛选后的已编码/解码块的运动信息进行重排序。
可选的,步骤S320也可以在步骤S310之后,则步骤S320也可替换为:基于分类后的已编码/解码块的残差系数对筛选后的已编码/解码块的运动信息进行重排序。
在该实施例中,已编码/解码块的残差系数中的非零个数越少,已编码/解码块的运动信息的预测准确性越高;已编码/解码块的运动信息的预测准确性越高,其被选中为最终的预测运动信息的概率越高。因此,可以根据筛选后的已编码/解码块的残差系数中非零个数对筛选后的已编码/解码块的运动信息进行重排序。
在一个示例中,上述基于筛选后的已编码/解码块的残差系数对筛选后的已编码/解码块的运动信息进行重排序,可以包括:按照残差系数的非零系数个数从多到少的顺序对筛选后的已编码/解码块的运动信息进行重排序。
在该示例中,从已编码/解码块运动信息列表中选择候选运动信息时,通常是按照从后往前的顺序选择,因此,在进行重排序时可以将被选中为最终的预测运动信息的概率高的已编码/解码块的运动信息排在已编码/解码块运动信息列表的后列。
相应地,在该示例中,对于基于筛选后的已编码/解码块的运动信息构建的已编码/解码块运动信息列表,可以按照残差系数的非零个数从多到少的顺序对筛选后的已编码/解码块运动信息进行重排序,即残差系数的非零个数最多的筛选后的已编码/解码块的运动信息排在已编码/解码块运动信息列表的最前,残差系数的非零个数最少的筛选后的已编码/解码块的运动信息排在已编码/解码块运动信息列表的最后。
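该重排序可以用如下示意性草图表示（字段名为本文假设）：按残差非零系数个数从多到少排序后，非零系数最少、预测最准确的运动信息位于列表最后，在"从后往前"选择时最先被选中。

```python
def reorder_by_residual(entries):
    """entries中每项包含运动信息mv及其来源块残差系数的非零系数个数nonzero，按非零个数从多到少排序。"""
    return sorted(entries, key=lambda e: e["nonzero"], reverse=True)

# 用法示例：重排后nonzero最少的条目位于列表最后，选择时按从尾到头的顺序被优先选中
entries = [{"mv": (2, 1), "nonzero": 5}, {"mv": (0, 3), "nonzero": 12}, {"mv": (4, 4), "nonzero": 1}]
reordered = reorder_by_residual(entries)   # nonzero依次为12、5、1
```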
在本申请另一个实施例中,对已编码/解码块运动信息列表中的筛选后的已编码/解码块的运动信息进行重排序,可以包括:基于当前图像块的形状以及筛选后的已编码/解码块与当前图像块的相对位置,对筛选后的已编码/解码块的运动信息进行重排序。
在该实施例中,对于不同形状的数据块,对该数据块进行运动信息预测时,该数据块的不同位置的周围块与该数据块的相关性不完全相同。
当对数据块进行运动信息预测时,相关性越高的候选运动信息被选中为最终的预测运动信息对编码效果越有利,因此,在基于筛选后的已编码/解码块的运动信息构建已编码/解码块运动信息列表之后,可以基于当前图像块的形状以及筛选后的已编码/解码块与当前图像块的相对位置,对筛选后的已编码/解码块的运动信息进行重排序。
在一个示例中,上述基于当前图像块的形状以及筛选后的已编码/解码块与当前图像块的相对位置,对筛选后的已编码/解码块的运动信息进行重排序,可以包括:当当前图像块的宽高比大于1时,按照当前图像块左侧的筛选后的已编码/解码块在前,当前图像块上侧的筛选后的已编码/解码块在后的顺序,对筛选后的已编码/解码块的运动信息进行重排序。
在该示例中，当数据块的宽高比大于1时，数据块的上侧的周围块与数据块的相关性比数据块左侧的周围块与数据块的相关性高，而将运动信息候选者列表中相关性高的候选块的运动信息排在前面，一方面可以减少编码索引的开销，另一方面可以提高相关度高的候选块的运动信息被选为最终的预测运动信息的概率，进而，可以提高视频编码的性能。
相应地,在该示例中,当当前图像块的宽高比大于1时,可以按照当前图像块左侧的筛选后的已编码/解码块在前,当前图像块的上侧的筛选后的已编码/解码块在后的顺序,对筛选后的已编码/解码块的运动信息进行重排序,以便从已编码/解码块运动信息列表中选择候选运动信息时,可以先选中当前图像块的上侧的筛选后的已编码/解码块的运动信息,进而,可以使当前图像块的上侧的筛选后的已编码/解码块的运动信息在运动信息候选者列表中的排序比当前图像块的左侧的筛选后的已编码/解码块的运动信息靠前。
在另一示例中,上述基于当前图像块的形状以及筛选后的已编码/解码块与当前图像块的相对位置,对筛选后的已编码/解码块的运动信息进行重排序,可以包括:当当前图像块的宽高比小于1时,按照当前图像块上侧的筛选后的已编码/解码块在前,当前图像块左侧的筛选后的已编码/解码块在后的顺序,对筛选后的已编码/解码块的运动信息进行重排序。
在该示例中,当数据块的宽高比小于1时,数据块的左侧的周围块与数据块的相关性比数据块上侧的周围块与数据块的相关性高。
因此,在该示例中,当当前图像块的宽高比小于1时,可以按照当前图像块上侧的筛选后的已编码/解码块在前,当前图像块的左侧的筛选后的已编码/解码块在后的顺序,对筛选后的已编码/解码块的运动信息进行重排序,以便从已编码/解码块运动信息列表中选择候选运动信息时,可以先选中当前图像块的左侧的筛选后的已编码/解码块的运动信息,进而,可以使当前图像块的左侧的筛选后的已编码/解码块的运动信息在运动信息候选者列表中的排序比当前图像块的上侧的筛选后的已编码/解码块的运动信息在运动信息候选者列表中的排序靠前。
此外,在该实施例中,对于当前图像块的宽高比等于1的情况,可以按照上述示例中描述的当前图像块的宽高比大于1的情况下的重排序方式实现对已编码/解码块运动信息列表中的筛选后的已编码/解码块运动信息的重排序,或者,可以按照上述示例 中描述的当前图像块的宽高比小于1的情况下的重排序方式实现对已编码/解码块运动信息列表中的筛选后的已编码/解码块运动信息的重排序。
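基于当前图像块形状及相对位置的重排序可以用如下示意性草图表示（位置标记与字段名为本文假设；按照前文约定，选择候选时从列表尾部向头部进行，因此相关性更高一侧的已编码/解码块的运动信息排在列表后面）：

```python
def reorder_by_position(entries, cur_width, cur_height):
    """entries中每项含运动信息mv及其来源块相对当前图像块的位置position（'left'或'above'）。"""
    left = [e for e in entries if e["position"] == "left"]
    above = [e for e in entries if e["position"] == "above"]
    others = [e for e in entries if e["position"] not in ("left", "above")]
    if cur_width >= cur_height:
        # 宽高比大于1（等于1的情况此处示例性地并入该分支）：上侧块相关性更高，排在列表后面
        return others + left + above
    # 宽高比小于1：左侧块相关性更高，排在列表后面
    return others + above + left
```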
在一实施例中,当按照步骤S310中描述的方式对筛选后的已编码/解码块进行了分类,并根据筛选后的已编码/解码块的类别将筛选后的已编码/解码块加入到对应的已编码/解码块运动信息列表之后,可以分别确定各已编码/解码块运动信息列表的优先级高低。
举例来说,以步骤S310中描述的根据筛选后的已编码/解码块的预测模式对筛选后的已编码/解码块进行分类为例。由于AMVP模式的已编码/解码块的运动信息的精度高于合并模式的已编码/解码块的运动信息的精度,因此,可以确定预测模式为AMVP模式的筛选后的已编码/解码块的运动信息所在的已编码/解码块运动信息列表的优先级高于预测模式为合并模式的筛选后的已编码/解码块的运动信息所在的已编码/解码块运动信息列表的优先级。
在该实施例中,当按照步骤S310中描述的方式对筛选后的已编码/解码块进行了分类,并根据筛选后的已编码/解码块的类别将筛选后的已编码/解码块加入到对应的已编码/解码块运动信息列表之后,当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以先确定与当前图像块匹配的已编码/解码块运动信息列表,然后从与当前图像块匹配的已编码/解码块运动信息列表中选择候选运动信息加入所述运动信息候选者列表。
本申请实施例中,对于合并模式或AMVP模式等模式的运动信息预测,在基于已编码/解码块的运动信息构建已编码/解码块运动信息列表时,可以对已编码/解码块进行分类,并分别构建对应不同类别的已编码/解码块的已编码/解码块运动信息列表,以提高已编码/解码块运动信息列表构建的精准度。
请参见图4，为本申请实施例提供的一种运动信息候选者列表构建方法的流程示意图，如图4所示，该运动信息候选者列表构建方法可以包括以下步骤。
步骤S400、对当前图像块之前的已编码/解码块进行分类。
步骤S410、根据已编码/解码块的类别将已编码/解码块加入到对应的已编码/解码块运动信息列表;其中,不同的类别对应不同的已编码/解码块运动信息列表。
步骤S420、从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,该候选运动信息至少包括运动矢量。
本申请实施例中，对当前图像块之前的已编码/解码块进行分类，并构建对应不同类别的已编码/解码块的已编码/解码块运动信息列表的具体实现可以参见图3所示方法流程中的相关描述，区别在于图3所示方法流程中进行分类的已编码/解码块由筛选后的已编码/解码块替换为未进行筛选的已编码/解码块，对进行分类的已编码/解码块进行分类的方法与步骤S310的相同，本申请实施例在此不做赘述。
需要说明的是,在本申请实施例中,对已编码/解码块进行分类之后,在构建已编码/解码块的已编码/解码块运动信息列表时,还可以分别对各类别的已编码/解码块进行筛选,并基于筛选后的已编码/解码块构建已编码/解码块运动信息列表,其具体实现 可以参见图3所示方法流程中的相关描述,本申请实施例在此不做赘述。
此外,对于所构建多个不同的已编码/解码块运动信息列表(包括进行了已编码/解码块的筛选和未进行已编码/解码块的筛选两种情况),还可以对已编码/解码块运动信息列表中的已编码/解码块的运动信息进行重排序,其具体实现可以参见图3所示方法流程中的相关描述,本申请实施例在此不做赘述。
需要说明的是，在本申请实施例中，候选运动信息并不限于包括运动矢量，也可以是运动矢量之外的其他编码信息，相应地，对运动信息的分类也可以包括对运动矢量之外的其他编码信息的分类，本申请实施例对此不做赘述。
本申请实施例中,对于合并模式或AMVP模式等模式的运动信息预测,在基于已编码/解码块的运动信息构建已编码/解码块运动信息列表时,可以对已编码/解码块运动信息列表中的已编码/解码块的运动信息进行重排序,以尽量保证被选中为最终预测运动信息的已编码/解码块的运动信息在运动信息候选者列表中排序靠前,以提高编码性能。
请参见图5，为本申请实施例提供的一种运动信息候选者列表构建方法的流程示意图，如图5所示，该运动信息候选者列表构建方法可以包括以下步骤：
步骤S500、根据当前图像块之前的已编码/解码块的运动信息，构建已编码/解码块运动信息列表。
本申请实施例中，构建已编码/解码块运动信息列表时，可以直接将所有的已编码/解码块的运动信息均加入同一个已编码/解码块运动信息列表，或者，也可以先对已编码/解码块进行筛选和/或分类后，再基于筛选和/或分类后的已编码/解码块构建已编码/解码块运动信息列表，其具体实现在此不做赘述。
步骤S510、对已编码/解码块运动信息列表中的已编码/解码块的运动信息进行重排序。
步骤S520、从重排序后的已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,该候选运动信息至少包括运动矢量。
本申请实施例中,对已编码/解码块运动信息列表中的已编码/解码块的运动信息进行重排序的具体实现可以参见图3所示方法流程中的相关描述,本申请实施例在此不做赘述。
需要说明的是,在本申请实施例中,对于合并模式或AMVP模式,在构建最终的运动信息候选者列表时,对已有运动信息进行变换得到的候选运动信息,或,从已编码/解码块运动信息列表中选择的候选运动信息可以排在空域候选者(若存在)和时域候选者(若存在)的后面,即当空域候选者和时域候选者的数量不满足要求时,可以在最终的运动信息候选者列表中加入对已有运动信息进行变换得到的候选运动信息,或,从已编码/解码块运动信息列表中选择的候选运动信息。
进一步地,若在最终的运动信息候选者列表中加入对已有运动信息进行变换得到的候选运动信息,或,从已编码/解码块运动信息列表中选择的候选运动信息之后,候选运动信息的数量仍不满足要求,还可以进一步在最终的运动信息候选者列表中加入组 合候选者(对于合并模式)以及零运动信息。
需要说明的是，在本申请实施例中，候选运动信息并不限于包括运动矢量，也可以是运动矢量之外的其他编码信息，相应地，对运动信息的排序也可以包括对运动矢量之外的其他编码信息的排序，本申请实施例对此不做赘述。
本申请实施例中，对于Affine模式的运动信息预测，也可以基于已编码/解码块的运动信息构建已编码/解码块运动信息列表，并从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表，以增加候选者样本的丰富性。
其中,Affine模式可以包括Affine Merge模式或Affine AMVP模式。
为了使本领域技术人员更好地理解Affine模式的运动信息预测,下面先对Affine模式的参数模型进行简单说明。
参见公式(1),一般Affine模式的参数模型,由a,b,c,d,e,f六个参数构成,
[公式(1)：原文以图片形式给出，为由参数a、b、c、d、e、f构成的仿射映射，将当前帧上的点(x,y)映射到参考帧上对应的点(x',y')]
其中(x,y)是当前帧上的点;(x’,y’)是对应的参考帧上的点;a,b,c,d,e,f是参数模型的6个参数。
为了导出参数模型的6个参数，需要3组已知的(x,y)和(x’,y’)对，即三个运动矢量以及运动矢量的起始点位置。假设这三个运动矢量为v0,v1和v2，三个运动矢量的起始点分别为(0,0),(S,0)和(0,S)，即分别为左上角、右上角和左下角的坐标；其中，S为当前PU的边长。
联立三组方程组,从而计算得到a,b,c,d,e,f这6个参数的值,如公式(2)所示:
[公式(2)：原文以图片形式给出，为将v0、v1、v2及其起始点(0,0)、(S,0)、(0,S)代入公式(1)联立求解后得到的a,b,c,d,e,f六个参数的表达式]
将其代入公式(1),可以得到如公式(3)所示的6参数模型下的运动矢量导出方程:
[公式(3)：原文以图片形式给出，为将公式(2)求得的6个参数代回公式(1)后得到的6参数模型下的运动矢量导出方程]
为了简化参数模型,帧间预测块被分为等大小的若干小区域,每个小区域(即子块)内运动速度是一致的,而每个小区域的运动补偿模型仍是平面平动模型(图像块在图像平面内只有平移,不改变形状、大小,因此对子块运动的描述仍可参数化为一个运动矢量)。
当物体的旋转运动的旋转轴垂直于图像平面时，仿射物体的任意两点的尺度缩放比例是一致的（任意两条直线构成的夹角大小保持不变）。在此限制下，a,b,c,d,e,f这6个参数的仿射运动退化为4参数的仿射运动模型，即a,b,c,d这4个参数之间存在一定的关系。此时，仅需2组已知的(x,y)和(x’,y’)对，即可推导出这4个参数。
假设这两个运动矢量为v0和v1,两个运动矢量的起始点分别为(0,0)和(S,0),即分别是左上角和右上角的坐标。联立两组方程组,从而计算得到a,b,c,d四个参数的值,如公式(4)所示:
[公式(4)：原文以图片形式给出，为将v0、v1及其起始点(0,0)、(S,0)代入后联立求解得到的a,b,c,d四个参数的表达式]
将这4个参数的值代入公式(1),即可得到如公式(5)所示的4参数模型下的运动矢量导出方程:
[公式(5)：原文以图片形式给出，为将公式(4)求得的4个参数代回公式(1)后得到的4参数模型下的运动矢量导出方程]
请参见图6A,为本申请实施例提供的一种运动信息候选者列表构建方法的流程示意图,如图6A所示,该运动信息候选者选择列表构建方法可以包括以下步骤。
步骤S600、构建已编码/解码块运动信息列表,该已编码/解码块运动信息列表中包括当前图像块之前的已编码/解码块的运动信息。
本申请实施例中,为了增加候选者样本的丰富性,可以基于已编码/解码块的运动信息构建已编码/解码块运动信息列表,并从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
步骤S610、当当前图像块的预测模式为Affine模式时,从已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表。
本申请实施例中,对于Affine模式的运动信息预测,也可以从已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表,以增加Affine模式的运动信息预测的候选者样本的丰富性。
其中,对于Affine模式的运动信息预测,运动信息可以包括运动矢量,参考帧索引,运动方向以及参数模型。
其中，当已编码/解码块的参数模型为4参数模型时，可以根据已编码/解码块的控制点（包括左上角控制点和右上角控制点）的运动信息确定当前图像块的控制点的运动信息，并根据当前图像块的控制点的运动信息（假设为V0(Vx0, Vy0)和V1(Vx1, Vy1)），利用公式(5)得到当前图像块各子块的运动信息；其中，4参数模型下的运动信息可以表征MV在平面内旋转的角度和速度。
当已编码/解码块的参数模型为6参数模型时，可以根据已编码/解码块的控制点（包括左上角控制点、右上角控制点和左下角控制点）的运动信息确定当前图像块的控制点的运动信息，并根据当前图像块的控制点的运动信息（假设为V0(Vx0, Vy0)、V1(Vx1, Vy1)以及V2(Vx2, Vy2)），利用公式(3)得到当前图像块各子块的运动信息。其中，6参数模型下的运动信息可以表征MV在立体空间旋转的角度、速度和方向。
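作为参考，下面给出基于控制点运动矢量推导各子块运动矢量的一种常见实现形式的Python草图。该形式是业界常用的4参数/6参数仿射推导方式，此处仅作示意，未必与本申请公式(3)、公式(5)的具体表达逐项对应；其中W、H为当前图像块的宽和高，(x, y)为子块相对左上角控制点的坐标：

```python
def affine_subblock_mv_4param(v0, v1, W, x, y):
    """4参数模型：由左上角控制点运动矢量v0与右上角控制点运动矢量v1推导(x, y)处子块的运动矢量。"""
    dvx = (v1[0] - v0[0]) / W
    dvy = (v1[1] - v0[1]) / W
    return (dvx * x - dvy * y + v0[0], dvy * x + dvx * y + v0[1])

def affine_subblock_mv_6param(v0, v1, v2, W, H, x, y):
    """6参数模型：再引入左下角控制点运动矢量v2，水平与垂直方向的变化率相互独立。"""
    mv_x = (v1[0] - v0[0]) / W * x + (v2[0] - v0[0]) / H * y + v0[0]
    mv_y = (v1[1] - v0[1]) / W * x + (v2[1] - v0[1]) / H * y + v0[1]
    return (mv_x, mv_y)
```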
举例来说,在本申请实施例中,针对Affine模式的运动信息预测构建的运动信息候选者列表除了可以包括空域候选者和零运动信息之外,还可以包括从已编码/解码块运动信息列表中选择的候选运动信息,其示意图可以如图6B所示。
本申请其中一个实施例中,上述从已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表,包括:从预测模式为Affine Merge模式的已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表;或,从预测模式为Affine AMVP模式的已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表。
在该实施例中,考虑到Affine模式的运动信息预测与合并模式或AMVP模式的运动信息预测使用的运动信息不同,因此,对于Affine模式的运动信息预测,当需要从已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以从预测模式为Affine模式的已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表。
相应地,在该实施例中,当当前图像块的预测模式为Affine模式时,可以从预测模式为Affine Merge模式的已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表;或者,可以从预测模式为Affine AMVP模式的已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表。
需要说明的是,在本申请实施例中,对于Affine模式的运动信息预测,当从已编码/解码块的运动信息中选择候选运动信息时,并不限于从预测模式为Affine模式的已编码/解码块的运动信息中选择,也可以选择非Affine模式的已编码/解码块的运动信息。
其中,当选择的候选运动信息为非Affine模式的已编码/解码块的运动信息时,可以使用该已编码/解码块的左上角和右上角的运动信息(对于4参数模型)或左上角、右上角和左下角的运动信息(对于6参数模型),其具体实现在此不做赘述。
进一步地,在本申请实施例中,为了提高已编码/解码块运动信息列表构建的精准度,当基于预测模式为Affine模式的已编码/解码块构建已编码/解码块运动信息列表时,可以对预测模式为Affine模式的已编码/解码块进行分类,并根据预测模式为Affine模式的已编码/解码块的类别将预测模式为Affine模式的已编码/解码块加入到对应的已编码/解码块运动信息列表。其中,不同的类别对应不同的已编码/解码块运动信息列表。
在一个示例中,对预测模式为Affine模式的已编码/解码块进行分类时,可以根据预测模式为Affine模式的已编码/解码块的参数模型对预测模式为Affine模式的已编码/解码块进行分类。
例如,当预测模式为Affine模式的已编码/解码块的参数模型为2参数模型时,可以将预测模式为Affine模式的已编码/解码块划分为第一类别;当预测模式为Affine模式的已编码/解码块的参数模型为4参数模型时,可以将预测模式为Affine模式的已编码/解码块划分为第二类别;当预测模式为Affine模式的已编码/解码块的参数模型为6参数模型时,可以将预测模式为Affine模式的已编码/解码块划分为第三类别。
在本申请其中一个实施例中,上述从已编码/解码块的运动信息中选择候选运动信息加入当前图像块的运动信息候选者列表,可以包括:按照各类别对应的已编码/解码块运动信息列表的优先级从高到低的顺序,依次从各已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表;其中,不同的类别对应不同的优先级。
在该实施例中,若对预测模式为Affine模式的已编码/解码块进行分类,并分别构建了对应不同类别的多个已编码/解码块运动信息列表,则当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以按照各类别对应的已编码/解码块运动信息列表的优先级从高到低的顺序,依次从各已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,以保证精度高的已编码/解码块的运动信息在运动信息候选者列表中的排序比精度低的已编码/解码块的运动信息在运动信息候选者列表中的排序靠前。
举例来说,当按照Affine模式的已编码/解码块的参数模型对Affine模式的已编码/解码块进行了分类(参见上述实施例中的相关描述)时,可以按照List2(对应6参数模型)、List1(对应4参数模型)和List0(对应2参数模型)从先到后的顺序,从各已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
在本申请另一个实施例中,上述从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,可以包括:确定与当前图像块匹配的已编码/解码块运动信息列表;从与当前图像块匹配的已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
在该实施例中,当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以先确定与当前图像块匹配的已编码/解码块运动信息列表,并从与当前图像块匹配的已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
举例来说,当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以分别确定当前图像块的空域候选者中2参数模型、4参数模型和6参数模型的空域候选者的数量,并从空域候选者的数量最多的参数模型对应的已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
例如,假设当前图像块的空域候选者中4参数模型的空域候选者的数量最多,则可以从List1(对应4参数模型)中选择候选运动信息加入当前图像块的运动信息候选者列表。
可见,在图6A所示方法流程中,对于Affine模式的运动信息预测,也可以通过构建已编码/解码块运动信息列表,并从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表,增加了候选者样本的丰富性。
需要说明的是,在本申请实施例中,构建已编码/解码块运动信息列表时,可以按照先进先出(First-In-First-Out,简称FIFO)的方式对已编码/解码块运动信息列表中的已编码/解码块的运动信息进行更新,即当已编码/解码块运动信息列表中已编码/解码块的运动信息数量达到预设最大数量,且有新的已编码/解码块的运动信息需要加入已编码/解码块运动信息列表时,可以将已编码/解码块运动信息列表中最先加入的已编码/解码块运动信息删除,并将该新的已编码/解码块的运动信息加入。
但应该认识到,上述FIFO方式仅仅是对已编码/解码块运动信息列表中的已编码/解码块的运动信息进行更新的一种具体实现方式,而并不是对本申请保护范围的限定,即本申请实施例中也可以通过其他方式实现对已编码/解码块运动信息列表中的已编码/解码块的运动信息的更新,其具体实现在此不做赘述。
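以先进先出方式维护已编码/解码块运动信息列表的过程可以用如下示意性草图表示（列表最大长度为假设值）：

```python
from collections import deque

# deque在达到maxlen后自动丢弃最早加入的成员，即先进先出更新
hmvp_list = deque(maxlen=6)
for mv in [(1, 0), (0, 2), (3, 3), (5, 1), (2, 2), (4, 0), (7, 7)]:
    hmvp_list.append(mv)   # 第7个运动矢量加入时，最先加入的(1, 0)被自动移除
```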
为了使本领域技术人员更好地理解本申请实施例提供的技术方案,下面结合具体实例对本申请实施例提供的技术方案进行说明。
实施例一
在该实施例中,为了增加候选者样本的丰富性,并提高运动信息候选者选择的灵活性,在构建运动信息候选者列表时,可以通过对已有运动信息在指定方向上进行伸缩的方式增加新的候选运动信息。
其中,已有运动信息可以包括空域候选者的运动信息,时域候选者的运动信息,已编码/解码块的运动信息。
以合并模式的运动信息预测为例,在构建运动信息候选者列表时,可以在运动信息候选者列表中已有候选运动信息的基础上,对运动信息候选者列表中已有候选运动信息在指定方向上进行伸缩,以得到新的候选运动信息。
对于合并模式的运动信息预测,构建的运动信息候选者列表中除了可以包括空域候选者、时域候选者、组合候选者以及零运动信息之外,还可以包括对已有运动信息在指定方向上伸缩后得到的候选运动信息。
举例来说，请参见图7A，对于运动矢量701(mx,my)，可以沿着运动矢量的方向对该运动矢量进行伸缩，伸缩后的运动矢量702为(mx+delta_mv_x,my+delta_mv_y)。其中，delta_mv_x为x方向上的伸缩幅度，delta_mv_y为y方向上的伸缩幅度。
需要说明的是,在本申请实施例中,对已有运动信息进行伸缩并不限于沿着运动矢量的方向进行伸缩,也可以是其他方向,本申请实施例对此不做赘述。
实施例二
在该实施例中,对已有运动信息进行伸缩时,可以进行帧级、slice级或行级的伸缩。
以帧级为例,可以以帧为单位,确定某一帧中已有运动信息是否需要进行伸缩以及进行伸缩时的幅度。其中,当伸缩幅度为0时,可以确定为不需要进行伸缩。
举例来说,请参见图7B,对于任一已有运动信息,可以根据该已有运动信息所属帧的上一帧的帧复杂度以及运动剧烈程度来确定对该已有运动信息进行伸缩的幅度。
其中,可以通过零运动矢量的占比来表征帧复杂度以及运动剧烈程度。零运动矢量占比(帧中零运动矢量的数量与运动矢量的总数的比值)越高,帧复杂度越低,运动剧烈程度越低;零运动矢量占比越低,帧复杂度越高,运动剧烈程度越高。帧复杂度越高,运动剧烈程度越高,伸缩的幅度越大;帧复杂度越低,运动剧烈程度越低,伸缩的幅度越小。
如图7B所示，假设帧1的帧复杂度高，且运动剧烈，所确定的伸缩的幅度为delta_mv0；帧2的帧复杂度极高，且运动极剧烈，所确定的伸缩的幅度为delta_mv1；帧3的帧复杂度低，且运动较弱，所确定的伸缩的幅度为delta_mv2，则delta_mv2<delta_mv0<delta_mv1。
实施例三
在该实施例中,对已有运动信息进行伸缩时,可以进行块级的伸缩。
举例来说,请参见图7C,对于任一数据块(可以为空域候选者、时域候选者或已编码/解码块),可以根据该块的相邻块的运动信息的相似度,来确定是否需要对该块进行伸缩以及进行伸缩的幅度。其中,当伸缩幅度为0时,可以确定为不需要进行伸缩。
其中,相邻块的运动信息的相似度越高,所确定的伸缩的幅度越小;相邻块的运动信息的相似度越低,所确定的伸缩的幅度越大。
如图7C所示，假设数据块1的不同位置的相邻块的相似度极低，所确定的伸缩的幅度为delta_mv0，其中相邻块为A0、A1、B0、B1、B2；数据块2的不同位置的相邻块的相似度低，所确定的伸缩的幅度为delta_mv1，其中相邻块的示意在图中省略；数据块3的不同位置的相邻块的相似度高，所确定的伸缩的幅度为delta_mv2，其中相邻块的示意在图中省略；则delta_mv2<delta_mv1<delta_mv0。
实施例四
在该实施例中,为了增加候选者样本的丰富性,并提高运动信息候选者选择的 灵活性,在构建运动信息候选者列表时,可以通过对已有运动信息进行加权的方式增加新的候选运动信息。其中,已有运动信息可以包括空域候选者的运动信息,时域候选者的运动信息,已编码/解码块的运动信息。
对已有运动信息进行加权时各已有运动信息的加权比例可以根据已有运动信息的来源块(可以包括空域候选者、时域候选者或已编码/解码块)的特点自适应调整。
举例来说，假设已有运动信息的来源块A和来源块B均参考同一帧，来源块A的运动矢量为(amv_x,amv_y)，来源块B的运动矢量为(bmv_x,bmv_y)。来源块A的加权系数为W0，来源块B的加权系数为W1，则可以通过如下的公式(6)和(7)计算加权后的运动矢量(Mv_x,Mv_y)：
Mv_x=W0*amv_x+W1*bmv_x     (6)
Mv_y=W0*amv_y+W1*bmv_y     (7)
实施例五
在该实施例中,为了增加候选者样本的丰富性,可以基于已编码/解码块的运动信息构建已编码/解码块运动信息列表,并从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
为了提高构建已编码/解码块运动信息列表的精准度,在构建已编码/解码块运动信息列表时,可以先对已编码/解码块进行筛选,并基于筛选后的已编码/解码块的运动信息构建已编码/解码块运动信息列表。
在该实施例中,已编码/解码块的残差系数中非零系数个数直观体现预测的准确性,因此,可以基于已编码/解码块的残差系数中非零系数个数对已编码/解码块进行筛选,以剔除预测准确性过低的已编码/解码块。
请参见图7D,对于任一已编码/解码块,在将该已编码/解码块加入已编码/解码块运动信息列表时,可以判断该已编码/解码块的残差系数中的非零系数个数是否大于等于预设数量阈值;若是,则拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表;否则,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
实施例六
在该实施例中,已编码/解码块的宽和高均过大时,该已编码/解码块的运动信息被选中为最终的预测运动信息的概率过低,因此,可以基于已编码/解码块的宽高对已编码/解码块进行筛选,以剔除被选中为最终的预测运动信息的概率过低的已编码/解码块。
请参见图7E,对于任一已编码/解码块,在将该已编码/解码块加入已编码/解码块运动信息列表时,可以判断该已编码/解码块的宽是否大于等于第一预设阈值(以64为例),以及高是否大于等于第二预设阈值(以64为例);若该已编码/解码块的宽大于等于64,且高大于等于64,则拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表;否则(该已编码/解码块的宽小于64和/或高小于64),将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
实施例七
在该实施例中,已编码/解码块的量化步长可以直观体现已编码/解码块的运动信息的精度,因此,可以基于已编码/解码块的运动信息的量化步长对已编码/解码块进行筛选,以剔除精度过低的已编码/解码块的运动信息。
请参见图7F,对于任一已编码/解码块,在将该已编码/解码块加入已编码/解码块运动信息列表时,可以判断该已编码/解码块的运动信息的量化步长是否大于等于预设步长阈值(以2为例);若是,则拒绝将该已编码/解码块的运动信息加入已编码/解码块运动信息列表;否则,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表。
实施例八
在该实施例中,为了提高已编码/解码块运动信息列表构建的精准度,在构建已编码/解码块运动信息列表时,可以先对已编码/解码块进行分类,并将不同类别的已编码/解码块加入不同的已编码/解码块运动信息列表。
在该实施例中,以根据已编码/解码块的形状对已编码/解码块进行分类为例。
请参见图7G,对于任一已编码/解码块,在将该已编码/解码块加入已编码/解码块运动信息列表时,可以确定该已编码/解码块的形状(以宽高比为例),当该已编码/解码块的宽高比大于等于1时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表1(可以称为List0);当该已编码/解码块的宽高比小于1时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表2(可以称为list1)。
在该实施例中,当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以根据当前图像块的形状确定匹配的已编码运动信息列表,并从匹配的已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
请参见图7H，当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时，可以确定当前图像块的宽高比，若当前图像块的宽高比大于等于1，则从List0中选择候选运动信息加入当前图像块的运动信息候选者列表；若当前图像块的宽高比小于1，则从List1中选择候选运动信息加入当前图像块的运动信息候选者列表。
实施例九
在该实施例中,可以根据已编码/解码块的尺寸(以宽与高的乘积为例)对已编码/解码块进行分类。
请参见图7I,对于任一已编码/解码块,在将该已编码/解码块加入已编码/解码块运动信息列表时,可以确定该已编码/解码块的宽与高的乘积,并判断该已编码/解码块的宽与高的乘积是否大于等于预设阈值(以2048为例),若是,则将该已编码/解码块的运动信息加入已编码/解码块运动信息列表1(可以称为List0);否则,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表2(可以称为list1)。
在该实施例中,当需要从已编码/解码块运动信息列表中选择候选运动信息加入 当前图像块的运动信息候选者列表时,可以按照优先级从高到低的顺序从各已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
在该实施例中,List1的优先级高于List0的优先级,因此,当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以按照先List1,后List0的顺序,从List1和List0中选择候选运动信息加入当前图像块的运动信息候选者列表。
实施例十
在该实施例中,可以根据已编码/解码块的预测模式对已编码/解码块进行分类。
请参见图7J,对于任一已编码/解码块,在将该已编码/解码块加入已编码/解码块运动信息列表时,可以确定该已编码/解码块的预测模式,当该已编码/解码块的预测模式为合并模式时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表1(可以称为List0);当该已编码/解码块的预测模式为AMVP模式时,将该已编码/解码块的运动信息加入已编码/解码块运动信息列表2(可以称为list1)。
在该实施例中,当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以按照优先级从高到低的顺序从各已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
在该实施例中,List1的优先级高于List0的优先级,因此,当需要从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,可以按照先List1,后List0的顺序,从List1和List0中选择候选运动信息加入当前图像块的运动信息候选者列表。
实施例十一
在该实施例中,为了使被选中为最终的预测运动信息的候选运动信息在运动信息候选者列表中排序尽量靠前,以提高编码性能,在基于已编码/解码块的运动信息构建了已编码/解码块运动信息列表之后,可以对已编码/解码块运动信息列表中的已编码/解码块的运动信息进行重排序。
在该实施例中,可以基于已编码/解码块的残差系数对已编码/解码块运动信息列表中的已编码/解码块的运动信息进行重排序。例如,对于已编码/解码块运动信息列表中的已编码/解码块的运动信息,可以按照残差系数的非零系数个数从多到少的顺序对已编码/解码块的运动信息进行重排序。
举例来说,请参见图7K,假设已编码/解码块3的运动信息(假设为HMVP(History-based Motion Vector Prediction,基于已有运动信息的运动信息预测)2)的残差系数的非零个数少于已编码/解码块4的运动信息(假设为HMVP3)的残差系数的非零个数,因此,可以将HMVP2排在HMVP3的后面。
其中,当从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表时,按照从列表尾部到头部的顺序进行选择(即排序越靠后的越先被选择)。
实施例十二
在该实施例中,可以基于当前图像块的形状以及已编码/解码块与当前图像块的相对位置,对已编码/解码块的运动信息进行重排序。
举例来说,参见图7L,假设已编码/解码块A为当前图像块左侧的已编码/解码块;已编码/解码块B为当前图像块上侧的已编码/解码块,则:
当当前图像块的宽高比大于等于1时,可以将已编码/解码块A的运动信息排在已编码/解码块B的运动信息的前面;
当当前图像块的宽高比小于1时,可以将已编码/解码块A的运动信息排在已编码/解码块B的运动信息的后面。
实施例十三
在该实施例中,对于Affine模式(包括Affine Merge模式或Affine AMVP模式)的运动信息预测,为了增加候选者样本的丰富性,也可以基于已编码/解码块的运动信息构建已编码/解码块运动信息列表,并从已编码/解码块运动信息列表中选择候选运动信息加入当前图像块的运动信息候选者列表。
在该实施例中,可以将Affine模式的已编码/解码块的运动参数模型(Motion model)信息存成一个候选者列表(即已编码/解码块运动信息列表),列表长度为L,可以按照FIFO方式或其他方式更新列表成员(运动模型信息成员(Motion model information candidate,简称MMIC)),其示意图可以如图7M。当候选者列表中的候选者的数量达到预设最大数量L时,若有新的候选者(即图中的新加入的候选者)需要加入该列表时,可以将列表中最先加入的成员(即图中的MMIC0)删除,并将该新的候选者加入到最后。
当构建Affine模式块的运动信息候选者列表时,可以从已编码/解码块运动信息列表中选择候选运动信息(在该实施例中也可以称为候选运动参数模型信息)。
实施例十四
在该实施例中,对于Affine模式的运动信息预测,在构建已编码/解码块运动信息列表时,可以根据Affine模式的已编码/解码块的参数模型对Affine模式的已编码/解码块进行分类,并将不同类别的Affine模式的已编码/解码块加入到不同的已编码/解码块运动信息列表。
举例来说,请参见图7N,对于任一Affine模式的已编码/解码块,当该Affine模式的已编码/解码块的参数模型为2参数模型时,将该Affine模式的已编码/解码块的运动参数模型信息加入已编码/解码块运动信息列表1(可以称为List0);当该Affine模式的已编码/解码块的参数模型为4参数模型时,将该Affine模式的已编码/解码块的运动参数模型信息加入已编码/解码块运动信息列表2(可以称为List1);当该Affine模式的已编码/解码块的参数模型为6参数模型时,将该Affine模式的已编码/解码块的运动参数模型信息加入已编码/解码块运动信息列表3(可以称为List2)。
在该实施例中,当需要从已编码/解码块运动信息列表中选择候选运动信息加入 当前图像块的运动信息候选者列表时,可以从与当前图像块匹配的已编码/解码块运动信息列表中选择,或者,可以按照各已编码/解码块运动信息列表的优先级从高到低的顺序,依次从各已编码/解码块运动信息列表中选择,其具体实现可以参见图6所示方法流程中的相关描述,本申请实施例在此不做赘述。
实施例十五
在该实施例中,考虑到已编码/解码块运动信息列表的长度有限(假设长度为L1),因此,通常已编码/解码块运动信息列表中仅包括当前图像块所属CTU左侧的CTU中的各已编码/解码块的运动信息,对于其他与当前图像块邻近,但编码顺序比当前图像块要早得多的已编码/解码块(如当前图像块上侧的已编码/解码块)的运动信息已不在运动信息列表中。
针对这一问题,在该实施例中,除了按照上述实施例中描述的方式构建已编码/解码块运动信息列表之外,还可以存储当前图像块所属CTU上侧的若干行的CTU中各已编码/解码块的运动信息,在构建运动信息候选者列表时,还可以从当前图像块所属CTU上侧的若干行的CTU中各已编码/解码块的运动信息中选择候选运动信息。
举例来说,请参见图7O,可以分别存储当前图像块所属CTU的上一行CTU中各已编码/解码块的运动信息。对于任一已编码CTU,所存储的该CTU中各已编码/解码块的运动信息的数量不超过L2个(L1与L2可以相同,也可以不同)。其中,CTU内各已编码/解码块的运动信息可以按照其编码顺序进行存储。
在构建运动信息候选者列表时,还可以从当前图像块所属CTU的正上方的CTU中的各已编码/解码块运动信息中选择候选运动信息,进一步增加了候选者样本的丰富性。
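实施例十五中"额外存储当前图像块所属CTU上一行CTU内各已编码/解码块运动信息"的做法可以用如下示意性草图表示（数据结构与每个CTU的保存上限L2均为本文假设）：

```python
class CtuRowMvStore:
    """按CTU保存上一行CTU中各已编码/解码块的运动信息，每个CTU最多保存max_per_ctu个。"""

    def __init__(self, max_per_ctu=6):
        self.max_per_ctu = max_per_ctu
        self.store = {}   # 键为CTU在行内的索引，值为该CTU内按编码顺序保存的运动信息

    def add(self, ctu_idx, mv):
        mvs = self.store.setdefault(ctu_idx, [])
        if len(mvs) >= self.max_per_ctu:
            mvs.pop(0)
        mvs.append(mv)

    def candidates_above(self, cur_ctu_idx):
        """构建运动信息候选者列表时，取当前图像块所属CTU正上方CTU中保存的运动信息作为额外候选。"""
        return list(self.store.get(cur_ctu_idx, []))
```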
以上对本申请提供的方法进行了描述。下面对本申请提供的装置进行描述。
请参见图8,为本申请实施例提供的一种运动信息候选者列表构建装置的硬件结构示意图。该运动信息候选者列表构建装置可以包括处理器801、通信接口802、存储器803和通信总线804。处理器801、通信接口802以及存储器803通过通信总线804完成相互间的通信。其中,存储器803上存放有计算机程序;处理器801可以通过执行存储器803上所存放的程序,实现图2A、图3、图4、图5或图6A所对应的运动信息候选者列表构建方法。
本文中提到的存储器803可以是任何电子、磁性、光学或其它物理存储装置，可以包含或存储信息，如可执行指令、数据，等等。例如，存储器803可以是：RAM(Random Access Memory,随机存取存储器)、易失存储器、非易失性存储器、闪存、存储驱动器（如硬盘驱动器）、固态硬盘、任何类型的存储盘（如光盘、dvd等），或者类似的存储介质，或者它们的组合。
需要说明的是,在本申请实施例中,上述运动信息候选者列表构建装置可以为编码端设备,也可以为解码端设备。
本申请实施例还提供了一种存储有计算机程序的机器可读存储介质，例如图8中的存储器803，所述计算机程序可由图8所示运动信息候选者列表构建装置中的处理器801执行，以实现图2A、图3、图4、图5或图6A所对应的运动信息候选者列表构建方法。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (32)

  1. 一种运动信息候选者列表构建方法,应用于编码端设备或解码端设备,其特征在于,包括:
    获取当前图像块的已有运动信息,所述已有运动信息至少包括运动矢量;
    对所述已有运动信息进行变换;
    将变换后的运动信息作为候选运动信息加入所述当前图像块的运动信息候选者列表。
  2. 根据权利要求1所述的方法,其特征在于,对所述已有运动信息进行变换,包括:
    对所述已有运动信息在指定方向上进行伸缩。
  3. 根据权利要求2所述的方法,其特征在于,对所述已有运动信息在所述指定方向上进行伸缩,包括:
    根据所述当前图像块所属的调整单元的上一调整单元中零运动矢量的占比,确定对所述已有运动信息在所述指定方向上进行伸缩的幅度,其中,所述调整单元包括帧、条带(slice)或行,所述幅度与所述零运动矢量的占比负相关;
    根据所述幅度,对所述已有运动信息在所述指定方向上进行伸缩。
  4. 根据权利要求2所述的方法,其特征在于,对所述已有运动信息在所述指定方向上进行伸缩,包括:
    根据所述当前图像块的不同位置的空域相邻块的运动信息的相似度,确定对所述已有运动信息在所述指定方向上进行伸缩的幅度;
    根据所述幅度,对所述已有运动信息在所述指定方向上进行伸缩。
  5. 根据权利要求1所述的方法,其特征在于,对所述已有运动信息进行变换,包括:
    对所述已有运动信息进行加权。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述已有运动信息包括:
    与所述当前图像块关联的空域候选者的运动信息;
    与所述当前图像块关联的时域候选者的运动信息;和
    已编码/解码块的运动信息。
  7. 一种运动信息候选者列表构建方法,应用于编码端设备或解码端设备,其特征在于,包括:
    根据预设过滤条件对当前图像块之前的已编码/解码块进行筛选,基于筛选后的已编 码/解码块构建已编码/解码块运动信息列表;
    从所述已编码/解码块运动信息列表中选择候选运动信息加入所述当前图像块的运动信息候选者列表,所述候选运动信息至少包括运动矢量。
  8. 根据权利要求7所述的方法,其特征在于,根据所述预设过滤条件对所述当前图像块之前的所述已编码/解码块进行筛选,基于所述筛选后的已编码/解码块构建所述已编码/解码块运动信息列表,包括:
    对于当前图像块之前的任一已编码/解码块,
    当该已编码/解码块的残差系数中的非零系数个数小于预设数量阈值时,将该已编码/解码块的运动信息加入所述已编码/解码块运动信息列表;
    或,
    当该已编码/解码块的长小于第一预设阈值,或该已编码/解码块的宽小于第二预设阈值时,将该已编码/解码块的运动信息加入所述已编码/解码块运动信息列表;
    或,
    当该已编码/解码块的运动信息的量化步长小于预设步长阈值时,将该已编码/解码块的运动信息加入所述已编码/解码块运动信息列表。
  9. 根据权利要求7或8任一项所述的方法,其特征在于,基于所述筛选后的已编码/解码块构建所述已编码/解码块运动信息列表,包括:
    对所述筛选后的已编码/解码块进行分类;
    根据所述筛选后的已编码/解码块的类别,将所述筛选后的已编码/解码块加入到对应的已编码/解码块运动信息列表;
    其中,不同的类别对应不同的已编码/解码块运动信息列表。
  10. 根据权利要求9所述的方法,其特征在于,对所述筛选后的已编码/解码块进行分类,包括:
    根据所述筛选后的已编码/解码块的形状,对所述筛选后的已编码/解码块进行分类。
  11. 根据权利要求10所述的方法,其特征在于,根据所述筛选后的已编码/解码块的形状对所述筛选后的已编码/解码块进行分类,包括:
    当所述筛选后的已编码/解码块的宽高比大于1时,将所述筛选后的已编码/解码块划分为第一类别;和/或,
    当所述筛选后的已编码/解码块的所述宽高比小于1时,将所述筛选后的已编码/解码块划分为第二类别;和/或,
    当所述筛选后的已编码/解码块的所述宽高比等于1时,
    将所述筛选后的已编码/解码块划分为所述第一类别;或
    将所述筛选后的已编码/解码块划分为所述第二类别;或
    将所述筛选后的已编码/解码块划分为第三类别。
  12. 根据权利要求9所述的方法,其特征在于,对所述筛选后的已编码/解码块进行分类,包括:
    当所述筛选后的已编码/解码块的宽与高的乘积大于预设阈值时,将所述筛选后的已编码/解码块划分为第一类别;
    当所述筛选后的已编码/解码块的所述宽与所述高的乘积小于等于所述预设阈值时,将所述筛选后的已编码/解码块划分为第二类别。
  13. 根据权利要求9所述的方法,其特征在于,对所述筛选后的已编码/解码块进行分类,包括:
    根据所述筛选后的已编码/解码块的预测模式,对所述筛选后的已编码/解码块进行分类。
  14. 根据权利要求13所述的方法,其特征在于,根据所述筛选后的已编码/解码块的所述预测模式对所述筛选后的已编码/解码块进行分类,包括:
    当所述筛选后的已编码/解码块的所述预测模式为合并模式时,将所述筛选后的已编码/解码块划分为第一类别;
    当所述筛选后的已编码/解码块的所述预测模式为高级运动矢量预测(AMVP)模式时,将所述筛选后的已编码/解码块划分为第二类别。
  15. 根据权利要求9-14任一项所述的方法,其特征在于,从所述已编码/解码块运动信息列表中选择所述候选运动信息加入所述运动信息候选者列表,包括:
    按照各类别对应的已编码/解码块运动信息列表的优先级从高到低的顺序,依次从各已编码/解码块运动信息列表中选择所述候选运动信息加入所述运动信息候选者列表,其中,不同的类别对应不同的优先级。
  16. 根据权利要求9-14任一项所述的方法,其特征在于,从所述已编码/解码块运动信息列表中选择所述候选运动信息加入所述运动信息候选者列表,包括:
    确定与所述当前图像块匹配的已编码/解码块的运动信息列表;
    从所述与所述当前图像块匹配的已编码/解码块的运动信息列表中选择所述候选运动信息加入所述运动信息候选者列表。
  17. 根据权利要求16所述的方法,其特征在于,确定与所述当前图像块匹配的已编码/解码块的运动信息列表,包括:
    根据所述当前图像块的形状,确定所述当前图像块的类别;
    根据所述当前图像块的所述类别,确定与所述当前图像块的类别匹配的已编码/解码块的运动信息列表。
  18. 根据权利要求7-17任一项所述的方法,其特征在于,基于所述筛选后的已编码/解码块构建所述已编码/解码块运动信息列表之后,还包括:
    对所述已编码/解码块运动信息列表中的所述筛选后的已编码/解码块的运动信息进行重排序。
  19. 根据权利要求18所述的方法,其特征在于,对所述已编码/解码块运动信息列表中的所述筛选后的已编码/解码块的运动信息进行重排序,包括:
    基于所述筛选后的已编码/解码块的残差系数,对所述筛选后的已编码/解码块的运动信息进行重排序。
  20. 根据权利要求19所述的方法,其特征在于,基于所述筛选后的已编码/解码块的所述残差系数对所述筛选后的已编码/解码块的运动信息进行重排序,包括:
    按照所述残差系数的非零系数个数从多到少的顺序,对所述筛选后的已编码/解码块的运动信息进行重排序。
  21. 根据权利要求18所述的方法,其特征在于,对所述已编码/解码块运动信息列表中的所述筛选后的已编码/解码块的运动信息进行重排序,包括:
    基于所述当前图像块的形状以及所述筛选后的已编码/解码块与所述当前图像块的相对位置,对所述筛选后的已编码/解码块的运动信息进行重排序。
  22. 根据权利要求21所述的方法,其特征在于,基于所述当前图像块的所述形状以及所述筛选后的已编码/解码块与所述当前图像块的所述相对位置,对所述筛选后的已编码/解码块的运动信息进行重排序,包括:
    当所述当前图像块的宽高比大于1时,按照所述当前图像块左侧的所述筛选后的已编码/解码块在前,所述当前图像块上侧的所述筛选后的已编码/解码块在后的顺序,对所述筛选后的已编码/解码块的运动信息进行重排序;和/或,
    当所述当前图像块的所述宽高比小于1时,按照所述当前图像块上侧的所述筛选后的已编码/解码块在前,所述当前图像块左侧的所述筛选后的已编码/解码块在后的顺序,对所述筛选后的已编码/解码块的运动信息进行重排序;和/或,
    当所述当前图像块的所述宽高比等于1时,
    按照所述当前图像块左侧的所述筛选后的已编码/解码块在前,所述当前图像块上侧的所述筛选后的已编码/解码块在后的顺序,对所述筛选后的已编码/解码块的运动 信息进行重排序;或,
    按照所述当前图像块上侧的所述筛选后的已编码/解码块在前,所述当前图像块左侧的所述筛选后的已编码/解码块在后的顺序,对所述筛选后的已编码/解码块的运动信息进行重排序。
  23. 一种运动信息候选者列表构建方法,应用于编码端设备或解码端设备,其特征在于,包括:
    对当前图像块之前的已编码/解码块进行分类;
    根据所述已编码/解码块的类别,将所述已编码/解码块加入到对应的已编码/解码块运动信息列表,其中,不同的类别对应不同的已编码/解码块运动信息列表;
    从所述已编码/解码块运动信息列表中,选择候选运动信息加入所述当前图像块的运动信息候选者列表,所述候选运动信息至少包括运动矢量。
  24. 一种运动信息候选者列表构建方法,应用于编码端设备或解码端设备,其特征在于,包括:
    根据当前图像块之前的已编码/解码块的运动信息,构建已编码/解码块运动信息列表;
    对所述已编码/解码块运动信息列表中的所述已编码/解码块的运动信息进行重排序;
    从重排序后的所述已编码/解码块运动信息列表中,选择候选运动信息加入所述当前图像块的运动信息候选者列表,所述候选运动信息至少包括运动矢量。
  25. 一种运动信息候选者列表构建方法,应用于编码端设备或解码端设备,其特征在于,包括:
    构建已编码/解码块运动信息列表,所述已编码/解码块运动信息列表中包括当前图像块之前的已编码/解码块的运动信息;
    当所述当前图像块的预测模式为仿射Affine模式时,从所述已编码/解码块的运动信息中选择候选运动信息加入所述当前图像块的运动信息候选者列表。
  26. 根据权利要求25所述的方法,其特征在于,从所述已编码/解码块的运动信息中选择所述候选运动信息加入所述当前图像块的运动信息候选者列表,包括:
    从所述预测模式为Affine Merge模式的已编码/解码块的运动信息中,选择所述候选运动信息加入所述当前图像块的运动信息候选者列表;或,
    从所述预测模式为Affine AMVP模式的已编码/解码块的运动信息中,选择所述候选运动信息加入所述当前图像块的运动信息候选者列表。
  27. 根据权利要求25所述的方法,其特征在于,构建所述已编码/解码块运动信息 列表,包括:
    对所述预测模式为Affine模式的已编码/解码块进行分类;
    根据所述预测模式为Affine模式的已编码/解码块的类别,将所述预测模式为所述Affine模式的已编码/解码块加入到对应的已编码/解码块运动信息列表;
    其中,不同的类别对应不同的已编码/解码块运动信息列表。
  28. 根据权利要求27所述的方法,其特征在于,对所述预测模式为所述Affine模式的已编码/解码块进行分类,包括:
    根据所述预测模式为所述Affine模式的已编码/解码块的参数模型,对所述预测模式为所述Affine模式的已编码/解码块进行分类。
  29. 根据权利要求28所述的方法,其特征在于,根据所述预测模式为所述Affine模式的已编码/解码块的参数模型对所述预测模式为所述Affine模式的已编码/解码块进行分类,包括:
    当所述预测模式为所述Affine模式的已编码/解码块的所述参数模型为2参数模型时,将所述预测模式为所述Affine模式的已编码/解码块划分为第一类别;
    当所述预测模式为所述Affine模式的已编码/解码块的所述参数模型为4参数模型时,将所述预测模式为所述Affine模式的已编码/解码块划分为第二类别;
    当所述预测模式为所述Affine模式的已编码/解码块的所述参数模型为6参数模型时,将所述预测模式为所述Affine模式的已编码/解码块划分为第三类别。
  30. 根据权利要求27-28任一项所述的方法,其特征在于,从所述已编码/解码块的运动信息中选择所述候选运动信息加入所述当前图像块的运动信息候选者列表,包括:
    按照各类别对应的已编码/解码块运动信息列表的优先级从高到低的顺序,依次从各已编码/解码块运动信息列表中选择所述候选运动信息加入所述当前图像块的运动信息候选者列表,其中,不同的类别对应不同的优先级;
    或,
    确定与所述当前图像块匹配的已编码/解码块的运动信息列表;
    从所述与所述当前图像块匹配的已编码/解码块的运动信息列表中选择所述候选运动信息加入所述当前图像块的运动信息候选者列表。
  31. 一种运动候选者列表构建装置,其特征在于,包括处理器、通信接口、存储器和通信总线,其中,所述处理器,所述通信接口,所述存储器通过所述通信总线完成相互间的通信;
    所述存储器,用于存放计算机程序;
    所述处理器,用于执行所述存储器上所存放的所述程序时,实现权利要求1-30任一所述的方法步骤。
  32. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-30任一所述的方法步骤。
PCT/CN2019/106473 2018-09-20 2019-09-18 运动信息候选者列表构建方法、装置及可读存储介质 WO2020057556A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811102877.6A CN110933439B (zh) 2018-09-20 2018-09-20 运动信息候选者列表构建方法、装置及可读存储介质
CN201811102877.6 2018-09-20

Publications (1)

Publication Number Publication Date
WO2020057556A1 true WO2020057556A1 (zh) 2020-03-26

Family

ID=69855583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106473 WO2020057556A1 (zh) 2018-09-20 2019-09-18 运动信息候选者列表构建方法、装置及可读存储介质

Country Status (2)

Country Link
CN (1) CN110933439B (zh)
WO (1) WO2020057556A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110809161B (zh) 2019-03-11 2020-12-29 杭州海康威视数字技术股份有限公司 运动信息候选者列表构建方法及装置
CN112073735B (zh) * 2020-11-16 2021-02-02 北京世纪好未来教育科技有限公司 视频信息处理方法、装置、电子设备及存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101257630A (zh) * 2008-03-25 2008-09-03 浙江大学 结合三维滤波的视频编码方法和装置
CN102714736A (zh) * 2010-01-19 2012-10-03 三星电子株式会社 基于减少的运动矢量预测候选对运动矢量进行编码和解码的方法和设备
CN102934434A (zh) * 2010-07-12 2013-02-13 联发科技股份有限公司 时间运动矢量预测的方法与装置
CN103430547A (zh) * 2011-03-08 2013-12-04 Jvc建伍株式会社 动图像编码装置、动图像编码方法及动图像编码程序、及动图像解码装置、动图像解码方法及动图像解码程序
CN108141588A (zh) * 2015-09-24 2018-06-08 Lg电子株式会社 图像编码***中的帧间预测方法和装置
WO2017157259A1 (en) * 2016-03-15 2017-09-21 Mediatek Inc. Method and apparatus of video coding with affine motion compensation
WO2017209455A2 (ko) * 2016-05-28 2017-12-07 세종대학교 산학협력단 비디오 신호의 부호화 또는 복호화 방법 및 장치
WO2018097117A1 (ja) * 2016-11-22 2018-05-31 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置、復号装置、符号化方法及び復号方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU, MINHUA: "Parallelized merge/skip mode for HEVC", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/ IEC JTC1/SC29/WG11, 22 July 2011 (2011-07-22), XP030049045 *

Also Published As

Publication number Publication date
CN110933439B (zh) 2022-05-31
CN110933439A (zh) 2020-03-27

Similar Documents

Publication Publication Date Title
TWI736903B (zh) 非對稱加權雙向預測Merge
CN111279695B (zh) 用于基于非对称子块的图像编码/解码的方法及装置
TWI729402B (zh) 加權交織預測
TWI736905B (zh) 色度解碼器側運動向量細化
TW201904299A (zh) 運動向量預測
CN110740321B (zh) 基于更新的运动矢量的运动预测
CN111164978A (zh) 用于对图像进行编码/解码的方法和设备以及用于存储比特流的记录介质
CN112369021A (zh) 用于吞吐量增强的图像编码/解码方法和设备以及存储比特流的记录介质
TWI722465B (zh) 子塊的邊界增強
TW202007154A (zh) 交織預測的改善
CN113273188A (zh) 图像编码/解码方法和装置以及存储有比特流的记录介质
WO2020057556A1 (zh) 运动信息候选者列表构建方法、装置及可读存储介质
TWI833795B (zh) 交織預測的快速編碼方法
CN110876064B (zh) 部分交织的预测
TWI719524B (zh) 降低非相鄰Merge設計的複雜度
CN110557639B (zh) 交织预测的应用

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19862237

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19862237

Country of ref document: EP

Kind code of ref document: A1
