WO2020057556A1 - Method and apparatus for constructing a motion information candidate list, and machine-readable storage medium


Info

Publication number
WO2020057556A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion information
encoded
decoded
block
coded
Prior art date
Application number
PCT/CN2019/106473
Other languages
English (en)
Chinese (zh)
Inventor
徐丽英
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2020057556A1

Classifications

    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N19/503: Predictive coding involving temporal prediction
    • H04N19/513: Processing of motion vectors
    • H04N19/52: Processing of motion vectors by predictive encoding
    • H04N19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present application relates to video coding technologies, and in particular, to a method, a device, and a readable storage medium for constructing a motion information candidate list.
  • Inter-prediction refers to the use of the time-domain correlation of video to predict the pixels of the current image using the pixels of neighboring coded images in order to effectively remove the video time-domain redundancy.
  • the inter-prediction part of the main video coding standards uses block-based motion compensation technology.
  • the main principle is to find a best matching block for each pixel block of the current image in a previously encoded image. This process is called motion estimation (ME).
  • the image used for prediction is called the reference frame, the displacement from the reference block to the current pixel block is called the motion vector (MV), and the difference between the current pixel block and the reference block is called the prediction residual.
  • the motion information of neighboring blocks in the spatial domain has a strong correlation, and the motion information also has a certain correlation in the time domain
  • the motion information of neighboring blocks in the spatial domain or the time domain is used to predict the motion information of the current block to obtain predicted pixel values; only the residuals then need to be encoded, which can greatly save the number of encoding bits of motion information.
  • in this case the encoder only needs to transmit a sequence number, such as Merge_idx, that identifies the predicted motion information in the candidate list.
  • current video coding standards adopt merge (Merge), Advanced Motion Vector Prediction (AMVP) and affine (Affine) techniques for motion information prediction.
  • these techniques use spatial and temporal motion information prediction: a motion information candidate list is established, and the best candidate is selected from the list as the prediction information of the current unit according to preset rules.
  • the present application provides a method, a device, and a readable storage medium for constructing a candidate list of motion information.
  • a method for constructing a motion information candidate list includes: acquiring existing motion information of a current image block, where the existing motion information includes at least a motion vector; transforming the existing motion information; and adding the transformed motion information as candidate motion information to the motion information candidate list of the current image block.
  • a method for constructing a motion information candidate list includes: filtering the encoded/decoded blocks before the current image block according to preset filtering conditions, and constructing an encoded/decoded block motion information list based on the filtered encoded/decoded blocks; and selecting candidate motion information from the encoded/decoded block motion information list to add to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
  • a method for constructing a motion information candidate list includes: classifying the encoded/decoded blocks before a current image block; adding each encoded/decoded block to the corresponding encoded/decoded block motion information list according to its category, where different categories correspond to different encoded/decoded block motion information lists; and selecting candidate motion information from the encoded/decoded block motion information lists to add to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
  • a method for constructing a motion information candidate list includes: constructing an encoded/decoded block motion information list according to the motion information of the encoded/decoded blocks before a current image block; reordering the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list; and selecting candidate motion information from the reordered encoded/decoded block motion information list to add to the motion information candidate list of the current image block, the candidate motion information including at least a motion vector.
  • a method for constructing a motion information candidate list includes: constructing an encoded/decoded block motion information list, where the encoded/decoded block motion information list includes the motion information of the encoded/decoded blocks before a current image block; and, when the prediction mode of the current image block is the affine (Affine) mode, selecting candidate motion information from the motion information of the encoded/decoded blocks to add to the motion information candidate list of the current image block.
  • a motion information candidate list constructing device includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is configured to store a computer program; and the processor is configured to implement the foregoing methods of constructing a motion information candidate list when executing the program stored in the memory.
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the above-mentioned method for constructing a motion information candidate list is implemented.
  • the existing motion information of the current image block is transformed, and the transformed motion information is added as candidate motion information to the motion information candidate list of the current image block, which increases the richness of the candidate samples.
  • the method for constructing a motion information candidate list in the embodiments of the present application filters, classifies, and/or reorders the encoded/decoded blocks before the current image block when constructing the encoded/decoded block motion information list, which increases the richness of the candidate samples and improves the accuracy of the constructed encoded/decoded block motion information list.
  • FIG. 1A- (a) to FIG. 1A- (f) are schematic diagrams of block division shown in an exemplary embodiment of the present application;
  • FIG. 1B is a schematic diagram of block division according to another exemplary embodiment of the present application.
  • FIG. 2A is a schematic flowchart of a method for constructing a motion information candidate list according to an exemplary embodiment of the present application
  • FIG. 2B to FIG. 2D are schematic diagrams of a motion information candidate list shown in an exemplary embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for constructing a motion information candidate list according to an exemplary embodiment of the present application
  • FIG. 4 is a schematic flowchart of a method for constructing a motion information candidate list according to another exemplary embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a method for constructing a motion information candidate list, according to another exemplary embodiment of the present application.
  • FIG. 6A is a schematic flowchart of a method for constructing a motion information candidate list according to still another exemplary embodiment of the present application.
  • FIG. 6B is a schematic diagram of a motion information candidate list according to an exemplary embodiment of the present application.
  • FIG. 7A is a schematic diagram of scaling existing motion information according to an exemplary embodiment of the present application.
  • FIG. 7B is a schematic diagram illustrating a correspondence relationship between frame motion intensity and scaling amplitude according to an exemplary embodiment of the present application.
  • FIG. 7C is a schematic diagram illustrating a correspondence relationship between the similarity of adjacent blocks at different positions and the scaling amplitude, according to an exemplary embodiment of the present application.
  • FIG. 7D is a schematic diagram of filtering an encoded / decoded block according to an exemplary embodiment of the present application.
  • FIG. 7E is a schematic diagram of filtering a coded / decoded block according to another exemplary embodiment of the present application.
  • FIG. 7F is a schematic diagram of filtering a coded / decoded block according to another exemplary embodiment of the present application.
  • FIG. 7G is a schematic diagram of classifying coded / decoded blocks according to an exemplary embodiment of the present application.
  • FIG. 7H is a schematic diagram illustrating selecting candidate motion information from a plurality of encoded / decoded block motion information lists, according to an exemplary embodiment of the present application.
  • FIG. 7I is a schematic diagram of classifying encoded / decoded blocks according to another exemplary embodiment of the present application.
  • FIG. 7J is a schematic diagram illustrating classification of encoded / decoded blocks according to another exemplary embodiment of the present application.
  • FIG. 7K is a schematic diagram of reordering motion information of encoded / decoded blocks in a motion information list of encoded / decoded blocks according to an exemplary embodiment of the present application;
  • FIG. 7L is a schematic diagram of reordering motion information of encoded / decoded blocks in a motion information list of encoded / decoded blocks according to another exemplary embodiment of the present application.
  • FIG. 7M is a schematic diagram of constructing an encoded/decoded block motion information list for motion information prediction in the Affine mode, according to an exemplary embodiment of the present application.
  • FIG. 7N is a schematic diagram illustrating classification of encoded / decoded blocks in an Affine mode according to an exemplary embodiment of the present application.
  • FIG. 7O is a schematic diagram illustrating storage of encoded / decoded block motion information according to an exemplary embodiment of the present application.
  • Fig. 8 is a schematic diagram of a hardware structure of a device for constructing a motion information candidate list, according to an exemplary embodiment of the present application.
  • a frame of image is divided into CTUs (Coding Tree Units), and a CTU can be divided into CUs (Coding Units) by quadtree partitioning; whether intra coding or inter coding is used is determined at the leaf-node CU level.
  • a CU can be further divided into two or four PUs (Prediction Units), and the same prediction information is used within a PU.
  • a CU can be further quad-divided into multiple TUs (Transform Units).
  • the current image block in this application is a PU.
  • a partition structure combining a binary tree, a tri-tree, and a quad-tree replaces the original division mode, eliminates the original distinction between CU, PU, and TU concepts, and supports a more flexible division of CU.
  • the CU may be a square or a rectangular partition.
  • the CTU is first partitioned by a quadtree, and the leaf nodes of the quadtree can then be further partitioned by a binary tree or a ternary tree.
  • as shown in FIG. 1A, a CU may remain undivided or be split using one of five partition types: FIG. 1A-(a) represents a CU that is not further partitioned, FIG. 1A-(b) represents a quadtree partition, FIG. 1A-(c) represents a horizontal binary tree partition, FIG. 1A-(d) represents a vertical binary tree partition, FIG. 1A-(e) represents a horizontal ternary tree partition, and FIG. 1A-(f) represents a vertical ternary tree partition.
  • the CU partitioning within a CTU may be any combination of the above five partition types. It can be seen from the above that the different division methods give each PU a different shape, such as rectangles and squares of different sizes.
  • H.265/HEVC introduces merge technology (motion information prediction in Merge mode) and AMVP technology (motion information prediction in AMVP mode); both use spatial and temporal motion information prediction.
  • an optimal motion information candidate is selected as the predicted motion information of the current data block.
  • in Merge mode, the motion information of the current data block is directly predicted from the motion information of spatially or temporally adjacent data blocks, and no motion vector difference (MVD) is transmitted. If the encoder and decoder construct the motion information candidate list in the same way, the encoder only needs to transmit the index of the predicted motion information in the motion information candidate list, which can greatly save the number of encoding bits of the motion information.
  • the motion information prediction of the AMVP mode also uses the correlation of the motion information of adjacent data blocks in the spatial and temporal domains to establish a motion information candidate list for the current data block.
  • the encoder selects the optimal prediction motion information from it, and performs differential encoding on the motion information.
  • the decoder only needs to establish the same motion information candidate list and receive the motion vector residual and the sequence number of the predicted motion information in the list to calculate the motion information of the current data block.
  • the length of the motion information candidate list in the AMVP mode is two.
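The contrast between the two modes can be made concrete with a short sketch. The Python fragment below shows, under assumed list contents and variable names (merge_idx, mvp_idx, mvd), how a decoder might recover a motion vector in Merge mode (index only) versus AMVP mode (index plus a transmitted motion vector difference); it illustrates the principle only and is not the syntax of any standard.

```python
from typing import List, Tuple

MV = Tuple[int, int]  # a motion vector as (horizontal, vertical) displacement

def merge_mode_mv(candidate_list: List[MV], merge_idx: int) -> MV:
    """Merge mode: the decoder rebuilds the same candidate list as the encoder
    and uses the signalled index directly; no motion vector difference is sent."""
    return candidate_list[merge_idx]

def amvp_mode_mv(candidate_list: List[MV], mvp_idx: int, mvd: MV) -> MV:
    """AMVP mode: the signalled index selects a predictor from a short list
    (length two in AMVP), and the transmitted MVD is added to it."""
    pred = candidate_list[mvp_idx]
    return (pred[0] + mvd[0], pred[1] + mvd[1])

# Illustrative candidate lists (values are made up).
merge_list: List[MV] = [(4, -2), (3, -1), (0, 0)]
amvp_list: List[MV] = [(4, -2), (3, -1)]  # AMVP list length is two

print(merge_mode_mv(merge_list, merge_idx=1))           # -> (3, -1)
print(amvp_mode_mv(amvp_list, mvp_idx=0, mvd=(1, 1)))   # -> (5, -1)
```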
  • Affine mode is a new inter prediction mode introduced by H.266. It has a good prediction effect for rotated and zoomed scenes.
  • there are two types of Affine mode in JEM (Joint Exploration Model): one is Affine Inter (corresponding to Affine AMVP), and the other is Affine Merge.
  • Affine Merge traverses the candidate image blocks to find the first candidate encoded in Affine mode. In the Affine Merge method, no additional index values need to be transmitted; only a flag indicating whether Affine Merge is used needs to be transmitted.
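A minimal sketch of the traversal just described, under an assumed NeighborBlock record and scan order: the candidate blocks are scanned in a fixed order shared by encoder and decoder, and the first Affine-coded neighbour is chosen, so only a use/do-not-use flag needs to be signalled.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class NeighborBlock:
    name: str                # position label such as "A0" or "B0" (illustrative only)
    is_affine: bool          # whether this neighbor was coded in Affine mode
    affine_params: Tuple     # its affine motion parameters (placeholder)

def affine_merge_candidate(neighbors: List[NeighborBlock]) -> Optional[NeighborBlock]:
    """Return the first neighbor coded in Affine mode, or None if there is none.
    No index is transmitted; encoder and decoder traverse in the same order."""
    for blk in neighbors:
        if blk.is_affine:
            return blk
    return None

neighbors = [
    NeighborBlock("A0", False, ()),
    NeighborBlock("B0", True, (0.9, 0.1, 2, -1)),  # first Affine-coded neighbor
    NeighborBlock("B1", True, (1.1, 0.0, 0, 3)),
]
chosen = affine_merge_candidate(neighbors)
print(chosen.name if chosen else "no Affine neighbor")  # -> B0
```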
  • the method for constructing the motion information candidate list described in this article can be applied to both the encoding end device and the decoding end device.
  • when applied to the encoding end device, the encoded/decoded block described in this article refers to an encoded block; when applied to the decoding end device, the encoded/decoded block described in this article refers to a decoded block; this will not be repeated later in this article.
  • new candidate motion information can be obtained by transforming existing motion information to increase the richness of the candidate samples.
  • FIG. 2A is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • the method for constructing a motion information candidate list may include the following steps.
  • Step S200: Acquire existing motion information of the current image block.
  • the current image block refers to a data block on which motion information prediction is currently performed.
  • when applied to the encoding end, the data block may be a data block to be encoded (referred to herein as an encoding block); when applied to the decoding end, the data block may be a data block to be decoded (referred to herein as a decoding block).
  • the existing motion information may include, but is not limited to, motion information of a spatial domain candidate, motion information of a time domain candidate, and / or motion information of an encoded / decoded block.
  • the motion information of the spatial candidate and the motion information of the time candidate are candidate motion information located in the motion information candidate list of the current image block.
  • the motion information candidate list of the current image block includes Spatial domain candidates, time domain candidates, zero motion information, etc.
  • the number of spatial domain candidates and the number of time domain candidates in the motion information candidate list of the current image block may differ.
  • the spatial domain candidates of the current image block may include all spatially adjacent blocks, may include only some of the spatially adjacent blocks, and may also include some spatially non-adjacent blocks.
  • the spatial domain candidate block of the current image block may include spatially adjacent blocks in the same CTU as the current image block, or may include spatially adjacent blocks different from the CTU in which the current image block is located.
  • the time domain candidate of the current image block may be an image block in a reference image frame of the current image block, in particular a temporally adjacent block in the reference frame, including the intermediate image block located in the reference image frame at the same position as the current image block and the spatially adjacent blocks of that intermediate image block.
  • the motion information of the spatial domain candidate in the motion information candidate list of the current image block includes the motion information of some spatial domain neighboring blocks of the current image block.
  • the motion information of the time domain candidate in the motion information candidate list of the current image block includes the motion information of the intermediate image block located in the reference image frame of the current image block at the same position as the current image block, and of the spatially adjacent blocks of that intermediate image block.
  • the motion information of the encoded/decoded block refers to motion information other than the spatial domain candidates and the time domain candidates in the motion information candidate list of the current image block.
  • for example, the encoded/decoded blocks may include the spatial-domain neighboring blocks of the current image block other than those used as spatial domain candidates, and/or the temporal-domain neighboring blocks of the current image block other than those used as time domain candidates in the candidate list.
  • the current image block may be any image unit, and the image unit may be a CTU but not limited to the CTU, or may be a block or unit that is further divided by the CTU, or may be a unit of a block larger than the CTU.
  • the existing motion information includes at least a motion vector.
  • the existing motion information is not limited to motion vectors and may also include coding information other than motion vectors. Accordingly, the transformation of existing motion information may also apply to coding information other than motion vectors; details are not described in the embodiments of the present application.
  • Step S210 Transform the acquired existing motion information.
  • the existing motion information may be transformed to obtain new candidate motion information.
  • the transforming the acquired existing motion information may include: scaling the acquired existing motion information in a specified direction.
  • the existing motion information can be transformed by scaling the existing motion information to obtain new candidate motion information.
  • the acquired existing motion information may be scaled in a specified direction.
  • the specified direction may be the motion direction of the motion vector of the current image block, that is, the direction of the MV (Motion Vector) included in the motion information.
  • the current image block may be a bidirectional inter prediction block or a unidirectional inter prediction block.
  • the scaling of the existing motion information is not limited to scaling in the MV direction, and may also be scaling in other specified directions, and the specific implementation thereof will not be repeated here.
  • the scale of scaling can be flexibly adjusted according to the actual scene.
  • in order to improve the efficiency of constructing the motion information candidate list, the existing motion information can be scaled at the frame level, slice level, or line level, that is, the existing motion information within the same frame, the same slice, or the same line can be scaled with the same amplitude.
  • a frame image can be divided into one or more slices; a slice can include one or more CTUs.
  • the foregoing scaling of the acquired existing motion information in a specified direction may include: determining the amplitude for scaling the existing motion information in the specified direction according to the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs; and scaling the existing motion information in the specified direction according to the determined amplitude.
  • the adjustment unit may include a frame, a slice, or a line. Accordingly, when scaling existing motion information in a specified direction, frame-level, slice-level, or line-level syntax control may be adopted.
  • the amplitude for scaling the existing motion information can be determined according to the motion intensity of the adjustment unit.
  • the intensity of motion can be characterized by the proportion of zero motion vectors.
  • the proportion of the zero motion vectors of the adjustment unit is the ratio of the number of zero motion vectors in the adjustment unit to the number of all motion vectors in the adjustment unit.
  • the amplitude for scaling the existing motion information in the specified direction may be determined according to the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs.
  • the scaling amplitude of the existing motion information in the specified direction is negatively correlated with the proportion of zero motion vectors, that is, the higher the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs, the smaller the determined scaling amplitude.
  • taking the adjustment unit as a frame (that is, frame-level adjustment) as an example, the correspondence between the proportion of zero motion vectors in the previous frame of the frame to which the current image block belongs and the scaling amplitude determined for the specified direction can be shown in the following table:
  • when the proportion of zero motion vectors in the previous adjustment unit of the adjustment unit to which the current image block belongs exceeds a relatively large proportion threshold (that is, the motion of the previous adjustment unit is very weak), the probability that zero motion information is selected as the final predicted motion information will be relatively large.
  • in this case, the amplitude for scaling the existing motion information in the specified direction can be zero, that is, the existing motion information is not scaled, so as to increase the probability that zero motion information is added to the final motion information candidate list.
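The following sketch illustrates the frame-level rule just described: an amplitude is derived from the proportion of zero motion vectors in the previous frame (negatively correlated, and zero above a large threshold), and an existing motion vector is then stretched along its own direction. The ratio-to-amplitude breakpoints are invented for illustration; the text above only states the negative correlation and the zero-amplitude case.

```python
from typing import Tuple

MV = Tuple[float, float]

def scaling_amplitude(zero_mv_ratio: float) -> float:
    """Map the zero-MV proportion of the previous adjustment unit (frame, slice,
    or line) to a scaling amplitude. A higher ratio means weaker motion, hence a
    smaller amplitude; above a large threshold the amplitude is zero (no scaling).
    The breakpoints below are illustrative, not taken from the patent."""
    if zero_mv_ratio >= 0.8:
        return 0.0      # motion is very weak: do not scale, favour zero-MV candidates
    if zero_mv_ratio >= 0.5:
        return 0.1
    if zero_mv_ratio >= 0.2:
        return 0.25
    return 0.5

def scale_along_mv_direction(mv: MV, amplitude: float) -> MV:
    """Stretch the motion vector along its own direction by the given amplitude."""
    return (mv[0] * (1.0 + amplitude), mv[1] * (1.0 + amplitude))

existing_mv: MV = (8.0, -4.0)
amp = scaling_amplitude(zero_mv_ratio=0.3)           # previous frame: 30% zero MVs
print(scale_along_mv_direction(existing_mv, amp))    # -> (10.0, -5.0)
```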
  • the existing motion information may also be scaled at the block level, that is, the scaling amplitude of the existing motion information may be determined separately for each data block.
  • the above-mentioned scaling of the acquired existing motion information in a specified direction may include: determining the amplitude for scaling the existing motion information in the specified direction based on the similarity of the motion information of spatially adjacent blocks at different positions of the current image block; and scaling the existing motion information in the specified direction according to the determined amplitude.
  • the amplitude of scaling existing motion information may be determined according to the similarity of the motion information of neighboring blocks of the current image block.
  • the amplitude of the existing motion information to be scaled may be determined according to the similarity of the motion information of the spatially neighboring blocks in different positions of the current image block.
  • according to the determined amplitude, the existing motion information is scaled in the specified direction.
  • the scaling amplitude can also be zero, that is, the existing motion information is not scaled.
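For the block-level variant just described, one possible measure of similarity is the average distance between the motion vectors of spatial neighbors at different positions: the more similar the neighbors, the smaller the amplitude, down to zero. Everything below, including the similarity metric and the mapping, is an illustrative assumption rather than the patent's definition.

```python
from typing import List, Tuple
import math

MV = Tuple[float, float]

def neighbor_similarity(neighbor_mvs: List[MV]) -> float:
    """Return a similarity score in [0, 1] for the neighbors' motion vectors;
    1.0 means all neighbor MVs are identical (illustrative metric only)."""
    if len(neighbor_mvs) < 2:
        return 1.0
    dists = [math.dist(a, b) for a in neighbor_mvs for b in neighbor_mvs if a != b]
    avg = sum(dists) / len(dists) if dists else 0.0
    return 1.0 / (1.0 + avg)

def amplitude_from_similarity(similarity: float, max_amplitude: float = 0.5) -> float:
    """Highly similar neighbors lead to an amplitude approaching zero (no scaling)."""
    return max_amplitude * (1.0 - similarity)

neighbors: List[MV] = [(4.0, -2.0), (4.0, -2.0), (5.0, -2.0)]
amp = amplitude_from_similarity(neighbor_similarity(neighbors))
print(round(amp, 3))  # -> 0.25 for this example
```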
  • the transforming the acquired existing motion information may include: weighting at least two acquired existing motion information.
  • the existing motion information can be transformed by weighting the existing motion information to obtain new candidate motion information.
  • At least two of the acquired existing motion information may be weighted, that is, a weighted average of the at least two acquired existing motion information may be determined.
  • the weighting coefficients used when weighting the at least two pieces of acquired existing motion information can be adaptively adjusted according to the characteristics of the source blocks of the existing motion information (the existing motion information being the motion information of those source blocks).
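A small sketch of the weighting transform: two pieces of existing motion information are combined by a weighted average, and the weights could, as the text suggests, be adapted to properties of the source blocks (here, hypothetically, their areas). The adaptation rule is an assumption for illustration.

```python
from typing import Tuple

MV = Tuple[float, float]

def weighted_mv(mv_a: MV, mv_b: MV, w_a: float, w_b: float) -> MV:
    """Weighted average of two existing motion vectors (weights are normalised)."""
    s = w_a + w_b
    return ((mv_a[0] * w_a + mv_b[0] * w_b) / s,
            (mv_a[1] * w_a + mv_b[1] * w_b) / s)

def weights_from_source_blocks(area_a: int, area_b: int) -> Tuple[float, float]:
    """Hypothetical adaptation: weight each MV by the area of its source block."""
    total = area_a + area_b
    return area_a / total, area_b / total

w_a, w_b = weights_from_source_blocks(area_a=16 * 16, area_b=8 * 8)
print(weighted_mv((4.0, -2.0), (8.0, 0.0), w_a, w_b))  # -> (4.8, -1.6)
```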
  • Step S220 Add the transformed motion information as candidate motion information to a motion information candidate list of the current image block.
  • the transformed motion information may be added as candidate motion information to the motion information candidate list of the current image block.
  • the motion information candidate list constructed for the motion information prediction of the merge mode may include spatial domain candidates, time domain candidates, and zero motion information.
  • it may also include candidate motion information obtained by transforming existing motion information, and a schematic diagram thereof may be shown in FIG. 2C.
  • the transformation of the existing motion information may include, but is not limited to, scaling or weighting the existing motion information.
  • the candidate motion information obtained by transforming the existing motion information may be located before the combination candidate or after the combination candidate, but it is not limited to the above examples.
  • in FIG. 2C, there is a combination candidate, and the candidate motion information obtained by transforming the existing motion information is located before the combination candidate (that is, after the time domain candidate and before the combination candidate).
  • the motion information prediction of the AMVP mode is taken as an example.
  • in addition to spatial domain candidates, time domain candidates, and zero motion information, the motion information candidate list constructed for the motion information prediction of the AMVP mode may also include candidate motion information obtained by transforming the existing motion information. The candidate motion information obtained by transforming the existing motion information may be located in the motion information candidate list after the time domain candidates and before the zero motion information; a schematic diagram is shown in FIG. 2D.
  • the existing motion information is transformed, and the transformed motion information is added as candidate motion information to the motion information candidate list of the current image block, thereby increasing the richness of the candidate samples and the flexibility of motion information candidate selection.
  • in the motion information prediction of a mode such as the merge mode or the AMVP mode, one or more of filtering, classification, and reordering are used to construct an encoded/decoded block motion information list based on the motion information of the encoded/decoded blocks, and candidate motion information is selected from the encoded/decoded block motion information list to join the motion information candidate list of the current image block; on the basis of increasing the richness of the candidate samples, this improves the accuracy of constructing the encoded/decoded block motion information list.
  • FIG. 3 is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • the method for constructing a motion information candidate list may include the following steps.
  • Step S300 Filter the coded / decoded blocks before the current image block according to the preset filtering conditions, and construct a coded / decoded block motion information list based on the filtered coded / decoded blocks.
  • a coded/decoded block motion information list may be constructed based on the motion information of the coded/decoded blocks before the current image block, and candidate motion information may be selected from the coded/decoded block motion information list to be added to the motion information candidate list of the current image block.
  • for some coded/decoded blocks, the probability of their motion information being selected as the final predicted motion information is small. Therefore, in order to improve the accuracy of constructing the coded/decoded block motion information list, the coded/decoded blocks before the current image block can be filtered when constructing the list, and the coded/decoded block motion information list can be constructed based on the filtered coded/decoded blocks.
  • in an example, the above-mentioned filtering of the coded/decoded blocks before the current image block according to preset filtering conditions and construction of the coded/decoded block motion information list based on the filtered coded/decoded blocks may be implemented as follows.
  • the number of non-zero coefficients in the residual coefficients of a coded/decoded block can intuitively reflect whether the motion information prediction is accurate. Therefore, when constructing the coded/decoded block motion information list, the coded/decoded blocks can be filtered based on the number of non-zero coefficients in their residual coefficients: the greater the number of non-zero coefficients in the residual coefficients of a coded/decoded block, the worse the prediction accuracy of its motion information.
  • for any coded/decoded block before the current image block, the number of non-zero coefficients in its residual coefficients can be determined, and it can be determined whether that number is greater than a preset number threshold (the preset number threshold can be set according to the actual scene).
  • if the number of non-zero coefficients is greater than the preset number threshold, the motion information of the encoded/decoded block is refused to be added to the encoded/decoded block motion information list.
  • otherwise, the motion information of the encoded/decoded block is added to the encoded/decoded block motion information list.
  • in this way, the motion information of the coded/decoded blocks with poor prediction accuracy is eliminated, which improves the accuracy of constructing the encoded/decoded block motion information list.
  • in another example, the above-mentioned filtering of the coded/decoded blocks before the current image block according to preset filtering conditions and construction of the coded/decoded block motion information list based on the filtered coded/decoded blocks may be performed based on the width and height of the coded/decoded blocks.
  • the probability that the motion information of an encoded/decoded block whose width and height are too large is selected as the final prediction information will be very low. Therefore, when constructing the encoded/decoded block motion information list, the encoded/decoded blocks can be filtered based on their width and height.
  • for any coded/decoded block, it can be determined whether the width of the coded/decoded block is greater than or equal to a first preset threshold (which can be set according to the actual scene) and whether the height of the coded/decoded block is greater than or equal to a second preset threshold (which can be set according to the actual scene).
  • when the width of the encoded/decoded block is greater than or equal to the first preset threshold and the height of the encoded/decoded block is greater than or equal to the second preset threshold, the motion information of the encoded/decoded block is refused to be added to the encoded/decoded block motion information list; when the width of the encoded/decoded block is less than the first preset threshold, and/or the height of the encoded/decoded block is less than the second preset threshold, the motion information of the encoded/decoded block is added to the encoded/decoded block motion information list.
  • the foregoing filtering and encoding / decoding blocks according to preset filtering conditions, and constructing a list of encoded / decoding block motion information based on the filtered and encoded / decoded blocks may include: for the current image Any encoded / decoded block before the block, when the quantization step size of the motion information of the encoded / decoded block is greater than or equal to a preset step threshold, the motion information of the encoded / decoded block is refused to be added to the encoded / decoded Block motion information list; when the quantization step size of the motion information of the encoded / decoded block is less than a preset step size threshold, add the motion information of the encoded / decoded block to the encoded / decoded block motion information list.
  • the quantization step size of the motion information of an encoded/decoded block can intuitively reflect the accuracy of that motion information. Therefore, when constructing the encoded/decoded block motion information list, the encoded/decoded blocks can be filtered based on the quantization step size of their motion information.
  • for example, for parameters 5, 6, 7, and 8: when the quantization step size is 1, the quantized values are 5, 6, 7, and 8; when the quantization step size is 2, the quantized values are 3 (corresponding to parameters 5 and 6) and 4 (corresponding to parameters 7 and 8); when the quantization step size is 4, the quantized value is 2. That is, the larger the quantization step size, the more parameters are quantized to the same value, and the accuracy decreases accordingly; therefore, the quantization step size is inversely related to accuracy.
  • for any coded/decoded block, the quantization step size of its motion information can be determined, and it can be determined whether the quantization step size is greater than or equal to a preset step size threshold (which can be set according to the actual scene).
  • when the quantization step size of the motion information of the encoded/decoded block is greater than or equal to the preset step size threshold, the motion information of the encoded/decoded block is refused to be added to the encoded/decoded block motion information list.
  • when the quantization step size of the motion information of the coded/decoded block is less than the preset step size threshold, the motion information of the coded/decoded block is added to the coded/decoded block motion information list.
  • in this way, the coded/decoded blocks are filtered based on the quantization step size of their motion information, and the motion information of coded/decoded blocks with too low prediction accuracy is eliminated, improving the accuracy of constructing the encoded/decoded block motion information list; the filtering conditions described above are illustrated together in the sketch below.
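The sketch puts the three preset filtering conditions discussed above side by side: a block's motion information is refused when its residual has too many non-zero coefficients, when its width and height are both too large, or when the quantization step size of its motion information is too coarse. The thresholds and the CodedBlock record are illustrative assumptions, not values from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

MV = Tuple[int, int]

@dataclass
class CodedBlock:
    mv: MV
    nonzero_residual_coeffs: int
    width: int
    height: int
    mv_quant_step: int

def passes_filter(blk: CodedBlock,
                  max_nonzero: int = 8,      # preset number threshold (assumed)
                  max_width: int = 64,       # first preset threshold (assumed)
                  max_height: int = 64,      # second preset threshold (assumed)
                  max_quant_step: int = 4    # preset step size threshold (assumed)
                  ) -> bool:
    """Return True if the block's motion information may join the list."""
    if blk.nonzero_residual_coeffs > max_nonzero:
        return False                     # inaccurate prediction: refuse
    if blk.width >= max_width and blk.height >= max_height:
        return False                     # block too large: refuse
    if blk.mv_quant_step >= max_quant_step:
        return False                     # motion information too coarse: refuse
    return True

def build_coded_block_mv_list(blocks: List[CodedBlock]) -> List[MV]:
    """Coded/decoded block motion information list built from the filtered blocks."""
    return [blk.mv for blk in blocks if passes_filter(blk)]

blocks = [
    CodedBlock((2, 1), nonzero_residual_coeffs=3, width=16, height=16, mv_quant_step=1),
    CodedBlock((5, 0), nonzero_residual_coeffs=20, width=16, height=16, mv_quant_step=1),
    CodedBlock((1, 1), nonzero_residual_coeffs=2, width=128, height=128, mv_quant_step=1),
]
print(build_coded_block_mv_list(blocks))  # only the first block survives
```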
  • Step S330 Select candidate motion information from the encoded / decoded block motion information list and add it to the motion information candidate list of the current image block, where the candidate motion information includes at least a motion vector.
  • candidate motion information may be selected from the coded/decoded block motion information list to be added to the motion information candidate list of the current image block.
  • the above-mentioned selection of candidate motion information from the encoded/decoded block motion information list to join the motion information candidate list of the current image block may include: sequentially selecting candidate motion information from the encoded/decoded block motion information list corresponding to each category and adding it to the motion information candidate list of the current image block, where different categories correspond to different priorities.
  • different classes of filtered encoded / decoded blocks have different priorities as candidate motion information.
  • candidate motion information can be selected sequentially from each encoded/decoded block motion information list in the order of the priorities of the lists corresponding to the categories, from high to low, and added to the motion information candidate list of the current image block, to ensure that motion information of encoded/decoded blocks with high accuracy is included in the motion information candidate list.
  • and that the motion information of encoded/decoded blocks with higher accuracy is ranked higher in the motion information candidate list.
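A sketch of the priority-based selection just described: each category has its own encoded/decoded block motion information list, the lists are visited from highest to lowest priority, and candidates are collected until the candidate list of the current image block is full. The priorities, list contents, and duplicate handling are assumptions made for illustration.

```python
from typing import Dict, List, Tuple

MV = Tuple[int, int]

def select_by_priority(lists_by_category: Dict[str, List[MV]],
                       priority_order: List[str],
                       max_candidates: int) -> List[MV]:
    """Visit per-category lists from high to low priority and collect candidates,
    skipping duplicates, until the motion information candidate list is full."""
    selected: List[MV] = []
    for category in priority_order:
        # Each list is consumed back to front, matching the selection order
        # described elsewhere in the document.
        for mv in reversed(lists_by_category.get(category, [])):
            if mv not in selected:
                selected.append(mv)
            if len(selected) == max_candidates:
                return selected
    return selected

lists = {
    "first":  [(1, 0), (2, 0)],
    "second": [(0, 1), (2, 0)],
}
print(select_by_priority(lists, priority_order=["first", "second"], max_candidates=3))
# -> [(2, 0), (1, 0), (0, 1)]
```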
  • the candidate motion information is not limited to motion vectors and may also include coding information other than motion vectors. Accordingly, the filtering, classification, and/or reordering of motion information may also involve coding information other than motion vectors, which is not described in detail in the embodiments of the present application.
  • the selection of candidate motion information from the encoded/decoded block motion information list to join the motion information candidate list of the current image block may include: determining an encoded/decoded block motion information list that matches the current image block; and selecting candidate motion information from the encoded/decoded block motion information list that matches the current image block to join the motion information candidate list of the current image block.
  • the above determining of the encoded/decoded block motion information list matching the current image block may include: determining a category of the current image block according to the shape of the current image block; and determining the encoded/decoded block motion information list matching that category.
  • the category of the current image block may be determined first according to the shape of the current image block; the coded/decoded block motion information list matching the category of the current image block may then be determined, and candidate motion information is selected from that coded/decoded block motion information list to be added to the motion information candidate list.
  • the encoded / decoded blocks are divided into three categories based on the shape of the encoded / decoded blocks (the first category, the second category, and the third category, respectively).
  • in step S310, when candidate motion information needs to be selected from the coded/decoded block motion information lists to be added to the motion information candidate list of the current image block: if the aspect ratio of the current image block is greater than 1, candidate motion information is selected from the encoded/decoded block motion information list corresponding to the first category of encoded/decoded blocks; if the aspect ratio of the current image block is less than 1, candidate motion information is selected from the encoded/decoded block motion information list corresponding to the second category of encoded/decoded blocks; and if the aspect ratio of the current image block is equal to 1, candidate motion information is selected from the encoded/decoded block motion information list corresponding to the third category of encoded/decoded blocks and added to the motion information candidate list of the current image block.
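As the example above shows, the list that supplies candidates can simply be chosen by comparing the current block's aspect ratio with 1. A minimal sketch, assuming the three category lists have already been built:

```python
from typing import Dict, List, Tuple

MV = Tuple[int, int]

def matching_category(width: int, height: int) -> str:
    """First category: aspect ratio > 1; second category: < 1; third category: == 1."""
    if width > height:
        return "first"
    if width < height:
        return "second"
    return "third"

def candidates_for_block(width: int, height: int,
                         lists_by_category: Dict[str, List[MV]]) -> List[MV]:
    """Return the encoded/decoded block motion information list whose category
    matches the shape of the current image block."""
    return lists_by_category[matching_category(width, height)]

lists = {"first": [(3, 0)], "second": [(0, 3)], "third": [(1, 1)]}
print(candidates_for_block(width=32, height=16, lists_by_category=lists))  # -> [(3, 0)]
```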
  • the coded/decoded blocks can also be classified, and different coded/decoded block motion information lists can be constructed according to the categories of the coded/decoded blocks; and/or, the motion information of the encoded/decoded blocks in the constructed encoded/decoded block motion information list can be reordered, further improving the accuracy of constructing the encoded/decoded block motion information list and improving video encoding performance.
  • the filtered encoded/decoded blocks are classified, and different categories of filtered encoded/decoded blocks are added to different encoded/decoded block motion information lists.
  • step S310 is further included: classifying the filtered coded/decoded blocks, and adding each filtered coded/decoded block to the corresponding coded/decoded block motion information list according to its category.
  • the filtered coded/decoded blocks can be classified according to characteristics of the coded/decoded blocks, such as shape, size, or prediction mode, and each filtered coded/decoded block can be added to the corresponding coded/decoded block motion information list according to its category.
  • the above-mentioned classification of the filtered encoded / decoded blocks may include: classifying the filtered encoded / decoded blocks according to the shape of the filtered encoded / decoded blocks.
  • classifying the filtered encoded/decoded blocks according to the shape of the filtered encoded/decoded blocks may include: when the aspect ratio of a filtered encoded/decoded block is greater than 1, dividing the filtered encoded/decoded block into the first category; and/or, when the aspect ratio of a filtered encoded/decoded block is less than 1, dividing the filtered encoded/decoded block into the second category.
  • the shape of the encoded / decoded block can be characterized by the aspect ratio of the encoded / decoded block.
  • the different shapes of the current image block can be characterized by comparing the aspect ratio of the current image block with 1: if the width-to-height ratio of the current image block is greater than 1, the current image block is a rectangle whose width is greater than its height; if the width-to-height ratio is equal to 1, the current image block is square; if the width-to-height ratio is less than 1, the current image block is a rectangle whose width is less than its height.
  • the filtered encoded / decoded blocks can be divided into different categories according to the aspect ratio of the filtered encoded / decoded blocks.
  • the category to which the filtered encoded/decoded blocks whose aspect ratio is greater than 1 belong is referred to as the first category; the category to which the filtered encoded/decoded blocks whose aspect ratio is less than 1 belong is referred to as the second category.
  • a filtered encoded/decoded block whose aspect ratio is equal to 1 may be assigned to the first category, to the second category, or to a new category.
  • the category to which the filtered coded / decoded block having an aspect ratio equal to 1 belongs may be referred to as a third category.
  • the above-mentioned classification of the coded / decoded blocks after filtering may include: when the product of the width and height of the screened coded / decoded blocks is greater than a preset threshold, The encoded / decoded blocks are divided into the first category; when the product of the width and height of the filtered encoded / decoded blocks is less than or equal to a preset threshold, the filtered encoded / decoded blocks are divided into the second category.
  • the filtered encoded / decoded blocks can be divided into different categories according to the size (ie, the product of width and height) of the filtered encoded / decoded blocks.
  • the category to which the filtered encoded/decoded blocks whose product of width and height is greater than a preset threshold (the preset threshold can be set according to the actual scene) belong is called the first category; the category to which the filtered encoded/decoded blocks whose product of width and height is less than or equal to the preset threshold belong is called the second category.
  • when the filtered coded/decoded blocks are classified according to the product of their width and height, two or more preset thresholds may also be used to divide the product of width and height into three or more intervals, with one category corresponding to each interval.
  • for example, with two thresholds Ta and Tb (Ta less than Tb), the coded/decoded blocks whose product of width and height is less than or equal to Ta can be assigned to the first category, the coded/decoded blocks whose product of width and height is greater than Ta and less than or equal to Tb to the second category, and the coded/decoded blocks whose product of width and height is greater than Tb to the third category.
  • it should be noted that, when the coded/decoded blocks are filtered based on their width and height (that is, the filtering described in the above embodiment based on the width being greater than or equal to the first preset threshold and the height being greater than or equal to the second preset threshold), and the filtered encoded/decoded blocks are then classified according to their size, the threshold used for classification needs to be less than the product of the first preset threshold and the second preset threshold.
  • the above-mentioned classification of the filtered encoded / decoded blocks may include: classifying the filtered encoded / decoded blocks according to a prediction mode of the filtered encoded / decoded blocks. .
  • the above classifying of the filtered encoded/decoded blocks according to the prediction mode of the filtered encoded/decoded blocks includes: when the prediction mode of a filtered encoded/decoded block is the merge mode, dividing the filtered encoded/decoded block into the first category; and when the prediction mode of a filtered encoded/decoded block is the AMVP mode, dividing the filtered encoded/decoded block into the second category.
  • the category to which the filtered encoded/decoded blocks whose prediction mode is the merge mode belong can be referred to as the first category; the category to which the filtered encoded/decoded blocks whose prediction mode is the AMVP mode belong can be referred to as the second category.
  • the classification is not limited to the above two categories; filtered coded/decoded blocks of other prediction modes may also be divided into other categories (such as a third category, a fourth category, etc.), and the specific implementation thereof will not be repeated here.
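The three classification criteria above (shape, size, prediction mode) can be summarised in one sketch; which criterion is used, and the thresholds Ta and Tb, are choices left open by the text, so the values and labels below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CodedBlock:
    width: int
    height: int
    mode: str   # "merge" or "amvp" (illustrative labels)

def classify_by_shape(blk: CodedBlock) -> str:
    """Aspect ratio > 1 -> first category; < 1 -> second; == 1 -> third."""
    if blk.width > blk.height:
        return "first"
    if blk.width < blk.height:
        return "second"
    return "third"

def classify_by_size(blk: CodedBlock, ta: int = 256, tb: int = 1024) -> str:
    """Two assumed thresholds Ta < Tb split width*height into three intervals."""
    area = blk.width * blk.height
    if area <= ta:
        return "first"
    if area <= tb:
        return "second"
    return "third"

def classify_by_mode(blk: CodedBlock) -> str:
    """Merge-coded blocks go to the first category, AMVP-coded blocks to the second."""
    return "first" if blk.mode == "merge" else "second"

blk = CodedBlock(width=32, height=16, mode="amvp")
print(classify_by_shape(blk), classify_by_size(blk), classify_by_mode(blk))
# -> first second second
```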
  • when the predicted motion information finally selected from the motion information candidate list is ranked near the front of the candidates, the number of bits required for encoding the index value can be reduced and the performance of video encoding can be improved; moreover, when performing motion information prediction, ranking highly relevant information near the front at the same index coding cost reduces the overhead of predicted motion information selection, which is beneficial to improving video coding performance.
  • therefore, the motion information in the coded/decoded block motion information list can be reordered so that the motion information with a higher probability of being selected as the final prediction information is placed toward the end of the coded/decoded block motion information list (candidate motion information is selected from the coded/decoded block motion information list from back to front), so that the candidate motion information selected as the final prediction information is ranked as high as possible among the motion information candidates.
  • after the coded/decoded block motion information list is constructed based on the filtered coded/decoded blocks (either the coded/decoded block motion information list is constructed directly after filtering the coded/decoded blocks, or, after filtering and classifying the coded/decoded blocks, multiple coded/decoded block motion information lists corresponding to different categories are constructed), the motion information of the filtered coded/decoded blocks in the coded/decoded block motion information list can be reordered.
  • when there are multiple lists, the motion information of the filtered coded/decoded blocks in each coded/decoded block motion information list may be reordered.
  • In an example, the method further includes step S320: reordering the motion information of the filtered encoded/decoded blocks based on the residual coefficients of the filtered encoded/decoded blocks.
  • Step S320 may be performed after step S310, and step S320 may also be replaced by: reordering the motion information of the filtered encoded/decoded blocks based on the residual coefficients of the classified encoded/decoded blocks.
  • The above-mentioned reordering of the motion information of the filtered encoded/decoded blocks based on their residual coefficients may include: reordering the motion information of the filtered encoded/decoded blocks in order of the number of non-zero residual coefficients, from most to least.
  • When candidate motion information is selected from the encoded/decoded block motion information list, it is usually selected in order from the back of the list to the front; therefore, when reordering, the motion information of encoded/decoded blocks that have a high probability of being selected as the final predicted motion information is placed at the end of the encoded/decoded block motion information list, and the number of non-zero residual coefficients can be used as the basis for this ordering.
  • After reordering, the motion information of the filtered encoded/decoded block with the most non-zero residual coefficients is ranked first in the encoded/decoded block motion information list, and the motion information of the filtered encoded/decoded block with the fewest non-zero residual coefficients is ranked last.
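  • The residual-based reordering can be sketched as follows; the entry structure is illustrative, and the only rule taken from the text is that entries with more non-zero residual coefficients are moved toward the head of the list and entries with fewer toward the tail (from which candidates are taken first).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HistoryEntry:
    mv: tuple
    nonzero_residual_count: int  # number of non-zero residual coefficients of the source block

def reorder_by_residual(entries: List[HistoryEntry]) -> List[HistoryEntry]:
    """Most non-zero residual coefficients first, fewest last (tail is selected first)."""
    return sorted(entries, key=lambda e: e.nonzero_residual_count, reverse=True)

entries = [HistoryEntry((1, 0), 5), HistoryEntry((2, 3), 0), HistoryEntry((4, 4), 12)]
reordered = reorder_by_residual(entries)
# The tail of `reordered` (selected first) now holds the entry with 0 non-zero coefficients.
```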
  • In an example, reordering the motion information of the filtered encoded/decoded blocks in the encoded/decoded block motion information list may include: reordering the motion information of the filtered encoded/decoded blocks based on the shape of the current image block and the relative positions of the filtered encoded/decoded blocks and the current image block.
  • That is, after the encoded/decoded block motion information list is constructed based on the motion information of the filtered encoded/decoded blocks, the motion information of the filtered encoded/decoded blocks can be reordered based on the shape of the current image block and the relative positions of the filtered encoded/decoded blocks and the current image block.
  • Reordering the motion information of the filtered encoded/decoded blocks based on the shape of the current image block and their positions relative to the current image block may include: when the aspect ratio of the current image block is greater than 1, reordering the motion information of the filtered encoded/decoded blocks so that the filtered encoded/decoded blocks on the left side of the current image block come first and the filtered encoded/decoded blocks on the upper side of the current image block come last.
  • For an image block whose width is greater than its height, the correlation between the surrounding blocks on its upper side and the block is higher than the correlation between the surrounding blocks on its left side and the block. Ranking the motion information of highly correlated candidate blocks first in the motion information candidate list reduces the index coding overhead on the one hand, and on the other hand increases the probability that the motion information of highly correlated candidate blocks is selected as the final predicted motion information, thereby improving video coding performance.
  • Therefore, when the aspect ratio of the current image block is greater than 1, the motion information of the filtered encoded/decoded blocks can be reordered with the filtered encoded/decoded blocks on the left side of the current image block first and those on the upper side last, so that, when candidate motion information is selected from the encoded/decoded block motion information list (from the end of the list forward), the motion information of the filtered encoded/decoded blocks on the upper side of the current image block is selected first and is ranked higher in the motion information candidate list than that of the filtered encoded/decoded blocks on the left side.
  • Reordering the motion information of the filtered encoded/decoded blocks based on the shape of the current image block and their positions relative to the current image block may also include: when the aspect ratio of the current image block is less than 1, reordering the motion information of the filtered encoded/decoded blocks so that the filtered encoded/decoded blocks on the upper side of the current image block come first and the filtered encoded/decoded blocks on the left side of the current image block come last.
  • For an image block whose height is greater than its width, the correlation between the surrounding blocks on its left side and the block is higher than the correlation between the surrounding blocks on its upper side and the block.
  • Therefore, when the aspect ratio of the current image block is less than 1, the motion information of the filtered encoded/decoded blocks can be reordered with the filtered encoded/decoded blocks on the upper side of the current image block first and those on the left side last, so that, when candidate motion information is selected from the encoded/decoded block motion information list, the motion information of the filtered encoded/decoded blocks on the left side of the current image block is selected first and is ranked higher in the motion information candidate list than that of the filtered encoded/decoded blocks on the upper side.
  • When the aspect ratio of the current image block is equal to 1, the motion information of the filtered encoded/decoded blocks in the encoded/decoded block motion information list may be reordered either according to the reordering used in the above example for an aspect ratio greater than 1, or according to the reordering used in the above example for an aspect ratio less than 1.
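  • A sketch of the shape-based reordering follows. The position labels ("left"/"above") and the stable two-group sort are illustrative; the rule implemented is the one described above: for wide blocks the left-side entries are placed first and the upper-side entries last, and for tall blocks the opposite.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HistoryEntry:
    mv: tuple
    position: str  # "left" or "above" relative to the current image block (illustrative)

def reorder_by_shape(entries: List[HistoryEntry], width: int, height: int) -> List[HistoryEntry]:
    """Candidates are taken from the tail first, so the more correlated side goes last."""
    if width > height:       # aspect ratio > 1: left first, above last (above is selected first)
        key = lambda e: 0 if e.position == "left" else 1
    elif width < height:     # aspect ratio < 1: above first, left last (left is selected first)
        key = lambda e: 0 if e.position == "above" else 1
    else:                    # aspect ratio == 1: either ordering may be used; pick the first
        key = lambda e: 0 if e.position == "left" else 1
    return sorted(entries, key=key)

entries = [HistoryEntry((1, 1), "above"), HistoryEntry((2, 0), "left")]
print(reorder_by_shape(entries, width=32, height=8))   # left entry first, above entry last
```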
  • In an example, when the filtered encoded/decoded blocks are classified in the manner described in step S310 and are added to the corresponding encoded/decoded block motion information lists according to their categories, the priority of each encoded/decoded block motion information list can be determined separately.
  • Taking the classification of the filtered encoded/decoded blocks according to their prediction mode described in step S310 as an example: since the accuracy of the motion information of encoded/decoded blocks in the AMVP mode is higher than the accuracy of the motion information of encoded/decoded blocks in the merge mode, the priority of the encoded/decoded block motion information list containing the motion information of the filtered encoded/decoded blocks whose prediction mode is the AMVP mode can be set higher than the priority of the encoded/decoded block motion information list containing the motion information of the filtered encoded/decoded blocks whose prediction mode is the merge mode.
  • In another example, when the filtered encoded/decoded blocks are classified in the manner described in step S310 and are added to the corresponding encoded/decoded block motion information lists according to their categories, the encoded/decoded block motion information list that matches the current image block may be determined first, and then candidate motion information is selected from the encoded/decoded block motion information list matching the current image block and added to the motion information candidate list.
  • It can be seen that, in the above method flow, the encoded/decoded blocks may be classified and encoded/decoded block motion information lists corresponding to the different categories of encoded/decoded blocks may be constructed respectively, which improves the accuracy of the construction of the encoded/decoded block motion information lists.
  • FIG. 4 is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • The method for constructing a motion information candidate list may include the following steps.
  • Step S400: Classify the encoded/decoded blocks before the current image block.
  • Step S410: Add the encoded/decoded block to the corresponding encoded/decoded block motion information list according to the category of the encoded/decoded block, where different categories correspond to different encoded/decoded block motion information lists.
  • Step S420: Select candidate motion information from the encoded/decoded block motion information list and add it to the motion information candidate list of the current image block, where the candidate motion information includes at least a motion vector.
  • For the specific implementation of classifying the encoded/decoded blocks before the current image block and constructing the encoded/decoded block motion information lists corresponding to the different categories of encoded/decoded blocks, reference may be made to the relevant description of the method flow shown in FIG. 3. The difference is that the filtered encoded/decoded blocks classified in the method flow shown in FIG. 3 are replaced here by unfiltered encoded/decoded blocks; the method for classifying the encoded/decoded blocks is the same as that in step S310 and is not repeated in this embodiment of the present application.
  • In an example, the encoded/decoded blocks may also be filtered first, and the encoded/decoded block motion information list may be constructed based on the filtered encoded/decoded blocks.
  • In an example, the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list may also be reordered; for the specific implementation, reference may be made to the relevant description of the method flow shown in FIG. 3, which is not repeated in this embodiment of the present application.
  • It should be noted that the candidate motion information is not limited to motion vectors and may also include coding information other than motion vectors; accordingly, the classification may also be based on coding information other than motion vectors, which is not described in detail in the embodiments of the present application.
  • In the embodiment of the present application, when the encoded/decoded block motion information list is constructed based on the motion information of the encoded/decoded blocks, the motion information of the encoded/decoded blocks in the list may be reordered to ensure that the motion information of the encoded/decoded block selected as the final predicted motion information is ranked high in the motion information candidate list, thereby improving coding performance.
  • FIG. 5 is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • The method for constructing a motion information candidate list may include the following steps:
  • Step S500: Construct an encoded/decoded block motion information list according to the motion information of the encoded/decoded blocks before the current image block.
  • In this step, the motion information of all the encoded/decoded blocks may be directly added to the same encoded/decoded block motion information list, or the encoded/decoded blocks may first be filtered and/or classified and the encoded/decoded block motion information list(s) constructed based on the filtered and/or classified encoded/decoded blocks; the specific implementation is not described here.
  • Step S510: Reorder the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list.
  • Step S520: Select candidate motion information from the reordered encoded/decoded block motion information list and add it to the motion information candidate list of the current image block, where the candidate motion information includes at least a motion vector.
  • For the specific implementation of reordering the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list, reference may be made to the relevant description of the method flow shown in FIG. 3, which is not repeated here in this embodiment of the present application.
  • It should be noted that, when the final motion information candidate list is constructed, the candidate motion information obtained by transforming the existing motion information, or the candidate motion information selected from the encoded/decoded block motion information list, can be ranked behind the spatial domain candidates (if any) and the time domain candidates (if any); that is, when the number of spatial domain candidates and time domain candidates does not meet the requirement, the candidate motion information obtained by transforming the existing motion information, or the candidate motion information selected from the encoded/decoded block motion information list, is added to the final motion information candidate list.
  • If, after the candidate motion information obtained by transforming the existing motion information is added to the final motion information candidate list, or after the candidate motion information selected from the encoded/decoded block motion information list is added, the number of candidates still does not meet the requirement, combined candidates (for the merge mode) and zero motion information can be further added to the final motion information candidate list.
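  • The assembly order described above can be sketched as follows; the maximum list size, the helper names, and the simple duplicate pruning are placeholders, not part of the patent text.

```python
from typing import List, Tuple

MV = Tuple[int, int]

def build_candidate_list(spatial: List[MV], temporal: List[MV],
                         history_or_transformed: List[MV],
                         combined: List[MV], max_size: int = 6) -> List[MV]:
    """Spatial and temporal candidates first; history/transformed candidates only if the
    list is still short; then combined candidates (merge mode) and finally zero MVs."""
    out: List[MV] = []
    for source in (spatial, temporal, history_or_transformed, combined):
        for mv in source:
            if len(out) >= max_size:
                return out
            if mv not in out:          # simple pruning of duplicates
                out.append(mv)
    while len(out) < max_size:         # pad with zero motion information
        out.append((0, 0))
    return out

cands = build_candidate_list([(1, 0)], [(2, 1)], [(3, 3), (1, 0)], [])
```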
  • It should be noted that the candidate motion information is not limited to motion vectors and may also include coding information other than motion vectors; accordingly, the ordering may also take into account coding information other than motion vectors, which is not described in detail in the embodiments of the present application.
  • In the embodiment of the present application, an encoded/decoded block motion information list may also be constructed based on the motion information of the encoded/decoded blocks, and candidate motion information may be selected from the encoded/decoded block motion information list and added to the motion information candidate list of the current image block, so as to increase the richness of the candidate samples.
  • the Affine mode may include the Affine Merge mode or the Affine AMVP mode.
  • the parameter model of the general Affine mode is composed of six parameters: a, b, c, d, e, and f.
  • In the Affine mode, the inter prediction block is divided into several small regions of equal size; the motion within each small region (i.e., each sub-block) is assumed to be uniform, while the motion compensation model of each small region is still a planar translation model (the image block is only translated in the image plane, and its shape and size are not changed; therefore, the motion of a sub-block can still be described by a motion vector).
  • When the scaling ratio between any two points of the affine object is consistent (and the angle formed by any two straight lines remains unchanged), the affine motion described by the six parameters a, b, c, d, e, f degenerates to a four-parameter affine motion model; that is, there is a certain relationship among the four parameters a, b, c, d.
  • In that case, only 4 sets of known (x, y) and (x′, y′) point pairs are needed to derive these four parameters.
  • FIG. 6A is a schematic flowchart of a method for constructing a motion information candidate list according to an embodiment of the present application.
  • The method for constructing a motion information candidate list may include the following steps.
  • Step S600: Construct an encoded/decoded block motion information list, where the encoded/decoded block motion information list includes the motion information of the encoded/decoded blocks before the current image block.
  • In the embodiment of the present application, an encoded/decoded block motion information list may be constructed based on the motion information of the encoded/decoded blocks, and candidate motion information may be selected from the encoded/decoded block motion information list and added to the motion information candidate list of the current image block.
  • Step S610: When the prediction mode of the current image block is the Affine mode, select candidate motion information from the motion information of the encoded/decoded blocks and add it to the motion information candidate list of the current image block.
  • That is, when the prediction mode of the current image block is the Affine mode, candidate motion information may also be selected from the motion information of the encoded/decoded blocks and added to the motion information candidate list of the current image block, so as to increase the richness of the candidate samples for motion information prediction in the Affine mode.
  • the motion information may include a motion vector, a reference frame index, a motion direction, and a parameter model.
  • For a 4-parameter model, the control point motion information of the current image block can be determined according to the motion information of the control points (including the upper-left control point and the upper-right control point) of the encoded/decoded block, and, based on the control point motion information of the current image block (denoted V0 (Vx0, Vy0) and V1 (Vx1, Vy1)), formula (5) is used to obtain the motion information of each sub-block of the current image block; the motion information under the 4-parameter model can represent the angle and speed of MV rotation in the plane.
  • For a 6-parameter model, the control point motion information of the current image block can be determined according to the motion information of the control points (including the upper-left control point, the upper-right control point, and the lower-left control point) of the encoded/decoded block, and, based on the control point motion information of the current image block (denoted V0 (Vx0, Vy0), V1 (Vx1, Vy1), and V2 (Vx2, Vy2)), formula (3) is used to obtain the motion information of each sub-block of the current image block; the motion information under the 6-parameter model can represent the angle, speed, and direction of MV rotation in stereo space.
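  • Formulas (3) and (5) themselves are not reproduced in this excerpt. The sketch below therefore uses the commonly used 4-parameter affine interpolation from the two control-point vectors V0 and V1 as an assumption; it is only meant to illustrate how per-sub-block motion vectors can be derived from control-point motion information.

```python
def affine_4param_subblock_mv(v0, v1, block_width, x, y):
    """Derive the MV at sub-block centre (x, y) from control points V0 (top-left)
    and V1 (top-right). Assumed model: rotation + scaling + translation in the plane."""
    dx = (v1[0] - v0[0]) / block_width   # horizontal gradient of the MV field
    dy = (v1[1] - v0[1]) / block_width   # vertical gradient of the MV field
    mvx = dx * x - dy * y + v0[0]
    mvy = dy * x + dx * y + v0[1]
    return (mvx, mvy)

# Example: 16x16 block, 4x4 sub-blocks; evaluate the MV at the centre of each sub-block.
v0, v1 = (4.0, 1.0), (6.0, 3.0)
sub_mvs = [affine_4param_subblock_mv(v0, v1, 16, sx + 2, sy + 2)
           for sy in range(0, 16, 4) for sx in range(0, 16, 4)]
```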
  • That is, the motion information candidate list constructed for motion information prediction in the Affine mode may include, in addition to spatial domain candidates and zero motion information, candidate motion information selected from the encoded/decoded block motion information list; a schematic diagram of the selected candidate motion information may be as shown in FIG. 6B.
  • In an example, selecting candidate motion information from the motion information of the encoded/decoded blocks and adding it to the motion information candidate list of the current image block includes: selecting candidate motion information from the motion information of the encoded/decoded blocks whose prediction mode is the Affine Merge mode and adding it to the motion information candidate list of the current image block; or, selecting candidate motion information from the motion information of the encoded/decoded blocks whose prediction mode is the Affine AMVP mode and adding it to the motion information candidate list of the current image block.
  • The motion information prediction of the Affine mode differs from the motion information prediction of the merge mode or the AMVP mode. Therefore, for motion information prediction in the Affine mode, when candidate motion information needs to be selected from the motion information of encoded/decoded blocks, it may be selected from the motion information of encoded/decoded blocks whose prediction mode is the Affine mode and added to the motion information candidate list of the current image block.
  • That is, when the prediction mode of the current image block is the Affine mode, candidate motion information may be selected from the motion information of the encoded/decoded blocks whose prediction mode is the Affine Merge mode and added to the motion information candidate list of the current image block, or candidate motion information may be selected from the motion information of the encoded/decoded blocks whose prediction mode is the Affine AMVP mode and added to the motion information candidate list of the current image block.
  • It should be noted that, for motion information prediction in the Affine mode, when candidate motion information is selected from the motion information of the encoded/decoded blocks, the selection is not limited to encoded/decoded blocks whose prediction mode is the Affine mode; the motion information of encoded/decoded blocks in non-Affine modes may also be selected.
  • When the selected candidate motion information is the motion information of a non-Affine-mode encoded/decoded block, it can be used as control point motion information, for example as the motion information of the upper-left corner and the upper-right corner (for a 4-parameter model), or as the motion information of the upper-left corner, the upper-right corner, and the lower-left corner (for a 6-parameter model); the specific implementation is not repeated here.
  • In an example, when the encoded/decoded block motion information list is constructed based on the encoded/decoded blocks whose prediction mode is the Affine mode, the encoded/decoded blocks whose prediction mode is the Affine mode may be classified, and each such encoded/decoded block may be added to the corresponding encoded/decoded block motion information list according to its category, where different categories correspond to different encoded/decoded block motion information lists.
  • In an example, the encoded/decoded blocks whose prediction mode is the Affine mode can be classified according to the parameter models of the encoded/decoded blocks whose prediction mode is the Affine mode.
  • For example, when the parameter model of an encoded/decoded block whose prediction mode is the Affine mode is a 2-parameter model, the encoded/decoded block can be classified into the first category; when the parameter model is a 4-parameter model, the encoded/decoded block can be classified into the second category; when the parameter model is a 6-parameter model, the encoded/decoded block can be classified into the third category.
  • In an example, selecting candidate motion information from the motion information of the encoded/decoded blocks and adding it to the motion information candidate list of the current image block may include: in descending order of the priority of the encoded/decoded block motion information list corresponding to each category, sequentially selecting candidate motion information from each encoded/decoded block motion information list and adding it to the motion information candidate list of the current image block, where different categories correspond to different priorities.
  • In this way, candidate motion information is selected from each encoded/decoded block motion information list in order of priority from high to low, so that the motion information of encoded/decoded blocks with higher accuracy is ranked higher in the motion information candidate list than the motion information of encoded/decoded blocks with lower accuracy.
  • For example, when the encoded/decoded blocks of the Affine mode are classified according to their parameter models (see the related description in the above embodiment), candidate motion information can be selected from the encoded/decoded block motion information lists in the order of List2 (corresponding to the 6-parameter model), List1 (corresponding to the 4-parameter model), and List0 (corresponding to the 2-parameter model), from first to last, and added to the motion information candidate list of the current image block.
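  • A sketch of the priority-ordered selection across the per-category lists follows; the list contents, the target number of candidates, and the duplicate check are illustrative.

```python
from typing import List, Tuple

MV = Tuple[int, int]

def select_by_priority(lists_by_priority: List[List[MV]], needed: int) -> List[MV]:
    """Visit the encoded/decoded block motion information lists from highest to lowest
    priority and take candidates until enough have been collected."""
    selected: List[MV] = []
    for candidate_list in lists_by_priority:
        for mv in candidate_list:
            if len(selected) >= needed:
                return selected
            if mv not in selected:
                selected.append(mv)
    return selected

# List2 (6-parameter) has the highest priority, then List1 (4-parameter), then List0 (2-parameter).
list2, list1, list0 = [(7, 7)], [(3, 1), (2, 2)], [(0, 1)]
candidates = select_by_priority([list2, list1, list0], needed=3)
```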
  • In an example, selecting candidate motion information from the encoded/decoded block motion information list and adding it to the motion information candidate list of the current image block may include: determining an encoded/decoded block motion information list that matches the current image block, and selecting candidate motion information from the encoded/decoded block motion information list that matches the current image block to add to the motion information candidate list of the current image block.
  • For example, the numbers of spatial domain candidates of the current image block that use the 2-parameter model, the 4-parameter model, and the 6-parameter model can be determined respectively, and candidate motion information can be selected from the encoded/decoded block motion information list corresponding to the parameter model with the largest number of spatial domain candidates and added to the motion information candidate list of the current image block.
  • For example, if the 4-parameter model has the largest number of spatial domain candidates, candidate motion information may be selected from List1 (corresponding to the 4-parameter model) and added to the motion information candidate list of the current image block.
  • In an example, when the encoded/decoded block motion information list is constructed, the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list may be updated in a first-in-first-out (FIFO) manner; that is, when the number of pieces of motion information of encoded/decoded blocks in the encoded/decoded block motion information list reaches a preset maximum number and the motion information of a new encoded/decoded block needs to be added, the motion information of the encoded/decoded block that was added to the encoded/decoded block motion information list first can be deleted, and the motion information of the new encoded/decoded block can be added.
  • It should be noted that the above FIFO method is only one specific implementation for updating the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list and is not a limitation on the scope of protection of the present application; that is, the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list may also be updated in other ways, and the specific implementation is not described here.
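  • A minimal FIFO update of the encoded/decoded block motion information list can be sketched with a bounded deque; the maximum length used here is a placeholder value.

```python
from collections import deque

MAX_LIST_SIZE = 6  # preset maximum number of entries (placeholder value)

history = deque(maxlen=MAX_LIST_SIZE)  # the oldest entry is dropped automatically when full

def add_motion_info(history: deque, mv) -> None:
    """Append the motion information of a newly encoded/decoded block; when the list is
    full, the entry added first is discarded (first-in-first-out)."""
    history.append(mv)

for i in range(8):
    add_motion_info(history, (i, -i))
# Only the 6 most recently added motion vectors remain in the list.
```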
  • In the embodiment of the present application, the existing motion information can also be scaled in a specified direction to obtain new candidate motion information.
  • The existing motion information may include motion information of a spatial domain candidate, motion information of a time domain candidate, and motion information of an encoded/decoded block.
  • That is, the existing motion information in the motion information candidate list can be scaled in a specified direction to obtain new candidate motion information.
  • Accordingly, the constructed motion information candidate list can include spatial domain candidates, time domain candidates, combined candidates, and zero motion information, and can also include candidate motion information obtained by scaling existing motion information in a specified direction.
  • In an example, when the existing motion information is scaled, the motion vector can be scaled along the direction of the motion vector.
  • For example, with (mx, my) denoting the original motion vector, the scaled motion vector 702 is (mx + delta_mv_x, my + delta_mv_y), where delta_mv_x is the scaling amount in the x direction and delta_mv_y is the scaling amount in the y direction.
  • It should be noted that the scaling of the existing motion information is not limited to scaling along the direction of the motion vector and may be performed in other directions, which is not described in the embodiment of the present application.
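  • The scaling along the motion vector direction can be sketched as follows; the scale factor is illustrative, and delta_mv_x / delta_mv_y are derived here simply as the change produced by that factor.

```python
def scale_motion_vector(mx: float, my: float, scale: float = 1.25):
    """Scale (mx, my) along its own direction: the scaled vector keeps the direction and
    changes only the magnitude, so the offsets are proportional to the components."""
    delta_mv_x = (scale - 1.0) * mx   # scaling amount in the x direction
    delta_mv_y = (scale - 1.0) * my   # scaling amount in the y direction
    return (mx + delta_mv_x, my + delta_mv_y)

scaled = scale_motion_vector(8.0, -4.0)   # -> (10.0, -5.0): same direction, longer vector
```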
  • frame-level, slice-level, or row-level scaling may be performed.
  • For example, the amplitude of scaling the existing motion information can be determined according to the complexity and the motion intensity of the previous frame of the frame to which the existing motion information belongs.
  • The complexity of a frame and the intensity of its motion can be characterized by the proportion of zero motion vectors.
  • In an example, when the existing motion information is scaled, block-level scaling can be performed.
  • For any data block (which can be a spatial domain candidate, a time domain candidate, or an encoded/decoded block), whether the block needs to be scaled, and by how much, can be determined according to the similarity of the motion information of the neighboring blocks of the block.
  • When the determined scaling amplitude is 0, it can be determined that no scaling needs to be performed.
  • For example, for data block 1 the determined scaling amplitude is delta_mv0, where the neighboring blocks are A0, A1, B0, B1, and B2; the similarity of the neighboring blocks at different positions of data block 2 is low, and the determined scaling amplitude is delta_mv1 (the illustration of the neighboring blocks is omitted in the figure); the similarity of the neighboring blocks at different positions of data block 3 is high, and the determined scaling amplitude is delta_mv2 (the illustration of the neighboring blocks is omitted in the figure); then delta_mv2 < delta_mv1 < delta_mv0.
  • the existing motion information may include motion information of a spatial domain candidate, motion information of a time domain candidate, and motion information of an encoded / decoded block.
  • In the embodiment of the present application, new candidate motion information can also be obtained by weighting existing motion information, and the weighting ratio of each piece of existing motion information can be adaptively adjusted according to the characteristics of its source block (which can be a spatial domain candidate, a time domain candidate, or an encoded/decoded block).
  • For example, suppose that source block A and source block B of the existing motion information both refer to the same frame, the motion vector of source block A is (amx, amy), and the motion vector of source block B is (bmx, bmy).
  • Suppose further that the weighting coefficient of source block A is W0 and the weighting coefficient of source block B is W1.
  • Then the weighted motion vector (Mv_x, Mv_y) can be calculated by formulas (6) and (7).
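  • Formulas (6) and (7) are not reproduced in this excerpt; the sketch below assumes a normalized weighted average of the two motion vectors, which is one straightforward way to realize the weighting described above.

```python
def weighted_motion_vector(amx, amy, bmx, bmy, w0, w1):
    """Combine the motion vectors of source blocks A and B (both referring to the same
    frame) with weights W0 and W1; the result is normalized by the weight sum."""
    total = w0 + w1
    mv_x = (w0 * amx + w1 * bmx) / total
    mv_y = (w0 * amy + w1 * bmy) / total
    return (mv_x, mv_y)

# Example: A is weighted twice as strongly as B.
mv = weighted_motion_vector(4, 2, -2, 6, w0=2, w1=1)   # -> (2.0, 3.33...)
```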
  • In the embodiment of the present application, an encoded/decoded block motion information list may be constructed based on the motion information of the encoded/decoded blocks, and candidate motion information may be selected from the encoded/decoded block motion information list and added to the motion information candidate list of the current image block.
  • In an example, the encoded/decoded blocks can be filtered first, and the encoded/decoded block motion information list can be constructed based on the motion information of the filtered encoded/decoded blocks.
  • The number of non-zero coefficients among the residual coefficients of an encoded/decoded block intuitively reflects the accuracy of its prediction. Therefore, the encoded/decoded blocks can be filtered based on the number of non-zero coefficients among their residual coefficients, so as to remove encoded/decoded blocks with poor prediction accuracy.
  • For any encoded/decoded block, when adding the encoded/decoded block to the encoded/decoded block motion information list, it can be determined whether the number of non-zero coefficients among the residual coefficients of the encoded/decoded block is greater than or equal to a preset number threshold; if so, the motion information of the encoded/decoded block is not added to the encoded/decoded block motion information list; otherwise, the motion information of the encoded/decoded block is added to the encoded/decoded block motion information list.
  • In addition, the encoded/decoded blocks may be filtered based on the width and height of the encoded/decoded blocks, so as to remove encoded/decoded blocks whose motion information has a low probability of being selected as the final predicted motion information.
  • For any encoded/decoded block, when adding the encoded/decoded block to the encoded/decoded block motion information list, it can be determined whether the width of the encoded/decoded block is greater than or equal to a first preset threshold (taking 64 as an example) and whether its height is greater than or equal to a second preset threshold (taking 64 as an example); if the width of the encoded/decoded block is greater than or equal to 64 and its height is greater than or equal to 64, the motion information of the encoded/decoded block is added to the encoded/decoded block motion information list; otherwise (the width of the encoded/decoded block is less than 64 and/or its height is less than 64), the motion information of the encoded/decoded block is not added to the encoded/decoded block motion information list.
  • In addition, the quantization step size of an encoded/decoded block can intuitively reflect the accuracy of the motion information of the encoded/decoded block. Therefore, the encoded/decoded blocks can be filtered based on their quantization step size, so as to remove the motion information of encoded/decoded blocks whose accuracy is too low.
  • For any encoded/decoded block, when adding the encoded/decoded block to the encoded/decoded block motion information list, it can be determined whether the quantization step size of the motion information of the encoded/decoded block is greater than or equal to a preset step size threshold (taking 2 as an example); if so, the motion information of the encoded/decoded block is not added to the encoded/decoded block motion information list; otherwise, the motion information of the encoded/decoded block is added to the encoded/decoded block motion information list.
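  • The three filtering criteria above can be sketched together as follows; the width/height thresholds (64, 64) and the quantization step threshold (2) follow the examples in the text, the residual-count threshold is a placeholder since no example value is given, and the block fields are illustrative.

```python
from dataclasses import dataclass

@dataclass
class CodedBlock:
    mv: tuple
    width: int
    height: int
    nonzero_residual_count: int
    quant_step: float

MIN_WIDTH, MIN_HEIGHT = 64, 64     # example thresholds from the text
MAX_NONZERO_RESIDUALS = 8          # placeholder threshold
MAX_QUANT_STEP = 2                 # example threshold from the text

def keep_in_history(block: CodedBlock) -> bool:
    """Return True if the block's motion information should enter the list."""
    if block.nonzero_residual_count >= MAX_NONZERO_RESIDUALS:
        return False            # prediction too inaccurate
    if block.width < MIN_WIDTH or block.height < MIN_HEIGHT:
        return False            # low probability of being selected as the final predictor
    if block.quant_step >= MAX_QUANT_STEP:
        return False            # motion information not accurate enough
    return True

history = [b.mv for b in [CodedBlock((1, 2), 64, 64, 3, 1.0)] if keep_in_history(b)]
```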
  • In an example, the encoded/decoded blocks may be classified first, and encoded/decoded blocks of different categories are added to different encoded/decoded block motion information lists.
  • Taking classification of the encoded/decoded blocks according to the shape of the encoded/decoded block as an example, for any encoded/decoded block, the shape of the encoded/decoded block (taking the aspect ratio as an example) can be determined; when the aspect ratio of the encoded/decoded block is greater than or equal to 1, the motion information of the encoded/decoded block is added to encoded/decoded block motion information list 1 (which may be called List0); when the aspect ratio of the encoded/decoded block is less than 1, the motion information of the encoded/decoded block is added to encoded/decoded block motion information list 2 (which may be called List1).
  • In this case, when candidate motion information is selected from the encoded/decoded block motion information lists, the matching encoded/decoded block motion information list may be determined according to the shape of the current image block, and candidate motion information is selected from the matching encoded/decoded block motion information list and added to the motion information candidate list of the current image block.
  • For example, when candidate motion information needs to be selected from the encoded/decoded block motion information lists and added to the motion information candidate list of the current image block, the aspect ratio of the current image block is determined; if the aspect ratio of the current image block is greater than or equal to 1, candidate motion information is selected from List0 and added to the motion information candidate list of the current image block; if the aspect ratio of the current image block is less than 1, candidate motion information is selected from List1 and added to the motion information candidate list of the current image block.
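  • A sketch of the shape-based classification and matching selection follows; List0 and List1 hold the motion information of wide (aspect ratio ≥ 1) and tall (aspect ratio < 1) encoded/decoded blocks respectively, as described above, and the dictionary-based bookkeeping is illustrative.

```python
from typing import Dict, List, Tuple

MV = Tuple[int, int]

def add_by_shape(lists: Dict[str, List[MV]], mv: MV, width: int, height: int) -> None:
    """Wide blocks (aspect ratio >= 1) go to List0, tall blocks (< 1) to List1."""
    target = "List0" if width >= height else "List1"
    lists[target].append(mv)

def select_matching(lists: Dict[str, List[MV]], cur_width: int, cur_height: int) -> List[MV]:
    """Pick the list whose shape class matches the current image block."""
    return lists["List0"] if cur_width >= cur_height else lists["List1"]

lists: Dict[str, List[MV]] = {"List0": [], "List1": []}
add_by_shape(lists, (2, 1), width=32, height=8)    # wide block -> List0
add_by_shape(lists, (0, 3), width=8, height=32)    # tall block -> List1
matching = select_matching(lists, cur_width=64, cur_height=16)   # current block is wide
```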
  • In an example, the encoded/decoded blocks can be classified according to the size of the encoded/decoded block (taking the product of width and height as an example).
  • For any encoded/decoded block, the product of the width and height of the encoded/decoded block can be determined, and it is judged whether the product is greater than or equal to a preset threshold (taking 2048 as an example); if so, the motion information of the encoded/decoded block is added to encoded/decoded block motion information list 1 (which may be called List0); otherwise, the motion information of the encoded/decoded block is added to encoded/decoded block motion information list 2 (which may be called List1).
  • In this case, candidate motion information can be selected from each encoded/decoded block motion information list in descending order of the priority of the lists and added to the motion information candidate list of the current image block.
  • For example, assuming that the priority of List1 is higher than the priority of List0, when candidate motion information needs to be selected from the encoded/decoded block motion information lists and added to the motion information candidate list of the current image block, candidate motion information is selected from List1 and List0, in the order of List1 first and then List0, and added to the motion information candidate list of the current image block.
  • the coded / decoded blocks can be classified according to the prediction mode of the coded / decoded blocks.
  • For any encoded/decoded block, the prediction mode of the encoded/decoded block can be determined; when the prediction mode of the encoded/decoded block is the merge mode, the motion information of the encoded/decoded block is added to encoded/decoded block motion information list 1 (which may be called List0); when the prediction mode of the encoded/decoded block is the AMVP mode, the motion information of the encoded/decoded block is added to encoded/decoded block motion information list 2 (which may be called List1).
  • In this case, candidate motion information can be selected from each encoded/decoded block motion information list in descending order of the priority of the lists and added to the motion information candidate list of the current image block.
  • For example, the priority of List1 is higher than the priority of List0; therefore, when candidate motion information needs to be selected from the encoded/decoded block motion information lists and added to the motion information candidate list of the current image block, candidate motion information is selected from List1 and List0, in the order of List1 first and then List0, and added to the motion information candidate list of the current image block.
  • In an example, after the encoded/decoded block motion information list is constructed based on the motion information of the encoded/decoded blocks, the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list can be reordered.
  • In an example, the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list can be reordered based on the residual coefficients of the encoded/decoded blocks. For example, the motion information of the encoded/decoded blocks in the encoded/decoded block motion information list can be reordered in order of the number of non-zero residual coefficients, from most to least.
  • In HMVP (History-based Motion Vector Prediction), when candidate motion information is selected from the encoded/decoded block motion information list and added to the motion information candidate list of the current image block, the selection is performed in order from the end of the list to the head (that is, the later a piece of motion information was added to the list, the earlier it is selected).
  • In an example, the motion information of the encoded/decoded blocks can be reordered based on the shape of the current image block and the relative positions of the encoded/decoded blocks and the current image block.
  • For example, suppose encoded/decoded block A is an encoded/decoded block on the left side of the current image block and encoded/decoded block B is an encoded/decoded block on the upper side of the current image block. When the aspect ratio of the current image block is greater than 1, the motion information of encoded/decoded block A may be placed before the motion information of encoded/decoded block B; when the aspect ratio of the current image block is less than 1, the motion information of encoded/decoded block A may be placed after the motion information of encoded/decoded block B.
  • In the embodiment of the present application, an encoded/decoded block motion information list can also be constructed based on the motion information of the encoded/decoded blocks, and candidate motion information can be selected from the encoded/decoded block motion information list and added to the motion information candidate list of the current image block.
  • For example, the motion parameter model (motion model) information of the encoded/decoded blocks in the Affine mode can be stored as a candidate list (that is, the encoded/decoded block motion information list).
  • The length of the list is L.
  • The list members (motion model information members, referred to as MMIC) are updated using the FIFO method or other methods; a schematic diagram can be as shown in FIG. 7M.
  • the candidate motion information may be selected from the encoded / decoded block motion information list (also may be referred to as candidate motion parameter model information in this embodiment).
  • In an example, when the encoded/decoded block motion information list is constructed, the encoded/decoded blocks of the Affine mode can be classified according to the parameter models of the encoded/decoded blocks of the Affine mode, and encoded/decoded blocks of the Affine mode of different categories are added to different encoded/decoded block motion information lists.
  • For example, when the parameter model of an encoded/decoded block of the Affine mode is a 2-parameter model, the motion parameter model information of the encoded/decoded block is added to encoded/decoded block motion information list 1 (which may be called List0); when the parameter model of an encoded/decoded block of the Affine mode is a 4-parameter model, the motion parameter model information of the encoded/decoded block is added to encoded/decoded block motion information list 2 (which may be called List1); when the parameter model of an encoded/decoded block of the Affine mode is a 6-parameter model, the motion parameter model information of the encoded/decoded block is added to encoded/decoded block motion information list 3 (which may be called List2).
  • In this case, when candidate motion information is selected, the encoded/decoded block motion information list that matches the current image block may be selected, or candidate motion information may be selected sequentially from each encoded/decoded block motion information list in descending order of the priority of the lists; for the specific implementation, reference may be made to the relevant descriptions of the method flow shown in FIG. 6A, which are not repeated here in this embodiment of the present application.
  • In addition, considering that the encoded/decoded block motion information list usually includes only the motion information of the encoded/decoded blocks in the CTU to which the current image block belongs and in the CTU to its left, the motion information of other encoded/decoded blocks that are adjacent to the current image block but whose coding order is much earlier than that of the current image block (such as the encoded/decoded blocks above the current image block) is no longer in the motion information list.
  • the candidate motion information may also be selected from the motion information of each of the encoded / decoded blocks in the CTUs in several rows above the CTU to which the current image block belongs.
  • For example, the motion information of the encoded/decoded blocks in each CTU of the row above the CTU to which the current image block belongs can be stored separately.
  • The number of stored pieces of motion information of encoded/decoded blocks in each CTU does not exceed L2 (L1 and L2 may be the same or different).
  • The motion information of the encoded/decoded blocks in a CTU can be stored in accordance with their coding order.
  • In this way, candidate motion information can also be selected from the motion information of the encoded/decoded blocks in the CTU directly above the CTU to which the current image block belongs, further increasing the richness of the candidate samples.
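  • The per-CTU storage for the row above can be sketched as follows; the limit L2 and the lookup by CTU column index are illustrative assumptions about how such a per-CTU buffer could be organized.

```python
from collections import deque
from typing import Dict, Tuple

L2 = 6  # maximum number of stored motion vectors per CTU in the row above (placeholder)

# One bounded buffer per CTU of the row above, keyed by the CTU column index.
above_row_history: Dict[int, deque] = {}

def store_above_ctu_mv(ctu_col: int, mv: Tuple[int, int]) -> None:
    """Store motion information of an encoded/decoded block in coding order, per CTU."""
    above_row_history.setdefault(ctu_col, deque(maxlen=L2)).append(mv)

def candidates_from_above(cur_ctu_col: int):
    """Candidates taken from the CTU directly above the current image block's CTU."""
    return list(above_row_history.get(cur_ctu_col, []))

store_above_ctu_mv(3, (1, -2))
store_above_ctu_mv(3, (0, 4))
above_cands = candidates_from_above(3)
```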
  • FIG. 8 is a schematic diagram of a hardware structure of a device for constructing a motion information candidate list provided by an embodiment of the present application.
  • the motion information candidate list construction device may include a processor 801, a communication interface 802, a memory 803, and a communication bus 804.
  • the processor 801, the communication interface 802, and the memory 803 complete communication with each other through a communication bus 804.
  • A computer program is stored in the memory 803; the processor 801 may execute the computer program stored in the memory 803 to implement the method for constructing a motion information candidate list corresponding to FIG. 2A, FIG. 3, FIG. 4, FIG. 5 or FIG. 6A.
  • the memory 803 mentioned herein may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information such as executable instructions, data, and so on.
  • The memory 803 may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as an optical disc or a DVD), or a similar storage medium, or a combination thereof.
  • the apparatus for constructing the motion information candidate list may be an encoding device or a decoding device.
  • An embodiment of the present application further provides a machine-readable storage medium storing a computer program, such as the memory 803 in FIG. 8; the computer program can be executed by the processor 801 in the motion information candidate list construction apparatus shown in FIG. 8 to implement the method for constructing a motion information candidate list corresponding to FIG. 2A, FIG. 3, FIG. 4, FIG. 5 or FIG. 6A.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method and an apparatus for constructing a motion information candidate list, and a machine-readable storage medium. The method comprises: acquiring existing motion information of a current image block, the existing motion information comprising at least a motion vector; transforming the existing motion information; and adding the transformed motion information, as candidate motion information, to a motion information candidate list of the current image block.
PCT/CN2019/106473 2018-09-20 2019-09-18 Procédé et appareil pour la création d'une liste d'informations de mouvement candidates, et support de stockage lisible par machine WO2020057556A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811102877.6 2018-09-20
CN201811102877.6A CN110933439B (zh) 2018-09-20 2018-09-20 运动信息候选者列表构建方法、装置及可读存储介质

Publications (1)

Publication Number Publication Date
WO2020057556A1 true WO2020057556A1 (fr) 2020-03-26

Family

ID=69855583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106473 WO2020057556A1 (fr) 2018-09-20 2019-09-18 Procédé et appareil pour la création d'une liste d'informations de mouvement candidates, et support de stockage lisible par machine

Country Status (2)

Country Link
CN (1) CN110933439B (fr)
WO (1) WO2020057556A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111698506B (zh) 2019-03-11 2022-04-26 杭州海康威视数字技术股份有限公司 运动信息候选者列表构建方法、三角预测解码方法及装置
CN112073735B (zh) * 2020-11-16 2021-02-02 北京世纪好未来教育科技有限公司 视频信息处理方法、装置、电子设备及存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101257630A (zh) * 2008-03-25 2008-09-03 浙江大学 结合三维滤波的视频编码方法和装置
CN102714736A (zh) * 2010-01-19 2012-10-03 三星电子株式会社 基于减少的运动矢量预测候选对运动矢量进行编码和解码的方法和设备
CN102934434A (zh) * 2010-07-12 2013-02-13 联发科技股份有限公司 时间运动矢量预测的方法与装置
CN103430547A (zh) * 2011-03-08 2013-12-04 Jvc建伍株式会社 动图像编码装置、动图像编码方法及动图像编码程序、及动图像解码装置、动图像解码方法及动图像解码程序
CN108141588A (zh) * 2015-09-24 2018-06-08 Lg电子株式会社 图像编码***中的帧间预测方法和装置
WO2017157259A1 (fr) * 2016-03-15 2017-09-21 Mediatek Inc. Procédé et appareil de codage vidéo avec compensation de mouvement affine
WO2017209455A2 (fr) * 2016-05-28 2017-12-07 세종대학교 산학협력단 Procédé et appareil de codage ou de décodage d'un signal vidéo
WO2018097117A1 (fr) * 2016-11-22 2018-05-31 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Dispositif de codage, dispositif de décodage, procédé de codage et procédé de décodage

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU, MINHUA: "Parallelized merge/skip mode for HEVC", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/ IEC JTC1/SC29/WG11, 22 July 2011 (2011-07-22), XP030049045 *

Also Published As

Publication number Publication date
CN110933439A (zh) 2020-03-27
CN110933439B (zh) 2022-05-31

Similar Documents

Publication Publication Date Title
TWI736903B (zh) 非對稱加權雙向預測Merge
CN111279695B (zh) 用于基于非对称子块的图像编码/解码的方法及装置
TWI729402B (zh) 加權交織預測
TWI736905B (zh) 色度解碼器側運動向量細化
TW201904299A (zh) 運動向量預測
CN110740321B (zh) 基于更新的运动矢量的运动预测
CN111164978A (zh) 用于对图像进行编码/解码的方法和设备以及用于存储比特流的记录介质
CN112369021A (zh) 用于吞吐量增强的图像编码/解码方法和设备以及存储比特流的记录介质
TWI722465B (zh) 子塊的邊界增強
TW202007154A (zh) 交織預測的改善
CN113273188A (zh) 图像编码/解码方法和装置以及存储有比特流的记录介质
WO2020057556A1 (fr) Procédé et appareil pour la création d'une liste d'informations de mouvement candidates, et support de stockage lisible par machine
TWI833795B (zh) 交織預測的快速編碼方法
CN110876064B (zh) 部分交织的预测
TWI719524B (zh) 降低非相鄰Merge設計的複雜度
CN110557639B (zh) 交织预测的应用

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19862237

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19862237

Country of ref document: EP

Kind code of ref document: A1
