CN112055220A - Encoding and decoding method, device and equipment - Google Patents

Encoding and decoding method, device and equipment

Info

Publication number
CN112055220A
Authority
CN
China
Prior art keywords: motion information, prediction mode, block, peripheral matching, sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910487541.4A
Other languages
Chinese (zh)
Other versions
CN112055220B (en)
Inventor
方树清
陈方栋
王莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202211098101.8A (published as CN115460424A)
Priority to CN201910487541.4A (published as CN112055220B)
Priority to PCT/CN2020/092406 (published as WO2020244425A1)
Publication of CN112055220A
Application granted
Publication of CN112055220B
Legal status: Active (granted)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/513: Processing of motion vectors
    • H04N 19/52: Processing of motion vectors by predictive encoding
    • H04N 19/573: Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N 19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides an encoding and decoding method, apparatus, and device. The method includes: acquiring at least one motion information angle prediction mode of a current block; for each motion information angle prediction mode, selecting, based on the preconfigured angle of that mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle from the peripheral blocks of the current block; if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode to a motion information prediction mode candidate list; and encoding or decoding the current block according to the motion information prediction mode candidate list. This scheme improves coding performance.

Description

Encoding and decoding method, device and equipment
Technical Field
The present application relates to the field of encoding and decoding technologies, and in particular, to an encoding and decoding method, apparatus, and device.
Background
To save space, video images are encoded before transmission. A complete video coding method may include prediction, transform, quantization, entropy coding, filtering, and other processes. Predictive coding includes intra-frame coding and inter-frame coding; inter-frame coding exploits the temporal correlation of video by predicting the pixels of the current image from pixels of neighboring encoded images, thereby effectively removing temporal redundancy.
In inter-frame coding, a Motion Vector (MV) represents the relative displacement between the current image block of the current frame and a reference image block of a reference frame. For example, when video image A of the current frame has a strong temporal correlation with video image B of the reference frame, and image block A1 (the current image block) of video image A needs to be transmitted, a motion search may be performed in video image B to find the image block B1 (the reference image block) that best matches A1; the relative displacement between A1 and B1 is the motion vector of A1.
In the prior art, the current coding unit is not divided into blocks; instead, only one piece of motion information is determined for the entire coding unit, signaled directly by a motion information index or a difference information index.
Since all sub-blocks inside the current coding unit share this single piece of motion information, for some small moving objects the best motion information can be obtained only after the coding unit is further divided into blocks. However, dividing the current coding unit into a plurality of sub-blocks generates additional bit overhead.
Disclosure of Invention
The application provides an encoding and decoding method, apparatus, and device, which can improve coding performance.
The application provides a coding and decoding method, which comprises the following steps:
acquiring at least one motion information angle prediction mode of a current block;
for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode;
if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
encoding or decoding the current block according to the motion information prediction mode candidate list.
The present application provides a coding and decoding device, the device includes:
an obtaining module, configured to obtain at least one motion information angle prediction mode of a current block;
a processing module, configured to select, for each motion information angle prediction mode, a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode; if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
and the coding and decoding module is used for coding or decoding the current block according to the motion information prediction mode candidate list.
The application provides a decoding-side device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring at least one motion information angle prediction mode of a current block;
for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode;
if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
and decoding the current block according to the motion information prediction mode candidate list.
The application provides an encoding-side device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring at least one motion information angle prediction mode of a current block;
for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode;
if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
and encoding the current block according to the motion information prediction mode candidate list.
According to the above technical solution, the current block does not need to be divided, which effectively avoids the bit overhead caused by sub-block division. That is, without dividing the current block into sub-blocks, motion information is provided for each sub-region of the current block, and different sub-regions may correspond to the same or different motion information; this improves coding performance, avoids transmitting a large amount of motion information, and saves a large number of bits. Adding to the motion information prediction mode candidate list only those motion information angle prediction modes whose peripheral motion information is not completely the same reduces the number of modes in the list, which further improves coding performance.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for their description are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them.
FIG. 1 is a schematic diagram of a video coding framework in one embodiment of the present application;
FIGS. 2A and 2B are schematic diagrams of the partitioning in one embodiment of the present application;
FIGS. 3A-3F are schematic diagrams of an application scenario in an embodiment of the present application;
FIG. 4 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIGS. 5A and 5B are schematic diagrams of a motion information angle prediction mode in an embodiment of the present application;
FIG. 6 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIG. 7 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIGS. 8A and 8B are schematic diagrams of motion information padding for an uncoded block and an intra-coded block;
FIGS. 9A-9C are schematic diagrams of peripheral blocks of a current block in one embodiment of the present application;
FIGS. 10A-10N are schematic diagrams of a perimeter matching block in one embodiment of the present application;
FIG. 11 is a block diagram of an encoding and decoding apparatus according to an embodiment of the present application;
FIG. 12 is a hardware configuration diagram of a decoding-side device according to an embodiment of the present application;
FIG. 13 is a hardware configuration diagram of an encoding-side device according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, etc. may be used herein to describe various information in the embodiments of the present application, the information should not be limited by these terms; these terms are only used to distinguish one type of information from another. For example, first information may be referred to as second information, and similarly, second information may be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
The embodiment of the application provides a coding and decoding method, which can relate to the following concepts:
motion Vector (MV): in inter-frame coding, a motion vector is used to represent a relative displacement between a current image block of a current frame video image and a reference image block of a reference frame video image, for example, there is a strong temporal correlation between video image a of the current frame and video image B of the reference frame, when transmitting image block a1 (current image block) of video image a, a motion search may be performed in video image B to find image block B1 (reference image block) that best matches image block a1, and determine a relative displacement between image block a1 and image block B1, which is also the motion vector of image block a 1. Each divided image block has a corresponding motion vector transmitted to a decoding side, and if the motion vector of each image block is independently encoded and transmitted, especially divided into a large number of image blocks of small size, a considerable number of bits are consumed. In order to reduce the bit number used for encoding the motion vector, the spatial correlation between adjacent image blocks can be utilized, the motion vector of the current image block to be encoded is predicted according to the motion vector of the adjacent encoded image block, and then the prediction difference is encoded, so that the bit number representing the motion vector can be effectively reduced.
For example, when encoding the motion vector of the current image block, the motion vector of the current block may first be predicted from the motion vectors of neighboring encoded blocks, and then only the Motion Vector Difference (MVD) between the Motion Vector Predictor (MVP) and the actual motion vector is encoded, effectively reducing the number of bits spent on the motion vector.
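The following C++ sketch illustrates this predictive coding of motion vectors in its simplest form: only the residual MVD = MV - MVP is signaled, and the decoder reconstructs MV = MVP + MVD. The struct and function names are illustrative assumptions, not part of this disclosure.

```cpp
#include <cstdint>
#include <iostream>

struct MotionVector {
    int32_t x; // horizontal component, e.g. in quarter-pel units
    int32_t y; // vertical component
};

// Encoder side: derive a predictor from already-coded neighbouring blocks and
// write only the residual vector (MVD) to the bitstream.
MotionVector computeMvd(const MotionVector& mv, const MotionVector& mvp) {
    return { mv.x - mvp.x, mv.y - mvp.y };
}

// Decoder side: reverse the operation, MV = MVP + MVD.
MotionVector reconstructMv(const MotionVector& mvp, const MotionVector& mvd) {
    return { mvp.x + mvd.x, mvp.y + mvd.y };
}

int main() {
    MotionVector mv  = { 18, -7 };  // motion vector found by motion search
    MotionVector mvp = { 16, -8 };  // predictor taken from a neighbouring coded block
    MotionVector mvd = computeMvd(mv, mvp);      // (2, 1): small values cost few bits
    MotionVector rec = reconstructMv(mvp, mvd);  // back to (18, -7)
    std::cout << mvd.x << "," << mvd.y << " -> " << rec.x << "," << rec.y << "\n";
    return 0;
}
```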
Motion Information: since a motion vector only indicates the positional offset between the current image block and a reference image block, index information of the reference frame is also required to identify which reference image is used. In video coding, a reference picture list is usually established for the current frame, and a reference frame index indicates which picture in the list is used by the current image block. Many coding techniques also support multiple reference picture lists, so another index, which may be called the reference direction, can be used to indicate which list is used. In video coding, motion-related information such as the motion vector, reference frame index, and reference direction is collectively referred to as motion information.
Rate-Distortion Optimization: there are two major indicators for evaluating coding efficiency: bit rate and Peak Signal to Noise Ratio (PSNR). The smaller the bitstream, the higher the compression ratio; the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a joint evaluation of the two. For example, the cost of a mode is J(mode) = D + λ·R, where D denotes distortion, usually measured by SSE, the sum of squared differences between the reconstructed block and the source image; λ is the Lagrange multiplier; and R is the actual number of bits required to encode the image block in this mode, including the bits for mode information, motion information, the residual, and so on.
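A minimal C++ sketch of this cost comparison follows; the distortion and bit values are assumed to come from a hypothetical encoder's measurement and bit-estimation stages, and candidate lists are assumed non-empty.

```cpp
#include <cstddef>
#include <vector>

// J(mode) = D + lambda * R
double rdCost(double distortionSse, double lambda, double bits) {
    return distortionSse + lambda * bits;
}

// Pick the candidate mode with the smallest rate-distortion cost.
// sse[i] and bits[i] are the measured distortion and estimated bits of mode i.
std::size_t selectBestMode(const std::vector<double>& sse,
                           const std::vector<double>& bits,
                           double lambda) {
    std::size_t best = 0;
    double bestCost = rdCost(sse[0], lambda, bits[0]);
    for (std::size_t i = 1; i < sse.size(); ++i) {
        double cost = rdCost(sse[i], lambda, bits[i]);
        if (cost < bestCost) { bestCost = cost; best = i; }
    }
    return best;
}
```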
Intra and inter prediction: intra prediction performs predictive coding using the reconstructed pixel values of spatially neighboring image blocks of the current block (blocks in the same frame as the current block). Inter prediction performs predictive coding using the reconstructed pixel values of temporally neighboring blocks (blocks located in different frames from the current block); it exploits the temporal correlation of video, since a video sequence contains strong temporal correlation, predicting the pixels of the current image from the pixels of neighboring encoded images to effectively remove temporal redundancy.
CTU (Coding Tree Unit): the largest coding unit supported by the encoder and the largest decoding unit supported by the decoder. For example, a frame of a picture may be divided into several disjoint CTUs, and each CTU then decides, according to the actual content, whether to further divide itself into smaller blocks.
Before introducing the technical solutions of the embodiments of the present application, some basic concepts are briefly introduced:
referring to fig. 1, a schematic diagram of a video encoding framework is shown, where the video encoding framework can be used to implement a processing flow at an encoding end in the embodiment of the present application, the schematic diagram of the video decoding framework is similar to that in fig. 1, and is not described herein again, and the video decoding framework can be used to implement a processing flow at a decoding end in the embodiment of the present application.
Illustratively, the video encoding framework and the video decoding framework may include modules such as intra prediction, motion estimation/motion compensation, reference picture buffer, in-loop filtering, reconstruction, transform, quantization, inverse transform, inverse quantization, and entropy coding. At the encoding side, the cooperation of these modules realizes the encoding-side processing flow; at the decoding side, their cooperation realizes the decoding-side processing flow.
In the image block partitioning technique, a CTU (Coding Tree Unit) may be recursively partitioned into CUs (Coding Units) using a quadtree, and a CU may be further partitioned into two or four PUs (Prediction Units). After prediction is completed and residual information is obtained, a CU may be further quadtree-partitioned into TUs (Transform Units).
Image block partitioning changed considerably in VVC (Versatile Video Coding): a mixed binary tree/ternary tree/quadtree partitioning structure replaces the separate CU, PU, and TU concepts and supports a more flexible CU partitioning; a CU can be square or rectangular. The CTU is first partitioned by a quadtree, and the quadtree leaf nodes can then be further partitioned by binary and ternary trees.
Referring to FIG. 2A, a CU may have five partition types: quadtree partition, horizontal binary tree partition, vertical binary tree partition, horizontal ternary tree partition, and vertical ternary tree partition. Referring to FIG. 2B, the CUs in a CTU may be formed by any combination of these five partition types.
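As a rough illustration of recursive partitioning of a CTU into leaf CUs, consider the C++ sketch below. It shows only the quadtree part of the process; the split decision shown is a placeholder (a real encoder would compare rate-distortion costs), and the binary/ternary splits of VVC are omitted.

```cpp
#include <vector>

struct Block { int x, y, w, h; };

// Placeholder split decision, purely for illustration: split blocks wider than
// 32 samples. A real encoder decides by comparing RD costs.
bool shouldSplit(const Block& b) { return b.w > 32; }

// Recursively partition a CTU into leaf CUs with a quadtree; minSize is the
// smallest allowed CU width/height.
void partitionQuadTree(const Block& b, int minSize, std::vector<Block>& leaves) {
    if (b.w > minSize && b.h > minSize && shouldSplit(b)) {
        int hw = b.w / 2, hh = b.h / 2;
        partitionQuadTree({b.x,      b.y,      hw, hh}, minSize, leaves);
        partitionQuadTree({b.x + hw, b.y,      hw, hh}, minSize, leaves);
        partitionQuadTree({b.x,      b.y + hh, hw, hh}, minSize, leaves);
        partitionQuadTree({b.x + hw, b.y + hh, hw, hh}, minSize, leaves);
    } else {
        leaves.push_back(b);   // this block becomes a leaf CU
    }
}
```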
Brief introduction to the Merge mode: in the inter-prediction module, because video has strong temporal correlation, two temporally adjacent frames contain many similar image blocks. Therefore, a motion search is usually performed for the image block of the current frame in a neighboring reference image, and the block that best matches the current image block is used as the reference image block. Because the similarity between the reference block and the current block is high and their difference is very small, the bit-rate overhead of coding this difference is usually much smaller than that of directly coding the pixel values of the current image block.
To indicate the position of the reference image block that best matches the current image block, a large amount of motion information needs to be encoded and transmitted to the decoding side so that it knows the position of the reference block; encoding and transmitting this motion information, especially the motion vectors, consumes a very large bit rate. To save this part of the rate overhead, a coding mode that is economical with motion information, namely the Merge mode, has been designed.
In the Merge mode, the motion information of the current image block fully reuses the motion information of some temporal or spatial neighboring block; that is, one piece of motion information is selected from the motion information set of several surrounding image blocks as the motion information of the current image block. Therefore, in the Merge mode, only an index needs to be encoded to indicate which motion information in the set is used by the current image block, which saves coding overhead.
Brief introduction to the AMVP (Advanced Motion Vector Prediction) mode: AMVP is similar to the Merge mode in that both use spatial and temporal motion information prediction and build a candidate motion information list, from which the best candidate is selected by rate-distortion cost as the motion information of the current image block. The differences between the AMVP mode and the Merge mode are: in the Merge mode, the MV of the current unit is taken directly from a spatially or temporally neighboring prediction unit and there is no Motion Vector Difference (MVD), whereas AMVP can be regarded as an MV prediction technique in which the encoder only encodes the difference between the actual MV and the predicted MV, so an MVD exists. In addition, the lengths of the candidate MV lists differ, and the ways the lists are constructed also differ.
In the Merge mode, a candidate list containing 5 MVs (and their corresponding reference frame information) is created for the current PU. The 5 candidate MVs are traversed and their rate-distortion costs are calculated, and the candidate MV with the minimum cost is selected as the best MV. If the encoder and the decoder construct the candidate list in the same way, the encoder only needs to transmit the index of the best MV in the list, which greatly saves the number of bits for motion information. The candidate list established in the Merge mode includes spatial and temporal candidates, and for a B slice also a combined list; the spatial candidate list, the temporal candidate list, and the combined list are explained below.
Establishing the spatial candidate list: referring to FIG. 3A, A1 denotes the prediction unit at the bottom left of the current prediction unit, B1 the prediction unit directly above it, B0 and A0 the prediction units closest to its top-right and bottom-left corners, respectively, and B2 the prediction unit closest to its top-left corner. In the HEVC standard, the spatial candidate list is built in the order A1-B1-B0-A0-(B2), where B2 is a substitute, providing at most 4 candidate MVs, i.e., using the motion information of at most 4 of the above 5 candidate blocks. The motion information of B2 is needed only when one or more of A1, B1, B0, A0 is unavailable; otherwise it is not used.
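The C++ sketch below mirrors this checking order under stated simplifications: candidates that are unavailable are passed as empty optionals, and the redundancy pruning that HEVC additionally performs between identical candidates is omitted. The types and function names are assumptions for illustration.

```cpp
#include <optional>
#include <vector>

struct MotionInfo { int mvx = 0, mvy = 0, refIdx = 0; };

// Candidates that are absent (outside the picture, uncoded, or intra-coded)
// are passed as std::nullopt.
std::vector<MotionInfo> buildSpatialMergeList(const std::optional<MotionInfo>& A1,
                                              const std::optional<MotionInfo>& B1,
                                              const std::optional<MotionInfo>& B0,
                                              const std::optional<MotionInfo>& A0,
                                              const std::optional<MotionInfo>& B2) {
    std::vector<MotionInfo> list;
    for (const auto* cand : { &A1, &B1, &B0, &A0 })
        if (cand->has_value() && list.size() < 4) list.push_back(cand->value());
    // B2 is only a substitute: it is consulted when one of A1/B1/B0/A0 was missing.
    if (list.size() < 4 && B2.has_value()) list.push_back(B2.value());
    return list;
}
```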
Establishing the temporal candidate list: referring to FIG. 3B, the temporal candidate list is built using the motion information of the prediction unit at the corresponding position of the current prediction unit in a neighboring coded picture. Unlike the spatial candidates, the temporal candidate cannot use the motion information of the candidate block directly; it must be scaled according to the positional relationship of the reference pictures. In the HEVC standard, the temporal candidate list provides at most one candidate MV, obtained by scaling the MV of the co-located prediction unit at position H in FIG. 3B; if position H is unavailable, the co-located PU at C3 is used instead. Note that if the number of candidate MVs in the current candidate list is less than 5, default motion information (e.g., the motion vector (0,0)) is used for padding to reach the specified number.
Establishing the combined list: a prediction unit in a B slice has two MVs, so its candidate list must also provide two predicted MVs. In the HEVC standard, the first 4 candidate MVs in the MV candidate list are combined pairwise to form the combined list for B slices.
Establishing the candidate list in the AMVP mode: the candidate list is built for the current prediction unit using the correlation of motion vectors in the spatial and temporal domains. The encoder selects the best MV from the candidate list and differentially encodes the MV; the decoder builds the same candidate list, and the MV of the current prediction unit can be computed from the motion vector difference (MVD) and the index of the predicted MV in the candidate list.
Establishing the spatial candidate list: referring to FIG. 3C, one candidate MV is generated from the left side and one from the top side of the current PU. The left side is checked in the order A0-A1-scaled A0-scaled A1, and the top side in the order B0-B1-B2 (scaled B0-scaled B1-scaled B2). For the three PUs above, the MV scaling step is performed only if neither of the two PUs on the left is available or both are in intra prediction mode. Once the first "available" MV is detected on the left or top side, it is used as the candidate MV for that side and the remaining steps are skipped; thus at most one candidate is taken from A0, A1, scaled A0, scaled A1, and at most one from B0, B1, B2, scaled B0, scaled B1, scaled B2.
It should be noted that a candidate MV can be marked as "available" only when its reference picture is the same as that of the current prediction unit; otherwise, the candidate MV needs to be scaled accordingly.
Establishing the temporal candidate list: the temporal candidate list of AMVP is constructed in the same way as that of the Merge mode. When the spatial and temporal candidates together number fewer than two, (0,0) is used for padding.
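The scaling mentioned above is, in essence, a scaling of the candidate MV by the ratio of temporal (POC) distances. The C++ sketch below shows that idea in simplified form; the clipping and rounding used by HEVC are deliberately omitted, so this is an approximation, not the standard's exact procedure.

```cpp
struct Mv { int x, y; };

// tb: POC distance from the current picture to the current PU's reference picture.
// td: POC distance from the picture containing the candidate block to the
//     candidate's reference picture. HEVC additionally clips and rounds the
//     scaling factor; that detail is omitted here.
Mv scaleMvByPocDistance(const Mv& mv, int tb, int td) {
    if (td == 0 || tb == td) return mv;          // same distance or degenerate case: use the MV as-is
    return { mv.x * tb / td, mv.y * tb / td };   // simplified linear scaling
}
```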
Although the Merge mode greatly saves the coding overhead of motion information and the AMVP mode improves the prediction accuracy of motion information, in both modes the current coding unit has only one piece of motion information, i.e., all sub-blocks within the current coding unit share it. For application scenes in which the moving object is small and the best motion information can be obtained only after the coding unit is divided into blocks, if the current coding unit is not divided it has only one piece of motion information, and the prediction accuracy is not very high. For example, referring to FIG. 3D, region C, region G, and region H are regions within the current coding unit, not sub image blocks into which the current coding unit is divided. Assuming the current coding unit uses the motion information of image block F, every region within the current coding unit uses the motion information of image block F.
Obviously, since region H in the current coding unit is far from image block F, if region H also uses the motion information of image block F, the prediction accuracy of the motion information for region H is not very high.
For example, if the current coding unit is divided using the partition methods shown in FIG. 2A or FIG. 2B, a plurality of sub image blocks are obtained. Referring to FIG. 3E, sub image block C, sub image block G, sub image block H, and sub image block I are sub image blocks divided within the current coding unit; since the current coding unit is divided into several sub image blocks, each sub image block can use its own motion information. However, because the partition methods of FIG. 2A or FIG. 2B are used, additional bits are consumed to transmit the partition, which brings a certain bit overhead.
Moreover, based on the working principle of the Merge mode and the AMVP mode, the motion information of some sub image blocks inside the current coding unit cannot use the coded motion information around the current coding unit, so the available motion information is reduced and its accuracy is not high. For example, sub image block I within the current coding unit can only use the motion information of sub image block C, sub image block G, and sub image block H, but not that of image block A, image block B, image block F, image block D, or image block E.
In view of the above, the encoding and decoding method provided in the embodiments of the present application enables the current image block to correspond to multiple pieces of motion information without dividing it, i.e., without the overhead caused by sub-block division, thereby improving the prediction accuracy of the motion information of the current image block. Because the current image block is not divided, no extra bits need to be consumed to transmit a partition, which saves bit overhead. For each region of the current image block (any region whose size is smaller than that of the current image block and which is not a sub image block obtained by dividing the current image block), the motion information of that region can be obtained using the coded motion information around the current image block.
Referring to FIG. 3D, C is a sub-region inside the current image block (i.e., the current coding unit), and A, B, D, E, and F are coded blocks around the current image block. The motion information of sub-region C can be obtained directly using angular prediction, and the other sub-regions inside the current coding unit (e.g., G, H) are handled in the same way. Therefore, different motion information can be obtained for the current coding unit without dividing it into blocks, saving the bit overhead of partial block division.
The current image block (hereinafter simply the current block) in the embodiments of the present application is an arbitrary image unit in the encoding and decoding process; encoding and decoding are performed with the current block as the unit, such as the CU in the above embodiments. Referring to FIG. 3F, the current block contains 9 regions (hereinafter sub-regions within the current block), sub-region f1 to sub-region f9; these are sub-regions within the current block, not sub image blocks into which the current block is divided.
Different sub-regions among sub-region f1 to sub-region f9 may correspond to the same or different motion information; therefore, without dividing the current block, the current block can still correspond to multiple pieces of motion information, e.g., sub-region f1 corresponds to motion information 1, sub-region f2 corresponds to motion information 2, and so on.
For example, when determining the motion information of sub-region f5, the motion information of block A1, block A2, block A3, block E, block B1, block B2, and block B3, i.e., the motion information of the coded blocks around the current block, may be used to provide more motion information for sub-region f5. Of course, the motion information of image blocks A1, A2, A3, E, B1, B2, and B3 may also be used for the motion information of other sub-regions of the current block.
The following describes the encoding and decoding method in the embodiments of the present application with reference to several specific embodiments.
Example 1: referring to FIG. 4, a flowchart of an encoding and decoding method in an embodiment of the present application is shown. The method may be applied to the decoding side or the encoding side and may include the following steps:
Step 401: obtain at least one motion information angle prediction mode of the current block.
For example, the motion information angle prediction mode is used to indicate a preconfigured angle; a peripheral matching block is selected from the peripheral blocks of the current block for each sub-region of the current block according to the preconfigured angle, and one or more pieces of motion information of the current block are determined from the motion information of the peripheral matching blocks, i.e., for each sub-region of the current block, the motion information of that sub-region is determined from the motion information of its peripheral matching block. The peripheral matching block is a block at a specified position determined from the peripheral blocks according to the preconfigured angle.
Illustratively, the peripheral blocks include blocks adjacent to the current block; alternatively, the peripheral blocks include a block adjacent to the current block and a non-adjacent block. Of course, the peripheral block may also include other blocks, which is not limited in this regard.
For example, the motion information angle prediction modes may include, but are not limited to, one or any combination of: a horizontal prediction mode, a vertical prediction mode, a horizontal-up prediction mode, a horizontal-down prediction mode, and a vertical-right prediction mode. Of course, these are only examples; other motion information angle prediction modes are possible, since the mode is determined by the preconfigured angle, which could also be 10 degrees, 20 degrees, and so on. Referring to FIG. 5A, a schematic diagram of the horizontal, vertical, horizontal-up, horizontal-down, and vertical-right prediction modes is shown; different motion information angle prediction modes correspond to different preconfigured angles.
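One way to picture these modes is as directions along which a sub-region looks for its peripheral matching block. The C++ sketch below is purely illustrative: the enum names follow the five modes listed above, but the specific direction offsets are assumptions for illustration, since the exact geometry is defined by FIG. 5A rather than fixed numeric offsets.

```cpp
enum class AnglePredMode { Horizontal, Vertical, HorizontalUp, HorizontalDown, VerticalRight };

// Direction, in units of peripheral blocks, in which a sub-region looks for its
// peripheral matching block. The values are illustrative assumptions.
struct Direction { int dx, dy; };

Direction preconfiguredDirection(AnglePredMode mode) {
    switch (mode) {
        case AnglePredMode::Horizontal:     return { -1,  0 };  // straight left
        case AnglePredMode::Vertical:       return {  0, -1 };  // straight up
        case AnglePredMode::HorizontalUp:   return { -1, -1 };  // towards the top-left
        case AnglePredMode::HorizontalDown: return { -1, +1 };  // left column, shifted downward
        case AnglePredMode::VerticalRight:  return { +1, -1 };  // top row, shifted rightward
    }
    return { 0, 0 };  // unreachable; keeps compilers happy
}
```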
Step 402, for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode.
In step 403, if the motion information of the plurality of peripheral matching blocks pointed to by the preconfigured angle is not completely the same, the motion information angle prediction mode is added to the motion information prediction mode candidate list.
For example, if the motion information of the plurality of peripheral matching blocks pointed to by the preconfigured angle is completely the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
For example, based on the preconfigured angle of the horizontal prediction mode, the peripheral matching blocks pointed to by that angle, such as peripheral matching block A1, peripheral matching block A2, and peripheral matching block A3, are selected from the peripheral blocks of the current block. If the motion information of A1, A2, and A3 is not completely the same, the horizontal prediction mode is added to the motion information prediction mode candidate list. If the motion information of A1, A2, and A3 is completely the same, adding the horizontal prediction mode to the motion information prediction mode candidate list is prohibited.
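The following C++ sketch shows this filtering decision in its simplest form: a mode enters the candidate list only when the motion information of the peripheral matching blocks it points to is not completely the same. The MotionInfo type and comparison are assumed minimal helpers, not this disclosure's data structures.

```cpp
#include <cstddef>
#include <vector>

struct MotionInfo {
    int mvx = 0, mvy = 0, refIdx = -1;
    bool operator==(const MotionInfo& o) const {
        return mvx == o.mvx && mvy == o.mvy && refIdx == o.refIdx;
    }
};

enum class AnglePredMode { Horizontal, Vertical, HorizontalUp, HorizontalDown, VerticalRight };

bool allMotionIdentical(const std::vector<MotionInfo>& blocks) {
    for (std::size_t i = 1; i < blocks.size(); ++i)
        if (!(blocks[i] == blocks[0])) return false;
    return true;
}

// Add the mode only when the peripheral matching blocks it points to carry
// motion information that is not completely the same.
void maybeAddMode(AnglePredMode mode,
                  const std::vector<MotionInfo>& peripheralMatchingBlocks,
                  std::vector<AnglePredMode>& candidateList) {
    if (!allMotionIdentical(peripheralMatchingBlocks))
        candidateList.push_back(mode);
    // otherwise the mode is skipped, which saves index bits in the candidate list
}
```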
For example, after the plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks of the current block, at least one first peripheral matching block may be selected from them. For each first peripheral matching block, a second peripheral matching block corresponding to it is selected from the plurality of peripheral matching blocks. If the motion information of the first peripheral matching block differs from that of its second peripheral matching block, the comparison result of that first peripheral matching block is that the motion information is different; if they are the same, the comparison result is that the motion information is the same. Then, if the comparison result of any first peripheral matching block is that the motion information is different, the motion information of the plurality of peripheral matching blocks is determined to be not completely the same; if the comparison results of all first peripheral matching blocks are that the motion information is the same, the motion information of the plurality of peripheral matching blocks is determined to be completely the same.
Illustratively, selecting at least one first peripheral matching block from the plurality of peripheral matching blocks may include, but is not limited to: taking any one or more of the plurality of peripheral matching blocks as a first peripheral matching block; alternatively, one or more designated peripheral matching blocks among the plurality of peripheral matching blocks are set as the first peripheral matching block.
For example, selecting the second peripheral matching block corresponding to a first peripheral matching block from the plurality of peripheral matching blocks may include, but is not limited to: selecting it according to a traversal step size and the position of the first peripheral matching block, where the traversal step size is the block spacing between the first and second peripheral matching blocks.
For example, for peripheral matching blocks A1, A2, and A3 arranged in that order, assume A1 is the first peripheral matching block and the traversal step size is 2; then the second peripheral matching block corresponding to A1 is A3. On this basis, if the motion information of A1 differs from that of A3, the comparison result of A1 is that the motion information is different; if they are the same, the comparison result of A1 is that the motion information is the same.
As another example, for peripheral matching blocks A1, A2, and A3 arranged in that order, assume A1 and A2 are first peripheral matching blocks and the traversal step size is 1; then the second peripheral matching block corresponding to A1 is A2, and the second peripheral matching block corresponding to A2 is A3. On this basis, if the motion information of A1 differs from that of A2, the comparison result of A1 is that the motion information is different; otherwise it is that the motion information is the same. Likewise, if the motion information of A2 differs from that of A3, the comparison result of A2 is that the motion information is different; otherwise it is that the motion information is the same.
For example, before selecting the second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks, the traversal step size may be determined based on the size of the current block; the traversal step size thus controls the number of motion information comparisons.
For example, assume the size of a peripheral matching block is 4×4 and the size of the current block is 16×16; then for the horizontal prediction mode the current block corresponds to 4 peripheral matching blocks. To limit the number of motion information comparisons to 1, the traversal step size may be 2 or 3. If the traversal step size is 2, the first peripheral matching block is the 1st peripheral matching block and the second is the 3rd; or the first is the 2nd and the second is the 4th. If the traversal step size is 3, the first peripheral matching block is the 1st and the second is the 4th. As another example, to limit the number of comparisons to 2, the traversal step size may be 1, with the first peripheral matching blocks being the 1st and 3rd peripheral matching blocks, the second peripheral matching block of the 1st being the 2nd, and the second peripheral matching block of the 3rd being the 4th.
Of course, the above is only an example for the horizontal prediction mode; the traversal step size may also be determined in other ways, which is not limited here. For motion information angle prediction modes other than the horizontal prediction mode, the traversal step size is determined in the same way as for the horizontal prediction mode and is not repeated here.
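A compact C++ sketch of this traversal-step comparison follows. The indexing convention (first blocks spaced step + 1 apart, each compared with the block step positions later) is one way to reproduce the comparison counts of the 16×16 example above and is an assumption for illustration.

```cpp
#include <cstddef>
#include <vector>

struct MotionInfo {
    int mvx = 0, mvy = 0, refIdx = -1;
    bool operator==(const MotionInfo& o) const {
        return mvx == o.mvx && mvy == o.mvy && refIdx == o.refIdx;
    }
};

// Returns true when the motion information of the peripheral matching blocks is
// NOT completely the same. Each first peripheral matching block at index i is
// compared with the second peripheral matching block at index i + step; the next
// first block is taken step + 1 positions later. For 4 blocks this gives one
// comparison with step 2 or 3, and two comparisons with step 1, matching the
// example above.
bool motionNotAllSame(const std::vector<MotionInfo>& blocks, std::size_t step) {
    for (std::size_t i = 0; i + step < blocks.size(); i += step + 1)
        if (!(blocks[i] == blocks[i + step])) return true;
    return false;
}
```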
In step 404, the current block is encoded or decoded according to the motion information prediction mode candidate list.
For example, for the encoding side, the current block is encoded according to the motion information prediction mode candidate list. For the decoding side, the current block is decoded according to the motion information prediction mode candidate list.
As can be seen from the above technical solution, in the embodiments of the present application the current block does not need to be divided: the sub-regions of the current block can be determined based on the motion information angle prediction mode, which effectively avoids the bit overhead caused by sub-block division. That is, without dividing the current block into sub-blocks, motion information is provided for each sub-region of the current block, and different sub-regions may correspond to the same or different motion information, improving coding performance, avoiding the transmission of a large amount of motion information, and saving a large number of coding bits. Adding only the motion information angle prediction modes whose motion information is not completely the same to the motion information prediction mode candidate list removes the modes that yield only a single piece of motion information, reduces the number of modes in the candidate list, reduces the number of bits spent on multiple pieces of motion information, and further improves coding performance.
FIG. 5B is a schematic diagram of the horizontal, vertical, horizontal-up, horizontal-down, and vertical-right prediction modes. As can be seen from FIG. 5B, some motion information angle prediction modes make the motion information of every sub-region inside the current block the same, e.g., the horizontal, vertical, and horizontal-up prediction modes in this example; such modes need to be eliminated. Other modes, e.g., the horizontal-down and vertical-right prediction modes here, give different sub-regions inside the current block different motion information; such modes need to be retained, i.e., they may be added to the motion information prediction mode candidate list.
Obviously, if the horizontal, vertical, horizontal-up, horizontal-down, and vertical-right prediction modes were all added to the motion information prediction mode candidate list, then when coding the index of the horizontal-down prediction mode, since the horizontal, vertical, and horizontal-up prediction modes precede it (the order of the modes is not fixed; this is only an example), 0001 might have to be coded to represent it. In the embodiment of the present application, however, only the horizontal-down and vertical-right prediction modes are added to the candidate list, and adding the horizontal, vertical, and horizontal-up prediction modes is prohibited; the horizontal-down prediction mode is then not preceded by those modes, so only 0 needs to be coded to represent its index. In summary, the bit overhead of coding the motion information angle prediction mode index is reduced, hardware complexity is lowered while bits are saved, the low performance gain caused by modes with only a single piece of motion information is avoided, and the number of bits for coding multiple motion information angle prediction modes is reduced.
Example 2: after the plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode, if an uncoded block and/or an intra-coded block exists among the peripheral matching blocks, its motion information is padded. For example, the available motion information of a neighboring block of the uncoded block and/or intra-coded block is padded as its motion information; or the available motion information of the reference block at the corresponding position of the uncoded block and/or intra-coded block in a temporal reference frame is padded as its motion information; or default motion information is padded as its motion information.
For example, if an uncoded block exists among the plurality of peripheral matching blocks, its motion information may be padded with the available motion information of a neighboring block of the uncoded block; or with the available motion information of the reference block at the corresponding position of the uncoded block in a temporal reference frame; or with default motion information.
For example, if an intra-coded block exists among the plurality of peripheral matching blocks, its motion information may be padded with the available motion information of a neighboring block of the intra-coded block; or with the available motion information of the reference block at the corresponding position of the intra-coded block in a temporal reference frame; or with default motion information.
Illustratively, if both an uncoded block and an intra-coded block exist among the plurality of peripheral matching blocks, the two are padded independently: the motion information of the uncoded block is padded with the available motion information of one of its neighboring blocks, or with the available motion information of the reference block at its corresponding position in a temporal reference frame, or with default motion information; and the motion information of the intra-coded block is likewise padded with the available motion information of one of its neighboring blocks, or with the available motion information of the reference block at its corresponding position in a temporal reference frame, or with default motion information. Any of the three padding options for the uncoded block may be combined with any of the three padding options for the intra-coded block.
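A minimal C++ sketch of the padding idea follows. The three options are shown here as a fallback chain for concreteness, although the text above lists them as alternative strategies; the two lookup helpers are stubs standing in for encoder-side data structures that are not part of this disclosure.

```cpp
#include <optional>

struct MotionInfo { int mvx = 0, mvy = 0, refIdx = 0; };

// Stub: available motion information of a spatial neighbour of the block, if any.
std::optional<MotionInfo> availableNeighbourMotion(int /*blockIdx*/) { return std::nullopt; }
// Stub: available motion information of the reference block at the corresponding
// position in a temporal reference frame, if any.
std::optional<MotionInfo> temporalCoLocatedMotion(int /*blockIdx*/) { return std::nullopt; }

// Pad the motion information of an uncoded or intra-coded peripheral matching block.
MotionInfo padMotionInfo(int blockIdx) {
    if (auto m = availableNeighbourMotion(blockIdx)) return *m;   // option 1: spatial neighbour
    if (auto m = temporalCoLocatedMotion(blockIdx))  return *m;   // option 2: temporal reference block
    return MotionInfo{0, 0, 0};                                   // option 3: default (zero) motion
}
```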
Example 3: when encoding or decoding the current block according to the motion information prediction mode candidate list, selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, determining motion information of the current block according to the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
Illustratively, determining motion information of the current block according to the target motion information angle prediction mode includes: selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of a current block on the basis of the preset angle corresponding to the target motion information angle prediction mode; dividing the current block into at least one sub-region; for each sub-region, a peripheral matching block corresponding to the sub-region may be selected from the plurality of peripheral matching blocks, and the motion information of the sub-region may be determined according to the motion information of the selected peripheral matching block.
Illustratively, determining motion information of the current block according to the target motion information angle prediction mode includes: determining a selection condition for acquiring motion information of the current block and sub-region partition information of the current block according to the target motion information angle prediction mode and the size of the current block, wherein the selection condition is a first selection condition or a second selection condition, the first selection condition is that motion information selected from motion information of a peripheral matching block is not allowed to be bidirectional motion information, and the second selection condition is that motion information selected from motion information of the peripheral matching block is allowed to be bidirectional motion information; selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of a current block on the basis of the preset angle corresponding to the target motion information angle prediction mode; and determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the plurality of peripheral matching blocks.
Illustratively, determining the motion information of the current block according to the target motion information angle prediction mode includes: selecting, according to the pre-configured angle corresponding to the target motion information angle prediction mode, the peripheral matching block pointed to by the pre-configured angle from the peripheral blocks of the current block; and determining the motion information of the current block according to the motion information of the peripheral matching block. If the width and the height of the current block are both greater than or equal to 8, the current block is divided into 8 × 8 sub-blocks, and the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information.
Example 4: the above embodiments relate to padding the motion information of peripheral matching blocks (embodiment 2), determining the motion information angle prediction modes that need to be added to the motion information prediction mode candidate list using the motion information of the peripheral matching blocks (embodiment 1), and performing motion compensation using the motion information angle prediction mode (embodiment 3). On this basis, embodiment 1 and embodiment 2 may be combined: the motion information of the peripheral matching blocks is padded, and the motion information angle prediction modes to be added to the motion information prediction mode candidate list are determined using the motion information of the peripheral matching blocks. Alternatively, embodiment 2 and embodiment 3 may be combined: the motion information of the peripheral matching blocks is padded, and motion compensation is performed using the motion information angle prediction mode. Alternatively, embodiment 1, embodiment 2, and embodiment 3 may be combined: the motion information of the peripheral matching blocks is padded, the motion information angle prediction modes to be added to the motion information prediction mode candidate list are determined using the motion information of the peripheral matching blocks, and motion compensation is performed using the motion information angle prediction mode.
In embodiment 4, a flow of implementing the encoding and decoding method is described by taking the combination of embodiment 1, embodiment 2, and embodiment 3 as an example. The implementation procedure of the coding and decoding method combining embodiment 1 and embodiment 2 and combining embodiment 2 and embodiment 3 is similar to the implementation procedure of embodiment 4, and is not described herein again. After combining embodiment 1, embodiment 2, and embodiment 3, referring to fig. 6, a flow chart of a coding and decoding method is shown, where the method may be applied to a coding end, and the method may include:
step 601, the encoding end fills the motion information of the surrounding blocks of the current block.
For example, if there are uncoded blocks in the surrounding blocks of the current block, the available motion information of the neighboring blocks of the uncoded blocks is filled as the motion information of the uncoded blocks; or, filling the available motion information of the reference block at the corresponding position of the uncoded block in the time domain reference frame as the motion information of the uncoded block; or, padding the default motion information as the motion information of the uncoded block. If the intra-frame coding blocks exist in the peripheral blocks of the current block, filling the available motion information of the adjacent blocks of the intra-frame coding blocks into the motion information of the intra-frame coding blocks; or, filling the available motion information of the reference block at the corresponding position of the intra-frame coding block in the time domain reference frame into the motion information of the intra-frame coding block; or, padding the default motion information as the motion information of the intra-coded block.
In step 602, the encoding end creates a motion information prediction mode candidate list corresponding to the current block, where the motion information prediction mode candidate list may include a motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes, which is not limited in this respect.
For example, the motion information angle prediction mode is used to indicate a preconfigured angle, select a peripheral matching block from peripheral blocks of the current block for the sub-region of the current block according to the preconfigured angle, and determine one or more motion information of the current block according to the motion information of the peripheral matching block, that is, for each sub-region of the current block, determine the motion information of the sub-region according to the motion information of the peripheral matching block. Also, the peripheral matching block is a block at a specified position determined from the peripheral blocks in accordance with the preconfigured angle.
Illustratively, the peripheral blocks include blocks adjacent to the current block; alternatively, the peripheral blocks include a block adjacent to the current block and a non-adjacent block. Of course, the peripheral block may also include other blocks, which is not limited in this regard.
Exemplary motion information angular prediction modes include, but are not limited to: horizontal prediction mode, vertical prediction mode, horizontal up prediction mode, horizontal down prediction mode, vertical right prediction mode. Of course, the above are just a few examples, and other types of motion information angle prediction modes are also possible.
Illustratively, a motion information prediction mode candidate list corresponding to the current block needs to be created, and both the encoding side and the decoding side create the motion information prediction mode candidate list corresponding to the current block. The motion information prediction mode candidate list at the encoding end and the motion information prediction mode candidate list at the decoding end are determined to be the same according to a protocol. The encoding side and the decoding side may create the same motion information prediction mode candidate list using the same strategy.
For example, one motion information prediction mode candidate list may be created for the current block, that is, all sub-regions in the current block may correspond to the same motion information prediction mode candidate list; alternatively, a plurality of motion information prediction mode candidate lists may be created for the current block. Different current blocks may correspond to the same motion information prediction mode candidate list or to different motion information prediction mode candidate lists. For convenience of description, the case where one motion information prediction mode candidate list is created for each current block is taken as an example; for instance, current block A corresponds to motion information prediction mode candidate list 1, current block B corresponds to motion information prediction mode candidate list 1, and so on.
In an example, the motion information angle prediction mode in the embodiment of the present application may be an angle prediction mode for predicting motion information, that is, an angle prediction mode used in an inter-frame coding process, rather than an angle prediction mode applied in an intra-frame coding process, and the motion information angle prediction mode selects a matching block rather than a matching pixel.
Illustratively, for the process of step 602, the process may include the steps of:
step a1, obtaining at least one motion information angle prediction mode of the current block.
For example, the following motion information angle prediction modes may be acquired in sequence: horizontal prediction mode, vertical prediction mode, horizontal up prediction mode, horizontal down prediction mode, vertical right prediction mode. Of course, the above are only a few examples and are not limiting; the preconfigured angle may be any angle between 0 and 360 degrees. For example, the horizontal rightward direction from the center point of the sub-region may be defined as 0 degrees, so that any angle rotated counterclockwise from 0 degrees may serve as the preconfigured angle; alternatively, 0 degrees may be defined along another direction from the center point of the sub-region. In practical applications, the preconfigured angle may also be a fractional angle, such as 22.5 degrees.
Step a2, for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from the peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode.
In step a3, if the motion information of the plurality of neighboring matching blocks is not identical, the motion information angle prediction mode is added to the motion information prediction mode candidate list. If the motion information of the plurality of peripheral matching blocks is completely the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
For example, for the horizontal prediction mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle of the horizontal prediction mode are selected from all peripheral blocks of the current block. At least one first peripheral matching block (e.g., all of the peripheral matching blocks or a portion of them) is then selected from the plurality of peripheral matching blocks.
For each first peripheral matching block, a second peripheral matching block corresponding to the first peripheral matching block is selected from the plurality of peripheral matching blocks. If the motion information of the first peripheral matching block is different from the motion information of its corresponding second peripheral matching block, the comparison result of the first peripheral matching block is determined to be that the motion information is different. If the motion information of the first peripheral matching block is the same as the motion information of its corresponding second peripheral matching block, the comparison result of the first peripheral matching block is determined to be that the motion information is the same.
After the comparison result of each first peripheral matching block is obtained, if the comparison result of any first peripheral matching block is that the motion information is different, it may be determined that the motion information of the plurality of peripheral matching blocks is not completely the same, and the horizontal prediction mode is added to the motion information prediction mode candidate list. If the comparison results of all the first peripheral matching blocks are identical in motion information, it may be determined that the motion information of the plurality of peripheral matching blocks is identical, and the addition of the horizontal prediction mode to the motion information prediction mode candidate list is prohibited.
For the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, the vertical rightward prediction mode, and the like, the processing procedure thereof refers to the processing procedure of the horizontal prediction mode, and the details are not repeated here.
Up to this point, for each motion information angle prediction mode, the motion information angle prediction mode may be added to the motion information prediction mode candidate list or the motion information angle prediction mode may not be added to the motion information prediction mode candidate list. Referring to fig. 5B, it is assumed that a horizontal down prediction mode and a vertical right prediction mode are added to the motion information prediction mode candidate list, and a horizontal prediction mode, a vertical prediction mode, and a horizontal up prediction mode are not added to the motion information prediction mode candidate list.
Through the above-described processes, a motion information prediction mode candidate list may be created, and the motion information prediction mode candidate list includes a horizontal down prediction mode and a vertical right prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes, which is not limited in this respect.
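The list-construction rule of steps a1 to a3 can be summarized in the following Python sketch (an illustration rather than the patent's normative procedure): a motion information angle prediction mode is added to the candidate list only when the peripheral matching blocks pointed to by its preconfigured angle do not all carry identical motion information. The mode names and the matching_motion_info helper are placeholders assumed for the example.

```python
ANGLE_MODES = ["horizontal", "vertical", "horizontal_up",
               "horizontal_down", "vertical_right"]

def build_candidate_list(current_block, peripheral_blocks, matching_motion_info):
    """Sketch of steps a1-a3.  matching_motion_info(block, peripheral, mode) is a
    hypothetical helper returning the list of motion information of the peripheral
    matching blocks pointed to by the preconfigured angle of `mode`."""
    candidate_list = []
    for mode in ANGLE_MODES:                                                  # step a1
        infos = matching_motion_info(current_block, peripheral_blocks, mode)  # step a2
        if any(mi != infos[0] for mi in infos[1:]):                           # step a3
            candidate_list.append(mode)   # motion information not completely the same
        # if the motion information is completely the same, the mode is not added
    return candidate_list
```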
In step 603, the encoding side selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, where the target motion information prediction mode may be a target motion information angle prediction mode or another type of motion information prediction mode (i.e., a motion information prediction mode in a conventional manner).
For example, assume that the motion information prediction mode candidate list includes: horizontal down prediction mode, vertical right prediction mode, other types of motion information prediction modes R (conventionally derived).
For the horizontal downward prediction mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle of the horizontal downward prediction mode are selected from all the peripheral blocks of the current block according to that preconfigured angle, and a rate-distortion cost A corresponding to the horizontal downward prediction mode is determined according to the plurality of pieces of motion information respectively corresponding to the plurality of peripheral matching blocks.
The rate-distortion cost A may be determined according to the rate-distortion principle, using the following formula: J(mode) = D + λ·R. Illustratively, D denotes distortion, which can generally be measured using the SSE metric, where SSE is the sum of squared differences between the reconstructed image block and the source image block; λ is the Lagrange multiplier; and R is the actual number of bits required for encoding the image block in this mode, including the bits required for encoding the mode information, the motion information, the residual, and the like. The determination manner is not limited herein.
Similarly, the rate-distortion cost corresponding to the vertical rightward prediction mode may be determined in the same manner as for the horizontal downward prediction mode; for example, the vertical rightward prediction mode corresponds to a rate-distortion cost B. The rate-distortion cost corresponding to the motion information prediction mode R may also be determined according to the rate-distortion principle, without limitation on the determination manner; for example, the motion information prediction mode R corresponds to a rate-distortion cost C.
The minimum rate-distortion cost is determined from the rate-distortion cost A, the rate-distortion cost B, and the rate-distortion cost C, and the motion information prediction mode corresponding to the minimum rate-distortion cost is determined as the target motion information prediction mode. For example, when the rate-distortion cost A is the minimum, the target motion information prediction mode is the horizontal downward prediction mode.
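A schematic form of the selection in step 603 (the cost values and the rd_cost callback below are placeholders, not values from the patent): the candidate with the minimum rate-distortion cost J(mode) = D + λ·R becomes the target motion information prediction mode.

```python
def select_target_mode(candidate_list, rd_cost):
    """rd_cost(mode) is assumed to return J(mode) = D + lambda * R for the given
    candidate; the mode with the minimum cost is the target mode."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_list:
        cost = rd_cost(mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode

# usage sketch with illustrative costs A, B, C:
costs = {"horizontal_down": 10.0, "vertical_right": 12.5, "mode_R": 11.0}
target = select_target_mode(list(costs), costs.get)   # -> "horizontal_down"
```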
In step 604, if the target motion information prediction mode is the target motion information angle prediction mode, the encoding end encodes the current block according to the target motion information angle prediction mode.
For example, the encoding end may determine motion information of each sub-region in the current block according to the angular prediction mode of the target motion information, and perform motion compensation on each sub-region by using the motion information of the sub-region.
For example, encoding the current block according to the target motion information angle prediction mode may include: determining the motion information of the current block according to the angle prediction mode of the target motion information; based on the motion information of the current block, a prediction value of the current block is determined, which is a motion compensation process.
Determining motion information of the current block according to the target motion information angular prediction mode may include:
mode one, the determination process for the motion information of the current block may include the following steps:
Step b1: determine, according to the target motion information angle prediction mode and the size of the current block, the selection condition used by the current block for obtaining motion information. For example, the selection condition may be a first selection condition or a second selection condition. The first selection condition is that the motion information selected from the motion information of the peripheral matching block is not allowed to be bidirectional motion information (i.e., only unidirectional motion information, the forward motion information of bidirectional motion information, or the backward motion information of bidirectional motion information is allowed). The second selection condition is that the motion information selected from the motion information of the peripheral matching block is allowed to be bidirectional motion information (i.e., unidirectional motion information, the forward motion information of bidirectional motion information, the backward motion information of bidirectional motion information, or bidirectional motion information is allowed).
For example, if the size of the current block satisfies: the width is greater than or equal to a preset size parameter (which may be configured empirically, such as 8, etc.), the height is greater than or equal to the preset size parameter, and the selection condition is determined as the second selection condition for any motion information angle prediction mode. If the size of the current block satisfies: the width is smaller than the preset size parameter, the height is larger than the preset size parameter, and when the target motion information angle prediction mode is a vertical prediction mode, the selection condition is determined to be a second selection condition; and when the target motion information angle prediction mode is other than the vertical prediction mode, determining that the selection condition is the first selection condition.
For another example, if the size of the current block satisfies: the height is smaller than the preset size parameter, the width is larger than the preset size parameter, and when the target motion information angle prediction mode is a horizontal prediction mode, the selection condition is determined to be a second selection condition; and when the target motion information angle prediction mode is other than the horizontal prediction mode, determining the selection condition as a first selection condition. If the size of the current block satisfies: and if the height is smaller than the preset size parameter and the width is smaller than the preset size parameter, determining the selection condition as a first selection condition aiming at any motion information angle prediction mode. If the size of the current block satisfies: the height is smaller than the preset size parameter, the width is equal to the preset size parameter, or the height is equal to the preset size parameter, the width is smaller than the preset size parameter, and the selection condition is determined to be a first selection condition aiming at any motion information angle prediction mode.
Referring to table 1 in the following embodiments, taking the example that the preset size parameter is 8, the "one-way" in table 1 indicates that the selection condition is the first selection condition, that is, bidirectional motion information is not allowed, and the "two-way" in table 1 indicates that the selection condition is the second selection condition, that is, bidirectional motion information is allowed.
Step b2, according to the target motion information angle prediction mode and the size of the current block, determining the sub-region division information of the current block, namely the sub-region division information indicates the way of dividing the current block into sub-regions.
Illustratively, when the target motion information angle prediction mode is a horizontal upward prediction mode, a horizontal downward prediction mode or a vertical rightward prediction mode, if the width of the current block is greater than or equal to a preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the width of the current block is smaller than the preset size parameter, or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 x 4.
When the target motion information angle prediction mode is the horizontal prediction mode, if the width of the current block is greater than the preset size parameter, the size of the sub-region is the width of the current block × 4, or the size of the sub-region is 4 × 4; if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4.
When the target motion information angle prediction mode is the vertical prediction mode, if the height of the current block is greater than the preset size parameter, the size of the sub-region is 4 × the height of the current block, or the size of the sub-region is 4 × 4; if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4.
Referring to table 1 in the following examples, the preset size parameter is 8.
In one example, the correspondence between the size of the current block, the motion information angle prediction mode, the size of the sub-region, and the direction of the sub-region (uni-directional representing the first selection condition, i.e., bidirectional motion information is not allowed; bi-directional representing the second selection condition, i.e., bidirectional motion information is allowed) may be as shown in Table 1 below.
TABLE 1
[Table 1 of the original document: for each combination of current block size and motion information angle prediction mode, the corresponding sub-region size and direction (uni-directional or bi-directional), following the rules described above with the preset size parameter equal to 8.]
In one example, when the target motion information angular prediction mode is the horizontal prediction mode, if the width of the current block is greater than 8, the size of the sub-region may also be 4 × 4. When the angular prediction mode of the target motion information is the vertical prediction mode, if the height of the current block is greater than 8, the size of the sub-region may also be 4 × 4.
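Taking the preset size parameter as 8, the size-dependent rules of steps b1 and b2 (Table 1) can be written as a small lookup. This is a sketch of one reading of the rules above; in particular, the strip-shaped sub-region sizes for the horizontal and vertical prediction modes follow the wording "width of the current block × 4" and "4 × height of the current block" and should be treated as illustrative rather than normative.

```python
PRESET = 8  # preset size parameter

def selection_condition(mode, w, h):
    """True when bidirectional motion information is allowed (second selection
    condition), False for the first selection condition (unidirectional only)."""
    if w >= PRESET and h >= PRESET:
        return True
    if w < PRESET and h > PRESET:
        return mode == "vertical"
    if h < PRESET and w > PRESET:
        return mode == "horizontal"
    return False   # remaining small-block cases: first selection condition

def subregion_size(mode, w, h):
    """(sub_width, sub_height) of a sub-region for the given mode and block size."""
    if mode in ("horizontal_up", "horizontal_down", "vertical_right"):
        return (8, 8) if (w >= PRESET and h >= PRESET) else (4, 4)
    if mode == "horizontal":
        if w > PRESET:
            return (w, 4)          # full-width strip (4 x 4 is also permitted)
        if w == PRESET and h >= PRESET:
            return (8, 8)
        return (4, 4)
    if mode == "vertical":
        if h > PRESET:
            return (4, h)          # full-height strip (4 x 4 is also permitted)
        if h == PRESET and w >= PRESET:
            return (8, 8)
        return (4, 4)
    raise ValueError("unknown motion information angle prediction mode")
```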
Step b3, based on the pre-configured angle corresponding to the target motion information angle prediction mode, the encoding end selects a plurality of peripheral matching blocks pointed by the pre-configured angle from the peripheral blocks of the current block.
For example, for any motion information angle prediction mode of a horizontal prediction mode, a vertical prediction mode, a horizontal upward prediction mode, a horizontal downward prediction mode, and a vertical rightward prediction mode, a preconfigured angle corresponding to the motion information angle prediction mode may be known. After the pre-configured angle is known, the peripheral matching block pointed by the pre-configured angle can be selected from the peripheral blocks of the current block, which is not limited in this respect.
Step b4, determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of a plurality of peripheral matching blocks. For example, the current block is divided into at least one sub-region according to the sub-region division information; for each sub-region of the current block, a peripheral matching block corresponding to the sub-region can be selected from peripheral matching blocks of the current block according to the target motion information angle prediction mode, and motion information of the sub-region is determined according to the motion information and the selection condition of the peripheral matching block corresponding to the sub-region. Then, motion information of the at least one sub-region is determined as motion information of the current block.
For example, referring to the above-described embodiment, it is assumed that the current block is divided into sub-region 1 and sub-region 2 according to the sub-region division information. For sub-region 1, a peripheral matching block 1 corresponding to sub-region 1 is selected from the peripheral matching blocks of the current block according to the target motion information angle prediction mode. Assuming that the peripheral matching block 1 stores bidirectional motion information (i.e., forward motion information and backward motion information), if the selection condition of sub-region 1 is the first selection condition, the forward motion information or the backward motion information corresponding to the peripheral matching block 1 is used as the motion information of sub-region 1; if the selection condition of sub-region 1 is the second selection condition, the bidirectional motion information (i.e., the forward motion information and the backward motion information) corresponding to the peripheral matching block 1 is used as the motion information of sub-region 1.
For sub-region 2, a peripheral matching block 2 corresponding to sub-region 2 is selected from the peripheral matching blocks of the current block according to the target motion information angle prediction mode. Assuming that the peripheral matching block 2 stores unidirectional motion information, the unidirectional motion information corresponding to the peripheral matching block 2 is used as the motion information of sub-region 2. Then, both the motion information of sub-region 1 and the motion information of sub-region 2 are determined as the motion information of the current block. The motion information of sub-region 1 and the motion information of sub-region 2 may be stored in 4 × 4 sub-block units.
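The handling of sub-region 1 and sub-region 2 above generalizes to the following sketch of step b4 (the data layout and the matching_block_for helper are assumptions): each sub-region copies the motion information of its peripheral matching block, reduced to a single direction when the first selection condition applies.

```python
def derive_subregion_motion(subregions, matching_block_for, allow_bidir):
    """Step b4 sketch.  subregions is a list of sub-region identifiers;
    matching_block_for(sub) returns the peripheral matching block of the sub-region,
    with .forward / .backward motion information (either may be None when the block
    only stores unidirectional motion information); allow_bidir is the selection
    condition determined in step b1."""
    motion = {}
    for sub in subregions:
        blk = matching_block_for(sub)
        if blk.forward is not None and blk.backward is not None and not allow_bidir:
            # first selection condition: keep only one direction (forward here)
            motion[sub] = (blk.forward, None)
        else:
            # unidirectional information, or bidirectional with the second condition
            motion[sub] = (blk.forward, blk.backward)
    return motion
```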
In a second mode, the process of determining motion information for the current block may include the following steps:
and c1, according to the pre-configuration angle corresponding to the target motion information angle prediction mode, the encoding end selects the peripheral matching block pointed by the pre-configuration angle from the peripheral blocks of the current block.
For example, for any motion information angle prediction mode of a horizontal prediction mode, a vertical prediction mode, a horizontal upward prediction mode, a horizontal downward prediction mode, and a vertical rightward prediction mode, a preconfigured angle corresponding to the motion information angle prediction mode may be known. After the pre-configured angle is known, the peripheral matching block pointed by the pre-configured angle can be selected from the peripheral blocks of the current block, which is not limited in this respect.
And c2, determining the motion information of the current block according to the motion information of the peripheral matching block.
Illustratively, if the width and height of the current block are both greater than or equal to 8, the current block is divided into sub-blocks by 8 × 8, and the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information.
For example, if the width and height of the current block are both greater than or equal to 8, the current block is divided into at least one sub-region in a manner of 8 × 8. And for each sub-region of the current block, determining the motion information of the sub-region according to the motion information of a peripheral matching block corresponding to the sub-region, wherein the motion information of the peripheral matching block is allowed to be bidirectional motion information (namely allowing unidirectional motion information, forward motion information in the bidirectional motion information and backward motion information in the bidirectional motion information). Motion information of at least one sub-region is determined as motion information of the current block.
In a third mode, the process of determining motion information for the current block may include the following steps:
d1, determining the selection condition of the current block for obtaining the motion information according to the size of the current block; the selection condition is a second selection condition that the motion information selected from the motion information of the peripheral matching block is allowed to be bidirectional motion information (unidirectional motion information is allowed, forward motion information in bidirectional motion information, backward motion information in bidirectional motion information). For example, if the size of the current block satisfies: the width is greater than or equal to the preset size parameter (empirically configured, e.g. 8), and the height is greater than or equal to the preset size parameter, then the selection condition is determined to be the second selection condition, independent of the target motion information angle prediction mode.
And d2, determining the sub-region partition information of the current block according to the size of the current block. For example, if the size of the current block satisfies: the width is greater than or equal to the preset size parameter (empirically configured, e.g. 8), and the height is greater than or equal to the preset size parameter, then the size of the sub-region is 8 × 8, independent of the target motion information angle prediction mode.
And d3, selecting the peripheral matching block pointed by the pre-configuration angle from the peripheral blocks of the current block according to the pre-configuration angle corresponding to the target motion information angle prediction mode. For example, after learning the preconfigured angle, the peripheral matching block pointed by the preconfigured angle may be selected from the peripheral blocks of the current block, which is not limited in this respect.
And d4, determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the peripheral matching block. For example, the encoding end may divide the current block into at least one sub-region according to the sub-region division information; and aiming at each sub-region of the current block, selecting a peripheral matching block corresponding to the sub-region from peripheral matching blocks of the current block according to the target motion information angle prediction mode, and determining the motion information of the sub-region according to the motion information of the peripheral matching block corresponding to the sub-region and the selection condition. Then, motion information of at least one sub-region may be determined as motion information of the current block.
In a fourth mode, the process of determining motion information for the current block may include the following steps:
step e1, based on the pre-configured angle corresponding to the target motion information angle prediction mode, the encoding end selects a plurality of peripheral matching blocks pointed by the pre-configured angle from the peripheral blocks of the current block.
For example, for any motion information angle prediction mode of a horizontal prediction mode, a vertical prediction mode, a horizontal upward prediction mode, a horizontal downward prediction mode, and a vertical rightward prediction mode, a preconfigured angle corresponding to the motion information angle prediction mode may be known. After the pre-configured angle is known, the peripheral matching block pointed by the pre-configured angle can be selected from the peripheral blocks of the current block, which is not limited in this respect.
Step e2, the encoding end divides the current block into at least one sub-region, which is not limited to this dividing method.
And e3, aiming at each sub-region, the encoding end selects a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks, and determines the motion information of the sub-region according to the motion information of the selected peripheral matching block.
For example, for each sub-region of the current block, a peripheral matching block corresponding to the sub-region is selected from among a plurality of peripheral matching blocks, and motion information of the peripheral matching block is determined as motion information of the sub-region.
Step e4, determining the motion information of the at least one sub-region as the motion information of the current block.
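For mode four, the division and per-sub-region assignment of steps e1 to e4 can be sketched as follows; the offsets-and-dictionary representation and the matching_motion_for helper are assumptions made for the example.

```python
def split_into_subregions(block_w, block_h, sub_w, sub_h):
    """Step e2 sketch: divide the current block into sub-regions of size
    sub_w x sub_h and yield the top-left offset of each sub-region."""
    for y in range(0, block_h, sub_h):
        for x in range(0, block_w, sub_w):
            yield (x, y)

def mode_four_motion(block_w, block_h, sub_w, sub_h, matching_motion_for):
    """Steps e1-e4 sketch.  matching_motion_for(x, y) is a hypothetical helper
    returning the motion information of the peripheral matching block pointed to
    by the preconfigured angle for the sub-region at offset (x, y); the result
    maps each sub-region to its motion information (step e4)."""
    return {(x, y): matching_motion_for(x, y)
            for (x, y) in split_into_subregions(block_w, block_h, sub_w, sub_h)}
```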
Example 5: based on the same application concept as the above method, referring to fig. 7, it is a schematic flow chart of the encoding and decoding method according to the embodiment of the present application, where the method may be applied to a decoding end, and the method may include:
step 701, the decoding end fills the motion information of the surrounding blocks of the current block.
In step 702, the decoding end creates a motion information prediction mode candidate list corresponding to the current block, where the motion information prediction mode candidate list may include a motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes, which is not limited in this respect.
Illustratively, the motion information prediction mode candidate list at the decoding end is the same as the motion information prediction mode candidate list at the encoding end, i.e. the motion information prediction modes of the two are in the same order.
For example, steps 701 to 702 refer to steps 601 to 602, which are not described herein again.
In step 703, the decoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, where the target motion information prediction mode may be a target motion information angle prediction mode or another type of motion information prediction mode (i.e., a motion information prediction mode in a conventional manner).
Illustratively, for the processing procedure of step 703, the procedure may include the following steps:
in step f1, the decoding end obtains indication information from the coded bit stream, where the indication information is used to indicate index information of the target motion information prediction mode in the motion information prediction mode candidate list.
Illustratively, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream carries indication information, where the indication information is used to indicate index information of the target motion information prediction mode in the motion information prediction mode candidate list. For example, the motion information prediction mode candidate list sequentially includes: a horizontal down prediction mode, a vertical right prediction mode, a motion information prediction mode R, and the indication information is used to indicate index information 1, and index information 1 represents the first motion information prediction mode in the motion information prediction mode candidate list.
And step f2, the decoding end selects the motion information prediction mode corresponding to the index information from the motion information prediction mode candidate list, and determines the selected motion information prediction mode as the target motion information prediction mode of the current block. For example, when the indication information indicates index information 1, the decoding end may determine the 1 st motion information prediction mode in the motion information prediction mode candidate list as the target motion information prediction mode of the current block, that is, the target motion information prediction mode is a horizontal downward prediction mode.
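A minimal sketch of steps f1 and f2 (the list contents and index convention follow the example above; the function name is an assumption): the decoder reads the index information carried in the coded bit stream and takes the corresponding entry of the candidate list as the target motion information prediction mode.

```python
def decode_target_mode(candidate_list, index_info):
    """index_info is the index information parsed from the coded bit stream;
    index 1 denotes the first motion information prediction mode in the list."""
    return candidate_list[index_info - 1]

# usage, matching the example above:
candidates = ["horizontal_down", "vertical_right", "mode_R"]
assert decode_target_mode(candidates, 1) == "horizontal_down"
```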
Step 704, if the target motion information prediction mode is the target motion information angle prediction mode, the decoding end decodes the current block according to the target motion information angle prediction mode.
For example, the decoding end may determine motion information of each sub-region in the current block according to the target motion information angular prediction mode, and perform motion compensation on each sub-region by using the motion information of the sub-region.
For example, decoding the current block according to the target motion information angle prediction mode may include: determining the motion information of the current block according to the angle prediction mode of the target motion information; based on the motion information of the current block, a prediction value of the current block is determined, which is a motion compensation process.
For an exemplary implementation of step 704, refer to step 604, which is not described herein.
Example 6: for steps 601 and 701, the motion information of the peripheral blocks of the current block needs to be filled. Assume that the width and the height of the current block are both 16, and the motion information of the peripheral blocks is stored in a minimum unit of 4 × 4. Referring to FIG. 8A, assume that A14, A15, A16, and A17 are uncoded blocks; these uncoded blocks need to be filled, and the filling method may be any one of the following: filling with the available motion information of a neighboring block; filling with default motion information (e.g., a zero motion vector); or filling with the available motion information of the block at the corresponding position in the time domain reference frame. Of course, the above manners are merely examples and are not limiting. If the current block has another size, the filling may also be performed in the above manner, which is not described again here.
Example 7: for steps 601 and 701, the motion information of the peripheral blocks of the current block needs to be filled. Assume that the width and the height of the current block are both 16, and the motion information of the peripheral blocks is stored in a minimum unit of 4 × 4. Referring to FIG. 8B, assume that A7 is an intra-coded block; the intra-coded block needs to be filled, and the filling method may be any one of the following: filling with the available motion information of a neighboring block; filling with default motion information (e.g., a zero motion vector); or filling with the available motion information of the block at the corresponding position in the time domain reference frame. Of course, the above manners are merely examples and are not limiting. If the current block has another size, the filling may also be performed in the above manner, which is not described again here.
Example 8: for steps 602 and 702, it is necessary to create a motion information prediction mode candidate list corresponding to the current block, the motion information prediction mode candidate list including a motion information angle prediction mode.
Referring to fig. 9A, the peripheral blocks of the current block may include, but are not limited to: peripheral block A1, peripheral block A2, ..., peripheral block Am+1, ..., peripheral block Am+n+1, peripheral block Am+n+2, ..., peripheral block A2m+n+1, peripheral block A2m+n+2, ..., peripheral block A2m+2n+1, or other peripheral blocks. In summary, the peripheral blocks of the current block may include, but are not limited to: blocks adjacent to the current block, blocks not adjacent to the current block, and even blocks in other adjacent frames, which is not limited herein.
The width value of the current block is W, the height value of the current block is H, and the motion information of the peripheral blocks is stored in a minimum unit of 4 × 4. m and n are W/4 and H/4, respectively; i is any integer in [1, m]; j = i + step, where step is the traversal step size, an integer with 1 <= step <= Max(m, n), and Max(m, n) is the maximum of m and n; k is any integer in [2m+n+2, 2m+2n+1]. The following comparison process is performed:
and g1, judging whether j is larger than k, if so, exiting the comparison process, otherwise, executing the step g 2.
And step g2, comparing the motion information of the peripheral block Ai with the motion information of the peripheral block Aj.
Illustratively, if the motion information of the peripheral block Ai is the same as that of the peripheral block Aj, Diff [ i ] of the peripheral block Ai may be written as 0; if the motion information of the peripheral block Ai is different from that of the peripheral block Aj, Diff [ i ] of the peripheral block Ai may be written as 1. After step g2, step g3 is performed.
Step g3, i is j, j is j + step, the value of step is any integer of [1, Max (m, n) ], the value of step may be the same each time, the value of step may be different each time, and then the process returns to step g 1.
Through the above-described processing, after exiting the comparison process, it may be decided whether to add the motion information angle prediction mode to the motion information prediction mode candidate list according to the comparison result (i.e., the value of Diff).
For the horizontal prediction mode, any j values of Diff[i] with i in the interval [m+1, m+n] are examined, where 1 <= j <= n; if these j Diff values are all 0, mode[0] is recorded as 0, meaning that the motion information is all the same; otherwise mode[0] is recorded as 1, meaning that the motion information is not all the same. For the vertical prediction mode, any j values of Diff[i] with i in the interval [m+n+2, 2m+n+1] are examined, where 1 <= j <= m; if these j Diff values are all 0, mode[1] is recorded as 0; otherwise mode[1] is recorded as 1. For the horizontal upward prediction mode, any j values of Diff[i] with i in the interval [m+1, 2m+n+1] are examined, where 1 <= j <= m+n+1; if these j Diff values are all 0, mode[2] is recorded as 0; otherwise mode[2] is recorded as 1. For the horizontal downward prediction mode, any j values of Diff[i] with i in the interval [1, m+n] are examined, where 1 <= j <= m+n; if these j Diff values are all 0, mode[3] is recorded as 0; otherwise mode[3] is recorded as 1. For the vertical rightward prediction mode, any j values of Diff[i] with i in the interval [m+n+2, 2m+2n+1] are examined, where 1 <= j <= m+n; if these j Diff values are all 0, mode[4] is recorded as 0; otherwise mode[4] is recorded as 1.
Through the above-described processing, a mode value of each motion information angle prediction mode can be obtained, and then, a motion information angle prediction mode of which mode value is 1 is added to the motion information prediction mode candidate list, and a motion information angle prediction mode of which mode value is 0 is prohibited from being added to the motion information prediction mode candidate list.
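The comparison process of Example 8 can be sketched as follows, assuming for simplicity that the traversal step is fixed (the patent allows it to vary per iteration) and that the mode decision examines the Diff entries produced by the traversal that fall into each interval; indexing and names are illustrative.

```python
def compute_diff(peripheral_mi, start_i, k, step):
    """Steps g1-g3 sketch.  peripheral_mi[i] is the motion information of
    peripheral block Ai (1 <= i <= 2m+2n+1); Diff[i] is 0 when Ai and its
    traversal partner carry the same motion information, 1 otherwise."""
    diff = {}
    i, j = start_i, start_i + step
    while j <= k:                                                   # step g1
        diff[i] = 0 if peripheral_mi[i] == peripheral_mi[j] else 1  # step g2
        i, j = j, j + step                                          # step g3
    return diff

def mode_flags(diff, m, n):
    """mode[x] = 1 means the motion information along the angle is not completely
    the same, so the mode may be added to the candidate list; 0 means it may not."""
    def not_all_same(lo, hi):
        vals = [diff[i] for i in range(lo, hi + 1) if i in diff]
        return 1 if any(v == 1 for v in vals) else 0
    return {
        "horizontal":      not_all_same(m + 1, m + n),                  # mode[0]
        "vertical":        not_all_same(m + n + 2, 2 * m + n + 1),      # mode[1]
        "horizontal_up":   not_all_same(m + 1, 2 * m + n + 1),          # mode[2]
        "horizontal_down": not_all_same(1, m + n),                      # mode[3]
        "vertical_right":  not_all_same(m + n + 2, 2 * m + 2 * n + 1),  # mode[4]
    }
```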
Example 9: for steps 602 and 702, the width value of the current block is W, the height value of the current block is H, W is greater than or equal to 8, H is greater than or equal to 8, the sizes of m and n are W/4 and H/4, respectively, let i be W/8, let j be i + step, and step be W/8, based on which the following comparison procedure is performed:
and h1, judging whether j is larger than 2m +2n +1, if so, exiting the comparison process, otherwise, executing a step h 2.
And h2, comparing the motion information of the peripheral block Ai with the motion information of the peripheral block Aj.
Illustratively, if the motion information of the peripheral block Ai is the same as that of the peripheral block Aj, Diff [ i ] of the peripheral block Ai may be written as 0; if the motion information of the peripheral block Ai is different from that of the peripheral block Aj, Diff [ i ] of the peripheral block Ai may be written as 1. After step h2, step h3 is performed.
Step H3, judging whether m < ═ j < m + n is true, if true, step H/8; otherwise, it may also be determined whether m + n < ═ j < m + n +2 holds. If true, step is 1; otherwise, further judging whether m + n +2< ═ j <2m + n +1 holds. If yes, step is W/8; otherwise, judging whether 2m + n +1< ═ j <2m +2n +1 is true. If yes, step is H/8; otherwise, step remains unchanged.
In step h4, let i be j and j be j + step, and then return to step h1 to perform the processing.
Through the above-described processing, after exiting the comparison process, it may be decided whether to add the motion information angle prediction mode to the motion information prediction mode candidate list according to the comparison result (i.e., the value of Diff).
For the horizontal prediction mode, the value of Diff[m+n-H/8] is examined; if this Diff value is 0, mode[0] is recorded as 0, meaning that the motion information is all the same; otherwise mode[0] is recorded as 1, meaning that the motion information is not all the same. For the vertical prediction mode, the value of Diff[m+n+2] is examined; if this Diff value is 0, mode[1] is recorded as 0; otherwise mode[1] is recorded as 1. For the horizontal upward prediction mode, the values of Diff[i] are examined for i = m+n and i = m+n+1; if both Diff values are 0, mode[2] is recorded as 0; otherwise mode[2] is recorded as 1. For the horizontal downward prediction mode, the values of Diff[i] are examined for i = W/8 and i = m; if both Diff values are 0, mode[3] is recorded as 0; otherwise mode[3] is recorded as 1. For the vertical rightward prediction mode, the values of Diff[i] are examined for i = m+n+2+W/8 and i = 2m+n+2; if both Diff values are 0, mode[4] is recorded as 0; otherwise mode[4] is recorded as 1.
Through the above-described processing, a mode value of each motion information angle prediction mode can be obtained, and then, a motion information angle prediction mode of which mode value is 1 is added to the motion information prediction mode candidate list, and a motion information angle prediction mode of which mode value is 0 is prohibited from being added to the motion information prediction mode candidate list.
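The adaptive traversal step of Example 9 (step h3) can be sketched as below; W and H are the width and height of the current block, m = W/4, n = H/4, and the dictionary-of-Diff representation is an assumption carried over from the sketch of Example 8.

```python
def compute_diff_adaptive(peripheral_mi, m, n, W, H):
    """Steps h1-h4 sketch: the step is W/8 or H/8 along the horizontal and
    vertical runs of the perimeter and 1 around the corner region."""
    diff = {}
    i = W // 8
    step = W // 8
    j = i + step
    while j <= 2 * m + 2 * n + 1:                                   # step h1
        diff[i] = 0 if peripheral_mi[i] == peripheral_mi[j] else 1  # step h2
        # step h3: adjust the traversal step according to where j lies
        if m <= j < m + n:
            step = H // 8
        elif m + n <= j < m + n + 2:
            step = 1
        elif m + n + 2 <= j < 2 * m + n + 1:
            step = W // 8
        elif 2 * m + n + 1 <= j < 2 * m + 2 * n + 1:
            step = H // 8
        # otherwise the step remains unchanged
        i, j = j, j + step                                          # step h4
    return diff
```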
Example 10: for steps 602 and 702, the width value of the current block is W, the height value of the current block is H, W is 16, and H is 16, and the motion information of the peripheral blocks is stored in the minimum unit of 4 × 4.
Referring to FIG. 9B, for the horizontal prediction mode, compare whether the motion information of A6 and the motion information of A8 are the same; if they are not the same, the horizontal prediction mode is added to the motion information prediction mode candidate list, and if they are the same, the horizontal prediction mode is prohibited from being added to the motion information prediction mode candidate list.
For the vertical prediction mode, compare whether the motion information of A10 and the motion information of A12 are the same; if they are not the same, the vertical prediction mode may be added to the motion information prediction mode candidate list, and if they are the same, the vertical prediction mode may be prohibited from being added to the motion information prediction mode candidate list.
For the horizontal upward prediction mode, compare whether the motion information of A8 and the motion information of A9 are the same, and compare whether the motion information of A9 and the motion information of A10 are the same. If the motion information of A8 is the same as that of A9, and the motion information of A9 is the same as that of A10, the horizontal upward prediction mode may be prohibited from being added to the motion information prediction mode candidate list. Otherwise, if the motion information of A8 is different from that of A9, and/or the motion information of A9 is different from that of A10, the horizontal upward prediction mode may be added to the motion information prediction mode candidate list.
For the horizontal downward prediction mode, compare whether the motion information of A2 and the motion information of A4 are the same, and compare whether the motion information of A4 and the motion information of A6 are the same. If the motion information of A2 is the same as that of A4, and the motion information of A4 is the same as that of A6, the horizontal downward prediction mode may be prohibited from being added to the motion information prediction mode candidate list. Otherwise, if the motion information of A2 is different from that of A4, and/or the motion information of A4 is different from that of A6, the horizontal downward prediction mode may be added to the motion information prediction mode candidate list.
For the vertical rightward prediction mode, compare whether the motion information of A12 and the motion information of A14 are the same, and compare whether the motion information of A14 and the motion information of A16 are the same. If the motion information of A12 is the same as that of A14, and the motion information of A14 is the same as that of A16, the vertical rightward prediction mode may be prohibited from being added to the motion information prediction mode candidate list. Otherwise, if the motion information of A12 is different from that of A14, and/or the motion information of A14 is different from that of A16, the vertical rightward prediction mode may be added to the motion information prediction mode candidate list.
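The checks of Example 10 reduce to a few fixed index pairs, as in the sketch below (mi[i] stands for the motion information of peripheral block Ai; the dictionary encoding is an assumption made for the example).

```python
# Pairwise comparisons of Example 10 for a 16 x 16 current block.
PAIRS_16x16 = {
    "horizontal":      [(6, 8)],
    "vertical":        [(10, 12)],
    "horizontal_up":   [(8, 9), (9, 10)],
    "horizontal_down": [(2, 4), (4, 6)],
    "vertical_right":  [(12, 14), (14, 16)],
}

def addable_modes_16x16(mi):
    """A mode may be added to the candidate list as soon as any of its pairs carries
    different motion information; if all pairs match, the mode is not added."""
    return [mode for mode, pairs in PAIRS_16x16.items()
            if any(mi[a] != mi[b] for a, b in pairs)]
```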
Example 11: for steps 602 and 702, the width value of the current block is W, the height value of the current block is H, W is 16, and H is 32, and the motion information of the peripheral blocks is stored in the minimum unit of 4 × 4.
Referring to FIG. 9C, for the horizontal prediction mode, compare whether the motion information of A8 and the motion information of A12 are the same; if they are not the same, the horizontal prediction mode is added to the motion information prediction mode candidate list, and if they are the same, the horizontal prediction mode is prohibited from being added to the motion information prediction mode candidate list.
For the vertical prediction mode, compare whether the motion information of A14 and the motion information of A16 are the same; if they are not the same, the vertical prediction mode may be added to the motion information prediction mode candidate list, and if they are the same, the vertical prediction mode may be prohibited from being added to the motion information prediction mode candidate list.
For the horizontal upward prediction mode, compare whether the motion information of A12 and the motion information of A13 are the same, and compare whether the motion information of A13 and the motion information of A14 are the same. If the motion information of A12 is the same as that of A13, and the motion information of A13 is the same as that of A14, the horizontal upward prediction mode may be prohibited from being added to the motion information prediction mode candidate list. Otherwise, if the motion information of A12 is different from that of A13, and/or the motion information of A13 is different from that of A14, the horizontal upward prediction mode may be added to the motion information prediction mode candidate list.
For the horizontal downward prediction mode, compare whether the motion information of A2 and the motion information of A4 are the same, and compare whether the motion information of A4 and the motion information of A8 are the same. If the motion information of A2 is the same as that of A4, and the motion information of A4 is the same as that of A8, the horizontal downward prediction mode may be prohibited from being added to the motion information prediction mode candidate list. Otherwise, if the motion information of A2 is different from that of A4, and/or the motion information of A4 is different from that of A8, the horizontal downward prediction mode may be added to the motion information prediction mode candidate list.
For the vertical rightward prediction mode, compare whether the motion information of A16 and the motion information of A18 are the same, and compare whether the motion information of A18 and the motion information of A22 are the same. If the motion information of A16 is the same as that of A18, and the motion information of A18 is the same as that of A22, the vertical rightward prediction mode may be prohibited from being added to the motion information prediction mode candidate list. Otherwise, if the motion information of A16 is different from that of A18, and/or the motion information of A18 is different from that of A22, the vertical rightward prediction mode may be added to the motion information prediction mode candidate list.
Example 12: for steps 604 and 704, a motion compensation process, i.e., encoding or decoding the current block according to the target motion information angular prediction mode, is required. In the motion compensation process, the selection condition of the current block for obtaining the motion information can be determined according to the angle prediction mode of the target motion information and the size of the current block; the selection condition is a first selection condition or a second selection condition, the first selection condition is that motion information selected from motion information of the peripheral matching block is not allowed to be bidirectional motion information, and the second selection condition is that motion information selected from motion information of the peripheral matching block is allowed to be bidirectional motion information. And determining the sub-region division information of the current block according to the target motion information angle prediction mode and the size of the current block. And selecting a peripheral matching block pointed by a preset angle from peripheral blocks of the current block according to the preset angle corresponding to the target motion information angle prediction mode. And determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the peripheral matching block.
Illustratively, not allowing bidirectional motion information may include: if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information of the peripheral matching block is allowed to be selected as the motion information of the current block or the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, only the forward motion information or the backward motion information of the bidirectional motion information of the peripheral matching block is allowed to be selected as the motion information of the current block or the sub-region. Allowing bidirectional motion information may include: if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information of the peripheral matching block is allowed to be selected as the motion information of the current block or the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information of the peripheral matching block is allowed to be selected as the motion information of the current block or the sub-region.
For example, determining the selection condition used by the current block for obtaining motion information according to the motion information angle prediction mode and the size of the current block may include, but is not limited to, the following cases. If the size of the current block satisfies: the width is greater than or equal to a preset size parameter and the height is greater than or equal to the preset size parameter, the selection condition is determined to be the second selection condition for any motion information angle prediction mode. If the size of the current block satisfies: the width is smaller than the preset size parameter and the height is larger than the preset size parameter, the selection condition is determined to be the second selection condition when the motion information angle prediction mode is the vertical prediction mode, and the first selection condition when the motion information angle prediction mode is a prediction mode other than the vertical prediction mode. If the size of the current block satisfies: the height is smaller than the preset size parameter and the width is larger than the preset size parameter, the selection condition is determined to be the second selection condition when the motion information angle prediction mode is the horizontal prediction mode, and the first selection condition when the motion information angle prediction mode is a prediction mode other than the horizontal prediction mode. If the size of the current block satisfies: the height is smaller than the preset size parameter and the width is smaller than the preset size parameter, the selection condition is determined to be the first selection condition for any motion information angle prediction mode. If the size of the current block satisfies: the height is smaller than the preset size parameter and the width is equal to the preset size parameter, or the height is equal to the preset size parameter and the width is smaller than the preset size parameter, the selection condition is determined to be the first selection condition for any motion information angle prediction mode.
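A compact restatement of these size and mode cases is sketched below, assuming the preset size parameter is 8 (as stated further below); the mode and condition names are illustrative only.

```python
PRESET = 8  # preset size parameter, assumed to be 8 for this sketch

def selection_condition(width, height, mode):
    # Returns 'second' (bidirectional allowed) or 'first' (not allowed),
    # following the size/mode cases listed above.
    if width >= PRESET and height >= PRESET:
        return "second"                                   # any angular mode
    if width < PRESET and height > PRESET:
        return "second" if mode == "vertical" else "first"
    if height < PRESET and width > PRESET:
        return "second" if mode == "horizontal" else "first"
    # remaining cases: both dimensions small, or one smaller than and the
    # other equal to the preset size parameter
    return "first"

print(selection_condition(4, 16, "vertical"))          # second
print(selection_condition(4, 16, "horizontal"))        # first
print(selection_condition(16, 16, "horizontal_down"))  # second
```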
Determining the sub-region partition information of the current block according to the motion information angle prediction mode and the size of the current block may include: when the motion information angle prediction mode is a horizontal upward prediction mode, a horizontal downward prediction mode or a vertical rightward prediction mode, if the width of the current block is greater than or equal to a preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 x 8; if the width of the current block is smaller than the preset size parameter, or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 x 4.
When the motion information angle prediction mode is the horizontal prediction mode, if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4, and the height of the current block may be larger than, equal to, or smaller than the preset size parameter; if the width of the current block is larger than the preset size parameter, the size of the sub-region is the width of the current block × 4, or the size of the sub-region is 4 × 4, and the height of the current block may be larger than, equal to, or smaller than the preset size parameter; if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8.
When the motion information angle prediction mode is the vertical prediction mode, if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4, and the width of the current block may be larger than, equal to, or smaller than the preset size parameter; if the height of the current block is larger than the preset size parameter, the size of the sub-region is 4 × the height of the current block, or the size of the sub-region is 4 × 4, and the width of the current block may be larger than, equal to, or smaller than the preset size parameter; if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8.
In one example, the predetermined size parameter may be 8, or may be some other value. When the preset size parameter is 8, determining the sub-region division and selection condition of the current block may be as shown in table 1.
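The sub-region division rules above may likewise be sketched as follows, with the preset size parameter fixed to 8. The fallback for cases the text does not spell out (including the width × height ≤ 32 case of Example 13 below) is an assumption made for illustration.

```python
PRESET = 8  # preset size parameter

def subregion_size(width, height, mode):
    # Returns (sub_width, sub_height) per the rules above. Cases not
    # spelled out in the text fall back to 4x4 here (an assumption).
    if mode in ("horizontal_up", "horizontal_down", "vertical_right"):
        return (8, 8) if width >= PRESET and height >= PRESET else (4, 4)
    if mode == "horizontal":
        if width > PRESET:
            return (width, 4)     # the text also allows 4x4 here
        if width == PRESET and height >= PRESET:
            return (8, 8)
        return (4, 4)
    if mode == "vertical":
        if height > PRESET:
            return (4, height)    # the text also allows 4x4 here
        if height == PRESET and width >= PRESET:
            return (8, 8)
        return (4, 4)
    raise ValueError("unknown motion information angle prediction mode")

print(subregion_size(16, 8, "horizontal"))        # (16, 4)
print(subregion_size(8, 16, "vertical"))          # (4, 16)
print(subregion_size(16, 16, "horizontal_down"))  # (8, 8)
```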
In one example, determining the motion information of the current block according to the selection condition, the sub-region partition information and the motion information of the peripheral matching block may include, but is not limited to, the following steps (see the sketch after these steps):
dividing the current block into at least one sub-region according to the sub-region division information;
for each sub-region of the current block, selecting a peripheral matching block corresponding to the sub-region from peripheral matching blocks of the current block according to the motion information angle prediction mode, and determining motion information of the sub-region according to the motion information of the peripheral matching block corresponding to the sub-region and the selection condition;
determining motion information of the at least one sub-region as motion information of the current block.
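The three steps above may be sketched schematically as follows. The geometry of "the peripheral matching block pointed to by the preset angle" depends on the figures and is abstracted here as a caller-supplied function; all names in this Python sketch are assumptions for illustration.

```python
def derive_block_motion_info(width, height, sub_w, sub_h,
                             match_block_for, select_motion_info):
    # Step 1: divide the current block into sub-regions of size sub_w x sub_h.
    # Step 2: for each sub-region, look up the peripheral matching block that
    #         the preset angle points to (abstracted as match_block_for).
    # Step 3: apply the selection condition (abstracted as select_motion_info)
    #         and collect the result as the motion information of the block.
    block_motion_info = {}
    for y in range(0, height, sub_h):
        for x in range(0, width, sub_w):
            peripheral_mi = match_block_for(x, y)
            block_motion_info[(x, y)] = select_motion_info(peripheral_mi)
    return block_motion_info

# Toy usage: a 16x8 block split into 16x4 sub-regions (horizontal mode),
# every sub-region copying a unidirectional neighbour's motion information.
result = derive_block_motion_info(
    16, 8, 16, 4,
    match_block_for=lambda x, y: ("L0", (3, -1)),
    select_motion_info=lambda mi: mi)
print(result)
```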
In one example, determining motion information of a current block according to a motion information angle prediction mode includes: determining a selection condition of the current block for acquiring motion information according to the size of the current block; the selection condition is a second selection condition, and the second selection condition is that motion information selected from motion information of the peripheral matching block is allowed to be bidirectional motion information; determining the sub-region partition information of the current block according to the size of the current block; the sub-region division information of the current block includes: the size of the sub-region of the current block is 8 x 8. Selecting a peripheral matching block pointed by a preset angle from peripheral blocks of the current block according to the preset angle corresponding to the motion information angle prediction mode; and determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the peripheral matching block.
The motion compensation process in the above embodiments is described below with reference to several specific embodiments.
Example 13: referring to fig. 10A, the width W (4) of the current block multiplied by the height H (8) of the current block is less than or equal to 32, and for each 4 × 4 sub-region in the current block, unidirectional motion compensation (Uni) is performed at an angle, and bidirectional motion information is not allowed. And if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the sub-area. If the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is not determined as the motion information of the sub-region, but the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
Referring to table 1, example 13 is an example in which the width x height is equal to or less than 32 in table 1, and the subblock division size is 4 x 4 for an arbitrary angular prediction mode, and the selection condition is unidirectional.
According to fig. 10A, the size of the current block is 4 × 8, and when the target motion information prediction mode of the current block is the horizontal mode, the current block is divided into two sub-regions having the same size, one of the 4 × 4 sub-regions corresponds to the peripheral matching block a1, the motion information of the 4 × 4 sub-region is determined according to the motion information of a1, and if the motion information of the peripheral matching block a1 is unidirectional motion information, unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block a1 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region. Another 4 × 4 sub-region corresponds to the peripheral matching block a2, the motion information of the 4 × 4 sub-region is determined according to the motion information of a2, and if the motion information of the peripheral matching block a2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block a2 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 10A, the size of the current block is 4 × 8, and when the target motion information prediction mode of the current block is the vertical mode, the current block is divided into two sub-regions having the same size, one of the 4 × 4 sub-regions corresponds to the peripheral matching block B1, the motion information of the 4 × 4 sub-region is determined according to the motion information of B1, and if the motion information of the peripheral matching block B1 is unidirectional motion information, unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block B1 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region. And the other 4 × 4 sub-region corresponds to the peripheral matching block B1, the motion information of the 4 × 4 sub-region is determined according to the motion information of B1, and if the motion information of the peripheral matching block B1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block B1 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 10A, the size of the current block is 4 × 8, when the target motion information prediction mode of the current block is horizontal upward, two sub-regions with the same size are divided, one of the 4 × 4 sub-regions corresponds to the peripheral matching block E, the motion information of the 4 × 4 sub-region is determined according to the motion information of E, and if the motion information of the peripheral matching block E is unidirectional, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block E is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the sub-region. Another 4 × 4 sub-region corresponds to the peripheral matching block a1, the motion information of the 4 × 4 sub-region is determined according to the motion information of a1, and if the motion information of the peripheral matching block a1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block a1 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 10A, the size of the current block is 4 × 8, and when the target motion information prediction mode of the current block is horizontal downward, the current block is divided into two sub-regions having the same size, one of the 4 × 4 sub-regions corresponds to the peripheral matching block a2, the motion information of the 4 × 4 sub-region is determined according to the motion information of a2, and if the motion information of the peripheral matching block a2 is unidirectional motion information, unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block a2 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region. Another 4 × 4 sub-region corresponds to the peripheral matching block A3, the motion information of the 4 × 4 sub-region is determined according to the motion information of A3, and if the motion information of the peripheral matching block A3 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block a3 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 10A, the size of the current block is 4 × 8, and when the target motion information prediction mode of the current block is vertical rightward, the current block is divided into two sub-regions having the same size, one of the 4 × 4 sub-regions corresponds to the peripheral matching block B2, the motion information of the 4 × 4 sub-region is determined according to the motion information of B2, and if the motion information of the peripheral matching block B2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block B2 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region. And the other 4 × 4 sub-region corresponds to the peripheral matching block B3, the motion information of the 4 × 4 sub-region is determined according to the motion information of B3, and if the motion information of the peripheral matching block B3 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block B3 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
Example 14: referring to fig. 10B, if the width W of the current block is less than 8 and the height H of the current block is greater than 8, motion compensation can be performed on each sub-region in the current block as follows:
and if the angular prediction mode is the vertical prediction mode, performing motion compensation on each 4 × H sub-area according to the vertical angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area.
If the angular prediction mode is other angular prediction modes (such as a horizontal prediction mode, a horizontal upward prediction mode, a horizontal downward prediction mode, a vertical rightward prediction mode, etc.), unidirectional motion compensation may be performed at an angle for each 4 × 4 sub-region in the current block, and bidirectional motion information is not allowed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the sub-region.
Referring to table 1, example 14 is an example for table 1 with a width less than 8 and a height greater than 8, that is, for the vertical prediction mode, the subblock division size is 4 × H, and the selection condition allows bi-direction. For other angular prediction modes, the subblock division size is 4 × 4, and the selection condition is unidirectional.
According to fig. 10B, when the size of the current block is 4 × 16 and the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions having a size of 4 × 4 are divided, one of the 4 × 4 sub-regions corresponds to the peripheral matching block a1, and the motion information of the 4 × 4 sub-region is determined according to the motion information of a 1. One of the 4 × 4 sub-regions corresponds to the peripheral matching block a2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of a 2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block a4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of a 4. For any one of a1 to a4, if the motion information of the peripheral matching block is one-way motion information, the one-way motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the corresponding sub-area.
According to fig. 10B, the size of the current block is 4 × 16, and when the target motion information prediction mode of the current block is the vertical mode, 4 sub-regions having a size of 4 × 4 may be divided, each sub-region of 4 × 4 corresponds to the peripheral matching block B1, and the motion information of each sub-region of 4 × 4 is determined according to the motion information of B1. If the motion information of the peripheral matching block B1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. If the motion information of the peripheral matching block B1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region. The motion information of the four sub-regions is the same, so in this embodiment the current block may not be divided into sub-regions; the current block itself serves as a single sub-region corresponding to the peripheral matching block B1, and the motion information of the current block is determined according to the motion information of B1.
According to fig. 10B, when the size of the current block is 4 × 16, and the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions with the size of 4 × 4 are divided, one of the 4 × 4 sub-regions corresponds to the peripheral matching block E, and the motion information of the 4 × 4 sub-region is determined according to the motion information of E. One of the 4 × 4 sub-regions corresponds to the peripheral matching block a1, and the motion information of the 4 × 4 sub-region is determined according to the motion information of a 1. One of the 4 × 4 sub-regions corresponds to the peripheral matching block a2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of a 2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A3. For any one of E to a3, if the motion information of the peripheral matching block is one-way motion information, the one-way motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the corresponding sub-area.
According to fig. 10B, when the size of the current block is 4 × 16 and the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions with a size of 4 × 4 are divided, one 4 × 4 sub-region corresponds to the peripheral matching block A2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A4. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A5, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A5. For any one of A2 to A5, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 10B, the size of the current block is 4 × 16, and when the target motion information prediction mode of the current block is the vertical right mode, 4 sub-regions having a size of 4 × 4 are divided, one 4 × 4 sub-region corresponds to the peripheral matching block B2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B4. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B5, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B5. For any one of B2 through B5, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
Example 15: referring to fig. 10C, if the width W of the current block is greater than 8 and the height H of the current block is less than 8, then each sub-region in the current block may be motion compensated as follows:
and if the angular prediction mode is the horizontal prediction mode, performing motion compensation on each W4 sub-area according to the horizontal angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area.
If the angular prediction mode is other angular prediction modes, for each 4 × 4 sub-region in the current block, unidirectional motion compensation may be performed according to a certain angle, and bidirectional motion information is not allowed.
For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the sub-region.
Referring to table 1, example 15 is an example for table 1 with a width greater than 8 and a height less than 8, that is, for the horizontal prediction mode, the subblock division size is W × 4, and the selection condition allows bi-direction. For other angular prediction modes, the subblock division size is 4 × 4, and the selection condition is unidirectional.
According to fig. 10C, the size of the current block is 16 × 4, and when the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions with the size of 4 × 4 may be divided, each sub-region with 4 × 4 corresponds to the peripheral matching block A1, and the motion information of each sub-region with 4 × 4 is determined according to the motion information of A1. If the motion information of the peripheral matching block A1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. If the motion information of the peripheral matching block A1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region. The motion information of the four sub-regions is the same, so in this embodiment the current block may not be divided into sub-regions; the current block itself serves as a single sub-region corresponding to the peripheral matching block A1, and the motion information of the current block is determined according to the motion information of A1.
According to fig. 10C, the size of the current block is 16 × 4, and when the target motion information prediction mode of the current block is the vertical mode, 4 sub-regions having a size of 4 × 4 are divided, wherein one sub-region of 4 × 4 corresponds to the peripheral matching block B1, and the motion information of the sub-region of 4 × 4 is determined according to the motion information of B1. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B4. For any one of B1 through B4, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the corresponding sub-area.
According to fig. 10C, the size of the current block is 16 × 4, and when the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions with the size of 4 × 4 are divided, one of the 4 × 4 sub-regions corresponds to the peripheral matching block E, and the motion information of the 4 × 4 sub-region is determined according to the motion information of E. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B1, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B1. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. For any one of E to B3, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the corresponding sub-area.
According to fig. 10C, when the size of the current block is 16 × 4 and the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions having a size of 4 × 4 are divided, one of the 4 × 4 sub-regions corresponds to the peripheral matching block a2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of a 2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block a4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of a 4. One of the 4 × 4 sub-regions corresponds to the peripheral matching block a5, and the motion information of the 4 × 4 sub-region is determined according to the motion information of a 5. For any one of a2 to a5, if the motion information of the peripheral matching block is one-way motion information, the one-way motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the corresponding sub-area.
According to fig. 10C, the size of the current block is 16 × 4, and when the target motion information prediction mode of the current block is the vertical right mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one sub-region with 4 × 4 corresponds to the peripheral matching block B2, and the motion information of the sub-region with 4 × 4 is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B4. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B5, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B5. For any one of B2 through B5, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the corresponding sub-area.
Example 16: the width W of the current block is equal to 8, and the height H of the current block is equal to 8, then motion compensation is performed on each 8 × 8 sub-region (i.e. the sub-region is the current block itself) in the current block according to a certain angle, and bidirectional motion information is allowed during motion compensation. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area.
If the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block may be selected from the motion information of the plurality of peripheral matching blocks according to the corresponding angle with respect to the motion information of the sub-region.
For example, referring to fig. 10D, for the horizontal prediction mode, motion information of the peripheral matching block a1 may be selected, and motion information of the peripheral matching block a2 may also be selected. Referring to fig. 10E, for the vertical prediction mode, motion information of the peripheral matching block B1 may be selected, and motion information of the peripheral matching block B2 may also be selected. Referring to fig. 10F, for the horizontal upward prediction mode, motion information of the peripheral matching block E may be selected, motion information of the peripheral matching block B1 may be selected, and motion information of the peripheral matching block a1 may be selected. Referring to fig. 10G, for the horizontal downward prediction mode, motion information of the peripheral matching block a2, motion information of the peripheral matching block A3, and motion information of the peripheral matching block a4 may be selected. Referring to fig. 10H, for the vertical right prediction mode, motion information of the peripheral matching block B2, motion information of the peripheral matching block B3, and motion information of the peripheral matching block B4 may be selected.
Referring to table 1, example 16 is an example of table 1 in which the width is equal to 8 and the height is equal to 8, that is, the subblock division size is 8 × 8 for any angular prediction mode, and the selection condition allows bi-direction.
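In one example, the candidate peripheral matching blocks per prediction mode for the 8 × 8 case above may be sketched as follows. This Python sketch only illustrates that any one of the listed candidates may supply the motion information; the default choice rule is an assumption, and the block labels follow the text above.

```python
CANDIDATES_8X8 = {
    # Peripheral matching blocks an 8x8 sub-region may draw its motion
    # information from, per prediction mode (labels as in figs. 10D-10H).
    "horizontal":      ["A1", "A2"],
    "vertical":        ["B1", "B2"],
    "horizontal_up":   ["E", "B1", "A1"],
    "horizontal_down": ["A2", "A3", "A4"],
    "vertical_right":  ["B2", "B3", "B4"],
}

def pick_matching_block(mode, choose=lambda blocks: blocks[0]):
    # Any candidate is permitted, so the choice rule is an implementation
    # detail; the default simply takes the first listed block.
    return choose(CANDIDATES_8X8[mode])

print(pick_matching_block("horizontal_down"))            # A2
print(pick_matching_block("vertical", lambda b: b[-1]))  # B2
```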
According to fig. 10D, the size of the current block is 8 × 8, and when the target motion information prediction mode of the current block is the horizontal mode, the current block is divided into a sub-region having a size of 8 × 8, the sub-region corresponds to the peripheral matching block a1, the motion information of the sub-region is determined according to the motion information of a1, and if the motion information of a1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of a1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region. Or, the sub-region corresponds to the peripheral matching block a2, the motion information of the sub-region is determined according to the motion information of a2, and if the motion information of a2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of a2 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 10E, the size of the current block is 8 × 8, and when the target motion information prediction mode of the current block is the vertical mode, the current block is divided into a sub-region with the size of 8 × 8, and the sub-region corresponds to the peripheral matching block B1, the motion information of the sub-region is determined according to the motion information of B1, and if the motion information of B1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region. Or, the sub-region corresponds to the peripheral matching block B2, the motion information of the sub-region is determined according to the motion information of B2, and if the motion information of B2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B2 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 10F, the size of the current block is 8 × 8, and when the target motion information prediction mode of the current block is the horizontal up mode, the current block is divided into sub-regions with the size of 8 × 8, the sub-regions correspond to the peripheral matching block E, the motion information of the sub-region is determined according to the motion information of E, and if the motion information of E is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the E is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area. Or, the sub-region corresponding to the peripheral matching block B1 determines the motion information of the sub-region according to the motion information of B1, and if the motion information of B1 is unidirectional motion information, determines the unidirectional motion information as the motion information of the sub-region. If the motion information of B1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region. Or, the sub-region corresponds to the peripheral matching block a1, the motion information of the sub-region is determined according to the motion information of a1, and if the motion information of a1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of a1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 10G, the size of the current block is 8 × 8, and when the target motion information prediction mode of the current block is the horizontal down mode, the current block is divided into sub-regions having a size of 8 × 8, the sub-regions correspond to the peripheral matching block a2, the motion information of the sub-regions is determined according to the motion information of a2, and if the motion information of a2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-regions. If the motion information of a2 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region. Or, the sub-region corresponds to the peripheral matching block A3, and determines the motion information of the sub-region according to the motion information of A3, and if the motion information of A3 is unidirectional motion information, determines the unidirectional motion information as the motion information of the sub-region. If the motion information of a3 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region. Or, the sub-region corresponds to the peripheral matching block a4, the motion information of the sub-region is determined according to the motion information of a4, and if the motion information of a4 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of a4 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 10H, the size of the current block is 8 × 8, and when the target motion information prediction mode of the current block is the vertical right mode, the current block is divided into sub-regions of 8 × 8 in size, and the sub-regions correspond to the peripheral matching block B2, the motion information of the sub-region is determined according to the motion information of B2, and if the motion information of B2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B2 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region. Or, the sub-region corresponding to the peripheral matching block B3 determines the motion information of the sub-region according to the motion information of B3, and if the motion information of B3 is unidirectional motion information, determines the unidirectional motion information as the motion information of the sub-region. If the motion information of B3 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region. Or, the sub-region corresponds to the peripheral matching block B4, the motion information of the sub-region is determined according to the motion information of B4, and if the motion information of B4 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B4 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
Example 17: the width W of the current block may be equal to or greater than 16 and the height H of the current block may be equal to 8, based on which each sub-region within the current block may be motion compensated in the following manner:
and if the angular prediction mode is the horizontal prediction mode, performing motion compensation on each W4 sub-area according to the horizontal angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area.
And if the angular prediction mode is other angular prediction modes, performing bidirectional motion compensation according to a certain angle for each 8 x 8 sub-area in the current block. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area. For each sub-region of 8 × 8, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks with respect to the motion information of the sub-region.
For example, referring to fig. 10I, for the horizontal prediction mode, the motion information of the peripheral matching block a1 may be selected for the first W × 4 sub-region, and the motion information of the peripheral matching block a2 may be selected for the second W × 4 sub-region. Referring to fig. 10J, for the vertical prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block B1 may be selected, and the motion information of the peripheral matching block B2 may be selected. For the second 8 x 8 sub-region, the motion information of the peripheral matching block B3 may be selected, and the motion information of the peripheral matching block B4 may be selected. Other angular prediction modes are similar and will not be described herein.
Referring to table 1, example 17 is an example where the width is equal to or greater than 16 and the height is equal to 8 in table 1, and the subblock division size is W × 4 for the horizontal prediction mode, and the selection condition allows bi-direction. For other angular prediction modes, the subblock division size is 8 × 8, and the selection condition allows bi-direction.
According to fig. 10I, the size of the current block is 16 × 8, and when the target motion information prediction mode of the current block is the horizontal mode, 2 sub-regions with the size of 16 × 4 are divided, wherein one sub-region with 16 × 4 corresponds to the peripheral matching block a1, and the motion information of the sub-region with 16 × 4 is determined according to the motion information of a 1. Another 16 × 4 sub-region corresponds to the peripheral matching block a2, and the motion information of the 16 × 4 sub-region is determined according to the motion information of a 2. For the two 16 × 4 sub-areas, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-area. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the corresponding sub-area.
According to fig. 10J, the size of the current block is 16 × 8, and when the target motion information prediction mode is the vertical mode, 2 sub-regions with the size of 8 × 8 are divided, wherein one sub-region with 8 × 8 corresponds to the peripheral matching block B1 or B2, and the motion information of the sub-region with 8 × 8 is determined according to the motion information of B1 or B2. The other 8 × 8 sub-region corresponds to the peripheral matching block B3 or B4, and the motion information of the 8 × 8 sub-region is determined according to the motion information of B3 or B4. For the two sub-areas of 8 × 8, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-area. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the corresponding sub-area.
Example 18: the width W of the current block may be equal to 8 and the height H of the current block may be equal to or greater than 16, based on which each sub-region within the current block may be motion compensated in the following manner:
and if the angular prediction mode is the vertical prediction mode, performing motion compensation on each 4 × H sub-area according to the vertical angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area.
And if the angular prediction mode is other angular prediction modes, performing bidirectional motion compensation according to a certain angle for each 8 x 8 sub-area in the current block. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area. For each sub-region of 8 × 8, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks with respect to the motion information of the sub-region.
For example, referring to fig. 10K, for the vertical prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, and the motion information of the peripheral matching block B2 may be selected for the second 4 × H sub-region. Referring to fig. 10L, for the horizontal prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block A1 may be selected, or the motion information of the peripheral matching block A2 may be selected. For the second 8 × 8 sub-region, the motion information of the peripheral matching block A3 may be selected, or the motion information of the peripheral matching block A4 may be selected. Other angular prediction modes are similar and will not be described herein.
Referring to table 1, example 18 is an example of table 1 with a height of 16 or more and a width of 8, and for the vertical prediction mode, the subblock division size is 4 × H, and the selection condition allows bi-direction. For other angular prediction modes, the subblock division size is 8 × 8, and the selection condition allows bi-direction.
According to fig. 10K, the size of the current block is 8 × 16, and when the target motion information prediction mode of the current block is the vertical mode, 2 sub-regions having a size of 4 × 16 are divided, wherein one sub-region of 4 × 16 corresponds to the peripheral matching block B1, and the motion information of the sub-region of 4 × 16 is determined according to the motion information of B1. Another 4 × 16 sub-area corresponds to the peripheral matching block B2, and the motion information of the 4 × 16 sub-area is determined according to the motion information of B2. For the two sub-areas of 4 × 16, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-area. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the corresponding sub-area.
According to fig. 10L, the size of the current block is 8 × 16, and when the target motion information prediction mode is the horizontal mode, 2 sub-regions with the size of 8 × 8 are divided, one 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of the 8 × 8 sub-region is determined according to the motion information of the corresponding peripheral matching block. The other 8 × 8 sub-region corresponds to the peripheral matching block A3 or A4, and the motion information of the 8 × 8 sub-region is determined according to the motion information of the corresponding peripheral matching block. For the two 8 × 8 sub-regions, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the corresponding sub-area.
Example 19: the width W of the current block may be equal to or greater than 16, and the height H of the current block may be equal to or greater than 16, based on which each sub-region within the current block may be motion compensated in the following manner:
and if the angular prediction mode is the vertical prediction mode, performing motion compensation on each 4 × H sub-area according to the vertical angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area.
And if the angular prediction mode is the horizontal prediction mode, performing motion compensation on each W × 4 sub-area according to the horizontal angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area.
And if the angular prediction mode is other angular prediction modes, performing bidirectional motion compensation according to a certain angle for each 8 x 8 sub-area in the current block. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-area. For each sub-region of 8 × 8, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks with respect to the motion information of the sub-region.
Referring to fig. 10M, for the vertical prediction mode, motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, motion information of the peripheral matching block B2 may be selected for the second 4 × H sub-region, motion information of the peripheral matching block B3 may be selected for the third 4 × H sub-region, and motion information of the peripheral matching block B4 may be selected for the fourth 4 × H sub-region. For the horizontal prediction mode, the motion information of the peripheral matching block a1 is selected for the first W × 4 sub-region, the motion information of the peripheral matching block a2 is selected for the second W × 4 sub-region, the motion information of the peripheral matching block A3 is selected for the third W × 4 sub-region, and the motion information of the peripheral matching block a4 is selected for the fourth W × 4 sub-region. Other angular prediction modes are similar and will not be described herein.
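The correspondence in fig. 10M between sub-regions and peripheral matching blocks may be summarized as follows. This is a hypothetical sketch of the index mapping suggested by the preceding paragraph (the i-th 4 × H column drawing from B(i+1) and the i-th W × 4 row drawing from A(i+1)); the labeling scheme is taken from the text, and the formulas are an illustrative assumption.

```python
def column_matching_block(i):
    # i-th 4xH sub-region (vertical prediction mode) -> above block B(i+1)
    return f"B{i + 1}"

def row_matching_block(i):
    # i-th Wx4 sub-region (horizontal prediction mode) -> left block A(i+1)
    return f"A{i + 1}"

print([column_matching_block(i) for i in range(4)])  # ['B1', 'B2', 'B3', 'B4']
print([row_matching_block(i) for i in range(4)])     # ['A1', 'A2', 'A3', 'A4']
```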
Referring to table 1, example 19 corresponds to the case in table 1 where the height is 16 or more and the width is 16 or more: for the vertical prediction mode, the sub-block partition size is 4 × H and the selection condition allows bidirectional motion information; for the horizontal prediction mode, the sub-block partition size is W × 4 and the selection condition allows bidirectional motion information; for the other angular prediction modes, the sub-block partition size is 8 × 8 and the selection condition allows bidirectional motion information.
According to fig. 10M, the size of the current block is 16 × 16, and when the target motion information prediction mode is the vertical mode, 4 sub-regions with the size of 4 × 16 are divided. One 4 × 16 sub-region corresponds to the peripheral matching block B1, and the motion information of the 4 × 16 sub-region is determined according to the motion information of B1; another 4 × 16 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; another 4 × 16 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3; the remaining 4 × 16 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4. For the four 4 × 16 sub-regions, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 10M, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions with the size of 16 × 4 are divided. One 16 × 4 sub-region corresponds to the peripheral matching block A1, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A1; another 16 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; another 16 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; the remaining 16 × 4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4. For the four 16 × 4 sub-regions, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region.
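To make the partition rule of example 19 concrete, the following is a minimal sketch, assuming a block with W ≥ 16 and H ≥ 16; the mode names and the helper function are illustrative assumptions and not part of the original description.

```python
# Minimal sketch of the example-19 partition rule (illustrative only, not the
# normative codec logic); mode names and helper names are assumptions.

def subregion_partition(width, height, mode):
    """Return (sub_w, sub_h, allow_bidirectional) for a block with W, H >= 16."""
    assert width >= 16 and height >= 16
    if mode == "vertical":
        return 4, height, True      # 4 x H columns, bidirectional allowed
    if mode == "horizontal":
        return width, 4, True       # W x 4 rows, bidirectional allowed
    return 8, 8, True               # other angular modes: 8 x 8 sub-regions


# Example: a 16 x 16 block in the vertical mode is split into four 4 x 16
# sub-regions, and in the horizontal mode into four 16 x 4 sub-regions.
print(subregion_partition(16, 16, "vertical"))    # (4, 16, True)
print(subregion_partition(16, 16, "horizontal"))  # (16, 4, True)
```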
Example 20: the width W of the current block may be greater than or equal to 8, and the height H of the current block may be greater than or equal to 8; in this case, motion compensation is performed on each 8 × 8 sub-region within the current block. Referring to fig. 10N, for each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of the plurality of peripheral matching blocks is selected as the motion information of the sub-region.
In example 20, the sub-block partition size is independent of the motion information angle prediction mode: as long as the width is greater than or equal to 8 and the height is greater than or equal to 8, the sub-block partition size is 8 × 8 for every motion information angle prediction mode. The selection condition is likewise independent of the motion information angle prediction mode: as long as the width is greater than or equal to 8 and the height is greater than or equal to 8, bidirectional motion information is allowed for every motion information angle prediction mode.
According to fig. 10N, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions with the size of 8 × 8 are divided. Two of the 8 × 8 sub-regions correspond to the peripheral matching block A1 or A2, and the motion information of each of these sub-regions is determined according to the motion information of A1 or A2; the other two 8 × 8 sub-regions correspond to the peripheral matching block A3 or A4, and the motion information of each of these sub-regions is determined according to the motion information of A3 or A4. For the four 8 × 8 sub-regions, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 10N, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the vertical mode, 4 sub-regions with the size of 8 × 8 are divided. Two of the 8 × 8 sub-regions correspond to the peripheral matching block B1 or B2, and the motion information of each of these sub-regions is determined according to the motion information of B1 or B2; the other two 8 × 8 sub-regions correspond to the peripheral matching block B3 or B4, and the motion information of each of these sub-regions is determined according to the motion information of B3 or B4. For the four 8 × 8 sub-regions, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 10N, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions with the size of 8 × 8 may be divided. Then, for each 8 × 8 sub-region, a peripheral matching block (E, B2, or A2) corresponding to the 8 × 8 sub-region may be determined, and the determination manner is not limited; the motion information of the 8 × 8 sub-region is determined based on the motion information of the peripheral matching block. For each 8 × 8 sub-region, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 10N, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions with the size of 8 × 8 are divided. Then, for each 8 × 8 sub-region, a peripheral matching block (A3, A5, or A7) corresponding to the 8 × 8 sub-region may be determined, and the determination manner is not limited; the motion information of the 8 × 8 sub-region is determined based on the motion information of the peripheral matching block. For each 8 × 8 sub-region, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 10N, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the vertical right mode, 4 sub-regions with the size of 8 × 8 are divided. Then, for each 8 × 8 sub-region, a peripheral matching block (B3, B5, or B7) corresponding to the 8 × 8 sub-region may be determined, and the determination manner is not limited; the motion information of the 8 × 8 sub-region is determined based on the motion information of the peripheral matching block. For each 8 × 8 sub-region, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region.
Example 21: when the width W of the current block is greater than or equal to 8 and the height H is greater than or equal to 8, motion compensation is performed on each 8 × 8 sub-region in the current block, and for each sub-region, any one of the several pieces of motion information of the peripheral matching blocks is selected according to the corresponding angle, as shown in fig. 10N.
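As a minimal sketch of examples 20 and 21, under the assumption of illustrative helper names that are not part of the original description, the block is walked in 8 × 8 steps and each sub-region simply copies the motion information of any one of the peripheral matching blocks it maps to:

```python
# Minimal sketch for examples 20/21 (assumed helper names): any block with
# W >= 8 and H >= 8 is split into 8 x 8 sub-regions regardless of the angular
# mode, and each sub-region copies the motion info of one matching block.

def split_into_8x8(width, height):
    """Yield the top-left corner of every 8 x 8 sub-region of the block."""
    assert width >= 8 and height >= 8
    for y in range(0, height, 8):
        for x in range(0, width, 8):
            yield x, y


def pick_motion_info(candidate_blocks):
    """When a sub-region maps to several peripheral matching blocks along the
    angle, any one of them may be chosen; this sketch takes the first."""
    return candidate_blocks[0]["motion_info"]


# A 16 x 16 block yields four 8 x 8 sub-regions: (0, 0), (8, 0), (0, 8), (8, 8).
print(list(split_into_8x8(16, 16)))
```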
Example 22: based on the same application concept as the method, an embodiment of the present application provides an encoding and decoding apparatus applied to a decoding end or an encoding end, as shown in fig. 11, which is a structural diagram of the apparatus, including:
an obtaining module 111, configured to obtain at least one motion information angle prediction mode of a current block;
a processing module 112, configured to select, for each motion information angle prediction mode, a plurality of peripheral matching blocks pointed to by a preconfigured angle from among peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode; if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
a coding/decoding module 113, configured to encode or decode the current block according to the motion information prediction mode candidate list.
The processing module 112 is further configured to prohibit adding the motion information angle prediction mode to the motion information prediction mode candidate list if the motion information of the plurality of peripheral matching blocks is completely the same.
The processing module 112 is further configured to select at least one first peripheral matching block from the plurality of peripheral matching blocks; for each first peripheral matching block, selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks; if the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, determining that the comparison result of the first peripheral matching block is different from the motion information; if the motion information of the first peripheral matching block is the same as the motion information of the second peripheral matching block, determining that the comparison result of the first peripheral matching block is the same as the motion information;
if the comparison result of any first peripheral matching block is that the motion information is different, determining that the motion information of the peripheral matching blocks is not completely the same; and if the comparison results of all the first peripheral matching blocks are the same in motion information, determining that the motion information of the plurality of peripheral matching blocks is completely the same.
The processing module 112 is specifically configured to, when selecting at least one first peripheral matching block from the plurality of peripheral matching blocks: taking any one or more of the plurality of peripheral matching blocks as the first peripheral matching block; or treating one or more designated peripheral matching blocks of the plurality of peripheral matching blocks as the first peripheral matching block;
When the processing module 112 selects the second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks, it is specifically configured to: select a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks according to the traversal step size and the position of the first peripheral matching block; wherein the traversal step size is a block spacing between the first peripheral matching block and the second peripheral matching block.
The processing module 112 is further configured to determine the traversal step size based on the size of the current block.
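The comparison rule above can be pictured with a minimal sketch, assuming, as an illustration only, that the peripheral matching blocks are held in a list, that the traversal step size has already been derived from the size of the current block, and that motion information compares equal when its fields compare equal:

```python
# Minimal sketch (assumptions noted above): an angular mode is added to the
# candidate list only when the motion information of its peripheral matching
# blocks is "not completely the same".

def all_motion_info_same(matching_blocks, first_indices, step):
    """Compare each first matching block with the second one found at the
    given traversal step; True only if every compared pair is identical."""
    for i in first_indices:
        j = i + step
        if j >= len(matching_blocks):
            continue                      # no second block at this step
        if matching_blocks[i] != matching_blocks[j]:
            return False                  # any difference: not all the same
    return True


def build_candidate_list(angular_modes, matching_blocks_of, first_indices, step):
    candidate_list = []
    for mode in angular_modes:
        blocks = matching_blocks_of[mode]
        if not all_motion_info_same(blocks, first_indices, step):
            candidate_list.append(mode)   # keep modes with differing motion info
    return candidate_list
```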
In one example, the codec device further comprises (not shown in the figure):
a padding module, configured to pad motion information of an uncoded block and/or an intra-coded block if the uncoded block and/or the intra-coded block exists in the plurality of peripheral matching blocks.
The padding module is specifically configured to, when padding the motion information of the uncoded block and/or the intra-coded block: pad available motion information of a neighboring block of the uncoded block and/or the intra-coded block into the motion information of the uncoded block and/or the intra-coded block; or,
pad available motion information of a reference block at a corresponding position of the uncoded block and/or the intra-coded block in a time domain reference frame into the motion information of the uncoded block and/or the intra-coded block; or,
pad default motion information as the motion information of the uncoded block and/or the intra-coded block.
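A minimal sketch of the three padding alternatives is given below; the parameter names are illustrative assumptions, and which alternative is used in practice is left open by the description.

```python
# Minimal sketch of the padding alternatives for an uncoded or intra-coded
# peripheral position (illustrative parameter names).

def pad_motion_info(neighbor_mi=None, temporal_mi=None, default_mi=None):
    """Return motion information for a peripheral block that has none of its
    own: from an available neighboring block, from the reference block at the
    corresponding position in a time domain reference frame, or default
    motion information."""
    if neighbor_mi is not None:
        return neighbor_mi        # alternative 1: available spatial neighbor
    if temporal_mi is not None:
        return temporal_mi        # alternative 2: temporal reference block
    return default_mi             # alternative 3: default motion information
```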
In an example, the encoding/decoding module 113 is specifically configured to, when encoding or decoding the current block according to the motion information prediction mode candidate list:
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, then:
determining the motion information of the current block according to the target motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
The encoding/decoding module 113 is specifically configured to, when determining the motion information of the current block according to the target motion information angle prediction mode: selecting a plurality of peripheral matching blocks pointed by the pre-configuration angle from peripheral blocks of the current block on the basis of the pre-configuration angle corresponding to the target motion information angle prediction mode;
dividing the current block into at least one sub-region; for each sub-region, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks;
and determining the motion information of the sub-area according to the motion information of the selected peripheral matching block.
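A minimal sketch of this per-sub-region derivation is shown below for the horizontal and vertical angles only; the mapping function and mode names are illustrative assumptions, and the other preconfigured angles follow the same pattern with different geometry.

```python
# Minimal sketch (assumed geometry): each sub-region follows the preconfigured
# angle of the target mode to one peripheral matching block and copies its
# motion information; only two angles are shown.

def matching_block_position(sub_x, sub_y, mode):
    """Map a sub-region's top-left corner to the peripheral position that the
    preconfigured angle points at."""
    if mode == "horizontal":
        return ("left_column", sub_y)   # block left of the sub-region's row
    if mode == "vertical":
        return ("top_row", sub_x)       # block above the sub-region's column
    raise NotImplementedError("other preconfigured angles omitted in this sketch")


def derive_subregion_motion_info(subregions, mode, peripheral_motion_info):
    """peripheral_motion_info maps peripheral positions to motion information."""
    return {(x, y): peripheral_motion_info[matching_block_position(x, y, mode)]
            for (x, y) in subregions}
```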
The encoding/decoding module 113 is specifically configured to, when determining the motion information of the current block according to the target motion information angle prediction mode: determining a selection condition of the current block for acquiring motion information and sub-region partition information of the current block according to the target motion information angle prediction mode and the size of the current block; the selection condition is a first selection condition or a second selection condition, the first selection condition is that motion information selected from motion information of the peripheral matching block is not allowed to be bidirectional motion information, and the second selection condition is that motion information selected from motion information of the peripheral matching block is allowed to be bidirectional motion information;
selecting a plurality of peripheral matching blocks pointed by the pre-configuration angle from peripheral blocks of the current block on the basis of the pre-configuration angle corresponding to the target motion information angle prediction mode;
and determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the plurality of peripheral matching blocks.
The encoding/decoding module 113 is specifically configured to, when determining the motion information of the current block according to the target motion information angle prediction mode: according to a pre-configuration angle corresponding to the target motion information angle prediction mode, selecting a peripheral matching block pointed by the pre-configuration angle from peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of the peripheral matching block; wherein, if the width and height of the current block are both greater than or equal to 8, the current block is divided into 8 × 8 sub-blocks, and the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information.
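The effect of the two selection conditions can be sketched as follows; the reduction of bidirectional motion information to a single direction (keeping the list-0 part) is an illustrative assumption, since the description only states whether bidirectional motion information is allowed or not.

```python
# Minimal sketch of applying the selection condition (illustrative fields):
# under the first condition bidirectional motion information must not be used
# as-is, under the second condition it is kept unchanged.

def apply_selection_condition(peripheral_mi, allow_bidirectional):
    if peripheral_mi.get("bidirectional") and not allow_bidirectional:
        return {                                   # assumption: keep list 0 only
            "bidirectional": False,
            "mv": peripheral_mi["mv_l0"],
            "ref_idx": peripheral_mi["ref_idx_l0"],
        }
    return peripheral_mi                           # unidirectional, or allowed
```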
In terms of hardware, the hardware architecture diagram of the decoding-side device provided in the embodiment of the present application may specifically refer to fig. 12. The device comprises: a processor 121 and a machine-readable storage medium 122, the machine-readable storage medium 122 storing machine-executable instructions executable by the processor 121; the processor 121 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 121 is configured to execute machine-executable instructions to perform the following steps:
acquiring at least one motion information angle prediction mode of a current block;
for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a pre-configuration angle from peripheral blocks of a current block based on the pre-configuration angle of the motion information angle prediction mode;
if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
and decoding the current block according to the motion information prediction mode candidate list.
In terms of hardware, the hardware architecture diagram of the encoding-side device provided in the embodiment of the present application may specifically refer to fig. 13. The device comprises: a processor 131 and a machine-readable storage medium 132, the machine-readable storage medium 132 storing machine-executable instructions executable by the processor 131; the processor 131 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 131 is configured to execute machine-executable instructions to perform the following steps:
acquiring at least one motion information angle prediction mode of a current block;
for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a pre-configuration angle from peripheral blocks of a current block based on the pre-configuration angle of the motion information angle prediction mode;
if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
and encoding the current block according to the motion information prediction mode candidate list.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where a plurality of computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of the present application can be implemented.
The machine-readable storage medium may be, for example, any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A method of encoding and decoding, the method comprising:
acquiring at least one motion information angle prediction mode of a current block;
for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a pre-configuration angle from peripheral blocks of a current block based on the pre-configuration angle of the motion information angle prediction mode;
if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
encoding or decoding the current block according to the motion information prediction mode candidate list.
2. The method of claim 1, wherein after selecting the plurality of peripheral matching blocks pointed to by the preconfigured angle from among the peripheral blocks of the current block, the method further comprises:
and if the motion information of the plurality of peripheral matching blocks is completely the same, prohibiting the motion information angle prediction mode from being added into the motion information prediction mode candidate list.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
selecting at least one first peripheral matching block from the plurality of peripheral matching blocks;
for each first peripheral matching block, selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks; if the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, determining that the comparison result of the first peripheral matching block is different from the motion information; if the motion information of the first peripheral matching block is the same as the motion information of the second peripheral matching block, determining that the comparison result of the first peripheral matching block is the same as the motion information;
if the comparison result of any first peripheral matching block is that the motion information is different, determining that the motion information of the peripheral matching blocks is not completely the same; and if the comparison results of all the first peripheral matching blocks are the same in motion information, determining that the motion information of the plurality of peripheral matching blocks is completely the same.
4. The method of claim 3,
said selecting at least one first peripheral matching block from said plurality of peripheral matching blocks, comprising: taking any one or more of the plurality of peripheral matching blocks as the first peripheral matching block; or, one or more specified peripheral matching blocks in the plurality of peripheral matching blocks are used as the first peripheral matching block;
the selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks comprises: selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks according to the traversal step size and the position of the first peripheral matching block; wherein the traversal step size is a block spacing between the first peripheral matching block and the second peripheral matching block.
5. The method according to claim 4, wherein before said selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks, the method further comprises:
determining the traversal step size based on a size of the current block.
6. The method of claim 1,
after selecting a plurality of peripheral matching blocks pointed to by the preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angular prediction mode, the method further comprises:
if an uncoded block and/or an intra-frame coded block exist in the plurality of peripheral matching blocks, filling motion information of the uncoded block and/or the intra-frame coded block.
7. The method of claim 6,
the padding motion information of the uncoded blocks and/or the intra-coded blocks comprises:
padding available motion information of neighboring blocks of the uncoded blocks and/or the intra-coded blocks into motion information of the uncoded blocks and/or the intra-coded blocks; or,
filling available motion information of a reference block at a corresponding position of the uncoded block and/or the intra-coded block in a time domain reference frame into the motion information of the uncoded block and/or the intra-coded block; or,
padding default motion information as motion information for the uncoded blocks and/or the intra-coded blocks.
8. The method according to any one of claims 1 to 7,
the encoding or decoding the current block according to the motion information prediction mode candidate list includes:
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, then:
determining the motion information of the current block according to the target motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
9. The method of claim 8, wherein determining the motion information of the current block according to the target motion information angular prediction mode comprises:
selecting a plurality of peripheral matching blocks pointed by the pre-configuration angle from peripheral blocks of the current block on the basis of the pre-configuration angle corresponding to the target motion information angle prediction mode;
dividing the current block into at least one sub-region; for each sub-region, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks;
and determining the motion information of the sub-area according to the motion information of the selected peripheral matching block.
10. The method of claim 8, wherein determining the motion information of the current block according to the target motion information angular prediction mode comprises:
determining a selection condition of the current block for acquiring motion information and sub-region partition information of the current block according to the target motion information angle prediction mode and the size of the current block; the selection condition is a first selection condition or a second selection condition, the first selection condition is that motion information selected from motion information of the peripheral matching block is not allowed to be bidirectional motion information, and the second selection condition is that motion information selected from motion information of the peripheral matching block is allowed to be bidirectional motion information;
selecting a plurality of peripheral matching blocks pointed by the pre-configuration angle from peripheral blocks of the current block on the basis of the pre-configuration angle corresponding to the target motion information angle prediction mode;
and determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the plurality of peripheral matching blocks.
11. The method of claim 8, wherein determining the motion information of the current block according to the target motion information angular prediction mode comprises:
according to a pre-configuration angle corresponding to the target motion information angle prediction mode, selecting a peripheral matching block pointed by the pre-configuration angle from peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of the peripheral matching block; wherein, if the width and height of the current block are both greater than or equal to 8, the current block is divided into 8 × 8 sub-blocks, and the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information.
12. An apparatus for encoding and decoding, the apparatus comprising:
an obtaining module, configured to obtain at least one motion information angle prediction mode of a current block;
a processing module, configured to select, for each motion information angle prediction mode, a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode; if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
and the coding and decoding module is used for coding or decoding the current block according to the motion information prediction mode candidate list.
13. A decoding-side apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring at least one motion information angle prediction mode of a current block;
for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a pre-configuration angle from peripheral blocks of a current block based on the pre-configuration angle of the motion information angle prediction mode;
if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
and decoding the current block according to the motion information prediction mode candidate list.
14. An encoding side device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring at least one motion information angle prediction mode of a current block;
for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a pre-configuration angle from peripheral blocks of a current block based on the pre-configuration angle of the motion information angle prediction mode;
if the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode into a motion information prediction mode candidate list;
and encoding the current block according to the motion information prediction mode candidate list.
GR01 Patent grant