CN115529459A - Central point searching method and device, computer equipment and storage medium

Info

Publication number
CN115529459A
Authority
CN
China
Prior art keywords
searching
coding unit
target
central point
center point
Prior art date
Legal status
Granted
Application number
CN202211232932.XA
Other languages
Chinese (zh)
Other versions
CN115529459B (en)
Inventor
朱传传
邵瑾
Current Assignee
Granfei Intelligent Technology Co.,Ltd.
Original Assignee
Glenfly Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Glenfly Tech Co Ltd filed Critical Glenfly Tech Co Ltd
Priority to CN202211232932.XA priority Critical patent/CN115529459B/en
Publication of CN115529459A publication Critical patent/CN115529459A/en
Application granted granted Critical
Publication of CN115529459B publication Critical patent/CN115529459B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Using adaptive coding
    • H04N19/102 Characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/107 Selection between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/134 Characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169 Characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 The unit being an image region, e.g. an object
    • H04N19/172 The region being a picture, frame or field
    • H04N19/182 The unit being a pixel

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application relates to a center point searching method, a center point searching apparatus, a computer device, a storage medium and a computer program product. The method comprises the following steps: acquiring original pixels and reference pixels; searching the original pixels based on the reference pixels to obtain a plurality of coding units and the prediction units corresponding to those coding units; when the current coding unit is not partitioned, obtaining a first search center point for the current coding unit based on its corresponding prediction unit, and calculating a second search center point from the current coding unit; and performing a duplicate check on the first and second search center points to obtain a target first search center point and a target second search center point corresponding to the current coding unit. With this method, an additional second search center point is obtained, improving coding quality.

Description

Central point searching method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of video coding technologies, and in particular, to a center point search method, apparatus, computer device, storage medium, and computer program product.
Background
Motion estimation is a critical stage of the video coding process. Its purpose is to find the optimal matching block for the current block; the better the match, the better the coding quality. The quality of the motion estimation therefore largely determines the overall coding quality.
Because hardware resources are limited, motion estimation cannot evaluate too many points, so the search area is restricted to a certain range. In the conventional art, a search center point is determined first, a search area is then established centered on that point, and finally candidate blocks corresponding to all or some of the candidate points in the search area are evaluated for the best match, yielding the optimal matching point. With such a limited search range, however, the probability of finding the global optimal point is low, resulting in low coding quality.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a center point searching method, apparatus, computer device, computer-readable storage medium and computer program product capable of additionally acquiring a second search center point so as to improve encoding quality.
In a first aspect, the present application provides a center point searching method. The method comprises the following steps:
acquiring an original pixel and a reference pixel;
searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units;
when the current coding unit is not divided, obtaining a first search central point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit, and calculating the current coding unit to obtain a second search central point;
and performing a duplicate check on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit.
In one embodiment, the calculating the current coding unit to obtain the second search center point includes:
when the size of the current coding unit is smaller than the preset size, calculating the sum of absolute errors outside the coding unit, and obtaining the second search central point according to the sum of absolute errors outside the coding unit;
and when the size of the current coding unit is equal to the preset size, calculating the sum of absolute errors inside the coding unit, and obtaining the second search central point according to the sum of absolute errors inside the coding unit.
In one embodiment, the calculating, when the size of the current coding unit is smaller than the preset size, a sum of absolute errors outside the coding unit and obtaining the second search center point according to the sum of absolute errors outside the coding unit includes:
calculating the minimum absolute error sum of each adjacent coding unit adjacent to the current coding unit and the synthesized coding unit;
and comparing the minimum absolute error sum of each adjacent coding unit with the minimum absolute error sum of the synthesized coding unit to obtain a target coding unit, and taking the motion vector of the target coding unit as the second searching central point.
In one embodiment, the calculating, when the size of the current coding unit is equal to the preset size, a sum of absolute errors inside the coding unit, and obtaining the second search center point according to the sum of absolute errors inside the coding unit includes:
calculating the minimum absolute error and the corresponding motion vector of each divided coding unit in the current coding unit;
and taking the minimum absolute error of each divided coding unit and the median of the corresponding motion vectors as the second searching central point.
In one embodiment, the performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit includes:
when the first searching central point is not equal to the second searching central point, taking the first searching central point as the target first searching central point and taking the second searching central point as the target second searching central point;
and when the first searching central point is equal to the second searching central point, taking the first searching central point as the target first searching central point, and performing quadrant segmentation on the basis of the target first searching central point to obtain the target second searching central point.
In one embodiment, the performing quadrant segmentation based on the target first search center point to obtain the target second search center point includes:
taking the target first search central point as an origin, and performing quadrant segmentation by a preset step length to obtain a plurality of initial second search central points;
and obtaining the target second searching central point from a plurality of initial second searching central points according to the quadrant of the predicted motion vector of the target first searching central point.
In one embodiment, the method further comprises:
and when the current coding unit has partitions and the partition number is a target number, respectively taking the motion vectors of the prediction unit of the current coding unit as the target first search central point and the target second search central point.
In a second aspect, the present application provides a motion estimation method based on center point search. The method comprises the following steps:
obtaining a target first searching central point and a target second searching central point according to the central point searching method in any one of the embodiments;
performing intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result;
obtaining a first prediction result according to the target first search center point and the target second search center point;
and comparing the first prediction result with the second prediction result to obtain a target prediction result.
In a third aspect, the present application further provides a center point searching apparatus. The device comprises:
the acquisition module is used for acquiring an original pixel and a reference pixel;
the searching module is used for searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units;
the calculation module is used for obtaining a first search central point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit when the current coding unit is not divided, and calculating the current coding unit to obtain a second search central point;
and the duplication checking module is used for checking duplication of the first searching central point and the second searching central point to obtain a target first searching central point and a target second searching central point corresponding to the current coding unit.
In a fourth aspect, the present application further provides a motion estimation apparatus based on center point search. The device comprises:
a center point obtaining module, configured to obtain a target first search center point and a target second search center point according to the center point searching apparatus in any of the embodiments;
the intra-frame prediction module is used for carrying out intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result;
the inter-frame prediction module is used for obtaining a first prediction result according to the target first search center point and the target second search center point;
and the comparison module is used for comparing the first prediction result with the second prediction result to obtain a target prediction result.
In a fifth aspect, the present application further provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method in any of the above embodiments when the processor executes the computer program.
In a sixth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of any of the above embodiments.
In a seventh aspect, the present application further provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, performs the steps of the method in any of the above embodiments.
According to the above method, the encoder first searches based on the acquired original pixels and reference pixels to obtain a plurality of CUs and PUs. When the current CU is not partitioned, a first search center point for the current CU is derived from its corresponding PU, and a second search center point is computed from the current CU itself. Finally, the encoder performs a duplicate check on the first and second search center points to obtain the target first search center point and target second search center point corresponding to the current coding unit. By additionally obtaining a second search center point and deduplicating it against the first, the search range is appropriately enlarged without repeated searching, which improves coding quality.
Drawings
FIG. 1 is a flow diagram illustrating a center point search method according to an embodiment;
FIG. 2 is a schematic diagram of the outside of a coding unit in one embodiment;
FIG. 3 is a diagram illustrating the interior of an encoding unit in one embodiment;
FIG. 4 is a diagram illustrating center point searching in one embodiment;
FIG. 5 is a flow diagram illustrating a method for motion estimation based on center point search in one embodiment;
FIG. 6 is a diagram of motion estimation in one embodiment;
FIG. 7 is a diagram of a fine search in one embodiment;
FIG. 8 is a diagram illustrating coding unit partitioning, according to an embodiment;
FIG. 9 is a block diagram showing the construction of a center point searching apparatus according to an embodiment;
FIG. 10 is a block diagram of a motion estimation apparatus based on a center point search according to an embodiment;
FIG. 11 is a diagram of the internal structure of an encoder in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
In one embodiment, as shown in fig. 1, there is provided a center point searching method, including the steps of:
s102, acquiring an original pixel and a reference pixel.
An original pixel refers to an unprocessed image to be encoded, which may be a single frame or a video composed of multiple frames; a reference pixel refers to an image that is referred to during encoding. To achieve compression in video coding, portions of previously coded pictures are buffered, and new pictures are generated by combining these buffered pictures with motion vectors; the buffered pictures are called reference pictures. A reference pixel may thus come from a previously encoded frame.
Alternatively, the reference pixel may be selected from the original pixels by a preset selection method, i.e. a preconfigured way of choosing a reference pixel from the original pixels, which can be set according to the specific application scenario. For example, the preset selection method may be a pre-trained, deep-learning-based reference pixel selection model; obtaining the optimal reference pixel through such a model can improve the subsequent motion estimation.
Optionally, the original pixels and reference pixels obtained by the encoder are stored in protected form, so after obtaining them the encoder must first undo this protection before subsequent processing. The scheme may be based on MD5 or SHA-1 digests, among others, which helps ensure the integrity and security of the original and reference pixels.
S104, searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units.
Here, a Coding Unit (CU) is the basic unit of predictive coding, with a size of 8 × 8, 16 × 16, 32 × 32 or 64 × 64.
A Prediction Unit (PU) is obtained by partitioning a coding unit; one CU may be partitioned into a plurality of PUs according to the partition type of the prediction mode.
Alternatively, the original pixels may be searched based on the reference pixels by a coarse search, which first determines a relatively large search range and then evaluates matching points within that range at a certain step size, roughly estimating the CU and PU partitions. The coarse search yields a plurality of CUs corresponding to the original and reference pixels and the PUs corresponding to those CUs; the partitioning modes and the numbers of CUs and PUs are consistent between the original and reference pixels.
For example, the coarse search determines a CU partition scheme for a 64 × 64 CTU, e.g. the CTU is partitioned into n CUs, a preliminary PU partition scheme for each CU, and a coarse Motion Vector (MV) for each PU.
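To make the coarse-search output concrete, the following is a minimal sketch of one way to represent the partition result described above; all type and field names are illustrative assumptions, not structures defined by this application.

```cpp
#include <vector>

// Illustrative layout of the coarse-search output: a CTU split into CUs,
// each CU split into one or more PUs, each PU carrying its coarse MV.
struct MV { int x = 0, y = 0; };

struct PredictionUnit {
    int width = 0, height = 0;
    MV coarse_mv;                     // rough MV found by the coarse search
};

struct CodingUnit {
    int size = 0;                     // 8, 16, 32 or 64
    std::vector<PredictionUnit> pus;  // a single PU when the CU is not divided
};

struct CodingTreeUnit {
    int size = 64;                    // 64x64 CTU as in the example above
    std::vector<CodingUnit> cus;      // the n CUs of the partition scheme
};
```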
S106, when the current coding unit is not divided, a first searching central point corresponding to the current coding unit is obtained based on the prediction unit corresponding to the current coding unit, and the current coding unit is calculated to obtain a second searching central point.
After obtaining the plurality of CUs, the encoder traverses each CU and performs the calculation to obtain a first search center point and a second search center point. First, the encoder determines whether the CU is partitioned. If the current CU is not partitioned, i.e. the PU size equals the CU size, the encoder uses the coarse-search MV of PU0 (which coincides with the current CU, since the PU and CU are the same size in this case) as the first search center point of the current CU.
Alternatively, the encoder may select the optimal coarse-search MV of PU0 as the center of the current CU's fine search, i.e. as the first search center point.
Then, the encoder continues the calculation based on the current CU to obtain a second search center point. Alternatively, the encoder may compute the corresponding minimum Sum of Absolute Differences (SAD) for the current CU to determine the second search center point.
And S108, performing duplicate checking on the first searching center point and the second searching center point to obtain a target first searching center point and a target second searching center point corresponding to the current coding unit.
The target first search center point and the target second search center point are the search center points finally used for motion estimation, obtained after the duplicate check of the first and second search center points.
The encoder performs a duplicate check on the first and second search center points, i.e. judges whether they are equal. According to the result of this check, the target first and target second search center points are then obtained on the basis of the first and second search center points.
For example, when the duplicate check finds them unequal, the first search center point is used as the target first search center point and the second search center point as the target second search center point. When the duplicate check finds them equal, a new second search center point must be determined based on the position of the first search center point, so that the same area is not searched twice and hardware resources are not wasted; the target first and target second search center points are obtained accordingly.
In the center point searching method, the encoder first searches based on the acquired original pixels and reference pixels to obtain a plurality of CUs and PUs. When the current CU is not partitioned, a first search center point for the current CU is derived from its corresponding PU, a second search center point is computed from the current CU, and finally the first and second search center points are duplicate-checked to obtain the target first and target second search center points for the current coding unit. Because the two points are deduplicated, the additional second search center point enlarges the search range without repeating any search.
In one embodiment, the calculating the current coding unit to obtain the second search center point includes: when the size of the current coding unit is smaller than the preset size, calculating the sum of absolute errors outside the coding unit, and obtaining a second searching central point according to the sum of absolute errors outside the coding unit; and when the size of the current coding unit is equal to the preset size, calculating the sum of absolute errors inside the coding unit, and obtaining a second searching central point according to the sum of absolute errors inside the coding unit.
The sum of absolute errors outside the coding unit refers to the absolute error sums of CUs outside the current CU, and may include the absolute error sums of the CUs neighboring the current CU and the absolute error sum of a synthesized coding unit composed of the current CU and its neighbors. Referring to fig. 2, a schematic diagram of the outside of a coding unit in one embodiment: when the current CU is CU0, i.e. M × M_0 in the figure, the surrounding CUs are obtained and together they can compose the synthesized coding unit; the absolute errors outside the coding unit then include the absolute error sums of the neighboring CUs around CU0 and the absolute error sum of the synthesized coding unit.
The sum of absolute errors inside the coding unit refers to the absolute error sums corresponding to the sub-units obtained by partitioning the current CU. Referring to fig. 3, a schematic diagram of the inside of a coding unit in one embodiment, the units produced by the three partition modes are (M/2) × (M/2)_0, M × (M/2)_0 and (M/2) × M_0 respectively; the sum of absolute errors inside the coding unit then includes the absolute error sums of (M/2) × (M/2)_0, M × (M/2)_0 and (M/2) × M_0.
When the size of the current CU is smaller than the preset size, there are three other CUs of the same size around the current CU, and the current CU together with these three neighbors can form a synthesized coding unit; calculating the sum of absolute errors outside the coding unit then yields the second search center point of the current CU. This works because the motion trend of the current CU is strongly correlated with its surrounding CUs.
When the size of the current CU is equal to the preset size, i.e. the current CU is as large as the CTU, the surrounding CUs may belong to neighboring CTUs, and the neighboring CTUs may not have started encoding yet, so the second search center point is obtained by calculating the sum of absolute errors inside the coding unit.
In the above embodiment, according to whether the size of the current CU equals the preset size, the encoder decides whether to derive the second search center point from absolute error sums computed outside or inside the coding unit, thereby obtaining a more accurate second search center point.
In one embodiment, when the size of the current coding unit is smaller than a preset size, calculating a sum of absolute errors outside the coding unit, and obtaining a second search center point according to the sum of absolute errors outside the coding unit, includes: calculating the minimum absolute error sum of each adjacent coding unit adjacent to the current coding unit and the synthesized coding unit; and comparing the minimum absolute error sum of each adjacent coding unit with the minimum absolute error sum of the synthesized coding unit to obtain a target coding unit, and taking the motion vector of the target coding unit as a second searching central point.
The encoder calculates the minimum absolute error sum of each CU neighboring the current CU and of the synthesized coding unit, compares the minimum of each neighboring coding unit with that of the synthesized coding unit, obtains a target coding unit from the comparison result, and finally uses the MV of the target coding unit as the second search center point.
Optionally, the encoder filters the minimum absolute error sums of the neighboring coding units and of the synthesized coding unit by a preset filtering method to obtain the target coding unit. For example, the preset filtering method may select the unit with the smallest absolute error sum as the target coding unit, i.e. compare the minimum error sums of each neighboring CU and the synthesized coding unit, and choose the unit corresponding to the smallest value.
Optionally, the encoder may preprocess the synthesized coding unit before the comparison: the synthesized coding unit has size 2M × 2M while the other neighboring CUs have size M × M, so a direct comparison would be unfair to the neighboring CUs. Illustratively, the value of the synthesized coding unit is reduced by a factor of 4 before being compared with the minimum absolute error sums of the neighboring coding units.
In other embodiments, the encoder may perform the reduction according to the size ratio of the synthesized coding unit to the neighboring coding units; for example, if the synthesized coding unit is 4M × 4M and the neighboring units are M × M, its value is reduced by a factor of 16 (the area ratio).
In the above embodiment, by comparing the minimum absolute error sums of the neighboring coding units with that of the synthesized coding unit, the correlation between the motion trend of the current CU and its surrounding CUs is exploited to obtain an accurate second search center point.
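As a concrete illustration, the following is a minimal sketch of the area-ratio normalization just described; the function name and the use of integer division are assumptions (the exemplary embodiment below uses a right shift for the 2M × 2M case).

```cpp
// Scale the synthesized CU's SAD down by the area ratio so it compares
// fairly with an MxM neighbor: 2Mx2M -> factor 4, 4Mx4M -> factor 16.
int normalized_sad(int synthesized_sad, int synthesized_size, int neighbor_size) {
    int linear_ratio = synthesized_size / neighbor_size;  // e.g. 2M / M = 2
    return synthesized_sad / (linear_ratio * linear_ratio);
}
```

For the 2M × 2M case this reduces to the SAD3 >> 2 comparison used in the exemplary embodiment later in this description.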
In one embodiment, when the size of the current coding unit is equal to a preset size, calculating the sum of absolute errors inside the coding unit, and obtaining the second search center point according to the sum of absolute errors inside the coding unit, includes: calculating the minimum absolute error and the corresponding motion vector of each divided coding unit in the current coding unit; and taking the minimum absolute error of each divided coding unit and the median of the corresponding motion vectors as a second searching central point.
The encoder first calculates, for each partitioned sub-unit inside the current coding unit, the minimum absolute error and the corresponding motion vector, and then takes the median of the motion vectors of those sub-units as the second search center point.
Illustratively, continuing with fig. 3, the encoder obtains the MV corresponding to the minimum SAD of (M/2) × (M/2)_0 inside the current M × M CU, denoted MV1; the MV corresponding to the minimum SAD of M × (M/2)_0 inside the current M × M CU, denoted MV2; and the MV corresponding to the minimum SAD of (M/2) × M_0 inside the current M × M CU, denoted MV3. The median of MV1, MV2 and MV3 is then taken and denoted candMV, i.e. candMV = median(MV1, MV2, MV3), and candMV is used as the second search center point.
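A minimal sketch of this median computation follows; taking the median per component is an assumption here (it is the usual convention for motion-vector medians), and the helper names are illustrative.

```cpp
#include <algorithm>

struct MV { int x = 0, y = 0; };

// Median of three integers without sorting.
int median3(int a, int b, int c) {
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}

// candMV = median(MV1, MV2, MV3), taken component-wise.
MV median_mv(const MV& mv1, const MV& mv2, const MV& mv3) {
    return { median3(mv1.x, mv2.x, mv3.x), median3(mv1.y, mv2.y, mv3.y) };
}
```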
In the above embodiment, when the size of the current CU is as large as the CTU, the encoder obtains an accurate second search center point by exploiting the correlation between the current CU and its internal sub-units.
In one embodiment, the performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit includes: when the first searching central point is not equal to the second searching central point, taking the first searching central point as a target first searching central point and taking the second searching central point as a target second searching central point; and when the first searching central point is equal to the second searching central point, taking the first searching central point as a target first searching central point, and performing quadrant segmentation on the basis of the target first searching central point to obtain a target second searching central point.
After the encoder obtains the first and second search center points, it judges whether they are equal. If they are not equal, the first search center point is used as the target first search center point and the second search center point as the target second search center point. If they are equal, a new second search center point must be found to serve as the target second search center point. The encoder then proceeds to the next CU and searches for its target first and target second search center points, continuing until all CUs have been traversed, i.e. every CU has its corresponding target first and target second search center points.
When searching for the target second search center point, the encoder performs quadrant segmentation based on the target first search center point. Optionally, the segmentation may use a preset step size: the encoder takes the target first search center point as the origin of a coordinate system, divides the plane into quadrants with the preset step size, and uses the position of the predicted motion vector of the first search center point to obtain the second search center point.
In the above embodiment, the encoder performs a duplicate check on the first and second search center points to obtain the target first and target second search center points, so that when the two are equal, repeated searching and the resulting waste of hardware resources are avoided.
In one embodiment, the quadrant segmentation based on the target first search center point to obtain the target second search center point includes: taking the target first search central point as an origin, and performing quadrant segmentation by using a preset step length to obtain a plurality of initial second search central points; and obtaining a target second searching central point from the plurality of initial second searching central points according to the quadrant of the predicted motion vector of the target first searching central point.
Specifically, when the encoder performs quadrant segmentation based on the target first search center point to obtain the target second search center point, it first takes the target first search center point as the origin to obtain a coordinate system, then divides the quadrants using the coarse-search step size as the preset step, obtaining a plurality of initial second search center points. The target second search center point is then selected from these initial points according to the quadrant in which the predicted motion vector of the target first search center point lies.
Illustratively, in conjunction with fig. 4, a schematic diagram of searching for a center point in one embodiment: suppose the coarse-search step size is n, C0 is the first search center point of the current CU with coordinates (x, y), and MVP is the predicted MV of the current CU. Since the MVP reflects the motion trend of the current CU to some extent, the second search center point can be determined from the relationship between the MVP and C0. A coordinate system is drawn with C0 as the center point, and the quadrants are divided with step n, giving a plurality of initial second search center points C1, C2, C3 and C4. The target second search center point is then obtained from C1, C2, C3 and C4 according to the position of the current CU's MVP in this coordinate system: if the MVP of the current CU is in the first quadrant of C0, C1 is taken as the second search center point; if in the second quadrant, C2; if in the third quadrant, C3; and if in the fourth quadrant, C4.
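A minimal sketch of this quadrant rule follows; the diagonal placement of C1..C4 at step n around C0 follows the description of fig. 4 above, while the handling of an MVP lying exactly on an axis is an assumption.

```cpp
struct MV { int x = 0, y = 0; };

// Choose the new second search center from C1..C4 (the four diagonal points
// at step n around C0) according to the quadrant in which the MVP lies
// relative to C0. Axis ties default to the positive side (an assumption).
MV second_center_by_quadrant(const MV& c0, const MV& mvp, int n) {
    bool right = mvp.x >= c0.x;
    bool up    = mvp.y >= c0.y;
    if (right && up)  return { c0.x + n, c0.y + n };  // quadrant I   -> C1
    if (!right && up) return { c0.x - n, c0.y + n };  // quadrant II  -> C2
    if (!right)       return { c0.x - n, c0.y - n };  // quadrant III -> C3
    return            { c0.x + n, c0.y - n };         // quadrant IV  -> C4
}
```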
In the above embodiment, since the MVP may reflect the motion trend of the current CU, a more accurate target second search center point may be obtained by using the MVP of the current CU.
In one embodiment, the method further comprises: and when the current coding unit has partitions and the number of the partitions is the target number, respectively taking the motion vectors of the prediction unit of the current coding unit as a target first searching central point and a target second searching central point.
Illustratively, if the coarse search stage determines that the current CU is partitioned into 2 PUs, as shown in fig. 8, a schematic diagram of coding unit partitioning in one embodiment, then the optimal coarse-search MV of PU0 is taken as the first search center point of the current CU's fine search, and the optimal coarse-search MV of PU1 as the second search center point.
In one embodiment, as shown in fig. 5, there is provided a motion estimation method based on center point search, including the steps of:
s502, a target first searching central point and a target second searching central point are obtained according to the central point searching method in any of the embodiments.
The encoder can obtain the target first search center point and the target second search center point corresponding to all CUs by using the center point search method in any one of the embodiments.
S504, the original pixel is subjected to intra-frame prediction based on the reference pixel, and a second prediction result is obtained.
The intra prediction refers to predicting pixels of a current block in a current frame by using boundary pixels of neighboring reconstructed blocks as reference pixels.
Optionally, an intra search may be performed using a horizontal or vertical prediction mode or the like to obtain the second prediction result, where the second prediction result is the result of performing intra prediction on the original pixels.
S506, obtaining a first prediction result according to the target first searching central point and the target second searching central point.
The encoder determines search areas centered on the target first search center point and the target second search center point, and then performs matching judgment on the candidate blocks corresponding to all or some of the candidate points in the search areas, finally obtaining the optimal matching point (or the matching block corresponding to that point). Here "optimal" means minimum coding cost.
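A minimal sketch of this matching judgment follows; the cost callback, the window half-size, and the full (rather than partial) candidate scan are assumptions for illustration.

```cpp
#include <climits>
#include <functional>

struct MV { int x = 0, y = 0; };

// Evaluate candidate points in a window around each target search center
// and keep the candidate with the minimum coding cost.
MV fine_search(const MV& target_first, const MV& target_second,
               const std::function<int(const MV&)>& cost, int half_range = 8) {
    const MV centers[2] = { target_first, target_second };
    MV best = target_first;
    int best_cost = INT_MAX;
    for (const MV& c : centers)
        for (int dy = -half_range; dy <= half_range; ++dy)
            for (int dx = -half_range; dx <= half_range; ++dx) {
                MV cand{ c.x + dx, c.y + dy };
                int cand_cost = cost(cand);
                if (cand_cost < best_cost) { best_cost = cand_cost; best = cand; }
            }
    return best;
}
```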
And S508, comparing the first prediction result with the second prediction result to obtain a target prediction result.
The encoder compares the first prediction result with the second prediction result and selects the better-performing mode, thereby obtaining the final target prediction result.
For example, if the first prediction result, obtained by performing motion estimation from the target first and target second search center points, is better than the second prediction result obtained by intra prediction of the original pixels, the encoder selects motion estimation based on the two target search center points, and the target prediction result is the first prediction result. Otherwise, the encoder uses intra prediction, and the corresponding target prediction result is the second prediction result.
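As a sketch of this mode decision, assuming a simple cost field on each result (the Result type and field names are illustrative):

```cpp
// Keep the inter result (from the two target search centers) if it is at
// least as cheap as the intra result; otherwise keep the intra result.
struct Result { int cost = 0; bool is_inter = false; };

Result choose_target(const Result& inter_first, const Result& intra_second) {
    return inter_first.cost <= intra_second.cost ? inter_first : intra_second;
}
```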
In one embodiment, 21 video sequences were tested; after the second search center point is added, the probability that the search area contains the global optimum increases greatly. The coding efficiency of the video sequences improves accordingly, by up to 10.21% and by 2.48% on average.
In the above embodiment, by comparing the first prediction result with the second prediction result, the encoder selects the better mode for the final prediction, thereby improving coding efficiency.
In an exemplary embodiment, as shown in fig. 6, a schematic diagram of motion estimation in one embodiment:
Firstly, reading an original pixel and a reference pixel from a memory; then, performing intra-frame prediction and inter-frame prediction on the original pixels respectively, wherein the inter-frame prediction comprises coarse search and fine search; then, the results of the intra prediction and the inter prediction are judged to obtain the final motion estimation result.
First, a coarse search is performed. After the coarse search, the CU partition scheme of the 64 × 64 CTU is determined (with CU sizes of 16 × 16, 32 × 32 and 64 × 64): for example, the CTU is partitioned into n CUs, each CU has a preliminary PU partition scheme, and each PU has a coarse MV.
Then the fine search phase is entered. Referring to fig. 7, a schematic diagram of fine search in one embodiment: first, it is determined whether the CU has partitions, as shown in fig. 8, a schematic diagram of coding unit partitioning in one embodiment. If the coarse search stage determined that the current CU is partitioned into 2 PUs, as shown in fig. 8, the optimal coarse-search MV of PU0 is taken as the first search center point of the current CU's fine search, and the optimal coarse-search MV of PU1 as the second search center point. If the coarse search determined that the current CU is not partitioned, i.e. the PU size equals the CU size, the optimal coarse-search MV of PU0 is taken as the first search center point of the fine search, and a second search center point is then obtained as follows.
When M < 64, i.e. M = 8/16/32, there are three other CUs of the same size around the current CU, and the current CU together with the three surrounding CUs can form a larger 2M × 2M CU, as shown in fig. 2. Since the motion trend of the current CU is strongly correlated with its surrounding CUs, the motion information of the surrounding CUs can be used to obtain the second search center point of the current CU, as follows:
1) Obtain the minimum SAD (Sum of Absolute Differences) values of the other three M × M CUs around the current M × M CU, denoted SAD0/SAD1/SAD2 respectively.
The SAD is calculated as follows:

SAD = Σ_{i=0}^{M-1} Σ_{j=0}^{N-1} |CB(i, j) - RB(i, j)|
where M × N is the size of the pixel block over which the SAD is calculated, CB is the current pixel block to be coded, CB(i, j) is the pixel value at coordinate (i, j) in the current block, RB is the reference pixel block, and RB(i, j) is the pixel value at coordinate (i, j) in the reference block. (A code sketch of this computation is given after step 3 below.)
2) Obtain the minimum SAD value of the 2M × 2M CU in which the current M × M CU is located, denoted SAD3.
3) Compare the SADs: the CU corresponding to SAD3 has size 2M × 2M while the CUs corresponding to the other SADs are M × M, and directly comparing SADs of CUs of different sizes would be unfair, so the value of SAD3 is first reduced by a factor of 4; i.e. SAD0/SAD1/SAD2/(SAD3 >> 2) are compared, and the optimal coarse-search MV of the CU corresponding to the minimum value is taken and denoted candMV.
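The following is a minimal sketch of the SAD formula above and of this comparison step; the stride parameter (row pitch of the pixel buffers) and the array names are added assumptions for illustration.

```cpp
#include <cstdint>
#include <cstdlib>

struct MV { int x = 0, y = 0; };

// SAD = sum over i,j of |CB(i,j) - RB(i,j)| for an M x N pixel block.
int sad(const uint8_t* cb, const uint8_t* rb, int stride, int m, int n) {
    int sum = 0;
    for (int i = 0; i < m; ++i)        // block rows
        for (int j = 0; j < n; ++j)    // block columns
            sum += std::abs(int(cb[i * stride + j]) - int(rb[i * stride + j]));
    return sum;
}

// Compare SAD0/SAD1/SAD2 with SAD3 >> 2 and return the coarse-search MV of
// the CU with the minimum (normalized) SAD, i.e. candMV.
MV pick_cand_mv(const int neighbor_sad[3], const MV neighbor_mv[3],
                int sad3, const MV& mv_2mx2m) {
    int best = sad3 >> 2;              // normalize the 2Mx2M CU's SAD
    MV cand = mv_2mx2m;
    for (int k = 0; k < 3; ++k)
        if (neighbor_sad[k] < best) { best = neighbor_sad[k]; cand = neighbor_mv[k]; }
    return cand;
}
```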
When M = 64, the current CU is as large as the CTU, so its surrounding CUs may belong to neighboring CTUs, which may not have started encoding yet. The second search center point is therefore obtained from the correlation between the motion trend of the current CU and the smaller CUs/PUs inside it.
With continued reference to fig. 3, the specific method for a 64 × 64 CU to obtain the second search center point using this property is as follows:
1) Obtain the MV corresponding to the minimum SAD of (M/2) × (M/2)_0 inside the current M × M CU, denoted MV1.
2) Obtain the MV corresponding to the minimum SAD of M × (M/2)_0 inside the current M × M CU, denoted MV2.
3) Obtain the MV corresponding to the minimum SAD of (M/2) × M_0 inside the current M × M CU, denoted MV3.
4) Take the median of MV1, MV2 and MV3, denoted candMV, i.e. candMV = median(MV1, MV2, MV3).
The first and second search center points are then duplicate-checked to obtain the target first and target second search center points. If candMV is not equal to the first search center point of the current CU, candMV is used as the second search center point of the current CU. If the two are equal, a new second search center point must be found; otherwise the same area would be searched repeatedly, wasting hardware resources. Suppose the coarse-search step size is n, C0 is the first search center point of the current CU with coordinates (x, y), and MVP is the predicted MV of the current CU. Since the MVP reflects the motion trend of the current CU to some extent, the second search center point can be determined from the relationship between the MVP and C0. With continued reference to fig. 4:
1) If the MVP of the current CU is in the first quadrant of C0, take C1 as the second search center point.
2) If the MVP of the current CU is in the second quadrant of C0, take C2 as the second search center point.
3) If the MVP of the current CU is in the third quadrant of C0, take C3 as the second search center point.
4) If the MVP of the current CU is in the fourth quadrant of C0, take C4 as the second search center point.
In the above embodiment, a second search center point is additionally obtained to appropriately increase the search range, so as to increase the probability of searching the global optimal matching point, thereby improving the coding quality.
It should be understood that although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, the steps may be performed in other orders. Moreover, at least some of the steps in those flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, embodiments of the present application further provide a center point searching apparatus for implementing the above center point searching method, and a motion estimation apparatus based on center point search for implementing the above motion estimation method. The implementation scheme provided by these apparatuses is similar to that described for the methods, so for specific limitations in the following apparatus embodiments, reference may be made to the limitations of the center point searching method and the motion estimation method above; details are not repeated here.
In one embodiment, as shown in fig. 9, there is provided a center point searching apparatus, including an acquisition module 100, a search module 200, a calculation module 300 and a duplication checking module 400, wherein:
the obtaining module 100 is configured to obtain an original pixel and a reference pixel.
The searching module 200 is configured to search the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the plurality of coding units.
The calculating module 300 is configured to, when the current coding unit is not partitioned, obtain a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit, and calculate the current coding unit to obtain a second search center point.
And the duplication checking module 400 is configured to check duplication of the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit.
In one embodiment, the calculating module 300 comprises:
and the external calculation unit is used for calculating the sum of absolute errors outside the coding unit when the size of the current coding unit is smaller than the preset size, and obtaining a second search central point according to the sum of absolute errors outside the coding unit.
And the internal calculation unit is used for calculating the sum of absolute errors inside the coding unit when the size of the current coding unit is equal to the preset size, and obtaining a second search central point according to the sum of absolute errors inside the coding unit.
In one embodiment, the external computing unit includes:
and the error calculation subunit is used for calculating the minimum absolute error sum of each adjacent coding unit adjacent to the current coding unit and the synthesized coding unit.
And the error comparison subunit is used for comparing the minimum absolute error sum of each adjacent coding unit with the minimum absolute error sum of the synthesized coding unit to obtain a target coding unit, and taking the motion vector of the target coding unit as a second search center point.
In one embodiment, the internal computing unit includes:
and the third error calculation subunit is used for calculating the minimum absolute error and the corresponding motion vector of each divided coding unit in the current coding unit.
And the error processing subunit is used for taking the minimum absolute error of each divided coding unit and the median of the corresponding motion vector as a second searching central point.
In one embodiment, the above-mentioned duplication checking module 400 includes:
and the first duplication checking unit is used for taking the first searching central point as a target first searching central point and taking the second searching central point as a target second searching central point when the first searching central point is not equal to the second searching central point.
And the second duplication checking unit is used for taking the first searching central point as a target first searching central point when the first searching central point is equal to the second searching central point, and performing quadrant segmentation on the basis of the target first searching central point to obtain a target second searching central point.
In one embodiment, the second duplication checking unit includes:
and the dividing unit is used for taking the target first searching central point as an origin and performing quadrant division by using a preset step length to obtain a plurality of initial second searching central points.
And the target acquisition unit is used for obtaining a target second searching central point from the plurality of initial second searching central points according to the quadrant of the predicted motion vector of the target first searching central point.
In one embodiment, the above apparatus further comprises:
and the dividing module is used for respectively taking the motion vectors of the prediction unit of the current coding unit as a target first searching central point and a target second searching central point when the current coding unit has the divisions and the division number is the target number.
In one embodiment, as shown in fig. 10, there is provided a motion estimation apparatus based on center point search, including: a center point acquisition module 500, an intra prediction module 600, an inter prediction module 700 and a comparison module 800.
The center point obtaining module 500 is configured to obtain the target first search center point and the target second search center point according to the center point searching apparatus in any of the above embodiments.
The intra-frame prediction module 600 is configured to perform intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result.
The inter-frame prediction module 700 is configured to obtain a first prediction result according to the target first search center point and the target second search center point.
The comparing module 800 is configured to compare the first prediction result with the second prediction result to obtain a target prediction result.
The modules in the center point search and the motion estimation apparatus based on the center point search may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be an encoder, the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, an Input/Output interface (I/O for short), and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used to store raw pixel and reference pixel data. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a center point search method.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of part of the structure associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In an embodiment, a computer device is provided, comprising a memory in which a computer program is stored and a processor which, when executing the computer program, carries out the steps of the method in any of the above embodiments.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the steps of the method of any of the above embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of the method in any of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the various embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described, but any combination of these technical features should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (13)

1. A center point searching method, the method comprising:
acquiring an original pixel and a reference pixel;
searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units;
when the current coding unit is not divided, obtaining a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit, and performing calculation on the current coding unit to obtain a second search center point; and
performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit.
2. The method of claim 1, wherein the performing calculation on the current coding unit to obtain the second search center point comprises:
when the size of the current coding unit is smaller than a preset size, calculating a sum of absolute errors outside the coding unit, and obtaining the second search center point according to the sum of absolute errors outside the coding unit; and
when the size of the current coding unit is equal to the preset size, calculating a sum of absolute errors inside the coding unit, and obtaining the second search center point according to the sum of absolute errors inside the coding unit.
3. The method of claim 2, wherein calculating the sum of absolute errors outside the coding unit when the size of the current coding unit is smaller than the preset size, and obtaining the second search center point according to the sum of absolute errors outside the coding unit, comprises:
calculating the minimum sum of absolute errors of each coding unit adjacent to the current coding unit and of the synthesized coding unit; and
comparing the minimum sums of absolute errors of the adjacent coding units with the minimum sum of absolute errors of the synthesized coding unit to obtain a target coding unit, and taking the motion vector of the target coding unit as the second search center point.
4. The method of claim 2, wherein calculating the sum of absolute errors inside the coding unit when the size of the current coding unit is equal to the preset size, and obtaining the second search center point according to the sum of absolute errors inside the coding unit, comprises:
calculating the minimum sum of absolute errors and the corresponding motion vector for each divided coding unit within the current coding unit; and
taking the median of the motion vectors corresponding to the minimum sums of absolute errors of the divided coding units as the second search center point.
5. The method of claim 1, wherein the performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit comprises:
when the first search center point is not equal to the second search center point, taking the first search center point as the target first search center point and the second search center point as the target second search center point; and
when the first search center point is equal to the second search center point, taking the first search center point as the target first search center point, and performing quadrant segmentation based on the target first search center point to obtain the target second search center point.
6. The method of claim 5, wherein the performing quadrant segmentation based on the target first search center point to obtain the target second search center point comprises:
taking the target first search center point as an origin, and performing quadrant segmentation with a preset step size to obtain a plurality of initial second search center points; and
obtaining the target second search center point from the plurality of initial second search center points according to the quadrant of the predicted motion vector of the target first search center point.
7. The method of claim 1, further comprising:
when the current coding unit is partitioned and the number of partitions is the target number, taking the motion vectors of the prediction units of the current coding unit as the target first search center point and the target second search center point, respectively.
8. A method for motion estimation based on center point search, the method comprising:
obtaining a target first search center point and a target second search center point according to the center point searching method of any one of claims 1 to 7;
performing intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result;
obtaining a first prediction result according to the target first search center point and the target second search center point; and
comparing the first prediction result with the second prediction result to obtain a target prediction result.
9. A center point search apparatus, the apparatus comprising:
an acquisition module, configured to acquire an original pixel and a reference pixel;
a search module, configured to search the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units;
a calculation module, configured to obtain, when the current coding unit is not divided, a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit, and to perform calculation on the current coding unit to obtain a second search center point; and
a duplicate checking module, configured to perform duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit.
10. An apparatus for motion estimation based on center point search, the apparatus comprising:
a center point acquisition module, configured to obtain a target first search center point and a target second search center point by means of the center point search apparatus of claim 9;
an intra-frame prediction module, configured to perform intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result;
an inter-frame prediction module, configured to obtain a first prediction result according to the target first search center point and the target second search center point; and
a comparison module, configured to compare the first prediction result with the second prediction result to obtain a target prediction result.
11. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7 or claim 8.
12. A computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7 or claim 8.
13. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7 or claim 8.
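Claims 2 to 4 revolve around sums of absolute errors (SAD, in common encoder parlance) computed outside or inside the current coding unit. The sketch below illustrates the SAD definition, the claim 3 comparison between adjacent coding units and the synthesized coding unit, and the claim 4 median rule; the data layout and the helper names are assumptions made for illustration.

```python
def sad(block_a, block_b):
    """Sum of absolute errors between two equally sized pixel blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))


def median(values):
    """Median of a list (upper middle for even counts), applied per
    component to the motion vectors in claim 4."""
    s = sorted(values)
    return s[len(s) // 2]


def second_center_from_neighbors(neighbor_results, synthesized_result):
    """Claim 3: compare the minimum SADs of the adjacent coding units with
    that of the synthesized coding unit; the winner's motion vector becomes
    the second search center point. Each result is a (sad, mv) pair."""
    return min(neighbor_results + [synthesized_result])[1]


def second_center_from_partitions(partition_results):
    """Claim 4: per divided coding unit, take the motion vector of its
    minimum SAD, then take the per-component median of those vectors."""
    best_mvs = [min(results)[1] for results in partition_results]
    return (median([mv[0] for mv in best_mvs]),
            median([mv[1] for mv in best_mvs]))


# partition_results: one list of (sad, motion_vector) pairs per divided
# coding unit inside the current coding unit.
example = [
    [(120, (2, 1)), (90, (1, 1))],
    [(60, (3, 0)), (75, (2, 2))],
    [(100, (1, 2)), (95, (0, 1))],
]
print(second_center_from_partitions(example))  # -> (1, 1)
```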
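Claim 5's duplicate check is what guarantees two distinct starting points for the subsequent search. A minimal sketch, assuming a pluggable quadrant-segmentation step (the claim 6 procedure) for the coinciding case:

```python
def dedupe_centers(first, second, predicted_mv, step, quadrant_segment):
    """Claim 5: keep both search center points when they differ; when they
    coincide, keep the first and re-derive the second by quadrant
    segmentation around it (claim 6)."""
    if first != second:
        return first, second
    return first, quadrant_segment(first, predicted_mv, step)


# Example with a stand-in segmentation step; in practice the quadrant
# procedure of claim 6 (sketched earlier) would be plugged in here.
pts = dedupe_centers((8, 8), (8, 8), (-2, -5), 4,
                     lambda c, mv, s: (c[0] - s, c[1] - s))
print(pts)  # -> ((8, 8), (4, 4))
```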
CN202211232932.XA 2022-10-10 2022-10-10 Center point searching method, center point searching device, computer equipment and storage medium Active CN115529459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211232932.XA CN115529459B (en) 2022-10-10 2022-10-10 Center point searching method, center point searching device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211232932.XA CN115529459B (en) 2022-10-10 2022-10-10 Center point searching method, center point searching device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115529459A true CN115529459A (en) 2022-12-27
CN115529459B CN115529459B (en) 2024-02-02

Family

ID=84702521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211232932.XA Active CN115529459B (en) 2022-10-10 2022-10-10 Center point searching method, center point searching device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115529459B (en)

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0734177A2 (en) * 1995-03-20 1996-09-25 Daewoo Electronics Co., Ltd Method and apparatus for encoding/decoding a video signal
WO2000033580A1 (en) * 1998-11-30 2000-06-08 Microsoft Corporation Improved motion estimation and block matching pattern
JP2000236552A (en) * 1999-02-15 2000-08-29 Nec Corp Motion vector detector
CN1662067A (en) * 2004-02-27 2005-08-31 松下电器产业株式会社 Motion estimation method and moving picture coding method
US20050190844A1 (en) * 2004-02-27 2005-09-01 Shinya Kadono Motion estimation method and moving picture coding method
EP1679900A2 (en) * 2005-01-07 2006-07-12 NTT DoCoMo, Inc. Apparatus and method for multiresolution encoding and decoding
CN101087413A (en) * 2006-06-07 2007-12-12 中兴通讯股份有限公司 Division method of motive object in video sequence
CN101022551A (en) * 2007-03-15 2007-08-22 上海交通大学 Motion compensating module pixel prefetching device in AVS video hardware decoder
JP2008301270A (en) * 2007-05-31 2008-12-11 Canon Inc Moving image encoding device and moving image encoding method
US20100026903A1 (en) * 2008-07-30 2010-02-04 Sony Corporation Motion vector detection device, motion vector detection method, and program
WO2010041624A1 (en) * 2008-10-09 2010-04-15 株式会社エヌ・ティ・ティ・ドコモ Moving image encoding device, moving image decoding device, moving image encoding method, moving image decoding method, moving image encoding program, moving image decoding program, moving image processing system and moving image processing method
CN101600112A (en) * 2009-07-09 2009-12-09 杭州士兰微电子股份有限公司 Sub-pixel motion estimation device and method
CN101815218A (en) * 2010-04-02 2010-08-25 北京工业大学 Method for coding quick movement estimation video based on macro block characteristics
US20140169472A1 (en) * 2012-12-19 2014-06-19 Mikhail Fludkov Motion estimation engine for video encoding
WO2016008284A1 (en) * 2014-07-18 2016-01-21 清华大学 Intra-frame pixel prediction method, encoding method and decoding method, and device thereof
CN106331703A (en) * 2015-07-03 2017-01-11 华为技术有限公司 Video coding and decoding method, and video coding and decoding device
US20180241993A1 (en) * 2016-05-17 2018-08-23 Arris Enterprises Llc Template matching for jvet intra prediction
CN109660800A (en) * 2017-10-12 2019-04-19 北京金山云网络技术有限公司 Method for estimating, device, electronic equipment and computer readable storage medium
CN107872674A (en) * 2017-11-23 2018-04-03 上海交通大学 A kind of layering motion estimation method and device for ultra high-definition Video Applications
CN108495138A (en) * 2018-03-28 2018-09-04 天津大学 A kind of integer pixel motion estimation method based on GPU
CN110365988A (en) * 2018-04-11 2019-10-22 福州瑞芯微电子股份有限公司 A kind of H.265 coding method and device
GB201810794D0 (en) * 2018-06-29 2018-08-15 Imagination Tech Ltd Guaranteed data compression
WO2021056225A1 (en) * 2019-09-24 2021-04-01 Oppo广东移动通信有限公司 Inter-frame prediction method and apparatus, device and storage medium
CN112514392A (en) * 2020-02-18 2021-03-16 深圳市大疆创新科技有限公司 Method and apparatus for video encoding
CN111479115A (en) * 2020-04-14 2020-07-31 腾讯科技(深圳)有限公司 Video image processing method and device and computer readable storage medium
CN114565501A (en) * 2022-02-21 2022-05-31 格兰菲智能科技有限公司 Data loading method and device for convolution operation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BENJAMIN BROSS: "JVET AHG report: Draft text and test model algorithm description editing (AHG2)", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 15TH MEETING: GOTHENBURG, SE, 3–12 JULY 2019 *
MATHIAS WIEN ET AL: "AHG4: Agenda and report of the AHG meeting on the 360 Video Verification Tests on 2020-09-04", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 20TH MEETING: BY TELECONFERENCE, 7-16 OCTOBER 2020 *

Also Published As

Publication number Publication date
CN115529459B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
US20200244986A1 (en) Picture prediction method and related apparatus
US9451266B2 (en) Optimal intra prediction in block-based video coding to calculate minimal activity direction based on texture gradient distribution
US7580456B2 (en) Prediction-based directional fractional pixel motion estimation for video coding
US8634471B2 (en) Moving image encoding apparatus, control method thereof and computer-readable storage medium
CN107318026A (en) Video encoder and method for video coding
JP2009147807A (en) Image processing apparatus
CN110312130B (en) Inter-frame prediction and video coding method and device based on triangular mode
KR100994773B1 (en) Method and Apparatus for generating motion vector in hierarchical motion estimation
CN111246212B (en) Geometric partitioning mode prediction method and device based on encoding and decoding end, storage medium and terminal
CN109688411B (en) Video coding rate distortion cost estimation method and device
CN115529459B (en) Center point searching method, center point searching device, computer equipment and storage medium
CN113747166B (en) Encoding and decoding method, device and equipment
CN114040209A (en) Motion estimation method, motion estimation device, electronic equipment and storage medium
JP6390275B2 (en) Encoding circuit and encoding method
CN113347417A (en) Method, device, equipment and storage medium for improving rate distortion optimization calculation efficiency
US20150092835A1 (en) Methods for Comparing a Target Block to a Reference Window for Motion Estimation during Video Encoding
CN116156174B (en) Data encoding processing method, device, computer equipment and storage medium
CN113556551B (en) Encoding and decoding method, device and equipment
TW201521429A (en) Video pre-processing method and apparatus for motion estimation
CN113365081B (en) Method and device for optimizing motion estimation in video coding
CN113766234B (en) Decoding and encoding method, device and equipment
Ogunfunmi et al. Low power HD video fast motion estimation algorithm based on signatures
WO2022021310A1 (en) Encoding method and apparatus, computing processing device, computer program, and storage medium
CN117857814A (en) Video processing method, device, equipment and medium
CN116723324A (en) Video encoding method, video encoding device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 200135, 11th Floor, Building 3, No. 889 Bibo Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: Granfei Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: 200135 Room 201, No. 2557, Jinke Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee before: Gryfield Intelligent Technology Co.,Ltd.

Country or region before: China