WO2007089068A1 - Method and apparatus for block-based motion estimation
- Publication number: WO2007089068A1
- Application: PCT/KR2006/004689
- Authority: WO (WIPO/PCT)
- Prior art keywords: block, blocks, video, search points, degree
Classifications
- H04N19/533: Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]
- H04N19/567: Motion estimation based on rate distortion criteria
- H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/56: Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
Abstract
Provided are a method and apparatus for block-based motion estimation, in which a motion vector is estimated by allocating different search points for blocks in a frame having a plurality of block groups. A video motion estimation method includes allocating different search points for blocks in each block group of an input video frame, estimating candidate motion vectors for the blocks by measuring the degree of block distortion between frames at each of the allocated search points, measuring the degree of block distortion to which each of the estimated candidate motion vectors is applied, by applying the estimated candidate motion vectors to a current block, and determining the candidate motion vector having the minimum degree of block distortion to be a final motion vector of the current block.
Description
METHOD AND APPARATUS FOR BLOCK-BASED MOTION ESTIMATION
Technical Field
[1] The present invention relates to a video signal compression system, and more particularly, to a method and apparatus for motion estimation, in which a motion vector is obtained by allocating a search point for each block in a frame having a plurality of block groups.
Background Art
[2] Typically, personal computers (PC) or high-definition televisions (HDTV) perform frame rate conversion compatible with programs that follow various broadcasting standards such as the Phase Alternation Line (PAL) or the National Television System Committee (NTSC). Frame rate conversion (FRC) is the act of changing the number of output frames per second. In particular, it is necessary to interpolate new frames when a frame rate increases. With recent advances in broadcasting technologies, frame rate conversion is performed after video data is compressed according to video compression standards such as Moving Picture Experts Group (MPEG) and H.263.
[3] In the field of video processing, video signals usually have redundancies due to their high correlation. Data compression efficiency can be improved by removing these redundancies during compression. In particular, in order to efficiently compress a video sequence that changes over time, it is necessary to remove redundancies in the time-axis direction. In other words, by replacing a frame showing no movement or only slight movement with a previous frame, the amount of data to be transmitted can be greatly reduced. Motion estimation (ME) is the act of searching for the block in a previous frame that is most similar to a block in the current frame, and a motion vector (MV) indicates how far the block has moved.
[4] There are two motion estimation approaches, a pixel recursive algorithm (PRA) and a block matching algorithm (BMA).
[5] The BMA is widely used in video encoding or FRC due to its simplicity, ease of hardware implementation, and possibility of real-time processing.
[6] The BMA includes a full search method and a three step search (TSS) method.
[7] Though the full search method provides accurate motion information by searching in a search area for a block having the minimum sum of absolute difference (SAD), it requires a large amount of computation.
[8] The TSS method provides optimal motion estimation by searching for a matching point while reducing the interval of a step from a search center.
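The step-reduction idea of the TSS can be sketched as follows. This is an illustrative NumPy implementation, not code from the patent; the function names and the frame representation (a 2-D array of luma samples) are assumptions.

```python
import numpy as np

def sad(ref_block, cur_block):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(ref_block.astype(np.int32) - cur_block.astype(np.int32)).sum())

def three_step_search(ref_frame, cur_block, bx, by, step=4):
    """Classic TSS: evaluate the centre and its 8 neighbours at the current
    step size, recentre on the best match, halve the step, and repeat until
    the step interval becomes 1."""
    bs = cur_block.shape[0]
    cx, cy = 0, 0                      # motion vector accumulated so far
    while step >= 1:
        best, best_err = (0, 0), None
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                x, y = bx + cx + dx, by + cy + dy
                if 0 <= x and 0 <= y and x + bs <= ref_frame.shape[1] and y + bs <= ref_frame.shape[0]:
                    err = sad(ref_frame[y:y + bs, x:x + bs], cur_block)
                    if best_err is None or err < best_err:
                        best_err, best = err, (dx, dy)
        cx, cy = cx + best[0], cy + best[1]
        step //= 2
    return cx, cy                      # final motion vector
```

With the default step of 4, the search covers displacements of up to ±7 pixels in three refinement passes, which is the usual trade-off of the TSS against a full search.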
[9] FIG. 1 is a conceptual view for explaining a conventional TSS method. In FIG. 1, 101 denotes a search area of a reference frame, 102 denotes a reference block of the reference frame, 103 denotes a block of a current frame, and 104 denotes a search point.
[10] Referring to FIG. 1, SADs between a reference search point at (0, 0) and its eight neighboring search points are compared to obtain the search point having the minimum SAD. A search is then performed after reducing the interval of the search step from the obtained search point by half. A final search point 105 having the minimum SAD is obtained by repeating the search until the interval of the search step becomes 1, and the obtained search point 105 is determined to be the final motion vector.
Disclosure of Invention
Technical Problem
[11] However, in the conventional TSS method, many local minimum values in a block may cause an incorrect search.
[12] Referring to FIG. 2, there are five local minimum SADs of the error function within a single block. The TSS method assumes that only a single minimum SAD exists between temporally adjacent blocks, but in practice several local minimum SADs exist between them, resulting in a failure to obtain accurate motion information. As a result, the conventional TSS method may cause blocking artifacts in an actual image due to the plurality of local minimum SADs. Moreover, since the conventional TSS method uses a fixed search point pattern for all blocks, it is not easy to estimate motion that falls outside the search range.
Technical Solution
[13] The present invention provides a method of block-based motion estimation, in which an initial motion vector is estimated by allocating different search points for blocks in a frame having a plurality of block groups, thereby reducing the amount of computation and providing accurate motion estimation.
[14] The present invention also provides an apparatus for block-based motion estimation, in which an initial motion vector is estimated by allocating different search points for blocks in a frame having a plurality of block groups.
Advantageous Effects
[15] As described above, according to the present invention, by allocating different search points for blocks in a reference frame in video encoding, FRC, or interlace to progressive conversion (IPC), the amount of computation can be reduced and accurate motion estimation can be achieved. Furthermore, local minimum error values that may be generated in an actual image during motion estimation can be minimized, thereby implementing more accurate motion estimation.
Description of Drawings
[16] The above and other features and advantages of the present invention will become more apparent by describing in detail an exemplary embodiment thereof with reference to the attached drawings in which:
[17] FIG. 1 is a conceptual view for explaining a conventional three step search (TSS) method;
[18] FIG. 2 is a graph illustrating sums of absolute differences (SADs) according to the TSS method of FIG. 1;
[19] FIG. 3 is a block diagram of an apparatus for block-based motion estimation according to an embodiment of the present invention;
[20] FIG. 4 illustrates an example in which different search points are allocated for blocks in block groups of a video frame;
[21] FIGS. 5 through 8 illustrate examples in which different search points are allocated for the blocks of FIG. 4;
[22] FIG. 9 is a conceptual view for explaining motion estimation of a first error matching unit of the apparatus of FIG. 3;
[23] FIG. 10 is a conceptual view for explaining motion estimation of a second error matching unit of the apparatus of FIG. 3; and
[24] FIG. 11 is a flowchart illustrating a method of block-based motion estimation according to an embodiment of the present invention.
Best Mode
[25] According to one aspect of the present invention, there is provided a video motion estimation method including allocating different search points for blocks in each block group of an input video frame, estimating candidate motion vectors for the blocks by measuring the degree of block distortion between frames at each of the allocated search points, and measuring the degree of block distortion to which each of the estimated candidate motion vectors is applied, by applying the estimated candidate motion vectors to a current block, and determining the candidate motion vector having the minimum degree of block distortion to be a final motion vector of the current block.
[26] According to another aspect of the present invention, there is provided a video motion estimation apparatus including a search point allocating unit, an error matching unit, and a motion vector determining unit. The search point allocating unit allocates different search points for blocks in each block group of an input video frame. The error matching unit estimates initial motion vectors for the blocks by measuring the degree of block distortion between frames at each of the allocated search points. The motion vector determining unit measures the degree of block distortion to which each of the estimated initial motion vectors is applied, by applying them to a current block, and determines the initial motion vector having the minimum degree of block distortion to be the motion vector of the current block.
Mode for Invention
[27] Hereinafter, an exemplary embodiment of the present invention will be described in detail with reference to the accompanying drawings.
[28] FIG. 3 is a block diagram of an apparatus for block-based motion estimation according to an embodiment of the present invention.
[29] Referring to FIG. 3, the apparatus for block-based motion estimation includes a search point allocating unit 310, a first error matching unit 320, and a motion vector determining unit 330. The motion vector determining unit 330 includes a second error matching unit 332, a weight multiplying unit 334, and a minimum value selecting unit 336.
[30] First, two video frames that are temporally adjacent to each other, i.e., occur sequentially in time, are input. The two video frames include a current frame and a reference frame that is temporally adjacent to the current frame.
[31] The search point allocating unit 310 divides the reference frame into a plurality of block groups and allocates different search points for the blocks in each block group. FIG. 4 illustrates an example in which different search points are allocated for blocks in block groups of a video frame. Referring to FIG. 4, the entire area of the reference frame is divided into 8x8 pixel blocks, and the 8x8 pixel blocks are grouped into 2x2 block groups. The block groups here are of a 2x2 type, but may also be of a 3x3 or 4x4 type. Different search points are thus allocated for the blocks in each block group of the reference frame, allowing searches for motions of different sizes and types. FIGS. 5 through 8 illustrate examples in which different search points are allocated for the blocks of FIG. 4. FIG. 5 illustrates the search points allocated for estimation of a motion vector of a block 401 of FIG. 4. Referring to FIG. 5, 501 denotes a search range, 502 denotes a block, 503 denotes a center search point, and 504 denotes search points. The range of the search points 504, used to search for a large motion in the horizontal direction, is horizontally ±11 pixels and vertically ±4 pixels from the center search point 503. FIG. 6 illustrates the search points allocated for estimation of a block 402 of FIG. 4. Referring to FIG. 6, the range of the search points used to search for a middle motion in the horizontal direction is horizontally ±8 pixels and vertically ±4 pixels from the center search point. FIG. 7 illustrates the search points allocated for estimation of a block 403 of FIG. 4. Referring to FIG. 7, the range of the search points used to search for a small motion in the horizontal direction is horizontally ±5 pixels and vertically ±4 pixels from the center search point. FIG. 8 illustrates the search points allocated for estimation of a motion vector of the block 404 of FIG. 4. Referring to FIG. 8, the range of the search points used to search for a fine motion is horizontally ±2 pixels and vertically ±2 pixels from the center search point.
[32] Referring back to FIG. 3, the first error matching unit 320 matches the reference frame, for which the search points are allocated by the search point allocating unit 310, with the input current frame to generate initial motion vectors MV1 through MV4 for the blocks. In an embodiment of the present invention, MV1, MV2, MV3, and MV4 are generated for 2x2 block groups. Referring to FIG. 9, 901 denotes a search range of a reference block of the reference frame, 902 denotes a reference block of the reference frame, and 903 denotes a current block of the current frame. The first error matching unit 320 may measure the degree of distortion between the reference block 902 and the current block 903 using the error function given by equation 1.
Er(l,k) = \sum_{j=0}^{BS_y - 1} \sum_{i=0}^{BS_x - 1} \left| RB(i+l,\, j+k) - CB(i,\, j) \right| \qquad (1)
[33] where Er(l,k) indicates the degree of distortion between a reference block and a current block at a current position (l,k), RB indicates the reference block, CB indicates the current block, (i,j) indicates a pixel position, BSy indicates the block size in the y-direction, and BSx indicates the block size in the x-direction. The error function measures the degree of distortion using SADs between the two blocks about a center, i.e., the current position (l,k). Thus, the first error matching unit 320 obtains an SAD corresponding to each of the search points allocated as in FIGS. 5 through 8 and determines the position (l,k) having the minimum SAD to be the initial motion vector of a block.
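The first error matching stage can be sketched as follows, combining a per-group search-range allocation with the SAD of equation 1. This is an illustrative NumPy sketch, not the patent's implementation: the mapping from a block's position within its 2x2 group to a search range is an assumption loosely based on the ranges quoted for FIGS. 5 through 8, and every search point inside each range is visited rather than the patent's specific point patterns.

```python
import numpy as np

# Assumed (horizontal reach, vertical reach) per position in a 2x2 block group,
# loosely following the ranges described for FIGS. 5-8.
SEARCH_RANGES = {
    (0, 0): (11, 4),   # large horizontal motion
    (0, 1): (8, 4),    # middle horizontal motion
    (1, 0): (5, 4),    # small horizontal motion
    (1, 1): (2, 2),    # fine motion
}

def block_sad(ref_frame, cur_block, x, y):
    """Er(l,k) of equation 1: SAD between the reference block at (x, y)
    and the current block."""
    bs_y, bs_x = cur_block.shape
    rb = ref_frame[y:y + bs_y, x:x + bs_x].astype(np.int32)
    return int(np.abs(rb - cur_block.astype(np.int32)).sum())

def initial_motion_vector(ref_frame, cur_frame, bx, by, bs=8):
    """First error matching: evaluate the SAD at every search point allocated
    to this block's position within its 2x2 group and keep the minimum."""
    group_pos = ((by // bs) % 2, (bx // bs) % 2)
    rx, ry = SEARCH_RANGES[group_pos]
    cur_block = cur_frame[by:by + bs, bx:bx + bs]
    h, w = ref_frame.shape
    best_mv, best_err = (0, 0), None
    for dy in range(-ry, ry + 1):
        for dx in range(-rx, rx + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and 0 <= y and x + bs <= w and y + bs <= h:
                err = block_sad(ref_frame, cur_block, x, y)
                if best_err is None or err < best_err:
                    best_err, best_mv = err, (dx, dy)
    return best_mv, best_err
```

Because each of the four group positions scans a different range, the four blocks of a group together cover large, middle, small, and fine motions without any single block paying the cost of a wide search.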
[34] The second error matching unit 332 re-attempts motion estimation for the current block by applying the initial motion vectors determined by the first error matching unit 320 to the current block. In other words, since the blocks have initial motion vectors determined using different search points, they have different initial motion vectors. Thus, an appropriate motion vector for the current block can be estimated by applying the initial motion vectors of adjacent blocks to the current block. Referring to FIG. 10, a current block (i,j) has the motion vector MV4. The blocks (i,j-1), (i-1,j-1), and (i-1,j) adjacent to the current block (i,j) have the motion vectors MV1, MV2, and MV3, respectively. Thus, the error function between the reference block and the current block is evaluated by applying the motion vectors MV1, MV2, and MV3 of the adjacent blocks and the motion vector MV4 of the current block to the current block. As a result, the second error matching unit 332 generates four SADs, namely SAD1, SAD2, SAD3, and SAD4, for the current block using the error function.
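A sketch of the second error matching stage. The `mv_field` lookup, the 8-pixel block size, and the neighbour addressing are illustrative assumptions; the patent only specifies that the left, upper-left, and upper neighbours' vectors and the block's own vector are re-evaluated on the current block.

```python
import numpy as np

def candidate_sads(ref_frame, cur_frame, bx, by, mv_field, bs=8):
    """Second error matching: measure the current block's SAD under the
    initial vectors of its left (MV1), upper-left (MV2), and upper (MV3)
    neighbours and its own initial vector (MV4).
    mv_field maps a block's top-left pixel position (bx, by) to (dx, dy)."""
    cur_block = cur_frame[by:by + bs, bx:bx + bs].astype(np.int32)
    candidates = [
        mv_field[(bx - bs, by)],        # MV1: block (i, j-1)
        mv_field[(bx - bs, by - bs)],   # MV2: block (i-1, j-1)
        mv_field[(bx, by - bs)],        # MV3: block (i-1, j)
        mv_field[(bx, by)],             # MV4: the current block itself
    ]
    sads = []
    for dx, dy in candidates:
        rb = ref_frame[by + dy:by + dy + bs, bx + dx:bx + dx + bs].astype(np.int32)
        sads.append(int(np.abs(rb - cur_block).sum()))
    return candidates, sads
```

A boundary block would need its missing neighbours handled (e.g. by falling back to its own vector); that bookkeeping is omitted here for brevity.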
[35] The weight multiplying unit 334 applies weights W1, W2, W3, and W4 to the four SADs, respectively. In other words, since adjacent blocks have different search ranges and, in an actual image, horizontally adjacent blocks usually have the same motion, a more accurate motion vector can be estimated by applying different weights to the SADs obtained from the motion vectors of the blocks adjacent to the current block. The weight multiplying unit 334 may apply different weights to the SADs according to various motion information, such as the amount of motion or the spatial position of the block referred to by the current block.
[36] The minimum value selecting unit 336 selects the motion vector having the minimum SAD from among the four SADs to which the weights are applied by the weight multiplying unit 334 as a final motion vector.
[37] The computation performed by the weight multiplying unit 334 and the minimum value selecting unit 336 can be expressed by equation 2.

MV_f = \min \left\{ \begin{array}{l} SAD(MV1) + SAD(MV1) \times 0.3 \\ SAD(MV2) + SAD(MV2) \times 0.2 \\ SAD(MV3) + SAD(MV3) \times 0.1 \\ SAD(MV4) \end{array} \right. \qquad (2)
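Assuming the 0.3/0.2/0.1/0 weighting pattern suggested by equation 2 (the exact weight values are a reading of the garbled original and may differ in the issued patent), the weight multiplication and minimum selection might look like:

```python
def select_final_mv(candidates, sads, weights=(0.3, 0.2, 0.1, 0.0)):
    """Weight multiplication and minimum selection: each SAD is inflated by a
    penalty weight, and the candidate with the smallest weighted SAD becomes
    the final motion vector. The current block's own vector (last entry)
    carries no penalty, so neighbour vectors win only when clearly better."""
    weighted = [s + s * w for s, w in zip(sads, weights)]
    best = min(range(len(weighted)), key=weighted.__getitem__)
    return candidates[best]
```

Because the penalty is multiplicative, a neighbour's vector is preferred only when its raw SAD is lower than the current block's SAD by more than the penalty factor.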
[38] FIG. 11 is a flowchart illustrating a method of block-based motion estimation according to an embodiment of the present invention.
[39] In operation 1110, two video frames that are temporally adjacent to each other, i.e., occur sequentially in time, are input. The two video frames include a current frame and a reference frame that is temporally adjacent to the current frame.
[40] In operation 1120, the reference frame is divided into a plurality of block groups, and different search points for searching for motions of different sizes are allocated for the blocks in each block group.
[41] In operation 1130, the degree of block distortion between the current frame and the reference frame is measured at the allocated search points for each block. In operation 1140, a candidate motion vector of each block is estimated using the degree of block distortion measured in operation 1130. Here, the degree of block distortion is measured using an SAD.
[42] Next, motion estimation is re-attempted by applying the candidate motion vectors of adjacent blocks to a current block. For example, the degree of distortion is measured by applying four candidate motion vectors to the current block, thereby obtaining four SADs in operation 1150.
[43] In operation 1160, different weights are applied to the obtained SADs according to motion directions.
[44] In operation 1170, the candidate motion vector having the minimum SAD is determined to be the final motion vector of the current block.
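Operations 1110 through 1170 can be tied together in a single end-to-end sketch. This is an illustration under stated assumptions, not the patent's reference implementation: the search ranges per group position, the weight values, and the fallback for boundary blocks (using the block's own vector when a neighbour is missing) are all assumptions.

```python
import numpy as np

def estimate_motion_field(ref, cur, bs=8, weights=(0.3, 0.2, 0.1, 0.0)):
    """End-to-end sketch: per-group search ranges (operation 1120), SAD-based
    candidate vectors (1130-1140), neighbour re-matching (1150), and weighted
    minimum selection (1160-1170). Returns {(bx, by): (dx, dy)}."""
    ranges = {(0, 0): (11, 4), (0, 1): (8, 4), (1, 0): (5, 4), (1, 1): (2, 2)}
    h, w = cur.shape

    def sad(mv, bx, by):
        dx, dy = mv
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + bs > w or y + bs > h:
            return float("inf")          # search point outside the frame
        return float(np.abs(ref[y:y + bs, x:x + bs].astype(np.int32)
                            - cur[by:by + bs, bx:bx + bs].astype(np.int32)).sum())

    # Operations 1120-1140: one candidate vector per block.
    cand = {}
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            rx, ry = ranges[((by // bs) % 2, (bx // bs) % 2)]
            cand[(bx, by)] = min(((dx, dy) for dy in range(-ry, ry + 1)
                                  for dx in range(-rx, rx + 1)),
                                 key=lambda mv: sad(mv, bx, by))

    # Operations 1150-1170: re-match with neighbour candidates, weight, select.
    final = {}
    for (bx, by), own in cand.items():
        mvs = [cand.get((bx - bs, by), own), cand.get((bx - bs, by - bs), own),
               cand.get((bx, by - bs), own), own]
        scored = [sad(mv, bx, by) * (1.0 + wgt) for mv, wgt in zip(mvs, weights)]
        final[(bx, by)] = mvs[scored.index(min(scored))]
    return final
```

For a frame pair related by a small uniform translation, every interior block of the resulting field converges on the true displacement, since its own candidate and all neighbour candidates agree.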
Industrial Applicability
[45] The present invention can also be embodied as a computer-readable code on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of computer-readable recording media include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, and carrier waves. The computer-readable recording medium can also be distributed over a network of coupled computer systems so that the computer-readable code is stored and executed in a decentralized fashion.
[46] While the present invention has been particularly shown and described with reference to an exemplary embodiment thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Claims
1. A video motion estimation method comprising: allocating different search points for blocks in each block group of an input video frame; estimating candidate motion vectors for the blocks by measuring the degree of block distortion between frames at each of the allocated search points; and measuring the degree of block distortion to which each of the estimated candidate motion vectors is applied by applying the estimated candidate motion vectors to a current block, and determining the candidate motion vector having the minimum degree of block distortion to be a final motion vector of the current block.
2. The video motion estimation method of claim 1, wherein the allocating of the different search points comprises: inputting two video frames that are temporally adjacent to each other; and dividing blocks of a reference frame that is temporally adjacent to a current frame into block groups and allocating the different search points for the blocks in each of the block groups, the reference frame and the current frame being included in the two video frames.
3. The video motion estimation method of claim 1, wherein the allocating of the different search points comprises allocating the different search points for the blocks according to the size and type of a motion.
4. The video motion estimation method of claim 1, wherein the allocating of the different search points comprises using a variable number of blocks in each of the block groups.
5. The video motion estimation method of claim 1, wherein the allocating of the different search points comprises changing the arrangement of the search points for the blocks in each of the block groups.
6. The video motion estimation method of claim 1, wherein the degree of block distortion is measured using a sum of absolute difference (SAD).
7. The video motion estimation method of claim 1, wherein the determining of the final motion vector comprises measuring the degree of block distortion to which each of the estimated candidate motion vectors is applied, by applying a motion vector of the current block and motion vectors of adjacent blocks of the current block to the current block.
8. The video motion estimation method of claim 1, wherein the determining of the final motion vector comprises applying weights to the degrees of block distortion generated for the candidate motion vectors.
9. The video motion estimation method of claim 8, wherein different weights are applied to the degrees of block distortion according to the arrangement of search points in adjacent blocks of the current block.
10. The video motion estimation method of claim 8, wherein different weights are applied to the degrees of block distortion according to the spatial position of a block referred to by the current block.
11. A video motion estimation apparatus comprising: a search point allocating unit allocating different search points for blocks in each of block groups of an input video frame; an error matching unit estimating candidate motion vectors for the blocks by measuring the degree of block distortion between frames at each of the allocated search points; and a motion vector determining unit measuring the degree of block distortion to which each of the estimated candidate motion vectors is applied, by applying the estimated candidate motion vectors to a current block, and determining the initial motion vector having the minimum degree of block distortion to be a motion vector of the current block.
12. The video motion estimation apparatus of claim 11, wherein the search point allocating unit allocates the different search points for the blocks according to the size and type of a motion.
13. The video motion estimation apparatus of claim 11, wherein the motion vector determining unit comprises: a second error matching unit measuring the degree of block distortion to which each of the estimated candidate motion vectors is applied, by applying the estimated initial motion vectors to the current block, thereby generating a plurality of error function values; a multiplying unit applying weights to the error function values generated by the second error matching unit according to motion directions; and a minimum value selecting unit determining the initial motion vector having the minimum error function value to be the motion vector of the current block.
14. A computer-readable recording medium having recorded thereon a program for a video motion estimation method, the video motion estimation method comprising: allocating different search points for blocks in each block group of an input video frame; estimating candidate motion vectors for the blocks by measuring the degree of block distortion between frames at each of the allocated search points; and measuring the degree of block distortion to which each of the estimated candidate motion vectors is applied, by applying the estimated candidate motion vectors to a current block, and determining the candidate motion vector having the minimum degree of block distortion to be a final motion vector of the current block.
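The refinement described in claims 7, 8, and 13 (re-measuring the distortion for the current block's own candidate plus the candidates of adjacent blocks, weighting each SAD, and keeping the minimum) can be sketched as below. This is a minimal illustration under stated assumptions: the weighting function and every name here are hypothetical, and the actual weights per claims 9 and 10 would depend on the search-point arrangement and the spatial position of the referenced block.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def select_final_mv(cur, ref, bx, by, bsize, candidates, weight_fn=None):
    """Claims 7-8/13 (sketch): apply each candidate motion vector to the
    current block, scale the resulting SAD by a direction-dependent
    weight, and return the candidate with the minimum weighted SAD."""
    h, w = ref.shape
    cur_block = cur[by:by + bsize, bx:bx + bsize]
    best_mv, best_cost = None, None
    for dx, dy in candidates:
        x, y = bx + dx, by + dy
        if not (0 <= x <= w - bsize and 0 <= y <= h - bsize):
            continue
        cost = sad(cur_block, ref[y:y + bsize, x:x + bsize])
        if weight_fn is not None:
            cost *= weight_fn(dx, dy)  # multiplying unit of claim 13
        if best_cost is None or cost < best_cost:
            best_mv, best_cost = (dx, dy), cost
    return best_mv
```

The `candidates` list would typically hold the current block's estimated vector and those of its adjacent blocks; `weight_fn` stands in for the multiplying unit's direction-dependent weights.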
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2006800296659A CN101243691B (en) | 2006-02-02 | 2006-11-10 | Method and apparatus for estimating motion based on block |
JP2008553143A JP5089610B2 (en) | 2006-02-02 | 2006-11-10 | Block-based motion estimation method and apparatus |
EP06812523A EP1980113A4 (en) | 2006-02-02 | 2006-11-10 | Method and apparatus for block-based motion estimation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2006-0010053 | 2006-02-02 | ||
KR1020060010053A KR101217627B1 (en) | 2006-02-02 | 2006-02-02 | Method and apparatus for estimating motion vector based on block |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007089068A1 true WO2007089068A1 (en) | 2007-08-09 |
Family
ID=38327611
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2006/004689 WO2007089068A1 (en) | 2006-02-02 | 2006-11-10 | Method and apparatus for block-based motion estimation |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1980113A4 (en) |
JP (1) | JP5089610B2 (en) |
KR (1) | KR101217627B1 (en) |
CN (1) | CN101243691B (en) |
WO (1) | WO2007089068A1 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100948164B1 (en) * | 2008-01-03 | 2010-03-16 | 서울시립대학교 산학협력단 | Motion estimation method and motion estimator using a center biased search pattern |
KR100929608B1 (en) * | 2008-01-17 | 2009-12-03 | 한양대학교 산학협력단 | Video motion estimation method and device using fast global search block matching algorithm |
KR101036552B1 (en) * | 2009-11-02 | 2011-05-24 | 중앙대학교 산학협력단 | Apparatus and method for fast motion estimation based on adaptive search range and partial matching error |
KR101678654B1 (en) * | 2010-06-11 | 2016-11-22 | 에스케이 텔레콤주식회사 | Method and Apparatus of adaptive motion vector predictors selection for competition-based motion vector coding, and Recording Medium therefor |
KR101280298B1 (en) * | 2011-07-07 | 2013-07-01 | 에스케이하이닉스 주식회사 | Method for estimating motion vector based on block |
GB201113527D0 (en) * | 2011-08-04 | 2011-09-21 | Imagination Tech Ltd | External vectors in a motion estimation system |
KR101347272B1 (en) * | 2011-11-04 | 2014-01-10 | 연세대학교 산학협력단 | Method and apparatus for inter prediction |
WO2014009864A2 (en) * | 2012-07-09 | 2014-01-16 | Squid Design Systems Pvt Ltd | Programmable variable block size motion estimation processor |
KR102379196B1 (en) * | 2017-05-31 | 2022-03-28 | 삼성전자주식회사 | Processing apparatuses and control methods thereof |
KR102132335B1 (en) * | 2018-09-20 | 2020-07-09 | 주식회사 핀텔 | Object Region Detection Method, Device and Computer Program Thereof |
CN111836055B (en) * | 2020-07-17 | 2023-01-10 | 上海顺久电子科技有限公司 | Image processing device and image block matching method based on image content for MEMC |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001010132A2 (en) * | 1999-08-02 | 2001-02-08 | Koninklijke Philips Electronics N.V. | Motion estimation |
US6289049B1 (en) * | 1997-07-30 | 2001-09-11 | Lg Electronics Inc. | Method for coding motion vector in moving picture |
US6845130B1 (en) * | 2000-10-12 | 2005-01-18 | Lucent Technologies Inc. | Motion estimation and compensation for video compression |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07288814A (en) * | 1994-04-20 | 1995-10-31 | Oki Electric Ind Co Ltd | Detection method for motion vector |
US5537155A (en) * | 1994-04-29 | 1996-07-16 | Motorola, Inc. | Method for estimating motion in a video sequence |
JP2000102016A (en) * | 1998-09-22 | 2000-04-07 | Victor Co Of Japan Ltd | Motion compensation prediction circuit |
JP2001028754A (en) * | 1999-07-14 | 2001-01-30 | Matsushita Electric Ind Co Ltd | Motion vector detection method |
US6654502B1 (en) * | 2000-06-07 | 2003-11-25 | Intel Corporation | Adaptive early exit techniques in image correlation |
CN1159919C (en) * | 2000-07-28 | 2004-07-28 | 三星电子株式会社 | Movement estimating method |
KR100492127B1 (en) * | 2002-02-23 | 2005-06-01 | 삼성전자주식회사 | Apparatus and method of adaptive motion estimation |
KR100474285B1 (en) * | 2002-04-08 | 2005-03-08 | 엘지전자 주식회사 | Method for finding motion vector |
KR20040008359A (en) * | 2002-07-18 | 2004-01-31 | 삼성전자주식회사 | Method for estimating motion using hierarchical search and apparatus thereof and image encoding system using thereof |
JP3715283B2 (en) * | 2003-02-04 | 2005-11-09 | 株式会社半導体理工学研究センター | Image compression encoding method and apparatus for moving images |
US20040258154A1 (en) * | 2003-06-19 | 2004-12-23 | Microsoft Corporation | System and method for multi-stage predictive motion estimation |
JP2005123760A (en) * | 2003-10-15 | 2005-05-12 | Victor Co Of Japan Ltd | Motion vector detecting apparatus and motion vector detection program |
KR100597397B1 (en) * | 2003-11-06 | 2006-07-07 | 삼성전자주식회사 | Method For Encording Moving Picture Using Fast Motion Estimation Algorithm, And Apparatus For The Same |
TWI252695B (en) * | 2004-07-21 | 2006-04-01 | Realtek Semiconductor Corp | Block-based motion estimation method |
JP2006031597A (en) * | 2004-07-21 | 2006-02-02 | Shibasoku:Kk | Motion vector detection device |
2006
- 2006-02-02 KR KR1020060010053A patent/KR101217627B1/en not_active IP Right Cessation
- 2006-11-10 WO PCT/KR2006/004689 patent/WO2007089068A1/en active Application Filing
- 2006-11-10 JP JP2008553143A patent/JP5089610B2/en not_active Expired - Fee Related
- 2006-11-10 CN CN2006800296659A patent/CN101243691B/en not_active Expired - Fee Related
- 2006-11-10 EP EP06812523A patent/EP1980113A4/en not_active Ceased
Non-Patent Citations (1)
Title |
---|
See also references of EP1980113A4 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120307905A1 (en) * | 2009-11-18 | 2012-12-06 | Sk Telecom Co., Ltd. | Method and apparatus for encoding/decoding a motion vector by selecting a set of predicted candidate motion vectors, and method and apparatus for image encoding/decoding using the same |
US9363530B2 (en) * | 2009-11-18 | 2016-06-07 | Sk Telecom Co., Ltd. | Method and apparatus for encoding/decoding a motion vector by selecting a set of predicted candidate motion vectors, and method and apparatus for image encoding/decoding using the same |
US9479793B2 (en) | 2009-11-18 | 2016-10-25 | Sk Telecom Co., Ltd. | Method and apparatus for encoding/decoding a motion vector by selecting a set of predicted candidate motion vectors, and method and apparatus for image encoding/decoding using the same |
CN104469380A (en) * | 2014-12-25 | 2015-03-25 | 中国电子科技集团公司第四十一研究所 | Video image prediction search method based on H.264/AVC standard |
CN104469380B (en) * | 2014-12-25 | 2019-05-03 | 中国电子科技集团公司第四十一研究所 | Video image forecasting search method based on H.264/AVC standard |
Also Published As
Publication number | Publication date |
---|---|
CN101243691B (en) | 2011-04-13 |
KR20070079411A (en) | 2007-08-07 |
CN101243691A (en) | 2008-08-13 |
JP2009525663A (en) | 2009-07-09 |
EP1980113A1 (en) | 2008-10-15 |
KR101217627B1 (en) | 2013-01-02 |
EP1980113A4 (en) | 2009-04-29 |
JP5089610B2 (en) | 2012-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2007089068A1 (en) | Method and apparatus for block-based motion estimation | |
US11240406B2 (en) | Object tracking using momentum and acceleration vectors in a motion estimation system | |
KR100579493B1 (en) | Motion vector generation apparatus and method | |
EP1638339B1 (en) | Motion estimation | |
US20050249288A1 (en) | Adaptive-weighted motion estimation method and frame rate converting apparatus employing the method | |
KR100657261B1 (en) | Method and apparatus for interpolating with adaptive motion compensation | |
JP2000134585A (en) | Motion vector deciding method and method and circuit for number of frames of image signal conversion | |
WO2001050770A2 (en) | Methods and apparatus for motion estimation using neighboring macroblocks | |
US5754237A (en) | Method for determining motion vectors using a hierarchical motion estimation | |
WO2005120075A1 (en) | Method of searching for a global motion vector. | |
KR100565066B1 (en) | Method for interpolating frame with motion compensation by overlapped block motion estimation and frame-rate converter using thereof | |
KR20040105866A (en) | Motion estimation unit and method of estimating a motion vector | |
WO2001049029A1 (en) | Methods and apparatus for motion estimation in compressed domain | |
KR100855976B1 (en) | Frame interpolate device for estimating motion by separating static object and moving object and frame interpolate method using the device | |
EP1897376A2 (en) | Motion estimation | |
EP1420595B1 (en) | Motion vector selection in a video motion estimator based on a preferred reference point | |
Vranješ et al. | Influence of block size on motion vector estimation error in enhancement of video temporal resolution | |
JPH07203452A (en) | Dynamic image coding device | |
IE20020426A1 (en) | Video data half-pel motion estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200680029665.9 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006812523 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008553143 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |