GB2341030A - Video motion estimation

Video motion estimation

Info

Publication number
GB2341030A
Authority
GB
United Kingdom
Prior art keywords
block
search
candidate
boundary
search block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9824312A
Other versions
GB9824312D0 (en)
Inventor
Sang-Hoon Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WiniaDaewoo Co Ltd
Original Assignee
Daewoo Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daewoo Electronics Co Ltd filed Critical Daewoo Electronics Co Ltd
Publication of GB9824312D0 publication Critical patent/GB9824312D0/en
Publication of GB2341030A publication Critical patent/GB2341030A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/223Analysis of motion using block-matching
    • G06T7/238Analysis of motion using block-matching using non-full search, e.g. three-step search
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/20Contour coding, e.g. using detection of edges
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/537Motion estimation other than block-based
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

A method of motion estimation between a current and a previous frame of an image signal divides the current frame into a plurality of equal-sized search blocks. A plurality of search regions are formed within the previous frame (410), wherein each search region corresponds to a search block. Whether each search block is a boundary block or an inner block is determined (440) based on the shape data of the current frame, wherein a boundary block is a search block containing background pixels as well as object pixels and an inner block is a search block containing object pixels only. Based on the determination result, one of a block matching and a boundary matching motion estimation process (420, 430) is selectively performed for each search block to thereby provide a motion vector of each search block.

Description

ADAPTIVE MOTION ESTIMATION METHOD AND APPARATUS

The present invention relates to a motion estimation method and apparatus; and, more particularly, to a method and apparatus for adaptively determining motion vectors with a reduced computational complexity.
In digitally televised systems such as video-telephone, teleconference and high definition television systems, a large amount of digital data is needed to define each video frame signal since a video line signal in the video frame signal comprises a sequence of digital data referred to as pixel values. Since, however, the available frequency bandwidth of a conventional transmission channel is limited, in order to transmit the large amount of digital data therethrough, it is necessary to compress or reduce the volume of data through the use of various data compression techniques, especially in the case of such low bit-rate video signal encoders as video-telephone and teleconference systems.
One such technique for encoding video signals for a low bit-rate encoding system is the so-called object-oriented analysis-synthesis coding technique, wherein an input video image is divided into objects, and three sets of parameters for defining the motion, contour and pixel data of each object are processed through different encoding channels.
One example of such an object-oriented coding scheme is the so-called MPEG (Moving Picture Experts Group) phase 4 (MPEG-4), which is designed to provide an audio-visual coding standard for allowing content-based interactivity, improved coding efficiency and/or universal accessibility in such applications as low bit-rate communication, interactive multimedia (e.g., games, interactive TV, etc.) and area surveillance (see, for instance, MPEG-4 Video Verification Model Version 7.0, International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG11 MPEG97/N1642, April 1997).
According to the MPEG-4, an input video image is divided into a plurality of video object planes (VOP's), which correspond to entities in a bit stream that a user can access and manipulate. A VOP can be referred to as an object and represented by a bounding rectangle whose width and height may be the smallest multiples of 16 pixels (a macroblock size) surrounding each object so that the encoder may process the input video image on a VOP-by-VOP basis, i.e., an object-by-object basis.
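For illustration only, the following Python sketch computes such a bounding rectangle by rounding the object's extent up to multiples of the 16-pixel macroblock size; the function name and the NumPy shape-mask representation are assumptions, not part of the patent.

```python
import numpy as np

def vop_bounding_rect(shape_mask):
    # Object pixels carry non-zero labels in the binary shape mask (assumption).
    ys, xs = np.nonzero(shape_mask)
    top, left = ys.min(), xs.min()
    height = ys.max() - top + 1
    width = xs.max() - left + 1
    # Round each dimension up to the smallest multiple of 16 (macroblock size).
    height = -(-height // 16) * 16
    width = -(-width // 16) * 16
    return top, left, height, width
```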
A VOP disclosed in the MPEG-4 includes shape information and texture information for an object therein which are represented by a plurality of macroblocks on the VOP, each of the macroblocks having, e.g., 16 x 16 pixels, wherein the shape information is represented in binary shape signals and the texture information includes luminance and chrominance data.
Since the texture information for two input video images sequentially received has temporal redundancies, it is desirable to reduce the temporal redundancies therein by using a motion estimation and compensation technique in order to efficiently encode the texture information.
In order to perform the motion estimation and compensation and a discrete cosine transformation, a reference VOP, e.g., a previous VOP, should be padded by a progressive image padding technique, i.e., a conventional repetitive padding technique. In principle, the repetitive padding technique fills the transparent area outside the object of the VOP by repeating boundary pixels of the object, wherein the boundary pixels are located on the contour of the object. If a transparent pixel in a transparent area outside the object can be filled by the repetition of more than one boundary pixel, the average of the repeated values is taken as the padded value. This progressive padding process is generally divided into three steps: a horizontal repetitive padding, a vertical repetitive padding and an exterior padding (see, MPEG-4 Video Verification Model Version 7.0).
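As a rough sketch of the horizontal padding step only, assuming the texture and binary shape data are NumPy arrays (the helper name and the averaging of two-sided runs follow the description above; this is not the Verification Model's literal procedure):

```python
import numpy as np

def horizontal_repetitive_padding(texture, mask):
    padded = texture.astype(float).copy()
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])          # object pixel columns in this row
        if xs.size == 0:
            continue                          # fully transparent rows are left to the vertical pass
        for x in np.flatnonzero(mask[y] == 0):
            left = xs[xs < x]
            right = xs[xs > x]
            if left.size and right.size:      # reachable from two boundary pixels: take the average
                padded[y, x] = (float(texture[y, left[-1]]) + float(texture[y, right[0]])) / 2
            elif left.size:                   # repeat the nearest boundary pixel from the left
                padded[y, x] = texture[y, left[-1]]
            else:                             # repeat the nearest boundary pixel from the right
                padded[y, x] = texture[y, right[0]]
    return padded
```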
Meanwhile, according to a conventional block matching algorithm, the current VOP is divided into a plurality of search blocks. The size of a search block typically ranges between 8x8 and 32x32 pixels. To determine a motion vector for a search block in the current VOP, a similarity measurement is performed between the search block and each of a plurality of equal-sized candidate blocks included in a generally larger search region within the previous VOP. An error function such as the mean absolute error or mean square error is used to carry out the similarity measurement between the search block and each of the candidate blocks. A motion vector, by definition, represents the displacement between the search block and the candidate block which yields a minimum error function.
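The following sketch illustrates such a full-search block matching step in Python, with the mean absolute error as the similarity measure; the function name, the NumPy representation and the convention of returning the displacement relative to the search region origin are assumptions for illustration.

```python
import numpy as np

def block_match(search_block, search_region, block=16):
    sb = search_block.astype(float)
    best_err, best_disp = float('inf'), (0, 0)
    # Slide the candidate block over every position of the search region.
    for dy in range(search_region.shape[0] - block + 1):
        for dx in range(search_region.shape[1] - block + 1):
            candidate = search_region[dy:dy + block, dx:dx + block].astype(float)
            err = np.mean(np.abs(sb - candidate))   # mean absolute error
            if err < best_err:
                best_err, best_disp = err, (dy, dx)
    # The displacement of the minimum-error candidate block serves as the
    # motion vector (here expressed relative to the search region origin).
    return best_disp, best_err
```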
However, in the conventional block matching algorithm, the computational process is rather complex, especially since error functions are computed for all pixels in a search block. In order to implement a real time process efficiently, therefore, it is necessary to reduce the computational complexity in the conventional block matching algorithm.
It is, therefore, a primary object of the present invention to provide a method and apparatus for adaptively determining motion vectors with a reduced computational complexity.
In accordance with one aspect of the present invention, there is provided a method for motion estimating between a current and a previous frame of an image signal, for use in an encoder, wherein texture data of the current frame, texture data of the previous frame and shape data of the current frame are provided to the encoder, the method comprising the steps of: (a) dividing the current frame into a plurality of equal-sized search blocks; (b) forming a plurality of search regions within the previous frame, wherein each search region corresponds to a search block; (c) determining whether each search block is a boundary block or an inner block based on the shape data of the current frame, wherein the boundary block is a search block containing background pixels and object pixels and the inner block is a search block containing the object pixels only; and (d) selectively performing one of a block matching and a boundary matching motion estimation process for each search block based on the determination result at step (c) to thereby provide a motion vector of said each search block.
In accordance with another aspect of the present invention, there is provided an apparatus for motion estimating between a current and a previous frame of an image signal, for use in an encoder, wherein texture data of the current frame, texture data of the previous frame and shape data of the current frame are provided to the encoder, the apparatus comprising: a block for dividing the current frame into a plurality of equal-sized search blocks; a block for forming a plurality of search regions within the previous frame, wherein each search region corresponds to a search block; a block for determining whether each search block is a boundary block or an inner block based on the shape data of the current frame to thereby generate a control signal, wherein the boundary block is a search block containing background pixels and object pixels and the inner block is a search block containing the object pixels only; a block for performing a boundary matching motion estimation process based on the control signal; a block for performing a block matching motion estimation process based on the control signal; and a block for providing a motion vector of each search block by using outputs from the boundary matching motion estimation block and the block matching motion estimation block based on the control signal.
The above and other objects and features of the present invention will become apparent from the following description of preferred embodiments given in conjunction with the accompanying drawings, in which:
Fig. 1 represents a block diagram of an encoding apparatus in accordance with the present invention;
Fig. 2 illustrates a detailed diagram of a motion estimation block shown in Fig. 1;
Fig. 3 shows a detailed diagram of a block matching motion estimation block shown in Fig. 2;
Fig. 4 demonstrates a detailed diagram of a boundary matching motion estimation block shown in Fig. 2; and
Fig. 5 exemplifies a boundary matching process in accordance with a preferred embodiment of the present invention.
Referring to Fig. 1, there is illustrated a block diagram of an encoding apparatus 1 in accordance with the present invention.
Current frame texture data containing one or more current VOP's is provided to a search block formation block 100. The search block formation block 100 divides a current VOP in the current frame into a plurality of equal-sized search blocks, to thereby provide search block data to a subtraction block 200 and a motion estimation (ME) block 400, wherein the size of the search blocks is, for example, 16 x 16 pixels.
The ME block 400 is provided with previous frame texture data containing one or more previous VOP's from a frame memory 300, wherein boundary blocks in a previous frame have been padded by a padding block 1100 after having been reconstructed at an addition block 1000.
Thereafter, the ME block 400 forms a plurality of search regions in the previous VOP, each search region corresponding to a search block. Meanwhile, the ME block 400 is provided with current frame shape data, wherein each pixel in the current frame has a label identifying the region it belongs to. For instance, a pixel in a background is labeled with '0' and each pixel in an object is labeled with a non-zero value.
The ME block 400 performs an adaptive motion estimation process on each of the search blocks to thereby determine motion vectors corresponding to the search blocks. That is, if a search block is a boundary block, a motion vector corresponding to the search block is determined by the conventional block matching motion estimation scheme, the boundary block consisting of pixels belonging to an object as well as pixels belonging to a background; and if the search block is an inner block, a motion vector corresponding to the search block is determined by a boundary matching motion estimation scheme, the inner block consisting of pixels belonging to the object only. The detailed motion estimation schemes in accordance with the present invention are described with reference to Figs. 2 to 4. The motion vectors are provided to a motion compensation (MC) block 500 and a transmitter (not shown).
The MC block 500 is provided with the motion vectors from the ME block 400 and pixel data of corresponding optimum candidate blocks from the frame memory 300. Thereafter, the MC block 500 performs a motion compensation process on the optimum candidate blocks by using the corresponding motion vectors to thereby generate motion compensated optimum candidate blocks and provide same to the subtraction block 200 and the addition block 1000.
The subtraction block 200 subtracts the motion compensated optimum candidate blocks from the corresponding search blocks to thereby provide a subtraction result, namely, an error signal, to a padding block 600. The padding block 600 is also provided with the current frame shape data. By employing the conventional macroblock based padding scheme, the error signal is padded by using the current frame shape data. The padded error signal is applied to a discrete cosine transform and quantization (DCT & Q) block 700.
The DCT & Q block 700 performs a discrete cosine transform and quantization process on the padded error signal and provides a set of quantized discrete cosine transform coefficients to a statistical encoder 800 and an inverse discrete cosine transform and inverse quantization (IDCT & IQ) block 900. The statistical encoder 800 statistically encodes the set of quantized discrete cosine transform coefficients to thereby provide the statistically encoded signal to the transmitter.
The IDCT & IQ block 900 performs an inverse discrete cosine transform and inverse quantization on the set of quantized discrete cosine transform coefficients and sends the restored error signal to the addition block 1000. The addition block 1000 adds the restored error signal to the motion compensated optimum candidate blocks from the MC block 500 to thereby generate reconstructed search blocks.
The padding block 1100 is provided with the reconstructed search blocks and the current frame shape data and pads the reconstructed search blocks based on the conventional macroblock based padding scheme. The padded reconstructed search blocks are stored at the frame memory 300 as previous frame texture data for a next frame.
Referring to Fig. 2, there is illustrated a detailed block diagram of the ME block 400 shown in Fig. 1, wherein the ME block 400 includes a search region formation block 410, a block matching motion estimation (ME) block 420, a boundary matching motion estimation (ME) block 430, a motion estimation (ME) mode selector 440 and a multiplexor (MUX) 450.
The previous frame texture data is provided to the search region formation block 410 from the frame memory 300. The search region formation block 410 defines search regions corresponding to the search blocks with a certain size, shape and search pattern, whereby the motion estimation process for the search blocks is carried out. After the search region is determined at the search region formation block 410, the search region data is applied to the block matching ME block 420 and the boundary matching ME block 430.
In the meantime, the current frame shape data is applied to the ME mode selector 440. The ME mode selector 440 determines whether a search block is a boundary block or an inner block based on the current frame shape data. That is, if an area in the current frame shape data which is located at the same position as that of a search block and whose size is the same as that of the search block contains background pixels as well as object pixels, the search block is determined as a boundary block; and if the area contains object pixels only, the search block is determined as an inner block.
When a search block is determined as a boundary block, the ME mode selector 440 provides a control signal of a first level to the block matching ME block 420, the boundary matching ME block 430 and the MUX 450, wherein the control signal of the first level enables the block matching ME block 420, disables the boundary matching ME block 430 and makes the MUX 450 select an input from the block matching ME block 420 to provide same to the MC block 500 and the transmitter. And, when the search block is determined as an inner block, the ME mode selector 440 provides a control signal of a second level to the block matching ME block 420, the boundary matching ME block 430 and the MUX 450, wherein the control signal of the second level enables the boundary matching ME block 430, disables the block matching ME block 420 and makes the MUX 450 select an input from the boundary matching ME block 430 to provide same to the MC block 500 and the transmitter.
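A minimal sketch of this dispatch, assuming the block_match helper sketched earlier and the boundary_match helper sketched further below; the driver name and the zero-label background convention follow the description, and the two-level control signal is modeled here as a plain branch rather than hardware enable lines.

```python
import numpy as np

def select_and_estimate(search_block, shape_block, search_region):
    # Boundary block: the co-located shape area mixes background (zero)
    # and object (non-zero) labels -> conventional block matching.
    if np.any(shape_block == 0):
        return block_match(search_block, search_region)[0]
    # Inner block: object pixels only -> the cheaper boundary matching.
    return boundary_match(search_block, search_region)
```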
The block matching ME block 420 and the boundary matching ME block 430 are provided with the search region data from the search region formation block 410, the search block data from the search block formation block 100 and the control signal from the ME mode selector 440. If the block matching ME block 420 is enabled by the control signal of the first level, it performs a conventional block matching process on a search block and provides a motion vector corresponding to the search block to the MUX 450; and if the boundary matching ME block 430 is enabled by the control signal of the second level, it performs a boundary matching process on a search block and provides a motion vector corresponding to the search block to the MUX 450. The detailed operations of the block matching ME block 420 and the boundary matching ME block 430 will be described with reference to Figs. 3 and 4, respectively.
Referring to Fig. 3, the detailed structure of the block matching ME block 420 is depicted. A candidate block formation block 422 enabled by the control signal of the first level from the ME mode selector 440 forms a multiplicity of, e.g., M number of, equal-sized candidate blocks in a search region, M being a positive integer representing the total number of the candidate blocks formed, wherein the search region data is provided from the search region formation block 410 shown in Fig. 2 and the size of the candidate blocks is the same as that of the search blocks. Pixel data of an ith candidate block is provided to an ith block matching error function calculation block 424-i, i being an integer ranging from 1 to M.
Moreover, the candidate block formation block 422 determines displacement vectors from the search block to all the candidate blocks, i.e., DV1 to DVM, and provides same to a first selector 428. And, the search block data is applied to each of block matching error function calculation blocks 424-1 to 424-M, wherein only 3 blocks are depicted for the sake of simplicity. At the ith block matching error function calculation block 424-i, a block matching error function between the search block and the ith candidate block is calculated. The block matching error function is, for example, the mean absolute error or mean square error between object pixels in the search block and corresponding pixels in the ith candidate block, wherein the object pixels in the search block are determined based on the current frame shape data. The calculated block matching error functions are provided to a first comparator 426.
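A sketch of one such error calculation, restricted to object pixels via the shape data as described above; the names and the boolean-mask selection are assumptions, and at least one object pixel is assumed present.

```python
import numpy as np

def block_matching_error(search_block, candidate_block, shape_block):
    obj = shape_block != 0                    # object pixels of the search block
    diffs = np.abs(search_block[obj].astype(float) - candidate_block[obj])
    return np.mean(diffs)                     # mean absolute error over object pixels only
```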
The first comparator 426 compares the block matching error functions with each other and selects a minimum block matching error function among them to thereby provide a first indication signal representing the minimum block matching error function to the first selector 428. The first selector 428, in response to the first indication signal, provides a displacement vector corresponding to the minimum block matching error function to the MUX 450 as a motion vector of the search block.
Fig. 4 provides a detailed block diagram of the boundary matching ME block 430 shown in Fig. 2. The boundary matching scheme is applicable to inner blocks since adjacent pixel values in an image signal are highly correlated with each other.
An expanded candidate block formation block 432 enabled by the control signal of the second level from the ME mode selector 440 forms a multiplicity of, e.g., M number of, equal-sized expanded candidate blocks in the search region, wherein the search region data is provided from the search region formation block 410 shown in Fig. 2 and an expanded candidate block is constructed from a candidate block and a border with a width of 1 pixel around the candidate block.
That is, since the size of the search blocks and the candidate blocks is 16 x 16 pixels, the size of the expanded candidate blocks is 18 x 18 pixels.
Moreover, the expanded candidate block formation block 432 determines displacement vectors from the search block to all the expanded candidate blocks, i.e., DV1 to DVM, and provides same to a second selector 438. And, the search block data is applied to each of boundary matching error function calculation blocks 434-1 to 434-M, wherein only 3 blocks are depicted for the sake of simplicity. At each of the boundary matching error function calculation blocks 434-1 to 434-M, a boundary matching error function is calculated, wherein the boundary matching error function calculation process is described with reference to Fig. 5.
An exemplary search block SB is overlapped with an expanded candidate block as is shown in Fig. 5, so that the location of the search block completely coincides with that of the candidate block contained in the expanded candidate block. Although the exemplary search block SB has 16 x 16 pixels, only 16 pixels are depicted for the sake of simplicity.
Then, in accordance with a preferred embodiment of the present invention, differences between pixel values of the search block and the expanded candidate block are detected for top rows and left-most columns and a boundary matching error function is calculated as follows:
E = Σ_{i=1}^{16} ( |P_{1,i} - T_i| + |P_{i,1} - L_i| )

wherein E denotes a boundary matching error function between the search block and the expanded candidate block; P_{1,i} denotes an ith pixel value in the top row of the search block; T_i denotes an ith pixel value in the top row of the expanded candidate block; P_{i,1} denotes an ith pixel value in the left-most column of the search block; and L_i denotes an ith pixel value in the left-most column of the expanded candidate block.
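A sketch of this first-embodiment search in Python: each 18 x 18 expanded candidate block is sliced from the search region, and only the search block's top row and left-most column are compared against the border pixels directly above and to the left of the candidate block. Helper names, the array layout and the displacement convention are assumptions.

```python
import numpy as np

def boundary_match(search_block, search_region, block=16):
    sb = search_block.astype(float)
    best_err, best_disp = float('inf'), (1, 1)
    # Candidate positions must leave room for the 1-pixel border.
    for dy in range(1, search_region.shape[0] - block):
        for dx in range(1, search_region.shape[1] - block):
            ecb = search_region[dy - 1:dy + block + 1,
                                dx - 1:dx + block + 1].astype(float)
            err = (np.sum(np.abs(sb[0, :] - ecb[0, 1:-1])) +    # |P_{1,i} - T_i|
                   np.sum(np.abs(sb[:, 0] - ecb[1:-1, 0])))     # |P_{i,1} - L_i|
            if err < best_err:
                best_err, best_disp = err, (dy, dx)
    # Displacement of the candidate inside the minimum-error expanded block,
    # expressed relative to the search region origin.
    return best_disp
```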
In accordance with another preferred embodiment of the present invention, differences between pixel values of the search block and the expanded candidate block are detected for top rows, bottom rows, left-most columns and right-most columns so that a boundary matching error function can be more accurate, and a boundary matching error function is calculated as follows:
E = Σ_{i=1}^{16} ( |P_{1,i} - T_i| + |P_{16,i} - B_i| + |P_{i,1} - L_i| + |P_{i,16} - R_i| )

wherein P_{16,i} denotes an ith pixel value in the bottom row of the search block; B_i denotes an ith pixel value in the bottom row of the expanded candidate block; P_{i,16} denotes an ith pixel value in the right-most column of the search block; and R_i denotes an ith pixel value in the right-most column of the expanded candidate block. The calculated boundary matching error functions are provided to a second comparator 436.
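For illustration, this four-sided error of the second embodiment could replace the per-candidate error in the boundary_match sketch above; the same assumed 18 x 18 array layout applies, and this is a sketch rather than the patent's literal calculation.

```python
import numpy as np

def boundary_error_four_sided(sb, ecb):
    sb = sb.astype(float)
    ecb = ecb.astype(float)
    return (np.sum(np.abs(sb[0, :]  - ecb[0, 1:-1])) +    # top row vs. border above
            np.sum(np.abs(sb[-1, :] - ecb[-1, 1:-1])) +   # bottom row vs. border below
            np.sum(np.abs(sb[:, 0]  - ecb[1:-1, 0])) +    # left column vs. left border
            np.sum(np.abs(sb[:, -1] - ecb[1:-1, -1])))    # right column vs. right border
```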
The second comparator 436 compares the boundary matching error functions with each other and selects a minimum boundary matching error function among them to thereby provide a second indication signal representing the minimum boundary matching error function to the second selector 438. An optimum candidate block for this case is a candidate block contained in an expanded candidate block corresponding to the minimum boundary matching error function. The second selector 438, in response to the second indication signal, provides a displacement vector corresponding to the minimum boundary matching error function to the MUX 450 as a motion vector of the search block.
As demonstrated above, the block matching process or the boundary matching process is selectively applied to each search block to thereby considerably reduce the computational complexity. For instance, when the size of the search block is 16 x 16 pixels, the differences are calculated for 16 x 16 = 256 pairs of pixels in the block matching scheme. However, in the boundary matching scheme of the first embodiment of the present invention, the differences are calculated only for 16 x 2 = 32 pairs of pixels, thereby reducing the computational complexity by a factor of 8.
While the present invention has been described with respect to the particular embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the scope of the invention as defined in the following claims.

Claims (19)

Claims:
1. A method for motion estimating between a current and a previous frame of an image signal, for use in an encoder, wherein texture data of the current frame, texture data of the previous frame and shape data of the current frame are provided to the encoder, the method comprising the steps of:
(a) dividing the current frame into a plurality of equal-sized search blocks; (b) forming a plurality of search regions within the previous frame, wherein each search region corresponds to a search block; (c) determining whether each search block is a boundary block or an inner block based on the shape data of the current frame, wherein the boundary block is a search block containing background pixels as well as object pixels and the inner block is a search block containing the object pixels only; and (d) selectively performing one of a block matching and a boundary matching motion estimation process on each search block based on the determination result at step (c) to thereby provide a motion vector of said each search block.
2. The method according to claim 1, wherein the step (d) includes the steps of:
(d1) performing the boundary matching motion estimation process on a search block if the search block is determined as the inner block; and (d2) performing the block matching motion estimation process on a search block if the search block is determined as the boundary block.
3. The method according to claim 2, wherein the step (d1) contains the steps of:
(d11) forming a multiplicity of expanded candidate blocks within a search region corresponding to the search block, wherein each of the expanded candidate blocks is constructed from a candidate block and a border with a width of 1 pixel around the candidate block and the candidate block has a same size as the search block; (d12) generating a displacement of each expanded candidate block from the search block as a displacement vector of said each expanded candidate block; (d13) calculating boundary matching error functions between the search block and corresponding expanded candidate blocks; (d14) comparing the boundary matching error functions with each other to thereby select a minimum boundary matching error function; and (d15) providing a displacement vector corresponding to the minimum boundary matching error function as the motion vector of the search block.
4. The method according to claim 3, wherein the step (d13) has therein the steps of:
(d131) overlapping the search block with an expanded candidate block so that the location of the search block completely coincides with that of the candidate block contained in the expanded candidate block; (d132) calculating differences between pixels of predetermined regions in the search block and corresponding pixels of corresponding regions in the expanded candidate block; and (d133) summing up the calculated differences to thereby provide a boundary matching error function between the search block and the expanded candidate block.
5. The method according to claim 4, wherein the predetermined regions in the search block are a top row and a left column of the search block; and the corresponding regions in the expanded candidate block are a top row and a left column of the expanded candidate block.
6. The method according to claim 4, wherein the predetermined regions in the search blocks are a top row, a bottom row, a left column and a right column of the search block; and the corresponding regions in the expanded candidate block are a top row, a bottom row, a left column and a right column of the expanded candidate block.
7. The method according to claim 2, wherein the step (d2) contains the steps of:
(d21) forming a multiplicity of candidate blocks within a search region corresponding to the search block; (d22) generating a displacement of each candidate block from the search block as a displacement vector of said each candidate block; (d23) calculating block matching error functions between the search block and corresponding candidate blocks; (d24) comparing the block matching error functions with each other to thereby select a minimum block matching error function; and (d25) providing a displacement vector corresponding to the minimum block matching error function as the motion vector of the search block.
8. The method according to claim 7, wherein the step (d23) has therein the steps of:
(d231) overlapping the search block with a candidate block; (d232) calculating differences between object pixels in the search block and corresponding pixels in the candidate block; and (d233) summing up the calculated differences to thereby provide a block matching error function between the search block and the candidate block.
9. An apparatus for motion estimating between a current and a previous frame of an image signal, for use in an encoder, wherein texture data of the current frame, texture data of the previous frame and shape data of the current frame are provided to the encoder, the apparatus comprising:
means for dividing the current frame into a plurality of equal-sized search blocks; means for forming a plurality of search regions within the previous frame, wherein each search region corresponds to a search block; means for determining whether each search block is a boundary block or an inner block based on the shape data of the current frame to thereby generate a control signal, wherein the boundary block is a search block containing background pixels as well as object pixels and the inner block is a search block containing object pixels only; means for performing a boundary matching motion estimation process based on the control signal; means for performing a block matching motion estimation process based on the control signal; and means for providing a motion vector of each search block by using outputs from the boundary matching motion estimation means and the block matching motion estimation means based on the control signal.
10. The apparatus according to claim 9, wherein the determining means generates a control signal of a first level if a search block is determined as an inner block; and generates a control signal of a second level if the search block is determined as a boundary block.
11. The apparatus according to claim 10, wherein the control signal of the first level enables the boundary matching motion estimation means, disables the block matching motion estimation means and makes the providing means provide an output from the boundary matching motion estimation means; and the control signal of the second level disables the boundary matching motion estimation means, enables the block matching motion estimation means and makes the providing means provide an output from the block matching motion estimation means.
12. The apparatus according to claim 11, wherein the boundary matching motion estimation means includes:
means for forming a multiplicity of expanded candidate blocks within a search region corresponding to the search block, wherein each of the expanded candidate blocks is constructed from a candidate block and a border with a width of 1 pixel around the candidate block and the candidate block has a same size as the search block; means for generating a displacement of each expanded candidate block from the search block as a displacement vector of said each expanded candidate block; means for calculating boundary matching error functions between the search block and corresponding expanded candidate blocks; means for comparing boundary matching error functions with each other to thereby select a minimum boundary matching error function; and means for providing a displacement vector corresponding to the minimum boundary matching error function as a motion vector of the search block.
13. The apparatus according to claim 12, wherein the boundary matching error function calculating means contains: means for overlapping the search block with an expanded candidate block so that the location of the search block completely coincides with that of the candidate block contained in the expanded candidate block; means for calculating differences between pixels of predetermined regions in the search block and corresponding pixels of corresponding regions in the expanded candidate block; and means for summing up the calculated differences to thereby provide a boundary matching error function between the search block and the expanded candidate block.
14. The apparatus according to claim 13, wherein the predetermined regions in the search block are a top row and a left column of the search block; and the corresponding regions in the expanded candidate block are a top row and a left column of the expanded candidate block.
15. The apparatus according to claim 13, wherein the predetermined regions in the search blocks are a top row, a bottom row, a left column and a right column of the search block; and the corresponding regions in the expanded candidate block are a top row, a bottom row, a left column and a right column of the expanded candidate block.
16. The apparatus according to claim 13, wherein the block matching motion estimation means includes: means for forming a multiplicity of candidate blocks within a search region corresponding to the search block; means for generating a displacement of each candidate block from the search block as a displacement vector of said each candidate block; means for calculating block matching error functions between the search block and corresponding candidate blocks; means for comparing block matching error functions with each other to thereby select a minimum block matching error function; and means for providing a displacement vector corresponding to the minimum block matching error function as a motion vector of the search block.
17. The apparatus according to claim 16, wherein the block matching error function calculating means contains:
means for overlapping the search block with a candidate block; means for calculating differences between object pixels in the search block and corresponding pixels in the candidate block; and means for summing up the calculated differences to thereby provide a block matching error function between the search block and the candidate block.
18. A method for motion estimating between a current and a previous frame of an image signal, for use in an encoder, substantially as herein described with reference to or as shown in figures 1 to 5 of the accompanying drawings.
19. An apparatus for motion estimating between a current and a previous frame of an image signal, for use in an encoder, constructed and arranged substantially as herein described with reference to or as shown in figures 1 to 5 of the accompanying drawings.
GB9824312A 1998-08-31 1998-11-05 Video motion estimation Withdrawn GB2341030A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1019980035580A KR100303086B1 (en) 1998-08-31 1998-08-31 Adaptive motion estimating apparatus

Publications (2)

Publication Number Publication Date
GB9824312D0 GB9824312D0 (en) 1998-12-30
GB2341030A true GB2341030A (en) 2000-03-01

Family

ID=19548936

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9824312A Withdrawn GB2341030A (en) 1998-08-31 1998-11-05 Video motion estimation

Country Status (3)

Country Link
JP (1) JP2000078583A (en)
KR (1) KR100303086B1 (en)
GB (1) GB2341030A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8253825B2 (en) 2004-05-05 2012-08-28 France/Ecole Normale Superieure De Cachan Image data processing method by reducing image noise, and camera integrating means for implementing said method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009060153A (en) * 2005-12-21 2009-03-19 Panasonic Corp Intra prediction mode decision device, method, and program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2231750A (en) * 1989-04-27 1990-11-21 Sony Corp Motion dependent video signal processing
US5790207A (en) * 1996-03-14 1998-08-04 Daewoo Electronics, Co., Ltd. Motion compensation method for use in an image encoding system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2231750A (en) * 1989-04-27 1990-11-21 Sony Corp Motion dependent video signal processing
US5790207A (en) * 1996-03-14 1998-08-04 Daewoo Electronics, Co., Ltd. Motion compensation method for use in an image encoding system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8253825B2 (en) 2004-05-05 2012-08-28 France/Ecole Normale Superieure De Cachan Image data processing method by reducing image noise, and camera integrating means for implementing said method
US8427559B2 2004-05-05 2013-04-23 France/Ecole Normale Superieure de Cachan Image data processing method by reducing image noise, and camera integrating means for implementing said method

Also Published As

Publication number Publication date
KR20000015559A (en) 2000-03-15
KR100303086B1 (en) 2001-09-24
GB9824312D0 (en) 1998-12-30
JP2000078583A (en) 2000-03-14

Similar Documents

Publication Publication Date Title
USRE41383E1 (en) Method and apparatus for encoding interlaced macroblock texture information
US5973743A (en) Mode coding method and apparatus for use in an interlaced shape coder
US6094225A (en) Method and apparatus for encoding mode signals for use in a binary shape coder
AU748276B2 (en) Method and apparatus for encoding a motion vector of a binary shape signal
US6404814B1 (en) Transcoding method and transcoder for transcoding a predictively-coded object-based picture signal to a predictively-coded block-based picture signal
US5978048A (en) Method and apparatus for encoding a motion vector based on the number of valid reference motion vectors
EP1016286A1 (en) Method for generating sprites for object-based coding systems using masks and rounding average
GB2328337A (en) Encoding motion vectors
US6069976A (en) Apparatus and method for adaptively coding an image signal
US6133955A (en) Method for encoding a binary shape signal
US5978031A (en) Method and apparatus for determining an optimum grid for use in a block-based video signal coding system
US6020933A (en) Method and apparatus for encoding a motion vector
GB2341030A (en) Video motion estimation
EP0923250A1 (en) Method and apparatus for adaptively encoding a binary shape signal
KR100283579B1 (en) Method and apparatus for coding mode signals in interlaced shape coding technique
KR20000021867A (en) Method for encoding motion vector of binary form signal
MXPA00008676A (en) Method and apparatus for padding interlaced macroblock texture information

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)