CN1127969A - Method and apparatus for detecting motion vectors in a frame decimating video encoder - Google Patents

Method and apparatus for detecting motion vectors in a frame decimating video encoder

Info

Publication number
CN1127969A
CN1127969A CN95101342A
Authority
CN
China
Prior art keywords
frame
point
motion vector
search
search point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 95101342
Other languages
Chinese (zh)
Inventor
丁海默
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WiniaDaewoo Co Ltd
Original Assignee
Daewoo Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daewoo Electronics Co Ltd filed Critical Daewoo Electronics Co Ltd
Priority to CN 95101342 priority Critical patent/CN1127969A/en
Publication of CN1127969A publication Critical patent/CN1127969A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An improved motion estimation method determines a group of target motion vectors between a current frame and a previously selected frame separated from it by N skipped frames. Using both the skipped frames and the current frame, a series of motion vectors is obtained for a search point in the previously selected frame. The series of motion vectors is accumulated to yield a target motion vector representing the displacement between the search point and its corresponding best matching point in the current frame. The process is repeated until the group of target motion vectors for all of the search points in the previously selected frame has been detected.

Description

Method and apparatus for detecting motion vectors in a frame decimating video encoder
The present invention relates to a method and apparatus for encoding a video signal; and, more particularly, to a method and apparatus for estimating and compensating motion vectors in a video encoder which employs a frame decimation technique together with frame-by-frame motion estimation and data compression.
As is well known, transmission of digitized video signals can attain video images of a much higher quality than the transmission of analog signals. When an image signal comprising a sequence of image "frames" is expressed in digital form, a substantial amount of data is generated for transmission, especially in the case of a high definition television system. Since, however, the available frequency bandwidth of a conventional transmission channel is limited, in order to transmit the large amount of digital data through the limited channel bandwidth, it is inevitable to compress or reduce the volume of the transmitted data. Among various video compression techniques, the so-called hybrid coding technique, which combines temporal and spatial compression techniques with a statistical coding technique, is known to be most effective.
Most hybrid coding techniques employ motion compensated DPCM (differential pulse code modulation), two-dimensional DCT (discrete cosine transform), quantization of DCT coefficients, and VLC (variable length coding). Motion compensated DPCM is a process of determining the movement of an object between a current frame and its previous frame, and predicting the current frame according to the motion flow of the object to produce a differential signal representing the difference between the current frame and its prediction. This method is described, for example, in Staffan Ericsson, "Fixed and Adaptive Predictors for Hybrid Predictive/Transform Coding", IEEE Transactions on Communications, COM-33, No. 12 (December 1985); and in Ninomiya and Ohtsuka, "A Motion-Compensated Interframe Coding Scheme for Television Pictures", IEEE Transactions on Communications, COM-30, No. 1 (January 1982).
Specifically, in motion compensated DPCM, current frame data is predicted from corresponding previous frame data based on an estimation of the motion between the current and the previous frames. Such estimated motion may be described in terms of two-dimensional motion vectors representing the displacement of pixels between the previous and the current frames.
There have been many schemes for estimating the displacement of pixels of an object. Generally, they can be classified into two types: block-by-block estimation and pixel-by-pixel estimation.
In block-by-block motion estimation, a block in a current frame is compared with blocks in its previous frame until a best match is determined, from which an interframe displacement vector (indicating how much the block of pixels has moved between frames) for the whole block can be estimated for the current frame being transmitted. However, in block-by-block motion estimation, blocking effects may occur at the boundary of a block in the motion compensation process; and if all the pixels in the block do not move in the same way, poor estimates may result, thereby lowering the overall coding efficiency.
Using a pixel-by-pixel approach, on the other hand, a displacement is determined for each and every pixel. This technique allows a more exact estimation of the pixel values and has the ability to easily handle scale changes (e.g., zooming, movement perpendicular to the image plane). However, since a motion vector is determined for each and every pixel in the pixel-by-pixel approach, it is virtually impossible to transmit all of the motion vectors to a receiver. Therefore, motion vectors for a set of selected pixels, i.e., feature points, are transmitted to the receiver, wherein each of the feature points is defined as a position of a pixel capable of representing its neighboring pixels, so that motion vectors for non-feature points can be recovered from those of the feature points. The present invention is primarily concerned with motion estimation using feature points. In an encoder which adopts the motion estimation based on feature points, a number of feature points are first selected from all of the pixels contained in the previous frame. Then, motion vectors for the selected feature points are determined, wherein each of the motion vectors represents a spatial displacement between a feature point in the previous frame and a corresponding matching point, i.e., a most similar pixel, in the current frame. Specifically, the matching point for each of the feature points is searched in a search region within a reference frame, e.g., a preceding frame, wherein the search region is defined as a region of a predetermined area which encompasses the position of the corresponding feature point.
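Purely for illustration (the function name, block size and search range below are assumptions, not values taken from the patent), a minimal sketch of such a search for a best matching point by full-search block matching could look as follows:

```python
import numpy as np

def block_match(ref_frame, target_frame, point, block=8, search=7):
    """Full-search block matching: take the block of ref_frame centred on
    `point` and find its best match in target_frame within +/- `search`
    pixels, using the sum of absolute differences (SAD) as the criterion.
    Returns the motion vector (dy, dx). Block size and search range are
    illustrative choices, not values fixed by the patent."""
    y, x = point
    h = block // 2
    ref = ref_frame[y - h:y + h, x - h:x + h].astype(np.int32)
    if ref.shape != (block, block):                 # point too close to the frame border
        return 0, 0
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = target_frame[cy - h:cy + h, cx - h:cx + h].astype(np.int32)
            if cand.shape != ref.shape:             # candidate block falls outside the frame
                continue
            cost = np.abs(ref - cand).sum()         # SAD matching criterion
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv
```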
Another compression technique which can be readily implemented is a frame decimation method, which reduces the amount of data by encoding and transmitting only selected frames of the video signal and skipping or decimating the remaining frames between them (see, e.g., "Video Codec for Audiovisual Services at p x 64 kb/s", CCITT Recommendation H.261, CDM XV-R 37-E, International Telegraph and Telephone Consultative Committee (CCITT), August 1990).
Normally, the input to a video encoder is a video signal of 30 frames/sec. The frame rate resulting from skipping one, two or three frames between every two encoded frames is typically 15, 10 or 7.5 frames/sec, respectively.
In a conventional video encoder which employs both the hybrid coding and the frame decimation techniques, the selected frames of the video signal are encoded by using the interframe and transform coding methods simultaneously, and the motion vectors obtained from the interframe coding are detected between two successively encoded frames. Since some frames are skipped, the motion gap or displacement between two successively encoded frames becomes more abrupt than that of the undecimated original video signal, which results in larger motion vectors. Therefore, in order to detect an optimum motion vector between two encoded frames, a larger search region should be employed in the previously encoded frame, the size of the region depending on the frame rate or decimation degree of the encoded frames. Since the computational complexity of a block matching algorithm is normally proportional to the size of the search region, a greater computational burden is imposed on the motion vector estimation in a video encoder adopting the frame decimation technique.
It is, therefore, a primary object of the present invention to provide an improved method and apparatus for detecting motion vectors of feature points between two encoded frames through a plurality of steps in a frame decimating video encoder, thereby reducing the overall computational complexity of the video encoder.
In accordance with the present invention, there is provided a method for determining target motion vectors for a set of search points between a current frame and a previously selected frame, wherein N frames are skipped between the current frame and the previously selected frame, N being a positive integer inclusive of 1, and the set of search points contained in the previously selected frame is predetermined, the method comprising the steps of:
(a) storing the N skipped frames;
(b) setting one of the search points as a reference search point;
(c) determining, for the reference search point, a best matching point contained in a corresponding search region within an i-th skipped frame, thereby producing an i-th motion vector representing the displacement between the reference search point and the best matching point, and setting the best matching point as the reference search point, wherein i is a number selected from 1 to N in ascending order, a smaller i referring to a skipped frame temporally closer to the previously selected frame;
(d) storing the i-th motion vector;
(e) repeating steps (c) to (d) until first to N-th motion vectors are obtained;
(f) determining, for the reference search point, a best matching point contained in a corresponding search region within the current frame, thereby producing an (N+1)-st motion vector representing the displacement between the reference search point and the best matching point;
(g) adding the first to (N+1)-st motion vectors, thereby providing a target motion vector representing the displacement between said one of the search points and its corresponding best matching point in the current frame; and
(h) repeating steps (b) to (g) until a group of target motion vectors for all of the search points is detected.
The above and other objects and features of the present invention will become apparent from the following description of preferred embodiments given in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of a video encoder employing a motion estimation unit of the present invention;
Figs. 2A and 2B illustrate the difference between the motion estimation technique of the present invention and that of the prior art;
Fig. 3 shows a block diagram of the motion estimation unit of the present invention;
Fig. 4 is a block diagram of the prediction unit shown in Fig. 1; and
Fig. 5 depicts an exemplary method of detecting motion vectors for non-feature points.
Referring to Fig. 1, there is shown a block diagram of a video encoder employing a motion estimation unit 126 of the present invention. An input digital video signal is fed to a frame decimator 101 and to the motion estimation unit 126. At the frame decimator 101, frames to be encoded are selected by skipping the intervening frames therebetween at a predetermined frame decimation ratio representing the degree of decimation, and the selected frames are fed to a subtractor 102. For example, the frame decimator 101 selects every other frame or every third frame of the video signal if the predetermined decimation ratio is 2 or 3, respectively.
At the motion estimation unit 126, the current frame signal on a line L10 and the skipped frame signals, together with a reconstructed previously encoded frame signal on a line L12 from a frame memory 124, are processed to calculate and estimate a first set of motion vectors, each of which represents the displacement between a feature point of the previously encoded frame and a best matching point contained in a corresponding search region in the current frame.
In accordance with the present invention, the motion vectors between the two selected frames, i.e., the current frame and the previously encoded frame, are detected through a plurality of steps, as described hereinafter with reference to Figs. 2A and 3. In each step, a motion vector between two successive frames (including the skipped frames and the encoded frames) is detected and stored; the stored motion vectors are then added to provide the motion vectors between the two encoded frames.
The motion vectors provided on a line L20 from the motion estimation unit 126 are fed to a prediction unit 122 and an entropy coder 107.
In response to the motion vectors, the prediction unit 122 determines a prediction signal on a pixel-by-pixel basis as described with reference to Fig. 4, and provides it to the subtractor 102 and an adder 115 via a line L30.
The prediction signal from the prediction unit 122 is subtracted from the current frame signal at the subtractor 102; and the resultant data, i.e., an error signal denoting differential pixel values, is dispatched to an image signal encoder 105, wherein a set of error signals is encoded into a set of quantized transform coefficients by means of, e.g., a discrete cosine transform (DCT) and any of the known quantization methods. The quantized transform coefficients are then transmitted via two signal paths: one leads to the entropy coder 107, wherein the quantized transform coefficients, together with the motion vectors supplied through the line L20, are coded by means of, e.g., a combination of run-length and variable length coding techniques for transmission thereof; and the other leads to an image signal decoder 113, wherein the quantized transform coefficients are converted back into a reconstructed differential error signal through inverse quantization and inverse transform. Reconstruction of the error signal is required in order for the encoder to monitor the behavior of the decoder in a receiver, thereby preventing the decoder's reconstructed signal from drifting away from the current frame signal.
The reconstructed error signal from the image signal decoder 113 and the prediction signal from the prediction unit 122 are combined at the adder 115 to provide a reconstructed current frame signal to be written onto the frame memory 124.
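The loop around the subtractor 102, the image signal encoder 105, the image signal decoder 113 and the adder 115 can be summarized by the following minimal sketch for a single block, assuming an 8x8 DCT and a uniform quantizer; the helper name and the quantization step are illustrative assumptions, not values taken from the patent.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(cur_block, pred_block, q_step=16.0):
    """One pass of the DPCM loop of Fig. 1 for a single 8x8 block: transform
    and quantise the prediction error, then locally decode it so that the
    encoder's reconstructed frame stays in step with the receiver's."""
    error = cur_block.astype(np.float64) - pred_block        # subtractor 102
    coeffs = np.round(dctn(error, norm='ortho') / q_step)    # image signal encoder 105
    rec_error = idctn(coeffs * q_step, norm='ortho')         # image signal decoder 113
    rec_block = pred_block + rec_error                       # adder 115 -> frame memory 124
    return coeffs, rec_block                                 # coeffs go on to the entropy coder 107
```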
Fig. 2 A and 2B summarily represent respectively according to motion vector detecting method of the present invention and traditional motion vector detecting method.For the purpose of this diagram, suppose that the frame reduction rate is 3, promptly be encoded and skipped two frames between the frame to be encoded of frame and current selection last one.Fig. 2 A represents according to of the present invention, to present frame F1 and be stored in the be encoded process of the motion vector estimation between the frame F4 of last one in the frame memory 124.Two frame F2 and F3 that leaked choosing are stored in the frame stack 190, as shown in Figure 3, and the details of motion estimator unit 126 shown in Fig. 3 presentation graphs 1.At first, in the frame F3 that it is then skipped, determine with on a corresponding region of search SR3 of search point SP4 that is encoded frame F4, wherein to put SP4 be last one characteristic point that is encoded frame F4 in search.From the SR3 of region of search, determine the optimal match point of SP4, between F4 and F3, provide a motion vector MV3.Then, utilize optimal match point SP3 among the F3, in the frame F2 that it is then skipped, determine corresponding region of search SR2 as new search point, it by region of search SR3 displacement MV3 obtain.In SR2, detect the optimal match point of search point SP3, between F3 and F2, provide a motion vector MV2.Use similar manner, between F2 and present frame F1, detect motion vector MV1.Present frame F1 and on a motion vector that is encoded between the frame F4 be MV1, the vector of MV2 and MV3 and, it is shown among the F4 displacement between the optimal match point among search point SP4 and the F1.
The above process of estimating the motion vector of a feature point of the previously encoded frame is repeated for all of the other feature points in the previously encoded frame.
Fig. 2B illustrates the process of detecting a motion vector between the previously encoded frame F4 and the current frame F1 by using a motion estimation scheme of the prior art, wherein a best matching point of the search point SP4 is determined directly within a search region in F1. If a search region of the same size as used in Fig. 2A, e.g., SR5, were employed, the best matching point SP1 determined in the current frame F1 by the method of Fig. 2A would lie outside the boundary of the search region SR5. Accordingly, a larger search region, e.g., SR6, has to be used in order to obtain a more accurate motion vector. In fact, the magnitude of the motion between the current frame and the previously encoded frame depends largely on the frame decimation ratio. Therefore, in order to obtain more accurate motion vectors, a search region whose size is proportional to the frame decimation ratio should be used in the current frame. If such a larger search region, e.g., SR6, is used to obtain an accurate motion vector, the computational complexity of determining the best matching point increases in direct proportion to the size of the search region. Consequently, even allowing for the additional computation required for the multi-step processing, the method of Fig. 2A entails a smaller computational burden than that of Fig. 2B.
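As a rough illustration of this trade-off (the per-step search range p below is an assumed figure, not one given in the patent): with a decimation ratio of 3 and a search range of plus or minus p pixels per step, a direct search between F4 and F1 needs a window of roughly plus or minus 3p, so the number of candidate positions to be evaluated compares as

$$ (6p+1)^2 \quad\text{versus}\quad 3\,(2p+1)^2, \qquad \text{e.g. } p = 7:\ 43^2 = 1849 \ \text{versus}\ 3 \times 15^2 = 675, $$

which is consistent with the smaller computational burden attributed to the method of Fig. 2A.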
Referring to Fig. 3, there is shown a more detailed block diagram of the motion estimation unit 126 of the present invention shown in Fig. 1.
The video signal inputted to the motion estimation unit 126 via the line L10 is fed to a frame stack 190. Specifically, the skipped frames and the current frame of the video signal are fed to the frame stack 190, where they are stored and provided therefrom to a motion vector detection unit 210.
The previously encoded frame retrieved from the frame memory 124 via the line L12 is inputted to a feature point selection unit 200 and the motion vector detection unit 210. At the feature point selection unit 200, a number of feature points are selected among the pixels contained in the previously encoded frame. Each of the feature points is defined as a position of a pixel capable of representing its neighboring pixels.
Each of the selected feature points is provided to a feature point updating unit 220 and updated therein as described hereinbelow. The updated feature point is fed as a search point to the motion vector detection unit 210, and a motion vector for the updated feature point is detected as explained with reference to Fig. 2A. At the start of the processing, the updated feature point is identical to the feature point provided from the feature point selection unit 200. The motion vector detection unit 210 searches the search region SR3 of the skipped frame F3 for a best matching point of the feature point, e.g., the search point SP4 of Fig. 2A, thereby determining the motion vector MV3 between the search point SP4 and the best matching point within the search region SR3. There are numerous processing algorithms for detecting the motion vector of a feature point. One of them is to first set up a block of a certain size for the feature point and then detect the motion vector for the feature point by using a known block matching algorithm.
The motion vector MV3 is sent to a motion vector accumulator 230 to be stored therein and to the feature point updating unit 220, whereby the best matching point SP3 is provided to the motion vector detection unit 210 as a newly updated feature point. At the motion vector detection unit 210, a best matching point for the new search point, i.e., the new feature point SP3 shown in Fig. 2A, is determined within the search region SR2 of the skipped frame F2, thereby determining the motion vector MV2 between the search point SP3 and the best matching point SP2 within the search region SR2. This motion vector MV2 is then sent to the motion vector accumulator 230 to be added to MV3, and to the feature point updating unit 220. The process of detecting a motion vector and updating the feature point is repeated between the skipped frame F2 and the current frame F1. By accumulating the detected motion vectors, e.g., MV1, MV2 and MV3, the motion vector accumulator 230 provides on the line L20 a final motion vector, e.g., the motion vector MV between the feature point SP4 in the previously encoded frame and the best matching point SP1 within the corresponding search region SR1 in the current frame, as shown in Fig. 2B. The above process is repeated for all of the feature points in the previously encoded frame, thereby providing the first set of motion vectors for the feature points to the prediction unit 122 shown in Fig. 1.
Although the present invention has been described with respect to a frame decimation ratio of 3, various frame decimation schemes or ratios can be used in encoding the video signal, with the motion vectors calculated in a manner similar to the one presented herein.
Referring now to Fig. 4, there is shown a more detailed block diagram of the prediction unit 122. The first set of motion vectors for the feature points is provided from the motion estimation unit 126 via the line L20 to a non-feature point motion estimator 214. At the non-feature point motion estimator 214, a second set of motion vectors for non-feature points, i.e., the points remaining in the previously encoded frame after excluding the feature points, is determined by using the first set of motion vectors for the feature points.
Fig. 5 illustrates an exemplary method of detecting motion vectors for non-feature points in case the feature points are irregularly distributed over the entire frame. A motion vector for a star-marked non-feature-point pixel is calculated by using the feature points placed within a circular boundary having a radius of dr + da, wherein da is the distance from the position of the star-marked pixel to its nearest feature point, and dr is a predetermined expanded radius for including other feature points to be used in the motion vector calculation. For example, if the nearest feature point to the star-marked pixel is "Y", and the feature point "X" is placed within the circular boundary of radius (da + dr), the motion vector (MVx, MVy) for the star-marked pixel is calculated as:
$$ (MV_x,\, MV_y) \;=\; \frac{\tfrac{1}{d_X}\,(MV_x, MV_y)_X \;+\; \tfrac{1}{d_Y}\,(MV_x, MV_y)_Y}{\tfrac{1}{d_X} + \tfrac{1}{d_Y}} $$
wherein dX and dY are the distances from the position of the star-marked pixel to the feature points X and Y, respectively; and (MVx, MVy)X and (MVx, MVy)Y are the respective motion vectors of the feature points.
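A minimal sketch of this inverse-distance interpolation, generalized to any number of feature points inside the circular boundary (the function name and the default expansion radius dr are illustrative assumptions), is:

```python
import math

def non_feature_point_mv(pixel, feature_points, feature_mvs, dr=4.0):
    """Interpolate the motion vector of a non-feature-point pixel from the
    feature points lying within a circle of radius da + dr around it, as in
    Fig. 5, using the inverse-distance weights of the equation above."""
    dists = [math.dist(pixel, fp) for fp in feature_points]
    da = min(dists)                       # distance to the nearest feature point
    w_sum, mvx, mvy = 0.0, 0.0, 0.0
    for d, (vx, vy) in zip(dists, feature_mvs):
        if d <= da + dr:                  # feature point inside the circular boundary
            w = 1.0 / max(d, 1e-9)        # inverse-distance weight, guarding d == 0
            w_sum += w
            mvx += w * vx
            mvy += w * vy
    return mvx / w_sum, mvy / w_sum
```

With only the feature points X and Y inside the circle, this reduces to the equation given above.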
Referring back to Fig. 4, the second set of motion vectors determined for the non-feature points is provided to a pixel-by-pixel predictor 216, wherein the value of each of the pixels contained in a predicted current frame is determined by using both sets of motion vectors.
While the present invention has been described with respect to particular embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (2)

1. A method for determining target motion vectors for a set of search points between a current frame and a previously selected frame, wherein N frames are skipped between the current frame and the previously selected frame, N being a positive integer inclusive of 1, and the set of search points contained in the previously selected frame is predetermined, the method comprising the steps of:
(a) storing the N skipped frames;
(b) setting one of the search points as a reference search point;
(c) determining, for the reference search point, a best matching point contained in a corresponding search region within an i-th skipped frame, thereby producing an i-th motion vector representing the displacement between the reference search point and the best matching point, and setting the best matching point as the reference search point, wherein i is a number selected from 1 to N in ascending order, a smaller i referring to a skipped frame temporally closer to the previously selected frame;
(d) storing the i-th motion vector;
(e) repeating steps (c) to (d) until first to N-th motion vectors are obtained;
(f) determining, for the reference search point, a best matching point contained in a corresponding search region within the current frame, thereby producing an (N+1)-st motion vector representing the displacement between the reference search point and the best matching point;
(g) adding the first to (N+1)-st motion vectors, thereby providing a target motion vector representing the displacement between said one of the search points and its corresponding best matching point in the current frame; and
(h) repeating steps (b) to (g) until a group of target motion vectors for all of the search points is detected.
2. A motion estimation apparatus for determining target motion vectors for a set of search points between a current frame and a previously selected frame, wherein N frames are skipped between the current frame and the previously selected frame, N being a positive integer inclusive of 1, and the set of search points is contained in the previously selected frame, the apparatus comprising:
storage means for storing the N skipped frames;
means for selecting the set of search points in the previously selected frame;
means for providing a reference search point of a reference frame, wherein, if no motion vector is inputted, a search point selected in the previously selected frame is determined as the reference search point and, if a motion vector is inputted, the reference search point is updated, in response to the motion vector, to a best matching point of the reference search point contained in a frame succeeding the reference frame, said providing of (N+1) reference search points being carried out for all of the search points;
motion vector detection means for determining, for the reference search point, a best matching point contained in a search region within the frame succeeding the reference frame, thereby producing one of (N+1) successive motion vectors representing the displacement between the reference search point and the best matching point; and
means for storing and accumulating the (N+1) successive motion vectors to provide a target motion vector representing the displacement between one of said search points and its corresponding best matching point in the current frame, thereby providing a group of target motion vectors for all of the search points.
CN 95101342 1995-01-26 1995-01-26 Method and apparatus for detecting motion vectors in a frame decimating video encoder Pending CN1127969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 95101342 CN1127969A (en) 1995-01-26 1995-01-26 Method and apparatus for detecting motion vectors in a frame decimating video encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 95101342 CN1127969A (en) 1995-01-26 1995-01-26 Method and apparatus for detecting motion vectors in a frame decimating video encoder

Publications (1)

Publication Number Publication Date
CN1127969A (en) 1996-07-31

Family

ID=5073907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 95101342 Pending CN1127969A (en) 1995-01-26 1995-01-26 Method and apparatus for detecting motion vectors in a frame decimating video encoder

Country Status (1)

Country Link
CN (1) CN1127969A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8355438B2 (en) 2006-10-30 2013-01-15 Nippon Telegraph And Telephone Corporation Predicted reference information generating method, video encoding and decoding methods, apparatuses therefor, programs therefor, and storage media which store the programs
US8675735B2 (en) 2006-10-30 2014-03-18 Nippon Telegraph And Telephone Corporation Predicted reference information generating method, video encoding and decoding methods, apparatuses therefor, programs therefor, and storage media which store the programs
CN102946540A (en) * 2008-03-10 2013-02-27 联发科技股份有限公司 Adaptive motion estimation coding
CN102946540B (en) * 2008-03-10 2015-11-18 联发科技股份有限公司 Video signal encoding method
CN106303544A (en) * 2015-05-26 2017-01-04 华为技术有限公司 Video coding/decoding method and encoder
CN106303544B (en) * 2015-05-26 2019-06-11 华为技术有限公司 Video coding/decoding method, encoder and decoder
US10554997B2 (en) 2015-05-26 2020-02-04 Huawei Technologies Co., Ltd. Video coding/decoding method, encoder, and decoder
CN110929093A (en) * 2019-11-20 2020-03-27 百度在线网络技术(北京)有限公司 Method, apparatus, device and medium for search control
CN110929093B (en) * 2019-11-20 2023-08-11 百度在线网络技术(北京)有限公司 Method, apparatus, device and medium for search control

Similar Documents

Publication Publication Date Title
CN1117482C (en) Method for encoding video signal using feature point based motion estimation
US5619281A (en) Method and apparatus for detecting motion vectors in a frame decimating video encoder
CN1115879C (en) Image processing system using pixel-by-pixel motion estimation and frame decimation
US5453801A (en) Method and apparatus for detecting motion vectors in a frame decimating video encoder
CN1103166C (en) Method for encoding video signal using feature point based motion estimation
CN1135148A (en) Method for encoding video signal using feature point based motion estimation
CN1135146A (en) Apparatus for encoding video signal using feature point based motion estimation
CN1142731A (en) Method and apparatus for detecting motion vectors based on hierarchical motion estimation
CN100388793C (en) Image processing apparatus using pixel-by-pixel motion estimation based on feature points
CN1136732C (en) Improved motion compensation apparatus for use in image encoding system
EP0721284B1 (en) An image processing system using pixel-by-pixel motion estimation and frame decimation
CN100384256C (en) Image processing system using pixel-by-pixel motion estimation based on feature points
CN100384257C (en) Image processing system using feature point-based motion estimation
CN1127969A (en) Method and apparatus for detecting motion vectors in a frame decimating video encoder
CN1108061C (en) Apparatus for encoding vided signal using search grid
KR0174455B1 (en) Method and apparatus for encoding a video signal using pixel-by-pixel motion prediction
KR0174463B1 (en) Method and apparatus for detecting motion vectors in a frame decimating video encoder
KR0174462B1 (en) Image processing system using pixel-by-pixel motion estimation and frame decimation
CN1134086A (en) Method and apparatus for selectively encoding/decoding video signal
KR0159374B1 (en) Apparatus for encoding a video signal using a search grid
JPH08205176A (en) Apparatus and method for video signal coding
CN1127970A (en) Method and apparatus for encoding a video signal using pixel-by-pixel motion prediction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication