CN103888767A - Frame rate up-conversion method combining UMH block-matching motion estimation and optical flow field motion estimation - Google Patents

Frame rate up-conversion method combining UMH block-matching motion estimation and optical flow field motion estimation

Info

Publication number: CN103888767A
Application number: CN201410125926.3A
Authority: CN (China)
Prior art keywords: motion, motion vector, frame, optical flow, frame rate
Legal status: Granted; Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN103888767B (granted publication)
Inventors: 孙国霞, 赵悦, 刘琚
Assignee (original and current): Shandong University
Application filed by Shandong University
Priority application: CN201410125926.3A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a frame rate up-conversion method comprising four main steps: the image is segmented into foreground, background, and object edges; variable-block-size block-matching motion estimation is applied to the foreground and background, while optical-flow-field motion estimation is applied to the object edges; the resulting motion vectors are post-processed to obtain suitable motion vectors; and motion compensation with overlapped block motion compensation and bilinear interpolation synthesizes the interpolated frames. The method alleviates the halo effect, the jagged-edge-block artifacts, and other problems of traditional frame rate up-conversion methods, and can be widely used in the field of frame rate up-conversion.

Description

A frame rate up-conversion method combining UMH block-matching motion estimation with optical flow field motion estimation
Technical field
The present invention relates to a video frame rate up-conversion method and belongs to the field of video data processing.
Background technology
Video frame rate up-conversion raises the frame rate of a video by inserting predicted frames into the original sequence, improving the visual quality of low-frame-rate video and producing high-frame-rate video. Because of its diverse applications, frame rate up-conversion is becoming increasingly important in the consumer electronics field. HDTV and multimedia PC systems can play video at higher frame rates than the broadcast video stream, so frame rate up-conversion can be used to raise the frame rate of the original video and improve the viewing experience of the end user.
The methods most widely used for frame rate up-conversion today are motion-compensated interpolation methods, and most of them rely on block-matching motion estimation. At the boundary between foreground and background, however, block matching at the edges of moving objects is inaccurate and yields unreliable motion vectors, so halo effects and jagged edges appear in the occluded regions, the quality of the up-converted video drops, and the visual experience of the end user suffers.
Summary of the invention
To address the halo effect and jagged-edge problems in frame rate up-conversion, this application provides a video frame rate up-conversion method that separates foreground and background by moving-object detection and combines block-matching motion estimation with optical flow field motion estimation, in order to improve the quality of the up-converted video.
The technical solution of the present invention is as follows:
A video frame rate up-conversion method combining UMHexagonS block-matching motion estimation and optical flow field motion estimation, characterized in that the method comprises the following steps:
Step 1: process the original video, separate foreground from background with a frame-difference method, and mark the edge pixels;
Step 2: obtain the motion vectors of the foreground and background with variable-block-size UMHexagonS block-matching motion estimation;
Step 3: obtain the motion vectors of the moving-object edge pixels with optical flow field motion estimation;
Step 4: post-process the motion vectors obtained;
Step 5: apply overlapped block motion compensation to the foreground and background and bilinear-interpolation motion compensation to the object edges to obtain the interpolated frame;
Step 6: combine the interpolated frames with the original frames into the high-frame-rate video.
Preferably, in steps 2 and 3, adaptive motion estimation methods are applied separately to the foreground, the background, and the object edges, to improve the accuracy of the edge-pixel motion vectors.
Preferably, in step 4, the motion vectors obtained are checked for reliability, and unreliable motion vectors are median-filtered, to improve the accuracy of the motion vectors.
Preferably, in step 5, overlapped block motion compensation is performed to reduce blocking artifacts and improve video quality.
Brief description of the drawings
Fig. 1: overall processing block diagram of the present invention.
Fig. 2: schematic diagram of the UMHexagonS block-matching motion estimation method.
Fig. 3: simulation results.
Embodiment
According to the motion of objects in the images, the present invention divides each image into foreground, background, and edge regions, applies variable-block-size block-matching motion estimation and optical-flow-field motion estimation to them respectively, and constructs the interpolated frames with overlapped block motion compensation after motion vector post-processing. This reduces the halo effect and edge jaggedness and achieves the goal of reconstructing high-quality, high-frame-rate video.
The present invention is further detailed below with reference to a specific embodiment (without being limited to this example) and the accompanying drawings.
(I) Processing of the original image:
(1) Read in the video;
(2) Set a counter t = 1, take the t-th frame as the current frame and the (t+2)-th frame as the next frame, and reserve the (t+1)-th frame as the frame to be interpolated;
(3) Use a frame-difference method with a dynamic adaptive threshold to detect moving objects and separate foreground from background. Frame-difference motion detection identifies moving targets from the magnitude of the brightness change between adjacent frames or frames separated by an interval. The specific steps are as follows:
A. Compute the difference between the t-th frame and the (t+2)-th frame according to formula 1 and denote it D(x, y):
D(x, y) = \left| F_{t+2}(x, y) - F_t(x, y) \right|   (formula 1)
where F_t(x, y) and F_{t+2}(x, y) denote the images at times t and t+2, respectively;
B. Choose the dynamic adaptive threshold TH: TH is half the difference between the highest and lowest gray values of the difference image. Let D_{max} and D_{min} be the maximum and minimum of D(x, y) recorded in step A; then TH = (D_{max} - D_{min}) / 2;
C. Binarize the difference image D(x, y) and segment the image according to formula 2:
R(x, y) = \begin{cases} 0, & D(x, y) > TH \\ 1, & D(x, y) \le TH \end{cases}   (formula 2)
In formula 2, R(x, y) is the binarized difference image and TH is the segmentation threshold.
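A minimal sketch of this frame-difference segmentation, assuming grayscale frames stored as NumPy arrays; the one-pixel dilation used to mark the edge band is an illustrative choice, not something formulas 1-2 prescribe.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def frame_difference_segmentation(frame_t, frame_t2):
    """Split a frame pair into moving foreground, static background, and an
    edge band, using the dynamic adaptive threshold of formulas 1-2."""
    d = np.abs(frame_t2.astype(np.float64) - frame_t.astype(np.float64))  # formula 1
    th = (d.max() - d.min()) / 2.0                                        # TH = (D_max - D_min)/2
    r = np.where(d > th, 0, 1)        # formula 2: 0 marks changed (moving) pixels
    foreground = (r == 0)
    background = (r == 1)
    # Edge pixels: a one-pixel band just outside the foreground boundary.
    edge = binary_dilation(foreground) & ~foreground
    return foreground, background, edge
```

The foreground and background masks drive the block-matching steps of stage (II)(1)-(2), while the edge band is handled by the optical-flow estimator of stage (II)(3).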
(II) Motion vector estimation stage:
(1) As shown in Fig. 2, for the moving objects a UMHexagonS bidirectional motion estimation method with a block size of 4×4 is adopted. The specific steps are as follows:
A. According to the segmentation result of stage (I), the region occupied by foreground objects in the virtual frame to be interpolated (frame t+1) is divided into 4×4 rectangular blocks, and bidirectional motion estimation is performed on each small block in turn;
B. The block-matching criterion is as follows: compute the sum of absolute differences (SAD) between the corresponding blocks of frame t and frame t+2 according to formula 3, where SAD is defined as:
SAD[B_{i,j}, v] = \sum_{s \in B_{i,j}} \left| f_t[s - v] - f_{t+2}[s + v] \right|   (formula 3)
where B_{i,j} is the block to be estimated in frame t+1, v is the candidate motion vector, s is a pixel to be interpolated in frame t+1, f_t[s - v] is the corresponding pixel obtained by mapping frame t+1 forward onto frame t, and f_{t+2}[s + v] is the corresponding pixel obtained by mapping frame t+1 backward onto frame t+2;
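A sketch of the bidirectional SAD of formula 3, assuming v = (dy, dx) is an integer-pixel candidate vector and that the displaced patches stay inside both frames (boundary checks are omitted for brevity).

```python
import numpy as np

def bidirectional_sad(frame_t, frame_t2, top, left, block, v):
    """SAD of formula 3 for the block whose top-left corner in the virtual
    frame t+1 is (top, left): compare frame t displaced by -v with
    frame t+2 displaced by +v."""
    dy, dx = v
    patch_prev = frame_t[top - dy:top - dy + block, left - dx:left - dx + block]
    patch_next = frame_t2[top + dy:top + dy + block, left + dx:left + dx + block]
    return int(np.abs(patch_prev.astype(np.int64) - patch_next.astype(np.int64)).sum())
```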
C. Using this matching criterion, the Unsymmetrical-cross Multi-Hexagon-grid Search (UMHexagonS) method performs a hybrid, multi-stage search. The specific steps are as follows:
Step 1: Prediction of the search center: first apply median prediction to the initial center point, then upper-layer prediction, and finally prediction from the motion vector of the corresponding block in the previous frame;
Step 2: Hybrid multi-stage motion search;
Step 2.1: Perform an unsymmetrical cross search, then a grid search centered on the best point found so far, and then a large hexagon search centered on the current best point;
Step 2.2: Perform an extended hexagon search, stopping when the best point is at the center of the pattern or the maximum number of search iterations is reached;
Step 2.3: Narrow the search range and perform a diamond search, stopping when the best point is at the center of the pattern or the maximum number of search iterations is reached.
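The sketch below illustrates the multi-stage structure of steps 2.1-2.3 in simplified form (cross search, hexagon refinement, final diamond refinement). It is not the full UMHexagonS algorithm: the search-center prediction of step 1, the full grid search, and the early-termination thresholds are omitted or simplified, and the pattern sizes are illustrative. `cost` can be any block-matching cost, for example the `bidirectional_sad` helper sketched above wrapped in a lambda.

```python
HEXAGON = [(0, -2), (-2, -1), (-2, 1), (0, 2), (2, 1), (2, -1)]
DIAMOND = [(0, -1), (-1, 0), (0, 1), (1, 0)]

def multi_stage_search(cost, search_range=16, max_iters=16):
    """Simplified UMHexagonS-style search. cost((dy, dx)) returns the matching
    cost of a candidate motion vector; the best vector found is returned."""
    best_v, best_c = (0, 0), cost((0, 0))
    # Step 2.1: unsymmetrical cross search (horizontal arm twice as long as vertical).
    candidates = [(0, dx) for dx in range(-search_range, search_range + 1, 2)]
    candidates += [(dy, 0) for dy in range(-search_range // 2, search_range // 2 + 1, 2)]
    for v in candidates:
        c = cost(v)
        if c < best_c:
            best_v, best_c = v, c
    # Steps 2.1-2.2: hexagon refinement; step 2.3 then shrinks to the diamond pattern.
    for pattern in (HEXAGON, DIAMOND):
        for _ in range(max_iters):
            cand = [(best_v[0] + dy, best_v[1] + dx) for dy, dx in pattern]
            costs = [(cost(v), v) for v in cand]
            new_c, new_v = min(costs)
            if new_c >= best_c:        # best point is at the center: stop refining
                break
            best_v, best_c = new_v, new_c
    return best_v
```

For a 4×4 foreground block at (top, left) one might call `multi_stage_search(lambda v: bidirectional_sad(frame_t, frame_t2, top, left, 4, v))`, restricting candidates so the displaced patches remain inside the frames; the 8×8 background blocks of (2) reuse the same search.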
(2) For the relatively static background, a UMHexagonS bidirectional motion estimation method with a block size of 8×8 is adopted (the specific steps are the same as in (1));
(3) For the edge contours between foreground and background, motion estimation based on the optical flow field is adopted. The basic model of optical flow computation assumes that the brightness of pixels within a small spatial neighborhood remains constant, so the local optical flow field of the moving image can be computed quickly by least-squares optimization. The specific steps are as follows:
A. Let f(x, y, t) denote the continuous spatio-temporal brightness distribution. If the brightness remains constant along the motion trajectory, we obtain:
\frac{df(x, y, t)}{dt} = 0   (formula 4)
In formula 4, x and y vary with time t along the motion trajectory. Applying the chain rule of differentiation to formula 4 gives:
\frac{\partial f(x, y, t)}{\partial x} v_x(x, y, t) + \frac{\partial f(x, y, t)}{\partial y} v_y(x, y, t) + \frac{\partial f(x, y, t)}{\partial t} = 0   (formula 5)
In formula 5, v_x(x, y, t) and v_y(x, y, t) are the components of the motion vector along the spatial coordinate axes. Formula 5 is called the optical flow equation or optical flow constraint, and it can also be written as a vector inner product:
\langle \nabla f(x, y, t), v(x, y, t) \rangle + \frac{\partial f(x, y, t)}{\partial t} = 0   (formula 6)
B. The flow vectors of the motion field, which vary from pixel to pixel, should satisfy the optical flow equation. Define:
\varepsilon_{of}(v(x, y, t)) = \langle \nabla f(x, y, t), v(x, y, t) \rangle + \frac{\partial f(x, y, t)}{\partial t}   (formula 7)
Formula 7 is the error in the optical flow equation: when \varepsilon_{of}(v(x, y, t)) equals 0, the optical flow equation is satisfied. In the presence of occlusion and noise, the square of \varepsilon_{of}(v(x, y, t)) is minimized. The optical flow can be obtained by regularization, minimizing the following functional:
\int \left( \nabla f \cdot v + \frac{\partial f}{\partial t} \right)^2 + \lambda \left( \| \nabla v_x \|^2 + \| \nabla v_y \|^2 \right) dx   (formula 8)
In formula 8, \nabla v_x = \left( \frac{\partial v_x}{\partial x}, \frac{\partial v_x}{\partial y} \right)^T, \nabla v_y = \left( \frac{\partial v_y}{\partial x}, \frac{\partial v_y}{\partial y} \right)^T, and \lambda is a Lagrange multiplier. If the derivatives \nabla f and \partial f / \partial t can be obtained accurately, the parameter can take a larger value; otherwise a smaller value should be chosen.
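A sketch of how an energy of the form of formula 8 can be minimized with a standard Horn-Schunck-style iteration. The text only states the energy and that a least-squares/regularization method is used, so this particular iterative scheme, the finite-difference derivatives, and the default values of `lam` and `n_iters` are illustrative assumptions.

```python
import numpy as np

def optical_flow_field(frame_t, frame_t2, lam=100.0, n_iters=100):
    """Dense flow (v_x, v_y) between frame_t and frame_t2 by iteratively
    minimizing a data term plus lam times the smoothness term of formula 8."""
    f1 = frame_t.astype(np.float64)
    f2 = frame_t2.astype(np.float64)
    # Spatial and temporal brightness derivatives (simple finite differences).
    fx = (np.gradient(f1, axis=1) + np.gradient(f2, axis=1)) / 2.0
    fy = (np.gradient(f1, axis=0) + np.gradient(f2, axis=0)) / 2.0
    ft = f2 - f1
    vx = np.zeros_like(f1)
    vy = np.zeros_like(f1)
    neighbour_mean = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                                np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 4.0
    for _ in range(n_iters):
        vx_bar, vy_bar = neighbour_mean(vx), neighbour_mean(vy)
        # Residual of the optical flow constraint (formulas 5 and 7).
        resid = (fx * vx_bar + fy * vy_bar + ft) / (lam + fx ** 2 + fy ** 2)
        vx = vx_bar - fx * resid
        vy = vy_bar - fy * resid
    return vx, vy
```

Only the flow vectors at the marked edge pixels are used; the interiors of the foreground and background keep the block-matching vectors from (1) and (2).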
(III) Motion vector post-processing stage:
A. Motion vector reliability judgment:
Step 1: Compute the mean of the motion vectors of the block to be judged (denoted block B) and its eight surrounding blocks:
v_m = \frac{1}{9} \sum_{i=1}^{9} v_i   (formula 9)
where v_m is the mean, v_1 is the motion vector of block B, and v_2, ..., v_9 are the motion vectors of the eight surrounding blocks.
Step 2: Compute the mean deviation of the surrounding motion vectors from the mean:
Dn = \frac{1}{8} \sum_{i=2}^{9} \left| v_m - v_i \right|   (formula 10)
where v_i (i = 2, ..., 9) are the motion vectors of the eight surrounding blocks.
Step 3: Compute the deviation of the motion vector of block B from the mean:
Dc = \left| v_m - v_1 \right|   (formula 11)
Step 4: Judge: if Dc > Dn, then v_1 is an unreliable motion vector and median filtering is required.
B. Apply median filtering to the unreliable motion vector:
v_{1,smooth} = \mathrm{median}\left[ v_1, v_2, v_3, \ldots, v_9 \right]   (formula 12)
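A sketch of the reliability test of formulas 9-11 and the median filtering of formula 12 over a block motion-vector field. The magnitude |·| is taken as the Euclidean norm of the vector difference and the vector median is taken component-wise; both are common simplifications that the text does not pin down.

```python
import numpy as np

def postprocess_motion_vectors(mv):
    """Reliability check and median filtering on a motion-vector field mv of
    shape (rows, cols, 2); border blocks are left unchanged for brevity."""
    out = mv.astype(np.float64).copy()
    rows, cols, _ = mv.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = mv[r - 1:r + 2, c - 1:c + 2].reshape(9, 2).astype(np.float64)
            v1 = window[4]                              # motion vector of block B
            neighbours = np.delete(window, 4, axis=0)   # the eight surrounding blocks
            v_m = window.mean(axis=0)                                # formula 9
            dn = np.linalg.norm(v_m - neighbours, axis=1).mean()     # formula 10
            dc = np.linalg.norm(v_m - v1)                            # formula 11
            if dc > dn:                                              # unreliable vector
                out[r, c] = np.median(window, axis=0)                # formula 12
    return out
```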
(IV) Motion compensation stage: overlapped block motion compensation (OBMC) is applied to the foreground and background objects, and bilinear-interpolation motion compensation is applied to the foreground/background edges. When the motion vector estimation is inaccurate, when an object's motion is not a simple translation, or when several objects with different motions fall within one block, overlapped block motion compensation alleviates the blocking-artifact problem. With OBMC, the prediction of a pixel is based not only on the estimated motion vector of its own block but also on the motion vector estimates of the neighboring blocks.
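A minimal sketch of OBMC for the interpolated frame, assuming one bidirectional vector per block and a raised-cosine weighting window over blocks enlarged to twice their size; the window shape and overlap are illustrative choices, since the text specifies only that overlapping block predictions are combined.

```python
import numpy as np

def obmc_interpolate(frame_t, frame_t2, mv, block=8):
    """Overlapped block motion compensation for the frame midway between
    frame_t and frame_t2. mv has shape (rows, cols, 2), one vector per block;
    each block contributes a windowed prediction over a 2*block x 2*block
    region, so neighbouring blocks' vectors overlap at every pixel."""
    h, w = frame_t.shape
    acc = np.zeros((h, w), dtype=np.float64)
    wsum = np.zeros((h, w), dtype=np.float64)
    win1d = np.sin(np.pi * (np.arange(2 * block) + 0.5) / (2 * block))
    win = np.outer(win1d, win1d)          # separable raised-cosine window
    rows, cols, _ = mv.shape
    for r in range(rows):
        for c in range(cols):
            dy, dx = int(round(mv[r, c, 0])), int(round(mv[r, c, 1]))
            top, left = r * block - block // 2, c * block - block // 2
            for y in range(2 * block):
                for x in range(2 * block):
                    py, px = top + y, left + x
                    if not (0 <= py < h and 0 <= px < w):
                        continue
                    y0, x0 = py - dy, px - dx      # position in frame t
                    y1, x1 = py + dy, px + dx      # position in frame t+2
                    if 0 <= y0 < h and 0 <= x0 < w and 0 <= y1 < h and 0 <= x1 < w:
                        pred = 0.5 * (float(frame_t[y0, x0]) + float(frame_t2[y1, x1]))
                        acc[py, px] += win[y, x] * pred
                        wsum[py, px] += win[y, x]
    return acc / np.maximum(wsum, 1e-8)
```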
In traditional frame rate up-conversion methods, block-based estimation at the edges of moving objects also uses background pixels, which makes the edge estimation inaccurate. With pixel-based estimation at the moving-object edges, the pixel information of the moving object is no longer estimated from background pixel information, so correct motion vectors are obtained and the edge halo effect and jagged-block problems are effectively resolved.
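For the edge pixels, per-pixel compensation with bilinear sampling can be sketched as follows, assuming the flow field (vx, vy) from stage (II)(3) gives the displacement from frame t to frame t+2 and that the interpolated frame lies halfway in time, so half the vector is applied towards each side; the half-vector split is a standard assumption rather than something stated in the text.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample img at a non-integer position (y, x) with bilinear interpolation."""
    h, w = img.shape
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0 = min(int(np.floor(y)), h - 2)
    x0 = min(int(np.floor(x)), w - 2)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1] +
            dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

def compensate_edge_pixels(frame_t, frame_t2, vx, vy, edge_mask, interp):
    """Fill the marked edge pixels of the interpolated frame `interp` by
    averaging bilinearly sampled values from frame t and frame t+2 along
    half of the per-pixel flow vector."""
    f1, f2 = frame_t.astype(np.float64), frame_t2.astype(np.float64)
    for y, x in zip(*np.nonzero(edge_mask)):
        hy, hx = 0.5 * vy[y, x], 0.5 * vx[y, x]   # half displacement to each side
        interp[y, x] = 0.5 * (bilinear_sample(f1, y - hy, x - hx) +
                              bilinear_sample(f2, y + hy, x + hx))
    return interp
```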
As shown in Fig. 3, the panels, from left to right and top to bottom, are the simulation results of: motion compensation with UMHexagonS block-matching motion estimation, overlapped block motion compensation, bilinear-interpolation motion compensation with optical flow field estimation, and the motion compensation of this patent.
The present invention uses the standard YUV test sequence Foreman to obtain the simulation results. Compared with the UMHexagonS motion estimation and compensation method, the interpolation method based on optical flow field estimation, and the overlapped block motion compensation method, the method of the present invention can be seen to effectively resolve the edge halo effect and edge jagged-block problems.

Claims (4)

1. A video frame rate up-conversion method combining UMHexagonS block-matching motion estimation and optical flow field motion estimation, characterized in that the method comprises the following steps:
Step 1: process the original video, separate foreground from background with a frame-difference method, and mark the edge pixels;
Step 2: obtain the motion vectors of the foreground and background with variable-block-size UMHexagonS block-matching motion estimation;
Step 3: obtain the motion vectors of the moving-object edge pixels with optical flow field motion estimation;
Step 4: post-process the motion vectors obtained;
Step 5: apply overlapped block motion compensation to the foreground and background and bilinear-interpolation motion compensation to the object edges to obtain the interpolated frame;
Step 6: combine the interpolated frames with the original frames into the high-frame-rate video.
2. The video frame rate up-conversion method combining UMHexagonS block-matching motion estimation and optical flow field motion estimation according to claim 1, characterized in that, in steps 2 and 3, adaptive motion estimation methods are applied separately to the foreground, the background, and the object edges, to improve the accuracy of the edge-pixel motion vectors.
3. The video frame rate up-conversion method combining UMHexagonS block-matching motion estimation and optical flow field motion estimation according to claim 1, characterized in that, in step 4, the motion vectors obtained are checked for reliability, and unreliable motion vectors are median-filtered, to improve the accuracy of the motion vectors.
4. The video frame rate up-conversion method combining UMHexagonS block-matching motion estimation and optical flow field motion estimation according to claim 1, characterized in that overlapped block motion compensation is performed in step 5, to reduce blocking artifacts and improve video quality.

Priority Applications (1)

Application Number   Priority Date  Filing Date  Title
CN201410125926.3A    2014-03-31     2014-03-31   Frame rate up-conversion method combining UMH block-matching motion estimation and optical flow field motion estimation

Publications (2)

Publication Number  Publication Date
CN103888767A        2014-06-25
CN103888767B        2017-07-28

Family ID: 50957457

Country Status (1)

Country  Link
CN       CN103888767B (en)

Legal Events

Code  Description
C06   Publication
PB01  Publication
C10   Entry into substantive examination
SE01  Entry into force of request for substantive examination
GR01  Patent grant (granted publication date: 2017-07-28)
CF01  Termination of patent right due to non-payment of annual fee (termination date: 2019-03-31)