CN104915966A - Frame rate up-conversion motion estimation method and frame rate up-conversion motion estimation system based on Kalman filtering - Google Patents
Frame rate up-conversion motion estimation method and frame rate up-conversion motion estimation system based on Kalman filtering Download PDFInfo
- Publication number
- CN104915966A CN104915966A CN201510233587.5A CN201510233587A CN104915966A CN 104915966 A CN104915966 A CN 104915966A CN 201510233587 A CN201510233587 A CN 201510233587A CN 104915966 A CN104915966 A CN 104915966A
- Authority
- CN
- China
- Prior art keywords
- vector
- candidate
- frame
- block
- motion vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses a frame rate up-conversion motion estimation method and a frame rate up-conversion motion estimation system based on Kalman filtering. The method comprises the following steps: first, the parameters and the initial value of state of a Kalman filtering model are set to make the model accord with the actual system; then, the observed value of a motion vector is obtained through a strategy in which unidirectional motion estimation is carried out first and then a unidirectional motion vector is mapped to an interpolated frame; and finally, an observation vector is updated by a time-variant gain Kalman filtering method so as to obtain a more accurate motion vector. Based on the method, a frame rate up-conversion motion estimation hardware architecture based on Kalman filtering is put forward. High utilization rate and high throughput of the system are realized through an alternate block scanning sequence and two parallel data channels.
Description
Technical field
The present invention relates to Video post-processing technical field, particularly, relate to a kind of frame rate up-conversion method for estimating based on Kalman filtering and system.
Background technology
LCD (liquid crystal display) overcomes that CRT is bulky, the shortcoming of power consumption and flicker, and having replaced CRT in recent years gradually becomes topmost display device in people's daily life.But LCD often produces shade when showing the video of rapid movement, affects the display effect of video.From the angle of signal transacting, a kind of effective solution adopts frame rate up-conversion technology, namely in the middle of original video source, produces new frame, thus improve the frame per second of video source.Based on this, scholars propose a series of frame rate up-conversion method, as average in black plug frame, frame repetition and frame.Although these class methods improve frame per second, also bring shake sense.And pass through to predict the movable information between primitive frame based on the frame rate up-conversion of estimation, and rebuild interpolation frame accordingly, make the video after converting more level and smooth, thus improve visual quality.
Estimation determines that whether effective frame rate up-conversion is crucial.Concerning real-time video processing component, method for estimating not only needs to ensure good visual effect, also must meet from hard-wired constraint, as the systematicness of method, low complex degree, low bandwidth and requirement of real time.Therefore, existing most method cannot reach attainable requirement.Be in particular in: the non-causality operation such as search for too much point in searching motion vector process, motion estimation process exists successive ignition, and low bandwidth needs is also usually out in the cold, existing method is made to be difficult to be applicable to the more and more higher video processing component of input source resolution.
Summary of the invention
For defect of the prior art, the object of this invention is to provide a kind of frame rate up-conversion method for estimating based on Kalman filtering and system, make it be applicable to the more and more higher video processing component of input source resolution.
According to an aspect of the present invention, provide a kind of frame rate up-conversion method for estimating based on Kalman filtering, the method can realize estimation accurately with lower complexity.
Described method comprises the steps:
Step 1: parameter and state initial value that Kalman filter model is set;
Step 2: measure motion vector observed reading:
A) carry out one-way movement estimation (UME), obtain one-way movement vector (UMV);
B) one-way movement vector is mapped to frame to be interpolated, obtains the candidate vector of bi-directional motion estimation (BME);
C) bi-directional motion vector (BMV) is obtained, as the observed reading of system motion vector by bi-directional motion estimation.
Step 3: Kalman filtering is carried out to measurement vector:
A) estimated motion vector predicted value;
B) according to the difference between motion vector observed reading and motion vector predictor, estimated state noise variance and observation noise variance;
C) according to the Kalman filter model parameter in step 1 and step 3b) in state-noise variance, observation noise variance, calculate kalman gain;
D) according to step 2 and step 3c) result that obtains, upgrade motion vector and filtering error covariance.
Step 2a) in one-way movement estimate comprise 8 candidate vectors, they are: 0 vector, 3 spatial candidate vectors and 4 time candidate motion vectors.Wherein time candidate vector is arranged in current block 5*5 neighborhood, and ensures that it differs from spatial candidate vectors.
Adopt the mode traveled through from current block to neighborhood spiral, find out 4 the time candidate vectors being greater than certain threshold value with spatial candidate vectors difference; If time candidate vector is less than 4, then supplement after adding shake vector with the summit block of 5*5 neighborhood.
Step 2b) in current block is carried out overlapping block expansion, and along the direction projection of motion vector in frame to be interpolated, in interpolation frame, overlapping block produces the block occured simultaneously and all will include their two-way candidate set after this motion vector convergent-divergent half in therewith.
Step 3a) in, according to the motion vector predictor of 1 dimension autoregressive model determination current block in time domain, by the motion vector of last interpolation frame and current block co-located as the motion vector predictor of current block.
Step 3b) in, according to the predicted value of motion vector and the diversity factor determination state-noise variance of observed reading and observation noise variance: if both are equal or close, then state-noise variance and the observation noise variance SAD corresponding with it become simple proportionate relationship; Otherwise recalculate diversity factor after observed reading is level and smooth, if diversity factor obviously reduces, judge that observed reading is inaccurate, predicted value is more accurate, vice versa.
According to an aspect of the present invention, provide a kind of frame rate up-conversion movement estimation system based on Kalman filtering, described system comprises with lower module:
Block group/block sequence control module: divide for carrying out block group/block to image, block scan order in controlled motion estimation procedure, and the beginning of estimation and end.
UME candidate vector generation module: after being coupled to block group/block sequence controller and UMV buffer memory, for generation of the candidate vector set needed for UME, these candidate vectors are by accessing UMV buffer memory and obtaining according to candidate vector create-rule.
BME candidate vector generation module: after being coupled to block group/block sequence controller and BME candidate vector buffer memory, for generation of the candidate vector set needed for BME, these candidate vectors are by accessing BME candidate vector buffer memory and obtaining after de-redundancy.
UMV cache module: for the UMV vector field of buffer memory former frame and present frame, the data of two frames are stored by ping-pong structure, access.
BME candidate vector cache module: for the BMV candidate vector of buffer memory former frame and present frame, the data of two frames are stored by ping-pong structure, access.
Pixel cache module: after being coupled to UME candidate vector generation module and BME candidate vector generation module, preserves the data in the field of search of estimation, and the scope of Dynamic Maintenance search window; Response candidate vector request of access, returns the pixel blocks of data of the front and back frame of its correspondence.
SAD calculates and analysis module: after being coupled to pixel cache module, for calculate front and back frame pixel block between absolute difference and (SAD), and com-parison and analysis is carried out to the SAD of each candidate vector, the candidate vector that output SAD is minimum.
Kalman filtering module: be coupled to SAD and calculate and after analysis module, receive the BMV measured value that it exports, and upgrade by predicted value, thus obtain best BMV;
Best BMV buffer memory: for the BMV vector field of buffer memory former frame and present frame; Wherein the BMV vector field of former frame is for generation of the predicted value of current frame motion vectors, and the data of two frames are stored by ping-pong structure, access.
Preferably, described system comprises two parallel data channel: passage 1, according to UME candidate vector access pixel buffer memory, by comparing SAD, obtains UMV; Passage 2 is according to BME candidate vector access pixel buffer memory, and by comparing SAD, obtain BMV, and then carry out Kalman filtering, parallel two paths of data channel multiplexing SAD calculates and analysis module.
The data channel that described two-way walks abreast processes the block of the same position of two continuous frames respectively, and passage 2 is relative to passage 1 time delay one frame, and two paths adopts same block group/block number and scanning sequency.
Preferably, one two field picture divides by described piece of group/block sequence control module in units of block group, block group scanning sequency is that horizontal direction from left to right, from right to left replaces line by line, vertical direction replaces from top to bottom, from top to bottom frame by frame, and block group is inner according to from replacing frame by frame left to bottom right, from lower-left to upper right.
Compared with prior art, the present invention has following beneficial effect:
1, ensure that the otherness between candidate vector, not only reduce the calculated amount of estimation, also improve the accuracy of estimation simultaneously;
2, the vector based on overlapping block expansion maps, and increases the smoothness of BME candidate vector, thus improves the accuracy of measurement vector;
3, the frame rate up-conversion estimation based on Kalman effectively can remove the Motion estimationnoise produced based on SAD criterion;
4, the scanning sequency replaced improves the convergence of system, and improves the recycling rate of waterused of pel data.
5, parallel two-way path, improves handling up and processing speed of system.
Accompanying drawing explanation
By reading the detailed description done non-limiting example with reference to the following drawings, other features, objects and advantages of the present invention will become more obvious:
Fig. 1 is method overview flow chart proposed by the invention;
Fig. 2 is the candidate vector distribution plan that one-way movement is estimated;
Fig. 3 is the schematic diagram that vector maps;
Fig. 4 is system architecture diagram proposed by the invention;
Fig. 5 is block group/block scan sequential schematic;
Fig. 6 is the schematic diagram of two-way path.
Embodiment
Below in conjunction with specific embodiment, the present invention is described in detail.Following examples will contribute to those skilled in the art and understand the present invention further, but not limit the present invention in any form.It should be pointed out that to those skilled in the art, without departing from the inventive concept of the premise, some distortion and improvement can also be made.These all belong to protection scope of the present invention.
As shown in Figure 1, the present embodiment provides a kind of frame rate up-conversion method for estimating based on Kalman filtering, comprises the following steps:
Step 1: parameter and state initial value that Kalman filter model is set;
Step 2: measure motion vector observed reading:
A) carry out one-way movement estimation (UME), obtain one-way movement vector (UMV);
B) one-way movement vector is mapped to frame to be interpolated, obtains the candidate vector of bi-directional motion estimation (BME);
C) bi-directional motion vector (BMV) is obtained, as the observed reading of system motion vector by bi-directional motion estimation.
Step 3: Kalman filtering is carried out to measurement vector:
A) estimated motion vector predicted value;
B) according to the difference between motion vector observed reading and motion vector predictor, estimated state noise variance and observation noise variance;
C) according to the Kalman filter model parameter in step 1 and step 3b) in state-noise variance, observation noise variance, calculate kalman gain;
D) according to step 2 and step 3c) result that obtains, upgrade motion vector and filtering error covariance.
In order to describe a kind of frame rate up-conversion method for estimating based on Kalman filtering of the present invention in detail, illustrate with next example:
Kalman filtering is a kind of observation data by system, system state is carried out to the algorithm of optimal estimation, and it is widely used in multiple fields of Science and engineering.The state-space model of Kalman filtering is made up of state equation and observation equation.
According to the time continuity of vector field, object can be developed by its motion at front frame at the motion vector of present frame, and therefore, state equation can be expressed as follows:
V
k=φV
k-1+ω
k
Wherein, V
krepresent the state value of motion vector, φ represents state-transition matrix, ω
krepresent the state-noise of 0 average Gaussian distribution.
Because estimation is an ill-conditioning problem, there is improper value in the motion vector estimated, if be regarded as estimating noise, then observation equation can be expressed as unavoidably:
Y
k=HV
k+n
k
Wherein, Y
krepresent motion vector observed reading, H represents observing matrix, n
krepresent observation noise, be assumed to be the Gaussian noise of 0 average.
The first step of method, determines the parameter phi of Kalman filter model, H and state initial value V
0.For most video, the scene motion in the short time can be approximated to be linear uniform motion, therefore, in example of the present invention, φ, H is set to definite value, is taken as unit matrix simply.Initial value V
0be related to the speed of convergence of vector field, in order to make vector field restrain as early as possible, the measurement vector that this example obtains by the first frame estimation is as V
0assignment.
Second step, measures motion vector observed reading.The candidate vector that one-way movement is estimated as shown in Figure 2, comprises 0 vector, 3 spatial candidate vectors (V
2, V
3and V
4) and 4 time candidate vectors.Wherein, the choosing method of time candidate vector for: according to the spiral order shown in Fig. 2, search in the time vector neighborhood of 5*5, if the difference of certain time vector and any spatial candidate vectors is greater than threshold value, then included in candidate vector.Choose at most 4 time candidate vectors, if less than 4, then press search order, supplement after adding shake vector with the vector of 5*5 neighborhood summit block.The shake vector that this example is used is as follows:
After obtaining one-way movement vector, need be mapped in interpolation frame.As shown in Figure 3, block all expands to full-sized 2 times at level, vertical direction, and extension blocks is mapped in interpolation frame along movement locus, produces at the most occur simultaneously with 9 interior inserted blocks, then using the candidate vector one of of the half of this motion vector as 9 interior inserted blocks.Carrying out SAD to all candidate vectors of each interior inserted block to compare, select the minimum bi-directional motion vector as inserted block in this, is also the observed reading of system.
3rd step, carries out Kalman filtering to motion vector observed reading.
First the predicted value of optimum movement vector as current vector of last interpolation frame same position block is chosen, namely
P
k|k-1=φP
k-1|k-1φ
T+Q
k
Wherein,
p
k-1|k-1for the Posterior estimator of motion vector and predicated error in kth-1 frame,
p
k|k-1then for testing estimation before motion vector in kth frame and predicated error.φ is state-transition matrix, represents the change of motion vector from former frame to present frame.Q
krepresent predict noise variance.
Secondly, predict noise variance and observation noise variance is determined according to the difference D between motion vector predictor and observed reading:
Wherein, Y
krepresent the motion vector observed reading that second step obtains.If both are equal or close, then predict noise variance Q
kand observation noise variance R
kcan be calculated by the sad value of correspondence:
Q
k=SAD
pk 3/(SAD
pk 3+SAD
mk 3)
R
k=SAD
mk 3/(SAD
pk 3+SAD
mk 3)
Wherein, SAD
pkrepresent the SAD that motion vector predictor is corresponding, SAD
mkrepresent the SAD that motion vector observed reading is corresponding.If the difference of motion vector predictor and observed reading is greater than threshold value TH, then investigate
Wherein,
for testing estimation before motion vector in kth frame, it is also estimation of motion vectors value;
for measurement vector is at the mean value of 3*3 neighborhood;
for motion vector predictor with
difference between observed reading.To the video sequence of different resolution, TH should get different values, in the present embodiment, to all CIF video sequences, and TH value 5.Then predict noise variance and observation noise error can be drawn by following formula:
Wherein, parameter alpha represents
relative to the decline number percent of threshold value; β, γ are used for balancing negative exponential function, and they meet e
-β D+ γ< 0.5.Accordingly, Kalman filtering gain K
kbe calculated as follows:
K
k=P
k|k-1[P
k|k-1+R
k]
-1
Wherein, P
k|k-1, R
kdefinition as previously described.
Optimum movement vector is updated to:
Wherein, I represents unit matrix,
for the Posterior estimator of motion vector in kth frame, it is also the optimum movement vector of system.
Meanwhile, filtering error covariance is updated to:
P
k|k=[I-K
k]P
k|k-1
Wherein, P
k|kfor the Posterior estimator of predicated error in kth frame, it is also filtering error covariance.
As shown in Figure 4, in another embodiment, provide a kind of frame rate up-conversion movement estimation system based on Kalman filtering, it comprises with lower module:
Block group/block sequence control module: divide for carrying out block group/block to image, block scan order in controlled motion estimation procedure, and the beginning of estimation and end.As shown in Figure 5, in the present embodiment, a two field picture is split by block group/block sequence control module in units of block group.Block group scanning sequency is that horizontal direction from left to right, from right to left replaces line by line, and vertical direction replaces from top to bottom, from top to bottom frame by frame; Block group is inner according to from the alternating sequence left to bottom right, from lower-left to upper right.
UME candidate vector generation module: after being coupled to block group/block sequence controller and UMV buffer memory, for generation of the candidate vector set needed for UME, these candidate vectors are by accessing UMV buffer memory and obtaining according to candidate vector create-rule.
BME candidate vector generation module: after being coupled to block group/block sequence controller and BME candidate vector buffer memory, for generation of the candidate vector set needed for BME, these candidate vectors are by accessing BME candidate vector buffer memory and obtaining after de-redundancy.
UMV cache module: for the UMV vector field of buffer memory former frame and present frame, the data of two frames are stored by ping-pong structure, access.
BME candidate vector cache module: for the BMV candidate vector of buffer memory former frame and present frame, the data of two frames are stored by ping-pong structure, access.
Pixel cache module: after being coupled to UME candidate vector generation module and BME candidate vector generation module, preserves the data in the field of search of estimation, and the scope of Dynamic Maintenance search window; Response candidate vector request of access, returns the pixel blocks of data of the front and back frame of its correspondence.
SAD calculates and analysis module: after being coupled to pixel cache module, for calculate front and back frame pixel block between absolute difference and (SAD), and com-parison and analysis is carried out to the SAD of each candidate vector, the candidate vector that output SAD is minimum.
Kalman filtering module: be coupled to SAD and calculate and after analysis module, receive the BMV measured value that it exports, and upgrade by predicted value, thus obtain best BMV;
Best BMV buffer memory: for the BMV vector field of buffer memory former frame and present frame; Wherein the BMV vector field of former frame is for generation of the predicted value of current frame motion vectors, and the data of two frames are stored by ping-pong structure, access.
As shown in Figure 6, whole system is divided into parallel two paths, and passage 1, according to UME candidate vector access pixel buffer memory, by comparing SAD, obtains UMV; Passage 2, according to BME candidate vector access pixel buffer memory, by comparing SAD, obtains BMV, and then carries out Kalman filtering.The passage that two-way walks abreast all is controlled by block group/block controller, and process the block of the same position of two continuous frames respectively, passage 2 is relative to passage 1 time delay one frame.Two paths serial access pixel cache module, pixel buffer memory is according to different access type mapping addresss, and return to corresponding front and back to pel data, multiplexing SAD calculates and analysis module.
Described UME candidate vector generation module produces UME candidate vector in a fixed order, is followed successively by: 0 vector, 3 spatial candidate vectors and 4 time candidate vectors.Wherein, during generation time candidate vector, at the neighborhood of 5*5, select 4 the time candidate vectors different with spatial candidate vectors according to current block to the scanning sequency that neighborhood spiral travels through; If time candidate vector is less than 4, then replace after adding shake vector with the summit block of 5*5 neighborhood.
The quantity of the BME candidate vector of described each piece is different, and all candidate vectors, all after the de-redundancy module of inside, abandon the candidate vector of repetition.
Described UMV cache module, BME candidate vector cache module and best BMV buffer memory all adopt ping-pong structure to preserve the data of two continuous frames, and wherein the data of former frame are that current operation is used, and the data that current operation obtains give over to next frame operation and use.
Described UME candidate vector generation module and BME candidate vector generation module serial access pixel cache module, pixel buffer memory, according to different access type mapping addresss, returns to corresponding front and back to pel data.
Above specific embodiments of the invention are described.It is to be appreciated that the present invention is not limited to above-mentioned particular implementation, those skilled in the art can make various distortion or amendment within the scope of the claims, and this does not affect flesh and blood of the present invention.
Claims (14)
1. based on a frame rate up-conversion method for estimating for Kalman filtering, it is characterized in that, described method comprises the steps:
Step 1: parameter and state initial value that Kalman filter model is set;
Step 2: measure motion vector observed reading:
A) one-way movement estimation (UME) is carried out to candidate vector, obtain one-way movement vector (UMV);
B) one-way movement vector is mapped to frame to be interpolated, obtains the candidate vector of bi-directional motion estimation;
C) bi-directional motion estimation (BME) is carried out to the candidate vector obtained in b), obtain bi-directional motion vector (BMV), as the observed reading of system motion vector;
Step 3: Kalman filtering is carried out to motion vector observed reading:
A) estimated motion vector predicted value;
B) according to the difference between motion vector observed reading and motion vector predictor, estimated state noise variance and observation noise variance;
C) according to the Kalman filter model parameter in step 1 and step 3b) in state-noise variance, observation noise variance, calculate kalman gain;
D) according to step 2 and step 3c) result that obtains, upgrade motion vector and filtering error covariance.
2. the frame rate up-conversion method for estimating based on Kalman filtering according to claim 1, it is characterized in that, step 2a) in one-way movement estimate comprise 8 candidate vectors, they are: 0 vector, 3 spatial candidate vectors and 4 time candidate motion vectors, wherein time candidate vector is arranged in current block 5*5 neighborhood, and ensures that it differs from spatial candidate vectors.
3. the frame rate up-conversion method for estimating based on Kalman filtering according to claim 2, is characterized in that, adopts the mode traveled through from current block to neighborhood spiral, finds out 4 the time candidate vectors being greater than certain threshold value with spatial candidate vectors difference; If time candidate vector is less than 4, then supplement after adding shake vector with the summit block of 5*5 neighborhood.
4. the frame rate up-conversion method for estimating based on Kalman filtering according to claim 1, it is characterized in that, step 2b) in current block is carried out overlapping block expansion, and along the direction projection of motion vector in frame to be interpolated, in interpolation frame, overlapping block produces the block occured simultaneously and all will include their two-way candidate set after this motion vector convergent-divergent half in therewith.
5. the frame rate up-conversion method for estimating based on Kalman filtering according to claim 1, it is characterized in that, step 3a) in, according to the motion vector predictor of 1 dimension autoregressive model determination current block in time domain, by last interpolation frame with the motion vector predictor of the motion vector of current block co-located as current block.
6. the frame rate up-conversion method for estimating based on Kalman filtering according to claim 1, it is characterized in that, step 3b) in, according to the predicted value of motion vector and the diversity factor determination state-noise variance of observed reading and observation noise variance: if both are equal or close, then state-noise variance and the observation noise variance SAD corresponding with it become simple proportionate relationship; Otherwise recalculate diversity factor after observed reading is level and smooth, if diversity factor obviously reduces, judge that observed reading is inaccurate, predicted value is more accurate, vice versa.
7. based on a frame rate up-conversion movement estimation system for Kalman filtering, it is characterized in that, described system comprises with lower module:
Block group/block sequence control module: divide for carrying out block group/block to image, block scan order in controlled motion estimation procedure, and the beginning of estimation and end;
UME candidate vector generation module: after being coupled to block group/block sequence controller and one-way movement vector buffer memory, candidate vector set needed for estimating for generation of one-way movement, these candidate vectors are by accessing one-way movement vector buffer memory and obtaining according to candidate vector create-rule;
BME candidate vector generation module: after being coupled to block group/block sequence controller and bi-directional motion vector candidate vector buffer memory, for generation of the candidate vector set needed for bi-directional motion vector, these candidate vectors are by accessing bi-directional motion vector candidate vector buffer memory and obtaining after de-redundancy;
UMV cache module: for the one-way movement vector vector field of buffer memory former frame and present frame, the data of two frames are stored by ping-pong structure, access;
BME candidate vector cache module: for the bi-directional motion vector candidate vector of buffer memory former frame and present frame, the data of two frames are stored by ping-pong structure, access;
Pixel cache module: after being coupled to UME candidate vector generation module and BME candidate vector generation module, preserves the data in the field of search of estimation, and the scope of Dynamic Maintenance search window; Response candidate vector request of access, returns the pixel blocks of data of the front and back frame of its correspondence;
SAD calculates and analysis module: after being coupled to pixel cache module, for calculate front and back frame pixel block between absolute difference and (SAD), and com-parison and analysis is carried out to the SAD of each candidate vector, the candidate vector that output SAD is minimum;
Kalman filtering module: be coupled to SAD and calculate and after analysis module, receive the bi-directional motion vector measured value that it exports, and upgrade by predicted value, thus obtain best bi-directional motion vector;
Best BMV buffer memory: for the bi-directional motion vector vector field of buffer memory former frame and present frame; Wherein the bi-directional motion vector vector field of former frame is for generation of the predicted value of current frame motion vectors, and the data of two frames are stored by ping-pong structure, access.
8. the frame rate up-conversion movement estimation system based on Kalman filtering according to claim 7, is characterized in that, described system comprises two parallel data channel: passage 1, according to UME candidate vector access pixel buffer memory, by comparing SAD, obtains UMV; Passage 2, according to BME candidate vector access pixel buffer memory, by comparing SAD, obtains BMV, and then carries out Kalman filtering; Parallel two paths of data channel multiplexing SAD calculates and analysis module.
9. the frame rate up-conversion movement estimation system based on Kalman filtering according to claim 8, it is characterized in that, described two parallel data channel process the block of the same position of two continuous frames respectively, passage 2 is relative to passage 1 time delay one frame, and two paths adopts same block group/block number and scanning sequency.
10. the frame rate up-conversion movement estimation system based on Kalman filtering according to claim 7, it is characterized in that, one two field picture divides by described piece of group/block sequence control module in units of block group, block group scanning sequency is that horizontal direction from left to right, from right to left replaces line by line, vertical direction replaces from top to bottom, from top to bottom frame by frame, and block group is inner according to from replacing frame by frame left to bottom right, from lower-left to upper right.
The 11. frame rate up-conversion movement estimation systems based on Kalman filtering according to claim 7, it is characterized in that, described UME candidate vector generation module produces UME candidate vector in a fixed order, is followed successively by: 0 vector, 3 spatial candidate vectors and 4 time candidate vectors; Wherein, during generation time candidate vector, at the neighborhood of 5*5, select 4 the time candidate vectors different with spatial candidate vectors according to current block to the scanning sequency that neighborhood spiral travels through; If time candidate vector is less than 4, then replace after adding shake vector with the summit block of 5*5 neighborhood.
12. frame rate up-conversion movement estimation systems based on Kalman filtering according to any one of claim 7-11, it is characterized in that, the quantity of the BME candidate vector of described each piece is different, and all candidate vectors, all after the de-redundancy module of inside, abandon the candidate vector of repetition.
13. frame rate up-conversion movement estimation systems based on Kalman filtering according to any one of claim 7-11, it is characterized in that, described UMV cache module, BME candidate vector cache module and best BMV buffer memory all adopt ping-pong structure to preserve the data of two continuous frames, wherein the data of former frame are that current operation is used, and the data that current operation obtains give over to next frame operation and use.
14. frame rate up-conversion movement estimation systems based on Kalman filtering according to any one of claim 7-11, it is characterized in that, described UME candidate vector generation module and BME candidate vector generation module serial access pixel cache module, pixel buffer memory, according to different access type mapping addresss, returns to corresponding front and back to pel data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510233587.5A CN104915966B (en) | 2015-05-08 | 2015-05-08 | Frame rate up-conversion method for estimating and system based on Kalman filtering |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510233587.5A CN104915966B (en) | 2015-05-08 | 2015-05-08 | Frame rate up-conversion method for estimating and system based on Kalman filtering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104915966A true CN104915966A (en) | 2015-09-16 |
CN104915966B CN104915966B (en) | 2018-02-09 |
Family
ID=54085003
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510233587.5A Active CN104915966B (en) | 2015-05-08 | 2015-05-08 | Frame rate up-conversion method for estimating and system based on Kalman filtering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104915966B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110446107A (en) * | 2019-08-15 | 2019-11-12 | 电子科技大学 | A kind of video frame rate upconversion method suitable for scaling movement and light and shade variation |
WO2020147772A1 (en) * | 2019-01-16 | 2020-07-23 | Beijing Bytedance Network Technology Co., Ltd. | Motion candidates derivation |
US10778997B2 (en) | 2018-06-29 | 2020-09-15 | Beijing Bytedance Network Technology Co., Ltd. | Resetting of look up table per slice/tile/LCU row |
US10873756B2 (en) | 2018-06-29 | 2020-12-22 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUT and AMVP |
US20200413044A1 (en) | 2018-09-12 | 2020-12-31 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking hmvp candidates depend on total number minus k |
US11134243B2 (en) | 2018-07-02 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Rules on updating luts |
US11134267B2 (en) | 2018-06-29 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Update of look up table: FIFO, constrained FIFO |
US11140383B2 (en) | 2019-01-13 | 2021-10-05 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between look up table and shared merge list |
US11140385B2 (en) | 2018-06-29 | 2021-10-05 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
US11146785B2 (en) | 2018-06-29 | 2021-10-12 | Beijing Bytedance Network Technology Co., Ltd. | Selection of coded motion information for LUT updating |
US11159817B2 (en) | 2018-06-29 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for updating LUTS |
US11159807B2 (en) | 2018-06-29 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Number of motion candidates in a look up table to be checked according to mode |
CN114979091A (en) * | 2022-07-28 | 2022-08-30 | 腾讯科技(深圳)有限公司 | Data transmission method, related device, equipment and storage medium |
US11528500B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Partial/full pruning when adding a HMVP candidate to merge/AMVP |
US11589071B2 (en) | 2019-01-10 | 2023-02-21 | Beijing Bytedance Network Technology Co., Ltd. | Invoke of LUT updating |
US11641483B2 (en) | 2019-03-22 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between merge list construction and other tools |
US11895318B2 (en) | 2018-06-29 | 2024-02-06 | Beijing Bytedance Network Technology Co., Ltd | Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1792201B1 (en) * | 2004-09-17 | 2008-11-19 | ITT Manufacturing Enterprises, Inc. | Improved gps accumulated delta range processing for navigation applications |
CN103533214A (en) * | 2013-10-01 | 2014-01-22 | 中国人民解放军国防科学技术大学 | Video real-time denoising method based on kalman filtering and bilateral filtering |
CN103873743A (en) * | 2014-03-24 | 2014-06-18 | 中国人民解放军国防科学技术大学 | Video de-noising method based on structure tensor and Kalman filtering |
CN103905826A (en) * | 2014-04-10 | 2014-07-02 | 北京工业大学 | Self-adaptation global motion estimation method |
CN104301736A (en) * | 2014-10-13 | 2015-01-21 | 上海交通大学 | Storage-bandwidth-requirement lowered ultra-high definition frame rate up-conversion system |
-
2015
- 2015-05-08 CN CN201510233587.5A patent/CN104915966B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1792201B1 (en) * | 2004-09-17 | 2008-11-19 | ITT Manufacturing Enterprises, Inc. | Improved gps accumulated delta range processing for navigation applications |
CN103533214A (en) * | 2013-10-01 | 2014-01-22 | 中国人民解放军国防科学技术大学 | Video real-time denoising method based on kalman filtering and bilateral filtering |
CN103873743A (en) * | 2014-03-24 | 2014-06-18 | 中国人民解放军国防科学技术大学 | Video de-noising method based on structure tensor and Kalman filtering |
CN103905826A (en) * | 2014-04-10 | 2014-07-02 | 北京工业大学 | Self-adaptation global motion estimation method |
CN104301736A (en) * | 2014-10-13 | 2015-01-21 | 上海交通大学 | Storage-bandwidth-requirement lowered ultra-high definition frame rate up-conversion system |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11146786B2 (en) | 2018-06-20 | 2021-10-12 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
US11134267B2 (en) | 2018-06-29 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Update of look up table: FIFO, constrained FIFO |
US10778997B2 (en) | 2018-06-29 | 2020-09-15 | Beijing Bytedance Network Technology Co., Ltd. | Resetting of look up table per slice/tile/LCU row |
US11695921B2 (en) | 2018-06-29 | 2023-07-04 | Beijing Bytedance Network Technology Co., Ltd | Selection of coded motion information for LUT updating |
US11140385B2 (en) | 2018-06-29 | 2021-10-05 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
US11909989B2 (en) | 2018-06-29 | 2024-02-20 | Beijing Bytedance Network Technology Co., Ltd | Number of motion candidates in a look up table to be checked according to mode |
US11895318B2 (en) | 2018-06-29 | 2024-02-06 | Beijing Bytedance Network Technology Co., Ltd | Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks |
US11877002B2 (en) | 2018-06-29 | 2024-01-16 | Beijing Bytedance Network Technology Co., Ltd | Update of look up table: FIFO, constrained FIFO |
US11245892B2 (en) | 2018-06-29 | 2022-02-08 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
US11159807B2 (en) | 2018-06-29 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Number of motion candidates in a look up table to be checked according to mode |
US11973971B2 (en) | 2018-06-29 | 2024-04-30 | Beijing Bytedance Network Technology Co., Ltd | Conditions for updating LUTs |
US10873756B2 (en) | 2018-06-29 | 2020-12-22 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUT and AMVP |
US11146785B2 (en) | 2018-06-29 | 2021-10-12 | Beijing Bytedance Network Technology Co., Ltd. | Selection of coded motion information for LUT updating |
US12034914B2 (en) | 2018-06-29 | 2024-07-09 | Beijing Bytedance Network Technology Co., Ltd | Checking order of motion candidates in lut |
US11528500B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Partial/full pruning when adding a HMVP candidate to merge/AMVP |
US11528501B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUT and AMVP |
US11153557B2 (en) | 2018-06-29 | 2021-10-19 | Beijing Bytedance Network Technology Co., Ltd. | Which LUT to be updated or no updating |
US11159817B2 (en) | 2018-06-29 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for updating LUTS |
US11706406B2 (en) | 2018-06-29 | 2023-07-18 | Beijing Bytedance Network Technology Co., Ltd | Selection of coded motion information for LUT updating |
US11134244B2 (en) | 2018-07-02 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Order of rounding and pruning in LAMVR |
US11153558B2 (en) | 2018-07-02 | 2021-10-19 | Beijing Bytedance Network Technology Co., Ltd. | Update of look-up tables |
US11134243B2 (en) | 2018-07-02 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Rules on updating luts |
US11463685B2 (en) | 2018-07-02 | 2022-10-04 | Beijing Bytedance Network Technology Co., Ltd. | LUTS with intra prediction modes and intra mode prediction from non-adjacent blocks |
US11153559B2 (en) | 2018-07-02 | 2021-10-19 | Beijing Bytedance Network Technology Co., Ltd. | Usage of LUTs |
US20200413044A1 (en) | 2018-09-12 | 2020-12-31 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking hmvp candidates depend on total number minus k |
US20210297659A1 (en) | 2018-09-12 | 2021-09-23 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking hmvp candidates depend on total number minus k |
US11997253B2 (en) | 2018-09-12 | 2024-05-28 | Beijing Bytedance Network Technology Co., Ltd | Conditions for starting checking HMVP candidates depend on total number minus K |
US11159787B2 (en) | 2018-09-12 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking HMVP candidates depend on total number minus K |
US11589071B2 (en) | 2019-01-10 | 2023-02-21 | Beijing Bytedance Network Technology Co., Ltd. | Invoke of LUT updating |
US11140383B2 (en) | 2019-01-13 | 2021-10-05 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between look up table and shared merge list |
US11909951B2 (en) | 2019-01-13 | 2024-02-20 | Beijing Bytedance Network Technology Co., Ltd | Interaction between lut and shared merge list |
US11956464B2 (en) | 2019-01-16 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd | Inserting order of motion candidates in LUT |
US11962799B2 (en) | 2019-01-16 | 2024-04-16 | Beijing Bytedance Network Technology Co., Ltd | Motion candidates derivation |
WO2020147772A1 (en) * | 2019-01-16 | 2020-07-23 | Beijing Bytedance Network Technology Co., Ltd. | Motion candidates derivation |
US11641483B2 (en) | 2019-03-22 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between merge list construction and other tools |
CN110446107A (en) * | 2019-08-15 | 2019-11-12 | 电子科技大学 | A kind of video frame rate upconversion method suitable for scaling movement and light and shade variation |
CN110446107B (en) * | 2019-08-15 | 2020-06-23 | 电子科技大学 | Video frame rate up-conversion method suitable for scaling motion and brightness change |
CN114979091A (en) * | 2022-07-28 | 2022-08-30 | 腾讯科技(深圳)有限公司 | Data transmission method, related device, equipment and storage medium |
CN114979091B (en) * | 2022-07-28 | 2022-11-11 | 腾讯科技(深圳)有限公司 | Data transmission method, related device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN104915966B (en) | 2018-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104915966A (en) | Frame rate up-conversion motion estimation method and frame rate up-conversion motion estimation system based on Kalman filtering | |
KR100604394B1 (en) | A frame interpolation method, apparatus and image display system using the same | |
CN103402098B (en) | A kind of video frame interpolation method based on image interpolation | |
CN103369208B (en) | Self adaptation interlace-removing method and device | |
CN102291578B (en) | Apparatus and method for frame rate conversion | |
US7535513B2 (en) | Deinterlacing method and device in use of field variable partition type | |
CN101621693B (en) | Frame frequency lifting method for combining target partition and irregular block compensation | |
CN102163334B (en) | Method for extracting video object under dynamic background based on fisher linear discriminant analysis | |
US8817878B2 (en) | Method and system for motion estimation around a fixed reference vector using a pivot-pixel approach | |
CN102123283B (en) | Interpolated frame acquisition method and device in video frame rate conversion | |
US20110299597A1 (en) | Image processing method using motion estimation and image processing apparatus | |
CN103702128B (en) | A kind of interpolation frame generating method being applied on video frame rate conversion | |
CN101207707A (en) | System and method for advancing frame frequency based on motion compensation | |
US20130170551A1 (en) | Halo Reduction in Frame-Rate-Conversion Using Hybrid Bi-Directional Motion Vectors for Occlusion/Disocclusion Detection | |
TW200947416A (en) | Video display apparatus | |
EP2306401A1 (en) | Motion estimating method and image processing apparatus | |
US8446523B2 (en) | Image processing method and circuit | |
TWI399094B (en) | Device and method for adaptive blending motion compensation interpolation in frame rate up-conversion | |
CN101557516A (en) | Video quality evaluation method and device | |
US20130136182A1 (en) | Motion vector refining device and video refining method thereof | |
CN102065263B (en) | Image interpolation processing apparatus and method thereof | |
CN102170567A (en) | Motion vector search prediction-based adaptive motion estimation algorithm | |
CN103152566B (en) | A kind of video frame rate method for improving | |
CN102364933A (en) | Motion-classification-based adaptive de-interlacing method | |
CN107707916A (en) | A kind of frame per second transfer algorithm based on scene change detecte |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |