CN105005977B - Single-video frame rate restoration method based on pixel stream and temporal prior information - Google Patents

Single-video frame rate restoration method based on pixel stream and temporal prior information

Info

Publication number
CN105005977B
CN105005977B CN201510414187.4A CN201510414187A CN105005977B CN 105005977 B CN105005977 B CN 105005977B CN 201510414187 A CN201510414187 A CN 201510414187A CN 105005977 B CN105005977 B CN 105005977B
Authority
CN
China
Prior art keywords
pixel stream
video
restored
pixel
stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510414187.4A
Other languages
Chinese (zh)
Other versions
CN105005977A (en)
Inventor
徐枫
蒋德富
王慧斌
石爱业
张振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201510414187.4A priority Critical patent/CN105005977B/en
Publication of CN105005977A publication Critical patent/CN105005977A/en
Application granted granted Critical
Publication of CN105005977B publication Critical patent/CN105005977B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a single-video frame rate restoration method based on pixel streams and temporal prior information. First, a single video is acquired and observation pixel streams are constructed, and the single video is expressed as a matrix of observation pixel streams. Then, a degradation model of the observation pixel streams and a probability estimation formula for the original pixel streams are established, from which a restoration formula for the original pixel stream containing temporal prior information is derived. The observation pixel streams are restored one by one, and the temporal prior model used in each restoration is determined in a data-driven manner. Finally, the restored pixel streams are combined in matrix form into a restored video, which serves as the final restored high-frame-rate video. The restoration method of the invention uses a single video, so that video acquisition is convenient and the restoration pipeline is concise. Frame rate restoration is based on a probabilistic statistical framework over pixel streams, introduces temporal prior information, and determines the temporal prior model in a data-driven manner, which not only improves video fidelity but also effectively eliminates the smearing of video frames.

Description

Single-video frame rate restoration method based on pixel stream and temporal prior information
Technical Field
The invention relates to a video restoration method, and in particular to a single-video frame rate restoration method based on pixel streams and temporal prior information, belonging to the technical field of computer image and video processing.
Background
Most traditional video restoration methods address the problem of low spatial resolution: they improve the spatial resolution of a video and restore its spatial detail by spatially restoring a single video frame by frame, or by reconstructing complementary spatial information from multiple videos. After many years of extensive and intensive research, the various video restoration methods proposed by researchers can solve the low-spatial-resolution problem to a certain extent, or at least meet basic application requirements.
However, even when the spatial detail information of a video is recovered, such spatially oriented restoration methods still suffer from low temporal resolution and missing frame information, so that the video may flicker, pause or jitter.
Therefore, unlike commonly used video restoration techniques, video restoration should not only involve spatial resolution enhancement and spatial detail restoration, but should also focus on temporal resolution (i.e., frame rate) and temporal detail information to further improve video quality and visual effect. At present, some researchers in China and abroad have noticed this problem and proposed frame rate restoration methods for video.
Existing video frame rate restoration methods generally follow two paths.
One path is to collect multiple videos of the same time period and scene and to restore the video frame rate by fusing the redundant/complementary frame information of the multiple videos. However, this path is constrained by video capture conditions, such as whether the number of devices is sufficient and whether the device models are uniform; it also involves the synchronization and temporal registration of multiple videos, and is therefore complex to implement.
The other, simpler path to frame rate restoration requires only one video and restores the frame rate through inter-frame interpolation. However, the interpolation functions assumed for inter-frame interpolation of a single video (e.g., linear, spline or quadratic functions) are largely arbitrary, so the fidelity of the interpolated frames is not high. Even when interpolation is realized with a minimum-mean-square-error criterion, the smearing of video frames cannot be removed; this smearing is mainly caused by the long exposure time of the acquisition device, which makes a fast-moving object appear blurred along its motion trajectory in the image.
Disclosure of Invention
The main objective of the present invention is to overcome the deficiencies of the prior art and to provide a single-video frame rate restoration method based on pixel streams and temporal prior information, which is particularly suitable for restoring video of fast-moving objects.
The technical problem to be solved by the invention is to provide a single-video frame rate restoration method based on pixel streams and temporal prior information that offers convenient video acquisition, a concise restoration pipeline, reliable restoration results and strong practicability. It avoids the complex procedures of synchronous acquisition and temporal registration required by multi-video restoration, greatly improves video fidelity, effectively eliminates the smearing caused by long exposure times, and has great industrial utilization value.
In order to achieve the purpose, the invention adopts the technical scheme that:
a method for single video frame rate restoration based on pixel stream and temporal prior information, comprising the steps of:
step (1) acquiring the video to be restored: acquiring a single video I = {I(i) | i ∈ N} through video capture, where I(i) is a frame of the video and i is the temporal index of each frame;
step (2) designing a construction method of the pixel stream to be restored: in frame order, connecting in series the pixels I_mn(i) of each frame of the single video located at the same coordinate (m, n) to form an observation pixel stream I_mn = {I_mn(i) | i ∈ N}, which serves as the pixel stream to be restored;
step (3) representing the single video as a matrix of observation pixel streams: following the construction method of step (2), constructing observation pixel streams one by one in coordinate order within each frame of the single video and combining them, so that the single video can be expressed in matrix form as I = [I_mn], each element of the matrix being a constructed observation pixel stream;
step (4) establishing a degradation model of the observation pixel stream: I_mn = D B H_mn + E, where D is the temporal down-sampling matrix, B is the temporal blurring matrix used to simulate the exposure time, H_mn is the original pixel stream, and E is an additive Gaussian noise vector;
step (5) calculating the probability estimation formula of the original pixel stream H_mn: according to the Bayesian probability rule, the estimate of the original pixel stream H_mn is calculated as
$$\hat{H}_{mn} = \arg\max_{H_{mn}} P(H_{mn} \mid I_{mn}) = \arg\max_{H_{mn}} \left[ P(I_{mn} \mid H_{mn})\, P(H_{mn}) \right];$$
step (6) calculating the restoration formula of the original pixel stream H_mn: from the degradation model of the observation pixel stream established in step (4) and the probability estimation formula of the original pixel stream H_mn obtained in step (5), taking logarithms yields the restoration formula of the original pixel stream H_mn
$$\hat{H}_{mn} = \arg\max_{H_{mn}} \left( \log P(H_{mn}) - \alpha \left\| I_{mn} - D B H_{mn} \right\|_2^2 \right),$$
where \hat{H}_{mn} is the restored pixel stream, \log P(H_{mn}) is the temporal prior information term of the original pixel stream, and α is an optimization parameter;
step (7) restoring the observation pixel streams: using the restoration formula of the original pixel stream H_mn obtained in step (6), restoring stream by stream, in subscript order, the single video expressed in step (3) as a matrix of observation pixel streams; each I_mn in the matrix is restored by the following steps to obtain a restored pixel stream \hat{H}_{mn}:
step (7-1) determining the temporal prior model P(H_mn) of the observed pixel stream and judging whether it is of Gaussian or Laplace type: from the observed pixel stream I_mn, determining in a data-driven manner whether the temporal prior model P(H_mn) of the observed pixel stream is of Gaussian type P_G(Γ H_mn) or Laplace type P_L(Γ H_mn), where Γ denotes a high-pass operator acting on the signal;
step (7-2) deriving the partial derivative equation from the restoration formula of the original pixel stream H_mn: after the temporal prior model P(H_mn) is determined, from the restoration formula of the original pixel stream H_mn obtained in step (6)
$$\hat{H}_{mn} = \arg\max_{H_{mn}} \left( \log P(H_{mn}) - \alpha \left\| I_{mn} - D B H_{mn} \right\|_2^2 \right),$$
deriving the partial derivative equation
$$\frac{\partial}{\partial H_{mn}} \left[ \log P(H_{mn}) \right] + \alpha B^T D^T \left( I_{mn} - D B H_{mn} \right) = 0;$$
step (7-3) performing linear interpolation on the observed pixel stream I_mn to obtain a pixel stream \hat{H}^0_{mn} as the initial value of the iteration;
step (7-4) solving the partial derivative equation obtained in step (7-2) iteratively by the conjugate gradient method to obtain the restored pixel stream \hat{H}_{mn};
step (8) combining to obtain the restored video: combining the restored pixel streams \hat{H}_{mn} restored one by one in step (7) in matrix form into a restored video H as output, the restored video H being represented in matrix form as H = [\hat{H}_{mn}].
The invention is further configured to: the video acquisition in the step (1) can use a camera to shoot a moving scene to obtain a single video.
The invention is further configured to: the optimization parameter α in step (6) is set to its value by a trial-and-error method.
The invention is further configured to: the determination in step (7-1) of the temporal prior model P(H_mn) of the observed pixel stream, and the judgment of whether it is of Gaussian or Laplace type, are made by a data-driven method comprising the following steps:
step (7-1-1) performing linear interpolation on the observed pixel stream I_mn to obtain a pixel stream \hat{H}^0_{mn};
step (7-1-2) establishing, for the high-pass version Γ\hat{H}^0_{mn} of the pixel stream \hat{H}^0_{mn}, the Gaussian prior model
$$P_G(\Gamma \hat{H}^0_{mn}) = \left( 2\pi\sigma_G^2 \right)^{-K/2} \exp\left\{ -\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_2^2}{2\sigma_G^2} \right\}$$
and the Laplace prior model
$$P_L(\Gamma \hat{H}^0_{mn}) = \left( 2\sigma_L \right)^{-K} \exp\left\{ -\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_1}{\sigma_L} \right\},$$
where σ_G and σ_L denote the standard deviations of the Gaussian and Laplace prior models respectively, and K denotes the dimension of Γ\hat{H}^0_{mn};
step (7-1-3) using the Gaussian and Laplace prior models established in step (7-1-2), with Γ\hat{H}^0_{mn} taken as known, estimating the standard deviations according to the maximum likelihood rule as
$$\hat{\sigma}_G = \sqrt{\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_2^2}{K}}, \qquad \hat{\sigma}_L = \frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_1}{K};$$
step (7-1-4) evaluating P_G(Γ\hat{H}^0_{mn}) with \hat{\sigma}_G and P_L(Γ\hat{H}^0_{mn}) with \hat{\sigma}_L and comparing the two values; if P_G(Γ\hat{H}^0_{mn}) is greater than P_L(Γ\hat{H}^0_{mn}), the temporal prior model of the observed pixel stream is judged to be of Gaussian type; otherwise, it is judged to be of Laplace type.
Compared with the prior art, the invention has the following beneficial effects:
the method restores a single acquired video on the basis of pixel streams, and is therefore free from the conditions required by multi-video restoration, such as whether the number of devices is sufficient and whether the device models are uniform; it does not require the complex procedures of multi-video synchronous acquisition or temporal registration, so the path is simple and the restoration pipeline is concise. Frame rate restoration is realized through a probabilistic statistical framework over the original pixel streams, which avoids the arbitrariness of an assumed interpolation function and can greatly improve video fidelity. At the same time, temporal prior information is introduced into the probabilistic framework and the temporal prior model is determined in a data-driven manner, which improves the reliability of the prior model and effectively eliminates the smearing caused by long exposure times.
The foregoing is only an overview of the technical solutions of the present invention, and in order to more clearly understand the technical solutions of the present invention, the present invention is further described below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a method of constructing a pixel stream to be restored;
FIG. 3 is a schematic diagram of a structured stream of observed pixels combined in a matrix form into a single video;
FIG. 4 is a schematic side view of a single video represented in a matrix form of a stream of observed pixels;
FIG. 5 is a diagram of the relationship between the original pixel stream and the observed pixel stream;
FIG. 6 is a flow chart for restoring an observed pixel stream to an original pixel stream;
FIG. 7 is a flow chart for determining a temporal prior model of observed pixel flow based on a data-driven approach.
Detailed Description
The invention is further described with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a single-video frame rate restoration method based on pixel streams and temporal prior information, which comprises: obtaining the single video to be restored, constructing observation pixel streams as the pixel streams to be restored, and representing the single video as a matrix of observation pixel streams; then establishing a degradation model of the observation pixel stream and a probability estimation formula of the original pixel stream, and deriving a restoration formula of the original pixel stream containing temporal prior information; restoring the observation pixel streams one by one, with the temporal prior model used in each restoration determined in a data-driven manner; and finally combining the restored pixel streams, restored one by one, into a restored video in matrix form, which serves as the final restored high-frame-rate video. The method specifically comprises the following steps:
step (1) acquiring the video to be restored: video capture is completed by shooting a moving scene with a camera, obtaining a single video I = {I(i) | i ∈ N} as the video to be restored, where I(i) is a frame of the video and i is the temporal index of each frame.
step (2) designing a construction method of the pixel stream to be restored: in frame order, the pixels I_mn(i) of each frame of the single video located at the same coordinate (m, n) are connected in series to form an observation pixel stream I_mn = {I_mn(i) | i ∈ N}, which serves as the pixel stream to be restored, as shown in fig. 2.
step (3) representing the single video as a matrix of observation pixel streams: following the construction method of step (2), observation pixel streams are constructed one by one in coordinate order within each frame of the single video, and the constructed observation pixel streams are combined, as shown in fig. 3, so that the single video can be expressed in matrix form as I = [I_mn]; each element of the matrix is a constructed observation pixel stream. Fig. 4 is a schematic side view of a single video represented as a matrix of observation pixel streams.
Step (4) establishing a degradation model of the observation pixel flow: i ismn=DBHmn+ E, where D is the time down-sampling matrix, B is the time-blurring matrix, used to simulate the exposure time, HmnE is the added gaussian distributed noise vector for the original pixel stream.
Calculating to obtain an original pixel stream H in step (5)mnThe probability estimation equation of (1):
according to the Bayes probability statistical rule, calculating to obtain the original pixel flow HmnIs estimated as
H ^ m n = arg max H m n P ( H m n | I m n ) = arg max H m n [ P ( I m n | H m n ) P ( H m n ) ] ;
Restoration of the original pixel stream is achieved by a probabilistic statistical framework, rather than estimating the original pixel stream H using interpolationmnAnd the fidelity of the recovery result is higher.
Calculating to obtain an original pixel stream HmnThe restoration formula of (2):
according to the degradation model of the observed pixel flow established in the step (4) and the original pixel flow H obtained in the step (5)mnIs obtained by logarithmic calculation to obtain the original pixel stream HmnIs restored to the formula
H ^ m n = arg max H m n ( log P ( H m n ) - α | | I m n - DBH m n | | 2 2 ) ,
Wherein,to restore the pixel stream, logP (H)mn) The time prior information item representing the original pixel stream, α, is an optimized parameter that can be set by trial and error to take a value.
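The logarithmic step connecting the estimate of step (5) with the restoration formula of step (6) can be made explicit. A minimal sketch, under the assumption that the noise vector E is zero-mean Gaussian with variance σ² and that constants independent of H_mn are dropped:

$$
\begin{aligned}
\hat{H}_{mn} &= \arg\max_{H_{mn}} \left[ P(I_{mn} \mid H_{mn})\, P(H_{mn}) \right]
             = \arg\max_{H_{mn}} \left[ \log P(I_{mn} \mid H_{mn}) + \log P(H_{mn}) \right] \\
             &= \arg\max_{H_{mn}} \left[ \log P(H_{mn}) - \frac{1}{2\sigma^2} \left\| I_{mn} - D B H_{mn} \right\|_2^2 \right],
\end{aligned}
$$

since the degradation model of step (4) gives P(I_mn | H_mn) ∝ exp{−‖I_mn − D B H_mn‖₂² / (2σ²)}; identifying α with 1/(2σ²) recovers the restoration formula above.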
The introduction of the temporal prior information in step (6) effectively eliminates the smearing (trailing) of the video.
The smearing phenomenon is essentially the result of the temporal convolution of the original pixel stream H_mn with the temporal blurring matrix B that represents the exposure time: as shown in fig. 5, each pixel value of I_mn is mainly a convolution sum over several pixels of H_mn, so eliminating the smearing is naturally a deconvolution problem.
If the original pixel stream H_mn were restored only by minimum-mean-square-error deconvolution, which is essentially a pure likelihood estimate, the temporal prior would be ignored; it would then be difficult to decompose the convolution form implicit in the observed pixel stream, and hence difficult to eliminate the smearing.
step (7) restoring the observation pixel streams:
using the restoration formula of the original pixel stream H_mn obtained in step (6), the single video expressed in step (3) as a matrix of observation pixel streams is restored stream by stream in subscript order; each I_mn in the matrix is restored by the steps shown in fig. 6 to obtain a restored pixel stream \hat{H}_{mn}.
step (7-1) determining the temporal prior model P(H_mn) of the observed pixel stream and judging whether it is of Gaussian or Laplace type:
from the observed pixel stream I_mn, the temporal prior model P(H_mn) of the observed pixel stream is determined in a data-driven manner to be of Gaussian type P_G(Γ H_mn) or Laplace type P_L(Γ H_mn), where Γ denotes a high-pass operator acting on the signal. Specifically, the temporal prior model is determined by the data-driven steps shown in fig. 7, which makes the model approximate the data characteristics of the original pixel stream more closely and avoids the arbitrariness of a model that is assumed artificially.
step (7-1-1) performing linear interpolation on the observed pixel stream I_mn to obtain a pixel stream \hat{H}^0_{mn}.
step (7-1-2) establishing, for the high-pass version Γ\hat{H}^0_{mn} of the pixel stream \hat{H}^0_{mn}, the Gaussian prior model
$$P_G(\Gamma \hat{H}^0_{mn}) = \left( 2\pi\sigma_G^2 \right)^{-K/2} \exp\left\{ -\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_2^2}{2\sigma_G^2} \right\}$$
and the Laplace prior model
$$P_L(\Gamma \hat{H}^0_{mn}) = \left( 2\sigma_L \right)^{-K} \exp\left\{ -\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_1}{\sigma_L} \right\},$$
where σ_G and σ_L denote the standard deviations of the Gaussian and Laplace prior models respectively, and K denotes the dimension of Γ\hat{H}^0_{mn}.
step (7-1-3) using the Gaussian and Laplace prior models established in step (7-1-2), with Γ\hat{H}^0_{mn} taken as known, estimating the standard deviations according to the maximum likelihood rule as
$$\hat{\sigma}_G = \sqrt{\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_2^2}{K}}, \qquad \hat{\sigma}_L = \frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_1}{K}.$$
step (7-1-4) evaluating P_G(Γ\hat{H}^0_{mn}) with \hat{\sigma}_G and P_L(Γ\hat{H}^0_{mn}) with \hat{\sigma}_L and comparing the two values; if P_G(Γ\hat{H}^0_{mn}) is greater than P_L(Γ\hat{H}^0_{mn}), the temporal prior model of the observed pixel stream is judged to be of Gaussian type; otherwise, it is judged to be of Laplace type.
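As an illustrative sketch only, the data-driven decision of steps (7-1-1) to (7-1-4) could look as follows; the choice of first differences as the high-pass operator Γ and all function names are assumptions of this sketch:

```python
import numpy as np

def choose_prior(I_mn: np.ndarray, K: int) -> str:
    """Data-driven choice between Gaussian and Laplace temporal priors
    (steps (7-1-1) to (7-1-4)). I_mn is the observed stream of length N;
    K is the length of the interpolated high-rate stream."""
    N = len(I_mn)
    # Step (7-1-1): linear interpolation onto the high-frame-rate grid.
    H0 = np.interp(np.linspace(0, N - 1, K), np.arange(N), I_mn)
    # High-pass version Gamma * H0, with Gamma taken here as first differences.
    g = np.diff(H0)
    Kg = len(g)
    # Step (7-1-3): maximum-likelihood standard deviations.
    sigma_G = np.sqrt(np.sum(g ** 2) / Kg)
    sigma_L = np.sum(np.abs(g)) / Kg
    # Step (7-1-4): compare the two prior likelihoods (in the log domain).
    log_PG = -0.5 * Kg * np.log(2 * np.pi * sigma_G ** 2) - np.sum(g ** 2) / (2 * sigma_G ** 2)
    log_PL = -Kg * np.log(2 * sigma_L) - np.sum(np.abs(g)) / sigma_L
    return "gaussian" if log_PG > log_PL else "laplace"
```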
step (7-2) deriving the partial derivative equation from the restoration formula of the original pixel stream H_mn:
after the temporal prior model P(H_mn) is determined, from the restoration formula of the original pixel stream H_mn obtained in step (6)
$$\hat{H}_{mn} = \arg\max_{H_{mn}} \left( \log P(H_{mn}) - \alpha \left\| I_{mn} - D B H_{mn} \right\|_2^2 \right),$$
the partial derivative equation is derived as
$$\frac{\partial}{\partial H_{mn}} \left[ \log P(H_{mn}) \right] + \alpha B^T D^T \left( I_{mn} - D B H_{mn} \right) = 0.$$
step (7-3) performing linear interpolation on the observed pixel stream I_mn to obtain a pixel stream \hat{H}^0_{mn} as the initial value of the iteration.
step (7-4) solving the partial derivative equation obtained in step (7-2) iteratively by the conjugate gradient method to obtain the restored pixel stream \hat{H}_{mn}.
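For the Gaussian-prior case the partial derivative equation becomes linear. A sketch under the assumption that log P(H_mn) ≈ −‖Γ H_mn‖₂² / (2σ_G²) up to a constant:

$$\left( \frac{1}{\sigma_G^2} \Gamma^T \Gamma + \alpha B^T D^T D B \right) H_{mn} = \alpha B^T D^T I_{mn},$$

a symmetric positive semi-definite system well suited to the conjugate gradient iteration of step (7-4); for the Laplace prior the derivative of the l1 term is not linear, and the iteration works on the corresponding sub-gradient instead. A minimal numerical sketch of the Gaussian-prior case with scipy's conjugate gradient solver (all matrix and parameter names are assumptions carried over from the earlier sketches):

```python
import numpy as np
from scipy.sparse.linalg import cg

def restore_stream_gaussian(I_mn, D, B, sigma_G, alpha):
    """Steps (7-3)-(7-4), Gaussian-prior case: solve the linear system
    (Gamma^T Gamma / sigma_G^2 + alpha B^T D^T D B) H = alpha B^T D^T I_mn
    by conjugate gradient, starting from the linear interpolation of I_mn."""
    K, N = B.shape[0], len(I_mn)
    # Gamma as a first-difference high-pass operator (illustrative choice).
    Gamma = np.eye(K)[:-1] - np.eye(K, k=1)[:-1]
    A = Gamma.T @ Gamma / sigma_G ** 2 + alpha * B.T @ D.T @ D @ B
    b = alpha * B.T @ D.T @ I_mn
    # Step (7-3): iteration initial value from linear interpolation of I_mn.
    H0 = np.interp(np.linspace(0, N - 1, K), np.arange(N), I_mn)
    H_hat, info = cg(A, b, x0=H0)   # step (7-4): conjugate gradient iteration
    return H_hat
```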
step (8) combining to obtain the restored video: the restored pixel streams \hat{H}_{mn} restored one by one in step (7) are combined in matrix form into a restored video H as output, the restored video H being represented in matrix form as H = [\hat{H}_{mn}].
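The matrix-form combination of step (8) is simply the inverse of the reshaping sketched for step (3); a brief illustration (function and variable names are assumptions):

```python
import numpy as np

def to_video(restored_streams: np.ndarray) -> np.ndarray:
    """Step (8): matrix of restored pixel streams of shape (M, Nc, K)
    -> restored high-frame-rate video H of shape (K, M, Nc)."""
    return np.ascontiguousarray(np.moveaxis(restored_streams, -1, 0))

# Example: 240x320 restored streams of length 120 give 120 restored frames
restored_streams = np.random.rand(240, 320, 120)
H = to_video(restored_streams)      # shape (120, 240, 320)
```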
The innovations of the invention are that a single video is used, so that video acquisition is convenient and the restoration pipeline is concise; that frame rate restoration is carried out within a probabilistic statistical framework over pixel streams; and that temporal prior information is introduced, with the temporal prior model determined in a data-driven manner, which improves video fidelity and effectively eliminates the smearing of video frames.
The foregoing illustrates and describes the principles, general features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (4)

1. A method for single video frame rate restoration based on pixel stream and temporal prior information, comprising the steps of:
step (1) acquiring the video to be restored: acquiring a single video I = {I(i) | i ∈ N} through video capture, where I(i) is a frame of the video and i is the temporal index of each frame;
step (2) designing a construction method of the pixel stream to be restored: in frame order, connecting in series the pixels I_mn(i) of each frame of the single video located at the same coordinate (m, n) to form an observation pixel stream I_mn = {I_mn(i) | i ∈ N}, which serves as the pixel stream to be restored;
step (3) representing the single video as a matrix of observation pixel streams: following the construction method of step (2), constructing observation pixel streams one by one in coordinate order within each frame of the single video and combining them, so that the single video can be expressed in matrix form as I = [I_mn], each element of the matrix being a constructed observation pixel stream;
step (4) establishing a degradation model of the observation pixel stream: I_mn = D B H_mn + E, where D is the temporal down-sampling matrix, B is the temporal blurring matrix used to simulate the exposure time, H_mn is the original pixel stream, and E is an additive Gaussian noise vector;
step (5) calculating the probability estimation formula of the original pixel stream H_mn: according to the Bayesian probability rule, the estimate of the original pixel stream H_mn is calculated as
$$\hat{H}_{mn} = \arg\max_{H_{mn}} P(H_{mn} \mid I_{mn}) = \arg\max_{H_{mn}} \left[ P(I_{mn} \mid H_{mn})\, P(H_{mn}) \right];$$
step (6) calculating the restoration formula of the original pixel stream H_mn: from the degradation model of the observation pixel stream established in step (4) and the probability estimation formula of the original pixel stream H_mn obtained in step (5), taking logarithms yields the restoration formula of the original pixel stream H_mn
$$\hat{H}_{mn} = \arg\max_{H_{mn}} \left( \log P(H_{mn}) - \alpha \left\| I_{mn} - D B H_{mn} \right\|_2^2 \right),$$
where \hat{H}_{mn} is the restored pixel stream, \log P(H_{mn}) is the temporal prior information term of the original pixel stream, and α is an optimization parameter;
step (7) restoring the observation pixel streams: using the restoration formula of the original pixel stream H_mn obtained in step (6), restoring stream by stream, in subscript order, the single video expressed in step (3) as a matrix of observation pixel streams; each I_mn in the matrix is restored by the following steps to obtain a restored pixel stream \hat{H}_{mn}:
step (7-1) determining the temporal prior model P(H_mn) of the observed pixel stream and judging whether it is of Gaussian or Laplace type: from the observed pixel stream I_mn, determining in a data-driven manner whether the temporal prior model P(H_mn) of the observed pixel stream is of Gaussian type P_G(Γ H_mn) or Laplace type P_L(Γ H_mn), where Γ denotes a high-pass operator acting on the signal;
step (7-2) deriving the partial derivative equation from the restoration formula of the original pixel stream H_mn: after the temporal prior model P(H_mn) is determined, from the restoration formula of the original pixel stream H_mn obtained in step (6)
$$\hat{H}_{mn} = \arg\max_{H_{mn}} \left( \log P(H_{mn}) - \alpha \left\| I_{mn} - D B H_{mn} \right\|_2^2 \right),$$
deriving the partial derivative equation
$$\frac{\partial}{\partial H_{mn}} \left[ \log P(H_{mn}) \right] + \alpha B^T D^T \left( I_{mn} - D B H_{mn} \right) = 0;$$
step (7-3) performing linear interpolation on the observed pixel stream I_mn to obtain a pixel stream \hat{H}^0_{mn} as the initial value of the iteration;
step (7-4) solving the partial derivative equation obtained in step (7-2) iteratively by the conjugate gradient method to obtain the restored pixel stream \hat{H}_{mn};
step (8) combining to obtain the restored video: combining the restored pixel streams \hat{H}_{mn} restored one by one in step (7) in matrix form into a restored video H as output, the restored video H being represented in matrix form as H = [\hat{H}_{mn}].
2. The single-video frame rate restoration method based on pixel stream and temporal prior information according to claim 1, characterized in that: the video acquisition in step (1) can use a camera to shoot a moving scene to obtain a single video.
3. The single-video frame rate restoration method based on pixel stream and temporal prior information according to claim 1, characterized in that: the optimization parameter α in step (6) is set by a trial-and-error method.
4. The single-video frame rate restoration method based on pixel stream and temporal prior information according to claim 1, characterized in that: the determination in step (7-1) of the temporal prior model P(H_mn) of the observed pixel stream, and the judgment of whether it is of Gaussian or Laplace type, are made by a data-driven method comprising the following steps:
step (7-1-1) performing linear interpolation on the observed pixel stream I_mn to obtain a pixel stream \hat{H}^0_{mn};
step (7-1-2) establishing, for the high-pass version Γ\hat{H}^0_{mn} of the pixel stream \hat{H}^0_{mn}, the Gaussian prior model
$$P_G(\Gamma \hat{H}^0_{mn}) = \left( 2\pi\sigma_G^2 \right)^{-K/2} \exp\left\{ -\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_2^2}{2\sigma_G^2} \right\}$$
and the Laplace prior model
$$P_L(\Gamma \hat{H}^0_{mn}) = \left( 2\sigma_L \right)^{-K} \exp\left\{ -\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_1}{\sigma_L} \right\},$$
where σ_G and σ_L denote the standard deviations of the Gaussian and Laplace prior models respectively, and K denotes the dimension of Γ\hat{H}^0_{mn};
step (7-1-3) using the Gaussian and Laplace prior models established in step (7-1-2), with Γ\hat{H}^0_{mn} taken as known, estimating the standard deviations according to the maximum likelihood rule as
$$\hat{\sigma}_G = \sqrt{\frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_2^2}{K}}, \qquad \hat{\sigma}_L = \frac{\left\| \Gamma \hat{H}^0_{mn} \right\|_1}{K};$$
step (7-1-4) evaluating P_G(Γ\hat{H}^0_{mn}) with \hat{\sigma}_G and P_L(Γ\hat{H}^0_{mn}) with \hat{\sigma}_L and comparing the two values; if P_G(Γ\hat{H}^0_{mn}) is greater than P_L(Γ\hat{H}^0_{mn}), the temporal prior model of the observed pixel stream is judged to be of Gaussian type; otherwise, it is judged to be of Laplace type.
CN201510414187.4A 2015-07-14 2015-07-14 A kind of single video frame per second restored method based on pixel stream and time prior imformation Expired - Fee Related CN105005977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510414187.4A CN105005977B (en) 2015-07-14 2015-07-14 A kind of single video frame per second restored method based on pixel stream and time prior imformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510414187.4A CN105005977B (en) 2015-07-14 2015-07-14 A kind of single video frame per second restored method based on pixel stream and time prior imformation

Publications (2)

Publication Number Publication Date
CN105005977A CN105005977A (en) 2015-10-28
CN105005977B true CN105005977B (en) 2016-04-27

Family

ID=54378636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510414187.4A Expired - Fee Related CN105005977B (en) 2015-07-14 2015-07-14 A kind of single video frame per second restored method based on pixel stream and time prior imformation

Country Status (1)

Country Link
CN (1) CN105005977B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112291676B (en) * 2020-05-18 2021-10-15 珠海市杰理科技股份有限公司 Method and system for inhibiting audio signal tailing, chip and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5493513A (en) * 1993-11-24 1996-02-20 Intel Corporation Process, apparatus and system for encoding video signals using motion estimation
CN104103050A (en) * 2014-08-07 2014-10-15 重庆大学 Real video recovery method based on local strategies
CN104376547A (en) * 2014-11-04 2015-02-25 中国航天科工集团第三研究院第八三五七研究所 Motion blurred image restoration method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5493513A (en) * 1993-11-24 1996-02-20 Intel Corporation Process, apparatus and system for encoding video signals using motion estimation
CN104103050A (en) * 2014-08-07 2014-10-15 重庆大学 Real video recovery method based on local strategies
CN104376547A (en) * 2014-11-04 2015-02-25 中国航天科工集团第三研究院第八三五七研究所 Motion blurred image restoration method

Also Published As

Publication number Publication date
CN105005977A (en) 2015-10-28

Similar Documents

Publication Publication Date Title
CN105847804B (en) A kind of up-conversion method of video frame rate based on sparse redundant representation model
WO2021208122A1 (en) Blind video denoising method and device based on deep learning
CN111028150B (en) Rapid space-time residual attention video super-resolution reconstruction method
CN107274347A (en) A kind of video super-resolution method for reconstructing based on depth residual error network
CN101504765B (en) Motion blur image sequence restoration method employing gradient amalgamation technology
CN110458756A (en) Fuzzy video super-resolution method and system based on deep learning
CN112102163B (en) Continuous multi-frame image super-resolution reconstruction method based on multi-scale motion compensation framework and recursive learning
CN111614965B (en) Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering
CN104867111A (en) Block-blur-kernel-set-based heterogeneous video blind deblurring method
CN105872345A (en) Full-frame electronic image stabilization method based on feature matching
Liu et al. Large motion video super-resolution with dual subnet and multi-stage communicated upsampling
CN116862773A (en) Video super-resolution reconstruction method applied to complex scene
CN105957036A (en) Video motion blur removing method strengthening character prior
CN109658361A (en) A kind of moving scene super resolution ratio reconstruction method for taking motion estimation error into account
CN103310486A (en) Reconstruction method of atmospheric turbulence degraded images
Xie et al. Mitigating artifacts in real-world video super-resolution models
Fan et al. Joint appearance and motion learning for efficient rolling shutter correction
CN114494050A (en) Self-supervision video deblurring and image frame inserting method based on event camera
Liao et al. Synthetic aperture imaging with events and frames
CN105005977B (en) A kind of single video frame per second restored method based on pixel stream and time prior imformation
CN102222321A (en) Blind reconstruction method for video sequence
Nie et al. High frame rate video reconstruction and deblurring based on dynamic and active pixel vision image sensor
CN103517078A (en) Side information generating method in distribution type video code
CN104182931A (en) Super resolution method and device
Chae et al. Siamevent: Event-based object tracking via edge-aware similarity learning with siamese networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160427

Termination date: 20210714