CN112200732A - Video deblurring method with clear feature fusion - Google Patents

Video deblurring method with clear feature fusion

Info

Publication number
CN112200732A
CN112200732A (application CN202010368483.6A)
Authority
CN
China
Prior art keywords
clear
feature
deblurring
frames
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010368483.6A
Other languages
Chinese (zh)
Other versions
CN112200732B (en)
Inventor
Wei Hao (魏颢)
Xiang Xinguang (项欣光)
Pan Jinshan (潘金山)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202010368483.6A priority Critical patent/CN112200732B/en
Publication of CN112200732A publication Critical patent/CN112200732A/en
Application granted granted Critical
Publication of CN112200732B publication Critical patent/CN112200732B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/73: Deblurring; Sharpening
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a video deblurring method with clear feature fusion. First, several consecutive blurred video frames are selected, an optical flow estimation network estimates the optical flow between the consecutive frames, and the estimated flow is used to warp the images of the adjacent frames. The warped results and the original blurred frame sequence are then taken as the input of a deblurring network. Next, several clear frames are selected and passed through a clear feature extraction module to obtain clear features, which are fused into the deblurring network; the deblurring network finally outputs relatively clear video frames. The method is robust to the scene of the clear frames: clear frames from any scene can be used for feature fusion and facilitate the reconstruction of the video frames, making the method convenient and effective.

Description

Video deblurring method with clear feature fusion
Technical Field
The invention relates to an end-to-end video deblurring network, and in particular to a video deblurring algorithm based on clear feature fusion.
Background
In recent years, with the development of portable imaging devices such as mobile phones and cameras, image/video deblurring techniques have received much attention. Blur arises for many reasons, including the motion of objects during imaging, camera shake, and depth of field, and it hinders computer vision tasks such as object detection and object recognition. It is therefore very necessary to study deblurring algorithms.
At present, deblurring algorithms fall mainly into two categories: methods based on a physical model and methods based on learning. Early work relied on the degradation model B = K * S + N, where B denotes the blurred image, K the blur kernel, * the convolution operation, S the clear image, and N additive noise. Given only the blurred image B, solving for the blur kernel K and the clear image S is very difficult and may admit multiple solutions, so it is an ill-posed problem. To constrain the solution space, many natural image priors have been designed, including the L0 gradient prior and the dark channel prior, and the solution is obtained under a maximum a posteriori framework. However, such physical-model-based methods are difficult to optimize, time-consuming, lack generality, and require artificially designed prior information as constraints.
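For illustration only, the following minimal Python sketch (using NumPy and SciPy; the motion-blur kernel and noise level are hypothetical choices, not taken from the invention) synthesizes a blurred image B from a clear image S according to this degradation model:

    import numpy as np
    from scipy.signal import convolve2d

    def degrade(S, K, noise_sigma=0.01):
        # B = K * S + N: convolve each color channel of the clear image S
        # with the blur kernel K, then add zero-mean Gaussian noise N.
        B = np.stack([convolve2d(S[..., c], K, mode='same', boundary='symm')
                      for c in range(S.shape[-1])], axis=-1)
        B += np.random.normal(0.0, noise_sigma, size=B.shape)
        return np.clip(B, 0.0, 1.0)

    # Hypothetical 9-pixel horizontal motion-blur kernel.
    K = np.zeros((9, 9)); K[4, :] = 1.0 / 9.0
    S = np.random.rand(256, 256, 3)  # stand-in for a clear RGB image in [0, 1]
    B = degrade(S, K)

Deblurring inverts this process: estimating S (and possibly K) from B alone.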
The other, learning-based approach is currently the more popular one: by designing a reasonable neural network, the inherent distribution of natural images is learned from datasets, finally achieving the aim of deblurring.
The invention provides an end-to-end video deblurring network; unlike single-image deblurring, video deblurring needs to consider the relation between adjacent frames.
Disclosure of Invention
The invention aims to provide an end-to-end video deblurring algorithm based on clear feature fusion, which takes a plurality of consecutive blurred video frames as input and recovers a clear intermediate frame.
The technical solution for realizing the purpose of the invention is as follows: a video deblurring algorithm based on clear feature fusion, comprising the following steps:
step A: design an optical flow estimation module; three consecutive blurred frames are input to the module, which outputs the results of warping (image warp) the two adjacent frames;
step B: design a clear feature fusion module; any three clear frames are input to the module, which outputs clear feature maps at different scales;
step C: design a deblurring module; the result of step A is stacked with the original input and fed into the module, the result of step B is fused into the whole deblurring process, and a clear intermediate frame is finally obtained.
Compared with the prior art, the invention has the following notable advantages: it considers motion compensation and image deblurring jointly, and it is robust to the scene of the clear frames, since clear frames from any scene can be used for feature fusion and facilitate the reconstruction of video frames, making the method convenient and effective.
Drawings
FIG. 1 is an overall flow chart of the present invention.
Fig. 2 is a diagram of a network architecture designed by the present invention.
Fig. 3 is a comparison of the deblurring effect of the present invention: (a) is a blurred video frame and (b) is the reconstruction result.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in fig. 2, the whole network includes three modules: an optical flow estimation module, a clear feature fusion module, and a deblurring module.
According to the first panel of fig. 1, the data preparation steps are as follows:
step 1, download the GOPRO_Su dataset as training samples; the dataset contains 71 videos, each providing a number of paired blurred and clear video frames;
step 2, divide the 71 videos into two parts: 61 videos serve as training samples and 10 videos serve as testing samples.
According to the second panel of fig. 1, the steps of the optical flow estimation module are as follows:
step 1, preset the training parameters, including a learning rate of 1e-6 for the optical flow estimation module, a learning rate of 1e-4 for the deblurring module, and a maximum iteration epoch of 500;
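A minimal sketch, assuming PyTorch and the Adam optimizer (the patent specifies neither), of how these per-module learning rates could be grouped in one optimizer:

    import torch
    import torch.nn as nn

    # Hypothetical placeholders for the two sub-networks.
    flow_net = nn.Conv2d(6, 4, 3, padding=1)     # stand-in for FlowNetS
    deblur_net = nn.Conv2d(15, 3, 3, padding=1)  # stand-in for the deblurring module

    optimizer = torch.optim.Adam([
        {'params': flow_net.parameters(), 'lr': 1e-6},    # optical flow module
        {'params': deblur_net.parameters(), 'lr': 1e-4},  # deblurring module
    ])
    max_epoch = 500  # maximum iteration epoch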
step 2, select three consecutive blurred images {I1, I2, I3} and estimate the optical flow between each pair of adjacent frames {f1→2, f3→2} with FlowNetS, obtaining two 2-channel optical flow maps; then perform an image warping (image warp) operation with each flow map and the corresponding adjacent frame, warping the adjacent frames toward the intermediate frame and finally obtaining two 3-channel RGB images {warp1→2, warp3→2}. The process can be described as follows:
f1→2 = F([I1, I2]), f3→2 = F([I3, I2]),
warp1→2 = W([f1→2, I1]), warp3→2 = W([f3→2, I3]);
where F(·) denotes the optical flow estimation network, [·] denotes the image concat operation, and W(·) denotes the warp operation.
According to the third panel of fig. 1, the steps of the clear feature fusion module are as follows:
Select three clear images {S1, S2, S3} and pass them through the feature extraction module, which consists of two convolution layers for each scale, finally obtaining 32 feature maps of size 256 × 256 {feature_coarse} and feature maps of size 64 × 64 {feature_fine}. The process can be described as follows:
feature_coarse, feature_fine = E([S1, S2, S3]);
where E(·) denotes the feature extraction network and [·] denotes the image concat operation.
According to the fourth panel of fig. 1, the steps of the deblurring module are as follows:
step 1, take the warped results {warp1→2, warp3→2} from the optical flow estimation module together with the original blurred video frames {I1, I2, I3} and send them into the deblurring module;
step 2, the deblurring module is an encoder-decoder structure, in which each encoding block consists of one convolution layer plus three residual blocks, and each decoding block consists of three residual blocks plus one deconvolution. During deblurring, the clear feature maps {feature_coarse, feature_fine} obtained by the clear feature fusion module are embedded into the encoder and decoder parts of the deblurring module, finally yielding a clear image R2 corresponding to I2. The loss between the generated clear image R2 and the reference clear image G2 of the original I2 is computed to update the entire network. The process can be described as follows:
input = [warp1→2, I1, I2, I3, warp3→2],
En_out = En(input, feature_coarse, feature_fine),
De_out = De(En_out, feature_coarse, feature_fine);
where En(·) denotes the encoder part and De(·) denotes the decoder part.
And 3, training according to the steps until the maximum iteration number is reached, and finishing training when the generated model is reached.
According to the fifth panel of fig. 1, the test procedure is as follows:
step 1, input the 10 video test samples obtained in the data preparation part into the trained model;
step 2, select the clear frames: any clear frames, even from a scene different from the blurred one, may be selected and input into the trained model to finally obtain a clear image. Fig. 3 shows the visual effect of the present invention.

Claims (4)

1. A video deblurring method with clear feature fusion, specifically comprising the following steps:
Step A: designing an optical flow estimation module, inputting three continuous fuzzy frames into the optical flow estimation module, and outputting a result obtained by performing image torsion on two adjacent frames;
and B: designing a clear feature fusion module, inputting any clear three frames into the module, and outputting clear feature graphs with different scales;
and C: and designing a deblurring module, stacking the result of the step A and the original input, sending the result into the module, and fusing the result of the step B into the whole deblurring process to finally obtain a clear intermediate frame.
2. The method according to claim 1, wherein step A specifically comprises the following steps:
step A01, preset the training parameters, including a learning rate of 1e-6 for the optical flow estimation module, a learning rate of 1e-4 for the deblurring module, and a maximum iteration epoch of 500;
step A02, select three consecutive blurred images {I1, I2, I3} and estimate the optical flow between each pair of adjacent frames {flow1→2, flow3→2} with FlowNetS, obtaining two 2-channel optical flow maps; perform an image warping (image warp) operation with each flow map and the corresponding adjacent frame, warping the adjacent frames toward the intermediate frame and finally obtaining two 3-channel RGB images {warp1→2, warp3→2}.
3. The method according to claim 1, wherein step B specifically comprises the following steps:
step B01, select three clear images {S1, S2, S3} whose content differs from that of the blurred images in step A02, and pass them through the feature extraction module, obtaining 32 feature maps of size 256 × 256 {feature_coarse} and feature maps of size 64 × 64 {feature_fine}.
4. The method according to claim 1, wherein step C specifically comprises the following steps:
step C01, take the warped results {warp1→2, warp3→2} from step A02 together with the original blurred video frames {I1, I2, I3} and send them into the deblurring module; during deblurring, embed the clear feature maps {feature_coarse, feature_fine} obtained in step B01 into the deblurring module, finally obtaining a clear image R2 corresponding to I2; compute the loss between the generated clear image R2 and the reference clear image G2 of the original I2 to update the entire network;
step C02, train according to the above steps until the maximum number of iterations is reached; training is then complete and the trained model is obtained;
step C03, after the model is trained, input three consecutive blurred video frames and select clear frames from any scene for clear feature extraction, finally obtaining a clear image.
CN202010368483.6A 2020-04-30 2020-04-30 Video deblurring method with clear feature fusion Active CN112200732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010368483.6A CN112200732B (en) 2020-04-30 2020-04-30 Video deblurring method with clear feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010368483.6A CN112200732B (en) 2020-04-30 2020-04-30 Video deblurring method with clear feature fusion

Publications (2)

Publication Number Publication Date
CN112200732A 2021-01-08
CN112200732B 2022-10-21

Family

ID=74005893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010368483.6A Active CN112200732B (en) 2020-04-30 2020-04-30 Video deblurring method with clear feature fusion

Country Status (1)

Country Link
CN (1) CN112200732B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767250A (en) * 2021-01-19 2021-05-07 南京理工大学 Video blind super-resolution reconstruction method and system based on self-supervision learning
CN114066751A (en) * 2021-10-29 2022-02-18 西北工业大学 Vehicle card monitoring video deblurring method based on common camera acquisition condition
WO2023193521A1 (en) * 2022-04-06 2023-10-12 腾讯科技(深圳)有限公司 Video inpainting method, related apparatus, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914810A (en) * 2013-01-07 2014-07-09 通用汽车环球科技运作有限责任公司 Image super-resolution for dynamic rearview mirror
CN109461131A (en) * 2018-11-20 2019-03-12 中山大学深圳研究院 A kind of real-time deblurring method of intelligent inside rear-view mirror based on neural network algorithm
CN110062164A (en) * 2019-04-22 2019-07-26 深圳市商汤科技有限公司 Method of video image processing and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914810A (en) * 2013-01-07 2014-07-09 通用汽车环球科技运作有限责任公司 Image super-resolution for dynamic rearview mirror
CN109461131A (en) * 2018-11-20 2019-03-12 中山大学深圳研究院 A kind of real-time deblurring method of intelligent inside rear-view mirror based on neural network algorithm
CN110062164A (en) * 2019-04-22 2019-07-26 深圳市商汤科技有限公司 Method of video image processing and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767250A (en) * 2021-01-19 2021-05-07 南京理工大学 Video blind super-resolution reconstruction method and system based on self-supervision learning
CN112767250B (en) * 2021-01-19 2021-10-15 南京理工大学 Video blind super-resolution reconstruction method and system based on self-supervision learning
CN114066751A (en) * 2021-10-29 2022-02-18 西北工业大学 Vehicle card monitoring video deblurring method based on common camera acquisition condition
CN114066751B (en) * 2021-10-29 2024-02-27 西北工业大学 Vehicle card monitoring video deblurring method based on common camera acquisition condition
WO2023193521A1 (en) * 2022-04-06 2023-10-12 腾讯科技(深圳)有限公司 Video inpainting method, related apparatus, device and storage medium

Also Published As

Publication number Publication date
CN112200732B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN112200732B (en) Video deblurring method with clear feature fusion
CN111028177B (en) Edge-based deep learning image motion blur removing method
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
CN112419151B (en) Image degradation processing method and device, storage medium and electronic equipment
CN112233038A (en) True image denoising method based on multi-scale fusion and edge enhancement
CN110933429B (en) Video compression sensing and reconstruction method and device based on deep neural network
CN111539884A (en) Neural network video deblurring method based on multi-attention machine mechanism fusion
CN112801901A (en) Image deblurring algorithm based on block multi-scale convolution neural network
CN109636721B (en) Video super-resolution method based on countermeasure learning and attention mechanism
CN114463218B (en) Video deblurring method based on event data driving
Guo et al. Dense123'color enhancement dehazing network
CN113066022B (en) Video bit enhancement method based on efficient space-time information fusion
CN116681584A (en) Multistage diffusion image super-resolution algorithm
CN114004766A (en) Underwater image enhancement method, system and equipment
CN113724136A (en) Video restoration method, device and medium
CN116485741A (en) No-reference image quality evaluation method, system, electronic equipment and storage medium
CN113379606B (en) Face super-resolution method based on pre-training generation model
CN112990171B (en) Image processing method, image processing device, computer equipment and storage medium
CN117745596A (en) Cross-modal fusion-based underwater de-blocking method
CN117274059A (en) Low-resolution image reconstruction method and system based on image coding-decoding
CN111932594A (en) Billion pixel video alignment method and device based on optical flow and medium
CN116883265A (en) Image deblurring method based on enhanced feature fusion mechanism
CN115829868B (en) Underwater dim light image enhancement method based on illumination and noise residual image
CN116208812A (en) Video frame inserting method and system based on stereo event and intensity camera
CN115797646A (en) Multi-scale feature fusion video denoising method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Xiang Xinguang

Inventor after: Wei Hao

Inventor after: Pan Jinshan

Inventor before: Wei Hao

Inventor before: Xiang Xinguang

Inventor before: Pan Jinshan

GR01 Patent grant