CN114387161B - Video super-resolution reconstruction method - Google Patents

Video super-resolution reconstruction method

Info

Publication number
CN114387161B
Authority
CN
China
Prior art keywords
feature extraction
video
deep
frame
resolution
Prior art date
Legal status
Active
Application number
CN202011111270.1A
Other languages
Chinese (zh)
Other versions
CN114387161A (en)
Inventor
何小海
雷佳佳
吴晓红
任超
陈洪刚
熊淑华
滕奇志
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202011111270.1A priority Critical patent/CN114387161B/en
Publication of CN114387161A publication Critical patent/CN114387161A/en
Application granted granted Critical
Publication of CN114387161B publication Critical patent/CN114387161B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video super-resolution reconstruction method, which mainly comprises the following steps: designing and constructing a convolutional neural network model for video super-resolution reconstruction, wherein the network consists of a shallow feature extraction part, a deep feature extraction part, a recursive feature fusion part and a reconstruction part; constructing training sample pairs from a video data set and training the parameters of the constructed convolutional neural network model until the network converges; and inputting a continuous video frame sequence into the trained network model to obtain the super-resolution reconstruction result. The method can reconstruct low-resolution video into high-quality high-resolution video and is an effective video super-resolution reconstruction method.

Description

Video super-resolution reconstruction method
Technical Field
The invention relates to an image super-resolution reconstruction technology, in particular to a video super-resolution method, and belongs to the field of image processing.
Background
The goal of super-resolution is to recover high-resolution images or video from observed low-resolution images or video. It has wide application in fields with high requirements on image or video resolution and detail, such as medical imaging, remote sensing, and satellite detection. In recent years, with the progress of display technology, a new generation of ultra-high-definition televisions with 4K (3840×2160) and 8K (7680×4320) resolutions has a wide market space, but content matching such high resolutions is still scarce. As a result, video super-resolution is becoming increasingly important. Convolutional neural networks have made significant progress in the field of video super-resolution due to their strong fitting ability. However, existing convolutional-neural-network-based video super-resolution methods still have room for further improvement in aspects such as network structure and reconstruction quality.
Disclosure of Invention
The invention aims to use a convolutional neural network to extract and fuse features rich in spatio-temporal information, thereby constructing an effective video super-resolution method.
The invention provides a video super-resolution reconstruction method, which mainly comprises the following operation steps:
(1) Designing and constructing a convolutional neural network of a video super-resolution reconstruction method, wherein the network consists of a shallow layer feature extraction part, a deep layer feature extraction part, a recursive feature fusion part and a reconstruction part;
(2) Constructing a training sample pair in the video data set, and training parameters of the convolutional neural network constructed in the step (1) until the network converges;
(3) Inputting a video frame sequence into the trained network in the step (2) to obtain a super-resolution reconstruction result.
Drawings
Fig. 1 is a schematic block diagram of a video super-resolution reconstruction method according to the present invention.
FIG. 2 compares the super-resolution reconstruction results of the present invention on the test video "calendar" with those of four other methods, where (a) is the original high-resolution image, (b) is the reconstruction result of bicubic interpolation, (c) to (f) are the reconstruction results of methods 1 to 4, and (g) is the reconstruction result of the present invention.
FIG. 3 compares the super-resolution reconstruction results of the present invention on the test video "city" with those of four other methods, where (a) is the original high-resolution image, (b) is the reconstruction result of bicubic interpolation, (c) to (f) are the reconstruction results of methods 1 to 4, and (g) is the reconstruction result of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
In fig. 1, the video super-resolution reconstruction method includes the following steps:
(1) Designing and constructing a convolutional neural network of a video super-resolution reconstruction method, wherein the network consists of a shallow layer feature extraction part, a deep layer feature extraction part, a recursive feature fusion part and a reconstruction part;
(2) Constructing a training sample pair in the video data set, and training parameters of the convolutional neural network model constructed in the step (1) until the network converges;
(3) And (3) inputting a continuous video frame sequence into the trained network model in the step (2) to obtain a super-resolution reconstruction result.
Specifically, in the step (1), the constructed convolutional neural network model structure is shown in fig. 1.
In the shallow feature extraction part, we use 2N+1 convolution layers to extract features from the input video frames respectively. The shallow feature F_{t+N} extracted from the (t+N)-th video frame is given by

F_{t+N} = H_SFE(X_{t+N}),   (1)

where X_{t+N} is the (t+N)-th input video frame and H_SFE(·) denotes the shallow feature extraction part. Shallow feature extraction yields the set of shallow features {F_{t-N}, ..., F_t, ..., F_{t+N}}.
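The following is a minimal PyTorch-style sketch of this step, not part of the patent; the 3 input channels, 64 feature channels and 3×3 kernels are assumptions made for illustration, since the patent does not specify these hyperparameters.

import torch.nn as nn

class ShallowFeatureExtraction(nn.Module):
    """One convolution layer per input frame (2N+1 frames in total), Eq. (1)."""
    def __init__(self, num_frames, in_channels=3, feat_channels=64):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1)
             for _ in range(num_frames)])

    def forward(self, frames):
        # frames: list of 2N+1 tensors, each of shape (batch, channels, height, width)
        return [conv(x) for conv, x in zip(self.convs, frames)]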
In the deep feature extraction part, we use 2N+1 enhanced deep feature extraction modules to perform further feature extraction. The deep feature F^D_{t+N} extracted for the (t+N)-th video frame is given by

F^D_{t+N} = H_DFE(F_{t+N}, F_t),   (2)

where H_DFE(·) denotes the deep feature extraction part. Each video frame corresponds to one enhanced deep feature extraction module, whose inputs are the shallow features extracted from that video frame and the shallow features extracted from the target frame. The enhanced deep feature extraction module exploits both the spatial information within frames and the temporal information between frames, which enhances the representation capability of the extracted deep features. Specifically, as shown in fig. 1, in the (t+N)-th enhanced deep feature extraction module, we first concatenate the shallow feature F_{t+N} of the (t+N)-th video frame and the shallow feature F_t of the target frame in the channel dimension, and then apply a convolution layer to obtain the temporal fusion feature F^TF_{t+N}, which contains the temporal information between the (t+N)-th video frame and the target frame:

F^TF_{t+N} = H_TF([F_{t+N}, F_t]),   (3)

where H_TF(·) is a bottleneck convolution layer and [·] denotes the concatenation operation. We then concatenate F_{t+N} and F^TF_{t+N} in the channel dimension. The concatenated features contain two types of information: the spatial information within the frame, and the temporal information between the (t+N)-th video frame and the target frame. Finally, we further extract the deep feature F^D_{t+N} of the (t+N)-th video frame from the intra-frame spatial information and the inter-frame temporal information:

F^D_{t+N} = H_EH([F_{t+N}, F^TF_{t+N}]),   (4)

where H_EH(·) is a bottleneck convolution layer. Deep feature extraction yields the set of deep features {F^D_{t-N}, ..., F^D_t, ..., F^D_{t+N}}.
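A minimal sketch of one enhanced deep feature extraction module following Eqs. (2)-(4) is given below; it is illustrative only, and the choice of 1×1 bottleneck convolutions mapping the concatenated 2C channels back to C channels is an assumption, not a value given in the patent.

import torch
import torch.nn as nn

class EnhancedDeepFeatureExtraction(nn.Module):
    """Enhanced deep feature extraction module, Eqs. (2)-(4)."""
    def __init__(self, feat_channels=64):
        super().__init__()
        self.h_tf = nn.Conv2d(2 * feat_channels, feat_channels, kernel_size=1)  # H_TF, Eq. (3)
        self.h_eh = nn.Conv2d(2 * feat_channels, feat_channels, kernel_size=1)  # H_EH, Eq. (4)

    def forward(self, f_frame, f_target):
        # Temporal fusion feature: inter-frame temporal information, Eq. (3).
        f_tf = self.h_tf(torch.cat([f_frame, f_target], dim=1))
        # Deep feature: fuses intra-frame spatial and inter-frame temporal information, Eq. (4).
        return self.h_eh(torch.cat([f_frame, f_tf], dim=1))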
The recursive feature fusion part comprises a bottleneck convolution layer and U recursive units. To control the number of model parameters, B residual up-and-down sampling blocks form one recursive unit, and parameters are shared among the recursive units. Recursive learning can thus improve network performance by increasing the number of recursive units without adding extra parameters. The residual up-and-down sampling block uses up- and down-sampling layers as the residual branch of the residual block; these layers can capture the interdependence between the high-resolution and low-resolution spaces, so the features are fused better. Specifically, as shown in fig. 1, in each residual up-and-down sampling block, up-sampling uses a deconvolution layer and down-sampling uses a convolution layer, and the strides of the deconvolution layer and the convolution layer are both equal to the magnification factor. The output F_{u,b} of the b-th residual up-and-down sampling block in the u-th recursive unit is

F_{u,b} = H_↓(H_↑(F_{u,b-1})) + F_{u,b-1},   (5)

where F_{u,b-1} and F_{u,b} are the input and output of the b-th residual up-and-down sampling block, and H_↑(·) and H_↓(·) are the up-sampling and down-sampling layers, respectively. Recursive fusion yields the fused feature F_RFF:

F_RFF = H_RFF([F^D_{t-N}, ..., F^D_t, ..., F^D_{t+N}]),   (6)

where H_RFF(·) denotes the recursive feature fusion part.
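The sketch below illustrates the residual up-and-down sampling block and the parameter sharing across recursive units. The 4× magnification factor, the kernel size of 2×scale, and the placeholder values U = 3 and B = 2 are assumptions chosen for illustration only.

import torch
import torch.nn as nn

class ResidualUpDownBlock(nn.Module):
    """Residual up-and-down sampling block, Eq. (5); strides equal the magnification factor."""
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=2 * scale,
                                     stride=scale, padding=scale // 2)   # H_up (deconvolution)
        self.down = nn.Conv2d(channels, channels, kernel_size=2 * scale,
                              stride=scale, padding=scale // 2)          # H_down (convolution)

    def forward(self, x):
        return self.down(self.up(x)) + x   # Eq. (5)

class RecursiveFeatureFusion(nn.Module):
    """Bottleneck convolution followed by U recursive units of B parameter-shared blocks."""
    def __init__(self, num_frames, channels=64, scale=4, num_units=3, blocks_per_unit=2):
        super().__init__()
        self.bottleneck = nn.Conv2d(num_frames * channels, channels, kernel_size=1)
        # The same B blocks are reused by every recursive unit (parameter sharing).
        self.blocks = nn.ModuleList(
            [ResidualUpDownBlock(channels, scale) for _ in range(blocks_per_unit)])
        self.num_units = num_units

    def forward(self, deep_features):
        x = self.bottleneck(torch.cat(deep_features, dim=1))
        for _ in range(self.num_units):     # U recursions add no extra parameters
            for block in self.blocks:       # B blocks per recursive unit
                x = block(x)
        return x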
The reconstruction part comprises an up-sampling layer and two convolution layers. We introduce a long skip connection in the reconstruction part, enabling the network to perform global residual learning and thus easing training. Since our goal is to reconstruct the target frame, we use F_t as the identity-mapping branch in the residual learning. Reconstruction yields the reconstructed target frame Y_t:

Y_t = H_R(F_RFF + F_t),   (7)

where H_R(·) denotes the reconstruction part.
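A sketch of the reconstruction part with the long skip connection of Eq. (7) follows; using a deconvolution layer for up-sampling and a 3-channel output are assumptions, since the patent only states that an up-sampling layer and two convolution layers are used.

import torch.nn as nn

class Reconstruction(nn.Module):
    """Reconstruction part: one up-sampling layer and two convolution layers, Eq. (7)."""
    def __init__(self, channels=64, scale=4, out_channels=3):
        super().__init__()
        self.upsample = nn.ConvTranspose2d(channels, channels, kernel_size=2 * scale,
                                           stride=scale, padding=scale // 2)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, out_channels, kernel_size=3, padding=1)

    def forward(self, f_rff, f_t):
        x = f_rff + f_t                     # long skip connection (global residual learning)
        x = self.upsample(x)                # up-sampling layer
        return self.conv2(self.conv1(x))    # two convolution layers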
In the step (2), we train the network using the mean square error as the loss function, expressed as

L(θ) = (1/J) Σ_{j=1}^{J} || Y^j_HR − F(X^j_LR; θ) ||²,   (8)

where X^j_LR and Y^j_HR are the j-th pair of low-resolution and high-resolution blocks, J is the total number of training sample pairs, and F(X^j_LR; θ) denotes the j-th high-resolution block predicted by the network with parameters θ.
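A minimal training-step sketch using the mean square error loss of Eq. (8) is shown below; the Adam optimizer, learning rate, epoch count and data format are illustrative assumptions not specified in this section.

import torch
import torch.nn as nn

def train(model, dataloader, num_epochs=100, lr=1e-4, device="cuda"):
    """Train the network with the MSE loss of Eq. (8).

    `dataloader` is assumed to yield (lr_frames, hr_target) pairs, where lr_frames
    is the list of 2N+1 low-resolution frames and hr_target is the high-resolution
    target frame.
    """
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for _ in range(num_epochs):
        for lr_frames, hr_target in dataloader:
            lr_frames = [f.to(device) for f in lr_frames]
            hr_target = hr_target.to(device)
            sr_target = model(lr_frames)              # reconstructed target frame Y_t
            loss = criterion(sr_target, hr_target)    # mean square error, Eq. (8)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()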
In the step (3), a continuous video frame sequence is input into the trained network model in the step (2), and a super-resolution reconstruction result is obtained.
To illustrate the effectiveness of the present invention, we use the commonly adopted test video set "Vid4", which comprises the four video frame sequences "calendar", "city", "foliage" and "walk". The original video frame sequences were downsampled by a factor of 4 to obtain the low-resolution video frame sequences. In the experiments, bicubic interpolation and four other video super-resolution methods were chosen for comparison.
The compared video super-resolution reconstruction methods are as follows:
Method 1: the method proposed by Liu et al., reference "Robust video super-resolution with learned temporal dynamics, IEEE International Conference on Computer Vision, pp.2526-2534, 2017".
Method 2: the method proposed by Tao et al, reference "Detail-revealing deep video super-resolution, IEEE International Conference on Computer Vision, pp.22-29, 2017".
Method 3: the method proposed by Wang et al, reference "Deep video super-resolution using HR optical flow estimation, IEEE Transactions on Image Processing, pp.377-388, 2020".
Method 4: tian et al, reference "TDAN: temporally-deformable alignment network for video super-resolution, IEEE Conference on Computer Vision and Pattern Recognition, pp.3360-3369, 2020".
Experiment 1: bicubic interpolation, methods 1 to 4, and the present invention each perform 4× super-resolution reconstruction on the degraded low-resolution test video set "Vid4". The super-resolution reconstruction results are shown in fig. 2 to 3, and the objective evaluation results are shown in table 1. PSNR (Peak Signal-to-Noise Ratio, in dB) and SSIM (Structural Similarity Index) are used to evaluate the reconstruction quality; higher PSNR/SSIM values indicate better reconstruction.
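For reference, PSNR and SSIM values of this kind can be computed as in the sketch below; it assumes recent scikit-image, uint8 RGB frames, and full-frame evaluation, none of which is stated in the patent (some works evaluate on the luminance channel only).

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frame(sr_frame, hr_frame):
    """PSNR (dB) and SSIM between a reconstructed frame and its ground truth (H, W, 3 uint8 arrays)."""
    psnr = peak_signal_noise_ratio(hr_frame, sr_frame, data_range=255)
    ssim = structural_similarity(hr_frame, sr_frame, data_range=255, channel_axis=-1)
    return psnr, ssim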
TABLE 1 (PSNR in dB / SSIM)
Vid4        Bicubic         Method 1        Method 2        Method 3        Method 4        Present invention
calendar    20.53/0.5678    22.18/0.7073    22.76/0.7450    22.76/0.7461    23.01/0.7645    23.37/0.7788
city        25.09/0.5997    26.48/0.7235    26.96/0.7571    26.81/0.7490    27.07/0.7696    27.30/0.7793
foliage     23.54/0.5661    25.17/0.7021    25.52/0.7207    25.52/0.7175    25.71/0.7278    25.83/0.7359
walk        25.98/0.7985    25.17/0.7021    28.90/0.8782    29.10/0.8831    29.75/0.8958    29.88/0.8999
Average     23.78/0.6330    25.53/0.7493    26.03/0.7753    26.04/0.7739    26.39/0.7894    26.60/0.7985
As can be seen from Table 1, the present invention achieves higher PSNR and SSIM values. In the "calendar" video frame in fig. 2, the present invention reconstructs the word "MAREE" more clearly than the comparison methods. In the "city" video frame of fig. 3, the building reconstructed by the present invention has straighter edges. In conclusion, the reconstruction results of the present invention show clear advantages over the comparison methods in both subjective and objective evaluation. Therefore, the present invention is an effective video super-resolution reconstruction method.

Claims (3)

1. A video super-resolution reconstruction method, characterized by comprising the following steps:
step one: designing and constructing a convolutional neural network of a video super-resolution reconstruction method, wherein the network consists of a shallow feature extraction part, a deep feature extraction part, a recursive feature fusion part and a reconstruction part; in the shallow feature extraction part, 2N+1 convolution layers are used to extract features from the input video frames respectively; the shallow feature F_{t+N} extracted from the (t+N)-th video frame is given by

F_{t+N} = H_SFE(X_{t+N}),   (1)

where X_{t+N} is the (t+N)-th input video frame and H_SFE(·) denotes the shallow feature extraction part; shallow feature extraction yields the set of shallow features {F_{t-N}, ..., F_t, ..., F_{t+N}};
in the deep feature extraction part, 2N+1 enhanced deep feature extraction modules are used to perform further feature extraction; the deep feature F^D_{t+N} extracted for the (t+N)-th video frame is given by

F^D_{t+N} = H_DFE(F_{t+N}, F_t),   (2)

where H_DFE(·) denotes the deep feature extraction part; each video frame corresponds to one enhanced deep feature extraction module, whose inputs are the shallow features extracted from that video frame and the shallow features extracted from the target frame; the enhanced deep feature extraction module exploits both the spatial information within frames and the temporal information between frames, which enhances the representation capability of the extracted deep features; specifically, in the (t+N)-th enhanced deep feature extraction module, the shallow feature F_{t+N} of the (t+N)-th video frame and the shallow feature F_t of the target frame are first concatenated in the channel dimension, and a convolution layer is then applied to obtain the temporal fusion feature F^TF_{t+N}, which contains the temporal information between the (t+N)-th video frame and the target frame:

F^TF_{t+N} = H_TF([F_{t+N}, F_t]),   (3)

where H_TF(·) is a bottleneck convolution layer and [·] denotes the concatenation operation; then, F_{t+N} and F^TF_{t+N} are concatenated in the channel dimension; the concatenated features contain two types of information: the spatial information within the frame, and the temporal information between the (t+N)-th video frame and the target frame; finally, the deep feature F^D_{t+N} of the (t+N)-th video frame is further extracted from the intra-frame spatial information and the inter-frame temporal information:

F^D_{t+N} = H_EH([F_{t+N}, F^TF_{t+N}]),   (4)

where H_EH(·) is a bottleneck convolution layer; deep feature extraction yields the set of deep features {F^D_{t-N}, ..., F^D_t, ..., F^D_{t+N}};
the recursive feature fusion part comprises a bottleneck convolution layer and U recursive units; to control the number of model parameters, B residual up-and-down sampling blocks form one recursive unit, and parameters are shared among the recursive units; recursive learning can improve the network performance by increasing the number of recursive units without adding extra parameters; the residual up-and-down sampling block uses up- and down-sampling layers as the residual branch of the residual block, and these layers can capture the interdependence between the high-resolution and low-resolution spaces, so the features are fused better; specifically, in each residual up-and-down sampling block, up-sampling uses a deconvolution layer and down-sampling uses a convolution layer, and the strides of the deconvolution layer and the convolution layer are equal to the magnification factor; the output F_{u,b} of the b-th residual up-and-down sampling block in the u-th recursive unit is

F_{u,b} = H_↓(H_↑(F_{u,b-1})) + F_{u,b-1},   (5)

where F_{u,b-1} and F_{u,b} are the input and output of the b-th residual up-and-down sampling block, and H_↑(·) and H_↓(·) are the up-sampling and down-sampling layers, respectively; recursive fusion yields the fused feature F_RFF:

F_RFF = H_RFF([F^D_{t-N}, ..., F^D_t, ..., F^D_{t+N}]),   (6)

where H_RFF(·) denotes the recursive feature fusion part;
the reconstruction part comprises an up-sampling layer and two convolution layers; a long-jump connection is introduced into the reconstruction part, so that the network can perform global residual error learning to reduce the difficulty of training; since the target is a reconstructed target frame, F is used t As an identity mapping branch in residual learning; by reconstruction, a reconstructed target frame Y is obtained t
Y t =H R (F RFF +F t ), (7)
Wherein H is R (-) represents a reconstructed part;
step two: constructing a training sample pair in the video data set, and training parameters of the convolutional neural network model constructed in the first step until the network converges;
step three: and (3) inputting a continuous video frame sequence into the trained network model in the step two to obtain a super-resolution reconstruction result.
2. The video super-resolution reconstruction method according to claim 1, wherein in step one, enhanced deep feature extraction modules are used in the deep feature extraction part; each video frame corresponds to one enhanced deep feature extraction module, whose inputs are the shallow features extracted from the corresponding video frame and the shallow features extracted from the target frame; the shallow features extracted from the corresponding video frame contain intra-frame spatial information; the shallow features extracted from the corresponding video frame and the shallow features extracted from the target frame are fused to obtain a temporal fusion feature containing inter-frame temporal information; and the enhanced deep feature extraction module uses the intra-frame spatial information and the inter-frame temporal information simultaneously, thereby enhancing the representation capability of the extracted deep features.
3. The video super-resolution reconstruction method according to claim 1, wherein in step one, the residual up-and-down sampling block in the recursive feature fusion part uses up- and down-sampling layers as the residual branch of the residual block, and these layers can capture the interdependence between the high-resolution and low-resolution spaces, so that the features are fused better.
CN202011111270.1A 2020-10-16 2020-10-16 Video super-resolution reconstruction method Active CN114387161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011111270.1A CN114387161B (en) 2020-10-16 2020-10-16 Video super-resolution reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011111270.1A CN114387161B (en) 2020-10-16 2020-10-16 Video super-resolution reconstruction method

Publications (2)

Publication Number Publication Date
CN114387161A CN114387161A (en) 2022-04-22
CN114387161B true CN114387161B (en) 2023-07-07

Family

ID=81193652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011111270.1A Active CN114387161B (en) 2020-10-16 2020-10-16 Video super-resolution reconstruction method

Country Status (1)

Country Link
CN (1) CN114387161B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422614B (en) * 2023-12-19 2024-03-12 华侨大学 Single-frame image super-resolution method and device based on hybrid feature interaction transducer
CN117575915A (en) * 2024-01-16 2024-02-20 闽南师范大学 Image super-resolution reconstruction method, terminal equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903226A (en) * 2019-01-30 2019-06-18 天津城建大学 Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks
CN109949217A (en) * 2017-12-20 2019-06-28 四川大学 Video super-resolution method for reconstructing based on residual error study and implicit motion compensation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11222415B2 (en) * 2018-04-26 2022-01-11 The Regents Of The University Of California Systems and methods for deep learning microscopy
US11164067B2 (en) * 2018-08-29 2021-11-02 Arizona Board Of Regents On Behalf Of Arizona State University Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949217A (en) * 2017-12-20 2019-06-28 四川大学 Video super-resolution method for reconstructing based on residual error study and implicit motion compensation
CN109903226A (en) * 2019-01-30 2019-06-18 天津城建大学 Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Ren Chao; Wu Wei; Huang Zhengkai; Jiao Yuanyuan. Application of an AIC-criterion-based RBF neural network to GPS elevation fitting. Science of Surveying and Mapping, 2013, (02), Sections 1-3. *
He Xinying; Wu Liming; Zheng Gengzhe; Wu Jiayi. A breast cancer computer-aided diagnosis method based on Inception-ResNet-v2. Automation & Information Engineering, 2020, (01), Sections 1-3. *
Zhou Hang; He Xiaohai; Wang Zhengyong; Xiong Shuhua; Karn Pradeep. Compressed video super-resolution reconstruction using a dual-network structure. Telecommunication Engineering, 2020, (01), Sections 1-3. *
Cao Hongyu; Liu Dongmei; Fu Xiuhua; Zhang Jing; Yue Pengfei. Research on super-resolution reconstruction based on CT images. Journal of Changchun University of Science and Technology (Natural Science Edition), 2020, (01), Sections 1-3. *
Hu Rui; He Xiaohai; Teng Qizhi; Qing Linbo; Liao Junbin. A 3D convolutional network with attention for brain glioma segmentation. Computer Engineering and Applications, 2020, (12), Sections 1-3. *

Also Published As

Publication number Publication date
CN114387161A (en) 2022-04-22

Similar Documents

Publication Publication Date Title
Zheng et al. Crossnet: An end-to-end reference-based super resolution network using cross-scale warping
CN112801877B (en) Super-resolution reconstruction method of video frame
CN115222601A (en) Image super-resolution reconstruction model and method based on residual mixed attention network
CN109272452B (en) Method for learning super-resolution network based on group structure sub-band in wavelet domain
CN114387161B (en) Video super-resolution reconstruction method
CN113177882B (en) Single-frame image super-resolution processing method based on diffusion model
CN111353940A (en) Image super-resolution reconstruction method based on deep learning iterative up-down sampling
CN109949217B (en) Video super-resolution reconstruction method based on residual learning and implicit motion compensation
CN111340744A (en) Attention double-flow deep network-based low-quality image down-sampling method and system
CN112241939B (en) Multi-scale and non-local-based light rain removal method
CN110689509A (en) Video super-resolution reconstruction method based on cyclic multi-column 3D convolutional network
CN114757828A (en) Transformer-based video space-time super-resolution method
Kim et al. Constrained adversarial loss for generative adversarial network‐based faithful image restoration
CN116563100A (en) Blind super-resolution reconstruction method based on kernel guided network
CN115345791A (en) Infrared image deblurring algorithm based on attention mechanism residual error network model
CN113379606A (en) Face super-resolution method based on pre-training generation model
CN111080533B (en) Digital zooming method based on self-supervision residual sensing network
CN113096032A (en) Non-uniform blur removing method based on image area division
Chen et al. Guided dual networks for single image super-resolution
CN116957940A (en) Multi-scale image super-resolution reconstruction method based on contour wave knowledge guided network
Huang et al. FFNet: A simple image dedusting network with feature fusion
CN112348745B (en) Video super-resolution reconstruction method based on residual convolutional network
CN114764750B (en) Image denoising method based on self-adaptive consistency priori depth network
Jin et al. Jointly texture enhanced and stereo captured network for stereo image super-resolution
Zhang et al. Superresolution approach of remote sensing images based on deep convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant