CN114387161B - Video super-resolution reconstruction method - Google Patents
- Publication number
- CN114387161B (Application CN202011111270.1A)
- Authority
- CN
- China
- Prior art keywords
- feature extraction
- video
- deep
- frame
- resolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a video super-resolution reconstruction method comprising the following main steps: designing and constructing a convolutional neural network for video super-resolution reconstruction, the network consisting of a shallow feature extraction part, a deep feature extraction part, a recursive feature fusion part and a reconstruction part; constructing training sample pairs from a video data set and training the parameters of the constructed convolutional neural network until the network converges; and inputting a sequence of consecutive video frames into the trained network to obtain the super-resolution reconstruction result. The method reconstructs low-resolution video into high-quality high-resolution video and is an effective video super-resolution reconstruction method.
Description
Technical Field
The invention relates to image super-resolution reconstruction technology, in particular to a video super-resolution method, and belongs to the field of image processing.
Background
The goal of super-resolution is to recover high-resolution images or video from observed low-resolution images or video. It has wide application in fields with high requirements on image or video resolution and detail, such as medical imaging, remote sensing and satellite detection. In recent years, with advances in display technology, a new generation of ultra-high-definition televisions with resolutions of 4K (3840×2160) and 8K (7680×4320) has opened a broad market, but content matching such high resolutions remains scarce. Video super-resolution is therefore becoming increasingly important. Convolutional neural networks have made significant progress in the field of video super-resolution owing to their strong fitting ability. However, existing convolutional-neural-network-based video super-resolution methods still leave room for improvement in network structure, reconstruction quality and other aspects.
Disclosure of Invention
The invention aims to utilize a convolutional neural network to extract and fuse the characteristics with abundant space-time information, thereby constructing an effective video super-resolution method.
The invention provides a video super-resolution reconstruction method, which mainly comprises the following operation steps:
(1) Designing and constructing a convolutional neural network of a video super-resolution reconstruction method, wherein the network consists of a shallow layer feature extraction part, a deep layer feature extraction part, a recursive feature fusion part and a reconstruction part;
(2) Constructing a training sample pair in the video data set, and training parameters of the convolutional neural network constructed in the step (1) until the network converges;
(3) Inputting a video frame sequence into the trained network in the step (2) to obtain a super-resolution reconstruction result.
Drawings
Fig. 1 is a schematic block diagram of a video super-resolution reconstruction method according to the present invention.
FIG. 2 compares the super-resolution reconstruction results of the present invention and four other methods on the test video "calendar". (a) is the original high-resolution image, (b) is the reconstruction result of bicubic interpolation, (c) to (f) are the reconstruction results of methods 1 to 4, and (g) is the reconstruction result of the present invention.
FIG. 3 compares the super-resolution reconstruction results of the present invention and four other methods on the test video "city". (a) is the original high-resolution image, (b) is the reconstruction result of bicubic interpolation, (c) to (f) are the reconstruction results of methods 1 to 4, and (g) is the reconstruction result of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
In fig. 1, a video super-resolution reconstruction method includes the following steps:
(1) Designing and constructing a convolutional neural network of a video super-resolution reconstruction method, wherein the network consists of a shallow layer feature extraction part, a deep layer feature extraction part, a recursive feature fusion part and a reconstruction part;
(2) Constructing a training sample pair in the video data set, and training parameters of the convolutional neural network model constructed in the step (1) until the network converges;
(3) Inputting a continuous video frame sequence into the network model trained in step (2) to obtain the super-resolution reconstruction result.
Specifically, in the step (1), the constructed convolutional neural network model structure is shown in fig. 1.
In the shallow feature extraction part, we use 2N+1 convolution layers to extract features from the input video frames respectively. The shallow feature F_{t+N} extracted from the (t+N)-th video frame is:

F_{t+N} = H_SFE(X_{t+N}),  (1)

where X_{t+N} is the (t+N)-th input video frame and H_SFE(·) denotes the shallow feature extraction part. Shallow feature extraction yields the set of shallow features {F_{t-N}, ..., F_t, ..., F_{t+N}}.
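For illustration only (the patent does not specify kernel sizes or channel counts, so the 3×3 kernel and 16-channel width below are assumptions), the per-frame shallow feature extraction of Eq. (1) can be sketched as one "same"-padded convolution applied independently to each of the 2N+1 input frames:

```python
import numpy as np

def conv2d_same(x, w, b):
    """Naive 'same' convolution: x (C_in, H, W), w (C_out, C_in, k, k), b (C_out,)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1], x.shape[2]
    out = np.empty((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o]) + b[o]
    return out

def shallow_features(frames, w, b):
    # Eq. (1): F_k = H_SFE(X_k), applied to every frame of the 2N+1 window
    return [conv2d_same(x, w, b) for x in frames]

rng = np.random.default_rng(0)
frames = [rng.random((3, 8, 8)) for _ in range(5)]   # 2N+1 = 5 RGB frames (N = 2)
w = rng.standard_normal((16, 3, 3, 3)) * 0.1         # assumed: 16 output channels, 3x3
b = np.zeros(16)
feats = shallow_features(frames, w, b)
print(len(feats), feats[0].shape)                    # 5 (16, 8, 8)
```

Each frame is mapped to its own feature tensor; the set of these tensors is the {F_{t-N}, ..., F_{t+N}} used by the deep feature extraction part.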
In the deep feature extraction part, we use 2N+1 enhanced deep feature extraction modules to perform further feature extraction. The deep feature F^D_{t+N} extracted from the (t+N)-th video frame is:

F^D_{t+N} = H_DFE(F_{t+N}, F_t),  (2)

where H_DFE(·) denotes the deep feature extraction part. Each video frame corresponds to one enhanced deep feature extraction module, whose inputs are the shallow features extracted from that video frame and the shallow features extracted from the target frame. The enhanced deep feature extraction module exploits intra-frame spatial information and inter-frame temporal information simultaneously, which enhances the representation capability of the extracted deep features. Specifically, as shown in fig. 1, in the (t+N)-th enhanced deep feature extraction module, we first concatenate the shallow feature F_{t+N} of the (t+N)-th video frame and the shallow feature F_t of the target frame along the channel dimension, and then use a convolution layer to obtain the temporal fusion feature F^TF_{t+N}, which contains the temporal information between the (t+N)-th video frame and the target frame:

F^TF_{t+N} = H_TF([F_{t+N}, F_t]),  (3)

where H_TF(·) is a bottleneck convolution layer and [·] denotes the concatenation operation. Then we concatenate F_{t+N} and F^TF_{t+N} along the channel dimension. The concatenated features contain two types of information: spatial information within the frame, and temporal information between the (t+N)-th video frame and the target frame. Finally, we further extract the deep feature F^D_{t+N} of the (t+N)-th video frame from this intra-frame spatial and inter-frame temporal information:

F^D_{t+N} = H_EH([F_{t+N}, F^TF_{t+N}]),  (4)

where H_EH(·) is a bottleneck convolution layer. Deep feature extraction yields the set of deep features {F^D_{t-N}, ..., F^D_t, ..., F^D_{t+N}}.
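A minimal sketch of the enhanced deep feature extraction module of Eqs. (3)-(4), with the bottleneck layers H_TF and H_EH reduced to 1×1 convolutions (pure channel mixing); the channel widths are assumptions, not values from the patent:

```python
import numpy as np

def bottleneck_1x1(x, w):
    # 1x1 bottleneck convolution: channel mixing only; x (C_in,H,W), w (C_out,C_in)
    return np.einsum('oc,chw->ohw', w, x)

def enhanced_deep_feature(F_k, F_t, w_tf, w_eh):
    # Eq. (3): temporal fusion feature from [F_k, F_t] concatenated on channels
    F_tf = bottleneck_1x1(np.concatenate([F_k, F_t], axis=0), w_tf)
    # Eq. (4): deep feature from [F_k, F_tf] (intra-frame + inter-frame information)
    return bottleneck_1x1(np.concatenate([F_k, F_tf], axis=0), w_eh)

rng = np.random.default_rng(1)
C, H, W = 16, 8, 8
F_k, F_t = rng.random((C, H, W)), rng.random((C, H, W))
w_tf = rng.standard_normal((C, 2 * C)) * 0.1   # 2C -> C bottleneck, as the name suggests
w_eh = rng.standard_normal((C, 2 * C)) * 0.1
F_deep = enhanced_deep_feature(F_k, F_t, w_tf, w_eh)
print(F_deep.shape)   # (16, 8, 8)
```

The two concatenations double the channel count, and each bottleneck layer compresses it back to C, which is why both weight matrices map 2C channels to C.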
The recursive feature fusion part comprises a bottleneck convolution layer and U recursive units. To control the number of model parameters, B residual up-down sampling blocks constitute one recursive unit, with parameters shared among the recursive units. Recursive learning improves network performance by increasing the number of recursive units without adding extra parameters. The residual up-down sampling block adopts an up-down sampling layer as the residual branch of the residual block; the up-down sampling layer can discover the interdependence between high resolution and low resolution, so the features are fused better. Specifically, as shown in fig. 1, in each residual up-down sampling block, upsampling uses a deconvolution layer and downsampling uses a convolution layer; the stride of each layer equals the magnification factor. The output F_{u,b} of the b-th residual up-down sampling block in the u-th recursive unit is:

F_{u,b} = H↓(H↑(F_{u,b-1})) + F_{u,b-1},  (5)

where F_{u,b} and F_{u,b-1} are the output and input of the b-th residual up-down sampling block, and H↑(·) and H↓(·) are the upsampling layer and the downsampling layer, respectively. Recursive fusion yields the fused feature F_RFF:

F_RFF = H_RFF([F^D_{t-N}, ..., F^D_t, ..., F^D_{t+N}]),  (6)

where H_RFF(·) denotes the recursive feature fusion part.
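The recursion of Eq. (5) can be illustrated with simple stand-ins for the learned layers: nearest-neighbour upsampling in place of the deconvolution H↑ and average pooling in place of the strided convolution H↓ (both hypothetical substitutes, chosen so the sketch needs no learned weights):

```python
import numpy as np

def up(x, s=2):
    # stand-in for the deconvolution layer H_up: nearest-neighbour upsampling
    return x.repeat(s, axis=-2).repeat(s, axis=-1)

def down(x, s=2):
    # stand-in for the strided convolution layer H_down: s x s average pooling
    C, H, W = x.shape
    return x.reshape(C, H // s, s, W // s, s).mean(axis=(2, 4))

def recursive_fusion(F, U=2, B=2):
    # U recursive units of B residual up-down sampling blocks; the same operators
    # are reused in every unit, mirroring the parameter sharing described above
    for _ in range(U):
        for _ in range(B):
            F = down(up(F)) + F   # Eq. (5)
    return F

F0 = np.ones((4, 8, 8))
out = recursive_fusion(F0)
print(out.shape)   # (4, 8, 8)
```

With these particular stand-ins, down(up(F)) reproduces F exactly, so each block doubles the feature (after U·B = 4 blocks the output is 16·F0); a learned deconvolution/convolution pair would instead model the high/low-resolution interdependence rather than an identity.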
The reconstruction part comprises an upsampling layer and two convolution layers. We introduce a long skip connection in the reconstruction part so that the network performs global residual learning, which alleviates training difficulty. Since our goal is to reconstruct the target frame, we use F_t as the identity mapping branch in residual learning. Reconstruction yields the reconstructed target frame Y_t:

Y_t = H_R(F_RFF + F_t),  (7)

where H_R(·) denotes the reconstruction part.
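A sketch of the reconstruction part of Eq. (7). The upsampling layer is replaced by nearest-neighbour interpolation and the two convolution layers by 1×1 convolutions; the ×4 scale and the ReLU between the convolutions are assumptions for illustration:

```python
import numpy as np

def reconstruct(F_rff, F_t, w1, w2, scale=4):
    x = F_rff + F_t                                       # Eq. (7): long skip connection
    x = x.repeat(scale, axis=-2).repeat(scale, axis=-1)   # stand-in upsampling layer
    x = np.maximum(np.einsum('oc,chw->ohw', w1, x), 0)    # first conv (+ assumed ReLU)
    return np.einsum('oc,chw->ohw', w2, x)                # second conv -> RGB output

rng = np.random.default_rng(2)
C, H, W = 16, 8, 8
F_rff, F_t = rng.random((C, H, W)), rng.random((C, H, W))
w1 = rng.standard_normal((C, C)) * 0.1
w2 = rng.standard_normal((3, C)) * 0.1
Y_t = reconstruct(F_rff, F_t, w1, w2)
print(Y_t.shape)   # (3, 32, 32)
```

Note that the skip connection is applied in feature space before upsampling, so the network only has to learn the residual on top of the target frame's own features.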
In step (2), we train our network using the mean squared error as the loss function:

L(θ) = (1/J) Σ_{j=1}^{J} || Y_j^H − H(X_j^L; θ) ||²,  (8)

where X_j^L and Y_j^H are the j-th pair of low-resolution and high-resolution blocks, J is the total number of training sample pairs, and H(X_j^L; θ) denotes the j-th high-resolution block predicted by the network with parameters θ.
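The loss of Eq. (8) is plain mean squared error over the J training pairs; a minimal sketch, where f stands for the whole network H(·; θ):

```python
import numpy as np

def mse_loss(f, lr_blocks, hr_blocks):
    # Eq. (8): average squared error between predicted and ground-truth HR blocks
    return np.mean([np.mean((f(x) - y) ** 2) for x, y in zip(lr_blocks, hr_blocks)])

# toy check with an 'oracle' network that upsamples by pixel repetition
up4 = lambda x: x.repeat(4, axis=-2).repeat(4, axis=-1)
lr = [np.full((1, 2, 2), v, dtype=float) for v in (0.0, 1.0)]
hr = [up4(x) for x in lr]
print(mse_loss(up4, lr, hr))   # 0.0 -- perfect predictions give zero loss
```

In practice the loss is minimized with a gradient-based optimizer until the network converges, as stated in step (2).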
In the step (3), a continuous video frame sequence is input into the trained network model in the step (2), and a super-resolution reconstruction result is obtained.
To better illustrate the effectiveness of the present invention, the commonly used video test set "Vid4" (comprising the four video frame sequences "calendar", "city", "foliage" and "walk") is adopted. The original video frame sequences are downsampled by a factor of 4 to obtain the low-resolution video frame sequences. In the experiments, bicubic interpolation and four other video super-resolution methods were chosen for comparison.
The method for reconstructing the super-resolution of the compared video comprises the following steps:
method 1: liu et al, reference "Robust video super-resolution with learned temporal dynamics, IEEE International Conference on Computer Vision, pp.2526-2534, 2017".
Method 2: the method proposed by Tao et al, reference "Detail-revealing deep video super-resolution, IEEE International Conference on Computer Vision, pp.22-29, 2017".
Method 3: the method proposed by Wang et al, reference "Deep video super-resolution using HR optical flow estimation, IEEE Transactions on Image Processing, pp.377-388, 2020".
Method 4: tian et al, reference "TDAN: temporally-deformable alignment network for video super-resolution, IEEE Conference on Computer Vision and Pattern Recognition, pp.3360-3369, 2020".
TABLE 1 PSNR/SSIM results on the four test sequences

| Sequence | Bicubic | Method 1 | Method 2 | Method 3 | Method 4 | The invention |
|---|---|---|---|---|---|---|
| calendar | 20.53/0.5678 | 22.18/0.7073 | 22.76/0.7450 | 22.76/0.7461 | 23.01/0.7645 | 23.37/0.7788 |
| city | 25.09/0.5997 | 26.48/0.7235 | 26.96/0.7571 | 26.81/0.7490 | 27.07/0.7696 | 27.30/0.7793 |
| foliage | 23.54/0.5661 | 25.17/0.7021 | 25.52/0.7207 | 25.52/0.7175 | 25.71/0.7278 | 25.83/0.7359 |
| walk | 25.98/0.7985 | 25.17/0.7021 | 28.90/0.8782 | 29.10/0.8831 | 29.75/0.8958 | 29.88/0.8999 |
| Average | 23.78/0.6330 | 25.53/0.7493 | 26.03/0.7753 | 26.04/0.7739 | 26.39/0.7894 | 26.60/0.7985 |
As can be seen from Table 1, the present invention achieves the highest PSNR and SSIM. In the "calendar" frames in fig. 2, the present invention reconstructs the word "MAREE" more clearly than the compared methods. In the "city" frames in fig. 3, the building edges reconstructed by the present invention are straighter. In summary, the reconstruction results of the invention hold clear advantages over the compared methods in both subjective and objective evaluation. The invention is therefore an effective video super-resolution reconstruction method.
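For reference, the PSNR figures in Table 1 follow the standard definition; a minimal sketch (SSIM is considerably more involved and omitted here):

```python
import numpy as np

def psnr(x, y, peak=255.0):
    # peak signal-to-noise ratio in dB between two images of equal shape
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8))
print(round(psnr(a, a + 1.0), 2))   # 48.13 dB for a uniform error of 1/255 of peak
```

Video PSNR/SSIM scores such as those in Table 1 are typically averaged over all frames of a sequence.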
Claims (3)
1. A video super-resolution reconstruction method, characterized by comprising the following steps:

step one: designing and constructing a convolutional neural network for video super-resolution reconstruction, wherein the network consists of a shallow feature extraction part, a deep feature extraction part, a recursive feature fusion part and a reconstruction part; in the shallow feature extraction part, 2N+1 convolution layers respectively extract features from the input video frames; the shallow feature F_{t+N} extracted from the (t+N)-th video frame is:

F_{t+N} = H_SFE(X_{t+N}),  (1)

wherein X_{t+N} is the (t+N)-th input video frame and H_SFE(·) denotes the shallow feature extraction part; shallow feature extraction yields the set of shallow features {F_{t-N}, ..., F_t, ..., F_{t+N}};

in the deep feature extraction part, 2N+1 enhanced deep feature extraction modules perform further feature extraction; the deep feature F^D_{t+N} extracted from the (t+N)-th video frame is:

F^D_{t+N} = H_DFE(F_{t+N}, F_t),  (2)

wherein H_DFE(·) denotes the deep feature extraction part; each video frame corresponds to one enhanced deep feature extraction module, whose inputs are the shallow features extracted from that video frame and the shallow features extracted from the target frame; the enhanced deep feature extraction module exploits intra-frame spatial information and inter-frame temporal information simultaneously, enhancing the representation capability of the extracted deep features; specifically, in the (t+N)-th enhanced deep feature extraction module, the shallow feature F_{t+N} of the (t+N)-th video frame and the shallow feature F_t of the target frame are first concatenated along the channel dimension, and a convolution layer then produces the temporal fusion feature F^TF_{t+N}, which contains the temporal information between the (t+N)-th video frame and the target frame:

F^TF_{t+N} = H_TF([F_{t+N}, F_t]),  (3)

wherein H_TF(·) is a bottleneck convolution layer and [·] denotes the concatenation operation; then F_{t+N} and F^TF_{t+N} are concatenated along the channel dimension; the concatenated features contain two types of information: spatial information within the frame, and temporal information between the (t+N)-th video frame and the target frame; finally, the deep feature F^D_{t+N} of the (t+N)-th video frame is further extracted from the intra-frame spatial information and the inter-frame temporal information:

F^D_{t+N} = H_EH([F_{t+N}, F^TF_{t+N}]),  (4)

wherein H_EH(·) is a bottleneck convolution layer; deep feature extraction yields the set of deep features {F^D_{t-N}, ..., F^D_t, ..., F^D_{t+N}};

the recursive feature fusion part comprises a bottleneck convolution layer and U recursive units; in order to control the number of model parameters, B residual up-down sampling blocks form one recursive unit, and parameters are shared among the recursive units; recursive learning improves network performance by increasing the number of recursive units without adding extra parameters; the residual up-down sampling block adopts an up-down sampling layer as the residual branch of the residual block, and the up-down sampling layer can discover the interdependence between high resolution and low resolution so as to fuse the features better; specifically, in each residual up-down sampling block, upsampling uses a deconvolution layer and downsampling uses a convolution layer, and the stride of each layer equals the magnification factor; the output F_{u,b} of the b-th residual up-down sampling block in the u-th recursive unit is:

F_{u,b} = H↓(H↑(F_{u,b-1})) + F_{u,b-1},  (5)

wherein F_{u,b} and F_{u,b-1} are the output and input of the b-th residual up-down sampling block, and H↑(·) and H↓(·) are the upsampling layer and the downsampling layer, respectively; recursive fusion yields the fused feature F_RFF:

F_RFF = H_RFF([F^D_{t-N}, ..., F^D_t, ..., F^D_{t+N}]),  (6)

wherein H_RFF(·) denotes the recursive feature fusion part;

the reconstruction part comprises an upsampling layer and two convolution layers; a long skip connection is introduced in the reconstruction part so that the network performs global residual learning, reducing the difficulty of training; since the goal is to reconstruct the target frame, F_t serves as the identity mapping branch in residual learning; reconstruction yields the reconstructed target frame Y_t:

Y_t = H_R(F_RFF + F_t),  (7)

wherein H_R(·) denotes the reconstruction part;
step two: constructing a training sample pair in the video data set, and training parameters of the convolutional neural network model constructed in the first step until the network converges;
step three: and (3) inputting a continuous video frame sequence into the trained network model in the step two to obtain a super-resolution reconstruction result.
2. The video super-resolution reconstruction method according to claim 1, wherein in step one, in the deep feature extraction part, each video frame corresponds to one enhanced deep feature extraction module; the inputs of each module are the shallow features extracted from the corresponding video frame and the shallow features extracted from the target frame; the shallow features of the corresponding video frame contain intra-frame spatial information; the shallow features of the corresponding video frame and those of the target frame are fused to obtain a temporal fusion feature containing inter-frame temporal information; the module thereby uses intra-frame and inter-frame information simultaneously, enhancing the representation capability of the extracted deep features.
3. The video super-resolution reconstruction method as claimed in claim 1, wherein in the step one, the residual up-down sampling block in the recursive feature fusion part adopts up-down sampling layers as residual branches of the residual block, and the up-down sampling layers can find the interdependence relationship between the high resolution and the low resolution, so as to better fuse the features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011111270.1A | 2020-10-16 | 2020-10-16 | Video super-resolution reconstruction method
Publications (2)
Publication Number | Publication Date |
---|---|
CN114387161A CN114387161A (en) | 2022-04-22 |
CN114387161B true CN114387161B (en) | 2023-07-07 |
Family
ID=81193652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011111270.1A Active CN114387161B (en) | 2020-10-16 | 2020-10-16 | Video super-resolution reconstruction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114387161B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117422614B (en) * | 2023-12-19 | 2024-03-12 | 华侨大学 | Single-frame image super-resolution method and device based on hybrid feature interaction transducer |
CN117575915A (en) * | 2024-01-16 | 2024-02-20 | 闽南师范大学 | Image super-resolution reconstruction method, terminal equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109903226A (en) * | 2019-01-30 | 2019-06-18 | 天津城建大学 | Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks |
CN109949217A (en) * | 2017-12-20 | 2019-06-28 | 四川大学 | Video super-resolution method for reconstructing based on residual error study and implicit motion compensation |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11222415B2 (en) * | 2018-04-26 | 2022-01-11 | The Regents Of The University Of California | Systems and methods for deep learning microscopy |
US11164067B2 (en) * | 2018-08-29 | 2021-11-02 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging |
- 2020-10-16: Application CN202011111270.1A filed in CN; granted as CN114387161B (active)
Non-Patent Citations (5)
Title |
---|
Ren Chao; Wu Wei; Huang Zhengkai; Jiao Yuanyuan. Application of RBF neural networks based on the AIC criterion to GPS elevation fitting. Science of Surveying and Mapping, 2013(02), sections 1-3.
He Xinying; Wu Liming; Zheng Gengzhe; Wu Jiayi. Computer-aided breast cancer diagnosis based on Inception-ResNet-v2. Automation & Information Engineering, 2020(01), sections 1-3.
Zhou Hang; He Xiaohai; Wang Zhengyong; Xiong Shuhua; Karn Pradeep. Compressed video super-resolution reconstruction using a dual-network structure. Telecommunication Engineering, 2020(01), sections 1-3.
Cao Hongyu; Liu Dongmei; Fu Xiuhua; Zhang Jing; Yue Pengfei. Super-resolution reconstruction based on CT images. Journal of Changchun University of Science and Technology (Natural Science Edition), 2020(01), sections 1-3.
Hu Rui; He Xiaohai; Teng Qizhi; Qing Linbo; Liao Junbin. Brain glioma segmentation algorithm using a 3D convolutional network with attention. Computer Engineering and Applications, 2020(12), sections 1-3.
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |