CN110674926A - Progressive dense network of nested structures for target reconstruction - Google Patents

Progressive dense network of nested structures for target reconstruction

Info

Publication number
CN110674926A
Authority
CN
China
Prior art keywords
dense
local
global
network
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910840115.4A
Other languages
Chinese (zh)
Inventor
李隆熹
王小娥
马丽红
韦岗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201910840115.4A
Publication of CN110674926A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses a progressive dense network of nested structures for target reconstruction, which comprises local dense blocks and global dense blocks. The local dense block comprises local dense connections, local feature fusion, and local residual learning: the local dense connections feed the output of each convolution unit in the local dense block to all subsequent convolution units; local feature fusion performs hierarchical feature fusion learning over the convolution units; local residual learning constructs a network with sparser input data. The global dense block comprises global dense connections, global feature fusion, and global residual learning: the global dense connections feed the output of each local dense block to all the following local dense blocks; global feature fusion performs hierarchical feature fusion learning over the local dense blocks; global residual learning likewise constructs a network with sparser input data. The invention makes full use of hierarchical features of different scales and alleviates the vanishing-gradient and exploding-gradient problems caused by network deepening.

Description

Progressive dense network of nested structures for target reconstruction
Technical Field
The invention relates to the field of convolutional neural networks, in particular to a progressive dense network of nested structures for target reconstruction.
Background
Single Image Super-Resolution (SISR) is a typical target reconstruction technique that recovers a High-Resolution (HR) image from a Low-Resolution (LR) image. It is used primarily in computer vision tasks such as security and surveillance imaging, medical imaging, and image generation. Because the mapping from LR to HR is not unique, image super-resolution is an ill-posed inverse problem: when the magnification factor is large, the restored super-resolution (SR) image tends to lose high-frequency details. Most current SR techniques assume that the high-frequency information of an image can be accurately predicted from its low-frequency content. The key challenge for SR techniques is therefore how to collect as much useful context information as possible from the LR image, so that enough information is captured to recover the high-frequency details of the HR image.
Recently, super-resolution reconstruction based on convolutional networks has made tremendous progress. In 2014, Dong et al. proposed SRCNN, the first CNN applied to super-resolution reconstruction, though it contained only three convolutional layers. Two years later, Dong et al. proposed FSRCNN, a convolutional network with a funnel structure whose depth increased to five layers. In 2016, Kim et al., inspired by residual networks, proposed VDSR and DRCN in turn, increasing the network depth to 20 layers; these networks eased the training of deep models through skip connections or recursive supervision with gradient clipping, and their results improved markedly over SRCNN. This demonstrates that the depth and width of the network are key factors in the SR domain: the deeper and wider the network, the better the performance. In 2017, Lim et al. constructed a very wide network, EDSR, and a very deep one, MDSR (about 165 layers), using simplified residual blocks. The large performance gains of EDSR and MDSR further prove that network depth plays a crucial role in image SR performance. However, as the network deepens, training becomes difficult, and gradients may explode or vanish.
Moreover, deepening the network does not mean that convolutional features closer to the end of the network are necessarily better; rather, the feature information of each layer is equally important, the difference being that the features in each convolutional layer have different receptive fields. At the same time, since the objects in an image have different scales, perspectives, and aspect ratios, the hierarchical features of each convolutional layer provide additional information for reconstruction in very deep networks. However, most previous Deep Learning (DL) based methods, such as VDSR, LapSRN, and EDSR, simply stack a series of convolutional layers and do not use the feature information of each convolutional layer for reconstruction. In MemNet, the information of previous memory blocks is fed into subsequent memory blocks; however, MemNet does not input the original LR image directly into the network, but first interpolates it to the desired size before multi-level feature extraction. This interpolation not only increases computational complexity but also loses some detail of the original LR image, making the reconstructed SR image overly smooth for lack of detail information. Tong et al. introduced dense blocks in SRDenseNet; since each layer in a dense block can be used directly by the following network, the number of output features of a dense block grows linearly with the growth rate, so the growth rate was limited to a small value, although a larger growth rate could further improve network performance. At the same time, the low growth rate narrows the network width, while training a network with a larger width becomes more difficult.
Disclosure of Invention
The invention aims to overcome two problems: most CNN-based target reconstruction methods do not selectively utilize hierarchical features of different scales, and as networks deepen, training becomes difficult and gradients may explode or vanish. To this end, it provides a progressive dense network of a nested structure for target reconstruction, comprising Local Dense Blocks (LDBs) and a Global Dense Mechanism (GDM).
The purpose of the invention can be realized by the following technical scheme:
a progressive dense network of nested structures for object reconstruction, comprising locally dense blocks and globally dense blocks;
the local dense block comprises local dense connection, local feature fusion and local residual learning;
the local dense connection is used for feeding the output of each convolution unit in the local dense block to all subsequent convolution units in the same block;
the local feature fusion is used for performing feature fusion learning on the input of the current local dense block together with the retained outputs of the preceding convolution units, and adaptively selecting features;
local residual learning is used for constructing a network with more sparse input data, so that the network is easy to train;
setting a bottleneck layer before each local dense block; the local dense block contains convolution units, and each convolution unit comprises a convolution layer and a ReLU layer;
the global dense block comprises global dense connection, global feature fusion and global residual learning;
the global dense connection is used for inputting the output of each local dense block to all the following local dense blocks;
the global feature fusion is used for performing feature fusion learning on the retained outputs of all preceding local dense blocks, and adaptively selecting features;
and global residual learning is used for constructing a network with more sparse input data, so that the network is easy to train.
In the invention, the sparsity referred to in local and global residual learning is sparsity of the input data to be learned: residual learning means the network learns the difference between input and output. Since input and output share the same low-frequency information, the difference matrix contains a large number of zero values and is therefore a sparse matrix.
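As a toy illustration (not part of the patent; synthetic data assumed, Python with numpy), the following sketch shows why the difference between an input and an output that share low-frequency information is a sparse matrix:

import numpy as np

# Shared low-frequency content plus a few high-frequency details.
low_freq = np.sin(np.linspace(0, 2 * np.pi, 100))  # common to input and output
details = np.zeros(100)
details[[10, 40, 75]] = [0.5, -0.3, 0.8]           # sparse high-frequency detail
target = low_freq + details                        # what the network should output

# Residual learning predicts only the difference, which is mostly zeros.
residual = target - low_freq
print(np.count_nonzero(residual), "of", residual.size, "entries are non-zero")  # 3 of 100

Because the residual is mostly zeros, the quantity the network must learn is sparse, which eases training.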
Different from a common dense block, the local dense block provided by the invention adds Local Feature Fusion (LFF) and local residual learning on top of dense connections; meanwhile, each convolution unit removes the BN layer and comprises only a convolution layer and a ReLU layer.
Specifically, for the local dense connection, assume F′_{d-1} and F_d respectively denote the input and output of the d-th local dense block, each having G_0 feature maps. The output of the c-th convolution unit in the d-th local dense block is expressed as:
F_{d,c} = σ(W_{d,c}[F′_{d-1}, F_{d,1}, ..., F_{d,c-1}] + B_{d,c})    (1)
where σ denotes the ReLU activation function, W_{d,c} and B_{d,c} respectively denote the weight and bias of the c-th convolution unit in the d-th local dense block, and [F′_{d-1}, F_{d,1}, ..., F_{d,c-1}] denotes the concatenation of the features from all preceding local dense blocks with the output feature maps of the 1st to (c−1)-th convolution units in the d-th local dense block.
Suppose each F_{d,c} has G feature maps (G is the growth rate) and C_d denotes the number of convolution units contained in the d-th local dense block; then the feature map reaching the last convolution unit of the d-th block through the local dense connections has G_0 + (C_d − 1) × G channels. Note that this count is the intermediate feature number inside the local dense connections, not the final output feature number of the LDB.
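As a worked example with assumed values (G_0 = 64, G = 32, C_d = 5): the last convolution unit of the block receives 64 + (5 − 1) × 32 = 192 feature maps through the local dense connections, while the local feature fusion layer then sees all C_d unit outputs plus the block input, i.e., 64 + 5 × 32 = 224 feature maps, before compressing them.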
The number of feature maps finally produced by each local dense block is linearly related to the growth rate. If the feature maps of the (d−1)-th local dense block were directly concatenated into the d-th local dense block, the network would grow too large and the growth rate would have to be limited, whereas a large growth rate is critical to network performance. To avoid this, Local Feature Fusion (LFF) is introduced to adaptively fuse the states of all preceding local dense blocks with the features of all convolution units in the current local dense block, reducing the size of the feature map.
Specifically, for local feature fusion, a 1 × 1 convolution layer is adopted, and the calculation formula of the d-th local dense block local feature fusion is as follows:
F_{d,LF} = H^d_{LFF}([F′_{d-1}, F_{d,1}, ..., F_{d,C_d}])    (2)
where H^d_{LFF} denotes the local feature fusion function of the d-th LDB.
Specifically, for local residual learning, the final output of the local dense block is obtained through residual learning, and the final output of the d-th local dense block is calculated by the following formula:
F_d = F′_{d-1} + F_{d,LF}    (3)
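For illustration only, a minimal PyTorch sketch of one local dense block implementing equations (1)-(3) follows. The framework, channel counts, and unit count are assumptions of this sketch, not values fixed by the invention:

import torch
import torch.nn as nn

class ConvUnit(nn.Module):
    # One convolution unit of eq. (1): a 3x3 convolution followed by ReLU (no BN).
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class LocalDenseBlock(nn.Module):
    # Local dense connection + local feature fusion (eq. 2) + local residual (eq. 3).
    def __init__(self, g0, growth_rate, num_units):
        super().__init__()
        self.units = nn.ModuleList(
            ConvUnit(g0 + c * growth_rate, growth_rate) for c in range(num_units)
        )
        # LFF: 1x1 convolution over [F'_{d-1}, F_{d,1}, ..., F_{d,Cd}], back to g0 maps.
        self.lff = nn.Conv2d(g0 + num_units * growth_rate, g0, kernel_size=1)

    def forward(self, x):
        features = [x]  # F'_{d-1}, the block input
        for unit in self.units:
            # Each unit sees the concatenation of the input and all earlier unit outputs.
            features.append(unit(torch.cat(features, dim=1)))
        f_lf = self.lff(torch.cat(features, dim=1))  # F_{d,LF}
        return x + f_lf  # F_d = F'_{d-1} + F_{d,LF}

The residual addition requires the LFF output to have the same number of channels as the block input, which is why the 1 × 1 convolution compresses back to g0 feature maps.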
further, the number of convolution layers (convolution units) in the local dense blocks increases linearly from the input end of the network to the output end.
Specifically, a Bottleneck Layer (BL) containing only 1 × 1 convolution layers is introduced before each local dense block in order to adaptively select features from [F_0, F_1, ..., F_{d-1}] and control the number of input features of each local dense block.
Thus, the actual input to the (d+1)-th local dense block is expressed as:
F′_d = H_BL([F_0, F_1, ..., F_d])    (4)
where H_BL denotes the bottleneck layer function and [F_0, F_1, ..., F_d] denotes the stack of the original features F_0 with the output features F_1, ..., F_d of the first d local dense blocks; the corresponding symbols can be found in the network structure.
Specifically, corresponding to local feature fusion, Global Feature Fusion (GFF) is proposed to fuse the local features of all the local dense blocks and extract the dense feature F_{DF}:
F_{DF} = H_{GFF}([F_0, F_1, ..., F_D])    (5)
where [F_0, F_1, ..., F_D] denotes the concatenation of the original features F_0 with the D feature maps generated by the local dense blocks, and H_{GFF} is a 1 × 1 convolution layer.
Specifically, global residual learning obtains the final result by residual learning:
F_{GF} = F_0 + F_{DF}    (6)
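Continuing the sketch above (same assumptions, PyTorch), the global dense mechanism of equations (4)-(6) could be wired as follows; the bottleneck layers and global feature fusion are 1 × 1 convolutions, and the block/unit counts are illustrative:

class GlobalDenseNetwork(nn.Module):
    def __init__(self, g0=64, growth_rate=32, num_blocks=4, base_units=3):
        super().__init__()
        self.bottlenecks = nn.ModuleList()
        self.blocks = nn.ModuleList()
        for d in range(num_blocks):
            # Bottleneck layer (eq. 4): 1x1 conv selecting features from [F_0, ..., F_d].
            self.bottlenecks.append(nn.Conv2d(g0 * (d + 1), g0, kernel_size=1))
            # Number of convolution units grows linearly toward the output end.
            self.blocks.append(LocalDenseBlock(g0, growth_rate, base_units + d))
        # Global feature fusion (eq. 5): 1x1 conv over F_0 and all block outputs.
        self.gff = nn.Conv2d(g0 * (num_blocks + 1), g0, kernel_size=1)

    def forward(self, f0):
        states = [f0]  # [F_0]
        for bottleneck, block in zip(self.bottlenecks, self.blocks):
            f_in = bottleneck(torch.cat(states, dim=1))  # F'_d per eq. (4)
            states.append(block(f_in))  # global dense connection
        f_df = self.gff(torch.cat(states, dim=1))  # F_DF per eq. (5)
        return f0 + f_df  # F_GF = F_0 + F_DF, eq. (6)

Every local dense block output is retained in states and re-fed to all following blocks through the bottlenecks, which is what gives each convolution layer a short path to the output end.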
compared with the prior art, the invention has the following beneficial effects:
1. Dense connections between different convolution layers are realized by the Local Dense Blocks (LDBs), and dense connections between different local dense blocks are realized by the Global Dense Mechanism (GDM); together the LDBs and the GDM form a nested dense connection structure. This makes full use of hierarchical features of different scales, and the nested dense connections give every convolution layer in the network a very short path to the output end, alleviating the vanishing-gradient and exploding-gradient problems caused by network deepening.
2. Compared with a traditional dense network, the invention does not directly retain all hierarchical features: it adds local/global feature fusion after all local/global dense connections and introduces a bottleneck layer before each local dense block, realizing fusion learning of hierarchical features and adaptively selecting effective features. This breaks the network's limitation on the growth rate and enables a higher growth rate.
3. In a conventional dense network, all dense blocks have the same number of convolution layers; in the present invention, the number of convolution layers per local dense block is not fixed but increases linearly from the input end to the output end. Convolution layers further back in the network have larger receptive fields and may therefore encode more global, higher semantic-level features; mapping such global or high-level features is more complicated, so more convolution layers are needed to extract them.
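A minimal sketch of such a linear schedule (values assumed for illustration only):

# Convolution units per local dense block, growing linearly from input to output end.
base_units, slope, num_blocks = 3, 1, 4
units_per_block = [base_units + slope * d for d in range(num_blocks)]
print(units_per_block)  # [3, 4, 5, 6]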
Drawings
Fig. 1 is a schematic structural diagram of a nested-structured progressive dense network for object reconstruction.
Fig. 2 is a block diagram of a local dense block in a dense network of the present invention.
Fig. 3 is a structural view of a conventional local dense block.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
Fig. 1 is a schematic structural diagram of a progressive dense network of nested structures for object reconstruction, the dense network including local dense blocks and global dense blocks;
fig. 2 and 3 are schematic structural diagrams of a local dense block and a conventional dense block according to the present invention, respectively. The local dense block in the invention comprises local dense connection, local feature fusion and local residual learning;
the local dense connection is used for feeding the output of each convolution unit in the local dense block to all subsequent convolution units in the same block;
the local feature fusion is used for performing feature fusion learning on the input of the current local dense block together with the retained outputs of the preceding convolution units, and adaptively selecting features;
local residual learning is used for constructing a network with sparser input data;
setting a bottleneck layer before each local dense block; the local dense block contains convolution units, and each convolution unit comprises a convolution layer and a ReLU layer;
the global dense block comprises global dense connection, global feature fusion and global residual learning;
the global dense connection is used for inputting the output of each local dense block to all the following local dense blocks;
the global feature fusion is used for performing feature fusion learning on the retained outputs of all preceding local dense blocks, and adaptively selecting features;
global residual learning builds a more sparse network of input data.
Specifically, for the local dense connection, assume F′_{d-1} and F_d respectively denote the input and output of the d-th local dense block, each having G_0 feature maps. The output of the c-th convolution unit in the d-th local dense block is expressed as:
F_{d,c} = σ(W_{d,c}[F′_{d-1}, F_{d,1}, ..., F_{d,c-1}] + B_{d,c})    (1)
where σ denotes the ReLU activation function, W_{d,c} and B_{d,c} respectively denote the weight and bias of the c-th convolution unit in the d-th local dense block, and [F′_{d-1}, F_{d,1}, ..., F_{d,c-1}] denotes the concatenation of the features from all preceding local dense blocks with the output feature maps of the 1st to (c−1)-th convolution units in the d-th local dense block.
Suppose each F_{d,c} has G feature maps (G is the growth rate) and C_d denotes the number of convolution units contained in the d-th local dense block; then the feature map reaching the last convolution unit of the d-th block through the local dense connections has G_0 + (C_d − 1) × G channels.
The number of feature maps finally produced by each local dense block is linearly related to the growth rate. If the feature maps of the (d−1)-th local dense block were directly concatenated into the d-th local dense block, the network would grow too large and the growth rate would have to be limited, whereas a large growth rate is critical to network performance. To avoid this, local feature fusion is introduced to adaptively fuse the states of all preceding local dense blocks with the features of all convolution units in the current local dense block, reducing the size of the feature map.
Specifically, for local feature fusion, a 1 × 1 convolution layer is adopted, and the calculation formula of the d-th local dense block local feature fusion is as follows:
F_{d,LF} = H^d_{LFF}([F′_{d-1}, F_{d,1}, ..., F_{d,C_d}])    (2)
where H^d_{LFF} denotes the local feature fusion function of the d-th LDB.
specifically, for local residual learning, the final output of the local dense block is obtained through residual learning, and the final output of the d-th local dense block is calculated by the following formula:
F_d = F′_{d-1} + F_{d,LF}    (3)
further, the number of convolution layers (convolution units) in the local dense blocks increases linearly from the input end of the network to the output end.
Specifically, a Bottleneck Layer (BL) containing only 1 × 1 convolution layers is introduced before each local dense block in order to adaptively select features from [F_0, F_1, ..., F_{d-1}] and control the number of input features of each local dense block.
Specifically, the actual input to the (d+1)-th local dense block is expressed as:
F′_d = H_BL([F_0, F_1, ..., F_d])    (4)
where H_BL denotes the bottleneck layer function and [F_0, F_1, ..., F_d] denotes the stack of the original features F_0 with the output features F_1, ..., F_d of the first d local dense blocks; the corresponding symbols can be found in the network structure.
Specifically, corresponding to local feature fusion, Global Feature Fusion (GFF) is proposed to fuse the local features of all the local dense blocks and extract the dense feature F_{DF}:
F_{DF} = H_{GFF}([F_0, F_1, ..., F_D])    (5)
where [F_0, F_1, ..., F_D] denotes the concatenation of the original features F_0 with the D feature maps generated by the local dense blocks, and H_{GFF} is a 1 × 1 convolution layer.
Specifically, for global residual learning, by residual learning, the final result is obtained:
F_{GF} = F_0 + F_{DF}    (6)
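As a usage sketch under the same assumptions as the code above (PyTorch; the shallow feature extractor and upsampling stage of a full super-resolution pipeline are omitted, so f0 here stands for the original features F_0 after an initial convolution):

net = GlobalDenseNetwork(g0=64, growth_rate=32, num_blocks=4, base_units=3)
f0 = torch.randn(1, 64, 48, 48)  # one 64-channel 48x48 feature map
out = net(f0)
print(out.shape)  # torch.Size([1, 64, 48, 48]) -- F_GF keeps the input size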
the above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (9)

1. A progressive dense network of nested structures for object reconstruction comprising locally dense blocks and globally dense blocks;
the local dense block comprises local dense connection, local feature fusion and local residual learning;
the local dense connection is used for feeding the output of each convolution unit in the local dense block to all subsequent convolution units in the same block;
the local feature fusion is used for performing feature fusion learning on the input of the current local dense block together with the retained outputs of the preceding convolution units, and adaptively selecting features;
local residual learning is used for constructing a network with sparser input data;
setting a bottleneck layer before each local dense block; the local dense block contains convolution units, and each convolution unit comprises a convolution layer and a ReLU layer;
the global dense block comprises global dense connection, global feature fusion and global residual learning;
the global dense connection is used for inputting the output of each local dense block to all the following local dense blocks;
the global feature fusion is used for performing feature fusion learning on the retained outputs of all preceding local dense blocks, and adaptively selecting features;
global residual learning builds a more sparse network of input data.
2. A progressive dense network of nested structures for object reconstruction as claimed in claim 1 wherein for locally dense connections the output of the c-th convolution unit in the d-th locally dense block is represented as:
F_{d,c} = σ(W_{d,c}[F′_{d-1}, F_{d,1}, ..., F_{d,c-1}] + B_{d,c})    (1)
where σ denotes the ReLU activation function, W_{d,c} and B_{d,c} respectively denote the weight and bias of the c-th convolution unit in the d-th local dense block, and [F′_{d-1}, F_{d,1}, ..., F_{d,c-1}] denotes the concatenation of the features from all preceding local dense blocks with the output feature maps of the 1st to (c−1)-th convolution units in the d-th local dense block.
3. The progressive dense network of nested structures for object reconstruction of claim 1, wherein for local feature fusion a 1 × 1 convolution layer is used, and the calculation formula of the d-th local dense block's local feature fusion is:
F_{d,LF} = H^d_{LFF}([F′_{d-1}, F_{d,1}, ..., F_{d,C_d}])    (2)
where H^d_{LFF} denotes the local feature fusion function of the d-th local dense block, and [F′_{d-1}, F_{d,1}, ..., F_{d,C_d}] denotes the concatenation of the features of all preceding local dense blocks with the output feature maps of the convolution units in the d-th local dense block.
4. The progressive dense network of nested structures for object reconstruction of claim 1, wherein for local residual learning, the final output of the d-th local dense block is calculated as:
F_d = F′_{d-1} + F_{d,LF}    (3)
where F′_{d-1} denotes the input of the d-th local dense block and F_{d,LF} denotes the local feature fusion output of the d-th local dense block.
5. The progressive dense network of nested structures for object reconstruction of claim 1, wherein the number of convolutional layers in the convolutional units of the locally dense block increases linearly from input to output.
6. A progressive dense network of nested structures for object reconstruction as claimed in claim 1 wherein each locally dense block is preceded by a bottleneck layer containing only 1 x 1 convolutional layers for adaptively selecting features and controlling the number of input features per locally dense block.
7. The progressive dense network of nested structures for object reconstruction of claim 1, wherein the actual inputs of the d +1 th locally dense block are represented as:
F′_d = H_BL([F_0, F_1, ..., F_d])    (4)
where H_BL denotes the bottleneck layer function and [F_0, F_1, ..., F_d] denotes the stack of the original features F_0 with the output features F_1, ..., F_d of the first d local dense blocks.
8. The progressive dense network of nested structures for object reconstruction of claim 1, wherein for global feature fusion, the local features of all local dense blocks are fused to extract the dense feature F_{DF}, calculated as:
F_{DF} = H_{GFF}([F_0, F_1, ..., F_D])    (5)
where [F_0, F_1, ..., F_D] denotes the concatenation of the original features F_0 with the D feature maps generated by the local dense blocks, and H_{GFF} is a 1 × 1 convolution layer.
9. The progressive dense network of nested structures for object reconstruction of claim 1, wherein for global residual learning, the calculation formula to obtain the final result is:
F_{GF} = F_0 + F_{DF}    (6)
where F_0 denotes the original input and F_{DF} denotes the dense feature extracted by global feature fusion.
CN201910840115.4A 2019-09-06 2019-09-06 Progressive dense network of nested structures for target reconstruction Pending CN110674926A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910840115.4A CN110674926A (en) 2019-09-06 2019-09-06 Progressive dense network of nested structures for target reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910840115.4A CN110674926A (en) 2019-09-06 2019-09-06 Progressive dense network of nested structures for target reconstruction

Publications (1)

Publication Number Publication Date
CN110674926A (en) 2020-01-10

Family

ID=69076043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910840115.4A Pending CN110674926A (en) 2019-09-06 2019-09-06 Progressive dense network of nested structures for target reconstruction

Country Status (1)

Country Link
CN (1) CN110674926A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190114544A1 (en) * 2017-10-16 2019-04-18 Illumina, Inc. Semi-Supervised Learning for Training an Ensemble of Deep Convolutional Neural Networks
CN107909150A (en) * 2017-11-29 2018-04-13 华中科技大学 Method and system based on block-by-block stochastic gradient descent method on-line training CNN
CN109064398A (en) * 2018-07-14 2018-12-21 深圳市唯特视科技有限公司 A kind of image super-resolution implementation method based on residual error dense network
CN109509149A (en) * 2018-10-15 2019-03-22 天津大学 A kind of super resolution ratio reconstruction method based on binary channels convolutional network Fusion Features
CN110120019A (en) * 2019-04-26 2019-08-13 电子科技大学 A kind of residual error neural network and image deblocking effect method based on feature enhancing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GAO HUANG ET AL.: "Densely Connected Convolutional Networks", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200110