CN114757830B - Image super-resolution reconstruction method based on channel-diffusion double-branch network - Google Patents


Info

Publication number
CN114757830B
CN114757830B (application CN202210488529.7A)
Authority
CN
China
Prior art keywords
convolution
diffusion
layer
channel
convolution layer
Prior art date
Legal status
Active
Application number
CN202210488529.7A
Other languages
Chinese (zh)
Other versions
CN114757830A (en)
Inventor
张铭津
彭晓琪
张鹏
郭杰
李云松
高新波
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN202210488529.7A
Publication of CN114757830A
Application granted
Publication of CN114757830B

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20132: Image cropping
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image super-resolution reconstruction method based on a channel-diffusion double-branch network, which mainly addresses the insufficient texture detail and structural distortion of existing methods. The implementation steps are: construct a training sample set and a test sample set; construct a channel-diffusion double-branch network consisting of a first convolution layer, D channel-diffusion residual modules, a second convolution layer and an up-sampling module connected in sequence, where each channel-diffusion residual module comprises an adaptive convolution module and, connected to it, a channel attention branch and a diffusion branch arranged in parallel; the adaptive convolution module comprises a plurality of convolution layers and a plurality of nonlinear activation layers; the diffusion branch comprises a P-M diffusion layer, a convolution layer and a nonlinear activation layer; the channel attention branch comprises a pooling layer, a plurality of nonlinear activation layers and a plurality of convolution layers; and iteratively train the channel-diffusion double-branch network. The invention obtains clear and accurate super-resolution reconstructed images.

Description

Image super-resolution reconstruction method based on channel-diffusion double-branch network
Technical Field
The invention belongs to the technical field of image processing and relates to an image reconstruction method, in particular to an image super-resolution reconstruction method based on a channel-diffusion double-branch network, which can be used in technical fields such as pedestrian re-identification.
Background
During image acquisition, limitations of the imaging equipment, shooting distance, lighting and other factors often leave the captured picture with low resolution and poor quality. To obtain higher-resolution images, super-resolution reconstruction techniques are typically employed. Image super-resolution reconstruction is a technique that generates high-resolution images from low-resolution images. In fields with strict imaging-quality requirements, such as pedestrian re-identification, an image must not only have higher resolution but also be free of structural distortion and missing edge texture, so as to prevent recognition errors. Current mainstream super-resolution methods fall into three categories: interpolation-based, reconstruction-based and learning-based. Images restored by interpolation exhibit blurring, jagged edges and similar artifacts. Reconstruction-based methods start from a degradation model of the image and extract key information from the low-resolution image to generate a high-resolution image. The main idea of learning-based super-resolution algorithms is to learn the correspondence between low-resolution and high-resolution images and use this correspondence to guide the super-resolution reconstruction.
In recent years deep learning has developed rapidly, and many researchers have combined it with image super-resolution reconstruction to good effect. For example, the patent literature "Single image super-resolution reconstruction system and method" filed by China Jiliang University (application number 202010218624.6, publication number CN111402140A) proposes a single-image super-resolution reconstruction method: an embedding network extracts features from the original low-resolution image through two convolution layers; two cascaded refinement blocks reconstruct high-resolution residual features from the low-resolution features in a coarse-to-fine manner; the reconstructed high-resolution residual features are sent to a reconstruction network and turned into a residual image by a deconvolution operation; and the up-sampled low-resolution image is added to the high-resolution residual image to obtain the final reconstructed high-resolution image. Although this improves the resolution of the reconstructed image, it treats all information in the image identically and does not emphasize the restoration of important high-frequency information such as texture details, structure and spatial position, which limits further improvement of image reconstruction performance.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an image super-resolution reconstruction method based on a channel-diffusion double-branch network, which strengthens the extraction of high-frequency information so as to obtain reconstructed images with more accurate structure and more complete edge details.
In order to achieve the above purpose, the technical scheme of the invention comprises the following steps:
(1) Constructing a training sample set and a test sample set:
(1a) Acquiring N RGB images, and carrying out 1/4 downsampling on each RGB image to obtain N downsampled RGB images, wherein N is more than or equal to 100;
(1b) Cut each of the N RGB images into L×L image blocks, obtaining H image blocks in total, and at the same time cut the downsampled RGB image corresponding to each RGB image into (L/4)×(L/4) image blocks, obtaining H downsampled image blocks. Take each cut image block as the label of the corresponding downsampled image block, select M downsampled image blocks and their labels to form a training sample set R, and form a test sample set E from the remaining downsampled image blocks and their labels, where L ≥ 192 and M ≥ H/2;
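The sample-pair construction of step (1) can be sketched as follows. This is a minimal illustration assuming non-overlapping crops and plain strided subsampling for the 1/4 downsampling (the text does not fix a particular downsampling filter); `make_sample_pairs` is a hypothetical helper name, not from the patent.

```python
import numpy as np

def make_sample_pairs(images, L=192, scale=4):
    """Cut each HR image into LxL patches and pair each with the
    (L/scale)x(L/scale) patch cut from a 1/scale-downsampled copy.
    Each HR patch serves as the label of its downsampled counterpart."""
    pairs = []
    for hr in images:                      # hr: (H, W, 3) array
        lr = hr[::scale, ::scale]          # 1/scale downsampled copy (strided)
        for i in range(0, hr.shape[0] - L + 1, L):
            for j in range(0, hr.shape[1] - L + 1, L):
                hr_patch = hr[i:i + L, j:j + L]
                lr_patch = lr[i // scale:(i + L) // scale,
                              j // scale:(j + L) // scale]
                pairs.append((lr_patch, hr_patch))   # (input, label)
    return pairs
```

Splitting the resulting pairs into the sets R and E is then a simple partition of this list.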
(2) Building a channel-diffusion double-branch network model O:
constructing a channel-diffusion double-branch network model O consisting of a first convolution layer, D channel-diffusion residual modules, a second convolution layer and an up-sampling module connected in sequence; each channel-diffusion residual module comprises an adaptive convolution module and, connected to it, a channel attention branch and a diffusion branch arranged in parallel; the adaptive convolution module comprises a plurality of convolution layers and a plurality of nonlinear activation layers; the diffusion branch comprises a P-M diffusion layer, a convolution layer and a nonlinear activation layer; the channel attention branch comprises a pooling layer, a plurality of nonlinear activation layers and a plurality of convolution layers;
(3) Iterative training is carried out on a channel-diffusion double-branch network model O:
(3a) Initialize the iteration number as s and the maximum iteration number as S, with S ≥ 10000; denote the channel-diffusion double-branch network model of the s-th iteration as O_s, whose weight and bias parameters are w_s and b_s respectively; let s = 1 and O_s = O;
(3b) Take the training sample set R as input to the channel-diffusion double-branch network O; the first convolution layer extracts features from each training sample; the D channel-diffusion residual modules map the extracted n_1 feature maps to n_1 nonlinear feature maps; the second convolution layer extracts features from the n_1 nonlinear feature maps, and these are added element-wise to the n_1 feature maps extracted by the first convolution layer; the up-sampling module up-samples and dimension-transforms the summed maps to obtain M super-resolution reconstructed images, where n_1 is the number of convolution kernels of the first convolution layer;
(3c) Use the L1 norm as the loss function and compute the loss value L_s of O_s from each reconstructed image and its training-sample label; compute the partial derivatives ∂L_s/∂w_s and ∂L_s/∂b_s by the chain rule, and update w_s and b_s accordingly to obtain the channel-diffusion double-branch network model of the current iteration;
(3d) Judge whether s ≥ S; if so, the trained channel-diffusion double-branch network model O is obtained; otherwise let s = s + 1 and return to step (3b);
(4) Obtaining an image super-resolution reconstruction result:
and taking the test sample set E as the input of the trained channel-diffusion double-branch network model O to carry out forward propagation, so as to obtain reconstructed images corresponding to all the test samples.
Compared with the prior art, the invention has the following advantages:
the channel-diffusion double-branch network model O constructed by the invention comprises D channel-diffusion residual modules, each channel-diffusion residual module comprises an adaptive convolution module and channel attention branches and diffusion branches which are connected with the adaptive convolution module and are arranged in parallel, in the process of training the model and acquiring an image super-resolution reconstruction result, the adaptive convolution has a larger receptive field so as to more comprehensively learn the characteristics of the image, the channel attention branches can allocate different weights to different channel characteristics so as to strengthen the expression of important semantics, the diffusion branches can allocate different weights to different spatial information so as to strengthen the recovery of important areas, and experimental results show that the invention can effectively improve the resolution of natural image reconstruction.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a schematic diagram of a channel-diffusion dual-branch network according to the present invention;
FIG. 3 is a schematic diagram of a channel-diffusion residual module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an adaptive convolution module according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a P-M diffusion layer according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific examples.
Referring to fig. 1, the present invention includes the steps of:
step 1: constructing a training sample set and a test sample set:
(1a) Acquiring N RGB images, and carrying out 1/4 downsampling on each RGB image to obtain N downsampled RGB images, wherein N is more than or equal to 100;
(1b) Respectively cutting N RGB images into L multiplied by L image blocks to obtain H image blocks in total, and simultaneously cutting down-sampled RGB images corresponding to each RGB image into L multiplied by L image blocksObtaining H downsampled image blocks, taking each cut image block as a label of the corresponding downsampled cut image block, selecting M downsampled image blocks and the labels corresponding to the M downsampled image blocks to form a training sample set R, and forming a test sample set E by the rest downsampled image blocks and the labels corresponding to the rest downsampled image blocks, wherein L is more than or equal to 192, and M is more than or equal to 1/2H;
step 2: a channel-diffusion double-branch network O is built, and the structure of the channel-diffusion double-branch network O is shown in figure 2;
constructing a channel-diffusion double-branch network consisting of a first convolution layer, D channel-diffusion residual modules, a second convolution layer and an up-sampling module connected in sequence; each channel-diffusion residual module comprises an adaptive convolution module and, connected to it, a channel attention branch and a diffusion branch arranged in parallel, where D ≥ 10 (D = 10 in this embodiment); the adaptive convolution module comprises a plurality of convolution layers and a plurality of nonlinear activation layers; the diffusion branch comprises a P-M diffusion layer, a convolution layer and a nonlinear activation layer; the channel attention branch comprises a pooling layer, a plurality of nonlinear activation layers and a plurality of convolution layers;
the structure of the channel-diffusion residual module in this embodiment is shown in fig. 3;
let the input of the channel-diffusion residual block be M D-1 Firstly, generating a group of finer features, namely W, through an adaptive convolution; and (3) inputting W into the attention branches and the diffusion branches of the channel simultaneously, respectively generating two groups of attention weighted feature graphs, carrying out weighting operation on the two groups of attention weighted feature graphs and W to recalibrate the features, and adding the calibrated features and W to obtain the output of the channel-diffusion residual error module.
The structure of the adaptive convolution module in this embodiment is shown in fig. 4;
the self-adaptive convolution specific structure comprises two branches which are arranged in parallel, wherein the specific structure of a first branch comprises a third convolution layer, a first nonlinear activation function, a weighted image generation module, a fourth convolution layer, a fifth convolution layer and a second nonlinear activation function which are arranged in parallel, wherein the third convolution layer, the first nonlinear activation function, the weighted image generation module, the fourth convolution layer, the fifth convolution layer and the second nonlinear activation function are sequentially connected, the weighted image generation module comprises a sixth convolution layer, a seventh convolution layer and the third nonlinear activation function which are arranged in parallel, and the specific structure of the second branch comprises an eighth convolution layer, a fourth nonlinear activation function, a ninth convolution layer and the fifth nonlinear activation function which are sequentially connected; in the weighted image generation module, the feature image is downsampled before the sixth convolution layer to obtain an image with smaller size, so that the convolution layer pays attention to larger space information, more complete relevant features are obtained, the feature image is upsampled to obtain an image with the same size as the input feature image, the feature image is upsampled before the seventh convolution layer to obtain an image with smaller size, so that the convolution layer pays attention to detail information, finer features are obtained, the feature image is downsampled to obtain an image with the same size as the input feature image, the two images are added, the weight is calculated on the added image through a Sigmoid function, and the feature image output by the fourth convolution layer is recalibrated to obtain more complete features.
The structure of the P-M diffusion layer of this embodiment is shown in fig. 5;
inspired by P-M diffusion, a P-M diffusion layer is designed, which is an improved residual block, and the input tensor is set as W, and the P-M diffusion equation is as follows:
wherein For diffusion coefficient, t is diffusion step length, k is shapeForm definition constant (L)>Can be expressed as:
where the constant k is a threshold. It is easy to see that in flat or smooth regions with small gradient |∇W| the diffusion coefficient c(|∇W|) approaches 1, while in regions rich in texture or structural detail c(|∇W|) approaches zero. With such spatially varying diffusion coefficients, the P-M diffusion mechanism can preserve image detail while removing noise.
The following identity holds:

W_αα + W_ββ = W_xx + W_yy    (4)

where W_αα and W_ββ are the second partial derivatives of W along the image gradient direction and along the image feature (edge) direction perpendicular to it, and W_xx and W_yy are the second partial derivatives of W along the coordinate axes x and y.
In addition, W_ββ can be expressed as:

W_ββ = (W_y² W_xx − 2 W_x W_y W_xy + W_x² W_yy) / (W_x² + W_y²)    (5)

where W_x and W_y are the first partial derivatives of W in the x and y directions, and W_xy is the partial derivative of W_x in the y direction.
Using formula (4), W_xx + W_yy in formula (2) can be replaced by W_αα + W_ββ, which gives:

∂W/∂t = c(|∇W|)(W_αα + W_ββ)    (6)
the gradient mode is larger near the edge, and the gradient direction is not smoothed as much as possible in order to better preserve the edge. Thus let W αα The coefficient is zero, i.eIf the diffusion step Δt is set to 1, it is obtained after substitution into equation (6):
the above equation can be expressed as a residual learning block because the right side of the equation can be considered as Δw. In each block ΔW is added to the input tensor W i 。W x 、W y 、W xy and />Derived from the twelfth, thirteenth, fourteenth, fifteenth and sixteenth convolution layers, respectively, the residual result is calculated according to equation (7).
W passes through the P-M diffusion layer to obtain W + ΔW, which then passes through a convolution layer with weights Ω_DA; a sigmoid function σ(·) rescales the result to [0, 1] to obtain the attention map, as shown in the following formula:

M_DA = σ(Ω_DA(W + ΔW))    (8)
where M_DA is the DA mask used to spatially recalibrate W:

W_DA = f_DA(W, M_DA)    (9)

where f_DA(·) denotes element-wise multiplication of the input W by the obtained DA map, and W_DA is the result recalibrated by the DA module. Each M_DA(i, j) is the DA weight corresponding to spatial position (i, j) of W.
Step 3: iterative training is carried out on the channel-diffusion double-branch network O:
(3a) Initialize the iteration number as s and the maximum iteration number as S, with S ≥ 10000; denote the channel-diffusion double-branch network model of the s-th iteration as O_s, whose weight and bias parameters are w_s and b_s respectively; let s = 1 and O_s = O;
(3b) Take the training sample set R as input to the channel-diffusion double-branch network O; the first convolution layer extracts features from each training sample; the D channel-diffusion residual modules map the extracted n_1 feature maps; the second convolution layer extracts features from the n_1 nonlinear feature maps obtained by the mapping, and these are added element-wise to the n_1 feature maps extracted by the first convolution layer. The up-sampling module comprises a PixelShuffle operation and a convolution layer: the PixelShuffle up-samples the n_1 summed feature maps by a factor of 4, and the number of channels of the convolution kernel is then set so that the convolution layer reconstructs the image; a dimension transformation of the reconstructed image yields the super-resolution reconstructed image, where n_1 is the number of convolution kernels of the first and second convolution layers, and M images are processed in this way;
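The PixelShuffle rearrangement used by the up-sampling module can be written out explicitly; this sketch follows the standard sub-pixel convolution semantics (as in PyTorch's `nn.PixelShuffle`), not code from the patent, and the 4x enlargement corresponds to r = 4 (or two r = 2 shuffles).

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r): each group
    of r*r channels is scattered into an r x r spatial block, trading
    channel depth for spatial resolution."""
    C2, H, W = x.shape
    C = C2 // (r * r)
    x = x.reshape(C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)     # -> (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)
```

For r = 2, a 4-channel 2x2 input becomes a single 4x4 channel whose top-left 2x2 block interleaves the first pixel of each input channel.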
(3c) Use the L1 norm as the loss function: compute the loss value L_s of O_s from each reconstructed image and its training-sample label,

L_s = ||Î − I||_1

then compute the partial derivatives ∂L_s/∂w_s and ∂L_s/∂b_s by the chain rule and update w_s and b_s by:

w_s' = w_s − l_r · ∂L_s/∂w_s
b_s' = b_s − l_r · ∂L_s/∂b_s

where Î denotes a reconstructed image, I denotes the label of the corresponding sample in the training sample set, w_s and b_s denote the weights and biases of all learnable parameters of O_s, w_s' and b_s' denote the updated learnable parameters, l_r denotes the learning rate, L_s is the loss function, and ∂ denotes partial differentiation.
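The L1 loss and the gradient-descent update of step (3c) reduce to a few lines; this is an illustrative scalar/array sketch of the update rule, not the training loop itself.

```python
import numpy as np

def l1_loss(pred, label):
    """Mean L1-norm loss between a reconstructed image and its label."""
    return np.mean(np.abs(pred - label))

def sgd_update(w, b, dLdw, dLdb, lr):
    """Move the parameters against the loss gradient, scaled by the
    learning rate l_r, as in the update formulas of step (3c)."""
    return w - lr * dLdw, b - lr * dLdb
```

In practice the gradients come from automatic differentiation (the document's software platform is PyTorch 1.7.1), so these formulas are applied per parameter tensor.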
(3d) Judge whether s ≥ S; if so, the trained channel-diffusion double-branch network model O is obtained; otherwise let s = s + 1 and return to step (3b);
step 4: obtaining an image reconstruction result:
and taking the test sample set E as the input of the trained channel-diffusion double-branch network model O to carry out forward propagation, so as to obtain reconstructed images corresponding to all the test samples.
The technical effects of the invention are further illustrated by the following simulation experiments.
1. Simulation conditions and content:
the hardware platform of the simulation experiment is as follows: the processor is an Intel (R) Core i9-9900K CPU, the main frequency is 3.6GHz, the memory is 32GB, and the display card is NVIDIA GeForce RTX 2080Ti. The software platform of the simulation experiment is as follows: ubuntu 16.04 operating system, python version 3.7, pytorch version 1.7.1.
The RGB image dataset used in the simulation experiment is DIV2K: the training set is the DIV2K dataset, and the test set is the Set5 dataset.
On the Set5 test set, the peak signal-to-noise ratio of the prior art is 37.78 dB, while that of the present invention is 38.14 dB; the results are shown in Table 1. Compared with the prior art, the invention markedly improves the peak signal-to-noise ratio.
TABLE 1

Method          PSNR       SSIM
Prior Art       37.78 dB   0.9042
The invention   38.14 dB   0.9612
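For reference, the PSNR figures in Table 1 follow the standard definition, which can be computed as below (assuming pixel values normalized so the peak is 1.0; the patent does not spell out its evaluation code).

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

A uniform error of 0.1 on a unit-peak image gives an MSE of 0.01 and hence exactly 20 dB, a quick sanity check of the formula.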

Claims (3)

1. The image super-resolution reconstruction method based on the channel-diffusion double-branch network is characterized by comprising the following steps of:
(1) Constructing a training sample set and a test sample set:
(1a) Acquiring N RGB images, and carrying out 1/4 downsampling on each RGB image to obtain N downsampled RGB images, wherein N is more than or equal to 100;
(1b) Cut each of the N RGB images into L×L image blocks, obtaining H image blocks in total, and at the same time cut the downsampled RGB image corresponding to each RGB image into (L/4)×(L/4) image blocks, obtaining H downsampled image blocks. Take each cut image block as the label of the corresponding downsampled image block, select M downsampled image blocks and their labels to form a training sample set R, and form a test sample set E from the remaining downsampled image blocks and their labels, where L ≥ 192 and M ≥ H/2;
(2) Constructing a channel-diffusion double-branch network O:
constructing a channel-diffusion double-branch network O comprising a first convolution layer, D channel-diffusion residual modules, a second convolution layer and an up-sampling module connected in sequence; each channel-diffusion residual module comprises an adaptive convolution module and, connected to it, a channel attention branch and a diffusion branch arranged in parallel; the adaptive convolution module comprises a plurality of convolution layers and a plurality of nonlinear activation layers; the diffusion branch comprises a P-M diffusion layer, a convolution layer and a nonlinear activation layer; the channel attention branch comprises a pooling layer, a plurality of nonlinear activation layers and a plurality of convolution layers, where D ≥ 10;
(3) Iterative training is carried out on the channel-diffusion double-branch network O:
(3a) Initialize the iteration number as s and the maximum iteration number as S, with S ≥ 10000; denote the channel-diffusion double-branch network model of the s-th iteration as O_s, whose weight and bias parameters are w_s and b_s respectively; let s = 1 and O_s = O;
(3b) Take the training sample set R as input to the channel-diffusion double-branch network O; the first convolution layer extracts features from each training sample; the D channel-diffusion residual modules map the extracted n_1 feature maps; the second convolution layer extracts features from the n_1 nonlinear feature maps obtained by the mapping, and these are added element-wise to the n_1 feature maps extracted by the first convolution layer; the up-sampling module up-samples the n_1 summed feature maps and performs a dimension transformation to obtain the super-resolution reconstructed image, where n_1 is the number of convolution kernels of the first and second convolution layers;
(3c) Use the L1 norm as the loss function and compute the loss value L_s of O_s from each reconstructed image and its training-sample label; compute the partial derivatives ∂L_s/∂w_s and ∂L_s/∂b_s by the chain rule, and update w_s and b_s accordingly to obtain the channel-diffusion double-branch network model of the current iteration;
(3d) Judge whether s ≥ S; if so, the trained channel-diffusion double-branch network model O is obtained; otherwise let s = s + 1 and return to step (3b);
(4) Obtaining an image reconstruction result:
and taking the test sample set E as the input of the trained channel-diffusion double-branch network model O to carry out forward propagation, so as to obtain reconstructed images corresponding to all the test samples.
2. The method for reconstructing an image super-resolution based on a channel-diffusion dual-branch network according to claim 1, wherein the channel-diffusion dual-branch network O in step (2) comprises:
constructing a channel-diffusion double-branch network O consisting of a first convolution layer, D channel-diffusion residual modules, a second convolution layer and an up-sampling module connected in sequence; each channel-diffusion residual module comprises an adaptive convolution module and, connected to it, a channel attention branch and a diffusion branch arranged in parallel; the adaptive convolution module comprises a plurality of convolution layers and a plurality of nonlinear activation layers; the diffusion branch comprises a P-M diffusion layer, a convolution layer and a nonlinear activation layer; the channel attention branch comprises a pooling layer, a plurality of nonlinear activation layers and a plurality of convolution layers;
the number of convolution kernels of the first and second convolution layers is n1 = 64, each with convolution kernel size 3×3;
the number of channel-diffusion residual modules is D = 10; the adaptive convolution module contains 7 convolution layers and 5 nonlinear activation function layers and comprises two branches arranged in parallel; the first branch comprises a third convolution layer, a first nonlinear activation function, a weighted-image generation module, a fourth convolution layer, a fifth convolution layer and a second nonlinear activation function, wherein the weighted-image generation module comprises a sixth convolution layer and a seventh convolution layer arranged in parallel and a third nonlinear activation function connected to the seventh convolution layer; the specific parameters are as follows: the third and eighth convolution layers have convolution kernel size 1×1; the fourth, fifth, sixth, seventh and ninth convolution layers have convolution kernel size 3×3; the first, second and fifth nonlinear activation functions are implemented by the ReLU function, and the third nonlinear activation function is implemented by the Sigmoid function;
the specific structure of the channel attention branch is a pooling layer, a tenth convolution layer, a sixth nonlinear activation layer, an eleventh convolution layer and a seventh nonlinear activation layer cascaded in sequence; the specific parameters are as follows: the tenth and eleventh convolution layers have convolution kernel size 1×1, the pooling layer is set to maximum pooling, the sixth nonlinear activation layer is implemented by the ReLU function, and the seventh nonlinear activation layer is implemented by the Sigmoid function;
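The channel attention branch described above (global max pooling, two 1×1 convolutions with ReLU between them, Sigmoid gate) can be sketched in a few lines; on a pooled vector a 1×1 convolution is just a matrix multiply. The weight shapes and the reduction ratio are assumptions for illustration, not parameters stated in the patent.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Channel-attention sketch: global max pool -> 1x1 conv -> ReLU
    -> 1x1 conv -> Sigmoid -> channel-wise rescaling of the input."""
    c, h, w = x.shape
    pooled = x.reshape(c, -1).max(axis=1)        # global max pooling, shape (C,)
    hidden = np.maximum(w1 @ pooled, 0.0)        # 1x1 conv as matmul, then ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # Sigmoid gate in (0, 1)
    return x * gate[:, None, None]               # reweight each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 8, 8))
w1 = rng.standard_normal((4, 64)) * 0.1   # bottleneck width 4 is assumed
w2 = rng.standard_normal((64, 4)) * 0.1
y = channel_attention(x, w1, w2)
```

Because the Sigmoid gate lies in (0, 1), the branch can only attenuate channels, never amplify them, which is what makes it a per-channel importance weighting.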
the specific structure of the diffusion branch is a P-M diffusion layer, a convolution layer and a nonlinear activation layer connected in sequence, wherein the P-M diffusion layer comprises a twelfth, thirteenth, fourteenth, fifteenth and sixteenth convolution layer arranged in parallel; the specific parameters are as follows: the convolution layer between the P-M diffusion layer and the nonlinear activation layer has convolution kernel size 1×1; the twelfth, thirteenth and fourteenth convolution layers have convolution kernel size 3×3; the fifteenth and sixteenth convolution layers have convolution kernel size 5×5; the nonlinear activation function is implemented by the Sigmoid function;
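For reference, the classical Perona-Malik (P-M) diffusion that the diffusion layer is named after performs edge-preserving smoothing with a conductance g(s) = 1 / (1 + (s/k)²) that shrinks where gradients are large. A minimal explicit step is sketched below; the step size dt and threshold k are illustrative, and the patent realizes the diffusion with learned parallel convolution layers rather than this hand-written stencil.

```python
import numpy as np

def pm_diffusion_step(u, k=0.1, dt=0.2):
    """One explicit Perona-Malik step: u += dt * sum of g(|d|) * d over the
    four neighbour differences, with g(s) = 1 / (1 + (s/k)^2)."""
    # differences toward the four neighbours, zero flux at the borders
    dn = np.roll(u, -1, 0) - u; dn[-1] = 0
    ds = np.roll(u,  1, 0) - u; ds[0] = 0
    de = np.roll(u, -1, 1) - u; de[:, -1] = 0
    dw = np.roll(u,  1, 1) - u; dw[:, 0] = 0
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
```

Two sanity properties follow directly from the stencil: a constant image is a fixed point (all differences vanish), and with zero-flux borders the paired neighbour fluxes cancel, so the image mean is preserved.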
the up-sampling module is implemented by PixelShuffle with an upscaling factor of 4.
3. The image super-resolution reconstruction method based on a channel-diffusion double-branch network according to claim 1, wherein the L1-norm loss function L_s in step (3c) and the formulas for updating ω_s and b_s according to ∂L_s/∂ω_s and ∂L_s/∂b_s are respectively:

L_s = ||Î − I||_1
ω_s' = ω_s − l_r · ∂L_s/∂ω_s
b_s' = b_s − l_r · ∂L_s/∂b_s

where Î denotes the reconstructed image, I denotes the label of a sample in the training sample set, ω_s and b_s denote the weight and bias learnable parameters of O_s, ω_s' and b_s' denote the updated parameters, l_r denotes the learning rate, L_s is the loss function, and ∂ denotes the partial-derivative operation.
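The L1 loss and the gradient-descent updates of claim 3 are sketched below; the mean reduction over pixels is an assumption (the claim does not fix a normalization), and the L1 subgradient at zero is taken as 0 via the sign function.

```python
import numpy as np

def l1_loss_and_grad(pred, label):
    """L_s as the mean |pred - label|; subgradient w.r.t. pred is
    sign(pred - label) / N, with N the number of elements."""
    diff = pred - label
    return np.abs(diff).mean(), np.sign(diff) / diff.size

def sgd_update(param, grad, lr=1e-4):
    """omega_s' = omega_s - l_r * dL_s/domega_s (same form for b_s)."""
    return param - lr * grad

loss, g = l1_loss_and_grad(np.array([1.0, -2.0]), np.array([0.0, 0.0]))
```

In practice the chain-rule derivatives ∂L_s/∂ω_s and ∂L_s/∂b_s of step (3c) are produced by a framework's automatic differentiation; the update rule itself is exactly the plain gradient-descent step above.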
CN202210488529.7A 2022-05-06 2022-05-06 Image super-resolution reconstruction method based on channel-diffusion double-branch network Active CN114757830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210488529.7A CN114757830B (en) 2022-05-06 2022-05-06 Image super-resolution reconstruction method based on channel-diffusion double-branch network


Publications (2)

Publication Number Publication Date
CN114757830A CN114757830A (en) 2022-07-15
CN114757830B true CN114757830B (en) 2023-09-08

Family

ID=82332334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210488529.7A Active CN114757830B (en) 2022-05-06 2022-05-06 Image super-resolution reconstruction method based on channel-diffusion double-branch network

Country Status (1)

Country Link
CN (1) CN114757830B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991181A (en) * 2021-03-31 2021-06-18 武汉大学 Image super-resolution reconstruction method based on reaction diffusion equation
CN113177882A (en) * 2021-04-29 2021-07-27 浙江大学 Single-frame image super-resolution processing method based on diffusion model
CN113222822A (en) * 2021-06-02 2021-08-06 西安电子科技大学 Hyperspectral image super-resolution reconstruction method based on multi-scale transformation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739075B (en) * 2020-06-15 2024-02-06 大连理工大学 Deep network lung texture recognition method combining multi-scale attention


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image deblurring based on anisotropic diffusion partial differential equations; Peng Hongjing; Hou Wenxiu; Signal Processing (No. 05); pp. 714-717 *

Also Published As

Publication number Publication date
CN114757830A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN107633486B (en) Structural magnetic resonance image denoising method based on three-dimensional full-convolution neural network
CN106952228B (en) Super-resolution reconstruction method of single image based on image non-local self-similarity
CN109035142B (en) Satellite image super-resolution method combining countermeasure network with aerial image prior
CN109087273B (en) Image restoration method, storage medium and system based on enhanced neural network
CN113222822B (en) Hyperspectral image super-resolution reconstruction method based on multi-scale transformation
CN111598778B (en) Super-resolution reconstruction method for insulator image
CN110136060B (en) Image super-resolution reconstruction method based on shallow dense connection network
CN109003229B (en) Magnetic resonance super-resolution reconstruction method based on three-dimensional enhanced depth residual error network
CN114494015B (en) Image reconstruction method based on blind super-resolution network
CN113808032A (en) Multi-stage progressive image denoising algorithm
CN115953303B (en) Multi-scale image compressed sensing reconstruction method and system combining channel attention
CN111738954B (en) Single-frame turbulence degradation image distortion removal method based on double-layer cavity U-Net model
CN112991483B (en) Non-local low-rank constraint self-calibration parallel magnetic resonance imaging reconstruction method
CN114626984A (en) Super-resolution reconstruction method for Chinese text image
CN113379647B (en) Multi-feature image restoration method for optimizing PSF estimation
CN111260585A (en) Image recovery method based on similar convex set projection algorithm
Wen et al. The power of complementary regularizers: Image recovery via transform learning and low-rank modeling
CN113096015A (en) Image super-resolution reconstruction method based on progressive sensing and ultra-lightweight network
CN114757830B (en) Image super-resolution reconstruction method based on channel-diffusion double-branch network
CN113240581A (en) Real world image super-resolution method for unknown fuzzy kernel
Cheng et al. Adaptive feature denoising based deep convolutional network for single image super-resolution
CN111784584A (en) Insulator remote sensing image super-resolution method based on deep learning
Tojo et al. Image denoising using multi scaling aided double decker convolutional neural network
CN111223044A (en) Method for fusing full-color image and multispectral image based on dense connection network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant