CN115496658A - Lightweight image super-resolution reconstruction method based on double attention mechanism - Google Patents

Lightweight image super-resolution reconstruction method based on double attention mechanism

Info

Publication number
CN115496658A
CN115496658A
Authority
CN
China
Prior art keywords
resolution
super
image
network model
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211168904.6A
Other languages
Chinese (zh)
Inventor
谢峰
卢佩
刘效勇
郝晓辰
周子豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Technology
Priority to CN202211168904.6A
Publication of CN115496658A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4046 Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lightweight image super-resolution reconstruction method based on a double attention mechanism, belonging to the technical field of image processing and comprising the following steps: 1) constructing a training data set; 2) constructing a super-resolution network model; 3) training the super-resolution network model to obtain a trained super-resolution network model; 4) obtaining a super-resolution reconstructed image. By combining a simple channel attention mechanism with an enhanced spatial attention mechanism, the method acquires the channel feature information and spatial feature information beneficial to image super-resolution reconstruction, effectively improving the reconstruction performance of the super-resolution network model while requiring fewer parameters and less computation than mainstream lightweight super-resolution network models.

Description

Lightweight image super-resolution reconstruction method based on double attention mechanism
Technical Field
The invention relates to the technical field of image processing, in particular to a lightweight image super-resolution reconstruction method based on a double attention mechanism in the technical field of image super-resolution reconstruction.
Background
The goal of image super-resolution (SR) reconstruction is to recover a corresponding high-resolution image from a given low-resolution image. As a typical low-level computer vision task, image super-resolution reconstruction plays an important role in satellite remote sensing, biometric recognition, image analysis, image surveillance, and other fields. SR, however, is an ill-posed problem: a given low-resolution (LR) image may be the degraded version of many different high-resolution (HR) images. The rise of deep learning has provided a powerful tool for solving this problem. Dong et al. first applied a CNN-based method to image super-resolution reconstruction, proposing the SRCNN network with only three convolutional layers. Kim et al. proposed the VDSR network with 20 convolutional layers, which achieves better performance than SRCNN; the results show that deepening the network can improve image super-resolution reconstruction performance. Thereafter, the RDN and RCAN networks proposed by Zhang et al. increased the network depth to 100 and 400 layers, respectively.
However, these large-scale network models consume huge storage space and incur large computational cost, making them difficult to deploy on mobile devices in practical applications. To this end, researchers have proposed a series of strategies to reduce the number of parameters or computations. The DRRN network proposed by Tai et al. uses a recursive mechanism to reduce the number of network parameters. The CARN-M network proposed by Ahn et al. uses group convolution to reduce the number of network parameters. The IMDN network proposed by Hui et al. uses information multi-distillation blocks to gradually extract hierarchical features and aggregates them according to the importance of candidate features. Although these lightweight super-resolution network models reduce the number of parameters by design, many redundant computations remain; a more efficient super-resolution network model can be constructed by reducing redundant computation and employing more efficient modules.
Disclosure of Invention
The invention provides a lightweight image super-resolution reconstruction method based on a double attention mechanism that achieves good reconstruction quality with a small number of parameters and low computational cost.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a light-weight image super-resolution reconstruction method based on a double attention mechanism comprises the following steps:
step 1, constructing a training data set:
(1a) Firstly, carrying out data enhancement processing on 900 high-resolution images in a DIV2K data set, and then carrying out bicubic interpolation downsampling operation on the high-resolution images subjected to the data enhancement processing so as to obtain corresponding low-resolution images;
(1b) Forming a pair of training samples by the high-resolution images and the corresponding low-resolution images so as to obtain a training data set;
step 2, constructing a super-resolution network model:
the network model includes four parts: the device comprises a shallow layer feature extraction module, an enhanced local feature extraction group, a depth feature fusion module and an up-sampling module. The shallow feature extraction module consists of a 3 multiplied by 3 convolution layer and is used for extracting shallow features; the enhanced local feature extraction group is formed by stacking 8 enhanced local feature extraction blocks, and depth level features are sequentially extracted by the 8 enhanced local feature extraction blocks after shallow features are input into the enhanced local feature extraction group; the depth feature fusion module fuses and screens the depth level features to obtain depth fusion features; finally, adding the shallow feature and the depth fusion feature, and obtaining a reconstructed high-resolution image through an up-sampling module;
step 3, training the super-resolution network model to obtain the trained super-resolution network model:
inputting all training samples in the training data set into the super-resolution network model, and iteratively updating the network parameters by gradient descent until the loss function converges, thereby obtaining the trained super-resolution network model;
step 4, performing super-resolution reconstruction on the low-resolution image:
inputting the low-resolution image of a natural scene into the trained super-resolution network model, which outputs the super-resolution reconstructed high-resolution image.
The data enhancement in step (1a) refers to: applying random horizontal flipping, random vertical flipping, and random rotations of 90°, 180°, and 270° to the high-resolution images of the DIV2K data set.
The bicubic interpolation downsampling in step (1a) refers to: $I_{LR} = I_{HR}\downarrow_S$, where $I_{LR}$ denotes the low-resolution image, $I_{HR}$ the high-resolution image, $\downarrow_S$ the bicubic interpolation downsampling operation, and $S$ the downsampling factor.
The enhanced local feature extraction block in step 2 consists of two enhanced spatial attention modules and two residual local feature layers: the first enhanced spatial attention module selects important spatial features from the input features, two stacked residual local feature layers then extract intermediate features, and a second enhanced spatial attention module finally emphasizes the more important feature regions. The enhanced spatial attention module consists of a 1×1 convolutional layer, a 3×3 convolutional layer, a max pooling layer, a convolution group (two 3×3 convolutional layers in series), an upsampling layer, and a Sigmoid function; the residual local feature layer consists of an LN layer, two 1×1 convolutional layers, a 3×3 convolutional layer, a GELU activation function, and a simple channel attention module; the simple channel attention module consists of a global average pooling layer, a 1×1 convolutional layer, and a Sigmoid activation function.
The depth feature fusion module in step 2 consists of two stacked convolutional layers (one 1×1 convolutional layer and one 3×3 convolutional layer) and a GELU activation function, and fuses and screens the depth-level features.
The upsampling module in step 2 consists of a 3×3 convolutional layer and an efficient sub-pixel convolutional layer: the 3×3 convolutional layer compresses the feature-map channels, and the sub-pixel convolutional layer rearranges the multi-channel low-resolution feature map into the high-resolution image.
In step 3, an L1 loss function is adopted in training the super-resolution network model, and its mathematical expression is as follows:

$L(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left\| I_{SR}^{i} - I_{HR}^{i} \right\|_{1}$

where $L(\theta)$ denotes the L1 loss, $\theta$ the parameters of the super-resolution network model to be learned, $N$ the number of images used for training, $\|\cdot\|_{1}$ the mean absolute error, $I_{HR}^{i}$ the $i$-th high-resolution original image, and $I_{SR}^{i}$ the $i$-th super-resolution reconstructed image.
Compared with the prior art, the invention has the following beneficial effects: the invention provides a lightweight image super-resolution reconstruction method based on a double attention mechanism that constructs a lightweight network model, extracts rich hierarchical features through efficient enhanced local feature extraction blocks, and introduces a simple channel attention module and an enhanced spatial attention module to acquire important channel and spatial feature information, further improving the reconstruction performance of the network model. Experimental results show that the method achieves a better image super-resolution reconstruction effect with fewer parameters and less computation.
Drawings
FIG. 1 is a flowchart of the lightweight image super-resolution reconstruction method based on a double attention mechanism in embodiment 1
FIG. 2 is a schematic diagram of the super-resolution network model in embodiment 1
FIG. 3 is a schematic diagram of the enhanced local feature extraction block in embodiment 1
FIG. 4 is a schematic diagram of the residual local feature layer in embodiment 1
FIG. 5 is a schematic diagram of the simple channel attention module in embodiment 1
FIG. 6 is a schematic diagram of the enhanced spatial attention module in embodiment 1
FIG. 7 shows super-resolution reconstruction results on randomly selected images in embodiment 1
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
The implementation steps of the present invention are described in further detail with reference to fig. 1.
Example 1
Step 1, constructing a training data set:
(1a) Firstly, 900 high-resolution images in a DIV2K data set are subjected to data enhancement processing, and then the high-resolution images subjected to the data enhancement processing are subjected to bicubic interpolation downsampling operation, so that corresponding low-resolution images are obtained.
(1b) And forming a pair of training samples by the high-resolution image and the corresponding low-resolution image so as to obtain a training data set.
The data enhancement in step (1a) refers to: applying random horizontal flipping, random vertical flipping, and random rotations of 90°, 180°, and 270° to the high-resolution images of the DIV2K data set.
The bicubic interpolation downsampling in step (1a) refers to: $I_{LR} = I_{HR}\downarrow_S$, where $I_{LR}$ denotes the low-resolution image, $I_{HR}$ the high-resolution image, $\downarrow_S$ the bicubic interpolation downsampling operation, and $S$ the downsampling factor.
Step 2, constructing a super-resolution network model:
the super-resolution network model structure is shown in fig. 2, and the super-resolution network model comprises a shallow feature extraction module, an enhanced local feature extraction group, a depth feature fusion module and an up-sampling module.
(2a) Shallow layer feature extraction module:
the shallow feature extraction module is composed of a 3 x 3 convolution layer, and the low-resolution image is input into the shallow feature extraction module to obtain a shallow feature F 0 The expression formula is as follows:
F 0 =W 0 *I LR
in the formula, W 0 Represents the weight of the 3 × 3 convolutional layer, I LR Representing a low resolution image, representing a convolution operation.
(2b) Enhanced local feature extraction group:
the enhanced local feature extraction group comprises 8 enhanced local feature extraction blocks (ELFB), wherein the enhanced local feature extraction blocks are composed of two enhanced spatial attention modules (ESAs) and two Residual Local Feature Layers (RLFLs), and the shallow feature F is obtained 0 Extraction of depth level features F from input enhanced local feature extraction group d D =1,2,3.. D, D is the number of enhanced local feature extraction blocks;
specifically, 8 enhanced local feature extraction blocks are used for depth level feature extraction, and the depth level feature F of the output of the d-th enhanced local feature extraction block d (1. Ltoreq. D. Ltoreq. D), the expression formula is as follows:
Figure BDA0003862732510000041
in the formula (I), the compound is shown in the specification,
Figure BDA0003862732510000042
represents the d-th enhanced local feature extraction block.
The structure of the enhanced local feature extraction block (ELFB) is shown in FIG. 3. Given an input feature $F_{d-1}$, the $d$-th block first selects important spatial features with a first enhanced spatial attention module (ESA), then extracts intermediate features through two residual local feature layers (RLFL), and finally obtains the more important spatial features with another ESA. The expression formula is as follows:

$F_{d} = H_{ESA}(H_{RLFL}(H_{RLFL}(H_{ESA}(F_{d-1}))))$

where $H_{RLFL}(\cdot)$ denotes a residual local feature layer and $H_{ESA}(\cdot)$ denotes an enhanced spatial attention module.
The structure of the residual local feature layer (RLFL) is shown in FIG. 4. The residual local feature layer first normalizes the input feature $F_{in}$ through layer normalization (LN), then enlarges the number of channels with a first 1×1 convolutional layer, acquires rich local features with a 3×3 group convolutional layer, and adds nonlinearity to the network model with a GELU activation function; a simple channel attention module (SCA) then acquires the important channel features, a 1×1 convolutional layer refines the features, and the refined features are added to the initial input features to obtain the output feature $F_{out}$. The expression formula is as follows:

$F_{out} = W_{3} * H_{SCA}(\mathrm{GELU}(W_{2} * (W_{1} * \mathrm{LN}(F_{in})))) + F_{in}$

where $W_{1}$ denotes the weights of the first 1×1 convolutional layer, $W_{2}$ the weights of the 3×3 group convolutional layer, $W_{3}$ the weights of the last 1×1 convolutional layer, and $H_{SCA}(\cdot)$ the simple channel attention module.
The structure of the simple channel attention module (SCA) is shown in FIG. 5. A global average pooling operation is first performed on the input feature $F_{in}$, the result then passes through a 1×1 convolutional layer, and a Sigmoid activation function finally normalizes the output to reweight the channels. The expression formula is as follows:

$F_{SCA} = F_{in} \otimes \mathrm{Sigmoid}(W_{1} * \mathrm{Pool}(F_{in}))$

where $\mathrm{Pool}(\cdot)$ denotes the global average pooling layer, $W_{1}$ the weights of the 1×1 convolutional layer, and $F_{SCA}$ the output feature of the simple channel attention module.
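A hedged PyTorch sketch of the two layers just described (SCA and RLFL) follows. The channel expansion ratio, the depthwise grouping of the 3×3 group convolution, and the GroupNorm(1, C) stand-in for the LN layer are assumptions of the sketch, not specifics fixed by the disclosure.

```python
import torch
import torch.nn as nn

class SCA(nn.Module):
    """Simple channel attention: global average pool -> 1x1 conv -> Sigmoid reweighting."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv2d(channels, channels, 1)
        self.gate = nn.Sigmoid()

    def forward(self, x):
        return x * self.gate(self.conv(self.pool(x)))

class RLFL(nn.Module):
    """Residual local feature layer: LN -> 1x1 expand -> 3x3 group conv -> GELU -> SCA -> 1x1 refine, plus residual."""
    def __init__(self, channels, expand=2):
        super().__init__()
        hidden = channels * expand                       # expansion ratio is an assumption
        self.norm = nn.GroupNorm(1, channels)            # LayerNorm stand-in for conv features
        self.expand = nn.Conv2d(channels, hidden, 1)
        self.local = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # depthwise grouping assumed
        self.act = nn.GELU()
        self.sca = SCA(hidden)
        self.refine = nn.Conv2d(hidden, channels, 1)

    def forward(self, x):
        y = self.act(self.local(self.expand(self.norm(x))))
        return self.refine(self.sca(y)) + x              # add refined features back to the input
```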
The structure of the enhanced spatial attention module (ESA) is shown in FIG. 6. The input feature $F_{in}$ first passes through a 1×1 convolutional layer that reduces the number of input channels, yielding a compact feature $F_{1}$. This feature then passes through a 3×3 convolutional layer with stride 2, followed by a max pooling layer $H_{pool}(\cdot)$ and a convolution group (two 3×3 convolutional layers in series) $H_{g}(\cdot)$; an upsampling layer $H_{up}(\cdot)$ implemented by bilinear interpolation then restores the spatial size of the features, and the result is added to $F_{1}$ to obtain the intermediate feature $F_{2}$. Finally, a 1×1 convolutional layer restores the number of input channels and a Sigmoid function yields the map used to select the important spatial features $F_{ESA}$. The expression formulas are as follows:

$F_{1} = W_{1} * F_{in}$

$F_{2} = H_{up}(H_{g}(H_{pool}(W_{2} * F_{1}))) + F_{1}$

$F_{ESA} = F_{in} \otimes \mathrm{Sigmoid}(W_{3} * F_{2})$

where $W_{1}$ denotes the weights of the first 1×1 convolutional layer, $W_{2}$ the weights of the 3×3 convolutional layer, and $W_{3}$ the weights of the last 1×1 convolutional layer.
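The sketch below mirrors the three ESA formulas above in PyTorch; the reduced channel count and the 7×7/stride-3 max pooling window are assumptions, since the disclosure fixes neither.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ESA(nn.Module):
    """Enhanced spatial attention: reduce -> strided conv -> max pool -> conv group -> upsample -> gate."""
    def __init__(self, channels, reduced=16):
        super().__init__()
        self.reduce = nn.Conv2d(channels, reduced, 1)        # F1 = W1 * Fin (channel reduction)
        self.stride_conv = nn.Conv2d(reduced, reduced, 3, stride=2, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=7, stride=3)    # pooling window is an assumption
        self.group = nn.Sequential(                          # convolution group: two 3x3 convs in series
            nn.Conv2d(reduced, reduced, 3, padding=1),
            nn.Conv2d(reduced, reduced, 3, padding=1),
        )
        self.restore = nn.Conv2d(reduced, channels, 1)       # restore the input channel count
        self.gate = nn.Sigmoid()

    def forward(self, x):
        f1 = self.reduce(x)
        f2 = self.group(self.pool(self.stride_conv(f1)))
        # Bilinear upsampling back to F1's spatial size, then residual add: F2.
        f2 = F.interpolate(f2, size=f1.shape[2:], mode="bilinear", align_corners=False) + f1
        return x * self.gate(self.restore(f2))               # F_ESA = Fin (x) Sigmoid(W3 * F2)

# y = ESA(48)(torch.randn(1, 48, 48, 48))  # output has the same shape as the input
```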
(2c) Depth feature fusion module:
The depth feature fusion module fuses and screens the depth-level features through two stacked convolutional layers (one 1×1 convolutional layer and one 3×3 convolutional layer) and a GELU activation function. The final depth-level feature $F_{D}$ ($D = 8$) is input into the depth feature fusion module to obtain the depth fusion feature $F_{DFF}$. The expression formula is as follows:

$F_{DFF} = W_{2} * \mathrm{GELU}(W_{1} * F_{D})$

where $W_{1}$ and $W_{2}$ denote the weights of the 1×1 convolutional layer and the 3×3 convolutional layer, respectively.
(2d) Upsampling module:
The upsampling module consists of a 3×3 convolutional layer and an efficient sub-pixel convolutional layer. The depth fusion feature $F_{DFF}$ is first added to the shallow feature $F_{0}$, and the sum is input into the upsampling module to obtain the final super-resolution reconstructed image $I_{SR}$. The expression formula is as follows:

$I_{SR} = F_{up}(W_{3} * (F_{DFF} + F_{0}))$

where $F_{up}(\cdot)$ denotes the channel recombination (sub-pixel convolution) operation and $W_{3}$ denotes the weights of the 3×3 convolutional layer.
Step 3, training the super-resolution network model to obtain the trained super-resolution network model:
inputting all training samples in the training data set into the super-resolution network model, and iteratively updating the network parameters by gradient descent until the loss function converges, thereby obtaining the trained super-resolution network model;
in the step 3, in the training of the super-resolution network model, the invention adopts the L1 loss function, and the mathematical expression of the L1 loss function is as follows:
Figure BDA0003862732510000061
L (θ) represents the L1 loss function, theta represents the parameter to be learned for the super-resolution network model, N represents the number of images used for training, | · | 1 The mean absolute error is represented by the average absolute error,
Figure BDA0003862732510000062
representing the ith high-resolution original image,
Figure BDA0003862732510000063
showing the image after the i-th super-resolution reconstruction.
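A minimal training-step sketch for step 3 follows. The stand-in network, the Adam optimizer, the learning rate, and the patch sizes are assumptions of the sketch; the L1 criterion matches the loss defined above.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                        # stand-in for the super-resolution network
    nn.Conv2d(3, 3 * 4 ** 2, 3, padding=1),
    nn.PixelShuffle(4),
)
criterion = nn.L1Loss()                       # mean absolute error, i.e. the L1 loss
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

lr_batch = torch.randn(8, 3, 48, 48)          # low-resolution training patches
hr_batch = torch.randn(8, 3, 192, 192)        # corresponding high-resolution patches

optimizer.zero_grad()
loss = criterion(model(lr_batch), hr_batch)   # L(theta) = (1/N) * sum ||I_SR - I_HR||_1
loss.backward()                               # gradient descent step on theta
optimizer.step()
```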
Step 4, performing super-resolution reconstruction on the low-resolution image:
The low-resolution image of a natural scene is input into the trained super-resolution network model, which outputs the super-resolution reconstructed high-resolution image.
To better illustrate the technical effects of the present invention, this embodiment compares the method provided by the invention with the existing Bicubic, SRCNN, FSRCNN, VDSR, LapSRN, DRRN, IDN, CARN, and IMDN algorithms on five image super-resolution benchmark data sets: Set5, Set14, B100, Urban100, and Manga109. The results of the 4× super-resolution experiments are shown in Table 1.
TABLE 1 PSNR/SSIM analysis of the 4× super-resolution results
(Table 1 is provided as an image in the original publication; the PSNR/SSIM values are not reproduced here.)
The evaluation indexes are the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM); the higher these two values, the closer the details of the obtained high-resolution image are to the original real image, and the better the super-resolution reconstruction effect. Bold indicates the best result.
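For reference, a minimal PSNR computation consistent with the description above is sketched below, assuming images scaled to [0, 1]; SR evaluation protocols often compute PSNR on the luminance (Y) channel only, which this sketch omits, and SSIM is typically taken from a library such as scikit-image.

```python
import numpy as np

def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# sr, hr = np.random.rand(128, 128, 3), np.random.rand(128, 128, 3)
# print(psnr(sr, hr))
```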
Meanwhile, the superiority of the invention can also be verified from the visualization results.
Specifically, as shown in FIG. 7, the invention achieves the best super-resolution reconstruction effect on the randomly selected images. For 4× super-resolution of the Barbara image in the Set14 data set, the method recovers the texture information of the books more accurately and clearly. These visualization results further demonstrate the effectiveness of the invention.
In summary, compared with mainstream algorithms, the method provided by the invention not only achieves better results on the image quality indexes PSNR and SSIM but also requires fewer network parameters and less computation, achieving a good balance between model size and image reconstruction quality; the lightweight image super-resolution reconstruction method based on the double attention mechanism is therefore a lightweight and efficient super-resolution reconstruction method.
The above description is only one specific example of the present invention and is not intended to limit its embodiments. It will be understood by those skilled in the art that various changes, equivalent substitutions, and improvements can be made without departing from the spirit and principles of the invention, and any such modification, equivalent replacement, or improvement shall fall within the protection scope of the present invention.

Claims (6)

1. A lightweight image super-resolution reconstruction method based on a double attention mechanism, characterized by comprising the following steps:
step 1, constructing a training data set:
(1a) Firstly, carrying out data enhancement processing on 900 high-resolution images in a DIV2K data set, and then carrying out bicubic interpolation downsampling operation on the high-resolution images subjected to the data enhancement processing so as to obtain corresponding low-resolution images;
(1b) Forming a pair of training samples by the high-resolution images and the corresponding low-resolution images so as to obtain a training data set;
step 2, constructing a super-resolution network model:
the network model includes four parts: the system comprises a shallow layer feature extraction module, an enhanced local feature extraction group, a depth feature fusion module and an up-sampling module; the shallow feature extraction module consists of a 3 multiplied by 3 convolution layer and is used for extracting shallow features; the enhanced local feature extraction group is formed by stacking 8 enhanced local feature extraction blocks, and depth level features are sequentially extracted by the 8 enhanced local feature extraction blocks after shallow features are input into the enhanced local feature extraction group; the depth feature fusion module fuses and screens the depth level features to obtain depth fusion features; finally, adding the shallow feature and the depth fusion feature, and obtaining a reconstructed high-resolution image through an up-sampling module;
step 3, training the super-resolution network model to obtain the trained super-resolution network model:
inputting all training samples in the training data set into the super-resolution network model, and iteratively updating the network parameters by gradient descent until the loss function converges, thereby obtaining the trained super-resolution network model;
step 4, performing super-resolution reconstruction on the low-resolution image:
inputting the low-resolution image of a natural scene into the trained super-resolution network model, which outputs the super-resolution reconstructed high-resolution image.
2. The lightweight image super-resolution reconstruction method based on the double attention mechanism according to claim 1, wherein the data enhancement in step (1a) refers to random horizontal flipping, random vertical flipping, and random rotations of 90°, 180°, and 270° of the high-resolution images of the DIV2K data set; and the bicubic interpolation downsampling in step (1a) refers to: $I_{LR} = I_{HR}\downarrow_S$, where $I_{LR}$ denotes the low-resolution image, $I_{HR}$ the high-resolution image, $\downarrow_S$ the bicubic interpolation downsampling operation, and $S$ the downsampling factor.
3. The method for reconstructing the super-resolution of the lightweight images based on the dual attention mechanism as claimed in claim 1, wherein the enhanced local feature extraction block in step 2 is composed of two enhanced spatial attention modules and two residual local feature layers;
the enhanced spatial attention module consists of a 1×1 convolutional layer, a 3×3 convolutional layer, a max pooling layer, a convolution group (two 3×3 convolutional layers in series), an upsampling layer, and a Sigmoid function, and can select important spatial features;
the residual local feature layer consists of an LN layer, two 1×1 convolutional layers, a 3×3 convolutional layer, a GELU activation function, and a simple channel attention module;
the simple channel attention module consists of a global average pooling layer, a 1×1 convolutional layer, and a Sigmoid activation function, and can select important channel features.
4. The lightweight image super-resolution reconstruction method based on the double attention mechanism according to claim 1, wherein the depth feature fusion module in step 2 consists of two stacked convolutional layers (one 1×1 convolutional layer and one 3×3 convolutional layer) and a GELU activation function, and the module fuses and screens the depth-level features.
5. The lightweight image super-resolution reconstruction method based on the double attention mechanism according to claim 1, wherein the upsampling module in step 2 consists of a 3×3 convolutional layer and an efficient sub-pixel convolutional layer, wherein the 3×3 convolutional layer compresses the feature-map channels and the sub-pixel convolutional layer rearranges the multi-channel low-resolution feature map into the high-resolution image.
6. The lightweight image super-resolution reconstruction method based on the double attention mechanism according to claim 1, wherein in step 3 an L1 loss function is adopted in training the super-resolution network model, the mathematical expression of the L1 loss function being:

$L(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left\| I_{SR}^{i} - I_{HR}^{i} \right\|_{1}$

where $L(\theta)$ denotes the L1 loss, $\theta$ the parameters of the super-resolution network model to be learned, $N$ the number of images used for training, $\|\cdot\|_{1}$ the mean absolute error, $I_{HR}^{i}$ the $i$-th high-resolution original image, and $I_{SR}^{i}$ the $i$-th super-resolution reconstructed image.
CN202211168904.6A 2022-09-25 2022-09-25 Lightweight image super-resolution reconstruction method based on double attention mechanism Pending CN115496658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211168904.6A 2022-09-25 2022-09-25 Lightweight image super-resolution reconstruction method based on double attention mechanism (CN115496658A)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211168904.6A 2022-09-25 2022-09-25 Lightweight image super-resolution reconstruction method based on double attention mechanism (CN115496658A)

Publications (1)

Publication Number Publication Date
CN115496658A (en) 2022-12-20

Family

ID=84471390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211168904.6A Pending CN115496658A (en) 2022-09-25 2022-09-25 Lightweight image super-resolution reconstruction method based on double attention mechanism

Country Status (1)

Country Link
CN (1) CN115496658A (en)


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402679A (en) * 2022-12-28 2023-07-07 长春理工大学 Lightweight infrared super-resolution self-adaptive reconstruction method
CN116402679B (en) * 2022-12-28 2024-05-28 长春理工大学 Lightweight infrared super-resolution self-adaptive reconstruction method
CN116385265A (en) * 2023-04-06 2023-07-04 北京交通大学 Training method and device for image super-resolution network
CN116385265B (en) * 2023-04-06 2023-10-17 北京交通大学 Training method and device for image super-resolution network
CN116664397A (en) * 2023-04-19 2023-08-29 太原理工大学 TransSR-Net structured image super-resolution reconstruction method
CN116664397B (en) * 2023-04-19 2023-11-10 太原理工大学 TransSR-Net structured image super-resolution reconstruction method
CN117036162A (en) * 2023-06-19 2023-11-10 河北大学 Residual feature attention fusion method for super-resolution of lightweight chest CT image
CN117036162B (en) * 2023-06-19 2024-02-09 河北大学 Residual feature attention fusion method for super-resolution of lightweight chest CT image
CN116721302A (en) * 2023-08-10 2023-09-08 成都信息工程大学 Ice and snow crystal particle image classification method based on lightweight network
CN116721302B (en) * 2023-08-10 2024-01-12 成都信息工程大学 Ice and snow crystal particle image classification method based on lightweight network
CN117934286A (en) * 2024-03-21 2024-04-26 西华大学 Lightweight image super-resolution method and device and electronic equipment thereof
CN117934286B (en) * 2024-03-21 2024-06-04 西华大学 Lightweight image super-resolution method and device and electronic equipment thereof

Similar Documents

Publication Publication Date Title
CN115496658A (en) Lightweight image super-resolution reconstruction method based on double attention mechanism
CN106991646B (en) Image super-resolution method based on dense connection network
CN115222601A (en) Image super-resolution reconstruction model and method based on residual mixed attention network
Luo et al. Lattice network for lightweight image restoration
CN111932461A (en) Convolutional neural network-based self-learning image super-resolution reconstruction method and system
CN111951164B (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
CN104899835A (en) Super-resolution processing method for image based on blind fuzzy estimation and anchoring space mapping
CN113538234A (en) Remote sensing image super-resolution reconstruction method based on lightweight generation model
CN117058160B (en) Three-dimensional medical image segmentation method and system based on self-adaptive feature fusion network
CN115660955A (en) Super-resolution reconstruction model, method, equipment and storage medium for efficient multi-attention feature fusion
CN113222818A (en) Method for reconstructing super-resolution image by using lightweight multi-channel aggregation network
CN115713462A (en) Super-resolution model training method, image recognition method, device and equipment
CN112884650A (en) Image mixing super-resolution method based on self-adaptive texture distillation
CN114022356A (en) River course flow water level remote sensing image super-resolution method and system based on wavelet domain
CN116188272B (en) Two-stage depth network image super-resolution reconstruction method suitable for multiple fuzzy cores
CN116385265B (en) Training method and device for image super-resolution network
CN113096015A (en) Image super-resolution reconstruction method based on progressive sensing and ultra-lightweight network
CN116485654A (en) Lightweight single-image super-resolution reconstruction method combining convolutional neural network and transducer
CN116681592A (en) Image super-resolution method based on multi-scale self-adaptive non-local attention network
CN111402140A (en) Single image super-resolution reconstruction system and method
CN116091319A (en) Image super-resolution reconstruction method and system based on long-distance context dependence
CN115797181A (en) Image super-resolution reconstruction method for mine fuzzy environment
CN113191947B (en) Image super-resolution method and system
CN115660979A (en) Attention mechanism-based double-discriminator image restoration method
CN115375537A (en) Nonlinear sensing multi-scale super-resolution image generation system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination