CN115601242B - Lightweight image super-resolution reconstruction method suitable for hardware deployment - Google Patents

Lightweight image super-resolution reconstruction method suitable for hardware deployment

Info

Publication number
CN115601242B
CN115601242B
Authority
CN
China
Prior art keywords
channel
image
resolution
super
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211592921.2A
Other languages
Chinese (zh)
Other versions
CN115601242A (en
Inventor
常亮
樊东奇
赵鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202211592921.2A priority Critical patent/CN115601242B/en
Publication of CN115601242A publication Critical patent/CN115601242A/en
Application granted granted Critical
Publication of CN115601242B publication Critical patent/CN115601242B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image enhancement and provides a lightweight image super-resolution reconstruction method suitable for hardware deployment on an FPGA (field programmable gate array) or other embedded devices. The method first converts the low-resolution image to be processed into a YCbCr image, divides the YCbCr image into a number of sub-images, and feeds the sub-images sequentially into a lightweight image super-resolution reconstruction network, which outputs a reconstructed sub-image for each input; all reconstructed sub-images are then stitched together in order to obtain the high-resolution reconstructed image. The invention balances super-resolution quality against network parameter count and computational cost, achieving a markedly lighter super-resolution reconstruction network while improving the peak signal-to-noise ratio (PSNR), so that the network can be better deployed on mobile devices, embedded devices, and other network edge devices.

Description

Lightweight image super-resolution reconstruction method suitable for hardware deployment
Technical Field
The invention belongs to the technical field of image enhancement, relates to an image super-resolution reconstruction method, and particularly provides a lightweight image super-resolution reconstruction method suitable for hardware deployment.
Background
With the development of mobile phones and other electronic and mobile devices, people pursue ever higher-quality images, from everyday entertainment pictures and surveillance footage to medical image diagnosis and satellite imagery display. Compared with directly improving the structure and performance of the camera, optimizing the image from the algorithmic side greatly reduces cost and development time. Image super-resolution reconstruction is a method that turns a low-resolution image into a high-resolution image; its evaluation index is the peak signal-to-noise ratio (PSNR), which directly reflects the quality of the reconstructed image.
In the paper "Learning a Deep Convolutional Network for Image Super-Resolution", Dong et al. proposed SRCNN in 2014, realizing image super-resolution (SR) with a convolutional neural network for the first time and starting the wave of CNN-based image super-resolution. Supported by various training strategies and new techniques, the evaluation indices of image super-resolution keep being refreshed, and the model-inferred image keeps approaching the original high-resolution image. However, network models have also grown ever deeper and wider, from the initial 3 hidden layers to hundreds of layers today, and the parameter count has grown from the initial 8,032 to the tens of millions, bringing complicated calculation processes and huge memory overhead. In this context, the existing image super-resolution reconstruction methods with outstanding effect are difficult to deploy on mobile devices, embedded devices, and other network edge devices, because these devices lack the powerful computing units and corresponding storage units of a PC.
Disclosure of Invention
The invention aims to provide a lightweight image super-resolution reconstruction method suitable for hardware deployment on an FPGA (field programmable gate array) or other embedded devices. The invention balances super-resolution quality against network parameter count and computational cost, achieving a markedly lighter super-resolution reconstruction network while improving the peak signal-to-noise ratio (PSNR), so that the network can be better deployed on mobile devices, embedded devices, and other network edge devices.
In order to achieve the purpose, the invention adopts the technical scheme that:
a light-weight image super-resolution reconstruction method suitable for hardware deployment comprises the following steps:
step 1, preprocessing data;
carrying out format conversion on the low-resolution image to be processed to obtain a low-quality YCbCr image to be processed; then dividing the low-quality YCbCr image to be processed into a plurality of sub-images;
step 2, constructing a light-weight image super-resolution reconstruction network and completing network training;
the lightweight image super-resolution reconstruction network comprises: a feature pre-extraction block, a first depth-separable convolution block, a channel feature extraction block, a channel compression block, and an upsampling block connected in sequence;
and 3, sequentially inputting the sub-images in the step 1 into the lightweight image super-resolution reconstruction network in the step 2, outputting the reconstruction sub-images corresponding to the sub-images by the lightweight image super-resolution reconstruction network, and splicing all the reconstruction sub-images according to a sequence to obtain a high-resolution reconstruction image.
Further, in the lightweight image super-resolution reconstruction network:
the feature pre-extraction block adopts convolutional layer A1;
the first depth-separable convolution block consists of sequentially connected convolutional layer A2, channel-by-channel convolutional layer B1, and convolutional layer A3;
the channel feature extraction block is formed by sequentially connecting 4 channel feature extraction sub-blocks, and each channel feature extraction sub-block consists of a second depth-separable convolution block and a channel feature extrusion block connected in sequence; the second depth-separable convolution block consists of sequentially connected channel-by-channel convolutional layer B2 and convolutional layer A4, and the sum of the output of convolutional layer A4 and the input of channel-by-channel convolutional layer B2 serves as the output of the second depth-separable convolution block; the channel feature extrusion block consists of convolutional layer A5, convolutional layer A6, convolutional layer A7, and a Softmax layer: the input of the channel feature extrusion block passes through convolutional layer A5 and convolutional layer A6 respectively; the output of convolutional layer A5, after the Softmax layer, is multiplied by the output of convolutional layer A6; the result of the multiplication passes through convolutional layer A7 and is then added to the input to form the output of the channel feature extrusion block;
the channel compression block adopts convolutional layer A8;
the upsampling block adopts a PixelShuffle block.
Further, convolutional layer A1 has a convolution kernel size of 3 × 3, input channel 1, output channel 45, and padding 1; convolutional layer A2 has a convolution kernel size of 1 × 1, input channel 45, output channel 36, and padding 0; channel-by-channel convolutional layer B1 has a convolution kernel size of 3 × 3, input channel 36, output channel 36, and padding 1; convolutional layer A3 has a convolution kernel size of 1 × 1, input channel 36, output channel 18, and padding 0; channel-by-channel convolutional layer B2 has a convolution kernel size of 3 × 3, input channel 18, output channel 18, and padding 1; convolutional layer A4 has a convolution kernel size of 1 × 1, input channel 18, output channel 18, and padding 0; convolutional layers A5, A6, and A7 share the same parameters: convolution kernel size 1 × 1, input channel 18, output channel 18, padding 0; convolutional layer A8 has a convolution kernel size of 1 × 1, input channel 18, output channel Q², and padding 0, where Q is the super-resolution factor.
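As a sanity check on the lightweight claim, the parameter counts implied by the layer specifications above can be tallied in a few lines. This is an illustrative sketch, not part of the patent: bias terms are assumed for every convolution, and Q = 2 is assumed for the super-resolution factor.

```python
def conv_params(k, c_in, c_out, bias=True):
    """Parameters of a standard k x k convolution."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def dw_params(k, c, bias=True):
    """Parameters of a k x k channel-by-channel (depthwise) convolution."""
    return k * k * c + (c if bias else 0)

Q = 2  # assumed super-resolution factor for illustration

total = (
    conv_params(3, 1, 45)             # A1: feature pre-extraction
    + conv_params(1, 45, 36)          # A2
    + dw_params(3, 36)                # B1
    + conv_params(1, 36, 18)          # A3
    + 4 * (                           # 4 channel feature extraction sub-blocks
        dw_params(3, 18)              # B2
        + conv_params(1, 18, 18)      # A4
        + 3 * conv_params(1, 18, 18)  # A5, A6, A7
    )
    + conv_params(1, 18, Q * Q)       # A8: channel compression
)
print(total)  # 9400
```

Even with Q = 4 the total stays under 10,000 parameters, far below the ten-million-parameter models mentioned in the background section.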
Further, the training process of the lightweight image super-resolution reconstruction network is as follows:
the method comprises the steps that an existing image data set is used as a training set, down-sampling is conducted on an original image in the training set to obtain a low-resolution image, data preprocessing is conducted on the original image and the low-resolution image respectively to obtain a sub-image of the original image and a sub-image of the low-resolution image, the sub-image of the original image is used as a label, the sub-image of the low-resolution image is used as input, and a training sample is formed;
and setting a loss function and training parameters, and training the lightweight image super-resolution reconstruction network by adopting a supervised batch learning method.
Further, the loss function is:
Loss = (1/N^2) · Σ_{x=1}^{N} Σ_{y=1}^{N} |X(x, y) − Y(x, y)|
wherein Loss represents the loss function, X represents the label, Y represents the output of the lightweight image super-resolution reconstruction network, (x, y) represents the pixel coordinates, and N represents the size of the label.
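A minimal NumPy sketch of this loss, under the assumption that the formula is a mean absolute difference over the N × N pixel grid (X is the label sub-image, Y the network output):

```python
import numpy as np

def sr_loss(X, Y):
    """Mean absolute difference between label X and network output Y,
    averaged over the N x N pixel grid."""
    N = X.shape[0]
    return np.abs(X - Y).sum() / (N * N)

X = np.ones((4, 4))
Y = np.zeros((4, 4))
print(sr_loss(X, Y))  # 1.0
```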
Further, the size of the sub-image is N × N, where N ranges from 30 to 60.
Based on the technical scheme, the invention has the beneficial effects that:
a light-weight image super-resolution reconstruction method suitable for hardware deployment is characterized in that a light-weight design of an image super-resolution reconstruction network is realized by building and compressing a convolutional neural network; moreover, by the method of dividing the low-resolution image into a plurality of sub-images which are sequentially sent to the network and then splicing each sub-image after super-resolution into a large image, the problems of low system efficiency and the like caused by interaction between a chip and an external memory on a mobile or edge device (such as an FPGA) can be avoided; in conclusion, the super-resolution reconstruction network can achieve obvious light weight and simultaneously improve peak signal to noise ratio (PSNR), so that the super-resolution reconstruction network can better land on mobile equipment, embedded equipment and other network edge equipment.
Drawings
Fig. 1 is a flow diagram of a lightweight image super-resolution reconstruction method suitable for hardware deployment according to the present invention.
Fig. 2 is a schematic structural diagram of a lightweight image super-resolution reconstruction network in the present invention.
Fig. 3 is a schematic structural diagram of the first depth-separable convolution block in the lightweight image super-resolution reconstruction network shown in fig. 2.
Fig. 4 is a schematic structural diagram of channel feature extraction sub-blocks in a channel feature extraction block in the lightweight super-resolution image reconstruction network shown in fig. 2.
Fig. 5 is a schematic structural diagram of a second depth separable convolution block in the channel feature extraction sub-block shown in fig. 4.
Fig. 6 is a schematic structural diagram of a channel feature extrusion block in the channel feature extraction sub-block shown in fig. 4.
Fig. 7 is a schematic structural diagram of a conventional depth-separable convolution block.
FIG. 8 is a schematic diagram of a conventional Non-Local Mixed Attention block.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, the present invention will be further described in detail with reference to the accompanying drawings and examples.
The embodiment provides a lightweight image super-resolution reconstruction method suitable for hardware deployment, and the flow of the method is shown in fig. 1, and the method specifically includes the following steps:
step 1, data preprocessing;
will be treated lowCarrying out format conversion on the resolution image to obtain a to-be-processed low-quality YCbCr image, namely converting the RGB format into the YCbCr format (Y represents a brightness component, cb represents a blue chrominance component, and Cr represents a red chrominance component); then the low-quality YCbCr image to be processed is divided into a plurality of sub-images, and the size of each sub-image isN×NNThe value range of (1) is 30 to 60, in the embodimentN=30;
Step 2, constructing a light-weight image super-resolution reconstruction network, and completing network training;
the lightweight image super-resolution reconstruction network is shown in fig. 2 and comprises: the device comprises a characteristic pre-extraction block, a first depth separable convolution block, a channel characteristic extraction block, a channel compression block and an upper sampling block which are connected in sequence;
and 3, sequentially inputting the sub-images in the step 1 into the lightweight image super-resolution reconstruction network in the step 2, outputting reconstruction sub-images corresponding to the sub-images by the lightweight image super-resolution reconstruction network, and splicing all the reconstruction sub-images according to a sequence to obtain a high-resolution reconstruction image.
Specifically, regarding the data preprocessing: compared with an RGB image, a YCbCr image occupies far less bandwidth during transmission, which is why the format conversion is performed in the invention. Moreover, the inference scheme of dividing the picture into several sub-images, reconstructing each at high resolution, and then stitching the reconstructed sub-images back together achieves an inference result on par with feeding the whole picture into the network directly, while greatly reducing the storage space needed for feature map propagation through the network.
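The split-and-stitch pipeline can be sketched in NumPy as follows (a simplified illustration: the reconstruction network is replaced by an identity mapping to show that splitting and stitching are exact inverses, and the image height and width are assumed to be exact multiples of N):

```python
import numpy as np

def split_into_subimages(img, N):
    """Divide an H x W image into non-overlapping N x N sub-images, row-major."""
    H, W = img.shape
    return [img[r:r + N, c:c + N]
            for r in range(0, H, N)
            for c in range(0, W, N)]

def stitch(subimages, rows, cols):
    """Splice sub-images back together in their original row-major order."""
    return np.block([[subimages[r * cols + c] for c in range(cols)]
                     for r in range(rows)])

# Round-trip check with an identity "network"
img = np.arange(60 * 60).reshape(60, 60).astype(np.float32)
tiles = split_into_subimages(img, 30)
restored = stitch(tiles, rows=2, cols=2)
print(np.array_equal(img, restored))  # True
```

In the actual method each tile would pass through the reconstruction network before stitching, so `stitch` would receive (Q·N) × (Q·N) tiles instead.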
Specifically, in the lightweight image super-resolution reconstruction network:
the feature pre-extraction block adopts convolution layer A 1 The convolution kernel size is 3 × 3, the input channel is 1, the output channel is 45, and padding is 1; the convolution layer is used for performing special pre-extraction on an input picture, and based on the consideration of the size of a sub-picture after segmentation, if an overlarge convolution kernel is adopted, the extraction capability of fine features of the picture is low; if the convolution kernel is too small, the extraction capability of the local features of the image is also very low; therefore adopt 3A x 3 convolutional layer; in this example, convolutional layer A 1 The size of the output feature map is 45 × 30 × 30.
The first depth-separable convolution block, shown in fig. 3, consists of sequentially connected convolutional layer A2, channel-by-channel convolutional layer B1, and convolutional layer A3. Convolutional layer A2 has a convolution kernel size of 1 × 1, input channel 45, output channel 36, and padding 0; channel-by-channel convolutional layer B1 has a convolution kernel size of 3 × 3, input channel 36, output channel 36, and padding 1; convolutional layer A3 has a convolution kernel size of 1 × 1, input channel 36, output channel 18, and padding 0. In terms of working principle: compared with the conventional depth-separable convolution block shown in fig. 7, the first depth-separable convolution block in the invention removes the input branch, which lets the block reduce the number of input feature map channels, whereas the conventional structure only extracts features. That is, in the present invention, the first depth-separable convolution block reduces the number of feature map channels while extracting features; in this embodiment, the feature map output by the first depth-separable convolution block is reduced to 18 × 30 × 30.
The channel feature extraction block is formed by sequentially connecting 4 channel feature extraction sub-blocks; each channel feature extraction sub-block consists of a second depth-separable convolution block and a channel feature extrusion block connected in sequence, as shown in fig. 4. The second depth-separable convolution block, shown in fig. 5, consists of sequentially connected channel-by-channel convolutional layer B2 and convolutional layer A4, and the sum of the output of convolutional layer A4 and the input of channel-by-channel convolutional layer B2 serves as the output of the second depth-separable convolution block; channel-by-channel convolutional layer B2 has a convolution kernel size of 3 × 3, input channel 18, output channel 18, and padding 1; convolutional layer A4 has a convolution kernel size of 1 × 1, input channel 18, output channel 18, and padding 0. The channel feature extrusion block, shown in fig. 6, consists of convolutional layers A5, A6, A7 and a Softmax layer: the input of the block passes through convolutional layer A5 and convolutional layer A6 respectively; the output of convolutional layer A5, after the Softmax layer, is multiplied by the output of convolutional layer A6; the result of the multiplication passes through convolutional layer A7 and is then added to the input to form the output of the channel feature extrusion block. Convolutional layers A5, A6, and A7 share the same parameters: convolution kernel size 1 × 1, input channel 18, output channel 18, padding 0. In terms of working principle: for the second depth-separable convolution block, compared with the conventional depth-separable convolution block shown in fig. 7, the first 1 × 1 convolutional layer is removed in the present invention, which effectively reduces the parameter count and the number of calculations. For the channel feature extrusion block: to achieve a better super-resolution effect, a channel attention mechanism is introduced into the network to capture information across channels; the channel feature extrusion block is proposed to capture local channel information and is very hardware-friendly. Compared with the existing Non-Local Mixed Attention block shown in fig. 8, the channel feature extrusion block of the invention adopts a single-path design, i.e., one 1 × 1 convolutional layer and one multiplication operation in the block of fig. 8 are removed; this streamlined design effectively reduces the parameter count and the number of calculations. Moreover, because every branch operation in the network requires additional storage to be allocated in hardware (branches run in parallel), removing one branch (a convolutional layer) effectively reduces the storage requirement, improving hardware friendliness.
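To make the data flow of the channel feature extrusion block concrete, here is a NumPy sketch with the 1 × 1 convolutions modelled as per-pixel channel mixing. The weights are random and for illustration only, and the Softmax is assumed to act along the channel axis, which the patent text does not state explicitly.

```python
import numpy as np

def conv1x1(x, W):
    """1 x 1 convolution: mix channels at every pixel. x: (C, H, W), W: (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', W, x)

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_feature_extrusion(x, W5, W6, W7):
    """x passes through A5 and A6 in parallel; Softmax(A5) weights A6's
    output; the product goes through A7 and is added back to the input."""
    attn = softmax(conv1x1(x, W5), axis=0)   # A5 branch -> Softmax
    out = attn * conv1x1(x, W6)              # multiply with A6 branch
    return x + conv1x1(out, W7)              # A7, then residual add

rng = np.random.default_rng(0)
x = rng.standard_normal((18, 30, 30))
W5, W6, W7 = (rng.standard_normal((18, 18)) * 0.1 for _ in range(3))
y = channel_feature_extrusion(x, W5, W6, W7)
print(y.shape)  # (18, 30, 30)
```

The residual add at the end means the block preserves the 18 × 30 × 30 feature map shape, matching the fixed channel width used throughout the channel feature extraction block.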
The channel compression block adopts convolutional layer A8, with a convolution kernel size of 1 × 1, input channel 18, output channel Q², and padding 0; Q is the super-resolution factor, with a value range of 2 to 4. The channel compression block reduces the number of channels of the feature map output by the channel feature extraction block.
The upsampling block adopts a PixelShuffle block, which magnifies the input picture to (Q × N) × (Q × N); the PixelShuffle block introduces no additional parameters during magnification and requires essentially no computation.
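PixelShuffle is a fixed rearrangement of values, which is why it introduces no parameters. A NumPy sketch of the depth-to-space operation, following the usual channel ordering (the patent does not spell the ordering out, so this is an assumption):

```python
import numpy as np

def pixel_shuffle(x, Q):
    """Rearrange a (C*Q*Q, H, W) tensor into (C, H*Q, W*Q): no learned
    parameters, just an index permutation (depth-to-space)."""
    C = x.shape[0] // (Q * Q)
    H, W = x.shape[1], x.shape[2]
    x = x.reshape(C, Q, Q, H, W)
    x = x.transpose(0, 3, 1, 4, 2)       # (C, H, Q, W, Q)
    return x.reshape(C, H * Q, W * Q)

# A Q^2 x N x N feature map (Q = 2, N = 30) becomes one (Q*N) x (Q*N) channel
x = np.arange(4 * 30 * 30).reshape(4, 30, 30)
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 60, 60)
```

This matches the channel compression block's output of Q² channels: after the shuffle, a single Q·N × Q·N luma channel remains.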
The training process of the lightweight image super-resolution reconstruction network comprises the following steps:
taking the existing image data set as a training set, performing down-sampling on an original image in the training set to obtain a low-resolution image, performing data preprocessing on the original image and the low-resolution image respectively in the same way as in the step 1 to obtain a sub-image of the original image and a sub-image of the low-resolution image, taking the sub-image of the original image as a label, and taking the sub-image of the low-resolution image as input to form a training sample;
setting a loss function and training parameters, and training the lightweight image super-resolution reconstruction network by adopting a supervised batch learning method; the loss function is:
Loss = (1/N^2) · Σ_{x=1}^{N} Σ_{y=1}^{N} |X(x, y) − Y(x, y)|
wherein Loss represents the loss function, X represents the label, Y represents the output of the lightweight image super-resolution reconstruction network, (x, y) represents the pixel coordinates, and N represents the size of the label; the loss function measures the difference between the reconstructed high-resolution image output by the lightweight image super-resolution reconstruction network and the original high-resolution image.
In the embodiment, the 91-images dataset is adopted as the training set and the Set5 dataset as the validation set. The example image in the flowchart of fig. 1 comes from the Set5 dataset; the original low-resolution image is 252 × 252 and is divided into 64 sub-images of size 30 × 30 each. The high-resolution reconstructed image obtained by the method is 504 × 504, with a single-channel test PSNR of 38.049; if the original picture is fed directly into the network for inference, the single-channel test PSNR is 38.076. Hence the split-and-stitch processing achieves an inference result on par with feeding the original image directly into the network.
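The PSNR values quoted above follow the standard definition; a minimal sketch, assuming pixel values normalised to [0, 1] so the peak value is 1:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction, for pixel values in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 0.5)
img = ref + 0.1   # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, img), 2))  # 20.0
```

For the single-channel figures in the embodiment, `ref` and `img` would be the Y channels of the original and reconstructed images.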
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (5)

1. A light-weight image super-resolution reconstruction method suitable for hardware deployment is characterized by comprising the following steps:
step 1, preprocessing data;
carrying out format conversion on the low-resolution image to be processed to obtain a low-quality YCbCr image to be processed; then dividing the low-quality YCbCr image to be processed into a plurality of sub-images;
step 2, constructing a light-weight image super-resolution reconstruction network and completing network training;
the lightweight image super-resolution reconstruction network comprises: a feature pre-extraction block, a first depth-separable convolution block, a channel feature extraction block, a channel compression block, and an upsampling block connected in sequence;
the feature pre-extraction block adopts convolution layer A 1
The first depth separable convolution blocks are composed of convolution layers A connected in sequence 2 Channel-by-channel convolution layer B 1 And a convolutional layer A 3 Forming;
the channel feature extraction block is formed by sequentially connecting 4 channel feature extraction sub-blocks, and each channel feature extraction sub-block is formed by sequentially connecting a second depth separable rolling block and a channel feature extrusion block; the second depth separable convolution block is composed of sequentially connected channel-by-channel convolution layers B 2 And a convolutional layer A 4 Is composed of, and a layer A is convoluted 4 Output and channel-by-channel convolution layer B of 2 As the output of the second depth separable convolution block; the channel feature extrusion block is composed of a convolution layer A 5 Convolutional layer A 6 Convolutional layer A 7 Formed with a Softmax layer, the inputs of the channel feature extrusion blocks are respectively passed through a convolution layer A 5 And convolution layer A 6 Convolutional layer A 5 After passing through Softmax layer, the output of (A) is combined with convolution layer A 6 Multiplying the outputs of (1), multiplyingThe result of (A) is passed through a convolutional layer A 7 Adding the obtained data with the input of the channel characteristic extrusion block as the output of the channel characteristic extrusion block;
the channel compression block adopts convolution layer A 8
The up-sampling block adopts a PixelShuffle block;
and 3, sequentially inputting the sub-images in the step 1 into the lightweight image super-resolution reconstruction network in the step 2, outputting reconstruction sub-images corresponding to the sub-images by the lightweight image super-resolution reconstruction network, and splicing all the reconstruction sub-images according to a sequence to obtain a high-resolution reconstruction image.
2. The lightweight image super-resolution reconstruction method suitable for hardware deployment according to claim 1, wherein convolutional layer A1 has a convolution kernel size of 3 × 3, input channel 1, output channel 45, and padding 1; convolutional layer A2 has a convolution kernel size of 1 × 1, input channel 45, output channel 36, and padding 0; channel-by-channel convolutional layer B1 has a convolution kernel size of 3 × 3, input channel 36, output channel 36, and padding 1; convolutional layer A3 has a convolution kernel size of 1 × 1, input channel 36, output channel 18, and padding 0; channel-by-channel convolutional layer B2 has a convolution kernel size of 3 × 3, input channel 18, output channel 18, and padding 1; convolutional layer A4 has a convolution kernel size of 1 × 1, input channel 18, output channel 18, and padding 0; convolutional layers A5, A6, and A7 share the same parameters: convolution kernel size 1 × 1, input channel 18, output channel 18, padding 0; convolutional layer A8 has a convolution kernel size of 1 × 1, input channel 18, output channel Q², and padding 0, where Q is the super-resolution factor.
3. The method for reconstructing the super-resolution light-weight image suitable for hardware deployment according to claim 1, wherein the training process of the super-resolution light-weight image reconstruction network is as follows:
using an existing image data set as a training set, down-sampling each original image in the training set to obtain a low-resolution image; performing data preprocessing on the original image and the low-resolution image respectively to obtain a sub-image of the original image and a sub-image of the low-resolution image; and forming a training sample by taking the sub-image of the original image as the label and the sub-image of the low-resolution image as the input;
setting a loss function and training parameters, and training the lightweight image super-resolution reconstruction network by a supervised batch learning method.
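The sample-construction step of claim 3 can be sketched as follows, on a grayscale image stored as a nested list. Two details the claim leaves open are filled in as assumptions: plain decimation stands in for the unspecified down-sampling kernel, and sub-images are cropped on a non-overlapping grid.

```python
def downsample(img, q):
    # keep every q-th pixel in both directions
    # (stand-in for the patent's unspecified down-sampler)
    return [row[::q] for row in img[::q]]

def make_samples(img, q, n):
    """Pair each n*n original sub-image (the label) with the matching
    (n//q)*(n//q) low-resolution sub-image (the input)."""
    lr = downsample(img, q)
    m = n // q  # low-resolution sub-image side length
    samples = []
    for i in range(0, len(img) - n + 1, n):
        for j in range(0, len(img[0]) - n + 1, n):
            label = [row[j:j + n] for row in img[i:i + n]]
            inp = [row[j // q:j // q + m] for row in lr[i // q:i // q + m]]
            samples.append((inp, label))
    return samples

image = [[r * 8 + c for c in range(8)] for r in range(8)]
pairs = make_samples(image, q=2, n=4)
print(len(pairs))  # → 4 (four non-overlapping 4x4 labels, each with a 2x2 input)
```

Each `(input, label)` pair is one training sample in the sense of claim 3: the network sees the low-resolution crop and is supervised against the co-located original crop.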
4. The lightweight image super-resolution reconstruction method suitable for hardware deployment according to claim 3, wherein the loss function is:
$$Loss = \frac{1}{N^{2}}\sum_{x=1}^{N}\sum_{y=1}^{N}\bigl(X(x,y)-Y(x,y)\bigr)^{2}$$
wherein *Loss* denotes the loss function, *X* denotes the label, *Y* denotes the output of the lightweight image super-resolution reconstruction network, (*x*, *y*) denotes the pixel coordinates, and *N* denotes the size of the label.
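Using the symbols claim 4 defines (X the label, Y the network output, N the label size), a pixel-wise mean-squared-error loss can be sketched as below; the MSE form itself is an assumption on our part, since the granted formula is not cleanly legible in this text.

```python
def sr_loss(x, y, n):
    # mean squared error between label x and network output y over n*n pixels
    # (assumed loss form; symbol names follow claim 4)
    return sum((x[i][j] - y[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)

label = [[1.0, 2.0], [3.0, 4.0]]
output = [[1.0, 2.0], [3.0, 6.0]]
print(sr_loss(label, output, 2))  # → 1.0
```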
5. The lightweight image super-resolution reconstruction method suitable for hardware deployment according to claim 1, wherein the size of the sub-images is N×N, and the value range of N is 30 to 60.
CN202211592921.2A 2022-12-13 2022-12-13 Lightweight image super-resolution reconstruction method suitable for hardware deployment Active CN115601242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211592921.2A CN115601242B (en) 2022-12-13 2022-12-13 Lightweight image super-resolution reconstruction method suitable for hardware deployment

Publications (2)

Publication Number Publication Date
CN115601242A CN115601242A (en) 2023-01-13
CN115601242B true CN115601242B (en) 2023-04-18

Family

ID=84854174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211592921.2A Active CN115601242B (en) 2022-12-13 2022-12-13 Lightweight image super-resolution reconstruction method suitable for hardware deployment

Country Status (1)

Country Link
CN (1) CN115601242B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016019484A1 (en) * 2014-08-08 2016-02-11 Xiaoou Tang An apparatus and a method for providing super-resolution of a low-resolution image
CN112734643A (en) * 2021-01-15 2021-04-30 重庆邮电大学 Lightweight image super-resolution reconstruction method based on cascade network
CN113538234A (en) * 2021-06-29 2021-10-22 中国海洋大学 Remote sensing image super-resolution reconstruction method based on lightweight generation model
CN114519667A (en) * 2022-01-10 2022-05-20 武汉图科智能科技有限公司 Image super-resolution reconstruction method and system
CN114612306A (en) * 2022-03-15 2022-06-10 北京工业大学 Deep learning super-resolution method for crack detection

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657962B (en) * 2014-12-12 2017-08-25 西安电子科技大学 The Image Super-resolution Reconstruction method returned based on cascading linear
WO2017204886A1 (en) * 2016-05-23 2017-11-30 Massachusetts Institute Of Technology System and method for providing real-time super-resolution for compressed videos
CN108765511B (en) * 2018-05-30 2023-03-24 重庆大学 Ultrasonic image super-resolution reconstruction method based on deep learning
CN110136067B (en) * 2019-05-27 2022-09-06 商丘师范学院 Real-time image generation method for super-resolution B-mode ultrasound image
CN110197468A (en) * 2019-06-06 2019-09-03 天津工业大学 A kind of single image Super-resolution Reconstruction algorithm based on multiple dimensioned residual error learning network
CN110378470B (en) * 2019-07-19 2023-08-18 Oppo广东移动通信有限公司 Optimization method and device for neural network model and computer storage medium
CN111105352B (en) * 2019-12-16 2023-04-25 佛山科学技术学院 Super-resolution image reconstruction method, system, computer equipment and storage medium
CN111402126B (en) * 2020-02-15 2023-12-22 北京中科晶上科技股份有限公司 Video super-resolution method and system based on blocking
CN112348103B (en) * 2020-11-16 2022-11-11 南开大学 Image block classification method and device and super-resolution reconstruction method and device thereof
CN112861978B (en) * 2021-02-20 2022-09-02 齐齐哈尔大学 Multi-branch feature fusion remote sensing scene image classification method based on attention mechanism
CN112991203B (en) * 2021-03-08 2024-05-07 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN114331837A (en) * 2021-12-22 2022-04-12 国网安徽省电力有限公司 Method for processing and storing panoramic monitoring image of protection system of extra-high voltage converter station

Similar Documents

Publication Publication Date Title
CN108647775B (en) Super-resolution image reconstruction method based on full convolution neural network single image
CN107155110A (en) A kind of picture compression method based on super-resolution technique
CN113034358B (en) Super-resolution image processing method and related device
CN109785252B (en) Night image enhancement method based on multi-scale residual error dense network
CN112419151A (en) Image degradation processing method, device, storage medium and electronic equipment
CN111951164B (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
CN110751597A (en) Video super-resolution method based on coding damage repair
CN112017116B (en) Image super-resolution reconstruction network based on asymmetric convolution and construction method thereof
WO2023010831A1 (en) Method, system and apparatus for improving image resolution, and storage medium
CN112001843A (en) Infrared image super-resolution reconstruction method based on deep learning
CN112288630A (en) Super-resolution image reconstruction method and system based on improved wide-depth neural network
CN111800630A (en) Method and system for reconstructing video super-resolution and electronic equipment
CN115018708A (en) Airborne remote sensing image super-resolution reconstruction method based on multi-scale feature fusion
CN111951171A (en) HDR image generation method and device, readable storage medium and terminal equipment
Zhang et al. Multi-scale-based joint super-resolution and inverse tone-mapping with data synthesis for UHD HDR video
CN104683818A (en) Image compression method based on biorthogonal invariant set multi-wavelets
CN115601242B (en) Lightweight image super-resolution reconstruction method suitable for hardware deployment
CN115187455A (en) Lightweight super-resolution reconstruction model and system for compressed image
CN112070676B (en) Picture super-resolution reconstruction method of double-channel multi-perception convolutional neural network
CN114170082A (en) Video playing method, image processing method, model training method, device and electronic equipment
CN111598781B (en) Image super-resolution method based on hybrid high-order attention network
CN116778539A (en) Human face image super-resolution network model based on attention mechanism and processing method
CN113674151A (en) Image super-resolution reconstruction method based on deep neural network
CN114240750A (en) Video resolution improving method and device, storage medium and electronic equipment
CN117635478B (en) Low-light image enhancement method based on spatial channel attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant