CN115601242A - Lightweight image super-resolution reconstruction method suitable for hardware deployment


Info

Publication number
CN115601242A
CN115601242A
Authority
CN
China
Prior art keywords
channel
image
resolution
super
block
Prior art date
Legal status
Granted
Application number
CN202211592921.2A
Other languages
Chinese (zh)
Other versions
CN115601242B (en)
Inventor
常亮
樊东奇
赵鑫
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202211592921.2A
Publication of CN115601242A
Application granted
Publication of CN115601242B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4046: Scaling using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image enhancement and provides a lightweight image super-resolution reconstruction method suitable for hardware deployment on an FPGA (field-programmable gate array) or other embedded devices. The method first converts the low-resolution image to be processed into a YCbCr image, then divides the YCbCr image into several sub-images, which are fed in sequence to a lightweight image super-resolution reconstruction network; the network outputs a reconstructed sub-image for each input sub-image, and all reconstructed sub-images are stitched together in order to obtain the high-resolution reconstructed image. The invention balances super-resolution quality against network parameter count and computational cost: the reconstruction network becomes markedly lighter while its peak signal-to-noise ratio (PSNR) improves, so that it can be deployed more readily on mobile devices, embedded devices, and other network edge devices.

Description

Lightweight image super-resolution reconstruction method suitable for hardware deployment
Technical Field
The invention belongs to the technical field of image enhancement, relates to an image super-resolution reconstruction method, and particularly provides a lightweight image super-resolution reconstruction method suitable for hardware deployment.
Background
With the development of mobile phones and other mobile electronic devices, the demand for high-quality images keeps rising, from everyday entertainment photos and surveillance footage to medical image diagnosis and satellite imagery. Compared with directly improving the structure and performance of the camera, enhancing images from the algorithmic side greatly reduces cost and development time. Image super-resolution reconstruction is a method for converting a low-resolution image into a high-resolution image; its evaluation metric is the peak signal-to-noise ratio (PSNR), which directly reflects the quality of the reconstructed image.
In the paper "Learning a Deep Convolutional Network for Image Super-Resolution" (2014), Dong et al. proposed SRCNN, the first method to realize image Super-Resolution (SR) with a convolutional neural network, opening the wave of CNN-based image super-resolution. Supported by various training strategies and new techniques, image super-resolution benchmarks keep setting new records, and the images produced by model inference come ever closer to the original high-resolution images. However, network models have also grown deeper and wider, from the original 3 hidden layers to hundreds of layers today, and the parameter count has grown from the original 8,032 to the tens of millions, bringing a complicated computation process and huge memory overhead. Against this background, existing image super-resolution reconstruction methods with outstanding results are difficult to deploy on mobile devices, embedded devices, and other network edge devices, because such devices lack the powerful computing units and corresponding storage units of a PC.
Disclosure of Invention
The invention aims to provide a lightweight image super-resolution reconstruction method suitable for hardware deployment on an FPGA (field-programmable gate array) or other embedded devices. The method balances super-resolution quality against network parameter count and computational cost, making the reconstruction network markedly lighter while improving its peak signal-to-noise ratio (PSNR), so that it can be deployed more readily on mobile devices, embedded devices, and other network edge devices.
To achieve this aim, the invention adopts the following technical scheme:
A lightweight image super-resolution reconstruction method suitable for hardware deployment comprises the following steps:
Step 1, data preprocessing:
convert the format of the low-resolution image to be processed to obtain a low-quality YCbCr image; then divide the low-quality YCbCr image into several sub-images.
Step 2, construct the lightweight image super-resolution reconstruction network and complete network training:
the lightweight image super-resolution reconstruction network comprises a feature pre-extraction block, a first depth-separable convolution block, a channel feature extraction block, a channel compression block, and an upsampling block, connected in sequence.
Step 3, input the sub-images from step 1 in sequence into the lightweight image super-resolution reconstruction network from step 2; the network outputs a reconstructed sub-image for each sub-image, and all reconstructed sub-images are stitched in order to obtain the high-resolution reconstructed image.
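The split-and-stitch scheme of steps 1 and 3 can be sketched as follows. This is an illustrative Python sketch, not part of the patent: the function names and the list-of-rows image representation are assumptions, and the reconstruction network itself is abstracted away.

```python
def split_into_tiles(img, n):
    """Split an H×W image (list of rows) into n×n sub-images, row-major."""
    h, w = len(img), len(img[0])
    assert h % n == 0 and w % n == 0, "image must tile evenly"
    tiles = []
    for r in range(0, h, n):
        for c in range(0, w, n):
            tiles.append([row[c:c + n] for row in img[r:r + n]])
    return tiles

def stitch_tiles(tiles, tiles_per_row, n):
    """Reassemble row-major n×n tiles back into one image."""
    rows = []
    for tr in range(0, len(tiles), tiles_per_row):
        band = tiles[tr:tr + tiles_per_row]   # one horizontal band of tiles
        for y in range(n):
            rows.append([px for t in band for px in t[y]])
    return rows
```

In the method, each tile would pass through the reconstruction network between the split and the stitch; splitting and stitching themselves are pure index arithmetic and introduce no loss.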
Further, in the lightweight image super-resolution reconstruction network:
the feature pre-extraction block adopts convolution layer A 1
The first depth separable convolution blocks are composed of convolution layers A connected in sequence 2 Channel-by-channel convolution layer B 1 And convolution layer A 3 Form a composition
The channel feature extraction block is formed by sequentially connecting 4 channel feature extraction sub-blocks, and each channel feature extraction sub-block is formed by sequentially connecting a second depth separable rolling block and a channel feature extrusion block; the second depth separable convolution block is composed of sequentially connected channel-by-channel convolution layers B 2 And a convolutional layer A 4 Is composed of, and a layer A is convoluted 4 Output and channel-by-channel convolution layer B of 2 As the output of the second depth separable convolution block; the channel feature extrusion block is composed of a convolution layer A 5 Convolutional layer A 6 Convolutional layer A 7 Formed with a Softmax layer, the inputs of the channel feature squeeze blocks are respectively passed through a convolution layer A 5 And a convolutional layer A 6 Convolutional layer A 5 Output of (2)After passing through Softmax layer, the layers are combined with a convolution layer A 6 Is multiplied by the output of (a), the result of the multiplication is passed through a convolution layer A 7 Then adding the data and the input data to be used as the output of the channel characteristic extrusion block;
the channel compression block adopts convolution layer A 8
The upsampling block is a PixelShuffle block.
Further, convolutional layer A₁ has kernel size 3×3, input channels 1, output channels 45, padding 1; convolutional layer A₂ has kernel size 1×1, input channels 45, output channels 36, padding 0; channel-by-channel convolutional layer B₁ has kernel size 3×3, input channels 36, output channels 36, padding 1; convolutional layer A₃ has kernel size 1×1, input channels 36, output channels 18, padding 0; channel-by-channel convolutional layer B₂ has kernel size 3×3, input channels 18, output channels 18, padding 1; convolutional layer A₄ has kernel size 1×1, input channels 18, output channels 18, padding 0; convolutional layers A₅, A₆, and A₇ share the same parameters: kernel size 1×1, input channels 18, output channels 18, padding 0; convolutional layer A₈ has kernel size 1×1, input channels 18, output channels Q², padding 0, where Q is the super-resolution factor.
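From the layer parameters above, the total parameter count of the network can be tallied with simple arithmetic. The sketch below is illustrative and assumes each convolution carries a bias term; the patent does not state whether biases are used.

```python
def conv_params(k, cin, cout, bias=True):
    """Parameters of a standard k×k convolution."""
    return k * k * cin * cout + (cout if bias else 0)

def dwconv_params(k, ch, bias=True):
    """Parameters of a k×k channel-by-channel (depthwise) convolution."""
    return k * k * ch + (ch if bias else 0)

def network_params(q=2):
    """Total parameters of the network for super-resolution factor q."""
    total = conv_params(3, 1, 45)            # A1, feature pre-extraction
    total += conv_params(1, 45, 36)          # A2
    total += dwconv_params(3, 36)            # B1
    total += conv_params(1, 36, 18)          # A3
    sub = dwconv_params(3, 18)               # B2
    sub += conv_params(1, 18, 18)            # A4
    sub += 3 * conv_params(1, 18, 18)        # A5, A6, A7
    total += 4 * sub                         # four channel feature extraction sub-blocks
    total += conv_params(1, 18, q * q)       # A8, channel compression
    return total                             # PixelShuffle adds no parameters
```

Under these assumptions the whole network stays well under ten thousand parameters, which is what makes the design plausible for FPGA deployment.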
Further, the training process of the lightweight image super-resolution reconstruction network is as follows:
the method comprises the steps that an existing image data set is used as a training set, down-sampling is conducted on an original image in the training set to obtain a low-resolution image, data preprocessing is conducted on the original image and the low-resolution image respectively to obtain a sub-image of the original image and a sub-image of the low-resolution image, the sub-image of the original image is used as a label, the sub-image of the low-resolution image is used as input, and a training sample is formed;
and setting a loss function and training parameters, and training the lightweight image super-resolution reconstruction network by adopting a supervised batch learning method.
Further, the loss function is:
Loss = (1/N²) · Σ_{x=1..N} Σ_{y=1..N} |X(x, y) − Y(x, y)|
where Loss denotes the loss function, X denotes the label, Y denotes the output of the lightweight image super-resolution reconstruction network, (x, y) denotes pixel coordinates, and N denotes the size of the label.
Further, the size of each sub-image is N×N, where N ranges from 30 to 60.
Based on the above technical scheme, the invention has the following beneficial effects:
The lightweight design of the image super-resolution reconstruction network is achieved by building and compressing a convolutional neural network. Moreover, by dividing the low-resolution image into several sub-images that are fed to the network in sequence and then stitching the super-resolved sub-images back into a full image, the method avoids problems such as low system efficiency caused by the chip interacting with external memory on mobile or edge devices (such as an FPGA). In summary, the super-resolution reconstruction network achieves a marked reduction in size while improving peak signal-to-noise ratio (PSNR), so that it can be deployed more readily on mobile devices, embedded devices, and other network edge devices.
Drawings
Fig. 1 is a flow diagram of a lightweight image super-resolution reconstruction method suitable for hardware deployment according to the present invention.
Fig. 2 is a schematic structural diagram of the lightweight image super-resolution reconstruction network in the present invention.
Fig. 3 is a schematic structural diagram of the first depth-separable convolution block in the lightweight image super-resolution reconstruction network shown in fig. 2.
Fig. 4 is a schematic structural diagram of channel feature extraction sub-blocks in a channel feature extraction block in the lightweight super-resolution image reconstruction network shown in fig. 2.
Fig. 5 is a schematic diagram of a second depth separable convolution block in the channel feature extraction sub-block shown in fig. 4.
Fig. 6 is a schematic structural diagram of a channel feature extrusion block in the channel feature extraction sub-block shown in fig. 4.
Fig. 7 is a schematic structural diagram of a conventional depth-separable convolution block.
Fig. 8 is a schematic structural diagram of a conventional Non-Local Mixed Attention block.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, the present invention is further described in detail with reference to the accompanying drawings and embodiments.
The embodiment provides a lightweight image super-resolution reconstruction method suitable for hardware deployment, and the flow of the method is shown in fig. 1, and the method specifically includes the following steps:
step 1, data preprocessing;
convert the format of the low-resolution image to be processed to obtain a low-quality YCbCr image, i.e. convert the RGB format into the YCbCr format (Y denotes the luminance component, Cb the blue-difference chroma component, and Cr the red-difference chroma component); then divide the low-quality YCbCr image into several sub-images of size N×N, where N ranges from 30 to 60; in this embodiment N = 30;
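Since convolutional layer A₁ takes a single input channel, only the luminance (Y) channel is fed to the network. A minimal sketch of the RGB-to-Y conversion is shown below, assuming the standard ITU-R BT.601 full-range weights; the patent does not give the exact conversion coefficients, so these are an assumption.

```python
def rgb_to_y(r, g, b):
    """Luminance (Y) of one RGB pixel using ITU-R BT.601 full-range
    weights (an assumed choice). Cb and Cr would be computed analogously
    from weighted differences against the blue and red channels."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Applying this per pixel yields the single-channel image that is then tiled into N×N sub-images.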
Step 2, constructing a light-weight image super-resolution reconstruction network, and completing network training;
the lightweight image super-resolution reconstruction network is shown in fig. 2, and comprises: the device comprises a feature pre-extraction block, a first depth separable rolling block, a channel feature extraction block, a channel compression block and an upper sampling block which are connected in sequence;
Step 3, input the sub-images from step 1 in sequence into the lightweight image super-resolution reconstruction network from step 2; the network outputs a reconstructed sub-image for each sub-image, and all reconstructed sub-images are stitched in order to obtain the high-resolution reconstructed image.
Specifically, regarding the data preprocessing: compared with an RGB image, a YCbCr image occupies much less bandwidth during transmission, hence the format conversion in the invention. Moreover, the inference scheme of dividing the picture into several sub-images, reconstructing each at high resolution, and then stitching them back together achieves the same inference quality as feeding the whole picture into the network directly, while greatly reducing the storage required for feature maps propagating through the network.
Specifically, in the lightweight image super-resolution reconstruction network:
the feature pre-extraction block adopts convolution layer A 1 The convolution kernel size is 3 × 3, the input channel is 1, the output channel is 45, and padding is 1; the convolution layer is used for performing special pre-extraction on an input picture, and based on the consideration of the size of a sub-picture after segmentation, if an overlarge convolution kernel is adopted, the extraction capability of fine features of the picture is low; if an undersized convolution kernel is adopted, the extraction capability of the local features of the image is low; therefore, 3 × 3 convolutional layers are used; in this example, the convolutional layer A 1 The size of the output feature map is 45 × 30 × 30.
The first depth-separable convolution block, shown in fig. 3, is composed of convolutional layer A₂, channel-by-channel convolutional layer B₁, and convolutional layer A₃ connected in sequence. Convolutional layer A₂ has kernel size 1×1, input channels 45, output channels 36, padding 0; channel-by-channel convolutional layer B₁ has kernel size 3×3, input channels 36, output channels 36, padding 1; convolutional layer A₃ has kernel size 1×1, input channels 36, output channels 18, padding 0. In terms of working principle: compared with the conventional depth-separable convolution block shown in fig. 7, the first depth-separable convolution block removes the input branch, which gives it the ability to reduce the number of input feature map channels, whereas the conventional structure only extracts features. That is, in the invention the first depth-separable convolution block both extracts features and reduces the number of feature map channels; in this embodiment the feature map output by the first depth-separable convolution block is reduced to size 18×30×30.
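The parameter saving of this factorization can be checked with simple arithmetic: a plain 3×3 convolution taking 45 channels directly to 18 is compared below with the A₂ → B₁ → A₃ stack. Bias terms are assumed in both counts; the patent does not state whether biases are used.

```python
# Plain 3×3 convolution, 45 -> 18 channels: kernel² · in · out + bias
standard = 3 * 3 * 45 * 18 + 18

# First depth-separable block: 1×1 (45 -> 36), then channel-by-channel
# 3×3 on 36 channels (kernel² · channels + bias), then 1×1 (36 -> 18)
separable = (1 * 1 * 45 * 36 + 36) + (3 * 3 * 36 + 36) + (1 * 1 * 36 * 18 + 18)
```

Under these assumptions the separable stack uses well under half the parameters of the plain convolution while covering the same 3×3 receptive field.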
The channel feature extraction block is formed by connecting 4 channel feature extraction sub-blocks in sequence; each channel feature extraction sub-block is composed of a second depth-separable convolution block and a channel feature squeeze block connected in sequence, as shown in fig. 4. The second depth-separable convolution block, shown in fig. 5, is composed of channel-by-channel convolutional layer B₂ and convolutional layer A₄ connected in sequence; the output of convolutional layer A₄ is summed with the input of channel-by-channel convolutional layer B₂ to form the output of the second depth-separable convolution block. Channel-by-channel convolutional layer B₂ has kernel size 3×3, input channels 18, output channels 18, padding 1; convolutional layer A₄ has kernel size 1×1, input channels 18, output channels 18, padding 0. The channel feature squeeze block, shown in fig. 6, is composed of convolutional layers A₅, A₆, A₇ and a Softmax layer: the input is passed through convolutional layers A₅ and A₆ in parallel; the output of A₅ passes through the Softmax layer and is multiplied with the output of A₆; the product passes through convolutional layer A₇ and is then added to the block input to form the output of the channel feature squeeze block. Convolutional layers A₅, A₆, and A₇ share the same parameters: kernel size 1×1, input channels 18, output channels 18, padding 0. In terms of working principle: for the second depth-separable convolution block, compared with the conventional depth-separable convolution block of fig. 7, the first 1×1 convolutional layer is removed, which effectively reduces the parameter count and the amount of computation. For the channel feature squeeze block: to achieve a better super-resolution effect, a channel attention mechanism is introduced into the network to capture information across channels; the channel feature squeeze block is proposed to capture local channel information and is very hardware-friendly. Compared with the existing Non-Local Mixed Attention block shown in fig. 8, the channel feature squeeze block adopts a single-path design, i.e. one 1×1 convolutional layer and one multiplication in the block of fig. 8 are removed; this streamlined design effectively reduces the parameter count and the amount of computation. In addition, every branch operation in the network requires extra storage to be allocated in hardware (branches run in parallel), so removing one branch (a convolutional layer) effectively reduces the storage requirement and thereby improves hardware friendliness.
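The data flow of the channel feature squeeze block can be sketched in pure Python. The [C][H][W] nested-list tensors, the per-pixel softmax across the channel axis, and the omission of bias terms are illustrative assumptions; in particular, the patent does not state which axis the Softmax layer operates over.

```python
import math

def conv1x1(x, w):
    """1×1 convolution: x is [C][H][W], w is [Cout][Cin]; bias omitted."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    return [[[sum(w[o][i] * x[i][h][v] for i in range(C))
              for v in range(W)] for h in range(H)] for o in range(len(w))]

def softmax_channels(x):
    """Softmax across the channel axis at each spatial position."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for h in range(H):
        for v in range(W):
            m = max(x[c][h][v] for c in range(C))
            exps = [math.exp(x[c][h][v] - m) for c in range(C)]
            s = sum(exps)
            for c in range(C):
                out[c][h][v] = exps[c] / s
    return out

def channel_squeeze_block(x, w5, w6, w7):
    """softmax(A5(x)) multiplied with A6(x), passed through A7,
    then added to the input (residual connection)."""
    attn = softmax_channels(conv1x1(x, w5))
    gated = conv1x1(x, w6)
    C, H, W = len(x), len(x[0]), len(x[0][0])
    mixed = [[[attn[c][h][v] * gated[c][h][v] for v in range(W)]
              for h in range(H)] for c in range(C)]
    out = conv1x1(mixed, w7)
    return [[[out[c][h][v] + x[c][h][v] for v in range(W)]
             for h in range(H)] for c in range(C)]
```

The single attention path (one softmax branch, one gated branch) is what distinguishes this block from the two-path Non-Local Mixed Attention structure of fig. 8.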
The channel compression block adopts convolutional layer A₈ with kernel size 1×1, input channels 18, output channels Q², padding 0, where Q is the super-resolution factor and ranges from 2 to 4; the channel compression block reduces the number of channels of the feature map output by the channel feature extraction block.
The upsampling block adopts a PixelShuffle block, which enlarges the input picture to (Q×N)×(Q×N); the PixelShuffle block introduces no extra parameters and requires no computation during enlargement.
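The PixelShuffle rearrangement can be sketched in pure Python as follows; the channel ordering (row-major within each Q×Q output cell, as in PyTorch's PixelShuffle) is an assumption, since the patent does not specify it.

```python
def pixel_shuffle(x, q):
    """Rearrange a [Q²][N][N] tensor into a single [Q·N][Q·N] channel:
    channel c of input pixel (h, w) lands at output position
    (h·Q + c // Q, w·Q + c % Q). Pure data movement, no arithmetic."""
    n = len(x[0])
    out = [[0] * (q * n) for _ in range(q * n)]
    for c in range(q * q):
        dy, dx = c // q, c % q
        for h in range(n):
            for w in range(n):
                out[h * q + dy][w * q + dx] = x[c][h][w]
    return out
```

Because the operation only permutes existing values, it carries no weights and no multiply-accumulate cost, matching the claim that the upsampling step is free of parameters and computation.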
The training process of the lightweight image super-resolution reconstruction network comprises the following steps:
An existing image data set is used as the training set. The original images in the training set are down-sampled to obtain low-resolution images; data preprocessing identical to step 1 is applied to the original images and to the low-resolution images to obtain sub-images of each; the sub-images of the original image serve as labels and the sub-images of the low-resolution image serve as inputs, forming the training samples.
setting a loss function and training parameters, and training the lightweight image super-resolution reconstruction network by adopting a supervised batch learning method; the loss function is:
Loss = (1/N²) · Σ_{x=1..N} Σ_{y=1..N} |X(x, y) − Y(x, y)|
where Loss denotes the loss function, X denotes the label, Y denotes the output of the lightweight image super-resolution reconstruction network, (x, y) denotes pixel coordinates, and N denotes the size of the label. The loss function measures the difference between the reconstructed high-resolution image output by the network and the original high-resolution image.
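As a concrete reading of the loss, a per-pixel mean absolute error over the N×N label is sketched below; the L1 (MAE) form and the 1/N² normalization are assumptions for illustration, chosen because they are common in lightweight super-resolution training.

```python
def sr_loss(X, Y):
    """Mean absolute error between label X and network output Y,
    both given as N×N lists of rows (an assumed L1 reading of the loss)."""
    n = len(X)
    diff = sum(abs(X[x][y] - Y[x][y]) for x in range(n) for y in range(n))
    return diff / (n * n)
```

During supervised batch training, this scalar would be averaged over the sub-images of a batch and minimized by gradient descent.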
In this embodiment, the 91-images data set is used as the training set and the Set5 data set as the validation set. The example image in the flowchart of fig. 1 comes from the Set5 data set; the original low-resolution image has size 252×252 and is divided into 64 sub-images, each of size 30×30. The high-resolution reconstructed image obtained by the method has size 504×504, with a single-channel test PSNR of 38.049; if the original picture is fed directly into the network for inference, the single-channel test PSNR is 38.076. The split-and-stitch processing of the invention therefore achieves inference quality on par with feeding the original image directly into the network.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (6)

1. A light-weight image super-resolution reconstruction method suitable for hardware deployment is characterized by comprising the following steps:
step 1, preprocessing data;
carrying out format conversion on the low-resolution image to be processed to obtain a low-quality YCbCr image to be processed; then dividing the low-quality YCbCr image to be processed into a plurality of sub-images;
step 2, constructing a light-weight image super-resolution reconstruction network and completing network training;
the lightweight image super-resolution reconstruction network comprises a feature pre-extraction block, a first depth-separable convolution block, a channel feature extraction block, a channel compression block, and an upsampling block, connected in sequence;
and step 3, inputting the sub-images from step 1 in sequence into the lightweight image super-resolution reconstruction network from step 2, the network outputting a reconstructed sub-image for each sub-image, and stitching all reconstructed sub-images in order to obtain the high-resolution reconstructed image.
2. The method for reconstructing the super-resolution lightweight images suitable for hardware deployment according to claim 1, wherein in the network for reconstructing the super-resolution lightweight images:
the feature pre-extraction block adopts convolution layer A 1
the first depth-separable convolution block is composed of convolutional layer A₂, channel-by-channel convolutional layer B₁, and convolutional layer A₃, connected in sequence;
the channel feature extraction block is formed by connecting 4 channel feature extraction sub-blocks in sequence, each composed of a second depth-separable convolution block and a channel feature squeeze block connected in sequence; the second depth-separable convolution block is composed of channel-by-channel convolutional layer B₂ and convolutional layer A₄ connected in sequence, and the output of convolutional layer A₄ is summed with the input of channel-by-channel convolutional layer B₂ to form the output of the second depth-separable convolution block; the channel feature squeeze block is composed of convolutional layers A₅, A₆, A₇ and a Softmax layer: the input of the channel feature squeeze block is passed through convolutional layers A₅ and A₆ in parallel; the output of convolutional layer A₅ passes through the Softmax layer and is multiplied with the output of convolutional layer A₆; the product passes through convolutional layer A₇ and is then added to the input of the channel feature squeeze block to form the output of the channel feature squeeze block;
the channel compression block adopts convolution layer A 8
The upsampling block is a PixelShuffle block.
3. The lightweight image super-resolution reconstruction method suitable for hardware deployment according to claim 2, characterized in that convolutional layer A₁ has kernel size 3×3, input channels 1, output channels 45, padding 1; convolutional layer A₂ has kernel size 1×1, input channels 45, output channels 36, padding 0; channel-by-channel convolutional layer B₁ has kernel size 3×3, input channels 36, output channels 36, padding 1; convolutional layer A₃ has kernel size 1×1, input channels 36, output channels 18, padding 0; channel-by-channel convolutional layer B₂ has kernel size 3×3, input channels 18, output channels 18, padding 1; convolutional layer A₄ has kernel size 1×1, input channels 18, output channels 18, padding 0; convolutional layers A₅, A₆, and A₇ share the same parameters: kernel size 1×1, input channels 18, output channels 18, padding 0; convolutional layer A₈ has kernel size 1×1, input channels 18, output channels Q², padding 0, where Q is the super-resolution factor.
4. The method for reconstructing the lightweight image super-resolution suitable for hardware deployment according to claim 1, wherein the training process of the lightweight image super-resolution reconstruction network is as follows:
an existing image data set is used as the training set; the original images in the training set are down-sampled to obtain low-resolution images; data preprocessing is performed on the original images and on the low-resolution images respectively to obtain sub-images of the original images and sub-images of the low-resolution images; the sub-images of the original images are used as labels and the sub-images of the low-resolution images are used as inputs, forming the training samples;
and setting a loss function and training parameters, and training the lightweight image super-resolution reconstruction network by adopting a supervised batch learning method.
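The training-pair construction above can be sketched as follows. The bicubic down-sampling mode and the top-left crop position are illustrative assumptions, since the claim fixes neither; `Q` is the super-resolution multiple and `N` the label sub-image size.

```python
import torch
import torch.nn.functional as F

def make_training_pair(original: torch.Tensor, Q: int = 2, N: int = 48):
    """Build one (input, label) training sample as the claim describes:
    down-sample the original image by factor Q to get the low-resolution
    image, then cut aligned sub-images -- the original's N x N sub-image
    is the label, the matching (N/Q) x (N/Q) low-resolution sub-image is
    the network input. Down-sampling mode and crop position are assumed."""
    lr = F.interpolate(original, scale_factor=1.0 / Q, mode="bicubic",
                       align_corners=False)
    label = original[..., :N, :N]        # N x N sub-image of the original image
    inp = lr[..., :N // Q, :N // Q]      # corresponding low-resolution sub-image
    return inp, label
```

The pairs are then fed to the network in mini-batches for supervised batch learning against the chosen loss function.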
5. The lightweight image super-resolution reconstruction method suitable for hardware deployment according to claim 4, wherein the loss function is:
[loss function equation: rendered as an image in the source publication and not reproduced in the extracted text]
where Loss represents the loss function, X represents the label, Y represents the output of the lightweight image super-resolution reconstruction network, (x, y) represents the pixel coordinates, and N represents the size of the label.
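Since the equation itself survives only as an image placeholder, the following is a plausible form consistent with the variable definitions above, not the patent's actual formula: an L1 (mean absolute error) loss averaged over the N × N label pixels is assumed here; an L2 form would fit the same definitions equally well.

```python
import torch

def sr_loss(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    """Pixel-wise loss between label X and network output Y, normalized by
    the N*N pixel count of the label. The L1 form is an assumption; the
    patent's equation is not reproduced in the source text."""
    N = X.shape[-1]
    return (X - Y).abs().sum() / (N * N)
```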
6. The lightweight image super-resolution reconstruction method suitable for hardware deployment according to claim 1, wherein the size of the sub-images is N × N, and N ranges from 30 to 60.
CN202211592921.2A 2022-12-13 2022-12-13 Lightweight image super-resolution reconstruction method suitable for hardware deployment Active CN115601242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211592921.2A CN115601242B (en) 2022-12-13 2022-12-13 Lightweight image super-resolution reconstruction method suitable for hardware deployment


Publications (2)

Publication Number Publication Date
CN115601242A true CN115601242A (en) 2023-01-13
CN115601242B CN115601242B (en) 2023-04-18

Family

ID=84854174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211592921.2A Active CN115601242B (en) 2022-12-13 2022-12-13 Lightweight image super-resolution reconstruction method suitable for hardware deployment

Country Status (1)

Country Link
CN (1) CN115601242B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657962A (en) * 2014-12-12 2015-05-27 西安电子科技大学 Image super-resolution reconstruction method based on cascading linear regression
WO2016019484A1 (en) * 2014-08-08 2016-02-11 Xiaoou Tang An apparatus and a method for providing super-resolution of a low-resolution image
US20170339431A1 (en) * 2016-05-23 2017-11-23 Massachusetts Institute of Technology System and Method for Providing Real-Time Super-Resolution for Compressed Videos
CN108765511A (en) * 2018-05-30 2018-11-06 重庆大学 Ultrasonoscopy super resolution ratio reconstruction method based on deep learning
CN110136067A (en) * 2019-05-27 2019-08-16 商丘师范学院 A kind of real-time imaging generation method for super-resolution B ultrasound image
CN110197468A (en) * 2019-06-06 2019-09-03 天津工业大学 A kind of single image Super-resolution Reconstruction algorithm based on multiple dimensioned residual error learning network
CN110378470A (en) * 2019-07-19 2019-10-25 Oppo广东移动通信有限公司 Optimization method, device and the computer storage medium of neural network model
CN111105352A (en) * 2019-12-16 2020-05-05 佛山科学技术学院 Super-resolution image reconstruction method, system, computer device and storage medium
CN111402126A (en) * 2020-02-15 2020-07-10 北京中科晶上科技股份有限公司 Video super-resolution method and system based on blocks
CN112348103A (en) * 2020-11-16 2021-02-09 南开大学 Image block classification method and device and super-resolution reconstruction method and device thereof
CN112734643A (en) * 2021-01-15 2021-04-30 重庆邮电大学 Lightweight image super-resolution reconstruction method based on cascade network
CN112861978A (en) * 2021-02-20 2021-05-28 齐齐哈尔大学 Multi-branch feature fusion remote sensing scene image classification method based on attention mechanism
CN112991203A (en) * 2021-03-08 2021-06-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113538234A (en) * 2021-06-29 2021-10-22 中国海洋大学 Remote sensing image super-resolution reconstruction method based on lightweight generation model
CN114331837A (en) * 2021-12-22 2022-04-12 国网安徽省电力有限公司 Method for processing and storing panoramic monitoring image of protection system of extra-high voltage converter station
CN114519667A (en) * 2022-01-10 2022-05-20 武汉图科智能科技有限公司 Image super-resolution reconstruction method and system
CN114612306A (en) * 2022-03-15 2022-06-10 北京工业大学 Deep learning super-resolution method for crack detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QI, TONGSHUAI: "Design and Implementation of an Image Super-Resolution FPGA *** Based on Software-Hardware Co-Optimization", China Master's Theses Full-text Database *
SUN, LONG: "Research on Lightweight Image Super-Resolution Algorithms Based on Deep Learning", China Master's Theses Full-text Database *

Also Published As

Publication number Publication date
CN115601242B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN112419151B (en) Image degradation processing method and device, storage medium and electronic equipment
CN107240066A (en) Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks
CN113034358B (en) Super-resolution image processing method and related device
CN107155110A (en) A kind of picture compression method based on super-resolution technique
CN109785252B (en) Night image enhancement method based on multi-scale residual error dense network
Luo et al. Lattice network for lightweight image restoration
CN112801904B (en) Hybrid degraded image enhancement method based on convolutional neural network
CN112017116B (en) Image super-resolution reconstruction network based on asymmetric convolution and construction method thereof
Chen et al. MICU: Image super-resolution via multi-level information compensation and U-net
CN110717868A (en) Video high dynamic range inverse tone mapping model construction and mapping method and device
CN112288630A (en) Super-resolution image reconstruction method and system based on improved wide-depth neural network
CN111105376A (en) Single-exposure high-dynamic-range image generation method based on double-branch neural network
CN115018708A (en) Airborne remote sensing image super-resolution reconstruction method based on multi-scale feature fusion
CN111951164A (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
CN113192147A (en) Method, system, storage medium, computer device and application for significance compression
CN111860363A (en) Video image processing method and device, electronic equipment and storage medium
Zhang et al. Multi-scale-based joint super-resolution and inverse tone-mapping with data synthesis for UHD HDR video
CN115546030B (en) Compressed video super-resolution method and system based on twin super-resolution network
CN117097853A (en) Real-time image matting method and system based on deep learning
CN116797462A (en) Real-time video super-resolution reconstruction method based on deep learning
Fu et al. Low-light image enhancement base on brightness attention mechanism generative adversarial networks
Li et al. RGSR: A two-step lossy JPG image super-resolution based on noise reduction
CN115601242B (en) Lightweight image super-resolution reconstruction method suitable for hardware deployment
WO2023206343A1 (en) Image super-resolution method based on image pre-training strategy
Wang et al. Image quality enhancement using hybrid attention networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant