CN113793263A - Multi-scale dilated convolution parallel residual network high-resolution image reconstruction method - Google Patents

Multi-scale dilated convolution parallel residual network high-resolution image reconstruction method

Info

Publication number
CN113793263A
CN113793263A
Authority
CN
China
Prior art keywords
convolution
resolution image
convolutional
layer
residual error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110967124.7A
Other languages
Chinese (zh)
Other versions
CN113793263B (en)
Inventor
仇傲
张伟
罗欣怡
李志鹏
李焱骏
师奕兵
郭一多
罗斌
谢雨洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110967124.7A priority Critical patent/CN113793263B/en
Publication of CN113793263A publication Critical patent/CN113793263A/en
Application granted granted Critical
Publication of CN113793263B publication Critical patent/CN113793263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging


Abstract

The invention discloses a parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution. Shallow features are first extracted by a convolutional layer of size 9 × 9 with 64 channels. A multi-scale dilated convolution block is then built by exploiting the property that dilated convolution enlarges the receptive field without increasing the parameter count. This block is combined with an ordinary 3 × 3 convolutional layer and a BN layer to form a residual block, and 16 residual blocks are connected in series to form a residual network. Features are non-linearly mapped through a multi-path parallel structure to obtain high-level features, and finally a sub-pixel convolutional layer rearranges the feature maps to produce the high-resolution image SR.

Description

Multi-scale dilated convolution parallel residual network high-resolution image reconstruction method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution.
Background
In the field of oil-well logging, around-borehole imaging logging is an important branch. It reflects the condition of a well through an intuitive borehole-wall image in which the development of cracks and holes can be clearly seen, making it an important means of well evaluation; the clarity of the borehole-wall image directly affects the logging personnel's assessment of the well. High-resolution reconstruction is one of the main research directions in image enhancement. Traditional methods such as bilinear and bicubic interpolation yield only modest improvements, whereas deep-learning-based super-resolution reconstruction is attracting more and more research because of its clear enhancement effect. For example, the SRCNN algorithm for image super-resolution upsamples with bicubic interpolation and then builds a three-layer neural network for feature learning and reconstruction.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution, so as to rapidly enhance well-logging images and help logging personnel better observe images and analyse downhole conditions in real time.
To achieve this object, the invention provides a parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution, characterised by comprising the following steps:
(1) acquiring and preprocessing an image;
acquiring a number of original high-resolution well-logging images with an around-borehole ultrasonic imager, cropping each to obtain high-resolution images HR of equal size, and down-sampling each high-resolution well-logging image by a factor n to obtain low-resolution images LR of size H × W, where n is the sampling multiple;
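As a sketch of this preprocessing step, assuming bicubic resampling (the patent only states "n-fold down-sampling" without naming the kernel), in PyTorch:

```python
import torch
import torch.nn.functional as F

def make_lr(hr: torch.Tensor, n: int) -> torch.Tensor:
    """Produce the LR input from an HR crop by n-fold down-sampling.
    Bicubic resampling is an assumption; the patent does not name
    the down-sampling kernel."""
    return F.interpolate(hr, scale_factor=1.0 / n, mode="bicubic",
                         align_corners=False)

hr = torch.rand(1, 3, 96, 96)   # one cropped HR well-logging image
lr = make_lr(hr, n=4)           # -> (1, 3, 24, 24), i.e. H x W = 24 x 24
```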
(2) constructing an image reconstruction network based on multi-scale dilated convolution and training it;
(2.1) extracting a feature map containing shallow features;
inputting the low-resolution image LR into convolutional layer v1 of size 9 × 9 with 64 channels and performing shallow feature extraction with a PReLU activation function, obtaining 64 feature maps of size H × W;
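A minimal PyTorch sketch of this shallow feature-extraction step; the padding value is an assumption made so that the H × W size is preserved, which the 64 output feature maps of size H × W imply:

```python
import torch
import torch.nn as nn

# v1: a 9x9 convolution with 64 output channels followed by PReLU.
# padding=4 (assumed) keeps the spatial size at H x W.
shallow = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=9, padding=4),  # v1
    nn.PReLU(),
)

lr = torch.rand(1, 3, 24, 24)   # one LR image, H = W = 24
feats = shallow(lr)             # 64 feature maps of size H x W
```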
(2.2) constructing two parallel residual networks;
(2.2.1) constructing a multi-scale dilated convolution block;
feature extraction is performed on the 64 feature maps simultaneously by 64 convolutional layers v2 with 3 × 3 kernels and 64 convolutional layers v3 with 3 × 3 kernels and dilation rate 2; the outputs of v2 and v3 are summed and fed through v2 and v3 again; the resulting outputs of v2 and v3 are fused with a 1 × 1 convolution, and the 64 input feature maps are then added directly, forming the multi-scale dilated convolution block;
(2.2.2) constructing a single-path residual network: a multi-scale dilated convolution block, a convolutional layer v4 with a 3 × 3 kernel and a normalization layer are connected in sequence, and the result is added to the 64 input feature maps to form a residual block; 16 residual blocks are connected in series to form a single-path residual network;
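Steps (2.2.1) and (2.2.2) can be sketched in PyTorch as follows; the concatenate-then-1 × 1 fusion and the padding values are assumptions, since the patent names only the kernel sizes and dilation rates:

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Step (2.2.1) sketch: v2 (3x3, dilation 1) and v3 (3x3, dilation 2)
    run in parallel; their sum passes through both branches again; the two
    outputs are fused by a 1x1 convolution (concatenation before the 1x1
    is an assumption, the patent says only "feature fusion"); the block
    input is then added back."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.v2 = nn.Conv2d(ch, ch, 3, padding=1)              # dilation 1
        self.v3 = nn.Conv2d(ch, ch, 3, padding=2, dilation=2)  # dilation 2
        self.fuse = nn.Conv2d(2 * ch, ch, 1)                   # 1x1 fusion

    def forward(self, x):
        s = self.v2(x) + self.v3(x)                  # first pass, summed
        y = torch.cat([self.v2(s), self.v3(s)], 1)   # second pass
        return self.fuse(y) + x                      # fuse + skip

class ResidualBlock(nn.Module):
    """Step (2.2.2) sketch: multi-scale block -> 3x3 conv (v4) -> BN,
    with a skip connection from the block input."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            MultiScaleDilatedBlock(ch),
            nn.Conv2d(ch, ch, 3, padding=1),  # v4
            nn.BatchNorm2d(ch),               # normalization layer
        )

    def forward(self, x):
        return self.body(x) + x

# One single-path residual network: 16 residual blocks in series.
branch = nn.Sequential(*[ResidualBlock() for _ in range(16)])
```

The second branch of step (2.2.3) has the same shape with 5 × 5 kernels in the multi-scale block.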
(2.2.3) constructing two parallel residual networks: features are simultaneously extracted by a convolutional layer v5 with 64 kernels of size 5 × 5 and a convolutional layer v6 with 64 kernels of size 5 × 5 and dilation rate 2; the outputs of v5 and v6 are summed and fed through v5 and v6 again, their outputs are fused with a 1 × 1 convolution, and the 64 input feature maps are added directly, constructing another multi-scale dilated convolution block; this block is then connected to a convolutional layer v4 with a 3 × 3 kernel and a normalization layer, and the 64 input feature maps are added to form another residual block; 16 such residual blocks are connected in series to form another single-path residual network;
the two single-path residual networks are connected to the convolutional layer of step (2.1) in parallel; the outputs of the two parallel residual networks of step (2.2) are fused with a 1 × 1 convolution, passed through convolutional layer v4 with a 3 × 3 kernel and a normalization layer, and added directly to the output of the convolutional layer of step (2.1), yielding 64 feature maps containing high-level features;
(2.3) reconstructing a high-resolution image;
inputting the 64 feature maps containing high-level features into convolutional layer v7 with 64 × n² channels to widen the channel count, and then into the sub-pixel convolutional layer, which combines the single pixels of many channel feature maps into one group of pixels on a single channel feature map, i.e. H × W × (r · n²) → (n·H) × (n·W) × r, where r is the number of channels after the final stage; finally, a convolutional layer v8 of size 9 × 9 with 3 channels outputs the reconstructed high-resolution image SR of size (n × H) × (n × W) with 3 channels;
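The channel-widening and sub-pixel rearrangement of step (2.3) map directly onto PyTorch's PixelShuffle. A sketch, where the 3 × 3 kernel for v7 and the padding values are assumptions (the patent does not state them) and n = 4 is the value used in the embodiment:

```python
import torch
import torch.nn as nn

n = 4  # sampling multiple (the embodiment's value)

reconstruct = nn.Sequential(
    nn.Conv2d(64, 64 * n * n, 3, padding=1),     # v7: widen to 64*n^2 channels
    nn.PixelShuffle(n),                          # H x W x (r*n^2) -> nH x nW x r
    nn.Conv2d(64, 3, kernel_size=9, padding=4),  # v8: 9x9, 3 output channels
)

feats = torch.rand(1, 64, 24, 24)   # 64 high-level feature maps
sr = reconstruct(feats)             # -> (1, 3, 96, 96)
```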
(2.4) calculating a loss function value;
calculating the pixel mean square error MSE between the reconstructed high-resolution image SR and the original high-resolution image HR, and taking the MSE as the loss function value;
MSE = (1 / (n·H × n·W)) · Σ_{i=1..n·H} Σ_{j=1..n·W} (SR(i, j) − HR(i, j))²
wherein SR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image SR, and HR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image HR;
(2.5) repeating steps (2.1)-(2.4) to continue training the image reconstruction network, optimizing the parameters with the Adam algorithm so as to minimize the MSE, finally obtaining the trained image reconstruction network model;
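A minimal training-loop sketch for step (2.5); the stand-in model, batch size and learning rate here are illustrative assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in with the same interface as the full model
# (LR batch in, SR batch out); the real network is the one of steps (2.1)-(2.3).
model = nn.Sequential(nn.Conv2d(3, 3 * 16, 3, padding=1), nn.PixelShuffle(4))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # rate assumed
loss_fn = nn.MSELoss()                                     # pixel-wise MSE

lr_batch = torch.rand(4, 3, 24, 24)   # LR crops
hr_batch = torch.rand(4, 3, 96, 96)   # matching HR crops

for _ in range(2):                    # a couple of illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(lr_batch), hr_batch)
    loss.backward()
    optimizer.step()
```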
(3) acquiring a well-logging image in real time and inputting it into the trained image reconstruction network to output the reconstructed high-resolution image.
The objects of the invention are realised as follows:
The parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution first extracts shallow features with a convolutional layer of size 9 × 9 and 64 channels; it then builds a multi-scale dilated convolution block by exploiting the property that dilated convolution enlarges the receptive field without changing the parameter count, combines this block with an ordinary 3 × 3 convolutional layer and a BN layer into a residual block, and connects 16 residual blocks in series into a residual network; features are non-linearly mapped through a multi-path parallel structure to obtain high-level features, and finally a sub-pixel convolutional layer rearranges the feature maps to produce the high-resolution image SR.
Meanwhile, the parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution has the following beneficial effects:
(1) dilated convolution enlarges the receptive field without changing the parameter count, so more global features are captured.
(2) the parallel network structure lets feature information at different scales complement one another.
(3) compared with traditional bicubic interpolation and the classical deep-learning super-resolution algorithms SRCNN, VDSR, SRResNet and the like, the multi-scale parallel network based on dilated convolution clearly improves the objective reconstruction indices PSNR and SSIM.
Drawings
FIG. 1 is a flow chart of the parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution;
FIG. 2 is a diagram of the around-borehole ultrasonic imager;
FIG. 3 shows the multi-scale dilated convolution block;
FIG. 4 shows the parallel network structure based on multi-scale dilated convolution blocks;
FIG. 5 analyses the mean indices for 4-fold reconstruction on the test set for the present algorithm and other classical algorithms;
FIG. 6 analyses the mean indices for 2-fold reconstruction on the test set for the present algorithm and other classical algorithms;
FIG. 7 compares the reconstruction effect of the present algorithm and other classical algorithms on several individual well-logging images.
Detailed Description
The following describes embodiments of the invention with reference to the accompanying drawings so that those skilled in the art can better understand it. Note that in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the subject matter of the invention.
Examples
In this embodiment, as shown in fig. 1, the parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution comprises the following steps:
s1, image acquisition and preprocessing;
As shown in fig. 2, the around-borehole ultrasonic imager comprises a surface control system and a downhole logging circuit system. During logging, the ultrasonic transducer probe is rotated through 360° by the motor transmission; for each revolution, the tooth sensor generates 250 pulses and the body-mark sensor generates 1 pulse. The main control module of the downhole logging circuit then shapes these signals: the shaped tooth signal triggers the FPGA to drive the transmitting circuit, generating high-voltage pulses that excite the ultrasonic transducer, while the shaped body-mark signal serves as the acquisition-cycle synchronization signal marking the start of a new revolution's acquisition.
The ultrasonic transducer collects full-waveform echo data, which is sent to the surface control system over the EDIB bus; the host computer receives the data converted through the USB port, parses it, extracts the echo amplitude and arrival-time data from the packets, and synthesizes the final borehole-wall image in software.
After actual logging, 127 images at resolution 140 × 140, 152 images at 180 × 180 and 860 images at 352 × 352 were obtained as the training set, and 366 images at 352 × 352, 265 at 180 × 180 and 30 at 140 × 140 as the test set;
the original training-set images are taken as the original high-resolution images; each is cropped to an original high-resolution image HR of size 96 × 96, which is then down-sampled four-fold to a low-resolution image LR of size H × W = 24 × 24 for network training.
S2, constructing an image reconstruction network based on multi-scale dilated convolution and training it;
s2.1, extracting a feature map containing shallow features;
in this embodiment, 20 LR images are drawn at random into the feature-extraction layer each time; each low-resolution image LR is input to convolutional layer v1 of size 9 × 9 with 64 channels, and shallow feature extraction is performed with a PReLU activation function, yielding 64 feature maps of size H × W;
S2.2, constructing two parallel residual networks;
S2.2.1, constructing a multi-scale dilated convolution block;
feature extraction is performed on the 64 feature maps simultaneously by 64 convolutional layers v2 with 3 × 3 kernels and 64 convolutional layers v3 with 3 × 3 kernels and dilation rate 2; the outputs of v2 and v3 are summed and fed through v2 and v3 again; the resulting outputs are fused with a 1 × 1 convolution, and the 64 input feature maps are added directly, forming the multi-scale dilated convolution block shown in FIG. 3;
S2.2.2, constructing a single-path residual network: a multi-scale dilated convolution block, a convolutional layer v4 with a 3 × 3 kernel and a normalization layer are connected in sequence, and the result is added to the 64 input feature maps to form a residual block; 16 residual blocks are connected in series to form a single-path residual network;
S2.2.3, as shown in fig. 4, constructing two parallel residual networks: features are simultaneously extracted by a convolutional layer v5 with 64 kernels of size 5 × 5 and a convolutional layer v6 with 64 kernels of size 5 × 5 and dilation rate 2; the outputs of v5 and v6 are summed and fed through v5 and v6 again, their outputs are fused with a 1 × 1 convolution, and the 64 input feature maps are added directly, constructing another multi-scale dilated convolution block; this block is then connected to a convolutional layer v4 with a 3 × 3 kernel and a normalization layer, and the 64 input feature maps are added to form another residual block; 16 such residual blocks are connected in series to form another single-path residual network;
the two single-path residual networks are connected to the convolutional layer of step S2.1 in parallel; the outputs of the two parallel residual networks of step S2.2 are fused with a 1 × 1 convolution, passed through convolutional layer v4 with a 3 × 3 kernel and a normalization layer, and added directly to the output of the convolutional layer of step S2.1, yielding 64 feature maps containing high-level features;
s2.3, reconstructing a high-resolution image;
the 64 feature maps containing high-level features are input to convolutional layer v7 with 64 × 4² channels to widen the channel count, and then to the sub-pixel convolutional layer, which combines the single pixels of many channel feature maps into one group of pixels on a single channel feature map, i.e. H × W × (r · 4²) → (4·H) × (4·W) × r; in this embodiment r, the number of channels after the final stage, is 64. Finally, a convolutional layer v8 of size 9 × 9 with 3 channels outputs the reconstructed high-resolution image SR of size (4 × H) × (4 × W) with 3 channels;
s2.4, calculating a loss function value;
calculating the pixel mean square error MSE between the reconstructed high-resolution image SR and the original high-resolution image HR, and taking the MSE as the loss function value;
MSE = (1 / (4·H × 4·W)) · Σ_{i=1..4·H} Σ_{j=1..4·W} (SR(i, j) − HR(i, j))²
wherein SR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image SR, and HR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image HR;
S2.5, repeating steps S2.1-S2.4 to continue training the image reconstruction network, optimizing the parameters with the Adam algorithm so as to minimize the MSE, finally obtaining the trained image reconstruction network model;
S3, acquiring a well-logging image in real time and inputting it into the trained image reconstruction network to output the reconstructed high-resolution image.
Verification
In this embodiment, the reconstructed high-resolution image (SR) is compared against the original high-resolution image (HR) using two indices commonly used in super-resolution work: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). PSNR compares images pixel by pixel; the higher the PSNR, the less distortion in the reconstructed image and the closer its pixels are to those of HR. The peak signal-to-noise ratio is calculated as follows:
PSNR = 10 · log₁₀( (2ⁿ − 1)² / MSE )
where n is the number of bits per pixel, and is typically 8.
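The PSNR formula above can be implemented directly; a sketch in NumPy:

```python
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, bits: int = 8) -> float:
    """PSNR per the formula above; bits is the number of bits per pixel
    (the 'n' in the formula), so the peak value is 2**bits - 1 = 255."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    return float(10.0 * np.log10((2.0 ** bits - 1.0) ** 2 / mse))
```

For instance, an 8-bit image that is off by exactly one grey level everywhere has MSE 1, giving 20 · log₁₀(255) ≈ 48.13 dB.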
SSIM compares the two images in terms of contrast, structure and luminance; the closer SSIM is to 1, the more similar the images and the better the reconstruction. The structural similarity is calculated as follows:
l(X, Y) = (2·μ_X·μ_Y + c₁) / (μ_X² + μ_Y² + c₁)
c(X, Y) = (2·σ_X·σ_Y + c₂) / (σ_X² + σ_Y² + c₂)
s(X, Y) = (σ_XY + c₃) / (σ_X·σ_Y + c₃)
SSIM(X, Y) = l(X, Y) · c(X, Y) · s(X, Y)
where X and Y denote the images SR and HR respectively; μ_X, μ_Y are their means; σ_X, σ_Y their standard deviations; σ_XY their covariance; and c₁, c₂, c₃ are constants.
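A single-window NumPy sketch of the SSIM formulas above. The constant values c₁ = (0.01·L)², c₂ = (0.03·L)², c₃ = c₂ / 2 are the conventional choices from the literature, which the patent does not specify, and practical implementations usually evaluate this over sliding local windows rather than the whole image:

```python
import numpy as np

def ssim_single(x: np.ndarray, y: np.ndarray, L: float = 255.0) -> float:
    """Single-window SSIM: luminance * contrast * structure terms, with
    conventional constants (assumed, not stated in the patent)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    c3 = c2 / 2.0
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    cov = ((x - mx) * (y - my)).mean()
    lum = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
    con = (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)
    struct = (cov + c3) / (sx * sy + c3)
    return float(lum * con * struct)
```

Comparing an image with itself gives SSIM exactly 1, matching the interpretation in the text.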
To verify the reconstruction effect, the proposed network is compared with the classical deep-learning super-resolution algorithms SRCNN, VDSR and SRResNet, with a variant of SRResNet whose convolutional layers are replaced by dilated convolutions of rate 2, and with a variant whose single-path residual network is replaced by a parallel two-path residual network; the mean PSNR and mean SSIM for 4-fold reconstruction on the test set are shown in FIG. 5, and the corresponding comparison for 2-fold reconstruction in FIG. 6. The figures show that both the multi-scale dilated convolution block and the parallel network structure are effective, and that the proposed parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution improves the reconstruction effect.
As shown in fig. 7, the algorithm is also compared with traditional bicubic interpolation and the deep-learning methods SRCNN, VDSR, ESPCN and SRResNet on four randomly selected well-logging images from the test set, both on the objective indices PSNR and SSIM and by subjective visual comparison.
Although illustrative embodiments of the invention have been described above to help those skilled in the art understand it, the invention is not limited to the scope of these embodiments. Various changes will be apparent to those skilled in the art and remain within the spirit and scope of the invention as defined by the appended claims; all matter utilizing the inventive concept is protected.

Claims (1)

1. A parallel residual network high-resolution image reconstruction method based on multi-scale dilated convolution, characterised by comprising the following steps:
(1) acquiring and preprocessing an image;
acquiring a number of original high-resolution well-logging images with an around-borehole ultrasonic imager, cropping each to obtain high-resolution images HR of equal size, and down-sampling each high-resolution well-logging image by a factor n to obtain low-resolution images LR of size H × W, where n is the sampling multiple;
(2) constructing an image reconstruction network based on multi-scale dilated convolution and training it;
(2.1) extracting a feature map containing shallow features;
inputting the low-resolution image LR into convolutional layer v1 of size 9 × 9 with 64 channels and performing shallow feature extraction with a PReLU activation function, obtaining 64 feature maps of size H × W;
(2.2) constructing two parallel residual networks;
(2.2.1) constructing a multi-scale dilated convolution block;
feature extraction is performed on the 64 feature maps simultaneously by 64 convolutional layers v2 with 3 × 3 kernels and 64 convolutional layers v3 with 3 × 3 kernels and dilation rate 2; the outputs of v2 and v3 are summed and fed through v2 and v3 again; the resulting outputs of v2 and v3 are fused with a 1 × 1 convolution, and the 64 input feature maps are then added directly, forming the multi-scale dilated convolution block;
(2.2.2) constructing a single-path residual network: a multi-scale dilated convolution block, a convolutional layer v4 with a 3 × 3 kernel and a normalization layer are connected in sequence, and the result is added to the 64 input feature maps to form a residual block; 16 residual blocks are connected in series to form a single-path residual network;
(2.2.3) constructing two parallel residual networks: features are simultaneously extracted by a convolutional layer v5 with 64 kernels of size 5 × 5 and a convolutional layer v6 with 64 kernels of size 5 × 5 and dilation rate 2; the outputs of v5 and v6 are summed and fed through v5 and v6 again, their outputs are fused with a 1 × 1 convolution, and the 64 input feature maps are added directly, constructing another multi-scale dilated convolution block; this block is then connected to a convolutional layer v4 with a 3 × 3 kernel and a normalization layer, and the 64 input feature maps are added to form another residual block; 16 such residual blocks are connected in series to form another single-path residual network;
the two single-path residual networks are connected to the convolutional layer of step (2.1) in parallel; the outputs of the two parallel residual networks of step (2.2) are fused with a 1 × 1 convolution, passed through convolutional layer v4 with a 3 × 3 kernel and a normalization layer, and added directly to the output of the convolutional layer of step (2.1), yielding 64 feature maps containing high-level features;
(2.3) reconstructing a high-resolution image;
inputting the 64 feature maps containing high-level features into convolutional layer v7 with 64 × n² channels to widen the channel count, and then into the sub-pixel convolutional layer, which combines the single pixels of many channel feature maps into one group of pixels on a single channel feature map, i.e. H × W × (r · n²) → (n·H) × (n·W) × r, where r is the number of channels after the final stage; finally, a convolutional layer v8 of size 9 × 9 with 3 channels outputs the reconstructed high-resolution image SR of size (n × H) × (n × W) with 3 channels;
(2.4) calculating a loss function value;
calculating the pixel mean square error MSE between the reconstructed high-resolution image SR and the original high-resolution image HR, and taking the MSE as the loss function value;
MSE = (1 / (n·H × n·W)) · Σ_{i=1..n·H} Σ_{j=1..n·W} (SR(i, j) − HR(i, j))²
wherein SR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image SR, and HR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image HR;
(2.5) repeating steps (2.1)-(2.4) to continue training the image reconstruction network, optimizing the parameters with the Adam algorithm so as to minimize the MSE, finally obtaining the trained image reconstruction network model;
(3) acquiring a well-logging image in real time and inputting it into the trained image reconstruction network to output the reconstructed high-resolution image.
CN202110967124.7A 2021-08-23 2021-08-23 Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution Active CN113793263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110967124.7A CN113793263B (en) 2021-08-23 2021-08-23 Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110967124.7A CN113793263B (en) 2021-08-23 2021-08-23 Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution

Publications (2)

Publication Number Publication Date
CN113793263A true CN113793263A (en) 2021-12-14
CN113793263B CN113793263B (en) 2023-04-07

Family

ID=78876216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110967124.7A Active CN113793263B (en) 2021-08-23 2021-08-23 Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution

Country Status (1)

Country Link
CN (1) CN113793263B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025118A (en) * 2022-01-06 2022-02-08 广东电网有限责任公司中山供电局 Low-bit-rate video reconstruction method and system, electronic equipment and storage medium
CN114529519A (en) * 2022-01-25 2022-05-24 河南大学 Image compressed sensing reconstruction method and system based on multi-scale depth cavity residual error network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097512A (en) * 2019-04-16 2019-08-06 四川大学 Construction method and the application of the three-dimensional MRI image denoising model of confrontation network are generated based on Wasserstein
CN110211038A (en) * 2019-04-29 2019-09-06 南京航空航天大学 Super resolution ratio reconstruction method based on dirac residual error deep neural network
CN110232653A (en) * 2018-12-12 2019-09-13 天津大学青岛海洋技术研究院 The quick light-duty intensive residual error network of super-resolution rebuilding
CN110930306A (en) * 2019-10-28 2020-03-27 杭州电子科技大学 Depth map super-resolution reconstruction network construction method based on non-local perception
CN111047515A (en) * 2019-12-29 2020-04-21 兰州理工大学 Cavity convolution neural network image super-resolution reconstruction method based on attention mechanism
US20210239618A1 (en) * 2020-01-30 2021-08-05 Trustees Of Boston University High-speed delay scanning and deep learning techniques for spectroscopic srs imaging

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232653A (en) * 2018-12-12 2019-09-13 天津大学青岛海洋技术研究院 The quick light-duty intensive residual error network of super-resolution rebuilding
CN110097512A (en) * 2019-04-16 2019-08-06 四川大学 Construction method and the application of the three-dimensional MRI image denoising model of confrontation network are generated based on Wasserstein
CN110211038A (en) * 2019-04-29 2019-09-06 南京航空航天大学 Super resolution ratio reconstruction method based on dirac residual error deep neural network
CN110930306A (en) * 2019-10-28 2020-03-27 杭州电子科技大学 Depth map super-resolution reconstruction network construction method based on non-local perception
CN111047515A (en) * 2019-12-29 2020-04-21 兰州理工大学 Cavity convolution neural network image super-resolution reconstruction method based on attention mechanism
US20210239618A1 (en) * 2020-01-30 2021-08-05 Trustees Of Boston University High-speed delay scanning and deep learning techniques for spectroscopic srs imaging

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUANGYANG WU et al.: "PRED: A Parallel Network for Handling Multiple Degradations via Single Model in Single Image Super-Resolution", online publication: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8804409 *
YANG Weiming et al.: "Image super-resolution reconstruction based on parallel residual convolutional networks", Journal of Air Force Engineering University (Natural Science Edition) *
FAN Fan et al.: "Multi-scale super-resolution reconstruction of abdominal MRI images based on a parallel channel-spatial attention mechanism", Journal of Computer Applications *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025118A (en) * 2022-01-06 2022-02-08 广东电网有限责任公司中山供电局 Low-bit-rate video reconstruction method and system, electronic equipment and storage medium
CN114529519A (en) * 2022-01-25 2022-05-24 河南大学 Image compressed sensing reconstruction method and system based on multi-scale depth cavity residual error network
CN114529519B (en) * 2022-01-25 2024-07-12 河南大学 Image compressed sensing reconstruction method and system based on multi-scale depth cavity residual error network

Also Published As

Publication number Publication date
CN113793263B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111127374B (en) Pan-sharing method based on multi-scale dense network
WO2021056969A1 (en) Super-resolution image reconstruction method and device
CN113793263B (en) Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution
CN112734646B (en) Image super-resolution reconstruction method based on feature channel division
CN109214989B (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN110310227A (en) A kind of image super-resolution rebuilding method decomposed based on high and low frequency information
CN109712077B (en) Depth dictionary learning-based HARDI (hybrid automatic repeat-based) compressed sensing super-resolution reconstruction method
CN113379601A (en) Real world image super-resolution method and system based on degradation variational self-encoder
CN110533591B (en) Super-resolution image reconstruction method based on codec structure
CN111178499B (en) Medical image super-resolution method based on generation countermeasure network improvement
CN111833261A (en) Image super-resolution restoration method for generating countermeasure network based on attention
CN111667407A (en) Image super-resolution method guided by depth information
CN111353935A (en) Magnetic resonance imaging optimization method and device based on deep learning
CN115880158A (en) Blind image super-resolution reconstruction method and system based on variational self-coding
CN113284046A (en) Remote sensing image enhancement and restoration method and network based on no high-resolution reference image
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
CN112365505A (en) Lightweight tongue body segmentation method based on coding and decoding structure
CN117455770A (en) Lightweight image super-resolution method based on layer-by-layer context information aggregation network
CN102298768B (en) High-resolution image reconstruction method based on sparse samples
CN108267311A (en) A kind of mechanical multidimensional big data processing method based on tensor resolution
CN117173022A (en) Remote sensing image super-resolution reconstruction method based on multipath fusion and attention
CN116664587A (en) Pseudo-color enhancement-based mixed attention UNet ultrasonic image segmentation method and device
CN115984116A (en) Super-resolution method based on remote sensing image degradation
CN114998137A (en) Ground penetrating radar image clutter suppression method based on generation countermeasure network
CN111538003B (en) Single-bit compressed sampling synthetic aperture radar imaging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant