CN114187181B - Dual-path lung CT image super-resolution method based on residual information refining - Google Patents

Dual-path lung CT image super-resolution method based on residual information refining Download PDF

Info

Publication number
CN114187181B
CN114187181B CN202111549446.6A CN202111549446A CN114187181B CN 114187181 B CN114187181 B CN 114187181B CN 202111549446 A CN202111549446 A CN 202111549446A CN 114187181 B CN114187181 B CN 114187181B
Authority
CN
China
Prior art keywords
image
convolution layer
residual
feature
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111549446.6A
Other languages
Chinese (zh)
Other versions
CN114187181A (en)
Inventor
郑茜颖
陈伊涵
程树英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202111549446.6A priority Critical patent/CN114187181B/en
Publication of CN114187181A publication Critical patent/CN114187181A/en
Application granted granted Critical
Publication of CN114187181B publication Critical patent/CN114187181B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dual-path lung CT image super-resolution method based on residual information refining, comprising a low-frequency information extraction path and a high-frequency information extraction path. The low-frequency information extraction path is used for extracting low-frequency features of the low-resolution image; the high-frequency information extraction path comprises a residual information refining module RIDM and a gradient identity path and is used for extracting high-frequency features of the image. The residual information refining module comprises a plurality of information refining modules IDB and a plurality of residual modules, wherein the residual modules are connected in cascade, the IDBs are parallel to each other, and each IDB takes the output feature of the previous IDB and the output feature of the corresponding residual module as input. The constructed image super-resolution reconstruction model effectively separates the high-frequency and low-frequency information of the image so that the super-resolution task is more targeted, improves the feature learning capacity of the network, retains more image edge details, and improves the image super-resolution reconstruction quality.

Description

Dual-path lung CT image super-resolution method based on residual information refining
Technical Field
The invention relates to the technical field of image processing, in particular to a dual-path lung CT image super-resolution method based on residual information refining.
Background
With the rapid development of medical imaging and intelligent diagnosis technologies, artificial intelligence methods have become a research hotspot in radiological image processing in recent years. CT (Computed Tomography) uses precisely collimated X-ray beams, gamma rays, ultrasonic waves and the like, together with highly sensitive detectors, to scan cross-sections of a given part of the human body; it offers short scanning times and clear images and can be used to examine a variety of diseases. When CT images are used for medical diagnosis, image definition affects the physician's judgment: high-resolution images provide richer pathological information and improve the reliability of diagnosis. In CT imaging, acquiring high-resolution images requires longer scan times and higher signal-to-noise ratios. In some cases, however, it is difficult for the patient to remain stationary for a long period of time, and a high signal-to-noise ratio requires more elaborate instrumentation, which increases imaging costs. To shorten the scanning time, the scanning slice thickness is usually enlarged, but this lowers the resolution and ultimately limits the later processing of images and the analysis and diagnosis of diseases.
Although deep learning-based image super-resolution methods have developed rapidly in recent years, these algorithms often perform poorly on real-world data, so there is still much room for research in this direction. Furthermore, most models are trained on generic datasets, and their super-resolution effect on CT images is often unsatisfactory. Medical diagnosis requires both fast image processing and good image reconstruction, and existing super-resolution models cannot achieve both well.
Disclosure of Invention
Therefore, the invention aims to provide a dual-path lung CT image super-resolution method based on residual information refining, so as to improve the quality of lung CT image super-resolution reconstruction.
In order to achieve the above purpose, the technical scheme of the invention is as follows: the dual-path lung CT image super-resolution method based on residual information refining comprises an image super-resolution reconstruction model, wherein the image super-resolution reconstruction model comprises a low-frequency information extraction path and a high-frequency information extraction path, and the high-frequency information extraction path comprises a residual information refining module RIDM and a gradient identity path; the method further comprises the following steps:
Step S1: a training set is established according to the image degradation model, and N low-resolution images I_LR and the corresponding real high-resolution images I_HR are obtained, wherein N is an integer greater than 1;
Step S2: inputting the low-resolution image I_LR obtained in step S1 into the low-frequency information extraction path to extract the low-frequency features of the image;
Step S3: extracting shallow features from the low-resolution image I_LR obtained in step S1 by using a convolution layer, and extracting the residual high-frequency features of the image by using the residual information refining module RIDM in the high-frequency information extraction path;
Step S4: extracting the edge features of the image from the low-resolution image I_LR obtained in step S1 by using the gradient identity path in the high-frequency information extraction path;
Step S5: adding the residual high-frequency features obtained in step S3 and the edge features obtained in step S4 to obtain the high-frequency features, adding the high-frequency features and the low-frequency features obtained in step S2 to obtain the deep features, completing the up-sampling processing by using sub-pixel convolution, and reconstructing the final high-resolution image I_HR;
Step S6: optimizing the image super-resolution reconstruction model through a loss function.
Further, the training set in step S1 is established as follows: the original image is decomposed with a sliding window into a number of small high-resolution image patches of size 64×64, and a bicubic interpolation algorithm is used to downsample the small high-resolution image patches into small low-resolution image patches of size (64/r)×(64/r), where r is the scale factor; the pairs of small high-resolution and low-resolution image patches are used as the training set.
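A minimal sketch of this patch-pair construction is given below; it assumes OpenCV-style NumPy images, and the function and parameter names (make_training_pairs, stride, scale) are illustrative rather than taken from the patent.

```python
# Sketch of sliding-window patch extraction with bicubic downsampling (assumed helper, not the patent's code).
import cv2
import numpy as np

def make_training_pairs(hr_image: np.ndarray, patch_size: int = 64,
                        stride: int = 64, scale: int = 4):
    """Slide a window over one HR image and build (LR, HR) patch pairs."""
    pairs = []
    h, w = hr_image.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            hr_patch = hr_image[y:y + patch_size, x:x + patch_size]
            lr_size = patch_size // scale                      # e.g. 64/4 = 16
            lr_patch = cv2.resize(hr_patch, (lr_size, lr_size),
                                  interpolation=cv2.INTER_CUBIC)  # bicubic downsampling
            pairs.append((lr_patch, hr_patch))
    return pairs
```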
Further, the low-frequency information extraction path in step S2 includes a first convolution layer, a second convolution layer and a third convolution layer, where the first convolution layer uses a large 5×5 convolution kernel to extract a shallow feature F_0; the second convolution layer uses a 3×3 convolution kernel to further extract deeper features; the third convolution layer uses a 1×1 convolution kernel to increase the nonlinearity of the features and realize cross-channel interaction among the features and feature dimension reduction; and a first ReLU activation layer is added after each of the first, second and third convolution layers to increase the nonlinearity of the features.
Further, the residual information refining module RIDM in step S3 includes a plurality of information refining modules IDB and a plurality of residual modules. The residual information refining module RIDM takes the shallow feature F_0 as input and feeds it into the first residual module and the first information refining module IDB respectively; the residual modules are used for extracting deeper features, the first residual module outputs a first output feature F_1, and the second residual module takes the output of the previous residual module as input to obtain a second output feature F_2. The first information refining module IDB refines the shallow feature F_0 and the first output feature F_1 and outputs a first fine feature F_r1 and a first coarse feature F_d1; the second output feature F_2 and the first coarse feature F_d1 are used as inputs of the second information refining module IDB, which outputs a second fine feature F_r2 and a second coarse feature F_d2. The above operations are repeated several times to obtain a third fine feature F_r3, a fourth fine feature F_r4 and a fifth fine feature F_r5, as well as a third output feature F_3, a fourth output feature F_4, a fifth output feature F_5 and a sixth output feature F_6. The fine features retained by each information refining module IDB and the feature output by the last residual module are concatenated along the channel dimension and reduced in dimension through a 1×1 convolution layer to obtain the output feature F_M of the residual information refining module RIDM:
F_M = f_1×1([F_r1, F_r2, F_r3, F_r4, F_r5, F_6]) + F_0
where f_1×1 denotes a convolution layer with a kernel size of 1×1.
Further, the information refining module IDB includes a fourth convolution layer and a fifth convolution layer connected in parallel; the fourth convolution layer is specifically a 1×1 convolution layer and the fifth convolution layer is specifically a 3×3 convolution layer; the fourth convolution layer is used to extract the coarse features passed on for subsequent refining, and the fifth convolution layer is used to extract the refined fine features of the input.
Further, the residual module includes a sixth convolution layer, a seventh convolution layer and a second ReLU activation layer, where the second ReLU activation layer is disposed between the sixth convolution layer and the seventh convolution layer; the output feature is added to the input feature and the sum is output to the next module.
Further, the gradient identity path in step S4 includes an edge extraction layer and an edge feature extraction layer; the edge extraction layer uses a Sobel edge extraction operator to extract an edge map of the image; the edge feature extraction layer extracts edge features of the edge map by using a 3×3 convolution layer; and the edge features are added to the output of the residual information refining module RIDM, followed by feature fusion and dimension reduction through a 1×1 convolution layer.
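A minimal PyTorch sketch of such a gradient identity path is shown below; it assumes a single-channel (grayscale CT) input and 64 feature channels, and the class and attribute names (GradientIdentityPath, edge_feat, fuse) as well as the gradient-magnitude combination of the two Sobel responses are assumptions of the sketch rather than details fixed by the patent.

```python
# Sketch of the gradient identity path: Sobel edge map -> 3x3 conv -> add to RIDM output -> 1x1 fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientIdentityPath(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # fixed Sobel kernels for horizontal / vertical gradients
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        ky = kx.t()
        self.register_buffer("sobel", torch.stack([kx, ky]).unsqueeze(1))  # shape (2, 1, 3, 3)
        self.edge_feat = nn.Conv2d(1, channels, 3, padding=1)   # edge feature extraction layer
        self.fuse = nn.Conv2d(channels, channels, 1)            # 1x1 fusion / dimension reduction

    def forward(self, x_lr: torch.Tensor, f_ridm: torch.Tensor) -> torch.Tensor:
        grad = F.conv2d(x_lr, self.sobel, padding=1)             # two gradient maps
        edge_map = torch.sqrt(grad[:, :1] ** 2 + grad[:, 1:] ** 2 + 1e-8)  # edge map I_g
        f_g = self.edge_feat(edge_map)                           # edge features F_g
        return self.fuse(f_ridm + f_g)                           # fused high-frequency feature F_H
```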
Further, the loss function in step S6 uses the average L1 error between the N reconstructed high-resolution images and the corresponding real high-resolution images, with the expression:
L = (1/N) Σ_{i=1}^{N} ||I_HR^(i) − I_SR^(i)||_1
where L represents the loss function, I_HR is the real high-resolution image, and I_SR is the output image of the network.
Compared with the prior art, the invention has the following beneficial effects: the invention provides a dual-path network with residual information distillation, which processes the high-frequency and low-frequency information of an image separately, extracts deep high-frequency features by using a residual information distillation module, and constrains the high-frequency information according to the image edge map, thereby improving the reconstruction of the high-frequency details of the image and further improving the super-resolution reconstruction quality of the image.
Drawings
FIG. 1 is a schematic diagram of a topology structure of an image super-resolution reconstruction model provided by the invention;
Fig. 2 is a schematic topology diagram of a residual information refining module provided by the present invention;
Fig. 3 is a schematic topology diagram of an information refining module according to the present invention;
Fig. 4 is a schematic topology diagram of a residual module according to the present invention;
FIG. 5 is a schematic flow chart of a dual-path lung CT image super-resolution method based on residual information refining provided by the invention;
FIG. 6 is a graph of the algorithm comparison results at ×3 magnification for the test image of example 1 of the present invention;
Fig. 7 is a graph of the algorithm comparison results at ×4 magnification for the test image of example 2 of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Referring to fig. 1, fig. 1 is a schematic topology diagram of a lung CT image super-resolution reconstruction model based on residual information refinement according to an embodiment of the present invention.
According to the applicant's research, existing image super-resolution reconstruction methods tend to adopt attention mechanisms to improve the expressive capacity of the network and obtain higher evaluation indexes, but they do not consider separating the high-frequency and low-frequency information of the image for super-resolution, so the handling of detailed features during image reconstruction still needs improvement. The embodiment of the invention therefore provides a dual-path lung CT image super-resolution reconstruction model based on residual information refining to solve these problems.
In one implementation, the image super-resolution reconstruction model provided by the embodiment of the invention comprises a low-frequency information extraction path, a high-frequency information extraction path and a reconstruction module. The low-frequency information extraction path is used for extracting the low-frequency features of the low-resolution image. The high-frequency information extraction path comprises a residual information refining module RIDM and a gradient identity path, and is used for extracting the residual high-frequency features of the image. The residual information refining module comprises a plurality of information refining modules IDB and a plurality of residual modules, wherein the residual modules are connected in cascade, the information refining modules IDB are parallel to each other, and each information refining module IDB takes the output features of the previous information refining module IDB and the output features of the corresponding residual module as input. The residual information refining module RIDM refines the extracted features layer by layer through the interaction between the information refining modules IDB and the residual modules, then fuses all the refined features to obtain a fused feature with rich information, and finally the low-frequency features and the high-frequency features are fused to obtain the corresponding deep features. The reconstruction module is used for performing up-sampling and feature reconstruction according to the deep features and outputting the final high-resolution image.
As shown in fig. 1, in the above implementation, the low-frequency information extraction path includes three n×n convolution layers (n×n Conv), where n is an odd number greater than 1; the gradient identity path uses a Sobel operator and an n×n convolution layer (n×n Conv) to extract the features of the image edge map; a residual adder is connected between the residual information refining module RIDM and the gradient identity path, ensuring that the network concentrates on learning the high-frequency residual features. The reconstruction module comprises a sub-pixel convolution layer (PixelShuffle) and an n×n convolution layer (n×n Conv), which are used for up-sampling and reconstructing the fused deep features and outputting the final high-resolution image. Specifically, the low-frequency information extraction path in step S2 includes a first convolution layer, a second convolution layer and a third convolution layer, where the first convolution layer uses a large 5×5 convolution kernel to extract a shallow feature F_0; the second convolution layer uses a 3×3 convolution kernel to further extract deeper features; the third convolution layer uses a 1×1 convolution kernel to increase the nonlinearity of the features and realize cross-channel interaction among the features and feature dimension reduction; and a first ReLU activation layer is added after each of the first, second and third convolution layers to increase the nonlinearity of the features.
In the implementation process, the gradient identity path comprises an edge extraction layer and an edge feature extraction layer; the edge extraction layer uses a Sobel edge extraction operator to extract an edge map of the image; the edge feature extraction layer extracts the features of the edge map by using a 3×3 convolution layer; the edge feature map is added to the output of the residual information refining module RIDM, followed by feature fusion and dimension reduction through a 1×1 convolution layer; the process can be expressed as:
F_M = f_RIDM(f_1(I_LR))
I_g = f_g(I_LR)
F_g = f_2(I_g)
F_H = f_3(F_M + F_g)
where I_g and F_g represent the extracted edge map of the image and the features extracted from the edge map, respectively; f_1, f_2 and f_3 are the three convolution layers used here for feature extraction; f_RIDM is the function of the residual information refining module RIDM; and f_g represents the Sobel operator.
Referring to fig. 2 and fig. 3, fig. 2 is a schematic topology diagram of a residual information refining module according to an embodiment of the present invention; fig. 3 is a schematic diagram of a topology structure of an information refining module according to an embodiment of the present invention.
In one embodiment, the residual information refining module comprises a plurality of information refining modules IDB and a plurality of residual modules, wherein the residual modules are connected in cascade and the IDBs run in parallel with them. The residual information refining module takes the shallow feature F_0 as input and feeds it into the first residual module and the first information refining module IDB respectively; the residual modules are used for extracting deeper features, the first residual module outputs a first output feature F_1, and the second residual module takes the output of the previous residual module as input to obtain a second output feature F_2. The first information refining module IDB refines the shallow feature F_0 and the first output feature F_1 and outputs a first fine feature F_r1 and a first coarse feature F_d1; the second output feature F_2 and the first coarse feature F_d1 are used as inputs of the second information refining module IDB, which outputs a second fine feature F_r2 and a second coarse feature F_d2. The above operations are repeated several times to obtain a third fine feature F_r3, a fourth fine feature F_r4 and a fifth fine feature F_r5, as well as a third output feature F_3, a fourth output feature F_4, a fifth output feature F_5 and a sixth output feature F_6. The fine features retained by each information refining module IDB and the feature output by the last residual module are concatenated along the channel dimension and reduced in dimension through a 1×1 convolution layer to obtain the output feature F_M of the residual information refining module RIDM:
F_M = f_1×1([F_r1, F_r2, F_r3, F_r4, F_r5, F_6]) + F_0
where f_1×1 denotes a convolution layer with a kernel size of 1×1.
As shown in fig. 3, in the above implementation, each information refining module IDB includes a fourth convolution layer and a fifth convolution layer connected in parallel; the fourth convolution layer is specifically a 1×1 convolution layer and the fifth convolution layer is specifically a 3×3 convolution layer; the fourth convolution layer is used to extract the coarse feature F_d passed on for subsequent refining, and the fifth convolution layer is used to extract the refined fine feature F_r; the process can be expressed as:
F_d = f_1×1([X_1, X_2])
F_r = f_3×3([X_1, X_2])
where X_1 and X_2 represent the two input image features, and f_1×1 and f_3×3 represent the functions of the fourth and fifth convolution layers, respectively.
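A minimal PyTorch sketch of one such IDB is shown below, assuming both inputs carry the same number of feature channels; the class and attribute names (IDB, coarse, fine) are illustrative.

```python
# Sketch of an information distillation block: a 1x1 "coarse" branch and a 3x3 "fine" branch on the concatenated inputs.
import torch
import torch.nn as nn

class IDB(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.coarse = nn.Conv2d(2 * channels, channels, 1)            # fourth (1x1) layer -> F_d
        self.fine = nn.Conv2d(2 * channels, channels, 3, padding=1)   # fifth (3x3) layer -> F_r

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        x = torch.cat([x1, x2], dim=1)           # [X_1, X_2] channel concatenation
        return self.fine(x), self.coarse(x)      # (fine feature F_r, coarse feature F_d)
```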
Referring to fig. 4, fig. 4 is a schematic topological structure diagram of a residual module according to an embodiment of the invention.
As shown in fig. 4, in one embodiment, the residual module includes a sixth convolution layer, a seventh convolution layer, and a second ReLU activation layer, configured as "sixth convolution layer-second ReLU activation layer-seventh convolution layer", and adds the output features to the input features and outputs to the next module.
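Putting the residual module of fig. 4 together with the IDB sketched above, the layer-by-layer wiring of the RIDM described with reference to fig. 2 could look roughly as follows. The channel width, the count of five IDBs plus six residual modules (read off the formula F_M = f_1×1([F_r1, …, F_r5, F_6]) + F_0), and the argument order passed to each IDB are assumptions of this sketch, and the IDB class is assumed to be defined as in the previous sketch.

```python
# Sketch of the residual module and the RIDM wiring; reuses the IDB class sketched above.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sixth conv -> second ReLU -> seventh conv, plus an identity skip connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x) + x                   # add output and input, pass to the next module

class RIDM(nn.Module):
    """Cascaded residual modules interleaved with parallel IDBs."""
    def __init__(self, channels: int = 64, n_idb: int = 5):
        super().__init__()
        self.res_blocks = nn.ModuleList([ResidualBlock(channels) for _ in range(n_idb + 1)])
        self.idbs = nn.ModuleList([IDB(channels) for _ in range(n_idb)])
        self.fuse = nn.Conv2d((n_idb + 1) * channels, channels, 1)    # 1x1 dimension reduction

    def forward(self, f0):
        fines = []
        f_prev, coarse = f0, f0                   # the first IDB refines (F_0, F_1)
        for res, idb in zip(self.res_blocks[:-1], self.idbs):
            f_cur = res(f_prev)                   # F_1, F_2, ..., F_5
            fine, coarse = idb(f_cur, coarse)     # fine feature kept for fusion, coarse passed on
            fines.append(fine)
            f_prev = f_cur
        f_last = self.res_blocks[-1](f_prev)      # F_6 from the last residual module
        return self.fuse(torch.cat(fines + [f_last], dim=1)) + f0    # output feature F_M
```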
In an implementation manner, the embodiment of the present invention further provides an image super-resolution reconstruction method applied to the image super-resolution reconstruction model, and details thereof are described below with reference to fig. 5.
S1, building a training set according to the image degradation model to obtain N low-resolution images I_LR and the corresponding real high-resolution images I_HR; wherein N is an integer greater than 1.
In particular, the training set may be represented as {(I_LR^(i), I_HR^(i))}_{i=1}^{N}, where i denotes the i-th pair of low- and high-resolution images.
Step S2: the low-resolution image is input to a low-frequency information extraction path to extract low-frequency features of the image, expressed as follows:
F_L = f_1×1(f_3×3(f_5×5(I_LR)))
where I_LR denotes the input low-resolution image, F_L denotes the low-frequency feature output by the low-frequency information extraction path, and f_1×1, f_3×3 and f_5×5 denote the functions of convolution layers with different kernel sizes.
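A minimal PyTorch sketch of this low-frequency path (5×5 → 3×3 → 1×1, each followed by a ReLU as described above) is shown below; the channel widths and the class name LowFrequencyPath are assumptions of the sketch.

```python
# Sketch of the low-frequency information extraction path.
import torch.nn as nn

class LowFrequencyPath(nn.Module):
    def __init__(self, in_channels: int = 1, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, channels, 5, padding=2), nn.ReLU(inplace=True),  # shallow feature
            nn.Conv2d(channels, channels, 3, padding=1),    nn.ReLU(inplace=True),  # deeper feature
            nn.Conv2d(channels, channels, 1),               nn.ReLU(inplace=True),  # cross-channel mixing
        )

    def forward(self, x_lr):
        return self.body(x_lr)   # low-frequency feature F_L
```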
Step S3: the low resolution image is input to a high frequency information extraction path to extract high frequency features of the image, expressed as follows:
F_H = f_H(I_LR)
where F_H denotes the output feature of the high-frequency information extraction path, and f_H denotes the function of the high-frequency information extraction path.
Step S4: and adding the low-frequency characteristic and the high-frequency characteristic to obtain a deep characteristic, wherein the expression is as follows:
F = F_L + F_H
where F represents the deep feature of the image, and F_L and F_H represent the low-frequency and high-frequency features of the image, respectively.
Step S5: inputting the deep features into a reconstruction module, performing sub-pixel convolution to finish up-sampling processing, and reconstructing a final high-resolution image, wherein the expression is as follows:
I_SR = f_REC(F) = f_DRIDSR(I_LR)
where I_SR represents the reconstructed high-resolution image, f_REC represents the function of the reconstruction module, and f_DRIDSR represents the function of the whole image super-resolution reconstruction model.
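A minimal PyTorch sketch of such a reconstruction module, using sub-pixel convolution (PixelShuffle) followed by a final convolution, is given below; the channel counts, the single-channel output and the class name Reconstruction are assumptions of the sketch.

```python
# Sketch of the reconstruction module: sub-pixel upsampling then a final convolution.
import torch.nn as nn

class Reconstruction(nn.Module):
    def __init__(self, channels: int = 64, scale: int = 4, out_channels: int = 1):
        super().__init__()
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),                        # sub-pixel convolution upsampling
        )
        self.out = nn.Conv2d(channels, out_channels, 3, padding=1)

    def forward(self, deep_feature):
        return self.out(self.upsample(deep_feature))       # reconstructed image I_SR
```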
Step S6: the image super-resolution reconstruction model is optimized through a loss function, where the loss function uses the average L1 error between the N reconstructed high-resolution images and the corresponding real high-resolution images, expressed as:
L = (1/N) Σ_{i=1}^{N} ||I_HR^(i) − I_SR^(i)||_1
where L represents the loss function, I_HR is the real high-resolution image, and I_SR is the output image of the network.
In order to better illustrate the effectiveness of the present invention, the embodiment of the present invention also uses a comparative experiment to compare the reconstruction effects.
Specifically, the present example uses the public COVID-CT dataset [30], which contains 349 COVID-19 CT images and 397 non-COVID-19 CT images from 216 patients. In the partitioning of the dataset, 600 pictures are used for training, 100 pictures for validation during training, and finally 46 pictures for testing. Bicubic downsampling is applied to the original high-resolution images to obtain the corresponding low-resolution images.
After the training set is built, training and testing of the model are performed on the PyTorch framework. Data augmentation is used, including random horizontal flipping and rotation by 90°, 180° and 270°. The low-resolution images in the training set are cropped into 48×48 image blocks, and a batch of 16 randomly selected 48×48 blocks is input at a time. The kernel size of the convolution layers is set to 3×3 with stride and padding set to 1; for convolution layers with a 5×5 kernel, the padding is set to 2. The number of information refining modules and residual modules is set to 6, and all activation functions in the network are ReLUs. The network parameters are optimized with the Adam optimizer [19] with β1 = 0.9, β2 = 0.999 and ε = 10^-8. The learning rate is initially set to 1×10^-3 and is halved every 30 epochs; once the learning rate drops to 1×10^-5 it is no longer reduced, and the whole network is trained for 200 epochs. Peak signal-to-noise ratio (PSNR), structural similarity (SSIM), multi-scale structural similarity (MS-SSIM) and the no-reference image quality assessment index PI are used to assess model performance.
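The optimizer and schedule above can be sketched as follows, assuming a model and a train_loader yielding (LR, HR) batches already exist; the variable names and the exact way the learning-rate floor is enforced are illustrative.

```python
# Sketch of the training configuration: Adam, step decay of the learning rate, and the average L1 loss.
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)  # halve every 30 epochs

for epoch in range(200):
    for lr_batch, hr_batch in train_loader:        # batches of 16 random 48x48 LR crops
        sr_batch = model(lr_batch)
        loss = torch.nn.functional.l1_loss(sr_batch, hr_batch)   # average L1 error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    if optimizer.param_groups[0]["lr"] > 1e-5:     # approximate the 1e-5 floor described above
        scheduler.step()
```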
The invention uses the 46 test pictures from the COVID-CT dataset to evaluate the performance of the model. The comparison experiment selects bicubic interpolation (Bicubic) and five representative image super-resolution reconstruction methods to compare with the experimental results of the invention, which are shown in Table 1, where DRIDSR is the method provided by the invention.
It can be seen from Table 1 (the best and second-best values are shown in bold and underlined, respectively) that the PSNR and SSIM of the present invention are the highest, and the reconstruction effect is significantly better than that of the currently representative image super-resolution reconstruction methods.
Table 1: Comparison of PSNR and SSIM results of DRIDSR and other methods on the COVID-CT test set
Example 1: as shown in fig. 6, an experiment was performed at ×3 magnification using a lung CT image of a patient with novel coronavirus pneumonia (COVID-19) as the test image. From the results, compared with other algorithms, the image reconstructed by the proposed algorithm has a clearer lung contour, and the soft-tissue regions are also clearer and more visible.
Example 2: as shown in fig. 7, an experiment was performed at ×4 magnification using a lung CT image of a patient with novel coronavirus pneumonia (COVID-19) as the test image. From the results, compared with other algorithms, the image reconstructed by the proposed algorithm shows better super-resolution performance at the small nodules in the lung.
In summary, the embodiments of the invention provide a dual-path lung CT image super-resolution reconstruction model and method based on residual information refining, in which the constructed image super-resolution reconstruction model processes the high-frequency and low-frequency information of the image separately, a residual information distillation module is used to extract deep high-frequency features, and the high-frequency information is constrained according to the image edge map, thereby improving the reconstruction of the high-frequency details of the image and further improving the super-resolution reconstruction quality of the image.
The foregoing description is only of the preferred embodiments of the invention, and all changes and modifications that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (5)

1. The dual-path lung CT image super-resolution method based on residual information refining is characterized by comprising an image super-resolution reconstruction model, wherein the image super-resolution reconstruction model comprises a low-frequency information extraction path and a high-frequency information extraction path, and the high-frequency information extraction path comprises a residual information refining module RIDM and a gradient identity path; the method further comprises the following steps:
step S1: a training set is established according to the image degradation model, and N low-resolution images I_LR and the corresponding real high-resolution images I_HR are obtained, wherein N is an integer greater than 1;
step S2: inputting the low-resolution image I_LR obtained in step S1 into the low-frequency information extraction path to extract the low-frequency features of the image;
step S3: extracting shallow features from the low-resolution image I_LR obtained in step S1 by using a convolution layer, and extracting the residual high-frequency features of the image by using the residual information refining module RIDM in the high-frequency information extraction path;
step S4: extracting the edge features of the image from the low-resolution image I_LR obtained in step S1 by using the gradient identity path in the high-frequency information extraction path;
step S5: adding the residual high-frequency features obtained in step S3 and the edge features obtained in step S4 to obtain the high-frequency features, adding the high-frequency features and the low-frequency features obtained in step S2 to obtain the deep features, completing the up-sampling processing by using sub-pixel convolution, and reconstructing the final high-resolution image I_HR;
step S6: optimizing the image super-resolution reconstruction model through a loss function;
the residual information refining module RIDM in step S3 includes a plurality of information refining modules IDB and a plurality of residual modules; the residual information refining module RIDM takes the shallow feature F_0 as input and feeds it into the first residual module and the first information refining module IDB respectively; the residual modules are used for extracting deeper features, the first residual module outputs a first output feature F_1, and the second residual module takes the output of the previous residual module as input to obtain a second output feature F_2; the first information refining module IDB refines the shallow feature F_0 and the first output feature F_1 and outputs a first fine feature F_r1 and a first coarse feature F_d1; the second output feature F_2 and the first coarse feature F_d1 are used as inputs of the second information refining module IDB, which outputs a second fine feature F_r2 and a second coarse feature F_d2; the above operations are repeated several times to obtain a third fine feature F_r3, a fourth fine feature F_r4 and a fifth fine feature F_r5, as well as a third output feature F_3, a fourth output feature F_4, a fifth output feature F_5 and a sixth output feature F_6; the fine features retained by each information refining module IDB and the feature output by the last residual module are concatenated along the channel dimension and reduced in dimension through a 1×1 convolution layer to obtain the output feature F_M of the residual information refining module RIDM;
F_M = f_1×1([F_r1, F_r2, F_r3, F_r4, F_r5, F_6]) + F_0
wherein f_1×1 denotes a convolution layer with a kernel size of 1×1;
the information refining module IDB comprises a fourth convolution layer and a fifth convolution layer connected in parallel, wherein the fourth convolution layer is specifically a 1×1 convolution layer and the fifth convolution layer is specifically a 3×3 convolution layer; the fourth convolution layer is used to extract the coarse features passed on for subsequent refining, and the fifth convolution layer is used to extract the refined fine features of the input;
The residual module comprises a sixth convolution layer, a seventh convolution layer and a second ReLU activation layer, wherein the second ReLU activation layer is arranged between the sixth convolution layer and the seventh convolution layer, and the output characteristics and the input characteristics are added and output to the next module.
2. The dual-path lung CT image super-resolution method based on residual information refinement according to claim 1, wherein the training set in step S1 is established as follows: the original image is decomposed with a sliding window into a number of small high-resolution image patches of size 64×64, and a bicubic interpolation algorithm is used to downsample the small high-resolution image patches into small low-resolution image patches of size (64/r)×(64/r), where r is the scale factor; the pairs of small high-resolution and low-resolution image patches are used as the training set.
3. The dual-path lung CT image super-resolution method according to claim 1, wherein the low-frequency information extraction path in step S2 includes a first convolution layer, a second convolution layer and a third convolution layer, and the first convolution layer uses a large 5×5 convolution kernel to extract a shallow feature F_0; the second convolution layer uses a 3×3 convolution kernel to further extract deeper features; the third convolution layer uses a 1×1 convolution kernel to increase the nonlinearity of the features and realize cross-channel interaction among the features and feature dimension reduction; and a first ReLU activation layer is added after each of the first, second and third convolution layers to increase the nonlinearity of the features.
4. The dual-path lung CT image super-resolution method based on residual information refinement according to claim 1, wherein the gradient identity path in step S4 includes an edge extraction layer and an edge feature extraction layer; the edge extraction layer uses a Sobel edge extraction operator to extract an edge map of the image; the edge feature extraction layer extracts edge features of the edge map by using a 3×3 convolution layer; and the edge features are added to the output of the residual information refining module RIDM, followed by feature fusion and dimension reduction through a 1×1 convolution layer.
5. The dual-path lung CT image super-resolution method according to claim 1, wherein the loss function in step S6 uses the average L1 error between the N reconstructed high-resolution images and the corresponding real high-resolution images, with the expression:
L = (1/N) Σ_{i=1}^{N} ||I_HR^(i) − I_SR^(i)||_1
where L represents the loss function, I_HR is the real high-resolution image, and I_SR is the output image of the network.
CN202111549446.6A 2021-12-17 2021-12-17 Dual-path lung CT image super-resolution method based on residual information refining Active CN114187181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111549446.6A CN114187181B (en) 2021-12-17 2021-12-17 Dual-path lung CT image super-resolution method based on residual information refining

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111549446.6A CN114187181B (en) 2021-12-17 2021-12-17 Dual-path lung CT image super-resolution method based on residual information refining

Publications (2)

Publication Number Publication Date
CN114187181A CN114187181A (en) 2022-03-15
CN114187181B true CN114187181B (en) 2024-06-07

Family

ID=80544259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111549446.6A Active CN114187181B (en) 2021-12-17 2021-12-17 Dual-path lung CT image super-resolution method based on residual information refining

Country Status (1)

Country Link
CN (1) CN114187181B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883362A (en) * 2023-07-12 2023-10-13 四川大学工程设计研究院有限公司 Crack detection method and system based on image recognition and image processing equipment
CN116612206B (en) * 2023-07-19 2023-09-29 中国海洋大学 Method and system for reducing CT scanning time by using convolutional neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741256A (en) * 2018-12-13 2019-05-10 西安电子科技大学 Image super-resolution rebuilding method based on rarefaction representation and deep learning
WO2020015167A1 (en) * 2018-07-17 2020-01-23 西安交通大学 Image super-resolution and non-uniform blur removal method based on fusion network
CN111192200A (en) * 2020-01-02 2020-05-22 南京邮电大学 Image super-resolution reconstruction method based on fusion attention mechanism residual error network
CN111461983A (en) * 2020-03-31 2020-07-28 华中科技大学鄂州工业技术研究院 Image super-resolution reconstruction model and method based on different frequency information
CN112330542A (en) * 2020-11-18 2021-02-05 重庆邮电大学 Image reconstruction system and method based on CRCSAN network
CN113129212A (en) * 2019-12-31 2021-07-16 深圳市联合视觉创新科技有限公司 Image super-resolution reconstruction method and device, terminal device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020015167A1 (en) * 2018-07-17 2020-01-23 西安交通大学 Image super-resolution and non-uniform blur removal method based on fusion network
CN109741256A (en) * 2018-12-13 2019-05-10 西安电子科技大学 Image super-resolution rebuilding method based on rarefaction representation and deep learning
CN113129212A (en) * 2019-12-31 2021-07-16 深圳市联合视觉创新科技有限公司 Image super-resolution reconstruction method and device, terminal device and storage medium
CN111192200A (en) * 2020-01-02 2020-05-22 南京邮电大学 Image super-resolution reconstruction method based on fusion attention mechanism residual error network
CN111461983A (en) * 2020-03-31 2020-07-28 华中科技大学鄂州工业技术研究院 Image super-resolution reconstruction model and method based on different frequency information
CN112330542A (en) * 2020-11-18 2021-02-05 重庆邮电大学 Image reconstruction system and method based on CRCSAN network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image super-resolution reconstruction algorithm based on a dual-path feedback network; 陶状; 廖晓东; 沈江红; 计算机系统应用 (Computer Systems & Applications); 2020-04-15 (No. 04); pp. 185-190 *

Also Published As

Publication number Publication date
CN114187181A (en) 2022-03-15

Similar Documents

Publication Publication Date Title
Bashir et al. A comprehensive review of deep learning-based single image super-resolution
CN109389552B (en) Image super-resolution algorithm based on context-dependent multitask deep learning
CN115409733B (en) Low-dose CT image noise reduction method based on image enhancement and diffusion model
CN110443768B (en) Single-frame image super-resolution reconstruction method based on multiple consistency constraints
CN114187181B (en) Dual-path lung CT image super-resolution method based on residual information refining
CN111932461B (en) Self-learning image super-resolution reconstruction method and system based on convolutional neural network
CN112215755B (en) Image super-resolution reconstruction method based on back projection attention network
CN113298718A (en) Single image super-resolution reconstruction method and system
CN112508794B (en) Medical image super-resolution reconstruction method and system
CN111368849A (en) Image processing method, image processing device, electronic equipment and storage medium
CN105654425A (en) Single-image super-resolution reconstruction method applied to medical X-ray image
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN115147404B (en) Intracranial aneurysm segmentation method by fusing dual features with MRA image
CN116664397B (en) TransSR-Net structured image super-resolution reconstruction method
Jiang et al. CT image super resolution based on improved SRGAN
Yang et al. Deep learning in medical image super resolution: a review
CN117237196A (en) Brain MRI super-resolution reconstruction method and system based on implicit neural characterization
CN116563100A (en) Blind super-resolution reconstruction method based on kernel guided network
Li et al. Rethinking multi-contrast mri super-resolution: Rectangle-window cross-attention transformer and arbitrary-scale upsampling
Fan et al. SGUNet: Style-guided UNet for adversely conditioned fundus image super-resolution
CN111462004B (en) Image enhancement method and device, computer equipment and storage medium
Du et al. X-ray image super-resolution reconstruction based on a multiple distillation feedback network
CN116128890A (en) Pathological cell image segmentation method and system based on self-adaptive fusion module and cross-stage AU-Net network
CN115239836A (en) Extreme sparse view angle CT reconstruction method based on end-to-end neural network
Zhang et al. Deep residual network based medical image reconstruction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant